Performance Evaluation of TCP over Optical Channels and Heterogeneous Networks

by

Jianxuan Xu

A thesis submitted in partial fulfillment of the requirements for the degree of
Master of Science in Computer Science
Department of Computer Science and Engineering
College of Engineering
University of South Florida

Major Professor: Miguel A. Labrador, Ph.D.
Wei Qian, Ph.D.
Rafael Perez, Ph.D.

Date of Approval: March 30, 2004

Keywords: congestion control, high bandwidth-delay product network

Copyright 2004, Jianxuan Xu
TABLE OF CONTENTS

LIST OF FIGURES

ABSTRACT

CHAPTER 1  INTRODUCTION
  1.1  Introduction and motivations
  1.2  Contributions of the thesis
  1.3  Outline of the thesis

CHAPTER 2  LITERATURE REVIEW
  2.1  The Transmission Control Protocol (TCP)
  2.2  Current TCP versions
  2.3  New TCP versions for HBDP networks

CHAPTER 3  PERFORMANCE EVALUATION OF TCP OVER HIGH BANDWIDTH DELAY PRODUCT CHANNELS
  3.1  Simulation topology and parameters
  3.2  Performance evaluation
  3.3  Modification of TCP Vegas
  3.4  Modification to Scalable TCP

CHAPTER 4  PERFORMANCE EVALUATION OVER WIRELESS NETWORKS AND FAIRNESS ANALYSIS
  4.1  Simulation topology
  4.2  Performance evaluation
  4.3  Fairness analysis

CHAPTER 5  CONCLUSIONS

REFERENCES
LIST OF FIGURES

Figure 2.1  Behavior of the congestion window variable in TCP Tahoe
Figure 2.2  TCP Reno and NewReno FR/FR procedure
Figure 2.3  Behavior of the congestion window of TCP Vegas
Figure 3.1  The network topology
Figure 3.2  Channel utilization of TCP as a function of the bottleneck link bandwidth
Figure 3.3  Sequence number of TCP with the bottleneck link bandwidth set at 1 Gbps
Figure 3.4  Congestion window of TCP Reno, NewReno and SACK when the link bandwidth is set to 1 Gbps
Figure 3.5  Congestion window of TCP Tahoe, Vegas and Westwood
Figure 3.6  Congestion window of HighSpeed TCP
Figure 3.7  Duration of the Slow Start phase when the bottleneck bandwidth is set to 1 Gbps
Figure 3.8  PLR during the Slow Start phase when the bottleneck bandwidth is set to 1 Gbps
Figure 3.9  Recovery time of the protocols as a function of the bandwidth of the bottleneck link
Figure 3.10 cwnd and ssthresh of TCP Westwood over time
Figure 3.11 Congestion window of Vegas and the modified Vegas for different values of r when the bottleneck link bandwidth is set to 1 Gbps
Figure 3.12 Channel utilization of Vegas and the modified Vegas for different values of r when the bottleneck link bandwidth is set to 1 Gbps
Figure 3.13 Throughput of STCP and the modified STCP for different values of a
Figure 3.14 Congestion window of STCP when the bottleneck link bandwidth is set to 1 Gbps
Figure 3.15 Congestion window of modified STCP in the 1 Gbps case
Figure 4.1  Simulation topology
Figure 4.2  Two-state Markov chain to model errors in wireless channels
Figure 4.3  Throughput of the current TCP versions under consideration as a function of the channel errors
Figure 4.4  Throughput of the TCP versions for HBDPC under consideration as a function of the channel errors
Figure 4.5  Throughput of the modified TCP versions under consideration as a function of the channel errors
Figure 4.6  Fairness of HSTCP and STCP
Figure 4.7  Fairness of TCP NewReno, Westwood and Vegas
PERFORMANCE EVALUATION OF TCP OVER OPTICAL CHANNELS AND HETEROGENEOUS NETWORKS

Jianxuan Xu

ABSTRACT

Next generation optical networks will soon provide users the capability to request and obtain end-to-end all-optical 10 Gbps channels on demand. Individual users will use these channels to exchange large amounts of data and support applications for scientific collaborative work. These new applications, which expect steady transfer rates on the order of Gbps, will very likely use either TCP or a new transport layer protocol as the end-to-end communication protocol. This thesis investigates the performance of TCP and newer TCP versions over High Bandwidth Delay Product Channels (HBDPC), such as the on-demand optical channels described above. In addition, it investigates the performance of these new TCP versions over wireless networks and with respect to old issues such as fairness, which is particularly important for adoption decisions. Using simulations, it is shown that 1) the window-based mechanism of current TCP implementations is not suitable to achieve high link utilization, and 2) congestion control mechanisms such as those utilized by TCP Vegas and Westwood are more appropriate and provide better performance. Modifications to TCP Vegas and Scalable TCP are introduced to improve the performance of these versions over HBDPC. In addition, simulation results show that new TCP proposals for HBDPC, although they perform better than current TCP versions, still perform worse than TCP Vegas. Also, it was found that even though these newer versions improve TCP's performance over their original counterparts in HBDPC, they still have performance problems in wireless networks and present worse fairness problems than their old counterparts. The main conclusion of this thesis is that all these versions are still based on TCP's AIMD strategy or variants of it, and therefore remain fairly blind in the way they increase and decrease their transmission rates. TCP will not be able to utilize the foreseen optical infrastructure adequately and support future applications if it is not redesigned to scale.
CHAPTER 1

INTRODUCTION

1.1 Introduction and motivations

Next generation optical networks are expected to offer a Dynamic Bandwidth on Demand (DBoD) service that customers will use to establish connections over all-optical links with very high bandwidth, very low bit error rates and rather long propagation delays. This can be the case of one user connected to a gigabit Ethernet or 10 Gbps Ethernet switch which in turn is connected to the optical network by means of a Multiservice Provisioning Platform (MSPP) device using the GMPLS family of protocols. In this scenario, end users can establish and obtain an end-to-end all-optical channel at OC-48 or OC-192 rates on demand to satisfy their communication needs. This DBoD service will support foreseen applications allowing the transfer of huge files or the real-time exchange of the very large amounts of data required for scientific collaborative work. Several applications already envisioned will need this type of service and infrastructure. In the First International Workshop on Protocols for Fast Long-Distance Networks [1], held in early 2003, several presentations made the case for foreseen requirements of end-to-end steady transfer rates on the order of several gigabits per second to support collaborative work and the transfer of the huge amounts of data generated by high-energy physics projects such as CERN's Large Hadron Collider. As stated in [2], although it is expected that communication networks, storage technologies and powerful computers will support these transfer rates, communication protocols will become the bottleneck if we don't redesign them to scale. This is the case of TCP when running over all-optical next generation networks, or similarly over high bandwidth-delay product channels. Ideally, we should be able to modify current protocols and provide a smooth transition to the new environment.
This is in fact the approach taken by several researchers who have proposed modifications to the most widely used transport layer protocol, TCP. However, these proposals still face important challenges, and more research is needed to find an appropriate solution. In [3], Sally Floyd explains TCP's three main challenges. First, in order for TCP to achieve transfer rates on the order of gigabits per second, links should have bit error rates considerably smaller than what is currently possible. Furthermore, even if these BERs were achievable, TCP's congestion control mechanism is expected to present problems, since congestion signals will have very large interarrival times. The second problem is related to the Slow Start mechanism, which increases TCP's congestion window exponentially. During Slow Start, a TCP connection over a high bandwidth-delay product channel will increase its congestion window to a very large value, and once the connection is about to fill the channel's available bandwidth many packets will be dropped. Finally, TCP has been shown to waste too much bandwidth because of its Congestion Avoidance mechanism. Considering the very high link capacities and long propagation delays of DBoD channels, it will take TCP a very long time to fill the entire link and achieve full utilization. Several modifications have already been proposed to address the problems of TCP over high bandwidth-delay product channels, mainly based on modifications to the underlying Additive Increase Multiplicative Decrease (AIMD) strategy of TCP [4, 5, 6, 3, 7, 8]. However, we don't know much about these new versions in several respects yet. First, there is no study where all these versions are compared and analyzed together. Second, they have not been compared against all current TCP versions. Third, it is completely unknown how these new versions perform in old environments, such as normal wired and wireless scenarios, which is important for adoption decisions. Finally, it is also unknown how these new versions perform with regard to known TCP issues, such as fairness.
1.2 Contributions of the thesis

This thesis includes the following contributions to the state of the art in transport layer protocols. The thesis includes a performance evaluation of TCP in HBDPC where the most important new proposals for these types of networks and old and current TCP versions are analyzed together for the first time. Modifications to TCP Vegas and Scalable TCP are proposed to improve their performance in HBDPC. New TCP versions for HBDPC are also evaluated over wireless networks and normal wired networks. This is important for users who are considering adopting these new versions based only on the performance shown in HBDPC. The thesis also addresses the issue of fairness of these new TCP versions, to clarify whether they confront the same fairness problems as the old versions or not.

1.3 Outline of the thesis

The thesis is organized as follows. Chapter 2 discusses in detail the current versions of the TCP protocol's congestion control algorithm and also the new TCP proposals for high bandwidth-delay product channels (HBDPC). A performance evaluation of all TCP versions in HBDPC is included in Chapter 3, including proposed modifications to TCP Vegas and Scalable TCP to improve their performance in this environment. In Chapter 4 the performance of new TCP versions for HBDPC is analyzed in wireless networks, along with the fairness of these new versions. Finally, Chapter 5 includes the conclusions and points out directions for future research.
CHAPTER 2

LITERATURE REVIEW

2.1 The Transmission Control Protocol (TCP)

Unrestricted access to a common resource may result in poor performance in the form of long delays in data delivery, low network utilization, high packet loss rates and even a possible collapse of the communication system. This phenomenon, known as network congestion, occurs when the aggregate demand for a resource (e.g., link bandwidth) exceeds the available capacity of the resource. Network congestion in the current Internet is detected and controlled by the TCP protocol. TCP interprets packet loss as a signal of congestion and reduces its sending rate to alleviate it. If no packet losses occur, TCP increases its transmission rate until it fills the channel capacity and packet losses occur. TCP follows the Additive Increase/Multiplicative Decrease (AIMD) strategy to obtain better performance and deal with congestion. TCP uses a variable called the congestion window to change the output rate of the connection in a manner consistent with the AIMD strategy. As such, the congestion window variable is increased in an additive manner if there is no congestion in the network, while it is decreased in a multiplicative manner in the event of packet loss. TCP also has a flow control mechanism to avoid overflowing the buffers of the receiver and to control the way information is injected into the network. Once the source sends all the packets allowed by the current value of the congestion window, it has to wait for acknowledgments (ACKs) in order to increment the congestion window and be able to send new packets. As a result, TCP is usually said to be self-clocking: the source automatically slows down when the network becomes congested, because acknowledgments are delayed. This feature was the only congestion control mechanism in the Internet before Van Jacobson's proposal in 1988 [9]. Current TCP versions, which detect congestion and adjust their rates differently, are reviewed next.
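As a concrete illustration, the AIMD window adjustment just described can be sketched in a few lines of Python. This is a simplified model for intuition only, not the thesis's NS-2 simulation code; all names are illustrative.

```python
# Minimal AIMD sketch: additive increase of ~1/cwnd per ACK (roughly one
# segment per RTT) and multiplicative decrease (halving) on a loss signal.

def aimd_on_ack(cwnd):
    """Additive increase: grow by ~1/cwnd per ACK."""
    return cwnd + 1.0 / int(cwnd)

def aimd_on_loss(cwnd):
    """Multiplicative decrease: halve the window, never below one segment."""
    return max(1.0, cwnd / 2)

cwnd = 10.0
for _ in range(10):          # one RTT's worth of ACKs when cwnd == 10
    cwnd = aimd_on_ack(cwnd)
print(round(cwnd))           # 11: about one segment gained per RTT
print(aimd_on_loss(80.0))    # 40.0: half the window lost per congestion event
```

The asymmetry visible here (slow linear growth, drastic halving) is exactly what the later chapters identify as the scalability problem in high bandwidth-delay product channels.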
2.2 Current TCP versions

The currently widely deployed TCP protocols are TCP Tahoe, TCP Reno and TCP NewReno [9, 10, 11]. The flow and congestion control procedure has two phases: Slow-Start and Congestion Avoidance. The Slow-Start phase is used when the connection is initialized, and its main goal is to fill the available capacity as quickly as possible. During this phase the congestion window is increased exponentially: it starts with a congestion window of 1 and then increases it by 1 for every ACK received. This continues until the congestion window reaches a threshold called the Slow-Start threshold (ssthresh), which is set at the beginning of the TCP connection. Once the congestion window reaches ssthresh, the Slow-Start phase ends and the Congestion Avoidance phase begins. In the Congestion Avoidance phase the congestion control mechanism adopts the standard AIMD (Additive Increase, Multiplicative Decrease) strategy. When no losses are observed, it gradually increases the congestion window variable in an additive manner (additive increase), augmenting the congestion window by 1/⌊cwnd⌋ for every incoming ACK, which is roughly equivalent to increasing the window by 1 every RTT. When packet losses are detected, the algorithm reduces the congestion window in a multiplicative manner, which is necessary to avoid congestion. Different TCP versions reduce the congestion window in different ways in reaction to packet losses. In the original TCP Tahoe, upon a packet loss, the algorithm reduces ssthresh to half the congestion window, sets the congestion window to 1 and starts the Slow-Start phase again. Figure 2.1 shows the behavior of the congestion window of TCP Tahoe and the two phases just described. In Reno and its offspring NewReno, the mechanism is modified and the Fast Retransmit/Fast Recovery procedure is included. Under this new procedure, when a packet loss is detected, ssthresh is set to max(cwnd/2, 2) and the congestion window to cwnd = ssthresh + 3, avoiding the Slow-Start phase after each packet loss (or duplicated packet). This new behavior of the congestion window is shown in Figure 2.2, and it clearly offers better performance compared to the original behavior.

Figure 2.1 Behavior of the congestion window variable in TCP Tahoe

Figure 2.2 TCP Reno and NewReno FR/FR procedure

TCP SACK was developed by Floyd and Fall [12] to address the inefficiency of Reno in handling multiple packet drops and the problem of unnecessary retransmissions and long recovery times in NewReno. As mentioned in RFC 2018 [12], both the sender and the receiver must come to an agreement to implement SACK. The receiver is able to indicate to the sender, using Selective Acknowledgment (SACK) blocks, the exact sequence numbers of the packets that have been received and those missing. This gives the sender the benefit of retransmitting only those packets that are lost. With this accurate information, the TCP SACK sender not only retransmits the missing packets but also accomplishes the retransmission in only one RTT, reducing this time compared to the other versions.

TCP Vegas was first introduced by Brakmo et al. in [13]. It introduces several changes compared to old-fashioned TCP. First, the congestion avoidance mechanism that TCP Vegas uses is quite different from that of TCP Tahoe or Reno. TCP Reno uses the loss of packets as a signal of congestion in the network and has no way of detecting incipient congestion before packet losses occur; thus, TCP Reno reacts to congestion rather than preventing it. TCP Vegas, on the other hand, uses the difference between the estimated throughput and the measured throughput of the connection as a way of estimating the congestion state of the network. Note that this mechanism does not purposely cause any packet loss, and therefore removes the oscillatory behavior and achieves higher average throughput and efficiency. TCP Vegas keeps track of the minimum round-trip propagation delay seen by the connection, denoted BaseRTT. In NS-2 [14], Vegas uses the first RTT as the initial value for the BaseRTT and replaces its value with any shorter RTT observed during the connection lifetime, so the BaseRTT at time t is estimated to be the smallest RTT seen by the source.
With the BaseRTT, TCP Vegas calculates the Expected Throughput using the formula below, which represents the maximum throughput that the connection can achieve:

    Expected Throughput = cwnd / BaseRTT        (2.1)

TCP Vegas also computes the Actual Throughput, which is the throughput that the connection is actually obtaining. This calculation is made using Equation 2.2 below, where RTT is the normal RTT of the packets during the current congestion window:

    Actual Throughput = cwnd / RTT              (2.2)

Based on these two values, TCP Vegas calculates the difference between them and increases or decreases the congestion window according to the following formulas:

    Diff = (Expected − Actual) × BaseRTT        (2.3)

    cwnd = cwnd + 1    if Diff < α
    cwnd = cwnd − 1    if Diff > β              (2.4)
    cwnd unchanged     otherwise

The α and β parameters are two thresholds of Vegas, with α < β. If Diff < α, Vegas increases the congestion window linearly during the next RTT. If Diff > β, Vegas decreases the congestion window linearly during the next RTT. Otherwise, it leaves the window size unchanged. The rationale behind Vegas is very simple. If the actual throughput is much smaller than the expected throughput, the network is likely congested, so the source should reduce its flow rate. On the other hand, if the actual throughput is too close to the expected throughput, the connection may not be utilizing the available capacity and hence should increase its rate. Therefore, the goal of TCP Vegas is to keep the rate of the connection between the Expected and the Actual rate. Another interpretation is that Vegas maintains a certain number of packets or bytes in the queues of the network [13], given by the α and β parameters. In NS [14], α is set to 1 and β is set to 3, representing the use of at least one but no more than three buffers of the output queue of the bottleneck link [13]. Another difference between TCP Vegas and Reno is the retransmission mechanism. In TCP Reno, a rather coarse-grained timer is used to estimate the RTT and its variance, which results in poor estimates. Vegas extends Reno's retransmission mechanism as follows. TCP Vegas records the system clock each time a packet is sent. When an ACK is received, Vegas calculates the RTT and uses this more accurate estimate to decide whether to retransmit a TCP packet [10].
If it receives a duplicate ACK, Vegas checks whether the RTT estimate is greater than the round-trip timeout (RTO). If it is, then without waiting for the third duplicate ACK, it immediately retransmits the packet.
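The decision rule in Equations 2.1 through 2.4 can be sketched directly. The snippet below is an illustrative model, not the NS-2 Vegas implementation; it uses the NS-2 defaults α = 1 and β = 3 cited above, with cwnd in packets and RTTs in seconds, so Diff comes out in packets queued inside the network.

```python
# Hedged sketch of the Vegas window decision (Eqs. 2.1-2.4), alpha=1, beta=3.

ALPHA, BETA = 1, 3

def vegas_update(cwnd, base_rtt, rtt):
    expected = cwnd / base_rtt             # Eq. 2.1: best-case throughput
    actual = cwnd / rtt                    # Eq. 2.2: measured throughput
    diff = (expected - actual) * base_rtt  # Eq. 2.3: packets held in queues
    if diff < ALPHA:                       # too few queued: probe for more
        return cwnd + 1
    if diff > BETA:                        # queues building up: back off
        return cwnd - 1
    return cwnd                            # between alpha and beta: hold

# cwnd=100, BaseRTT=100 ms, measured RTT=105 ms:
# diff = 100 - 100*0.1/0.105 ≈ 4.76 > beta, so the window shrinks.
print(vegas_update(100, 0.1, 0.105))       # 99
```

Note how the rule adjusts the window before any loss occurs, which is the delay-based prevention the text contrasts with Reno's loss-driven reaction.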
Figure 2.3 Behavior of the congestion window of TCP Vegas

Vegas also modifies the Slow-Start mechanism of TCP: instead of increasing the congestion window exponentially every RTT, it does so every two RTTs. Figure 2.3 shows the behavior of the congestion window of TCP Vegas, which is quite different from the normal sawtooth of TCP.

In TCP Westwood [15], the sender side measures the bandwidth of the connection by looking at the rate of the incoming acknowledgments. Based on this information it calculates the Fair Share Estimate (FSE), which is then used to appropriately set the values of the cwnd and ssthresh variables after 3 duplicate acknowledgments or a timeout. In this way, instead of halving the cwnd, TCP Westwood changes the cwnd and ssthresh to values that are more consistent with the effective bandwidth used by the connection when congestion occurred. This modification seems appropriate, since TCP Westwood should not waste as much bandwidth as regular TCP versions during congestion. However, TCP Westwood doesn't modify the Slow Start and Congestion Avoidance mechanisms, using TCP Reno as the underlying protocol.
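The idea of sizing the window from a bandwidth estimate rather than halving can be sketched as follows. This is an assumption-laden simplification: the FSE filtering over ACK arrivals is not reproduced, the function and variable names are invented for illustration, and the rule shown is only the general shape of Westwood's reaction, not its actual implementation.

```python
# Hypothetical sketch (NOT TCP Westwood's real code): on a loss signal, set
# ssthresh to the window that matches an estimated bandwidth share, instead
# of blindly halving cwnd as Reno does.

def westwood_on_loss(cwnd, bw_est_bps, rtt_min_s, seg_bytes=1500):
    """ssthresh = estimated bandwidth x minimum RTT, expressed in segments."""
    ssthresh = max(2, int(bw_est_bps * rtt_min_s / (8 * seg_bytes)))
    return min(cwnd, ssthresh), ssthresh   # cwnd never grows on a loss

# 600 Mbps estimate, 25 ms minimum RTT, 1500-byte segments:
# 600e6 * 0.025 / 12000 = 1250 segments.
print(westwood_on_loss(2000, 600e6, 0.025))  # (1250, 1250)
```

If the estimate tracks the connection's effective rate, the window lands near the sustainable operating point instead of far below it, which is why the text expects Westwood to waste less bandwidth after congestion.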
2.3 New TCP versions for HBDP networks

The widely used TCP protocols above tolerate packet delay and packet losses rather gracefully, and their underlying congestion control mechanism is responsible for the stability of the Internet [9]. However, as channel speeds have increased from the kilobits per second range of the early eighties to the gigabits or even terabits per second range we have today, the window-based mechanism of current TCP implementations starts becoming a problem. It can be easily seen that the widely adopted TCP versions described above substantially underutilize the network bandwidth over high bandwidth-delay product channels. For example, in order for TCP to increase its window to achieve full utilization of a 10 Gbps channel with 1500-byte packets, it requires over 83,333 RTTs. With a 100 ms RTT, it takes TCP approximately 1.5 hours to achieve full channel utilization. Also, after a single packet loss, TCP reduces its congestion window by half, responding too drastically and again underutilizing the bandwidth of the channel. Using the well-known formula that models the throughput of a TCP connection in steady state [16], it was pointed out that TCP can afford at most one loss event per 5,000,000,000 packets transmitted to achieve the transfer rate necessary to fill one of these high bandwidth channels. This packet loss rate is below the theoretical limit imposed by current networks' bit error rates [3]. After recognizing these limitations, several promising new protocols have been put forward. This is the case of Scalable TCP (STCP) [4], HighSpeed TCP (HSTCP) [3], the eXplicit Control Protocol (XCP), and FAST TCP [6]. These protocols adaptively adjust their increase/decrease rates based on the current window size: the larger the congestion window is, the faster it grows, and when congestion occurs the reduced window size is more accurate, yielding a short recovery time.
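The back-of-the-envelope numbers above can be reproduced directly, under the stated assumptions of one-segment-per-RTT additive increase, 1500-byte segments and a 100 ms RTT:

```python
# Worked version of the scalability arithmetic in the text (assumptions:
# +1 segment per RTT, 1500-byte segments, 100 ms RTT, 10 Gbps link).

link_bps, seg_bytes, rtt_s = 10e9, 1500, 0.1

# Window (in segments) needed to keep the 10 Gbps pipe full: the
# bandwidth-delay product divided by the segment size in bits.
full_window = link_bps * rtt_s / (8 * seg_bytes)
print(round(full_window))            # 83333 segments -> ~83,333 RTTs at +1/RTT

# Time to climb back after a single halving, at one segment per RTT:
recovery_s = (full_window / 2) * rtt_s
print(round(recovery_s / 3600, 1))   # 1.2 hours
```

Ramping from a halved window takes roughly 1.2 hours under these assumptions, and from a window of one over two hours, so the roughly 1.5 hours quoted above is the right order of magnitude either way.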
HighSpeed TCP was introduced in [3] to achieve high throughput over high bandwidth-delay product links without requiring unrealistically low packet loss rates. In addition, HighSpeed TCP is TCP-friendly when competing with regular TCP versions. HighSpeed TCP modifies the TCP response function, making the increase and decrease parameters dependent on the current congestion window, and utilizes a pre-computed lookup table to find the values of a(w) and b(w). HighSpeed TCP needs another solution, proposed in [17], to limit the number of segments by which a congestion window can be increased during the Slow Start phase: during Slow Start, TCP's congestion window variable can reach a very large value, and a very large number of packets can get dropped if the congestion window is doubled just before the channel is about to be full of packets.

The eXplicit Control Protocol (XCP) works with the help of two controllers embedded in the routers that calculate the feedback that sources will use to adjust their behavior, making the system converge to optimal efficiency and min-max fairness. XCP modifies the packet headers to include information about the connection so that routers can make these calculations and send explicit feedback to the sources. Although XCP was shown to perform very well, it modifies the packet structure and needs router support. In addition, further analysis is needed to investigate XCP's performance when the connection is initiated. In [7], the authors claim that XCP reaches the desired rate after one RTT, but they never show it.

Scalable TCP (STCP) [4] is a sender-based modification that builds on HighSpeed TCP, meant to utilize high bandwidth-delay product links in a simple and robust manner. Scalable TCP utilizes constants for the parameters a and b of TCP(a,b) to achieve goals similar to those of HighSpeed TCP. As such, it is still based on the same underlying AIMD strategy used by TCP. Further evaluations are needed to make sure Scalable TCP doesn't inherit the same disadvantages of TCP, such as the long times required to achieve full utilization after the Slow Start and Congestion Avoidance procedures. Scalable TCP also relies on packet loss to change the congestion window. The congestion control algorithm of STCP works as follows.
For each acknowledgment received in a round-trip time in which congestion has not been detected, STCP increases the cwnd by a fixed amount equal to 0.01. Upon detection of congestion in a given round-trip time, STCP reduces the cwnd by a fixed fraction equal to 0.125 of the current window. The congestion control algorithm of STCP is thus based on the general window update algorithm of TCP as follows:

    cwnd ← cwnd − ⌊b × cwnd⌋ = cwnd − ⌊0.125 × cwnd⌋    (on loss)
    cwnd ← cwnd + a = cwnd + 0.01                       (per ACK)     (2.5)

The values a = 0.01 and b = 0.125 were chosen considering STCP's impact on legacy traffic, bandwidth allocation properties, flow rate variance, convergence properties, and control-theoretic stability. However, as shown in the following chapter, the performance of STCP is not what it is supposed to be.

FAST TCP uses packet loss and queueing delay to assess congestion and to solve the problem of very large packet loss interarrival times. FAST reacts to packet losses as TCP Reno does, but queueing delay information is used to change the congestion window when no losses occur. FAST borrows some ideas from TCP Vegas to estimate the queueing delay and increase the congestion window. In [6] it is said that the implementation should be less aggressive than Slow Start and less drastic than rate-halving. In this thesis, only HSTCP and Scalable TCP are considered, because a detailed description of FAST TCP is not yet available. XCP is not considered either, because it needs router support, making its deployment much more difficult.
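Before moving on, the Scalable TCP update rule in Equation 2.5 can be sketched for reference, with the published constants a = 0.01 and b = 0.125. As before, this is an illustrative model rather than the protocol's actual implementation:

```python
# Hedged sketch of the STCP rule (Eq. 2.5): a fixed per-ACK increase and a
# fixed fractional decrease on loss, so growth scales with the window size.

A, B = 0.01, 0.125

def stcp_on_ack(cwnd):
    # +0.01 per ACK; a larger window yields more ACKs per RTT, hence
    # roughly 1% multiplicative growth per RTT regardless of window size.
    return cwnd + A

def stcp_on_loss(cwnd):
    # Shed 12.5% of the window (floored), far gentler than Reno's 50%.
    return cwnd - int(B * cwnd)

cwnd = stcp_on_ack(1000.0)   # 1000.01
cwnd = stcp_on_loss(1000.0)  # 1000 - 125
print(cwnd)                  # 875.0
```

Because both steps are proportional to the current window, recovery time after a loss is independent of the window size, which is the scalability STCP aims for; the next chapter tests whether this holds in practice.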
CHAPTER 3

PERFORMANCE EVALUATION OF TCP OVER HIGH BANDWIDTH DELAY PRODUCT CHANNELS

3.1 Simulation topology and parameters

This chapter describes the simulation topology and parameters used to evaluate the performance of TCP Tahoe, Reno, NewReno, SACK, Vegas, Westwood, HighSpeed TCP and Scalable TCP. The network topology utilized consists of one TCP source, one TCP sink node (destination), and two routers connected by a bottleneck link, as shown in Figure 3.1. The Network Simulator 2 (NS-2) [14] is the simulation tool utilized to run all the experiments. In the simulations, the maximum values of the congestion window for the TCP versions are set such that the connections can achieve full link utilization. The bottleneck link's two-way propagation delay is fixed at 25 ms, while the link's bandwidth is varied to show the performance of these protocols as the bandwidth-delay product increases. The buffer size at the bottleneck link is set to 200 packets to absorb part of the sudden congestion. We perform experiments to show the link utilization and congestion window behavior of each protocol, as well as the packet loss rate during the Slow Start phase and the time to reach full link speed after a packet drop event.

Figure 3.1 The network topology
3.2 Performance evaluation
Figures 3.2 and 3.3 plot the normalized throughput achieved by the protocols under consideration as a function of the link bandwidth, and the TCP sequence numbers when the link bandwidth is set to 1 Gbps. As the figures show, there are considerable differences among these protocols. As a general trend, the performance of most protocols degrades as the bandwidth increases, showing clear scalability problems. This is the expected behavior and confirms what other researchers have found. Only TCP Vegas, Westwood and HighSpeed TCP seem to perform well and scale better to higher speeds. The importance of this graph is the addition of other TCP versions, as well as Vegas and Westwood, which had not been compared before in an HBDP scenario. The regular TCP versions also perform as expected, with Tahoe presenting the worst performance, followed by Reno, New Reno and SACK in that order. This sequence reflects the behavior of these protocols according to their reaction to packet losses and to multiple packet losses from the same congestion window. This is the same behavior widely known for these versions in lower, normal-speed channels and is therefore not investigated here any further; instead, this thesis focuses on the proposed new versions. Finally, TCP Westwood improves over the regular TCP versions but still remains below HighSpeed TCP and Vegas. The throughput performance of the protocols in HBDPC can be explained by looking at the behavior of the congestion window variable, which is analyzed next. Figures 3.4, 3.5, and 3.6 plot the cwnd of the protocols over time in the case where the bottleneck bandwidth is set to 1 Gbps.
With the exception of Vegas, the figures show the expected sawtooth pattern of TCP. There, it can be seen that TCP Tahoe is the only protocol reducing its cwnd to 1, while the other TCP versions only reduce it to half the current value. Reno presents deeper and longer reactions, while New Reno and SACK are very similar. Interesting behaviors are the ones experienced by TCP Vegas, Westwood and HighSpeed TCP. TCP Westwood achieves better throughput because its cwnd, guided by the Fair Share Estimate (FSE), doesn't drop as deep as in the regular TCP versions. It will be shown later that TCP Westwood goes through a rather long Congestion Avoidance phase. The cwnd of HighSpeed TCP takes values similar to TCP Westwood, but it achieves better throughput because it manages to transmit more packets, in particular at the beginning of the connection. HighSpeed TCP presents the oscillatory behavior also experienced in the simulation results in [18].
Figure 3.2 Channel utilization of TCP as a function of the bottleneck link bandwidth
Figure 3.3 Sequence number of TCP with the bottleneck link bandwidth set at 1 Gbps
Figure 3.4 Congestion window of TCP Reno, New Reno and SACK when the link bandwidth is set to 1 Gbps and only one source is in the system
TCP Vegas's behavior is even better, as its cwnd is rather steady after Slow Start. Two conclusions are important at this point. First, it is definitively impossible to achieve full bandwidth utilization using the window-based approach of current TCP implementations. The behavior of the cwnd shows that TCP takes too much time to reach the maximum window size and too little time to reduce its size in the presence of packet losses. Furthermore, the reduction of the cwnd is very drastic. The second conclusion is more important and has to do with Vegas' behavior. In order to achieve full link utilization, the mechanism used by Vegas could be a good way to go. Taking the case where the bottleneck bandwidth is set to 1 Gbps as an example, the theoretical value of the congestion window needed to achieve full link utilization is about 3325 packets, given by the bandwidth-delay product of the network plus the buffer size. It can be observed from Figures 3.4, 3.5 and 3.6 that this is in fact the maximum value achieved by all protocols and that TCP Vegas' congestion window is very steady and at a very close value after the Slow Start phase, indicating that TCP Vegas does a very good job estimating the available
Figure 3.5 Congestion window of TCP Tahoe, Vegas and Westwood
Figure 3.6 Congestion window of HighSpeed TCP
bandwidth. The main problem of Vegas lies in the first Congestion Avoidance phase; it takes Vegas a rather large amount of time to reach the 3325 value for the first time. Next, the performance of the protocols during the Slow Start phase is evaluated. Here, the interest is in the Slow Start time and the Packet Loss Rate (PLR) during that period of time. The Slow Start time is important because the longer it takes, the more capacity is wasted. The PLR is an indication of how efficient the Slow Start mechanism is; obviously, the higher the PLR the worse. The PLR is measured as the number of packets lost divided by the total packets sent during the Slow Start phase. From Figure 3.7 it can be seen that all protocols have a similar and very short Slow Start duration. This is expected, as they all use the same exponential mechanism. Vegas has a slightly longer duration because it increases the cwnd exponentially but only every other RTT; however, 0.4 seconds is still a very short Slow Start time. The second observation is that most TCP versions have a similar and steady PLR as the bandwidth is increased. This is expected because the buffer at the bottleneck link fills up at the same time no matter the link capacity. This contradicts other studies, which say that one of the problems of current TCP versions is the very large value of cwnd achieved during Slow Start and, consequently, a high PLR. The explanation lies in the buffer size of the bottleneck link. If the buffer size is set to the bandwidth-delay product of the link, the cwnd will in fact grow to very large values (in the order of 10000 in the 1 Gbps case) when in reality the system can only absorb around 6250 packets. However, if the buffer size is set to more realistic values, as in this case, the cwnd will grow to modest values and not too many packets will be dropped. For instance, in this case the PLR was in the order of 6%.
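The effect of doubling every other RTT, as Vegas does, can be anticipated with a back-of-the-envelope estimate. The sketch below is illustrative only: it assumes the window doubles from 1 until the theoretical maximum of about 3325 packets, whereas in the actual simulations losses end Slow Start earlier, so the measured durations in Figure 3.7 are somewhat shorter.

```python
import math

def slow_start_time_s(target_cwnd, rtt_s, rtts_per_doubling=1):
    """Rough Slow Start duration: the congestion window doubles once
    every rtts_per_doubling RTTs, starting from 1 packet."""
    doublings = math.ceil(math.log2(target_cwnd))
    return doublings * rtts_per_doubling * rtt_s

rtt = 0.025  # 25 ms
print(round(slow_start_time_s(3325, rtt), 3))     # doubling every RTT
print(round(slow_start_time_s(3325, rtt, 2), 3))  # Vegas: every other RTT
```

Either way the duration is a fraction of a second, which is why the first modification proposed in Section 3.3 barely changes Vegas' overall efficiency.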
An interesting point to mention here is the fact that TCP Vegas and HighSpeed TCP were the only protocols with almost zero PLR. While TCP Vegas' Slow Start phase takes a little bit longer than the other protocols', its Slow Start procedure is rather effective in avoiding packet losses during this time. This is in complete alignment with the design goals of Vegas explained in [19]. The performance of the protocols during the Congestion Avoidance phase is also considered. Here, the focus is on the duration of the Congestion Avoidance phase. We call this time the recovery time, or the time that it normally takes the congestion window to reach its maximum value after a drastic reduction caused by a packet drop. Figure 3.9 shows this time in seconds for the different protocols as the bandwidth of the channel is increased. As expected, the recovery time grows as the channel capacity increases. From the graph, it can be seen that the recovery time for regular TCP
Figure 3.7 Duration of the Slow Start phase when the bottleneck bandwidth is set to 1 Gbps
Figure 3.8 PLR during the Slow Start phase when the bottleneck bandwidth is set to 1 Gbps
versions is around 70 seconds, or 2800 RTTs, while the recovery time of TCP Westwood and Vegas is around 10 seconds longer. Also, it can be observed that the recovery time of HighSpeed TCP is very small. This is due to the oscillatory behavior that this protocol presents, as observed in Figures 3.4, 3.5, and 3.6. For this experiment, only data from the first Congestion Avoidance phase was utilized. Another interesting point is related to TCP Westwood. The bandwidth calculation during the initial phase is not very accurate, and therefore, after the initial loss of packets, TCP Westwood sets the cwnd and ssthresh to very low values. Figure 3.10 shows the values of cwnd and ssthresh over time in the case where the bottleneck link is set to 1 Gbps. The figure clearly shows the bandwidth estimation problems of Westwood during the initial phase of the connection and how the Congestion Avoidance phase starts with very small cwnd and ssthresh values. As a result, it stays in that phase for a very long time, wasting a lot of bandwidth. In fact, the cwnd was set equal to 41 and grew to 3325 in a linear manner. A similar case was found in Vegas, where the first Congestion Avoidance phase started at a cwnd of 72. At the beginning of the Slow Start phase, the expected bandwidth is a high value because the network is empty. However, the actual bandwidth decreases substantially since the exponential increase of the cwnd fills the buffers quite fast. At this time, Vegas loses some packets, reduces its cwnd, and then enters Congestion Avoidance with a very low value of cwnd. Under realistic network conditions with normal buffer sizes this situation is rather unavoidable.
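The length of these linear recovery phases follows directly from the window values just quoted. A minimal sketch (illustrative only, using the 25 ms RTT of the topology and the cwnd values reported above for Westwood):

```python
def recovery_time_s(cwnd_start, cwnd_target, rtt_s):
    """Duration of a linear Congestion Avoidance climb: the window
    grows by one packet per RTT from cwnd_start up to cwnd_target."""
    return (cwnd_target - cwnd_start) * rtt_s

# Westwood's first Congestion Avoidance phase in the 1 Gbps run:
# cwnd restarts at 41 and climbs linearly to 3325 with a 25 ms RTT
print(round(recovery_time_s(41, 3325, 0.025), 1))  # -> 82.1
```

The result, roughly 80 seconds, matches the recovery times observed for Westwood and Vegas in Figure 3.9 and scales linearly with the bottleneck bandwidth, which is exactly the scalability problem targeted by the modifications below.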
Since the expected bandwidth is very close to the link speed, the cwnd starts increasing linearly until the actual bandwidth equals the expected bandwidth; at that point the cwnd stays steady until the end of the simulation, achieving full utilization. The problem is that this initial Congestion Avoidance phase is very long and increases with the bandwidth of the bottleneck link. This problem can be addressed by modifying the Slow Start phase procedures or the algorithms that drive the Congestion Avoidance phase. In this thesis, the latter approach is taken.

Figure 3.9 Recovery time of the protocols as a function of the bandwidth of the bottleneck link
Figure 3.10 cwnd and ssthresh of TCP Westwood over time

3.3 Modification of TCP Vegas
As shown in the last section, TCP Vegas has several good properties that make it a good candidate for high bandwidth-delay product links. For example, TCP Vegas experiences minimal packet losses during the Slow Start phase and no drastic reductions in the cwnd, and its bandwidth estimation technique allows it to keep its cwnd steady and very close to the maximum possible value. Although TCP Vegas gives the best performance in the simulations, it is not a perfect solution for congestion control in high bandwidth-delay product networks. First, TCP Vegas's Slow Start is too slow: it increases the window size only every other RTT, which increases the time needed to fill the capacity of the link. The second aspect concerns friendliness. Consider the following situation: TCP Vegas competes with TCP Reno for the limited bandwidth of a bottleneck link. TCP Reno's Slow Start and Congestion Avoidance schemes are both aggressive compared to Vegas', in the sense that they leave little room in the buffer for other connections, while TCP Vegas is conservative and tries to occupy little buffer space. When a TCP Vegas connection shares a link with a TCP Reno connection, the TCP Reno connection uses most of the buffer space and the TCP Vegas connection backs off, interpreting this as a sign of network congestion. It is therefore necessary to achieve fairness between Vegas and Reno connections before TCP Vegas can be deployed in operational networks. In the following, two very simple modifications to TCP Vegas are proposed that allow it to achieve higher throughput in high bandwidth-delay product links and mitigate the friendliness problem. The first proposed modification changes the Slow Start phase of Vegas so that it increases its cwnd exponentially every RTT, as regular TCP versions do. Although this modification does reduce Vegas's Slow Start time, it increases TCP Vegas' efficiency by an almost unnoticeable margin, because the Slow Start phase is very short. In addition, it is worth mentioning that this modification interferes with the bandwidth estimation procedures of Vegas, and therefore it is not recommended.
The second modification is meant to reduce the time Vegas spends in the Congestion Avoidance phase, i.e., Vegas' recovery time. Instead of increasing and decreasing the cwnd by one every RTT following Equation 2.4, the proposed modification increases the cwnd by a factor r and reduces it by a second factor, denoted s here. As a result, the formula that drives the behavior of Vegas during the Congestion Avoidance phase is given by Equation 3.1 as follows:
Figure 3.11 Congestion window of Vegas and the modified Vegas for different values of r when the bottleneck link bandwidth is set to 1 Gbps

    cwnd = cwnd + r    if Diff < alpha
    cwnd = cwnd - s    if Diff > beta        (3.1)
    cwnd = cwnd        otherwise

Equation 3.1 allows us to adjust the aggressiveness and responsiveness of Vegas simply by using different values of r and s. The results presented here use a configuration that makes this Vegas version more aggressive, setting the value of r equal to 2, 4, 8, and 16 and s equal to 1. Figure 3.11 compares the cwnd of Vegas as a function of time, with and without the proposed modifications, in the case where the bottleneck link is set to 1 Gbps. As can be seen, the recovery time decreases as the value of r is increased, from a value close to 80 seconds down to only 10 seconds. In addition, it can also be seen from Figure 3.12 that this change also translates into better throughput. The modified Vegas with equal values of s and r set to 2, 4, 8, and 16 was also tried. In this case, it is equally aggressive and responsive. Results not shown here demonstrate that these versions achieve performance similar to the case shown in Figures 3.11 and 3.12, where only
Figure 3.12 Channel utilization of Vegas and the modified Vegas for different values of r when the bottleneck link bandwidth is set to 1 Gbps
the r variable is changed, except in the case where r and s are both set to 16. In this particular case, the throughput of the connection dropped to half. This effect and the interaction of these two variables need further investigation.
3.4 Modification to Scalable TCP
Although Scalable TCP was announced to have negligible impact on existing network traffic while improving bulk transfer performance in highspeed wide area networks, this advantage is not seen here. Actually, using a limited buffer size, its performance is worse than expected. Figure 3.13 shows that, using realistic buffer sizes, e.g., 200 packets of buffer space in the output queue of the bottleneck link, the STCP protocol performs poorly in HBDPC. Analyzing the simulation results of STCP, it can be inferred that the problem lies in the use of the fixed value a = 0.01 as the increment of the congestion window. The congestion window size is increased by that fixed value for each acknowledgment received in a round trip time. As the congestion window size increases, the sum of the increments also increases. In HBDPC, in order to achieve high bandwidth
utilization, the congestion window needs to grow to really large values and, if a fixed value is used as the increment, the increment of the cwnd becomes correspondingly large every RTT.

Figure 3.13 Throughput of STCP and the modified STCP for different values of a

In Figure 3.13, it can be seen that it takes 19.25 seconds for STCP's congestion window to reach its peak size (cwnd = 3172). After that, congestion occurs, and STCP reduces the cwnd to 7/8 of its current size. The connection stays at that size and, after finding that it cannot resolve the congestion in the network, it cuts 1/8 of the congestion window again. Since congestion remains, a timeout occurs and the connection restarts its Slow Start phase with a window size of one. It takes a long time for the network to recover from the congestion, and most of the bandwidth dissipates in that recovery period. To avoid this situation, a more adaptive mechanism, one more cautious in the way the congestion window is increased, is needed. As a result, the increase factor a of Scalable TCP is changed back to the same value used by the normal TCP versions, as follows:

    cwnd = cwnd + 1 / floor(cwnd)        (3.2)
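The two window-update rules proposed in this chapter, Equation 3.1 for Vegas and Equation 3.2 for STCP, can be sketched together as follows. This is an illustrative sketch, not the thesis's NS-2 code: alpha and beta are the usual Vegas thresholds, and s names the decrease factor of Equation 3.1.

```python
import math

def modified_vegas_update(cwnd, diff, alpha, beta, r=2, s=1):
    """Congestion-avoidance step of the modified Vegas (Equation 3.1):
    grow by r packets per RTT when Diff < alpha, shrink by s packets
    when Diff > beta, and hold the window steady otherwise."""
    if diff < alpha:
        return cwnd + r
    if diff > beta:
        return cwnd - s
    return cwnd

def modified_stcp_on_ack(cwnd):
    """Per-ACK increase of the modified STCP (Equation 3.2): the fixed
    increment a = 0.01 is replaced by the standard 1/floor(cwnd)."""
    return cwnd + 1.0 / math.floor(cwnd)

def modified_stcp_on_loss(cwnd):
    """STCP's multiplicative decrease of 1/8 is kept unchanged."""
    return cwnd * 7.0 / 8.0
```

Note that the modified STCP is effectively AIMD with a gentler decrease: it adds about one packet per RTT but cuts only 1/8 of the window on loss, which is why it recovers faster than both Reno and the original STCP in the experiments below.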
Figure 3.14 Congestion window of STCP when the bottleneck link bandwidth is set to 1 Gbps
Figure 3.15 Congestion window of the modified STCP in the 1 Gbps case
This modification actually makes STCP behave like the standard AIMD algorithm, and because it cuts the window size by 1/8 instead of one half, it shows better performance, as seen in Figure 3.13. Comparing Figures 3.14 and 3.15, which show the congestion windows of the original and modified STCP, it can be seen that the time needed by the modified STCP to reach the first peak of the congestion window (80.525 s) is longer than for the original STCP. However, since its increment is considerably less aggressive than the original version's, the congestion that the modified version causes is also considerably smaller; therefore, the connection needs only 0.025 s to resolve the problem and step into the Congestion Avoidance phase. The original STCP, on the other hand, needs 23 s to resolve the congestion and get back to that level. That is the reason why the modified STCP performs better than the original STCP.
CHAPTER 4
PERFORMANCE EVALUATION OVER WIRELESS NETWORKS AND FAIRNESS ANALYSIS
New TCP proposals for HBDPC have started to emerge, and more are expected soon. So far, these versions have proved one way or another that they do a better job than current TCP versions in this specific environment. However, nobody has analyzed how these new versions perform over older or current technologies and where they stand in terms of some important issues that have been analyzed for old versions, such as fairness. These are the two main aspects analyzed in this chapter.
4.1 Simulation topology
It is interesting to note that all these new TCP proposals for high bandwidth delay product channels do in fact improve the performance of TCP in this environment. However, do all these new versions also perform well in common wireless environments, such as the well-known wired-cum-wireless scenario shown in Figure 4.1? This is a fair question, since all these scenarios are going to coexist for quite some time and users want the best performing TCP version in most of them. The upcoming analysis will help users decide which TCP version to use according to their needs. Figure 4.1 shows the well-known two-node wired-wireless topology that has been used in many other studies [20][21]. In this topology, a sender connected to a wired infrastructure communicates with a receiver in a wireless local area network, with a wireless access point connecting these two different technologies. The sender is connected to the base station by means of a 10 Mbps capacity channel with 20 ms of propagation delay. The base station implements the widely known IEEE 802.11 standard working at 2 Mbps, with a wireless channel characterized by negligible propagation delay. A TCP agent runs end-to-end from the sender to the receiver. An infinite FTP source at the sender generates packets of length 1000 bytes.
Figure 4.1 Simulation topology
As suggested in [22][23][24], a two-state Markov model generates errors in the wireless channel to simulate the effects of the wireless media. The two-state Markov model consists of a good state and a bad state. It remains in each of the two states for a mean duration of time, after which it moves on to the next state. A packet is transmitted when the system is in the good state and dropped otherwise. A series of packets in the good state is thus transmitted, whereas the next series of packets in the bad state is dropped. This is an appropriate model for the wireless channel, where correlated errors are common. Figure 4.2 shows the two-state Markov chain with its state transition probabilities.
Figure 4.2 Two-state Markov chain to model errors in wireless channels
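A minimal sketch of such a two-state (Gilbert-style) error model is shown below. It is illustrative only: the transition probabilities are arbitrary choices for the example, not the values used in the thesis simulations.

```python
import random

def markov_error_trace(n_packets, p_good_to_bad, p_bad_to_good, seed=1):
    """Two-state Markov wireless channel: packets sent while the chain
    is in the good state succeed, packets in the bad state are dropped,
    so losses arrive in correlated bursts rather than independently."""
    rng = random.Random(seed)
    good = True
    delivered = []
    for _ in range(n_packets):
        delivered.append(good)
        if good and rng.random() < p_good_to_bad:
            good = False
        elif not good and rng.random() < p_bad_to_good:
            good = True
    return delivered

trace = markov_error_trace(10000, 0.01, 0.09)
loss_rate = 1 - sum(trace) / len(trace)
# the long-run loss rate tends to p_gb / (p_gb + p_bg), i.e. 0.1 here
print(round(loss_rate, 3))
```

The x-axis of Figures 4.3 to 4.5 corresponds to varying this long-run bad-state probability, i.e., the fraction of time the chain spends in the bad state.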
Figure 4.3 Throughput of the current TCP versions under consideration as a function of the channel errors
Figure 4.4 Throughput of the TCP versions for HBDPC under consideration as a function of the channel errors
Figure 4.5 Throughput of the modified TCP versions under consideration as a function of the channel errors
4.2 Performance evaluation
Figures 4.3 and 4.4 show the throughput achieved by all the current TCP and HBDPC TCP versions as a function of the errors introduced in the channel. In the graphs, the X axis represents the percentage of time the Markov chain is in the bad state. First, it can be seen that the behavior of all TCP versions is as expected: as the channel errors increase, the throughput decreases. Also, if the channel is very error prone, most TCP versions perform similarly. However, as the channel conditions improve, the congestion control mechanisms of the TCP versions differ and some versions provide better performance than others. Again, Vegas is the best performing version because it detects losses faster, reduces its congestion window fewer times, and avoids more timeouts [13]. Comparing the two graphs, it can be seen that HSTCP and STCP also perform worse than TCP Vegas in this environment. However, the new TCP proposals perform differently over wired than over wireless networks: now STCP performs better than HSTCP, but both perform worse than Vegas. The behavior of HSTCP is expected, as this version performs like TCP SACK under small congestion window values. In Figure 4.5, which presents the throughput of the modified versions
of Vegas and Scalable TCP, it can be seen that the modified Vegas beats STCP and all other TCP versions, especially in the case of high channel error rates.
4.3 Fairness analysis
Another interesting aspect not analyzed so far is the fairness characteristics of these new TCP versions. In particular, it is important to know (1) whether these new TCP versions are fair to themselves, and (2) whether they present the same unfairness problems that old TCP versions present when similar connections have different RTTs. Figures 4.6 and 4.7 present the fairness simulation results that answer the two main questions presented above. Figure 4.6 presents the results for the new TCP versions. The graph plots on the y-axis the throughput achieved by two connections of the same class sharing a 10 Mbps bottleneck link. Experiments are repeated keeping the RTT of one connection constant and varying the RTT of the second connection. This is represented on the x-axis, where the numbers 2, 3, 4, 5 and 6 mean that the second connection has an RTT that is two, three, four, five and six times longer than the first one's. As can be seen, the new proposals are fair to themselves when the connections have the same RTTs (when the fairness factor is equal to one). However, they present severe fairness problems otherwise, with the shorter connection obtaining most of the bandwidth at the expense of the longer one. It can be seen that HSTCP and STCP behave fairly similarly, presenting severe unfairness against the longer connections. According to Figure 4.7, TCP Westwood, New Reno and Vegas also present fairness problems, although less pronounced; in fact, Vegas is shown to be the least unfair of all. The fairness behavior of current TCP versions such as Tahoe and Reno is not plotted here because the results are well known.
In summary, it can be concluded that the newer TCP versions present a more severe unfairness problem than the old TCP versions when connections with different RTTs share the same bottleneck link. These new versions increase their window sizes in a more aggressive manner in order to capture the huge amount of bandwidth that is supposed to be available in high bandwidth delay product networks. Therefore, the very algorithms that were designed to deal with the scalability problems of TCP in these networks are responsible for the more severe unfairness in current environments.
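The root of this RTT unfairness can be illustrated with a toy fluid model of two window-based flows sharing one bottleneck. This is a deliberate simplification of the thesis's simulation setup (synchronized losses, fluid sending rates, and 10 Mbps expressed as about 1250 packets/s of 1000-byte packets are all modeling assumptions): each flow adds one packet to its window per its own RTT, so the short-RTT flow ramps up much faster between the shared loss events.

```python
def aimd_bandwidth_shares(rtt1, rtt2, capacity_pps, sim_time=200.0):
    """Toy model of two AIMD flows on one bottleneck: each adds one
    packet to its window per (its own) RTT, and both halve their
    windows whenever the combined sending rate exceeds capacity."""
    dt = min(rtt1, rtt2)
    w1 = w2 = 1.0
    sent1 = sent2 = 0.0
    t = 0.0
    while t < sim_time:
        sent1 += (w1 / rtt1) * dt   # fluid approximation of packets sent
        sent2 += (w2 / rtt2) * dt
        w1 += dt / rtt1             # +1 packet per RTT, additive increase
        w2 += dt / rtt2
        if w1 / rtt1 + w2 / rtt2 > capacity_pps:
            w1 /= 2.0               # synchronized multiplicative decrease
            w2 /= 2.0
        t += dt
    total = sent1 + sent2
    return sent1 / total, sent2 / total

# flow 2 has four times the RTT of flow 1 on a ~10 Mbps bottleneck
s1, s2 = aimd_bandwidth_shares(0.025, 0.100, 1250.0)
```

Even this crude model gives the short-RTT flow the large majority of the bandwidth, and any version that makes the increase step more aggressive, as the HBDPC proposals do, widens the gap, which is consistent with the simulation results above.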
Figure 4.6 Fairness of HSTCP and STCP
Figure 4.7 Fairness of TCP New Reno, Westwood and Vegas
CHAPTER 5
CONCLUSIONS
This thesis evaluates the performance of existing congestion control algorithms in heterogeneous environments. Using simulations, it is shown that the window-based congestion control mechanism of current TCP versions doesn't scale well as the bandwidth-delay product of the connection increases. TCP reduces its congestion window very fast and drastically in the presence of congestion, while it increases it very slowly during the Congestion Avoidance phase. On the other hand, for this new environment, bandwidth estimation techniques such as the ones utilized by TCP Vegas and Westwood can be very beneficial, as they considerably reduce packet drops and can achieve full utilization of the available bandwidth during Congestion Avoidance. The thesis also shows that HighSpeed TCP and Scalable TCP perform better than normal TCP versions but still worse than TCP Vegas. The fundamental reason for this is that they all continue to increase and decrease the congestion window variable in a rather blind manner. After analyzing TCP's congestion control mechanisms, simple modifications are introduced in TCP Vegas and STCP to improve their performance. The fairness of the new TCP proposals is also evaluated. It is shown that these new versions present the same unfairness problem as the old ones when connections with different RTTs share the same bottleneck link, only worse. The same algorithms that were included in these versions to make them better in HBDPC are responsible for the worse fairness performance. The new proposals and the modified TCP versions are also evaluated in wireless networks. It is found that there is no compelling reason to switch to any of these newer versions and that TCP Vegas, the best performing version in all the scenarios considered, should be used instead.
REFERENCES
[1] First International Workshop on Protocols for Fast Long-Distance Networks, PFLDnet 2003, http://datatag.web.cern.ch/datatag/pfldnet2003/index.html, February 2003.
[2] J. Bunn, J. Doyle, S. Low, H. Newman, and S. Yip, "Ultrascale Network Protocols for Computing and Science in the 21st Century," white paper to DoE's Ultrascale Simulation for Science, available at http://netlab.caltech.edu/FAST, 2002.
[3] S. Floyd, "HighSpeed TCP for Large Congestion Windows," IETF RFC 3649, Experimental, December 2003.
[4] T. Kelly, "Scalable TCP: Improving Performance in Highspeed Wide Area Networks," in Proceedings of the First International Workshop on Protocols for Fast Long-Distance Networks, February 2003. Available: http://datatag.web.cern.ch/datatag/pfldnet2003/program.html
[5] A. Kamra, V. Misra, and D. Towsley, "Achieving High Throughput in Low Multiplexed, High Bandwidth, High Delay Environments," in Proceedings of the First International Workshop on Protocols for Fast Long-Distance Networks, February 2003. Available: http://datatag.web.cern.ch/datatag/pfldnet2003/program.html
[6] C. Jin, D. Wei, and S. Low, "FAST TCP for High-Speed Long-Distance Networks," IETF Internet draft (draft-jwl-tcp-fast-01.txt), June 2003.
[7] D. Katabi, M. Handley, and C. Rohrs, "Congestion Control for High Bandwidth-Delay Product Networks," in Proceedings of ACM SIGCOMM, August 2002, pp. 89-102.
[8] L. Xu, K. Harfoush, and I. Rhee, "Binary Increase Congestion Control for Fast, Long Distance Networks," to appear in Proceedings of IEEE INFOCOM 2004. Available: http://www.csc.ncsu.edu/faculty/rhee/
[9] V. Jacobson, "Berkeley TCP evolution from 4.3-tahoe to 4.3-reno," in Proceedings of the 18th Internet Engineering Task Force, August 1990.
[10] W. Stevens, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms," IETF RFC 2001, 1997.
[11] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," RFC 2582, http://www.ietf.org/rfc/rfc2582.txt, April 1999.
[12] K. Fall and S. Floyd, "Simulation-based Comparisons of Tahoe, Reno, and SACK TCP," Computer Communication Review, vol. 26, pp. 5-21, 1996.
[13] L. Brakmo and L. Peterson, "TCP Vegas: End to End Congestion Avoidance on a Global Internet," IEEE Journal on Selected Areas in Communications, vol. 13, no. 8, pp. 1465-1480, 1995.
[14] The Network Simulator ns-2, http://www.isi.edu/nsnam/ns/
[15] C. Casetti, M. Gerla, S. Mascolo, M. Sanadidi, and R. Wang, "TCP Westwood: End-to-End Congestion Control for Wired/Wireless Networks," Wireless Networks, vol. 8, pp. 467-479, 2002.
[16] M. Handley, J. Padhye, S. Floyd, and J. Widmer, "TCP Friendly Rate Control (TFRC): Protocol Specification," Internet draft (draft-ietf-tsvwg-tfrc-03.txt), May 2001.
[17] E. Souza and D. Agarwal, "A HighSpeed TCP Study: Characteristics and Deployment Issues," LBNL Technical Report LBNL-53215. Available: http://www-itg.lbl.gov/evandro/hstcp/
[18] S. Floyd, "Limited Slow Start for TCP with Large Congestion Windows," IETF Internet draft (draft-floyd-tcp-slowstart-01.txt), August 2002.
[19] M. Allman, V. Paxson, and W. Stevens, "TCP Congestion Control," IETF RFC 2581, April 1999.
[20] S. Vangala and M. A. Labrador, "Performance of TCP over Wireless Networks with the Snoop Protocol," in Proceedings of IEEE LCN, November 2002, pp. 600-601.
[21] I. F. Akyildiz, G. Morabito, and S. Palazzo, "TCP-Peach: A New Congestion Control Scheme for Satellite IP Networks," IEEE/ACM Transactions on Networking, vol. 9, no. 3, pp. 307-321, 2001.
[22] M. Zorzi, A. Chockalingam, and R. Rao, "Throughput Analysis of TCP on Channels with Memory," IEEE Journal on Selected Areas in Communications, vol. 18, pp. 1289-1300, 2000.
[23] M. Zorzi, R. R. Rao, and L. B. Milstein, "On the accuracy of a first-order Markov model for data transmission on fading channels," in Proceedings of IEEE ICUPC'95, New Jersey, USA, November 1995, pp. 211-215.
[24] A. Abouzeid, S. Roy, and M. Azizoglu, "Stochastic Modeling of TCP over Lossy Links," 2000, pp. 1724-1733.