CMPE 252A: Computer Networks
Set 13: End-to-End Transmission Control (TCP and UDP)
Transport Protocols

Services:
- Addressing of processes
- Reliable or unreliable transport from source process to end process(es)
- Multiplexing and demultiplexing
- Flow control: avoid overflowing the receiver's buffer
- Congestion control: avoid overflowing the network bottleneck

Examples: UDP and TCP
Why Multiplexing

- IP delivers packets from source host to destination host.
- However, multiple processes run in the hosts!
- Applications require communication among processes, not just host computers.
- Example: multiple telnet, email, ftp, and www sessions can all be running concurrently in the same host.
- Ports are defined as the addresses of processes inside a host.
- How do we identify processes uniquely and efficiently?
Well-Known Applications

Well-known ports are under 1024, such as:
- FTP: port 21
- Telnet: port 23
- HTTP: port 80

(Figure: a client on Host A sends a segment with source port = x and destination port = 23 to a Telnet server on Host B; the reply carries source port = 23 and destination port = x.)
Transport Protocols

- Transport protocols used today are point to point.
- UDP used for: remote file server (NFS), name translation (DNS), intra-domain routing (RIP), network management (SNMP), multimedia applications.
- TCP used for: electronic mail (SMTP), file transfer (FTP), remote login (Telnet), web (HTTP).
- No standard multipoint end-to-end protocol yet!
UDP: User Datagram Protocol [RFC 768]

- "No frills," "bare bones" Internet transport protocol.
- "Best effort" service; UDP segments may be:
  - Lost
  - Delivered out of order to the app
- Connectionless:
  - No handshaking between UDP sender and receiver
  - Each UDP segment handled independently of others

Why is there a UDP?
- No connection establishment (which can add delay)
- Simple: no connection state at sender or receiver
- Small header
- No congestion control: UDP can blast away as fast as desired
UDP

- Header specifies the minimum needed for multiplexing and framing.
- Source and destination ports identify the end points.
- Length field: length, in bytes, of the UDP segment, including the header.
- Often used for streaming multimedia apps:
  - Loss tolerant
  - Rate sensitive
- Other UDP uses: DNS.
- Reliable transfer over UDP:
  - Must be at the application layer
  - Application-specific error recovery

UDP segment format (32-bit rows):
  source port | dest port
  length      | checksum
  application data (message)

Checksum: optional; if not used, set to zero.
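To make the port-based (de)multiplexing and connectionless delivery concrete, here is a minimal sketch using Python's standard socket API; the loopback address, port 9999, and payload are arbitrary choices for illustration, not anything the slides or RFC 768 prescribe.

```python
# Minimal sketch of UDP's connectionless, port-addressed service using
# Python's standard socket API. The port number (9999) and payload are
# arbitrary choices for illustration.
import socket

RECEIVER_PORT = 9999  # hypothetical port identifying the receiving process

# Receiver: bind a port so the OS can demultiplex incoming datagrams to us.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", RECEIVER_PORT))

# Sender: no connection setup; each datagram is handled independently.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello over UDP", ("127.0.0.1", RECEIVER_PORT))

data, (src_ip, src_port) = rx.recvfrom(2048)
print(f"got {data!r} from {src_ip}:{src_port}")  # src_port is the sender's ephemeral port

tx.close()
rx.close()
```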
UDP Checksum

- Computed over a pseudo-header + UDP header + data + padding (to an even number of bytes if needed).
- Pseudo-header (IPv4):
  source IP address (32 bits)
  destination IP address (32 bits)
  00000000 | protocol | segment length
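As a rough illustration of the checksum just described, the sketch below computes the one's-complement sum over an IPv4 pseudo-header, a zero-checksum UDP header, and the payload, padding to an even length; the addresses, ports, and helper names are made up for the example.

```python
# Sketch of the UDP checksum calculation described above: one's-complement
# sum over the IPv4 pseudo-header, the UDP header (checksum field = 0), and
# the data, padded to an even number of bytes. Addresses/ports are examples.
import socket
import struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:               # pad to an even number of bytes
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> int:
    length = 8 + len(payload)                      # UDP header + data
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, 17, length)            # zero byte, protocol 17 (UDP), UDP length
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)  # checksum field zeroed
    csum = 0xFFFF - ones_complement_sum16(pseudo + header + payload)
    return csum or 0xFFFF                          # 0 means "no checksum", so send all ones

print(hex(udp_checksum("10.0.0.1", "10.0.0.2", 12345, 53, b"example")))
```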
High-Level TCP Characteristics

- Protocol implemented entirely at the ends
  - Fate sharing (on IP)
- Protocol has evolved over time and will continue to do so
  - Nearly impossible to change the header
  - Use options to add information to the header
  - Change processing at endpoints
  - Backward compatibility is what makes it TCP
Differences From Link Layer

- Logical link vs. physical link
  - Must establish connection
- Variable RTT
  - May vary within a connection
- Reordering
  - How long packets can live implies a maximum segment lifetime
- Endpoints need not match link
  - Buffer space availability
  - Transmission rate: must be found
TCP in a Nutshell

Abstraction:
- Reliable
- Ordered
- Point-to-point
- Byte-stream

Mechanisms:
- Window-based flow control
- Sequence numbers/ordering, 3-way handshake
- Reliability (ACK, retransmission policies)
- Congestion control
- RTT estimation
TCP Header

Layout (32-bit rows):
  source port | destination port
  sequence number
  acknowledgement
  HdrLen | 0 | flags | advertised window
  checksum | urgent pointer
  options (variable)
  data

Flags: SYN, FIN, RESET, PUSH, URG, ACK
History of TCP

- "The" Internet protocol for reliable end-to-end communication.
- First key paper:
  - V. Cerf and R. Kahn, "A Protocol for Packet Network Interconnection," IEEE Trans. Commun., 1974, pp. 627-641.
- Designed per se in the early '80s:
  - J. Postel, RFC 793 (also IP and UDP)
- Network assumptions:
  - Reliable links
  - Losses due to congestion only!
  - Symmetric network connections
  - Implicit in-order delivery of packets (more than IP can promise!)
Evolution of TCP

- 1974: TCP described by Vint Cerf and Bob Kahn in IEEE Trans. Comm.
- 1975: Three-way handshake, Raymond Tomlinson, in SIGCOMM 75.
- 1982: TCP & IP, RFC 793 & 791.
- 1983: BSD Unix 4.2 supports TCP/IP.
- 1984: Nagle's algorithm to reduce overhead of small packets; predicts congestion collapse.
- 1986: Congestion collapse observed.
- 1987: Karn's algorithm to better estimate round-trip time.
- 1988: Van Jacobson's algorithms: congestion avoidance and congestion control (most implemented in 4.3BSD Tahoe).
- 1990: 4.3BSD Reno: fast retransmit, delayed ACKs.
TCP Through the 1990s

- 1993: TCP Vegas (Brakmo et al): delay-based congestion avoidance.
- 1994: ECN (Floyd): Explicit Congestion Notification.
- 1994: T/TCP (Braden): Transaction TCP.
- 1996: SACK TCP (Floyd et al): Selective Acknowledgement.
- 1996: Hoe: NewReno startup and loss recovery.
- 1996: FACK TCP (Mathis et al): extension to SACK.
Services Provided

- End-to-end flow control
- Reliable byte stream
- In-order packet delivery (buffering)
- Connection-oriented
  - Sockets <host address, port> uniquely identify a connection
TCP Service Model

- TCP connections are full-duplex and point-to-point.
- Byte stream (not message stream): message boundaries are not preserved end-to-end.

(Figure: four 512-byte segments A, B, C, D sent as separate IP datagrams; 2048 bytes of data delivered to the application in a single READ.)
TCP Byte Stream

- When the application passes data to TCP, it may send it immediately or buffer it.
- Sometimes the application wants to send data immediately.
  - Example: interactive applications.
  - Use the PUSH flag to force transmission.
- URGENT flag: also forces TCP to transmit at once.
TCP Header

Important fields:
- Source port and destination port: identify connection end points.
- 32-bit SN: identifies a byte in the segment.
- 32-bit ACK: identifies the next byte expected.
- 4-bit header length: how many 32-bit words in the header.
- 16-bit window size (max. 64 KB) advertised by the receiver (RAW).
- Checksum: checks header, data, and pseudo-header.
- Flags: SYN, FIN, ACK, URG, PUSH.
- Options: way to add more information.

Important: only one sequence number!
- Identifies the segment, but does not identify which retransmission of the segment is being sent!
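As an illustration of where these fields sit, the sketch below unpacks the fixed 20-byte TCP header with Python's struct module; the example byte string is fabricated, and the field and flag labels are just descriptive names for this sketch.

```python
# Sketch of pulling the fields listed above out of a raw 20-byte TCP header
# with Python's struct module (options, if any, follow the fixed header).
# The byte string below is a made-up example, not captured traffic.
import struct

def parse_tcp_header(raw: bytes):
    (src_port, dst_port, seq, ack,
     off_reserved, flags, window, checksum, urg_ptr) = struct.unpack("!HHIIBBHHH", raw[:20])
    header_len = (off_reserved >> 4) * 4          # 4-bit length field, in 32-bit words
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len": header_len,
        "flags": {name: bool(flags & bit) for name, bit in
                  [("FIN", 0x01), ("SYN", 0x02), ("RST", 0x04),
                   ("PSH", 0x08), ("ACK", 0x10), ("URG", 0x20)]},
        "window": window, "checksum": checksum, "urgent_ptr": urg_ptr,
    }

example = struct.pack("!HHIIBBHHH", 443, 52000, 1000, 2000, 5 << 4, 0x12, 65535, 0, 0)
print(parse_tcp_header(example))   # a SYN+ACK with a 20-byte header
```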
TCP Header Flags

Six TCP flags:
- URG: indicates urgent data present; the urgent pointer gives the byte offset from the current sequence number where the urgent data are. Generally not used.
- ACK: indicates whether the segment contains an acknowledgment; if 0, the acknowledgement number field is ignored.
- PUSH: indicates PUSHed data, so the receiver delivers it to the application immediately. Generally not used.
- RST: used to reset a connection, reject an invalid segment, or refuse to open a connection.
- SYN: used to establish a connection; a connection request has SYN=1, ACK=0.
- FIN: used to release a connection.
TCP Connection Management

(Figures: TCP server lifecycle and TCP client lifecycle state diagrams.)
TCP Transmission

- Sender process initiates the connection.
- Once the connection is established, TCP can start sending data.
- Sender writes bytes to the TCP stream.
- TCP sender breaks the byte stream into segments.
  - Each byte is assigned a sequence number.
  - Segment sent and timer started.
TCP Transmission

- If the timer expires, retransmit the segment.
  - After retransmitting a segment the maximum number of times, TCP assumes the connection is dead and closes it.
- If the user aborts the connection, the sending TCP flushes its buffers and sends a RESET segment.
- The receiving TCP decides when to pass received data to the upper layer.
Timeout-Based Recovery

- Wait at least one RTT before retransmitting.
- Importance of accurate RTT estimators:
  - Low RTT estimate: unneeded retransmissions.
  - High RTT estimate: poor throughput.
- The RTT estimator must adapt to changes in RTT.
  - But not too fast, or too slow!
- Spurious timeouts violate the "conservation of packets" principle: more than a window's worth of packets end up in flight.
TCP Sender Events:

Data received from app:
- Create segment with seq #.
- Seq # is the byte-stream number of the first data byte in the segment.
- Start timer if not already running (think of the timer as for the oldest unacked segment).
- Expiration interval: TimeOutInterval.

Timeout expires:
- Retransmit the segment that caused the timeout.
- Restart the timer.

Acknowledgments:
- If the ACK acknowledges previously unacked segments:
  - Update what is known to be acked.
  - Start the timer if there are outstanding segments.
TCP ACK Generation [RFC 1122, RFC 2581]

Event at receiver -> TCP receiver action:
- Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed -> delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK.
- Arrival of in-order segment with expected seq #; one other segment has an ACK pending -> immediately send a single cumulative ACK, ACKing both in-order segments.
- Arrival of out-of-order segment with higher-than-expected seq #; gap detected -> immediately send a duplicate ACK, indicating the seq # of the next expected byte.
- Arrival of segment that partially or completely fills a gap -> immediately send an ACK, provided the segment starts at the lower end of the gap.
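A small sketch of how a receiver might apply these rules; the class and constant names are illustrative, and the logic only models the ACK decision, not real buffering of out-of-order data.

```python
# Sketch of the receiver-side ACK rules in the table above, assuming a simple
# in-order byte-stream receiver. DELAYED_ACK_MS and the class name are
# illustrative, not from the TCP specification.
DELAYED_ACK_MS = 500

class AckPolicy:
    def __init__(self):
        self.rcv_nxt = 0          # next expected byte
        self.ack_pending = False  # one in-order segment already waiting for an ACK

    def on_segment(self, seq: int, length: int) -> str:
        if seq == self.rcv_nxt:                     # in-order arrival
            self.rcv_nxt += length
            if self.ack_pending:
                self.ack_pending = False
                return f"send cumulative ACK {self.rcv_nxt} now"
            self.ack_pending = True
            return f"delay ACK up to {DELAYED_ACK_MS} ms"
        if seq > self.rcv_nxt:                      # gap detected
            return f"send duplicate ACK {self.rcv_nxt} immediately"
        # segment fills (part of) an earlier gap; real TCP would merge buffers here
        return f"send ACK {self.rcv_nxt} immediately"

p = AckPolicy()
print(p.on_segment(0, 100))     # delayed ACK
print(p.on_segment(100, 100))   # cumulative ACK for both segments
print(p.on_segment(300, 100))   # out of order -> duplicate ACK
```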
TCP Seq. #'s and ACKs

Seq. #'s:
- Byte-stream "number" of the first byte in the segment's data.

ACKs:
- Seq # of the next byte expected from the other side.
- Cumulative ACK.
- Piggybacking.

NOTE: the TCP spec does not dictate how the receiver handles out-of-order segments (store them, with modern hardware).

(Figure: simple telnet scenario. The user on Host A types 'U'; Host B ACKs receipt of 'U' and echoes 'U' back; Host A ACKs receipt of the echoed 'U'.)
Flow Control vs. Congestion Control

Congestion control:
- Global issue: concerns all routers and hosts on the path from source to destination.
- Make sure every subnet can handle the traffic.

(Figure: several senders share a path through two routers connected by 1 Mbps links to a single receiver.)
Flow Control vs. Congestion Control

Flow control:
- Involves two endpoints.
- Make sure the sender doesn't transmit faster than the receiver can absorb packets.

(Figure: a server on a 1 Gbps link transfers a file to a PC on a 1 Mbps link.)
End-to-End Congestion Control

Why do it at the transport layer?
- The real fix to congestion is to slow down the sender.
- Use the law of "conservation of packets":
  - Keep the number of packets in the network constant, just below the maximum that the bottleneck can take.
  - Don't inject a new packet until an old one leaves.
- Congestion indicator: packet loss.
TCP Flow Control

- TCP is a sliding window protocol.
  - For window size n, can send up to n bytes without receiving an acknowledgement.
  - When the data is acknowledged, the window slides forward.
- Each packet advertises a window size.
  - Indicates the number of bytes the receiver has space for.
- Original TCP always sent the entire window.
  - Congestion control now limits this.
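A minimal sketch of this byte-counted sliding window, assuming a fixed advertised window and cumulative ACKs; the variable names follow the usual snd_una/snd_nxt convention, but the class itself is invented for illustration.

```python
# Minimal sketch of the byte-oriented sliding window described above: with a
# window of n bytes, the sender may have at most n unacknowledged bytes
# outstanding; the window slides forward as cumulative ACKs arrive.
class SlidingWindowSender:
    def __init__(self, window_bytes: int):
        self.window = window_bytes   # advertised receiver window (bytes)
        self.snd_una = 0             # oldest unacknowledged byte
        self.snd_nxt = 0             # next byte to send

    def can_send(self, nbytes: int) -> bool:
        # allowed only if it keeps outstanding data within the window
        return (self.snd_nxt - self.snd_una) + nbytes <= self.window

    def send(self, nbytes: int):
        assert self.can_send(nbytes)
        self.snd_nxt += nbytes

    def on_ack(self, ack_no: int):
        # cumulative ACK: everything before ack_no is acknowledged,
        # so the window slides forward
        self.snd_una = max(self.snd_una, ack_no)

s = SlidingWindowSender(window_bytes=4096)
s.send(2048); s.send(2048)
print(s.can_send(1))      # False: window full
s.on_ack(2048)
print(s.can_send(2048))   # True: window slid forward
```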
Self-Clocking

- If we have a large actual window, should we send data in one shot?
  - No, use ACKs to clock sending new data.

(Figure: Jacobson's self-clocking diagram. Packet spacing at the bottleneck (Pb) determines the spacing of packets at the receiver (Pr) and of the returning ACKs (Ar, Ab, As) back at the sender.)
TCP Congestion Control Mechanisms

Collection of interrelated mechanisms:
- Slow start
- Congestion avoidance
- Accurate retransmission timeout estimation
- Fast retransmit
- Fast recovery
TCP Flow Control

- Transmission window, a.k.a. congestion window (cwnd):
  - Sliding window
  - Maintained by the sender
  - Conservation of packets
- Receiver's window:
  - Set through the socket API and controlled by the receiver
  - Advertised to the sender in a field of the TCP header (RAW)

(Figure: a sequence of segments 1-12 divided into regions: sent and ACKed; sent, not ACKed; send ASAP; can't send. cwnd spans the data that may be outstanding.)
Sender/Receiver State w/o Buffering at Receiver

Sender:
- Max ACK received, next seqnum.
- Sender window regions: sent & acked | sent, not acked | OK to send | not usable.

Receiver:
- Next expected, max acceptable.
- Receiver window regions: received & acked | acceptable packet | not usable.
Window Flow Control: Send Side

(Figure: TCP headers of a packet received and a packet sent, highlighting the sequence number, acknowledgment, and window fields; below, the send buffer as the application writes: acknowledged | sent | to be sent | outside window.)
TCP Flow Control: Observations

- The TCP sender is not required to transmit data as soon as it comes in from the application.
  - Example: when the first 2 KB of data come in, the sender could wait for more data, since the window is 4 KB.
- The receiver is not required to send ACKs as soon as possible.
  - Example: wait for data so the ACK is piggybacked.
Data Flow

- Conservation of packets:
  - Inject new packets at the rate ACKs are returned by the receiver.
- New window: cwnd
  - Initially cwnd = 1 segment.
- Sender's window = min(cwnd, RAW)
TCP Congestion Control: Slow Start

- Due to Van Jacobson (SIGCOMM 88).
- Algorithm:
  - Initialize cwnd = 1 MSS.
  - If an ACK is received before timeout: cwnd = cwnd + 1 MSS for each acknowledged segment.
- Algorithm used at the beginning of a connection and after a timeout.
- Leads to exponential growth in the amount of outstanding data in the network:
  - cwnd doubles every RTT epoch (i.e., once the last segment in the current window is acknowledged).
- How do we avoid congesting the network?
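A small sketch of the growth rule above, counting cwnd in MSS units; the ssthresh value and loop bound are arbitrary choices for the example.

```python
# Sketch of slow start as described above: cwnd starts at 1 MSS and grows by
# 1 MSS per acknowledged segment, which doubles it every RTT. Units are MSS.
def slow_start(ssthresh: float, rtts: int):
    cwnd = 1.0
    for rtt in range(rtts):
        acked = int(cwnd)              # one ACK per segment in the current window
        cwnd += acked                  # +1 MSS per ACK  =>  cwnd doubles per RTT
        print(f"after RTT {rtt}: cwnd = {cwnd:.0f} MSS")
        if cwnd >= ssthresh:
            print("reached ssthresh -> switch to congestion avoidance")
            break

slow_start(ssthresh=16, rtts=10)
```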
Slow Start Example

(Figure: one segment sent in the first RTT (packet 1); two in the next (packets 2-3); four in the next (packets 4-7); eight in the next (packets 8-15): one new packet per returning ACK, doubling the window each RTT.)
When Should Slow-Start End? Congestion Avoidance

- Want to end slow start when the pipe is full!
  - When cwnd > ssthresh.
  - Start with a large ssthresh, but then refine it.
- Slow start continues until the BWDP is exceeded, then:
  - Routers drop packets -- losses occur.
  - Need to stop the exponential increase!
- Use congestion avoidance to deal with lost packets:
  - Slow down the transmission rate.
  - Provide for linear increase of the transmission window.
- Congestion avoidance is implemented together with slow start.
Congestion Avoidance

- Introduce a new variable, ssthresh:
  - Initialized to 65,535 (max. window).
- On packet loss (timeout):
  - Set ssthresh = cwnd/2 and cwnd = 1.
  - Re-enter slow start, until cwnd = ssthresh.
- When cwnd = ssthresh:
  - Grow cwnd linearly until it reaches RAW.
  - When an ACK is received before timeout, set cwnd = cwnd + 1/cwnd.
  - Hence, cwnd increases by 1 segment every RTT.
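A sketch combining the two regimes, Tahoe-style: exponential growth below ssthresh, linear growth above it, and a timeout that halves ssthresh and resets cwnd to 1. All values are in MSS and the class is illustrative only.

```python
# Sketch of the ssthresh-based behavior described above: on a timeout,
# ssthresh is halved and cwnd falls back to 1 MSS (slow start); once cwnd
# reaches ssthresh, growth becomes linear (+1/cwnd per ACK, ~+1 MSS per RTT).
class TahoeLikeSender:
    def __init__(self):
        self.cwnd = 1.0               # in MSS
        self.ssthresh = 64.0          # initial value; 65,535 bytes in the slide

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0               # slow start: exponential growth
        else:
            self.cwnd += 1.0 / self.cwnd   # congestion avoidance: linear growth

    def on_timeout(self):
        self.ssthresh = max(self.cwnd / 2.0, 2.0)
        self.cwnd = 1.0               # re-enter slow start

s = TahoeLikeSender()
for _ in range(100):
    s.on_ack()
s.on_timeout()
print(round(s.cwnd, 2), round(s.ssthresh, 2))
```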
Putting it together: Slow Start and Congestion Avoidance

The algorithm:
- If cwnd < ssthresh, do slow start.
- Else (cwnd >= ssthresh), do congestion avoidance.
Problems with CA and SS

- Slow start is an attempt to discover the network bandwidth (quickly).
  - Discovery proceeds by filling network queues in intermediate routers.
  - Once queues are full, routers drop packets.
  - Once loss is discovered, it's too late!
  - The TCP sender reduces its window when loss is discovered.
- Queue level oscillates between full and cwnd/2.
- What sort of problems does this introduce?
Tahoe TCP Congestion Control: Under-Damped Feedback System!

A very drastic reaction to congestion!

(Figure: cwnd over time. Slow start grows cwnd up to ssthresh, congestion avoidance grows it linearly until loss; after waiting for the timeout, cwnd drops to 1 MSS, ssthresh is halved, and the slow start / congestion avoidance cycle repeats.)
Tuning TCP Tahoe's Congestion Control

- Coarse timeouts remained a problem, and fast retransmit was added with TCP Tahoe.
- Timeouts can cause connections to be idle for a long time waiting for the timer to expire.
- Fast retransmit may trigger retransmission of a dropped packet sooner.
  - Complements regular timeouts.
Fast Retransmit

- When can duplicate ACKs occur?
  - Loss or packet re-ordering.
- The sender waits for some number of duplicate ACKs before retransmitting.
  - Assume packet re-ordering is infrequent.
  - Use receipt of 3 or more duplicate ACKs as a loss indicator.
  - Retransmit that segment before the timeout.
- Generally, fast retransmit eliminates about half the coarse-grain timeouts.
- Conventional wisdom is that this yields roughly a 20% improvement in throughput.
- Note: fast retransmit does not eliminate all the timeouts, due to small window sizes at the source.
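A sketch of the duplicate-ACK counting that triggers fast retransmit; the threshold of 3 comes from the slide, while the class and return strings are purely illustrative.

```python
# Sketch of the fast-retransmit trigger described above: count duplicate
# ACKs and retransmit the missing segment once 3 duplicates are seen,
# instead of waiting for the retransmission timer.
DUP_ACK_THRESHOLD = 3

class FastRetransmit:
    def __init__(self):
        self.last_ack = -1
        self.dup_count = 0

    def on_ack(self, ack_no: int) -> str:
        if ack_no == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                return f"fast retransmit segment starting at byte {ack_no}"
            return "duplicate ACK"
        self.last_ack, self.dup_count = ack_no, 0
        return "new ACK"

fr = FastRetransmit()
for ack in [1000, 2000, 2000, 2000, 2000]:
    print(fr.on_ack(ack))   # the 3rd duplicate of ACK 2000 triggers retransmission
```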
Reno: Fast Recovery

- Goal:
  - Reduce the number of times the connection is slow-started.
  - Use ACKs in the pipe for self-clocking.
- In congestion avoidance mode, after fast retransmit, reduce cwnd to half (rather than dropping it to 1).
- Reno vs. Tahoe: slow start is only used at the beginning of a connection or when a timeout occurs.
Reno: Tahoe with Fast Retransmit and Recovery

- If 3 duplicate ACKs for segment N are received:
  - Retransmit segment N.
  - Set ssthresh = 0.5*cwnd.
  - Set cwnd = ssthresh + 3*MSS. [account for the 3 duplicate ACKs]
- For every subsequent duplicate ACK:
  - Increase cwnd by 1 segment.
- When a "new" ACK is received for the retransmitted packet:
  - Reset cwnd = ssthresh.
  - Resume congestion avoidance.
- Result: cwnd is reset to half of the old cwnd after fast recovery.
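A sketch of these steps in code, with cwnd and ssthresh in MSS units; actual retransmission and congestion-avoidance growth are omitted, and the class is illustrative.

```python
# Sketch of Reno fast retransmit + fast recovery as listed above (units: MSS).
# This mirrors the slide's steps; it is not a full TCP implementation.
class RenoRecovery:
    def __init__(self, cwnd: float):
        self.cwnd = cwnd
        self.ssthresh = cwnd
        self.in_recovery = False

    def on_three_dup_acks(self):
        # retransmit segment N (omitted here), then:
        self.ssthresh = 0.5 * self.cwnd
        self.cwnd = self.ssthresh + 3          # account for the 3 duplicate ACKs
        self.in_recovery = True

    def on_dup_ack(self):
        if self.in_recovery:
            self.cwnd += 1                     # each dup ACK means a packet left the network

    def on_new_ack(self):
        if self.in_recovery:
            self.cwnd = self.ssthresh          # deflate: half of the old cwnd
            self.in_recovery = False           # resume congestion avoidance

r = RenoRecovery(cwnd=16)
r.on_three_dup_acks(); r.on_dup_ack(); r.on_dup_ack(); r.on_new_ack()
print(r.cwnd)   # 8.0: half of the pre-loss window
```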
Delayed ACKs

- Tries to optimize ACK transmission.
- Delay ACKs (500 msec) hoping to piggyback on a data segment.
- Example: telnet to an interactive editor:
  - Send 1 character at a time: 20-byte TCP header + 1-byte data + 20-byte IP header.
  - Receiver ACKs immediately: 40-byte ACK.
  - When the editor reads the character, window update: 40-byte datagram.
  - Then it echoes the character back: 41-byte datagram.
Example: Simple Bottleneck

- Packet size = 1 Kbyte
- Initial ssthresh = 32 packets
- BWDP (capacity of network) = 16.3 Kbyte
- Queue capacity = 17 packets
Reno: Congestion Window and Queue Growth

(Figure: Reno's congestion window and the bottleneck queue length over time for the simple-bottleneck example.)
Reno: Congestion Window and Queue Growth

- Queues fill once the window grows larger than 17 packets.
- After packet loss, Reno cuts the window and starts again.
- See-saw oscillations in window and queue length:
  - Increases end-to-end delays.
  - Bad for real-time and interactive applications.
So far…

- Have a way to fill the pipe (slow start).
- Have a way to run at equilibrium (congestion avoidance).
- But a tough transition:
  - No good initial ssthresh.
  - A large ssthresh causes packet loss.
  - Need approaches to quickly recover from packet loss.
How Do Losses Occur?

- Bit errors detected by CRC.
  - Wireless links: commonplace.
- Congestion in the network: packets dropped by routers.
  - Competing data flows.
  - Window exceeds the bandwidth-delay product (BWDP).
- Note that BWDP is a function of the length (prop. delay) of the link and its bandwidth.
Error Recovery

- After timeouts and duplicate ACKs.
- Sender retransmits a segment after a timeout.
  - Sender must estimate the connection RTT:
    - Times one packet per window.
    - Performs a smoothed average over time:
      - rtt = β*old_rtt + (1-β)*rtt_sample (e.g., β = 0.875)
      - RTO = rtt + 4*dev
      - Difference = rtt_sample - rtt_estimated
      - rtt_estimated = rtt_estimated + δ*Difference
      - dev = dev + δ*(|Difference| - dev), with 0 < δ < 1
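A sketch of this estimator (Jacobson/Karels style), assuming δ = 0.125 and a deviation initialized to half the first sample; both are common conventions rather than anything fixed by the slide.

```python
# Sketch of the RTT estimator and RTO computation above: an exponentially
# weighted average of RTT samples plus a mean-deviation term.
class RttEstimator:
    def __init__(self, first_sample: float, delta: float = 0.125):
        self.rtt = first_sample      # smoothed RTT estimate
        self.dev = first_sample / 2  # smoothed mean deviation (a common initialization)
        self.delta = delta

    def update(self, sample: float):
        difference = sample - self.rtt
        self.rtt += self.delta * difference
        self.dev += self.delta * (abs(difference) - self.dev)

    @property
    def rto(self) -> float:
        return self.rtt + 4 * self.dev

est = RttEstimator(first_sample=0.100)
for s in [0.110, 0.095, 0.300, 0.105]:   # a spike inflates dev, and hence the RTO
    est.update(s)
    print(f"rtt={est.rtt:.3f}s dev={est.dev:.3f}s RTO={est.rto:.3f}s")
```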
Error Recovery (concluded)

- The sender also retransmits a segment after 3 duplicate ACKs.
  - The receiver sends a cumulative ACK stating the next in-order packet expected.
  - A missing packet causes generation of duplicates.
  - No theoretical reason for 3.
Estimating RTT

- Karn, P. and Partridge, C., 1987, "Improving Round-Trip Time Estimates in Reliable Transport Protocols."
- Reno performs one RTT estimate per window of data.
- What do we do when there is a timeout and retransmission?
  - When the ACK arrives, does it refer to the 1st or the 2nd transmission?
- Karn and Partridge's solution:
  - Don't update the RTT on any segments that have been retransmitted.
  - Instead, double the timeout on each failure, until successful.
  - Goal: induce exponential backoff!
- This is a direct result of using a single sequence number over a non-FIFO link!
  - A TCP option can be used with timestamps!
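A toy sketch of Karn's rule plus exponential backoff over a made-up event sequence; resetting the RTO to its initial value on a clean ACK is a simplification standing in for the full estimator.

```python
# Sketch of Karn's rule as described above: skip RTT samples that come from
# retransmitted segments (the ACK is ambiguous), and back off the timer
# exponentially on each timeout. The event list below is invented.
def run(events, initial_rto=1.0, max_rto=60.0):
    rto = initial_rto
    samples = []                          # RTT samples actually fed to the estimator
    for kind, value in events:
        if kind == "timeout":
            rto = min(2 * rto, max_rto)   # exponential backoff: keep doubling
        elif kind == "ack":
            sample, was_retransmitted = value
            if not was_retransmitted:     # Karn's rule: ambiguous samples are discarded
                samples.append(sample)
                rto = initial_rto         # simplification: a clean ACK restores the timer
    return rto, samples

print(run([("ack", (0.10, False)), ("timeout", None), ("timeout", None),
           ("ack", (2.30, True)),   # ACK for a retransmitted segment: sample ignored
           ("ack", (0.12, False))]))
```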
Example RTT Estimation

(Figure: SampleRTT and EstimatedRTT, in milliseconds, versus time in seconds, for the RTT from gaia.cs.umass.edu to fantasia.eurecom.fr; samples range roughly from 100 to 350 ms.)
TCP New Reno

- Two problem scenarios with TCP Reno:
  - Bursty losses: Reno cannot recover from bursts of 3+ losses.
  - Packets arriving out of order can yield duplicate ACKs when in fact there is no loss.
- New Reno solution: try to determine the end of a burst loss.
TCP New Reno

- When duplicate ACKs trigger a retransmission for a lost packet, remember the highest packet sent from the window in "recover".
- Upon receiving an ACK:
  - If ACK < recover => partial ACK.
  - If ACK >= recover => new ACK.
- A partial ACK implies another lost packet:
  - Retransmit the next packet, inflate the window, and stay in fast recovery.
- A new ACK implies fast recovery is over:
  - Starting from 0.5 x cwnd, proceed with congestion avoidance (linear increase).
- Positive result: New Reno recovers from n losses in n round trips.
- Many servers support New Reno.
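A sketch of the partial-ACK bookkeeping, with "recover" holding the highest sequence number outstanding when fast retransmit fired; the numbers and class name are illustrative.

```python
# Sketch of the New Reno partial-ACK logic described above. 'recover' records
# the highest sequence number outstanding when fast retransmit fired; ACKs
# below it are partial ACKs that keep the sender in fast recovery.
class NewRenoRecovery:
    def __init__(self, cwnd: float, highest_sent: int):
        self.ssthresh = cwnd / 2
        self.cwnd = self.ssthresh + 3     # as in Reno's fast retransmit
        self.recover = highest_sent       # end of the loss burst we must get past
        self.in_recovery = True

    def on_ack(self, ack_no: int) -> str:
        if not self.in_recovery:
            return "normal congestion avoidance"
        if ack_no < self.recover:         # partial ACK: another segment was lost
            self.cwnd += 1                # inflate and stay in fast recovery
            return f"partial ACK {ack_no}: retransmit next missing segment"
        self.cwnd = self.ssthresh         # new ACK: recovery is over
        self.in_recovery = False
        return f"new ACK {ack_no}: resume congestion avoidance from cwnd/2"

nr = NewRenoRecovery(cwnd=20, highest_sent=10_000)
print(nr.on_ack(4_000))    # partial ACK -> retransmit, stay in recovery
print(nr.on_ack(10_000))   # new ACK -> exit recovery
```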
Evaluation of TCP Reno

- Good:
  - Tries to adapt to network conditions, uses the Internet transparently.
- Undesirable:
  - Error recovery: RTT measurements and assumptions of link reliability.
    - Sequence numbering evolved from ARQ schemes that assume in-order delivery of packets.
    - RTT is used, rather than forward delay!
    - The Internet is asymmetric.
    - Out-of-order packet delivery happens.
    - Caveat: wireless networks and mobility. When is a packet really lost? No packet self-clocking!
  - Congests to discover the bandwidth available in the connection:
    - Creates congestion and fills network queues until a loss occurs.
    - Poor performance over asymmetric networks.
- Could be improved:
  - Every object is not just a byte stream with the same service needs!
Improvements to TCP

- CUBIC: meant for high-speed networks. Meant to improve TCP friendliness and RTT fairness. Throughput is defined by the packet loss rate only, and not the RTT.
- SACK: include information in the ACK which indicates missing packets in the window.
- Vegas: use rate control instead of the arrival of ACKs to pace data into the network.
- Santa Cruz: use relative delay over the forward path to anticipate queue buildup.
Extras…
Goals for "TCP Santa Cruz"

1. Improve detection of congestion:
   - Decouple error control from congestion control.
   - Don't rely on packet loss.
   - Identify the direction of congestion.
2. Improve congestion control:
   - Don't fill network queues.
   - Be robust to congestion on the reverse path and to ACK loss.
   - Isolate forward throughput from events in the reverse path.
3. Provide high throughput, low delay, and low delay variation.
Error Recovery

1. Improve the RTT estimate by timing each packet, including retransmissions:
   - Identify by SN and copy number.
   - Eliminate Karn's algorithm.
   - Time packets when needed most: during congestion.
2. Retransmit after the 1st duplicate ACK if necessary (Vegas does this for original transmissions).
3. The receiver transmits an ACK window to indicate holes in the transmission stream.
ACK Window

- Provides the status of every packet within the send window.
- Each bit represents a specified number of bytes received.
- The granularity of a bit is determined by the receiver.
Congestion Control

- Detect changes in queue length at the bottleneck link.
- Monitor the relative delay over the link:
  - The delay that one packet experiences with respect to another.
  - Limit the number of packets in the bottleneck queue.
  - Calculated by the sender from a timestamp returned by the receiver.
Relative Delay Calculation

D_{j,i} = R_{j,i} - S_{j,i}

- D_{j,i} = 0: no additional queuing.
- D_{j,i} > 0: increased queuing on the forward path.
- D_{j,i} < 0: decreased queuing on the forward path.
Congestion Control Algorithm

1. Let N_op be the desired number of packets, per session, queued at the bottleneck.
2. For each pair of ACKs received, the sender computes:
   - Relative delay: D = R - S
   - Current packet service time at the receiver: pktS = R / (# pkts received)
Congestion Control Algorithm (cont.)

3. Translate the relative delay into packets:
   (a) Queuing over the window interval: sum the delay measurements for all packet pairs and divide by the average packet service time.
   (b) Total queuing.
Congestion Control Algorithm (cont.)

4. Window adjustment policy:
   - Controls the amount of outstanding data in the network.
   - Goal is to maximize throughput and minimize delay.
   - If n_i == N_op: maintain the current window size.
   - If n_i < N_op: increase by one segment.
   - If n_i > N_op: decrease by one segment.
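An illustrative sketch (not the published TCP Santa Cruz code) of how relative delays could be turned into an estimate n_i and compared against the target N_op; the timing numbers are made up.

```python
# Illustrative sketch of the policy above: estimate queued packets n_i from
# relative delays and nudge the window toward the target N_op by one segment.
def relative_delay(send_gap: float, recv_gap: float) -> float:
    # D = R - S: change in spacing between a packet pair across the forward path
    return recv_gap - send_gap

def queued_packets(relative_delays, avg_pkt_service_time: float) -> float:
    # queuing over the window interval, translated into packets
    return sum(relative_delays) / avg_pkt_service_time

def adjust_window(cwnd: int, n_i: float, n_op: float) -> int:
    if n_i < n_op:
        return cwnd + 1     # under target: probe for more bandwidth
    if n_i > n_op:
        return cwnd - 1     # over target: drain the bottleneck queue
    return cwnd             # on target: hold the window

delays = [relative_delay(0.010, 0.012), relative_delay(0.010, 0.011)]
n_i = queued_packets(delays, avg_pkt_service_time=0.002)
print(adjust_window(cwnd=18, n_i=n_i, n_op=1))   # 1.5 packets queued > N_op=1 -> shrink
```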
Simulations

- ns-2 network simulator.
- Derived the protocol from an existing TCP implementation.
- Compare TCP Santa Cruz to Reno and Vegas.
- 3 configurations:
  - Simple bottleneck
  - Traffic on the reverse path
  - Asymmetric configuration
Experiment #1: Simple Bottleneck

- Packet size = 1 Kbyte
- BWDP (capacity of network) = 16.3 Kbyte
- Queue capacity = 17 packets
Reno: Congestion Window and Queue Growth

- Queues fill once the window grows larger than 17 packets.
- After packet loss, Reno cuts the window and starts again.
- See-saw oscillations in window and queue length:
  - Increases end-to-end delays.
  - Bad for real-time and interactive applications.
TCP-Santa Cruz: Congestion Window and Queue Growth

- N_op = 1 (minimal queuing to reduce delays).
- Window at the desired operating point: 17 + 1 = 18.
- No see-saw oscillations in window and queue length!
- Transmits at the available bandwidth without introducing congestion.
- No overflow of network queues.
Simple Bottleneck Summary

TCP Santa Cruz provides:
- Slightly improved throughput (not much room for improvement).
- Lower delay: 20-45% improvement over Reno.
- Reduced delay variance over Reno and Vegas.
Experiment #2: Reverse Traffic

- Question: is throughput affected by reverse-path traffic?
- Goal: isolate forward throughput from reverse-path events!
  - Can't be done with RTT measurements!
- No reason to slow the forward-path transmission rate.
Reverse Traffic: Window Growth

- Reno:
  - Window growth slowed because of lost and delayed ACK packets.
  - Loss detection is also delayed.
- Santa Cruz:
  - N_op = 5.
  - Achieves the optimal window size: 17 + 5 = 22.
Reverse Traffic: Summary

- Throughput: Santa Cruz achieves 47-67% improvement.
- Delay: Santa Cruz achieves 45-59% improvement.
- Delay variance: 3 orders of magnitude improvement.
Experiment #3: Network Asymmetry

- ADSL, HFC, combination networks: e.g., telephone upstream, cable downstream.
- Forward path: 24 Mbps -> 3000 pkts/sec.
- Reverse path: 320 kbps -> 1000 pkts/sec.
- Asymmetry factor: k = 3 -> commonplace.
Network Asymmetry: Summary

- Throughput: Santa Cruz achieves 99% improvement.
- Delay: Santa Cruz achieves 42-58% improvement over Reno.
Conclusion

- High throughput.
- Low end-to-end delay and delay variation.
- Isolates forward throughput from events on the reverse path:
  - ACK loss, congestion on the reverse path, asymmetric links.
- Problems:
  - Modify AID (additive increase and decrease) to ensure fairness under multiple sources!
  - Problems with different bottlenecks?
  - Wireless.
TCP-SACK

- Goal: improve TCP's error recovery mechanism.
- Selectively acknowledge data within the transmission window, so the sender can tell which data were lost.
- Uses sequence number ranges.
  - Example: ACK = 1000, SACK = 1040:1080.
- Limited by the max. size of the TCP header to 3 distinct ranges.
- Important when there are multiple losses per window:
  - Multiple losses often result in a timeout.
- Significant performance improvements in wired networks.
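A sketch of assembling SACK-style ranges from out-of-order data, matching the slide's ACK = 1000, SACK = 1040:1080 example; the merging helper and the 3-block cap reflect the slide's description, not a full TCP option encoder.

```python
# Sketch of building SACK-style ranges from out-of-order data: report the
# cumulative ACK plus byte ranges received beyond it. Real TCP carries only
# a few such blocks in the options field.
def sack_blocks(cumulative_ack: int, received_ranges):
    # keep only ranges beyond the cumulative ACK, merged and sorted
    blocks = []
    for start, end in sorted(r for r in received_ranges if r[0] > cumulative_ack):
        if blocks and start <= blocks[-1][1]:
            blocks[-1] = (blocks[-1][0], max(blocks[-1][1], end))   # merge overlap
        else:
            blocks.append((start, end))
    return blocks[:3]   # header space limits the number of ranges reported

# ACK = 1000, and bytes 1040:1080 arrived out of order (as in the slide's example)
print(sack_blocks(1000, [(1040, 1080)]))                          # [(1040, 1080)]
print(sack_blocks(1000, [(1040, 1060), (1055, 1080), (1200, 1300)]))
```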
MSS (Maximum Segment Size)

- Largest "chunk" of application-level data.
- Can be specified in the SYN segment, else a default is used.
  - Typical values are 1500, 536 and 512 bytes.
  - "Non-local address": default 536 bytes.
- Ex: in practice, MSS is limited by the MTU of the LAN…
  - Limited by the outgoing interface's MTU of 1500 bytes.