Shall we worry about packet reordering?
Bartosz Belter, Artur Binczewski, Michał Przybylski
bart | artur | [email protected]
TERENA Networking Conference TNC2005, Poznań, 6-9 June 2005
Contents
• What is packet reordering?
• What are its sources?
• Shall we really worry?
Packet reordering is not pathological network behavior...
• the IP protocol provides unsequenced, unreliable datagram transmission,
• higher-layer protocols should provide the missing features, but...
• they assume some "common conditions": low loss, minimal reordering, no duplicates
• they misperform under conditions that are in fact normal (dismissing them as extreme, rare cases)
What was packet reordering before standard metrics?
Problem – what is the reordering in the following case?

source:      1 2 3 4 5 6 7 8 9 10
destination: 1 2 4 5 6 3 7 10 9 8

• 30% of packets reordered (because packets 3, 9 and 8 are late), or
• 50% of packets reordered (because packets 4, 5, 6, 10 and 9 are early)
What else do we get from that example?
Another example:

source:      1 2 3 4 5 6 7 8 9 10
destination: 2 1 4 3 6 5 8 7 10 9

• 50% of packets reordered (because packets 2, 4, 6, 8 and 10 are early,
or 1, 3, 5, 7 and 9 are late)

Which case was worse, and how can we compare such values?
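To make the two informal definitions used above reproducible, here is a minimal sketch (my own illustration; the helper names are not from the talk) that counts packets arriving after a higher sequence number as late, and packets overtaking a lower sequence number as early:

```python
# Sketch only: the naive "late" and "early" reordering percentages used above.

def late_packets(arrivals):
    """Packets arriving after a higher sequence number has already arrived."""
    highest, late = 0, []
    for seq in arrivals:
        if seq < highest:
            late.append(seq)
        highest = max(highest, seq)
    return late

def early_packets(arrivals):
    """Packets that arrive before some packet with a lower sequence number."""
    return [seq for i, seq in enumerate(arrivals)
            if any(later < seq for later in arrivals[i + 1:])]

for arrivals in ([1, 2, 4, 5, 6, 3, 7, 10, 9, 8],
                 [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]):
    late, early = late_packets(arrivals), early_packets(arrivals)
    pct = lambda xs: 100 * len(xs) // len(arrivals)
    print(f"late {pct(late)}% {late}, early {pct(early)}% {early}")
```

Run on the two example sequences, it reproduces the 30%/50% and 50%/50% answers, which shows exactly why a single percentage is ambiguous.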
What about TCP?
Selected standard reordering metrics (1)
1. draft-jayasumana-reorder-density-04.txt
– Reorder Buffer-occupancy Density shows the histogram of the occupancy of a
hypothetical buffer used as a waiting room by early packets (the re-ordering
buffer). The metric is computed upon each packet arrival at the receiver.
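As a concrete illustration (a minimal sketch of my own, not the draft's reference algorithm), the density can be obtained by replaying the arrival sequence through the hypothetical buffer and histogramming its occupancy, sampled at every arrival:

```python
# Sketch: Reorder Buffer-occupancy Density for a sequence of arriving
# sequence numbers (losses and duplicates are ignored for simplicity).
from collections import Counter

def reorder_buffer_density(arrivals):
    buffer, expected, samples = set(), 1, []
    for seq in arrivals:
        buffer.add(seq)
        while expected in buffer:    # release packets that are now in order
            buffer.remove(expected)
            expected += 1
        samples.append(len(buffer))  # occupancy sampled at this arrival
    total = len(samples)
    return {occ: n / total for occ, n in sorted(Counter(samples).items())}

print(reorder_buffer_density([1, 2, 4, 5, 6, 3, 7, 10, 9, 8]))
# -> {0: 0.5, 1: 0.2, 2: 0.2, 3: 0.1}
```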
Selected standard reordering metrics (2)
2. draft-ietf-ippm-reordering-09.txt
– the Extent of Reordering (showing, for each reordered packet, its
displacement – i.e. how much too late the packet arrived) – derived metrics
include the Maximum of Extent, etc.
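A minimal sketch of the per-packet extent, under the draft's definition as I read it (a reordered packet's extent is how many arrival positions it trails behind the earliest-arriving packet with a higher sequence number):

```python
# Sketch: reordering extent per reordered packet (assumed reading of the draft).

def reordering_extents(arrivals):
    extents = {}
    for i, seq in enumerate(arrivals):
        # positions of packets with a higher sequence number that arrived earlier
        earlier_higher = [j for j in range(i) if arrivals[j] > seq]
        if earlier_higher:                    # the packet is reordered (late)
            extents[seq] = i - min(earlier_higher)
    return extents

print(reordering_extents([1, 2, 4, 5, 6, 3, 7, 10, 9, 8]))
# -> {3: 3, 9: 1, 8: 2}; the Maximum of Extent here is 3
```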
Selected standard reordering metrics (3)
2. draft-ietf-ippm-reordering-09.txt
– the Byte Offset – the storage space in the buffer required to restore
order;
Selected standard reordering metrics (4)
2. draft-ietf-ippm-reordering-09.txt
– the Time Offset – the amount of time the reordered packet must be held
until all the preceding packets arrive
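Both offsets can be read off the same hypothetical buffer. The sketch below is my own illustration, with assumed input tuples of (sequence number, size in bytes, arrival time in ms): it records the bytes parked to restore order and how long each early packet waits.

```python
# Sketch: Byte Offset (bytes held to restore order) and Time Offset (how
# long early packets wait) from (seq, size_bytes, arrival_ms) arrivals.

def buffer_offsets(arrivals):
    held, expected = {}, 1          # held: seq -> (size_bytes, arrival_ms)
    byte_samples, hold_ms = [], {}
    for seq, size, now in arrivals:
        held[seq] = (size, now)
        while expected in held:     # release what is now in order
            _, t_arrived = held.pop(expected)
            hold_ms[expected] = now - t_arrived
            expected += 1
        byte_samples.append(sum(sz for sz, _ in held.values()))
    return byte_samples, hold_ms

bytes_held, waits = buffer_offsets([(1, 1500, 0), (3, 1500, 100),
                                    (4, 64, 200), (2, 1500, 300)])
print(bytes_held)  # [0, 1500, 1564, 0] bytes needed to restore order
print(waits)       # packet 3 waited 200 ms, packet 4 waited 100 ms for packet 2
```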
Sources of packet reordering
• parallelism in the network,
• network faults (e.g. route flapping),
• improper configuration,
• faulty software (e.g. buggy queue implementations),
• special QoS/performance configuration (especially
re-marking instead of dropping).
Parallelism in the network
• Link bundling (e.g. using three 1GE interfaces to achieve a 3Gbit/s link)
• Load balancing (sharing a number of separate paths or links for
transmission between two devices)
• "New technology in old chassis" – using parallel lower-speed queues or
processors to build a high-speed interface
Parallelism – where is the problem?
• Queuing regime
– Per-packet Round Robin (can introduce high reordering with asynchronous
queues; see the sketch below)
– Workaround for RR – constant packet processing time...
– Per-flow scheduling (will most probably preserve the order, but limits the
flow size to the queue capacity in theory, and to even less in practice)
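To see why asynchronous queues break per-packet round robin, here is a toy model (entirely my own assumption, not vendor behavior): a burst is sprayed across parallel queues with unequal per-packet service times, and the egress order follows completion time rather than send order.

```python
# Sketch: per-packet round robin over parallel queues with unequal speeds.

def round_robin_egress(n_packets, service_times):
    n = len(service_times)
    free_at = [0.0] * n                 # when each parallel queue is next idle
    done = []
    for seq in range(1, n_packets + 1):
        q = (seq - 1) % n               # per-packet round-robin spraying
        free_at[q] += service_times[q]  # this packet leaves queue q then
        done.append((free_at[q], seq))
    return [seq for _, seq in sorted(done)]

# four queues, one markedly slower: the burst leaves out of order
print(round_robin_egress(12, [1.0, 1.0, 1.0, 2.5]))
# -> [1, 2, 3, 5, 6, 7, 4, 9, 10, 11, 8, 12]
```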
Parallelism – practical example
(with per-flow scheduling)
• 10Gbit/s interface, 4 queues x 2.5Gbit/s, per-flow regime
• Interface load – 4Gbit/s, mix of Internet traffic
a) Maximum theoretical flow size = queue bandwidth = 2.5Gbit/s
b) Average queue load = 4Gbit/s / 4 queues = 1Gbit/s
c) Available queue bandwidth = 2.5Gbit/s – 1Gbit/s = 1.5Gbit/s
d) Available interface bandwidth = 10Gbit/s – 4Gbit/s = 6Gbit/s
Maximum flow size = available queue bandwidth = 1.5Gbit/s
(can be less in the presence of high-bandwidth flows)
Available interface bandwidth = 6Gbit/s
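The same arithmetic, restated step by step as a sketch (the figures are the slide's own):

```python
# The slide's per-flow arithmetic, step by step.
interface_gbps = 10.0
queues = 4
queue_bw = interface_gbps / queues           # a) at most 2.5 Gbit/s per queue
load_gbps = 4.0                              # mix of Internet traffic
avg_queue_load = load_gbps / queues          # b) 1.0 Gbit/s per queue on average
max_flow = queue_bw - avg_queue_load         # c) 1.5 Gbit/s left for one flow
free_interface = interface_gbps - load_gbps  # d) 6.0 Gbit/s free on the interface
print(max_flow, free_interface)              # 1.5 6.0 – a single flow is capped
                                             # far below the free interface capacity
```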
Known cases – Juniper M160 (GEANT core)
OC-192 interface:
4 parallel processors x 2.5Gbit/s serve a single 10Gbit/s interface.
Test results from LightReading:
• reordering only above 73% of load, or 56% of load in the worst case,
when customer traffic is composed of 40-byte IP packets only.
Known cases – Juniper M160 (GEANT core)
Test results from GEANT (while testing LBE traffic)
"Packet reordering can be greatly reduced, so that end-to-end
TCP throughput is preserved, by:
– assigning a lower weight to the EF queue
– configuring different priorities to the LBE and BE queues
– packet reordering seems to affect TCP best-effort
throughput if the parameters are wrong"
Known cases – Extreme Networks
BlackDiamond 6808 switches:
new (10GE) interfaces in an old (1GE) architecture
• originally 8x1GE interfaces per card
• a 10GE NIC served by 8 x 1GE queues
• queuing regime – RR (packet-based) and flow-based
Flow-based:
max. flow capacity = 1Gbit/s – background traffic.
There is no known workaround for this reordering problem.
How to measure?
- Don't use ping or traceroute (ICMP packets get special treatment)
- To test an application, use UDP packet generators and try to shape the
streams to resemble your application's streams
- To stress the network, use the worst-case scenario (see the sender sketch
below):
- a burst of long packets followed by a burst of short packets
- line speed
- a small time gap
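A minimal sender sketch of that stress pattern (the receiver address, port, payload sizes and repetition count are my assumptions, not the tool used for the talk): back-to-back long datagrams, then short ones, each carrying a sequence number so the receiver can detect reordering.

```python
# Sketch: worst-case UDP stress pattern – a burst of long packets followed
# by a burst of short ones, sent back-to-back, with a small gap between runs.
import socket
import struct
import time

DEST = ("192.0.2.10", 9000)        # assumed test receiver (TEST-NET address)
LONG, SHORT, BURST = 1400, 20, 10  # assumed payload sizes and burst length

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 0
for _ in range(100):               # repeat the fingerprint pattern
    for size in [LONG] * BURST + [SHORT] * BURST:
        payload = struct.pack("!I", seq).ljust(size, b"\0")
        sock.sendto(payload, DEST) # back-to-back, i.e. as fast as the host can
        seq += 1
    time.sleep(0.001)              # small time gap between repetitions
```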
Our results
- Network fingerprint @ 100Mbit/s
- 10 long and 10 short packets at wire speed
[figure: reordering fingerprint of an Ogg stream, measured on the GEANT
network and a European testbed]
Shall we really worry?
• Reordering is there and will stay there (the race for faster
interfaces...)
• You will not be informed about it unless you ask
• Old applications are most probably vulnerable to reordering
• New applications should use reordering-resistant protocols
• Even slow (on a macroscopic scale) applications can suffer from it
• Large flows are affected even more
Understanding and network/application analysis is the key.
Questions?
Thank you!
Michal Przybylski
[email protected]