Transcript ippm-3x

TCP Throughput
Testing Methodology
IETF 76 Hiroshima
Barry Constantine
[email protected]
draft-constantine-ippm-tcp-throughput-tm-00.txt
OSI Model: Division of Responsibility
– Layers 5-7 (HTTP, FTP, Email, etc.): IT department responsibility
– Layer 4 (TCP): shared responsibility
– Layers 1-3 (IP, Ethernet): Network Provider’s responsibility
History: Provisioning of Managed Networks
 Even though RFC2544 was meant to benchmark network
equipment (and used by network equipment manufacturers),
network providers have used it to benchmark managed,
operational networks in order to provide Service Level
Agreements (SLAs) to their business customers
– Ultimately, network providers have come to the realization that a
successful RFC2544 test result does not guarantee end-user
satisfaction
 It is difficult, if not impossible, to extrapolate end-user
application layer performance from RFC2544 results, and
RFC2544 was never intended for that purpose.
Automated turn-up test – RFC 2544 Overview
 Goal
– Run a sequence of tests to verify the general
performance of a circuit.
 Test method
– Packet based end-end or looped-back
 Test end-end network:
– Throughput rate: in frames/sec or % link utilization
– Frame loss: absolute or %
– Delay/Latency: in ms or us
– Back-to-Back: in frames or time
 Test parameters:
– Packet size: 64, 128, 256, 512, 1024, 1280, 1518 bytes
– Packet rate: 10, 20, 30, 40, 50, 60, 70, 80, 90, 100% of maximum rate
– Burst: Time or number of packets
Note: RFC 2544 is a single stream test
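For reference, a minimal sketch (Python) of the parameter sweep listed above; run_frame_test() is a hypothetical hook into whichever traffic generator is actually used, and the 60 s duration is an illustrative default, not part of RFC 2544.

    # Minimal sketch of the RFC 2544 parameter sweep listed above.
    # run_frame_test() is a hypothetical driver; only the test matrix is shown.

    FRAME_SIZES_BYTES = [64, 128, 256, 512, 1024, 1280, 1518]
    RATES_PCT = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]   # % of maximum rate

    def run_frame_test(frame_size, rate_pct, duration_s=60):
        """Hypothetical driver: send frames of frame_size at rate_pct of line
        rate for duration_s and return (throughput_fps, loss_pct, latency_ms)."""
        return 0.0, 0.0, 0.0   # placeholder result

    def rfc2544_sweep():
        results = []
        for size in FRAME_SIZES_BYTES:
            for rate in RATES_PCT:
                fps, loss, latency = run_frame_test(size, rate)
                results.append((size, rate, fps, loss, latency))
        return results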
The Need for TCP Standard Test Methodology
 Network providers (and NEMs) are wrestling with
end-end network complexities (queuing, VPNs,
active proxy devices, etc.)
– they desire to standardize a test methodology to validate
end-end TCP performance, as this is the precursor to
acceptable end-user application performance
 The intent behind this draft TCP throughput work is
to define a methodology for testing TCP layer
performance (in a business class, managed
network), and guidelines for expected TCP
throughput results that should be observed in the
network under test
The “Bounds” of this Draft Methodology
 TCP draft Testing Methodology is not intended to:
– definitively benchmark the TCP implementation of one OS against another,
although some users may find value in conducting qualitative
experiments
– provide detailed diagnosis of problems within end-points or the
network itself as related to non-optimal TCP performance
 TCP draft Testing Methodology is intended to:
– provide the logical next-step testing methodology so that a network
provider can test the managed network at Layer 4 (beyond the
current Layer 2/3 RFC2544 testing approach)
– provide a practical test approach that specifies the better-understood
(and end-user configurable) TCP parameters such as
Window size, MSS, # connections, and how these affect the
outcome of TCP performance over a network
– define a TCP layer test condition to validate that the end-end
network is tuned as expected (shaping, queuing, etc.)
– define the means to test end-end prioritization of services, both
stateful TCP and UDP
Step 1: Baseline TCP Throughput
 Before stateful TCP testing can begin, it is important to baseline the round trip delay and bandwidth of the network to be tested.
– These measurements provide estimates of the ideal TCP window size, which will be used in subsequent test steps (see the sketch below).
– These latency and bandwidth tests should be run long enough to characterize the performance of the network over the course of a meaningful time period.
– The goal would be to determine a representative minimum, average, and maximum RTD and bandwidth for the network under test.
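As a rough illustration, the ideal window is simply the bandwidth-delay product of the measured path. The Python sketch below shows the calculation; the example bandwidth and RTD values are illustrative only, not from the draft.

    # Ideal TCP window estimate from the Step 1 baseline measurements.
    # bandwidth_bps is the measured bottleneck bandwidth in bits/s and
    # rtt_s the measured round trip delay in seconds.

    def ideal_tcp_window_bytes(bandwidth_bps, rtt_s):
        """Bandwidth-delay product: the window needed to keep the path full."""
        return bandwidth_bps * rtt_s / 8.0

    # e.g. a 100 Mbit/s path with a 25 ms average RTD (illustrative values)
    window = ideal_tcp_window_bytes(100e6, 0.025)
    print(f"ideal window ~ {window / 1024:.0f} KB")   # ~ 305 KB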
Step 2: TCP Throughput versus MSS Size
 By varying the MSS size of the TCP connection(s), the ability of the network to sustain expected TCP throughput can be verified (see the sketch below).
– This is similar to frame and packet size techniques within RFC2544, which aim to determine the ability of the routing/switching devices to handle loads in terms of packets/frames per second at various frame and packet sizes.
 VPN technologies such as IPSEC reduce the available MSS size to lower values than the traditional maximum MSS (1460 bytes)
– PMTUD is often disabled on end hosts, since it is not always reliable (black hole routers, server routing anomalies, etc.)
– Mis-configured end-user equipment may exceed this MSS, which causes IP fragmentation and can cause mis-diagnosis of performance issues
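As a rough illustration of the expected results, the Python sketch below estimates the throughput ceiling for a few MSS values. The per-segment overhead assumes plain Ethernet + IPv4 + TCP with no options, and the window, RTT, link rate, and the IPSEC-reduced MSS of 1380 bytes are illustrative assumptions, not values from the draft.

    # Expected TCP throughput as a function of MSS (Step 2), ignoring loss.

    ETH_OVERHEAD = 18 + 20      # Ethernet header/FCS + preamble/inter-frame gap
    IP_TCP_HDRS = 20 + 20       # IPv4 + TCP headers, bytes

    def expected_tcp_throughput_bps(mss, window_bytes, rtt_s, link_bps):
        window_limit = window_bytes * 8.0 / rtt_s               # window / RTT bound
        goodput_ratio = mss / float(mss + IP_TCP_HDRS + ETH_OVERHEAD)
        link_limit = link_bps * goodput_ratio                   # line-rate bound
        return min(window_limit, link_limit)

    for mss in (1460, 1380, 536):    # traditional max, IPSEC-reduced, small MSS
        bps = expected_tcp_throughput_bps(mss, 512 * 1024, 0.025, 100e6)
        print(f"MSS {mss:5d}: ~{bps / 1e6:.1f} Mbit/s")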
Step 3: Verify Shaping, Policing, Queuing
 Default router queuing (i.e. FIFO based) is inefficient for business critical applications.
– Policing can cause TCP Tail Drop and Global Synchronization; from the user’s perspective, this condition causes significant performance degradation
 By automating end-to-end testing with several (4 or more) simultaneous TCP sessions, detect non-optimized shaping / queuing in the network (see the sketch below)
– Detect large discrepancies in throughput results between TCP sessions, which identifies potential network performance optimization with proper shaping and queuing
(Chart: throughput vs. time)
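A minimal sketch (Python) of the comparison logic: run several concurrent TCP sessions and flag large per-session throughput discrepancies. run_tcp_sessions() stands in for the actual test tool, and the 20% spread threshold and the sample results are illustrative assumptions, not from the draft.

    import statistics

    def run_tcp_sessions(n_sessions=4):
        """Hypothetical driver: return per-session throughput in Mbit/s."""
        return [24.8, 25.1, 3.9, 24.6]    # illustrative results only

    def check_fairness(throughputs, max_spread=0.20):
        mean = statistics.mean(throughputs)
        spread = (max(throughputs) - min(throughputs)) / mean
        if spread > max_spread:
            print(f"spread is {spread:.0%} of the mean: possible shaping/queuing issue")
        else:
            print(f"spread is {spread:.0%} of the mean: sessions share bandwidth fairly")

    check_fairness(run_tcp_sessions())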
Step 4: Test Prioritization with Real TCP Traffic
 Application traffic such as Citrix, Peoplesoft, etc. now requires real-time
performance to meet end-user response time expectations; there is a fine
balance between application data traffic prioritization and VoIP, Video, etc.
– Emulate bursty TCP traffic sessions (i.e. Citrix, HTTP, SMTP, etc.) with the
proper CoS and QoS values at an average throughput rate and with peaks.
– Emulate concurrent UDP sessions (i.e. VoIP G.711) with the proper CoS and
QoS values (see the sketch below)
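A minimal sketch (Python, most Unix-like systems) of marking the emulated sessions by setting the DS field on the test sockets. The AF21/EF code points chosen here (bursty application TCP vs. VoIP-like UDP) are illustrative assumptions; use whatever the service classes under test call for.

    import socket

    AF21 = 18 << 2    # DSCP 18 shifted into the ToS/DS byte
    EF = 46 << 2      # DSCP 46

    def marked_socket(sock_type, dscp_tos):
        s = socket.socket(socket.AF_INET, sock_type)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_tos)
        return s

    # e.g. an application-class TCP session and a voice-class UDP stream
    app_sock = marked_socket(socket.SOCK_STREAM, AF21)
    voice_sock = marked_socket(socket.SOCK_DGRAM, EF)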
Challenges of TCP Test Methodology
 Standardizing a TCP test methodology will be
valuable and is of high interest to the network
provider (NP) and network equipment manufacturer
(NEM)
 In contrast to RFC2544 packet-based testing, strict
“pass / fail” metrics will be much more complicated (if
not infeasible) to define
– Is it acceptable to standardize the testing procedure and
provide guidelines for metrics (expected ranges)?
– Specifying the appropriate data to be charted across the
test interval is very useful (throughput, retransmissions,
RTD); see the sketch below
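A minimal sketch (Python) of capturing those per-interval samples for later charting; sample_metrics() is a hypothetical hook into the actual measurement tool, and the file name and interval are illustrative defaults.

    import csv
    import time

    def sample_metrics():
        """Hypothetical driver: return (throughput_mbps, retransmissions, rtd_ms)."""
        return 0.0, 0, 0.0

    def record_test_interval(path="tcp_test_samples.csv", duration_s=60, step_s=1):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["t_s", "throughput_mbps", "retransmissions", "rtd_ms"])
            for t in range(0, duration_s, step_s):
                writer.writerow([t, *sample_metrics()])
                time.sleep(step_s)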