Transcript bmwg-5
Content-Aware Device Benchmarking
Methodology
(draft-hamilton-bmwg-ca-bench-meth-05)
BMWG Meeting
IETF-79 Beijing
November 2010
Mike Hamilton
[email protected]
BreakingPoint Systems
Agenda
Charter Update
Now What?
Proposed Schedule
Goals Reset
Goals/Non-Goals
Outstanding Questions
Draft-05 highlights
Charter Update
New classes of network devices that operate above the IP layer of the network stack require a new methodology to perform adequate benchmarking. Existing BMWG RFCs (RFC 2647 and RFC 3511) provide useful measurement and performance statistics, though they may not reflect the actual performance of the device when deployed in production networks. Operating within the limitations of the charter, namely blackbox characterization in laboratory environments, the BMWG will develop a methodology that more closely relates the performance of these devices to performance in an operational setting. In order to confirm or identify key performance characteristics, BMWG will solicit input from operations groups such as NANOG, RIPE and APRICOT.
Now What?
•Define strategy for getting the work done
• Terminology Draft
• Methodology Draft
•Call for volunteers
• Would like to set up weekly meetings
Proposed Schedule
•December 2011 (target)
• Terminology Draft for IESG Review
• Methodology Draft for IESG Review
•December 2010
• -00 terminology draft
•January 2011
• -00 methodology draft
•March – June 2011
• Solicit WG feedback
•June 2011
• Start WGLC
Goals Reset
• Create a series of benchmark tests to MOST accurately predict
device performance under realistic conditions FOR A SPECIFIC
SIMULATED NETWORK
• RFC 2544 quotes, page 11, Section 18, “Multiple Frame Sizes”:
• “The distribution MAY approximate the conditions on the network in which the DUT would be used.”
• “The authors do not have any idea how the results of such a test would be interpreted other than to directly compare multiple DUTs in some very specific simulated network”
• (a seeded frame-size mix along these lines is sketched below)
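The RFC 2544 text above leaves the frame-size distribution up to the tester. Below is a minimal sketch of driving a test from a fixed, weighted mix for one specific simulated network, with a seeded PRNG so the “random” sequence is repeatable; the sizes, weights, and function name are illustrative assumptions, not values from the draft.

import random

# Hypothetical frame-size mix approximating one specific simulated network;
# sizes and weights are illustrative assumptions, not values from the draft.
FRAME_MIX = {64: 0.58, 576: 0.33, 1500: 0.09}

def frame_size_sequence(count, seed=0):
    """Return a repeatable sequence of frame sizes drawn from FRAME_MIX.

    Fixing the PRNG seed keeps the mix identical from run to run and from
    testbed to testbed, so comparisons across DUTs stay meaningful.
    """
    rng = random.Random(seed)
    return rng.choices(list(FRAME_MIX), weights=list(FRAME_MIX.values()), k=count)

# Same seed, same sequence on every run and on every testbed.
assert frame_size_sequence(1000, seed=7) == frame_size_sequence(1000, seed=7)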
Goals/Non-Goals
•Goals
• Repeatable Results
• Random/Repeatable
• Testbed to testbed
• Compare Multiple DUTs
•Non-Goals
• Replace RFC 2544/3511
Outstanding Questions
• Traffic Composition
• Basket?
• Methodology for selecting?
• Updatable?
• One network is different from another every day
•Security?
• Is it worth the trouble?
•Malformed Traffic
•What information is necessary to create repeatable ‘application layer’ flows? (see the sketch after this list)
• Type of flow, size of flow, etc.
•Others?
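One possible answer to the repeatable ‘application layer’ flow question is a per-flow descriptor carrying the fields the slide lists (type of flow, size of flow) plus a PRNG seed. The field names and the example basket below are illustrative assumptions, not terminology from the draft.

from dataclasses import dataclass

@dataclass(frozen=True)
class FlowDescriptor:
    """Hypothetical minimum description of a repeatable application-layer flow."""
    app_protocol: str       # type of flow, e.g. "HTTP" or "SMTP"
    total_bytes: int        # size of the flow
    transaction_count: int  # request/response exchanges within the flow
    seed: int               # PRNG seed so payload generation is repeatable

# An example traffic-composition “basket” for one specific simulated network.
BASKET = [
    FlowDescriptor("HTTP", 250_000, 12, seed=1),
    FlowDescriptor("SMTP", 40_000, 3, seed=2),
]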
Draft-05 Highlights and Reasons
• “Shell” Methodology
• More reproducible
•Added back ‘security’
• Why shy away when the application layer is already difficult?
•Maintain ‘fuzzing’ aspect
• Random but repeatable (seeded-fuzzing sketch below)
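A minimal sketch of the random-but-repeatable fuzzing idea, assuming malformed traffic is produced by flipping bits with a seeded PRNG; the function and its parameters are hypothetical, not a procedure defined in the draft.

import random

def fuzz_frame(frame: bytes, flips: int, seed: int) -> bytes:
    """Flip a few pseudo-random bits in a frame, deterministically per seed.

    The same (frame, flips, seed) always yields the same malformed frame,
    so the fuzzing step stays random yet repeatable across testbeds.
    """
    rng = random.Random(seed)
    data = bytearray(frame)
    for _ in range(flips):
        i = rng.randrange(len(data))
        data[i] ^= 1 << rng.randrange(8)
    return bytes(data)

# Same seed, same malformed frame on every run.
assert fuzz_frame(b"\x00" * 64, flips=3, seed=42) == fuzz_frame(b"\x00" * 64, flips=3, seed=42)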