An Efficient Gigabit Ethernet Switch Model for Large Scale Simulation
An Efficient Gigabit Ethernet Switch Model for Large-Scale Simulation
Dong (Kevin) Jin
Overview and Motivation
Gigabit Ethernet
Widely used in large-scale networks with many applications
High bandwidth, low latency
Packet delay and packet loss are now mainly caused inside the switch
2
Overview and Motivation
Use simulation to study applications running
on large-scale Gigabit Ethernet
[Figure: example network with RTUs/relays, a data aggregator, and a control station connected through the RINSE simulator]
Expand the network
Explore different architectures
Need an efficient switch model in RINSE
3
Existing Switch Models
Detailed models (OPNET, OMNeT++)
Different models for different types of switches
High computational cost
Require constant updates and validation
Simple queuing models (ns-2, DETER)
Simple FIFO queue
One model for everything
Queuing models based on data collected from real switches [Roman2008] [Nohn2004]
Device-independent
Model parameters based on experimental observations
4
Model Requirements
[Figure: existing switch models and the expected model placed on accuracy vs. simulation-speed axes: detailed models (OMNeT++, OPNET) and the experiment-based queue model are slower, ns-2 is fast but less accurate, and the expected model is both fast and more accurate.]
Expected model:
Fast simulation speed
Accurate packet delay and packet loss
Device-independent
No internal details, no queue
Parameters derived from a real switch without knowing device internals
Same model derivation process for every switch
=> Black-box Switch Model
5
Model Design Approach
Perform experiments on a real switch → Build analytical model → Build RINSE model → Evaluate simulation speed and accuracy
6
Experiment
Goal
Obtain one-way delay per packet in switch
Obtain packet loss sequence
Challenge in Gigabit Environment
High bit rate - 1Gb/s
Low latency in switch - on the order of microseconds (µs)
7
Experiment Difficulties
Clock synchronization
Accurate timestamp for one way delay
Sender and receiver on the same computer
One-way delay = transmission delay + wire propagation delay + delay in switch + delay in end host (isolating the switch term is sketched below)
Software timestamp at NIC driver, µs resolution
Large delay at end hosts at high bit rate (>500Mb/s)
Have to use hardware timestamp (NetFPGA)
4 on-board Gigabit Ethernet ports
10 ns resolution
No end-host delay, processing on the card
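To make the delay decomposition above concrete, here is a minimal sketch (not the measurement code) of how the in-switch delay can be isolated from two hardware timestamps; the line rate, cable propagation figure and example numbers are assumptions for illustration, and with hardware timestamping the end-host term drops out.

```python
# Sketch: isolate the in-switch delay from two hardware timestamps.
# Assumed values (not from the slides): 1 Gb/s line rate, ~5 ns/m propagation
# over 2 m of cable; the end-host delay term is zero with hardware timestamps.

LINE_RATE_BPS = 1_000_000_000        # Gigabit Ethernet
PROP_DELAY_S = 2 * 5e-9              # assumed cable length * ~5 ns/m

def switch_delay(t_send_s, t_recv_s, frame_bytes):
    """One-way delay minus transmission and propagation = delay in the switch."""
    one_way = t_recv_s - t_send_s                    # from hardware timestamps
    transmission = frame_bytes * 8 / LINE_RATE_BPS   # serialization onto the wire
    return one_way - transmission - PROP_DELAY_S

# Example: a 100-byte frame timestamped 12.0 µs apart spends ~11.2 µs in the switch.
print(switch_delay(0.0, 12.0e-6, 100))
```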
8
Experiment Setup
[Figure: NetFPGA card with four Gigabit Ethernet ports; an input pcap is replayed through the switch and hardware timestamps (Time_2, Time_4) are recorded at ports 2 and 4.]
CBR UDP flows with varied packet size, sending rate, and number of background flows
Time_2 - Time_4 = delay per packet (see the sketch below)
Problem: capture 2000 packets without missing any at 1 Gb/s
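As an illustration of how the per-packet delays and the CBR packet spacing can be computed offline from the capture, here is a minimal sketch; the loss-free, index-aligned pairing of the two timestamp streams is an assumption, not the actual measurement tooling.

```python
# Sketch: per-packet delay from the two NetFPGA timestamp streams, assuming the
# streams are index-aligned (no packet was missed during capture), plus the
# constant-bit-rate spacing implied by a packet size and sending rate.

def per_packet_delays(ts_sent, ts_received):
    """Delay of each packet: receive-side timestamp minus send-side timestamp."""
    return [t_rx - t_tx for t_tx, t_rx in zip(ts_sent, ts_received)]

def cbr_gap_seconds(packet_bytes, rate_bps):
    """Time between consecutive packet departures for a CBR UDP flow."""
    return packet_bytes * 8 / rate_bps

# Example: 100-byte packets at 500 Mb/s leave every 1.6 µs.
print(cbr_gap_seconds(100, 500_000_000))
print(per_packet_delays([0.0, 1.6e-6], [12.0e-6, 13.5e-6]))
```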
9
Experimental Results
- Packet Delay (Low Load)
[Figure: packet delay vs. sending rate (packet size = 100 bytes)]
Single flow
Delay does NOT depend on sending rate
Sufficient processing power to handle a single flow up to 1 Gb/s
Model packet delay as a constant
10
Experimental Results
- Packet Delay (High Load)
[Figure: mean delay vs. sending rate (packet size = 100 bytes)]
3 extra non-cross-interface UDP flows, 950 Mb/s each
NetGear
Low delay with small variance
Sufficient processing power to
handle 4 flows
3COM
Uses processor-sharing scheduling and assigns each flow a weight according to its bit rate (see the sketch below)
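The 3COM behaviour can be pictured with a small weighted processor-sharing calculation; this is only a hedged illustration of the scheduling idea under assumed capacity and weights, not the switch's actual algorithm.

```python
# Sketch: weighted processor sharing. Each active flow receives a share of the
# switch's processing capacity proportional to its weight (here, its bit rate),
# so a low-rate flow competing with 950 Mb/s flows sees a much larger delay.

CAPACITY_BPS = 1_000_000_000   # assumed processing capacity

def ps_service_rate(weight, all_weights):
    """Capacity share allocated to one flow under weighted processor sharing."""
    return CAPACITY_BPS * weight / sum(all_weights)

def ps_packet_delay(packet_bytes, weight, all_weights):
    """Time to serve one packet at the flow's allocated rate."""
    return packet_bytes * 8 / ps_service_rate(weight, all_weights)

# Example: a 10 Mb/s probe flow next to three 950 Mb/s background flows.
weights = [10e6, 950e6, 950e6, 950e6]
print(ps_packet_delay(100, weights[0], weights))   # delay grows as the weight shrinks
```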
11
Experimental Results
- Packet Loss
[Figure: a sample packet loss pattern on the 3COM switch (0 = received, 1 = lost)]
Loss rate: NetGear 0.4%, 3COM 0.6%
Strong autocorrelation exists among neighboring packets
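One way to check the autocorrelation claim on a captured 0/1 loss sequence is sketched below; the sample sequence is made up for illustration and is not the measured trace.

```python
# Sketch: lag-k autocorrelation of a 0/1 loss sequence (1 = lost, 0 = received).
# Bursty losses show up as clearly positive correlation at small lags.

def autocorrelation(seq, lag):
    n = len(seq)
    mean = sum(seq) / n
    var = sum((x - mean) ** 2 for x in seq) / n
    if var == 0:
        return 0.0
    cov = sum((seq[i] - mean) * (seq[i + lag] - mean) for i in range(n - lag)) / n
    return cov / var

# Illustrative sequence with loss bursts (not real measurement data).
loss = [0] * 40 + [1, 1, 1, 1] + [0] * 40 + [1, 1, 1] + [0] * 40
print([round(autocorrelation(loss, k), 2) for k in range(1, 5)])
```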
12
Model Design Approach
Perform experiments on a real switch → Build analytical model → Build RINSE model → Evaluate simulation speed and accuracy
13
Packet Loss Model
Kth-order Markov chain
0 = received, 1 = lost
[Figure: state transition diagram over states 1, 2 and 3]
Large K => large state space
Our Model - State Space
state 1 - long burst of 0s
state 2 - short burst of 0s
state 3 – burst of 1s
Next state depends on the current state and the number of successive packets in the current state
State transition probabilities: estimated by counting how often each pattern occurs in the experimental data, and stored in a table in the simulator (see the sketch below)
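A minimal sketch of this estimate-and-replay idea follows, using the Kth-order Markov chain formulation from the top of the slide (the three burst states reduce this state space further); K and the example trace are assumed values, so this illustrates the table-driven approach rather than the exact RINSE implementation.

```python
# Sketch: a Kth-order Markov chain loss model estimated from a 0/1 trace.
# The state is the tuple of the last K outcomes; loss probabilities are obtained
# by counting pattern occurrences, stored in a table, and then sampled to
# generate a synthetic loss sequence inside the simulator.
import random
from collections import defaultdict

def build_table(trace, k):
    counts = defaultdict(lambda: [0, 0])          # state -> [received, lost]
    for i in range(k, len(trace)):
        state = tuple(trace[i - k:i])
        counts[state][trace[i]] += 1
    return {s: c[1] / (c[0] + c[1]) for s, c in counts.items()}   # P(loss | state)

def sample(table, k, n, seed=1):
    rng = random.Random(seed)
    out = [0] * k                                 # start from an all-received state
    for _ in range(n):
        p_loss = table.get(tuple(out[-k:]), 0.0)
        out.append(1 if rng.random() < p_loss else 0)
    return out[k:]

trace = [0] * 50 + [1, 1, 1] + [0] * 50 + [1, 1] + [0] * 50   # illustrative only
table = build_table(trace, k=3)
print(sum(sample(table, 3, 1000)) / 1000)         # synthetic loss rate
```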
14
Conclusion
Experimental results justified our approach as necessary:
building models based on data collected on real switches
Created a packet loss model based on
experimental data
15
Ongoing Work
Experiment
Collect long trace with Endace DAG card
Cross-interface traffic
Model
Design a complete packet delay model
Study correlation between packet loss and delay
Develop the black-box model in RINSE
Evaluation
Compare simulation speed with existing queuing models
Compare accuracy of the black-box model with real data traces and
existing packet delay/loss models
16
Thank You
17
Experimental Results
- Packet Delay (High Load)
[Figure: packet delay at the beginning of the experiment under different sending rates (Mb/s)]
3COM - Processor Sharing
No estimate of a flow's bit rate until sufficient packets have passed
Assigns the maximum weight at the beginning
As packets pass, the bit rate is determined, the weight is adjusted, and the delay settles
18
Experimental Results
- Packet Delay (Low Load)
Single flow
Delay does NOT depend on sending rate
Sufficient processing power to handle a single 1 Gb/s flow
Model packet delay as
a constant
19
Experiment Setup I
[Figure: Setup I - the host sends to itself through the switch: a traffic sender on NIC 1, a traffic receiver on NIC 2, and packet capture with timestamps taken at the NIC driver, so the measurement includes NIC-to-NIC overhead; the switch ports are numbered 1-8.]
20
RINSE - Architecture
[Figure: RINSE architecture stack - a DML configuration configures SSFNet, which enhances SSF (the simulation kernel), which implements the SSF standard/API.]
Scalable, parallel and distributed simulations
Incorporates hosts, routers, links, interfaces, protocols, etc.
Domain Modeling Language (DML)
A range of implemented network protocols
Emulation support
[Figure: protocol graph with DNP3, MODBUS, Socket, TCP, UDP, OSPF, BGP, ICMP and IPv4 over per-interface MAC/PHY layers (Interface 1 ... Interface N), plus an emulation component.]
21
RINSE - Switch Model
[Figure: Host A and Host B protocol stacks (APP, UDP, IP, Ethernet MAC, Ethernet PHY) connected through a switch whose stack places the Switch layer above its Ethernet MAC and PHY layers.]
Switch layer can implement:
black-box model (sketched below)
Simple output queue model
Flip-coin model (random delay and packet loss)
Simulation time: complex queuing model > simple output queuing model > our black-box model ≥ coin model
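To show how such a black-box layer could combine the measured parameters, here is a minimal sketch with assumed names and values (not the RINSE switch layer): a constant per-packet delay from the low-load experiments plus a pluggable loss model such as the Markov-chain table above.

```python
# Sketch: a black-box switch layer that charges every packet a constant delay
# (as measured under low load) and drops packets according to a loss model.
# CONSTANT_DELAY_S and BernoulliLoss are assumed placeholders; a Markov-chain
# table built from the measured trace would take the loss model's place.
import random

CONSTANT_DELAY_S = 12e-6       # assumed per-packet delay from the measurements

class BernoulliLoss:
    """Simplest stand-in loss model."""
    def __init__(self, p, seed=1):
        self.p, self.rng = p, random.Random(seed)
    def lose(self):
        return self.rng.random() < self.p

class BlackBoxSwitch:
    def __init__(self, loss_model, delay_s=CONSTANT_DELAY_S):
        self.loss_model = loss_model       # any object with a lose() -> bool method
        self.delay_s = delay_s
    def forward(self, arrival_time_s):
        """Return the packet's departure time, or None if it is dropped."""
        if self.loss_model.lose():
            return None
        return arrival_time_s + self.delay_s

switch = BlackBoxSwitch(BernoulliLoss(0.004))      # 0.4% loss, as on the NetGear
print(switch.forward(arrival_time_s=1.0))
```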
22
Outline
Overview and Motivation
Our Approach
Measurement
Experimental Results and Model
Conclusion and Ongoing Work
23
Our Approach
Black-Box Switch Model
Focus on packet delay and packet loss
No detailed architecture, no queues
Explore the statistical relation between data-in and data-out
Parameters derived from data collected on real switches
24
Monitor traffic on every
port
Long trace (one day)
Synchronized clock
Real ISP network
Model based on experiments, with no assumptions about traffic or device internals
25
Trace-driven Traffic Model
Basic question:
– How to introduce different traffic sources into the simulation,
while retaining the end-to-end congestion control
Trace-driven
– Problem: Rate adaptation from end-to-end congestion control
causes shaping
– Example: a connection observed on a high-speed unloaded link might still send packets at a rate much lower than what the link could sustain, because insufficient resources are available elsewhere along the path.
– Solution: Trace-driven source-level simulation is preferable to trace-driven packet-level simulation, because the data volume and the application-level pattern are NOT shaped by the network's current properties (see the sketch below).
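The distinction can be pictured with a minimal sketch: a packet-level replay reuses the recorded departure times, so the original path's shaping is baked in, whereas a source-level replay keeps only the application data units and think times and lets the simulated transport re-pace them; all record formats and values below are assumptions for illustration.

```python
# Sketch: the same recorded connection replayed two ways.
# Packet-level: (timestamp, bytes) pairs are replayed verbatim - the timing
#   carries whatever shaping the original network imposed.
# Source-level: only (adu_bytes, think_time) pairs are kept - the simulated
#   transport and its congestion control decide the actual packet timing.

packet_trace = [(0.000, 1500), (0.012, 1500), (0.024, 1500)]   # shaped timing
source_trace = [(4500, 0.0)]                                   # one 4.5 kB data unit

def replay_packet_level(trace):
    for timestamp, size in trace:
        yield timestamp, size          # timing fixed by the original capture

def replay_source_level(trace, send_fn):
    for adu_bytes, think_time in trace:
        send_fn(adu_bytes)             # the simulated sender paces the packets
        # a full simulator would wait think_time before the next data unit

for t, size in replay_packet_level(packet_trace):
    print(f"packet-level: send {size} B at t={t} s")
replay_source_level(source_trace,
                    send_fn=lambda b: print(f"source-level: hand {b} B to the simulated transport"))
```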
26