Transcript ppt - MIT

Outline
• Motivation
• Simulation Study
• Scheduled OFS
• Experimental Results
• Discussion
MIT
LIDS
Optical Flow Switching Motivation
• OFS reduces the amount of electronic processing by switching long sessions at the WDM layer
  – Lower costs, reduced delays, increased switch capacity
  – Provides specific QoS for advanced services
OFS Motivation (cont.)
[Figure: flow-size axis — electronic domain (1 KB–1 MB), optical domain (10 MB–100 MB)]
- The Internet displays a “heavy-tail” distribution of connections
- More efficient optics => more transactions in the optical domain (red line moves left)
Optical Flow Switching Study
• Short-duration optical connections
  – Access area
  – Wide area
• Network architecture issues
  – Connection setup
  – Route/wavelength assignment
  – Goal: efficient use of network resources, i.e. high throughput
• Previous work: “probabilistic” approaches
  – Difficulty: high arrival rate leads to high blocking probability
  – Problem: lack of timely network state information
• Our proposed solution: use of timing information in the network
  – Schedule connections
  – Gather timely network state information
• This demonstration
  – Demonstrate flow switching
  – Demonstrate viability of timing and scheduling connections
  – Investigate key sources of overhead
  – High efficiency
Connection Setup Investigation
• Key issue: how to learn optical resource availability?
  – Distribution problem
  – “Wavelength continuity” problem makes it worse
• Previous work
  – Addresses issues one at a time
  – Assumes perfect network state information
  – Will these results be useful for ONRAMP, WAN implementation?
• This work
  – Assesses effects of distributed network state information
  – Models some current proposals (MPλS, ASON)
Methodology
• Design distributed approaches
  – Combined routing, wavelength assignment
  – Connection setup
• Baseline flow switching architecture
  – Requested flows from user to user
  – Durations on the order of seconds
  – All-optical
• Simulate approaches on WAN topology
  – End-to-end latency (“time of flight” only)
  – Approaches: Ideal, Tell-and-Go, Reverse Reservation
• Assess performance versus idealized approach
  – Blocking probability
Ideal Approach Illustration
Assume: flow requested from A -> B
[Figure: four-node network (A, B, C, D) with λ-changers at each node and bidirectional multi-fiber links; a “tell” control packet precedes the optical flow from A to B. Panels: network infrastructure; LLR routing, connection setup]
Tell-and-Go Approach Illustration
Assume: flow requested from A -> B
[Figure: left — link-state protocol distributes available-wavelength updates (A: λ2, λ3; C: λ2, λ3, λ4; B: λ1, λ2; D: λ1, λ2, λ3); right — connection setup, where a “tell” packet on a single wavelength precedes the optical flow]
Reverse Reservation Approach Illustration
Assume: flow requested from A -> B
[Figure: left — route discovery, with information packets fanning out from A toward B; right — route and wavelength reservation, where B chooses the route and sends a reservation packet back along it]
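The reverse-reservation idea above can be sketched in a few lines — a minimal, hypothetical model (the function name and data layout are assumptions, not the actual implementation): information packets record the free wavelengths on each link toward B, and B intersects them to pick a wavelength that satisfies wavelength continuity.

```python
def choose_wavelength(per_link_free):
    """per_link_free: one set of free wavelengths per link on the discovered route.
    Returns a wavelength free on every link (wavelength continuity),
    or None if the request is blocked on this route."""
    common = set.intersection(*per_link_free)
    return min(common) if common else None

# B then sends a reservation packet back along the route carrying this choice.
```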
Simulation Description
• Results shown as blocking probability vs. traffic intensity
  – Uniform, Poisson flow traffic per node
• Fixed WAN topology
• Parameters:
  – F = number of fibers/link
  – L = number of channels/link
  – K = number of routes considered for routing decisions
  – U = update interval (seconds)
  – μ = average service rate for flows (flows/second)
  – λ = average arrival rate of flows (flows/second)
  – ρ = traffic intensity, equal to λ/μ (not the utilization factor)
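As an illustration of the setup above, a single-link version of such a simulation might look like the following — a sketch under simplifying assumptions (one link, F=1, exponential holding times with μ=1, so λ=ρ), not the actual WAN simulator:

```python
import random

def blocking_probability(rho, L=16, num_flows=50_000, seed=1):
    """Estimate blocking probability on one link with L channels under
    Poisson arrivals and exponential holding times (mu = 1)."""
    rng = random.Random(seed)
    t = 0.0
    releases = []   # departure times of flows currently holding a channel
    blocked = 0
    for _ in range(num_flows):
        t += rng.expovariate(rho)                   # next arrival
        releases = [r for r in releases if r > t]   # drop finished flows
        if len(releases) >= L:
            blocked += 1                            # no free channel: flow blocked
        else:
            releases.append(t + rng.expovariate(1.0))
    return blocked / num_flows
```

Sweeping `rho` reproduces the shape of the blocking-vs-intensity curves in the plots that follow: negligible blocking at low intensity, rising sharply as ρ approaches and exceeds L.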
Simulation Topology
[Figure: fixed WAN topology used in the simulations]
Latency-free Control Network Results (1 sec flows)
[Figure: gnuplot plots of blocking probability vs. traffic intensity. RR: F=1, L=16, K=10; TG: F=1, L=16, K=10]
Control Network With Latency Results (1 sec flows)
[Figure: gnuplot plot of blocking probability vs. traffic intensity. TG, RR: U=0.1, F=1, L=16, K=10]
Interesting Phenomenon
• Why is TG performance better than RR?
  – 1 sec flows and large ρ => small inter-arrival times, smaller than the round-trip time
  – Thus, with high probability, successive flows will see the same state (at least locally)
  – Increases chance of collision: an effect of distribution (latency)
• Why is Rand better than FF?
  – This is exactly opposite of the analytical papers’ claim
  – Combination of reasons:
      Nodes have imperfect information
      FF makes them compete for the same wavelengths (false advertisement)
  – Not seen in analysis because distribution was ignored
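The Rand-vs-FF observation can be illustrated with a toy model (purely illustrative assumptions, not the simulator): several sources share the same stale view of free wavelengths and each picks one. First-fit sends them all to the same wavelength, while random choice spreads them out.

```python
import random

def collision_rate(strategy, trials=10_000, wavelengths=16, sources=4, seed=0):
    """Fraction of trials in which at least two sources pick the same
    wavelength from an identical (stale) list of free wavelengths."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(trials):
        free = list(range(wavelengths))
        if strategy == "first-fit":
            picks = [free[0]] * sources               # all grab the first free wavelength
        else:                                         # "random"
            picks = [rng.choice(free) for _ in range(sources)]
        collisions += len(set(picks)) < sources       # any duplicate pick = collision
    return collisions / trials
```

With four sources and sixteen wavelengths, first-fit collides on every trial, while random choice collides only about a third of the time — the “false advertisement” effect in miniature.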
Scheduled OFS in ONRAMP
• Inaccurate information hurts performance
  – In this case: simple speed of light
  – Biggest problem: core network resources wasted
• Our proposal: use of timing information to schedule flows
  – Deliver network information on time to make decisions
  – Exchange flow-based information
  – Maximize utilization of core network
  – Possible small delay for user
• Issues
  – Can timing be implemented cheaply, scaled?
  – Can schedules be implemented?
  – Must make use of current/future optical devices
  – Low cost
• ONRAMP OFS
  – Demonstration of scheduled OFS in access-area network
  – One example of an implementation
Scheduling in ONRAMP
[Figure: ONRAMP ring. Access nodes #1 and #2 each pair an IP router with an OXC and its scheduler (OXC Sched); flows reach the optical layer over GigE through fixed-λ and tunable-λ transponders (transmitter X, receiver R). An intermediate node (router + OXC + OXC Sched) switches the optical flow between the access nodes.]
ONRAMP Connection Setup
• Uses timeslotting and schedules for lightpaths
• X => λi busy on output of node i at the corresponding slot
[Table: OXC schedule — rows λ1–λ4, columns Slot 1, Slot 2, Slot 3, …; an X marks a wavelength busy in that slot]
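A minimal sketch of such an OXC schedule (an assumed data layout — a set of busy (slot, wavelength) entries per node — not the ONRAMP code): a flow is admitted in the earliest slot where one wavelength is free at every node along the path.

```python
def first_free(path_schedules, wavelengths, num_slots):
    """path_schedules: per-node sets of busy (slot, wavelength) pairs.
    Returns the earliest (slot, wavelength) free at every hop, or None."""
    for slot in range(num_slots):
        for wl in wavelengths:
            if all((slot, wl) not in busy for busy in path_schedules):
                return slot, wl
    return None

def reserve(path_schedules, slot, wl):
    """Mark the chosen (slot, wavelength) busy at every node on the path."""
    for busy in path_schedules:
        busy.add((slot, wl))
```

Scanning slots in time order is what turns stale-state guessing into scheduling: instead of gambling on current availability, the setup process commits a definite future slot.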
Algorithm Timeline
Overhead: dependent on timing uncertainty
[Figure: timeline over Slot 1, Slot 2, Slot 3 — a flow whose scheduling overhead finishes in time can go in the next timeslot; otherwise it cannot]
- Overhead includes all timing uncertainty
- Efficiency of any scheduled algorithm is related to timing uncertainty and switching/electronic overheads
- Rough efficiency = flow duration / (flow duration + overhead)
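The rough-efficiency formula above, as code (a direct transcription; the example values are hypothetical):

```python
def efficiency(flow_duration, overhead):
    """Rough efficiency of a scheduled flow: useful time / total slot time."""
    return flow_duration / (flow_duration + overhead)

# Longer flows amortize a fixed per-flow overhead:
# efficiency(1.0, 0.1)  ~ 0.91
# efficiency(10.0, 0.1) ~ 0.99
```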
Utilizing Link Capacity
• Sending GigE over transparent optical channel
  – Clock rate 1.244 GHz
  – Rate 8/10 coding results in a raw bit rate of 995.2 Mb/s
• Payload capacity for UDP
  – Send MTU-sized packets (9000 bytes) to avoid fragmentation
  – Headers: Ethernet (26 bytes) + IP (20 bytes) + UDP (8 bytes) = 54 bytes
      Result: 8946 bytes of payload/packet
  – Link payload limit: 989.2288 Mb/s
• Rate-limited UDP
  – Input: desired rate
  – Timed sends of UDP packets achieve desired rates
  – Demonstrates transparency of OFS channel
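The link-capacity arithmetic above, plus a minimal rate-limited sender in the spirit of the timed-UDP scheme (a sketch with assumed names; the actual sender code is not shown in the slides):

```python
import socket
import time

# Payload arithmetic from the slide.
RAW_RATE_MBPS = 1244.0 * 8 / 10      # 8b/10b coding on a 1.244 GHz clock -> 995.2 Mb/s
MTU = 9000                           # jumbo frames, avoiding fragmentation
HEADERS = 26 + 20 + 8                # Ethernet + IP + UDP = 54 bytes
PAYLOAD = MTU - HEADERS              # 8946 bytes of payload per packet
PAYLOAD_RATE_MBPS = RAW_RATE_MBPS * PAYLOAD / MTU   # 989.2288 Mb/s

def send_rate_limited(dest, rate_mbps, payload=PAYLOAD, duration_s=1.0):
    """Pace UDP packets so the payload rate approximates rate_mbps."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = payload * 8 / (rate_mbps * 1e6)   # seconds between sends
    data = b"\x00" * payload
    next_send = time.monotonic()
    end = next_send + duration_s
    while time.monotonic() < end:
        sock.sendto(data, dest)
        next_send += interval
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)
    sock.close()
```

Pacing by absolute deadlines (`next_send += interval`) rather than sleeping a fixed amount after each send keeps the long-run rate on target even when individual sends jitter.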
Experimental Setup
• OFS implemented in lab
• One-second timeslots
  – Timing overhead negligible
• Routing/wavelength selection
  – All available wavelengths (currently 14)
  – Both directions around ring
• Gigabit Ethernet link layer
  – Flows achieve theoretical maximum link rate, ~989 Mb/s
• Rate-limited UDP
  – Unidirectional flows
  – No packet loss (100s of flows)
  – Variable rate
  – Demonstrates transparent use of optical connection
OFS Performance
[Figure: rate.eps (gnuplot) — achieved flow rates]
[Figure: percent.eps (gnuplot)]
Current Performance Limitations
[Figure: percentcorr.eps (gnuplot)]
Current Performance Limitations (cont.)
• Current overhead is 0.10 seconds
  – Efficiency for one-second flows is therefore ~90%
  – Analysis of overhead reveals possible overhead of Gigabit Ethernet frame sync
• Still under investigation
  – Switching overhead and timing uncertainty are negligible
  – I.e., scheduling is viable and efficient

Algorithm Overhead Timeline
[Figure: timeline with marks at 10 ms, 100 ms, and 150 ms; components: scheduling, switching command, receiver laser, possible GbE sync; the flow begins once the overhead completes]