End-to-End Provisioned Network Testbed for eScience

• Team:
  – Malathi Veeraraghavan, University of Virginia
  – Nagi Rao, Bill Wing, Tony Mezzacappa, ORNL
  – John Blondin, NCSU
  – Ibrahim Habib, CUNY
• NSF EIN: Experimental Infrastructure Network
• Project: Jan. 2004 – Dec. 2007
• Grant: $3.5M

Project goals

• Develop the infrastructure and networking technologies to support a broad class of eScience projects, and specifically the Terascale Supernova Initiative (TSI)
  – Optical network testbed
  – Transport protocols
  – Middleware and applications

Provide rate-controlled connectivity

[Figure: computers and storage systems interconnected by a network]

• Shouldn't a "network" be able to provide connectivity between two nodes at some requested bandwidth level?
• From the application perspective: is there value in such a network? Answer: yes!

Long way to go!

1. Network switches are available off-the-shelf with the capability to provide bandwidth-on-demand
   – Sufficient to just buy these and hook 'em together?
   – Answer: No!
2. Compare a computer to bandwidth
   – Can't stop at giving a computer to a scientist
   – Need to give the scientists implemented applications (a toolkit) to enable them to use the computer
   – Same with bandwidth
3. Implement a "socket" to enable applications to request bandwidth on demand and release it when done (see the sketch below)
   – Need to integrate this "socket" with applications
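
Such a "socket" might look something like the following minimal C sketch. All names here (bod_conn_t, bod_request, bod_release) are hypothetical illustrations of the request/use/release pattern, not the project's actual interface, and the signaling is stubbed out so the example stays self-contained:

    /* Hypothetical bandwidth-on-demand "socket" API sketch (not the
     * actual CHEETAH API).  A real implementation would signal the
     * SONET crossconnects; here the grant is simulated. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int  circuit_id;   /* handle from the signaling plane */
        long rate_mbps;    /* granted rate */
    } bod_conn_t;

    /* Ask the network for a dedicated circuit to 'dest' at 'rate_mbps';
     * returns NULL on denial so the caller can fall back to IP. */
    static bod_conn_t *bod_request(const char *dest, long rate_mbps) {
        bod_conn_t *c = malloc(sizeof *c);
        if (c == NULL) return NULL;
        c->circuit_id = 42;              /* pretend the network said yes */
        c->rate_mbps  = rate_mbps;
        printf("requested %ld Mbps to %s\n", rate_mbps, dest);
        return c;
    }

    /* Tear the circuit down and free local state. */
    static void bod_release(bod_conn_t *c) {
        printf("released circuit %d\n", c->circuit_id);
        free(c);
    }

    int main(void) {
        bod_conn_t *conn = bod_request("remote-cluster.example.org", 1000);
        if (conn == NULL) {
            fprintf(stderr, "circuit denied; fall back to the IP path\n");
            return 1;
        }
        /* ... move the data over the dedicated circuit here ... */
        bod_release(conn);
        return 0;
    }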

Long way to go (contd)!

4. Test the applications with the integrated BW-on-demand "socket" on a lab BW-on-demand network testbed
5. Finally, "take" the network wide area

Project concept

• Network:
  – CHEETAH: Circuit-switched High-Speed End-to-End Transport ArcHitecture
  – Create a network that offers end-to-end BW-on-demand service
  – Make it a PARALLEL network to existing high-speed IP networks – NOT AN ALTERNATIVE!
• Transport protocols:
  – Design to take advantage of dual end-to-end paths: the IP path and the end-to-end circuit (see the sketch below)
• TSI applications:
  – High-throughput file transfers
  – Interactive apps like remote visualization
  – Remote computational steering
  – Multipoint collaboration
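
One way a transport layer might exploit the dual paths, as a minimal sketch: attempt circuit setup first and fall back to the routed IP path if the request is denied. The function names are invented for illustration and the denial is hard-coded:

    /* Dual-path sketch: dedicated circuit if available, shared IP
     * path otherwise.  Illustrative only, not CHEETAH's code. */
    #include <stdbool.h>
    #include <stdio.h>

    /* Returns true if the signaling plane granted a circuit (stub). */
    static bool try_circuit_setup(const char *dest, long rate_mbps) {
        (void)dest; (void)rate_mbps;
        return false;                 /* pretend the circuit was denied */
    }

    static void send_over_circuit(const char *file) {
        printf("sending %s over the dedicated circuit\n", file);
    }

    static void send_over_ip(const char *file) {
        printf("sending %s over the shared IP path (TCP)\n", file);
    }

    int main(void) {
        const char *file = "supernova-timestep-0042.dat";
        if (try_circuit_setup("viz-cluster.example.edu", 1000))
            send_over_circuit(file);  /* rate-based protocol on circuit */
        else
            send_over_ip(file);       /* ordinary TCP on the IP path */
        return 0;
    }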

Network specifics

• Circuit: high-speed Ethernet mapped to an Ethernet-over-SONET circuit
• Leverage existing strengths:
  – 100Mbps/1Gbps Ethernet in LANs
  – SONET in MANs/WANs
  – Availability of Multi-Service Provisioning Platforms (MSPPs)
    • Can map Ethernet to Ethernet-over-SONET (see the sizing sketch below)
    • Can be crossconnected dynamically
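
As a rough sizing aid (my arithmetic, not a figure from the talk): an MSPP maps Ethernet onto concatenated STS-1 (OC-1) channels, each carrying roughly 49.5 Mbps of payload, so Gigabit Ethernet needs about 21 of them, consistent with the GbE/21OC1 circuit mentioned later in this deck:

    /* Back-of-the-envelope sizing for Ethernet-over-SONET circuits.
     * Assumes ~49.5 Mbps of usable payload per STS-1 (OC-1); the
     * exact figure depends on encapsulation overhead. */
    #include <math.h>
    #include <stdio.h>

    static int sts1_needed(double rate_mbps) {
        const double sts1_payload_mbps = 49.5;
        return (int)ceil(rate_mbps / sts1_payload_mbps);
    }

    int main(void) {
        printf("100 Mbps Ethernet -> %d STS-1s\n", sts1_needed(100.0));  /* 3 */
        printf("1 Gbps Ethernet   -> %d STS-1s\n", sts1_needed(1000.0)); /* 21 */
        return 0;
    }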

Dynamic circuit sharing

[Figure: PCs 1-4 attached to MSPPs, connected through SONET crossconnects (XCs) with UNI-N/NNI signaling; an Ethernet/EoS circuit is set up when a PC requests bandwidth, in parallel with the Internet path]

• Parallel circuit-based testbed
• Steps (sketched below):
  – Route lookup
  – Resource availability checking and allocation
  – Program switch fabric for the crossconnection
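
A minimal sketch of those three steps as one signaling-controller routine. The tables, capacities, and function names are invented scaffolding; a real controller would speak UNI/NNI signaling to the SONET crossconnects:

    /* Hypothetical circuit-setup controller: route lookup, resource
     * check/allocation, then fabric programming. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_LINKS 4

    /* Free STS-1 slots per link (assumed initial state). */
    static int free_sts1[NUM_LINKS] = {28, 24, 21, 28};

    /* Step 1: pick the links toward this destination (stubbed). */
    static int route_lookup(const char *dest, int links[]) {
        (void)dest;
        links[0] = 0; links[1] = 2;     /* pretend: a two-hop route */
        return 2;
    }

    /* Step 2: reserve capacity on every hop, rolling back on failure. */
    static bool allocate(const int links[], int n, int sts1s) {
        for (int i = 0; i < n; i++) {
            if (free_sts1[links[i]] < sts1s) {
                for (int j = 0; j < i; j++)
                    free_sts1[links[j]] += sts1s;   /* roll back */
                return false;
            }
            free_sts1[links[i]] -= sts1s;
        }
        return true;
    }

    /* Step 3: program the crossconnect fabric (stubbed as a print). */
    static void program_fabric(const int links[], int n, int sts1s) {
        for (int i = 0; i < n; i++)
            printf("XC: map %d STS-1s onto link %d\n", sts1s, links[i]);
    }

    int main(void) {
        int links[NUM_LINKS];
        int n = route_lookup("pc3.example.net", links);
        if (allocate(links, n, 21))       /* 21 STS-1s ~ one GbE */
            program_fabric(links, n, 21);
        else
            printf("setup denied: insufficient capacity\n");
        return 0;
    }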

TSI application

• Construct local visualization environment
  – Added 6 cluster nodes, expanded RAID to 1.7TB
  – Installed dedicated server for network monitoring
  – Began constructing visualization cluster
  – Wrote software to distribute data on cluster
• Supernova Science
  – Generated TB data set on Cray X1 @ ORNL
  – Tested ORNL/NCSU collaborative visualization session

John M. Blondin
[email protected]

LAN and WAN testing

[Figure: the ORNL site (operational April 1) and the NC State site (operational March 1), with a 27-tile display wall, a 6-panel LCD display, an SGI Altix, and a Linux cluster; the same 1TB supernova model sits on disk at both NCSU and ORNL]

Currently testing viz on the Altix + cluster using single-screen graphics

John M. Blondin
[email protected]

Applications that we will upgrade for the TSI project

• To enable scientists to enjoy the rate-controlled connectivity of the CHEETAH network:
  – GridFTP
  – Viz. tool: Ensight or Aspect/Paraview

Transport protocol

• File transfers
  – Tested various rate-based transport solutions:
    • SABUL, UDT, Tsunami, RBUDP
    • Two Dell 2.4GHz PCs with 100MHz 64-bit PCI buses
      – Connected directly to each other via a GbE link
        » Emulates a dedicated GbE-EoS-GbE link
      – Disk bottleneck: IDE 7200 rpm disks
  – Why rate-based (see the sketch below):
    • Not for congestion control: not needed after the circuit is set up
    • Instead, for flow control
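
The rate-based idea can be sketched generically: pace UDP sends with a fixed inter-packet gap so the offered rate never exceeds what the receiver can drain, instead of probing for bandwidth as TCP does. This illustrative C sender omits the acknowledgement and loss-recovery machinery that SABUL, UDT, Tsunami, and RBUDP add; the address, port, and 400 Mbps rate are placeholders:

    /* Generic rate-paced UDP sender: enforce a fixed inter-packet gap
     * so the offered rate never exceeds rate_mbps.  Real tools (SABUL,
     * UDT, Tsunami, RBUDP) add acknowledgements and loss recovery. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define PAYLOAD 1400                /* stay under a 1500-byte MTU */

    int main(void) {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(5001);                    /* placeholder port */
        inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr); /* placeholder addr */

        double rate_mbps = 400.0;   /* keep below the receiver's drain rate */
        /* inter-packet gap in ns = packet bits * 1000 / rate in Mbps */
        struct timespec gap = { 0, (long)(PAYLOAD * 8 * 1000.0 / rate_mbps) };

        char buf[PAYLOAD];
        memset(buf, 0, sizeof buf);
        for (int i = 0; i < 100000; i++) {
            sendto(s, buf, sizeof buf, 0,
                   (struct sockaddr *)&dst, sizeof dst);
            nanosleep(&gap, NULL);  /* fixed pacing = fixed offered rate */
        }
        close(s);
        return 0;
    }

In practice a per-packet nanosleep is too coarse at gigabit rates, which is why the real tools pace blocks of packets; the structure, not the timer, is the point here.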

Rate-based flow control

• Receive-buffer overflows: a necessary evil (see the arithmetic below)
  – Play it safe and set a low rate: avoid/eliminate receive-buffer losses
  – Or send data at higher rates, but then have to recover from losses

[Plot omitted; experiment parameters: MTU = 1500B, UDP buffer size = 256KB, SABUL data block size = 7.34MB]
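
To see why receive-buffer overflow is such a knife's edge, here is the arithmetic with the parameters above (the 400 Mbps drain-rate deficit is an assumed illustration):

    /* How long does a 256 KB UDP receive buffer survive when the
     * sender outpaces the receiving application?  Buffer size and MTU
     * follow the slide; the 400 Mbps deficit is assumed. */
    #include <stdio.h>

    int main(void) {
        double buf_bits = 256.0 * 1024 * 8;  /* 256 KB in bits */
        double deficit  = 400e6;             /* send minus drain rate, b/s */
        printf("buffer holds ~%.0f 1500-byte packets\n",
               256.0 * 1024 / 1500);         /* ~175 */
        printf("buffer overflows after ~%.1f ms\n",
               buf_bits / deficit * 1000);   /* ~5.2 ms */
        return 0;
    }

A few milliseconds of rate mismatch is all it takes, which is why the choice is between a conservative rate and loss recovery.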

Oak Ridge National Laboratory
PIs: Nageswara S. V. Rao, Anthony Mezzacappa, William R. Wing

• Overall project task: develop protocols and application interfaces for the interactive visualization and computational steering tasks of the TSI eScience application
• On-going activities:
  – Stabilization protocols for visualization control streams
    • Developed and tested stochastic approximation methods for implementing stable application-to-application streams (see the sketch below)
  – Modularization and channel separation framework for visualization
    • Developed an architecture for decomposing the visualization pipeline into modules, measuring effective bandwidths, and mapping them onto the network
  – Dedicated channel testbed
    • Set up dedicated ORNL-ATL-ORNL GigE-SONET channel (20Mbps on the ORNL-GaTech IP connection)
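
As a generic illustration of the stochastic-approximation technique named above (not ORNL's actual protocol), a Robbins-Monro style controller adjusts the offered rate from noisy goodput measurements with a decaying gain, so the stream settles at a stable rate instead of oscillating. The target, capacity, and noise model are assumptions:

    /* Robbins-Monro sketch: rate <- rate + (a0/k) * (target - measured).
     * The decaying gain averages out measurement noise so the rate
     * converges instead of hunting. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Noisy goodput measurement at offered rate r (stubbed model). */
    static double measure_goodput(double r) {
        double capacity = 480.0;                   /* assumed, Mbps */
        double noise = (rand() % 101 - 50) / 10.0; /* +/- 5 Mbps */
        return (r < capacity ? r : capacity) + noise;
    }

    int main(void) {
        double target = 450.0;  /* desired stable stream rate, Mbps */
        double rate   = 100.0;  /* initial offered rate */
        for (int k = 1; k <= 40; k++) {
            double err  = target - measure_goodput(rate);
            double gain = 0.9 / k;       /* decaying step size */
            rate += gain * err;
            printf("step %2d: rate = %6.1f Mbps\n", k, rate);
        }
        return 0;
    }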

Planned activities

• Control channel protocols for interactive visualization and computation
  – Develop and test stochastic approximation methods on dedicated channels
• Implementation of modularized and channel-separated visualization
  – Automatic decomposition and mapping of the visualization pipeline onto dedicated channels
  – Integrate control channel modules
  – Develop rate controllers to avoid channel overflows
• Dedicated ORNL-ATL-ORNL connection over GigE-SONET channel
  – Test protocols and visualizations
• Integration with TSI visualization application
  – ORNL/NCSU visualization clusters

[Figure: the visualization pipeline will be automatically distributed among the channels and nodes; separate visualization channel and control channel]

Taking it wide-area

• Three possible approaches:
  – Collocate high-speed circuit switches at POPs and lease circuits from a commercial service provider or NLR
  – Use MPLS tunnels through Abilene
  – Collocate switches at Abilene POPs and share router links – after thorough testing

Router-to-router link leverage: UVA/CUNY

[Figure: Abilene backbone with ATLA and WASH POPs joined by an OC192; OC48 links toward NCSU/NCNI, SOX, and ORNL; Cisco 12008 routers exchange setup and release requests for a GbE/21OC1 circuit (1Gbps); annotation: "Rate limit to 27 OC1s"]

• UVA equipment award from Internet2/Cisco
  – Two Cisco 12008 (high-end) routers
• Solutions:
  – Link bundling
  – Rate limiting (see the sketch below)
• Question: impact of such link rate reductions on ongoing TCP flows
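
Rate limiting can be sketched as a token bucket: when the GbE/21OC1 circuit is carved out of an OC48, policing the remaining traffic to 27 OC1s (27 x ~51.84 Mbps, roughly 1.4 Gbps) is the operation in question. This is a generic policer, not Cisco 12008 configuration; the burst depth and offered load are assumptions:

    /* Token-bucket policer sketch: admit a packet only if enough
     * tokens have accumulated at the permitted rate. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        double rate_bps;    /* token fill rate = permitted bandwidth */
        double burst_bits;  /* bucket depth */
        double tokens;      /* current level */
        double last_t;      /* time of last update, seconds */
    } bucket_t;

    static bool admit(bucket_t *b, double now, int pkt_bits) {
        b->tokens += (now - b->last_t) * b->rate_bps;    /* refill */
        if (b->tokens > b->burst_bits) b->tokens = b->burst_bits;
        b->last_t = now;
        if (b->tokens >= pkt_bits) { b->tokens -= pkt_bits; return true; }
        return false;                                    /* drop or queue */
    }

    int main(void) {
        /* 27 OC-1s at ~51.84 Mbps each ~ 1.4 Gbps permitted;
         * burst depth ~100 MTU-sized packets (assumed). */
        bucket_t b = { 27 * 51.84e6, 1.2e6, 1.2e6, 0.0 };
        int admitted = 0;
        /* offer 2 Gbps for ~10 ms: a 1500-byte packet every 6 us */
        for (int i = 0; i < 1667; i++)
            if (admit(&b, i * 6e-6, 1500 * 8)) admitted++;
        printf("admitted %d of 1667 packets (~%.0f%%)\n",
               admitted, 100.0 * admitted / 1667);
        return 0;
    }

The drop fraction is what an ongoing TCP flow would perceive as congestion, which is exactly the impact the question above asks about.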

Summary

• Implement the piece parts needed for TSI scientists to take advantage of rate-guaranteed channels
• Demonstrate these modified applications on a local-area, dynamically shared, high-speed circuit-switched network
• Take it to the wide area