In VINI Veritas - Georgia Institute of Technology


Customizable, Fast, Virtual
Network Testbeds on Commodity
Hardware
Nick Feamster
Georgia Tech
Murtaza Motiwala, Yogesh Mundada, Vytautas Valancius,
Andy Bavier, Mark Huang, Larry Peterson, Jennifer Rexford
VINI Overview
Bridge the gap between “lab experiments” and live experiments at scale.

[spectrum: Simulation → Emulation → Small-scale experiment → VINI (?) → Live deployment]

VINI:
• Runs real routing software
• Exposes realistic network conditions
• Gives control over network events
• Carries traffic on behalf of real users
• Is shared among many experiments
Goal: Control and Realism
• Control
  – Reproduce results
  – Methodically change or relax constraints
• Realism
  – Long-running services attract real users
  – Connectivity to real Internet
  – Forward high traffic volumes (Gb/s)
  – Handle unexpected events

                  Control                    Realism
Topology          Arbitrary, emulated        Actual network
Traffic           Synthetic or traces        Real clients, servers
Network events    Inject faults, anomalies   Observed in operational network
Overview
• VINI characteristics
  – Fixed, shared infrastructure
  – Flexible network topology
  – Expose/inject network events
  – External connectivity and routing adjacencies
• PL-VINI: prototype on PlanetLab
• Preliminary Experiments
• Ongoing work
Fixed Infrastructure
Shared Infrastructure
Arbitrary Virtual Topologies
Exposing and Injecting Failures
Carry Traffic for Real End Users
[diagram: clients (c) and servers (s) exchange traffic across the virtual network]

Participate in Internet Routing
[diagram: BGP sessions connect VINI nodes to neighboring networks, with clients (c) and servers (s) attached]
PL-VINI: Prototype on PlanetLab
• First experiment: Internet In A Slice
– XORP open-source routing protocol suite (NSDI ’05)
– Click modular router (TOCS ’00, SOSP ’99)
• Clarify issues that VINI must address
  – Unmodified routing software on a virtual topology
  – Forwarding packets at line speed
  – Illusion of dedicated hardware
  – Injection of faults and other events
PL-VINI: Prototype on PlanetLab
• PlanetLab: testbed for planetary-scale services
• Simultaneous experiments in separate VMs
– Each has “root” in its own VM, can customize
• Can reserve CPU, network capacity per VM
[diagram: a PlanetLab node runs a Virtual Machine Monitor (“Linux++”) hosting a Node Manager, a Local Admin VM, and experiment VMs VM1 … VMn]
Version 2: Trellis
Design Decisions
• Container-based virtualization
• Terminate tunnels with Ethernet GRE
• Terminate tunnels in the root context
• Use a “shortbridge” for point-to-point links
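The Ethernet-GRE (EGRE) tunnels above can be illustrated with a short sketch. This is not Trellis code; it just shows the keyed GRE framing (RFC 2890) that carries a full Ethernet frame between nodes, with the key demultiplexing frames to the right virtual link:

```python
import struct

# GRE constants (RFC 2784/2890); 0x6558 is the EtherType for
# "transparent Ethernet bridging", i.e., a full Ethernet frame as payload.
GRE_PROTO_TEB = 0x6558
GRE_FLAG_KEY = 0x2000  # key-present bit in the GRE flags field


def egre_encapsulate(frame: bytes, key: int) -> bytes:
    """Wrap an Ethernet frame in a keyed GRE header.

    The 32-bit key identifies the virtual link, so many virtual
    topologies can share one pair of physical nodes.
    """
    return struct.pack("!HHI", GRE_FLAG_KEY, GRE_PROTO_TEB, key) + frame


def egre_decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Strip the GRE header, returning (key, inner Ethernet frame)."""
    flags, proto, key = struct.unpack("!HHI", packet[:8])
    if not (flags & GRE_FLAG_KEY) or proto != GRE_PROTO_TEB:
        raise ValueError("not a keyed EGRE packet")
    return key, packet[8:]
```

The 8-byte header (flags, protocol type, key) matches the keyed-GRE layout; the real system terminates these tunnels in the kernel of the root context, not in user space.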
Performance Evaluation
• Packet forwarding rates comparable to directly
terminating tunnels within the container.
Challenges and Next Steps
• Could have run experiments on Emulab
• Goal: Operate our own virtual network
– Want customizable packet forwarding
– Must do this without compromising speed
– We can tinker with routing protocols
• Goal: Attract real users
– Require external connectivity (BGP Multiplexing)
Conclusion
• VINI: Controlled, Realistic Experimentation
• Installing VINI nodes in NLR, Abilene
• Download and run Internet In A Slice
http://www.vini-veritas.net/
XORP: Control Plane
[diagram: XORP (routing protocols)]
• BGP, OSPF, RIP, PIM-SM, IGMP/MLD
• Goal: run real routing protocols on virtual network topologies
User-Mode Linux: Environment
[diagram: XORP (routing protocols) inside UML, with virtual interfaces eth0–eth3]
• Interface ≈ network
• PlanetLab limitation: a slice cannot create new interfaces
• Run routing software in a UML environment
• Create virtual network interfaces in UML
Click: Data Plane
[diagram: control plane (XORP in UML, eth0–eth3) linked through a UmlSwitch element to the Click packet forwarding engine, which holds the tunnel table and filters on the data path]
• Performance
  – Avoid UML overhead
  – Move to kernel, FPGA
• Interfaces ↔ tunnels
  – Click UDP tunnels correspond to UML network interfaces
• Filters
  – “Fail a link” by blocking packets at tunnel
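The fail-a-link mechanism can be sketched in a few lines. This is a toy stand-in, not the actual Click filter element: “failing” a link simply means silently dropping its packets, so the routing software above observes what looks like a real dead link:

```python
class TunnelFilter:
    """Toy model of the per-tunnel filters in the Click data plane.

    Illustrative only; in PL-VINI the real mechanism is a filter
    element on the Click tunnel, not Python code.
    """

    def __init__(self) -> None:
        self.failed: set[str] = set()

    def fail_link(self, link: str) -> None:
        """Inject a failure: packets on this link start vanishing."""
        self.failed.add(link)

    def restore_link(self, link: str) -> None:
        """Repair the link; forwarding resumes."""
        self.failed.discard(link)

    def forward(self, link: str, packet: bytes):
        # Drop packets for failed links; otherwise pass them through.
        return None if link in self.failed else packet
```

Because packets are dropped rather than the tunnel being torn down, the routing protocols must detect the failure themselves (e.g., via OSPF hello timeouts), which is exactly the behavior an experiment wants to study.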
Intra-domain Route Changes
[diagram: backbone topology annotated with IGP link weights (856, 2095, 700, …); client c and server s at the edges]
Ping During Link Failure
[figure: ping RTT (ms) over 50 seconds; annotations mark “link down,” “routes converging,” and “link up”]
Close-Up of TCP Transfer
PL-VINI enables a user-space virtual network to behave like a real network on PlanetLab
[figure: close-up of megabytes in stream vs. time (seconds), per-packet arrivals; annotations mark a retransmitted lost packet followed by slow start]
TCP Throughput
[figure: megabytes transferred vs. time (seconds), per-packet arrivals; annotations mark “link down,” “link up,” and the zoomed-in region shown in the close-up]
Ongoing Work
• Improving realism
– Exposing network failures and changes in the
underlying topology
– Participating in routing with neighboring networks
• Improving control
– Better isolation
– Experiment specification
Resource Isolation
• Issue: forwarding packets in user space
  – PlanetLab sees heavy use
  – CPU load affects virtual network performance

Property      Depends On              Solution
Throughput    CPU% received           PlanetLab provides CPU reservations
Latency       CPU scheduling delay    PL-VINI: boost priority of packet forwarding process
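The latency fix in the table can be sketched with the POSIX priority API. This is a simplification of what PL-VINI actually does inside the PlanetLab scheduler; note that a boost (negative niceness) requires root or CAP_SYS_NICE:

```python
import os


def set_forwarder_priority(pid: int, niceness: int) -> None:
    """Adjust the niceness of the packet-forwarding process.

    PL-VINI boosts the Click forwarder's scheduling priority so CPU
    contention on a busy PlanetLab node does not inflate packet latency.
    Unprivileged processes may only *raise* niceness (lower priority);
    an actual boost needs root/CAP_SYS_NICE.
    """
    os.setpriority(os.PRIO_PROCESS, pid, niceness)
```

Throughput, by contrast, is handled by the PlanetLab CPU reservation mechanism rather than by priorities.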
Performance is bad
• User-space Click: ~200 Mb/s forwarding
• VINI should use Xen
Experimental Results
• Is a VINI feasible?
  – Click in user space: ~200 Mb/s forwarded
  – Latency and jitter comparable between the native network and IIAS on PL-VINI
Low latency for everyone?
• PL-VINI provided IIAS with low latency by giving it high CPU scheduling priority
Internet In A Slice
[diagram: overlay network of VINI nodes connecting clients (C) and servers (S)]
XORP
• Run OSPF
• Configure FIB
Click
• FIB
• Tunnels
• Inject faults
OpenVPN & NAT
• Connect clients and servers
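The FIB that XORP computes and Click consults can be modeled with a minimal longest-prefix-match table. This is illustrative only; in IIAS the FIB lives inside Click and is programmed by XORP's OSPF, not by Python:

```python
import ipaddress


class Fib:
    """Minimal longest-prefix-match forwarding table.

    Maps destination prefixes to next-hop tunnels, the way the
    Click FIB maps destinations to UDP tunnels in IIAS.
    """

    def __init__(self) -> None:
        self.routes: dict[ipaddress.IPv4Network, str] = {}

    def add_route(self, prefix: str, next_hop: str) -> None:
        """Install a route, e.g. from the routing protocol."""
        self.routes[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, dst: str):
        """Return the next hop for the longest matching prefix, or None."""
        addr = ipaddress.ip_address(dst)
        best = None
        for prefix, nh in self.routes.items():
            if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
                best = (prefix, nh)
        return best[1] if best else None
```

A linear scan is fine for a sketch; a real data plane would use a trie or similar structure to sustain line-rate lookups.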
PL-VINI / IIAS Router
[diagram: XORP in UML with interfaces eth0–eth3 and a UmlSwitch; Click below with a UmlSwitch element, FIB, encapsulation table, and tap0; control and data paths are distinguished]
• Blue: topology
  – Virtual net devices
  – Tunnels
• Red: routing and forwarding
  – Data traffic does not enter UML
• Green: enter & exit IIAS overlay
PL-VINI Summary

Flexible Network Topology
• Virtual point-to-point connectivity → Tunnels in Click
• Unique interfaces per experiment → Virtual network devices in UML
• Exposure of topology changes → Upcalls of layer-3 alarms

Flexible Routing and Forwarding
• Per-node forwarding table → Separate Click per virtual node
• Per-node routing process → Separate XORP per virtual node

Connectivity to External Hosts
• End-hosts can direct traffic through VINI → Connect to OpenVPN server
• Return traffic flows through VINI → NAT in Click on egress node

Support for Simultaneous Experiments
• Isolation between experiments → PlanetLab VMs and network isolation; CPU reservations and priorities
• Distinct external routing adjacencies → BGP multiplexer for external sessions
PL-VINI / IIAS Router
[diagram: XORP (routing protocols) in UML with eth0–eth3; Click packet forwarding engine with a UmlSwitch element and tunnel table; control and data paths shown]
• XORP: control plane
• UML: environment
  – Virtual interfaces
• Click: data plane
  – Performance: avoid UML overhead; move to kernel, FPGA
  – Interfaces ↔ tunnels
  – “Fail a link”