
Next-Generation Network Research Facilities
Jennifer Rexford
Princeton University
http://www.cs.princeton.edu/~jrex
1
Outline
• Networking research challenges
– Security, economic incentives, management, layer-2 technologies
• Importance of building and deploying
– Bridging the gap between simulation/testbeds and real deployment
• Global Environment for Network Innovations (GENI)
– Major NSF initiative to support experimental networking research
– Key ideas: virtualization, programmability, and user opt-in
• GENI backbone design
– Programmable routers, flexible optics, and connection to Internet
– Example experiments highlighting the capabilities
• Virtual Network Infrastructure (VINI)
– Initial experimental network facility in NLR and Abilene
• Conclusions
2
Is the Internet broken?
• It is great at what it does.
– Everyone should be proud of this.
– All sorts of things can be built on top of it.
• But…
– Security is weak and not getting better.
– Availability continues to be a challenge.
– It is hard to manage and getting harder.
– It does not handle mobility well.
– A long list, once you start…
3
Challenges Facing the Internet
• Security and robustness
– Naming and identity
– Availability
• Economic incentives
– Difficulty of providing end-to-end services
– Commoditization of the Internet infrastructure
• Network management
– No framework in the original Internet design
– Tuning, troubleshooting, accountability, …
• Interacting with underlying network technologies
– Advanced optics: dynamic capacity allocation
– Wireless: mobility, dynamic impairments
– Sensors: small embedded devices at large scale
4
FIND: Future Internet Design
• NSF research initiative
– What are the requirements for the global network of 10-15 years out?
– How would we re-conceive the network if we could design it from scratch?
• Conceive the future, by letting go of the present:
– This is not change for the sake of change
– Rather, it is a chance to free our minds
– Figuring out where to go, and then how to get there
• Perhaps a header format is not the defining piece of
a new architecture
– Definition and placement of functionality
– Not just data plane, but also control and management
– And division between end hosts and the network
5
The Importance of Building
• Systems-oriented computer science research
needs to build and try out its ideas to be effective
– Paper designs are just idle speculation
– Simulation is only occasionally a substitute
• We need:
– Real implementation
– Real experience
– Real network conditions
– Real users
– To live in the future
6
Need for Experimental Facility
Goal: Seamless conception-to-deployment process
[Figure: a cycle from analysis (models) to simulation/emulation (code) to experiments at scale (results, measurements) and on to deployment]
7
Existing Tools
• Simulators
– ns
• Emulators
– Emulab
– WAIL
• Wireless testbeds
– ORBIT
– Emulab
• Wide-area testbeds
– PlanetLab
– RON
– X-Bone
– DETER
8
Today’s Tools Have Limitations
• Simulation based on simple models
– Topologies, administrative policies, workloads, failures…
• Emulation (and “in lab” tests) are similarly limited
– Only as good as the models
• Traditional testbeds are targeted
– Not cost-effective to test every good idea
– Often of limited reach
– Often with limited programmability
• Testbed dilemma
– Production network: real users, but hard to make changes
– Research testbed: easy to make changes, but no users
9
Bridging the Chasm
[Figure: maturity versus time, with foundational research, simulation and research prototypes, and small-scale testbeds on one side and a deployed future Internet on the other; the global experimental facility bridges the chasm between them, a chasm that is a major barrier to realizing the future designs]
10
Goals for the Experimental Facility
• Broader impact
– Positive influence on the design of the future Internet
– Network that is more secure, reliable, efficient, manageable, usable
• Intellectual progress
– Network science
• Experimentally answer questions about complex systems
• Better understanding of dynamics, stability, evolvability, etc.
– Network architecture
• Evaluate and compare alternative architectural structures
• Reconcile the contradictory goals an architecture must meet
– Network engineering
• Evaluate engineering trade-offs in a controlled, realistic setting
• Test theories of how different elements might be designed
11
GENI
• Experimental facility
– MREFC proposal to build a large-scale facility
– Jointly from NSF’s CISE directorate and the research community
– We are currently at the “Conceptual Design” stage
– Will eventually require Congressional approval
• Global Environment for Network Innovations
– Prototyping new architectures
– Realistic evaluation
– Controlled evaluation
– Shared facility
– Connecting to real users
– Enabling new services
See http://www.geni.net
12
Three Key Ideas in GENI
• Virtualization
– Multiple architectures on a shared facility
– Amortizes the cost of building the facility
– Enables long-running experiments and services
• Programmable
– Enable prototyping and evaluation of new architectures
– Enable a revisiting of today’s “layers”
• Opt-in on a per-user / per-application basis
– Attract real users
• Demand drives deployment / adoption
– Connect to the Internet
• To reach users, and to connect to existing services
13
Slices
14
Slices
15
User Opt-in
[Figure: a client opts in by directing its traffic through a proxy running in GENI on the way to a server]
16
Realizing the Ideas
• Slices embedded in a substrate of resources
– Physical network substrate
• Expandable collection of building block components
• Nodes / links / subnets
– Software management framework
• Knits building blocks together into a coherent facility
• Embeds slices in the physical substrate
• Builds on ideas in past systems
– PlanetLab, Emulab, ORBIT, X-Bone, …
17
National Fiber Facility
18
+ Programmable Routers
19
+ Clusters at Edge Sites
20
+ Wireless Subnets
21
+ ISP Peers
ISP 2
ISP 1
22
Closer Look
[Figure: a closer look at one site: a dynamically configurable backbone switch on a backbone wavelength, a customizable router, a connection to the Internet, and an edge site with a wireless subnet and a sensor network]
23
GENI Substrate: Summary
• Node components
– Edge devices
– Customizable routers
– Optical switches
• Bandwidth
– National fiber facility
– Tail circuits
• Wireless subnets
– Urban 802.11
– Wide-area 3G/WiMax
– Cognitive radio
– Sensor net
– Emulation
24
GENI Management Core
[Figure: the GMC sits between the management services and the substrate components; it provides a name space for users, slices, and components, a set of interfaces (“plug in” new components), and support for federation (“plug in” new partners)]
25
Hardware Components
[Figure: the substrate hardware components]
26
Virtualization Software
[Figure: a virtualization software layer running on each piece of substrate hardware]
27
Component Manager
[Figure: a component manager (CM) on each node, running over the virtualization software and the substrate hardware]
28
GENI Management Core (GMC)
[Figure: the GMC, comprising a slice manager and a resource controller that operate on a slice_spec (an object hierarchy) plus an auditing archive, performs node control and gathers sensor data through the component managers (CM), which run on the virtualization software and substrate hardware]
29
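The slice_spec in the figure above is described only as an object hierarchy. Purely as an illustration (all field names here are assumptions, not GENI's actual format), a slice request might be expressed along these lines in Python:

from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a slice specification (object hierarchy);
# the real slice_spec format is not defined in this talk.

@dataclass
class NodeSpec:
    site: str            # e.g., a backbone PoP or edge site
    cpu_share: float     # fraction of the node's CPU
    memory_mb: int
    image: str           # software to install in the virtual node

@dataclass
class LinkSpec:
    endpoints: tuple     # (site_a, site_b)
    bandwidth_mbps: int  # dedicated bandwidth for the virtual link

@dataclass
class SliceSpec:
    name: str
    users: List[str]     # responsible users, for auditing
    nodes: List[NodeSpec] = field(default_factory=list)
    links: List[LinkSpec] = field(default_factory=list)

spec = SliceSpec(
    name="example-slice",
    users=["researcher@example.edu"],
    nodes=[NodeSpec("chicago", 0.25, 512, "custom-router.img")],
    links=[LinkSpec(("chicago", "seattle"), 100)],
)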
Federation
[Figure: multiple GMCs federated with one another]
30
User Front-End(s)
[Figure: a GUI front-end backed by a set of management services (a provisioning service, a file & naming service, and an information plane), layered over the federated GMCs]
31
Virtualization in GENI
• Multiple levels possible
– Different level required by different experiments
– Different level depending on the technology
• Example “base cases”
– Virtual server (socket interface / overlay tunnels)
– Virtual router (virtual line card / static circuits)
– Virtual switch (virtual control interface / dynamic circuits)
– Virtual AP (virtual MAC / fixed spectrum allocation)
• Specialization
– The ability to install software in your own virtual-*
32
Distributed Services in GENI
• Goals
– Complete the GENI management story
– Lower the barrier-to-entry for researchers (students)
• Example focus areas
– Provisioning (slice embedder)
– Security
– Information plane
– Resource allocation
– Files and naming
– Topology discovery
– Development tools
– Interfacing with the Internet, and IP
33
GENI Security
• Limits placed on a slice’s “reach”
– Restricted to slice and GENI components
– Restricted to GENI sites
– Allowed to compose with other slices
– Allowed to interoperate with legacy Internet
• Limits on resources consumed by slices
– Cycles, bandwidth, disk, memory
– Rate of particular packet types, unique addresses per second
• Mistakes (and abuse) will still happen
– Auditing will be essential
– Map network activity to a slice, and the slice to its responsible user(s)
34
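As a rough sketch of the per-slice limits and auditing described above (the mechanism and all names are assumptions, not GENI's design), rate-limiting a particular packet type and mapping activity back to a slice's responsible users could look like:

import time
from collections import defaultdict

# Sketch only: a per-slice token bucket to cap the rate of one packet type,
# plus an audit record mapping a slice to its responsible users.

class TokenBucket:
    def __init__(self, rate_per_sec, burst):
        self.rate, self.burst = rate_per_sec, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, n=1):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

syn_limiters = defaultdict(lambda: TokenBucket(rate_per_sec=100, burst=200))
slice_owner = {"example-slice": ["researcher@example.edu"]}  # audit record

def admit_syn(slice_name):
    # Drop TCP SYNs from a slice once it exceeds its allotted rate.
    return syn_limiters[slice_name].allow()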
Success Scenarios
• Change the research process
– Sound foundation for future network architectures
– Experimental evaluation, rather than paper designs
• Create new services
– Demonstrate new services at scale
– Attract real users
• Aid the evolution of the Internet
– Demonstrate ideas that ultimately see real deployment
– Provide architectural clarity for evolutionary path
• Lead to a future global network
– Purist: converge on a single new architecture
– Pluralist: virtualization supporting many architectures
35
Working Groups to Flesh Out Design
• Research (Dave Clark and Scott Shenker)
– Usage policy / requirements / instrumentation
• Architecture (Larry Peterson and John Wroclawski)
– Define core modules and interfaces
• Backbone (Jen Rexford and Dan Blumenthal)
– Fiber facility / routers & switches / tail circuits / peering
• Wireless (Dipankar Raychaudhuri and Deborah Estrin)
– RF technologies / deployment
• Services (Tom Anderson and Mike Reiter)
– Edge sites / infrastructure and underlay services
• Education
– Training / outreach / course development
36
GENI Backbone Requirements
• Programmability
– Flexible routing, forwarding, addressing, circuit set-up, …
• Isolation
– Dedicated bandwidth, circuits, CPU, memory, disk
• Realism
– User traffic, upstream connections, propagation delays, equipment failure modes, …
• Control
– Inject failures, create circuits, exchange routing messages
• Performance
– High-speed packet forwarding and low delays
• Security
– Preventing attacks on the Internet, and on GENI itself
37
A Researcher’s View of GENI Backbone
• Virtual network topology
– Nodes and links in a particular topology
– Resources and capabilities per node/link
– Embedded in the GENI backbone
• Virtual router and virtual switch
– Abstraction of a router and switch per node
– To evaluate new architectures (routing, switching,
addressing, framing, grooming, layering, …)
• GENI backbone capabilities evolve over time
– To realize the abstractions at finer detail
– To scale to a larger number of experiments
38
Creating a Virtual Topology
[Figure: a virtual topology embedded in the substrate; each virtual node and link is allocated a fraction of a physical node and link, and some virtual links are created by cutting through other nodes]
39
GENI Backbone
[Figure: the GENI backbone, with programmable routers and dynamic ROADMs at the backbone sites, PC clusters and wireless subnets at the edge, and peering with ISP 1 and ISP 2]
40
GENI Backbone Node Components
• Phase 0 – General purpose blade server
– Single node with collection of assignable resources
– Virtual router may be assigned a VM, a blade, or multiple blades
• Phase 1 – Adding higher performance components
– Assignable Network Processor blades and FPGA blades
– NPs also used for I/O for better control of I/O bandwidth
• Phase 2 – Adding reconfigurable cross-connect
– Enable experiments with configurable transport layer
– Provide “true circuits” between backbone virtual routers
• Phase 3 – Adding dynamic optical switch
– Dynamic optical switch with programmable groomer and framer, and reconfigurable add/drop multiplexers
41
GENI Backbone Node Components
• Phase 0 – General purpose blade server
– Node with a collection of assignable resources
– Virtual router may be assigned a virtual machine, a blade, or multiple blades
[Figure: a blade-server chassis with processor blades P1 … Pn, two management processors, and two switches]
42
GENI Backbone Node Components
• Phase 1 – Adding higher performance components
– Assignable Network Processor blades and FPGA blades
– NPs also used for I/O for better control of bandwidth
– ATCA chassis and blades
[Figure: ATCA chassis with line cards LC1 … LCk, processing engines PE1 … PEn, a general-purpose blade server, two management processors, and two switches, interconnected by 10 GigE links]
43
GENI Backbone Node Components
• Phase 2 – Reconfigurable cross-connect
– Enable experiments with configurable transport layer
– Provide “true circuits” between backbone virtual routers
– Cut-through traffic circumvents the router
[Figure: a control plane drives a customizable router hosting virtual routers (VR) and a programmable cross-connect/groomer hosting virtual cross-connects (VX), linked by 1 GE and 10GE+VLAN connections, with wavelength-tunable transponders/combiner onto the WDM fiber]
44
GENI Backbone Node Components
• Phase 3 – Adding dynamic optical switch
– Dynamic optical switch with programmable groomer and framer, and reconfigurable add/drop multiplexers
– Malleable bandwidth
– Arbitrary framing
[Figure: a control plane drives the customizable router, the programmable cross-connect/groomer, wavelength-tunable transponders, and a ROADM, linked by 1 GE and 10GE+VLAN connections]
45
GENI Backbone Software
• Component manager and virtualization layer
– Abstraction of virtual router and virtual switch
– Setting scheduling parameters for subdividing resources
• Multiplexers for resources hard to share
– Single BGP session with the outside world
– Single interface to an element-management system
• Exchanging traffic with the outside world
– Routing and forwarding software to evaluate & extend
– VPN servers and NATs at the GENI/Internet boundary
• Libraries to support experimentation
– Specifying, controlling, and measuring experiments
– Auditing and accounting to detect misbehavior
46
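The talk does not say how the BGP multiplexer would work; as an illustration only, the sketch below arbitrates per-slice announcements so that the single external BGP session advertises one route per prefix (the first-writer-wins policy and all names are assumptions, not GENI's design):

# Hypothetical sketch of multiplexing one external BGP session among slices:
# each slice submits the routes it wants announced, and a simple arbitration
# policy (here: first slice to claim a prefix wins) decides what the shared
# upstream session actually advertises.

class BgpMultiplexer:
    def __init__(self):
        self.announcements = {}          # prefix -> (slice, route attributes)

    def announce(self, slice_name, prefix, attrs):
        if prefix not in self.announcements:
            self.announcements[prefix] = (slice_name, attrs)
            return True                  # accepted into the shared session
        return self.announcements[prefix][0] == slice_name

    def withdraw(self, slice_name, prefix):
        if self.announcements.get(prefix, (None,))[0] == slice_name:
            del self.announcements[prefix]

    def export(self):
        # What the single BGP session with the outside world would advertise.
        return {p: attrs for p, (_, attrs) in self.announcements.items()}

mux = BgpMultiplexer()
mux.announce("slice-a", "203.0.113.0/24", {"as_path": [64500]})
print(mux.export())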
Feasibility
• Industrial trends and standards
– Advanced Telecom Computing Architecture (ATCA)
– Network processors and FPGAs
– SONET cross connects and ROADMs
• Open-source networking software
– Routing protocols, packet forwarding, network address
translation, diverting traffic to an overlay
• Existing infrastructure
– PlanetLab nodes, software, and experiences
– National Lambda Rail and Abilene backbones
47
Example Experiment: End-System Multicast
• End-System Multicast
– On-demand, live streaming of audio/video to many clients
– Intermediate nodes forming a multicast tree
• Ways GENI could support ESM research
– Backbone nodes participating in the multicast tree
– New network architectures running under ESM
• Live: native multicast support and QoS guarantees
• Pre-recorded: burst transfer, push, and network-storage
48
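To make the ESM scenario concrete, here is a minimal sketch of the relay step at one node in the multicast tree: it simply forwards each received chunk of the stream to its children (addresses and ports are illustrative, and the tree-building protocol is omitted):

import socket

# Sketch of the forwarding step in an end-system multicast node: each
# overlay node relays the stream it receives to its children in the tree.

children = [("10.0.0.2", 9000), ("10.0.0.3", 9000)]  # set by the tree-building protocol
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("0.0.0.0", 9000))
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    packet, _ = recv.recvfrom(65535)   # one chunk of the audio/video stream
    for child in children:
        send.sendto(packet, child)     # relay down the multicast tree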
Example Experiment: Routing Control Platform
• Routing Control Platform (RCP)
– Refactoring of control and management planes
– Computes forwarding tables in separate servers
• Ways GENI can support RCP research
– Providing direct control over the data plane
– BGP sessions with the commercial Internet
– Controlled experiments with node/link failures
[Figure: RCP servers inside GENI, exchanging BGP with ISPs on either side]
49
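A minimal sketch of the RCP idea, with a greatly simplified decision process and illustrative data structures (not the actual RCP implementation): a separate server picks a best route per prefix from the BGP routes learned at the edge and pushes forwarding entries to every router it controls.

# Sketch only: centralized route selection and FIB computation.

candidate_routes = {
    "192.0.2.0/24": [
        {"next_hop": "198.51.100.1", "local_pref": 100, "as_path_len": 3},
        {"next_hop": "203.0.113.9", "local_pref": 200, "as_path_len": 5},
    ],
}

def best_route(routes):
    # Prefer higher local-pref, then shorter AS path (a subset of BGP's rules).
    return max(routes, key=lambda r: (r["local_pref"], -r["as_path_len"]))

def compute_fibs(routers):
    fibs = {router: {} for router in routers}
    for prefix, routes in candidate_routes.items():
        chosen = best_route(routes)
        for router in routers:
            fibs[router][prefix] = chosen["next_hop"]
    return fibs

print(compute_fibs(["nyc", "chi"]))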
Example Experiment: Valiant Load Balancing
• Valiant Load Balancing
– Full mesh of circuits between routers
– Direct traffic through intermediate node
• Ways GENI can support VLB
– Virtual circuits with dedicated bandwidth
– Experimentation with routing
– Measurement of effects of higher delay vs. higher throughput on users
– Explore impact on buffer sizing in routers
[Figure: routers 1 … N with access rates r1 … rN connected by a full mesh of circuits; the circuit between routers 1 and 2 is labeled 2r1r2/R]
50
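A small sketch of the two ideas on this slide, under illustrative access rates: traffic is sent through a randomly chosen intermediate router, and each circuit in the full mesh is sized by the standard VLB formula 2*r_i*r_j / sum(r).

import random

# Sketch of Valiant load balancing over a full mesh: every packet (or flow)
# is sent to a randomly chosen intermediate router, which then forwards it
# to its destination.

rates = {"A": 10.0, "B": 10.0, "C": 5.0, "D": 5.0}   # illustrative access rates (Gb/s)
total = sum(rates.values())

def mesh_capacity(i, j):
    # Standard VLB circuit sizing between routers i and j.
    return 2 * rates[i] * rates[j] / total

def vlb_path(src, dst):
    intermediate = random.choice(list(rates))  # may equal src or dst
    return [src, intermediate, dst]

print(vlb_path("A", "C"), mesh_capacity("A", "C"))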
Example Experiments: TCP Switching
• TCP switching
– TCP SYN packet triggers circuit set-up
– Effective traffic management and quality of service
• Ways GENI can support TCP flow switching
– Programmable routers act as edge routers
• Trigger circuit set-up and tear-down
• Buffer data packets during circuit set-up
– Measure overheads and performance
51
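A sketch of the edge-router behavior described above, with the circuit signalling stubbed out (all function names are placeholders, not a real API): a SYN starts circuit set-up, data packets are buffered while the circuit is pending, and the buffer is flushed once the circuit comes up.

from collections import defaultdict, deque

# Sketch only: TCP flow switching at an edge router.

circuits = {}                       # flow -> "PENDING" or "UP"
pending = defaultdict(deque)        # packets buffered during set-up

def request_circuit(flow):
    print("setting up circuit for", flow)       # placeholder for optical signalling

def send_on_circuit(flow, payload):
    print("sent", len(payload), "bytes on circuit", flow)

def handle_packet(flow, flags, payload):
    if "SYN" in flags and flow not in circuits:
        circuits[flow] = "PENDING"
        request_circuit(flow)
    if circuits.get(flow) == "UP":
        send_on_circuit(flow, payload)
    else:
        pending[flow].append(payload)            # buffer until circuit completes

def circuit_established(flow):                   # callback from the control plane
    circuits[flow] = "UP"
    while pending[flow]:
        send_on_circuit(flow, pending[flow].popleft())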
VINI: Step Toward GENI Backbone
• Virtual Network Infrastructure (VINI)
– Multiple network experiments in parallel
– Connections to end users and upstream providers
– Supporting Internet protocols and new designs
• VINI as an initial experimental platform
– Support researchers doing network experiments
– Explore software challenges of building GENI backbone
• GENI will have a much wider scope
– Programmable hardware routers
– Flexible control of the optical components
– Wireless and sensor networks at the edge
52
Network Infrastructure
• Network topology
– Points of Presence
– Link bandwidth
– Upstream connectivity
• Two backbones
– Abilene Internet2
– National Lambda Rail
53
Building Virtual Networks
• Physical nodes
– Initially, high-end computers
– Later, network processors and FPGAs
• Virtual routers (a la PlanetLab)
– Multiple virtual servers on a single node
– Separate shares of resources (e.g., CPU, bandwidth)
– Extensions for resource guarantees and priority
54
Building Virtual Links
• Creating the illusion of interfaces
– Create a tunnel for each link in the topology
– Assign IP addresses to the end-points of tunnels
– Match tunnels with one-hop links in the real topology
55
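A minimal sketch of this bookkeeping, with illustrative node addresses, subnets, and port numbers (not VINI's actual implementation): each virtual link gets a UDP tunnel between the two physical nodes and a small subnet whose two addresses become the tunnel end-points.

import ipaddress

# Sketch: one tunnel per virtual link, with addresses assigned to its end-points.

physical = {"nyc": "198.51.100.10", "chi": "198.51.100.20", "sea": "198.51.100.30"}
virtual_links = [("nyc", "chi"), ("chi", "sea")]

pool = ipaddress.ip_network("10.10.0.0/16").subnets(new_prefix=30)
base_port = 33000

tunnels = []
for i, (a, b) in enumerate(virtual_links):
    subnet = next(pool)
    addr_a, addr_b = list(subnet.hosts())[:2]
    tunnels.append({
        "link": (a, b),
        "udp": ((physical[a], base_port + i), (physical[b], base_port + i)),
        "ifaces": {a: str(addr_a), b: str(addr_b)},   # addresses seen by the virtual routers
    })

for t in tunnels:
    print(t)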
Building Multiple Virtual Topologies
• Separate topology per experiment
– Routers are virtual servers
– Links are a subset of possible tunnels
• Creating a customized environment
– Running User Mode Linux (UML) in a virtual server
– Configuring UML to see multiple interfaces
– Enables running the routing software “as is”
[Figure: several virtual routers (R) sharing a single operating system on each physical node]
56
Overcoming Efficiency Challenges
• Packet forwarding must be fast
– But, we are doing packet forwarding in software
– And don’t want the extra overhead of UML
• Solution: separate packet forwarding
– Routing protocols running within UML
– Packet forwarding running outside of UML
[Figure: XORP (routing software) runs inside UML; Click (forwarding software) runs outside UML]
57
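For concreteness, the fast-path job delegated to Click here is essentially a longest-prefix-match lookup on a table maintained by the routing software; a toy version in Python (illustrative table entries, not Click's implementation):

import ipaddress

# Sketch of longest-prefix-match forwarding on a software FIB.

fib = {
    ipaddress.ip_network("0.0.0.0/0"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "tun1",
    ipaddress.ip_network("10.1.2.0/24"): "tun2",
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return fib[best]

print(lookup("10.1.2.7"))   # -> tun2
print(lookup("192.0.2.1"))  # -> eth0 (default route)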
Carrying Real User Traffic
• Users opt in to VINI
• External Internet hosts
– User runs VPN client
– Connects to VINI node
– VINI connects to Internet
– Apply NAT at boundary
[Figure: the client runs an OpenVPN client to reach a VINI node; VINI nodes (XORP and Click inside UML) exchange routing-protocol messages and forward traffic over UDP tunnels, with network address translation at the boundary toward the server]
58
Example VINI Experiment
• Configure VINI just like Abilene
– VINI node per PoP
– VINI link per inter-PoP link
– Routing configuration as real routers
• Network event
– Inject link failure between two PoPs
– … in midst of an ongoing file transfer
• Measuring routing convergence
– Packet monitoring of the data transfer
– Active probes of round-trip time & loss
– Detailed view of effects on data traffic
59
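A sketch of the active-probing part of such an experiment (addresses, probe rate, and the echo responder are assumptions): sequence-numbered UDP probes reveal loss and elevated round-trip times while routing re-converges after the injected failure.

import socket, time

# Sketch: send periodic UDP probes toward a target inside the virtual
# topology (assumed to echo them back) and log per-probe RTT or loss.

TARGET = ("10.10.0.2", 9999)    # illustrative address of an echo server in the slice
INTERVAL = 0.1                  # 10 probes per second

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(INTERVAL)

for seq in range(600):          # probe for about a minute
    sent = time.time()
    sock.sendto(str(seq).encode(), TARGET)
    try:
        sock.recvfrom(1024)
        print(seq, "rtt_ms", round((time.time() - sent) * 1000, 2))
    except socket.timeout:
        print(seq, "lost")      # losses cluster while routing re-converges
    time.sleep(INTERVAL)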
VINI Current Status
• Initial Abilene deployment
– Eleven sites
– Nodes running XORP and Click on UML
• Upcoming deployment
– Six sites in National Lambda Rail
– … with direct BGP sessions with CRS-1 routers
– Dedicated 1 Gbps bandwidth between Abilene sites
• In the works
– Upstream connectivity via a commercial ISP in NYC
– Speaking interdomain routing with the Internet
Initial write-up: http://www.cs.princeton.edu/~jrex/papers/vini.pdf
60
Conclusions
• Future Internet poses many research challenges
– Security, network management, economics, layer-2, …
• Research community should rise to the challenge
– Conceive of future network architectures
– Prototype and evaluate architectures in realistic settings
• Global Environment for Network Innovations (GENI)
– Facility for evaluating new network architectures
– Virtualization, programmability, and user opt-in
• GENI backbone design
– Fiber facility, tail circuits, and upstream connectivity
– Programmable routers and dynamic optical switches
• VINI prototype
– Concrete step along the way to the GENI backbone
61