Where Do You Want To Peer Today?


Bringing Experimenters to GENI with the Transit Portal
Vytautas Valancius, Hyojoon Kim, Nick Feamster
Georgia Tech
(with Jennifer Rexford and Aki Nakao)
Talk Agenda
• Motivation: Custom routing for each experiment
• Demonstration
• How you can connect to Transit Portal
• Experiment Ideas
  – Anycast
  – Service Migration
  – Flexible Peering
• Using Transit Portal in Education
  – Example problem set
• Summary and Breakout Ideas
2
Networks Use BGP to Interconnect Autonomous Systems
[Diagram: two autonomous systems exchanging route advertisements and traffic over a BGP session]
3
Virtual Networks Need BGP Too
• Strawman
  – Default routes
  – Public IP address
• Problems
  – Experiments may need to see all upstream routes
  – Experiments may need more control over traffic
• Need “BGP”
  – Setting up individual sessions is cumbersome
  – …particularly for transient experiments
[Diagram: GENI experiments connected to ISP 1 and ISP 2 over BGP sessions]
4
Route Control Without Transit Portal
• Obtain connectivity to upstream ISPs
  – Physical connectivity
  – Contracts and routing sessions
• Obtain Internet numbered resources (AS numbers and IP prefixes) from the authorities
• Expensive and time-consuming!
5
Route Control with Transit Portal
[Diagram: Experiment 1 and Experiment 2 in the experiment facility, each with its own virtual router (A and B), connect through the Transit Portal to ISP1, ISP2, and the Internet; legend: routes, packets]
Full Internet route control to hosted cloud services!
6
Connecting to the Transit Portal
• Separate Internet router for each service
  – Virtual or physical routers
• Links between the service router and the TP
  – Each link emulates a connection to an upstream ISP
• Routing sessions to upstream ISPs
  – The TP exposes a standard BGP route control interface
7
Basic Internet Routing with TP
• Experiment with two upstream ISPs
• Experiment can reroute traffic over one ISP or the other, independently of other experiments (see the sketch below)
[Diagram: a virtual BGP router for an interactive cloud service connects to the Transit Portal, which carries BGP sessions and traffic to ISP 1 and ISP 2]
8
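A minimal sketch of how an experiment might express such a preference, assuming a Quagga bgpd runs on its virtual router (as in the setup described later): routes learned over the TP link that emulates ISP 1 are given a higher local preference, so outbound traffic exits via that upstream. The neighbor addresses and the route-map name are hypothetical; the AS numbers reuse the demonstration values.

# Sketch only: emit a Quagga bgpd.conf fragment that prefers routes learned
# from one emulated upstream. Neighbor addresses are placeholders; AS numbers
# reuse the demonstration values (client AS 65002, Transit Portal AS 47065).

def prefer_upstream(preferred_neighbor, other_neighbor, local_as=65002, tp_as=47065):
    """Return bgpd.conf text that raises local preference on `preferred_neighbor`."""
    return "\n".join([
        "route-map PREFER-IN permit 10",
        " set local-preference 200",   # higher than the default of 100
        "!",
        f"router bgp {local_as}",
        f" neighbor {preferred_neighbor} remote-as {tp_as}",
        f" neighbor {preferred_neighbor} route-map PREFER-IN in",
        f" neighbor {other_neighbor} remote-as {tp_as}",
        "!",
    ])

if __name__ == "__main__":
    # Send outbound traffic via the TP link that emulates ISP 1.
    print(prefer_upstream("10.10.1.1", "10.10.2.1"))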
Current TP Deployment
• Server with custom routing software
  – 4 GB RAM, 2 x 2.66 GHz Xeon cores
• Three active sites with upstream ISPs
  – Atlanta, Madison, and Princeton
• A number of active experiments
  – BGP poisoning (University of Washington)
  – IP Anycast (Princeton University)
  – Advanced Networking class (Georgia Tech)
9
Demonstration of Transit Portal
10
Demonstration Setup
[Diagram: a virtual router with private AS 65002 connects over a VPN tunnel to the Transit Portal (public AS 47065) hosted at Georgia Tech (AS 2637); the client network 168.62.21.0/24 is announced, and reachability is checked with traceroute and the looking-glass server route-server.ip.att.net; legend: BGP connectivity]
11
Setting Up Peering with TP
1. Pick a device that will be the virtual router (Linux)
2. Request the needed resources & provide information
   – For tunneling: CA certificate, client certificate & key
   – Get the prefixes that the client will announce
3. Make the tunneling connection with the Transit Portal
4. Set up a BGP daemon on the virtual router (e.g., Quagga)
5. Make the proper changes to the routing table if necessary
6. Check BGP announcements & connectivity (BGP table)... and you are good to go!
(A sketch of steps 3 and 4 follows this slide.)
12
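A minimal sketch of steps 3 and 4 above, assuming a Linux virtual router with OpenVPN and Quagga installed. The TP hostname, tunnel addresses, port, and file names are placeholders; the real values, together with the CA and client certificates, come from the Transit Portal operators. The AS numbers and the client prefix reuse the demonstration values.

# Sketch: write a minimal OpenVPN client config (step 3) and a minimal Quagga
# bgpd.conf (step 4). All endpoints and paths below are placeholders.
from pathlib import Path

TP_HOST = "tp.example.net"        # placeholder Transit Portal endpoint
TP_NEIGHBOR = "10.0.0.1"          # placeholder TP address at the far end of the tunnel
CLIENT_AS, TP_AS = 65002, 47065   # demonstration AS numbers
CLIENT_PREFIX = "168.62.21.0/24"  # demonstration client prefix

OPENVPN_CONF = f"""client
dev tap
proto udp
remote {TP_HOST} 1194
ca ca.crt
cert client.crt
key client.key
"""

BGPD_CONF = f"""router bgp {CLIENT_AS}
 neighbor {TP_NEIGHBOR} remote-as {TP_AS}
 network {CLIENT_PREFIX}
!
"""

if __name__ == "__main__":
    Path("tp-client.ovpn").write_text(OPENVPN_CONF)
    Path("bgpd.conf").write_text(BGPD_CONF)
    print("Wrote tp-client.ovpn and bgpd.conf; start openvpn and bgpd, then check the BGP table (step 6).")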
Experiments Using Transit Portal
13
Experiment 1: IP Anycast
• Internet services require fast name resolution
• IP anycast for name resolution
  – DNS servers with the same IP address
  – IP address announced to ISPs in multiple locations
  – Internet routing converges to the closest server
• Available only to large organizations
14
IP Anycast
• Host service at multiple locations (e.g., on ProtoGENI)
• Direct traffic to one instance of the service or another using anycast (see the sketch below)
[Diagram: name-service replicas in Asia and North America, each behind its own Transit Portal, announce the same anycast routes to ISPs 1–4]
15
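An illustrative sketch of the anycast idea, assuming each replica's virtual router peers with its local Transit Portal: every site announces the same prefix, so Internet routing delivers clients to the nearest replica. The site names, neighbor addresses, and anycast prefix below are hypothetical.

# Sketch only: generate one bgpd.conf per replica site. The identical
# `network` statement at every site is what makes the prefix anycast.

ANYCAST_PREFIX = "192.0.2.0/24"   # placeholder anycast prefix
CLIENT_AS, TP_AS = 65002, 47065   # demonstration AS numbers

SITES = {                          # site -> placeholder address of its TP neighbor
    "asia": "10.10.1.1",
    "north-america": "10.10.2.1",
}

def site_bgpd_conf(tp_neighbor):
    return (f"router bgp {CLIENT_AS}\n"
            f" neighbor {tp_neighbor} remote-as {TP_AS}\n"
            f" network {ANYCAST_PREFIX}\n"
            "!")

for site, neighbor in SITES.items():
    print(f"--- bgpd.conf for {site} ---")
    print(site_bgpd_conf(neighbor))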
Experiment 2: Service Migration
• Internet services run in geographically diverse data centers
• Operators migrate Internet users’ connections
• Two conventional methods:
  – DNS name re-mapping
    • Slow
  – Virtual machine migration with local re-routing
    • Requires a globally routed network
16
Service Migration
[Diagram: an active game service in Asia connects through a Transit Portal to ISPs 1 and 2; a second Transit Portal in North America connects to ISPs 3 and 4; the two sites are linked to the service by tunneled sessions over the Internet (see the migration sketch below)]
17
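A sketch of the routing half of such a migration, assuming Quagga's vtysh on each site's virtual router: after the service state has been moved, the new site announces the service prefix and the old site withdraws it, so inbound traffic follows the service. The prefix and AS number reuse the demonstration values; the site names are placeholders, and the commands are printed rather than executed.

# Sketch only: build the vtysh commands that announce or withdraw the
# service prefix at a site. In practice these would be run on each site's
# virtual router (e.g., over ssh) as part of the migration workflow.
import shlex

PREFIX, CLIENT_AS = "168.62.21.0/24", 65002   # demonstration values

def vtysh_cmds(announce):
    action = "" if announce else "no "
    return ["vtysh", "-c", "configure terminal",
            "-c", f"router bgp {CLIENT_AS}",
            "-c", f"{action}network {PREFIX}"]

def migrate(old_site, new_site):
    # Announce at the new location first, then withdraw at the old one.
    print(f"[{new_site}] " + shlex.join(vtysh_cmds(announce=True)))
    print(f"[{old_site}] " + shlex.join(vtysh_cmds(announce=False)))

migrate("asia-site", "na-site")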
Experiment 3: Flexible Peering
A hosted service can quickly provision capacity in the cloud when demand fluctuates.
18
Using TP in Courses
19
Using TP in Your Courses
• Used in the “Next-Generation Internet” course at Georgia Tech in Spring 2010
• Students set up virtual networks and connect directly to the TP via OpenVPN (similar to the demonstration)
  – Live feed of BGP routes
  – Routable IP addresses for in-class topology inference and performance measurements
20
Example Problem Set
• Set up a virtual network with
  – Intradomain routing
  – Hosted services
  – Rate limiting
• Connect to the Internet with the Transit Portal
21
Ongoing Developments
• More deployment sites
  – Your help is desperately needed
• Integrating TP with network research testbeds (e.g., GENI, CoreLab)
• Faster forwarding (NetFPGA, OpenFlow)
• Lightweight interface to route control
22
Conclusion
• Limited routing control for hosted services
• Transit Portal gives wide-area route control
  – Advanced applications with many TPs
• Open-source implementation
  – Scales to hundreds of client sessions
• The deployment is real
  – Can be used today for research and education
  – More information: http://valas.gtnoise.net/tp
23
Transit Portal in the News
24
Breakout Session Agenda
• Q&A
• Demonstration Redux
• Brainstorming Experiments
  – MeasuRouting: Routing-Assisted Traffic Monitoring
  – Pathlet Routing and Adaptive Multipath Algorithms
  – Aster*x: Load-Balancing Web Traffic over Wide-Area Networks
  – Migrating Enterprises to Cloud-based Architectures
25
Extra Slides
26
Scaling the Transit Portal
• Scale to dozens of sessions to ISPs and hundreds of sessions to hosted services
• At the same time:
  – Present each client with sessions that have the appearance of direct connectivity to an ISP
  – Prevent clients from abusing Internet routing protocols
27
Conventional BGP Routing
• A conventional BGP router:
  – Receives routing updates from peers
  – Propagates routing updates about only one path
  – Selects one path to forward packets
• Scalable, but not transparent or flexible
[Diagram: a single BGP router between ISP1/ISP2 and two client BGP routers; legend: updates, packets]
28
Scaling TP Memory Use
• Store and propagate all BGP routes from ISPs
  – Separate routing tables
• Reduce memory consumption
  – Single routing process with shared data structures (see the toy sketch below)
  – Reduces memory use from 90 MB/ISP to 60 MB/ISP
[Diagram: one routing process with a routing table per ISP serves the virtual routers of an interactive service and a bulk-transfer service]
29
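A toy illustration of the shared-data-structure idea (not the actual TP code): the routing process stores each upstream route object exactly once, and every client-facing table holds references into that shared store instead of its own copy of the full table.

# Toy model only: per-client views reference shared Route objects rather
# than duplicating them, which is where the memory saving comes from.

class Route:
    __slots__ = ("prefix", "as_path", "next_hop")
    def __init__(self, prefix, as_path, next_hop):
        self.prefix, self.as_path, self.next_hop = prefix, as_path, next_hop

shared_store = {}                   # (upstream, prefix) -> single Route object

def learn(upstream, route):
    shared_store[(upstream, route.prefix)] = route

def client_view(upstream):
    # Each client gets a full view of one upstream, but only by reference.
    return {p: r for (u, p), r in shared_store.items() if u == upstream}

learn("ISP1", Route("203.0.113.0/24", (2637, 64500), "10.10.1.1"))
views = {name: client_view("ISP1") for name in ("experiment-A", "experiment-B")}
# Both views point at the same Route instance, not two copies:
assert views["experiment-A"]["203.0.113.0/24"] is views["experiment-B"]["203.0.113.0/24"]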
Scaling TP CPU Use
• Hundreds of routing sessions to clients
  – High CPU load
• Schedule and send routing updates in bundles (see the sketch below)
  – Reduces CPU from 18% to 6% for 500 client sessions
[Diagram: same architecture as the previous slide, with one routing process and per-ISP routing tables serving the clients’ virtual routers]
30
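A toy sketch of the bundling idea (not the actual TP code): updates arriving from the upstream ISPs are queued and pushed to all client sessions in periodic batches, trading a small propagation delay for far fewer per-session writes.

# Toy model only: queue incoming updates and flush them to every client
# session as one bundle, instead of one write per update per session.

class ClientSession:
    def __init__(self, name):
        self.name = name
    def send(self, bundle):
        print(f"{self.name}: received bundle of {len(bundle)} updates")

pending = []                        # updates waiting to be propagated

def on_update_from_isp(update):
    pending.append(update)          # cheap: just enqueue

def flush_to_clients(clients):
    if not pending:
        return
    bundle = list(pending)
    pending.clear()
    for session in clients:
        session.send(bundle)        # one batched write per client session

clients = [ClientSession(f"experiment-{i}") for i in range(3)]
for i in range(10):
    on_update_from_isp({"prefix": f"198.51.100.{i}/32"})
flush_to_clients(clients)           # in the real system this runs on a timer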
Scaling Forwarding Memory
• Connecting clients
  – Tunneling and VLANs
• Curbing memory usage
  – Separate virtual routing tables with a default route to the upstream (see the sketch below)
  – 50 MB/ISP -> ~0.1 MB/ISP memory use in the forwarding table
[Diagram: one forwarding table per ISP serves the virtual BGP routers of an interactive service and a bulk-transfer service]
31
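An illustrative sketch of the forwarding-side idea using standard Linux policy routing (not the TP implementation): each client gets a tiny per-client table holding only a default route toward its chosen upstream, instead of a full copy of that upstream's forwarding table. Interface names, gateway addresses, and table numbers are placeholders, and the commands are printed rather than executed.

# Sketch only: print the iproute2 commands that create a one-entry routing
# table per client and steer that client's traffic through it.

CLIENTS = [
    # (client-facing interface, upstream gateway, upstream interface, table id)
    ("tap-expA", "10.10.1.1", "vlan-isp1", 101),
    ("tap-expB", "10.10.2.1", "vlan-isp2", 102),
]

for client_if, gateway, upstream_if, table in CLIENTS:
    # One-entry table: a default route toward the client's chosen upstream ISP.
    print(f"ip route add default via {gateway} dev {upstream_if} table {table}")
    # Traffic arriving from this client is looked up in its own small table.
    print(f"ip rule add iif {client_if} lookup {table}")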