Towards Wireless Overlay Network Architectures
Berkeley-Helsinki Summer Course
Lecture #15: Content Distribution Networks and Service Composition Paths
Randy H. Katz
Computer Science Division
Electrical Engineering and Computer Science Department
University of California
Berkeley, CA 94720-1776
Outline
• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition
Services Within the Network: Caching and Distribution
[Figure: the "Internet Grid" of parallel network backbones joined at Internet exchange points, with co-location facilities, scalable servers, and Web caches]
Caching Advantages for Service Providers
[Figure: an ISP backbone linking local POPs to the Internet, with caches ($) in the backbone and at each local POP]
• Move data closer to the consumer
• Backbone caches save bandwidth
• Edge caches for QoS
• 4 billion hits/day at AOL!
• Even more crucial for broadband access networks, e.g., cable, DSL
Eric Brewer
Reverse Caching
• Forward proxy cache: the cache handles client requests on their way to the Internet
• Reverse proxy cache: the cache fronts the origin server
Eric Brewer
Surge Protection via Clustered Caches
Reverse caches buffer load across multiple sites
[Figure: a reverse proxy cluster in a hosting provider network fronting www.site1.com through www.site6.com]
Eric Brewer
Content Distribution
We can connect these caches!
[Figure: forward caches in an ISP network linked across the Internet to a reverse proxy cluster in a hosting provider network]
Push content out to the edge
Eric Brewer
Outline
• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition
Example: Application-Level Multicast
Solve the multicast management and peering problems by moving up the protocol stack
[Figure: isolated multicast clouds connected only by traditional unicast peering]
Steve McCanne
Multicast as an Infrastructure Service
• Global multicast as an "infrastructure service", not a core network primitive (see the sketch below)
– Circumvents the technical/operational/business barriers to interdomain multicast routing, management, and billing
• No coherent architecture for infrastructure services, because of the end-to-end principle
• Needed: a service stack to complement the IP protocol stack
– Open redirection
– Content-level peering
Steve McCanne
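To make the idea concrete, here is a minimal sketch of application-level multicast under the assumptions above; the overlay tree, node names, and the print-in-place-of-send are illustrative, not from the lecture. Each overlay node re-sends the stream to its children over ordinary unicast, so no interdomain multicast routing is needed.

```python
# Illustrative sketch of application-level multicast (names hypothetical):
# each overlay node forwards packets to its children over plain unicast,
# so the "multicast" service lives above IP rather than inside it.
children = {
    "source":      ["isp-a.relay", "isp-b.relay"],
    "isp-a.relay": ["client1", "client2"],
    "isp-b.relay": ["client3"],
}

def forward(node: str, packet: bytes) -> None:
    for child in children.get(node, []):
        # A real node would do a unicast send (e.g., UDP) to the child here.
        print(f"{node} -> {child}: {packet!r}")
        forward(child, packet)

forward("source", b"live video frame")
```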
The Service Stack
[Figure: end hosts run applications over end-host services such as TCP; routers provide network services (IP); the end-to-end argument applies between the end hosts]
Steve McCanne
The Service Stack
[Figure: a layer of infrastructure services appears between end-host services and network services: a DNS stub on the end host resolves through an overlay DNS service running above the routers' IP service]
Steve McCanne
The Service Stack
[Figure: the infrastructure-services layer grows to include cache services and proxy services hosted in the overlay, alongside overlay DNS]
Steve McCanne
The Service Stack
[Figure: a redirection service ties the end host's DNS stub to the overlay's cache and proxy services, completing the infrastructure-services layer]
Steve McCanne
Service Elements for Internet Broadcast
[Figure: end-host applications use TCP plus DNS and redirection stubs; infrastructure services provide overlay broadcast, redirection, and DNS; network services provide IP and scoped IP multicast]
Steve McCanne
Incremental Path
[Figure: the same service elements deployed with today's streaming clients (G2, WMT, QT4 over RTSP/RTP) talking to overlay broadcast, redirection, and DNS services over IP and scoped IP multicast]
Steve McCanne
Broadcast Overlay Architecture
[Figure: broadcasters feed content into a content broadcast network; a multicast overlay network distributes it to edge servers; a redirection fabric provides load balancing through server redirection and inter-ISP redirection peering; a content broadcast management platform and tools oversee delivery to clients]
Steve McCanne
A New Kind of Internet
• Actively push services towards the edges: caches, content distribution points
• Manage redirection, not routes
• New application-specific protocols
– Push content to the edge
– Invalidate remote content for freshness
– Collate remote logs into a single log
– Internet TV/Radio: streaming media that works
• Twilight of the end-to-end argument
– Trusted service providers/network intermediaries
– Service providers create their own application-specific overlays, e.g., cache and streaming-media content distribution
Outline
• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition
Web Caching Service: Akamai
• Number of servers: 5000
• Number of networks: 350
• Number of countries: 50
• Typically 70-90% of Web content
(as of Fall 2000)
ARLs and Akamai Traffic Redirection
• http://www.foo.com/images/logo.gif when Akamaized becomes (fields parsed in the sketch below):
– http://a836.g.akamaitech.net/7/836/123/e358f5db004e9/www.foo.com/images/logo.gif
» Akamai Domain: redirection to an Akamai server
» Type Code: identifies the Akamai service
» Serial #: content "bucket" served from the same server
» Content Provider Code: identifies the Akamai content provider
» Object Data: expiration/version information
» URL: original locator, used if the content is not at the server
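A minimal sketch of pulling those fields out of an ARL; this is not Akamai tooling, and it assumes the exact layout shown on the slide.

```python
# Hypothetical helper, not Akamai tooling: split an "Akamaized" URL (ARL)
# into the fields named above, assuming the layout
# http://<serial>.g.akamaitech.net/<type>/<serial>/<cp-code>/<object-data>/<origin-url>
from urllib.parse import urlparse

def parse_arl(arl: str) -> dict:
    u = urlparse(arl)
    type_code, serial, cp_code, object_data, origin = u.path.lstrip("/").split("/", 4)
    return {
        "akamai_domain": u.netloc,          # e.g. a836.g.akamaitech.net
        "type_code": type_code,             # identifies the Akamai service
        "serial": serial,                   # content "bucket"
        "cp_code": cp_code,                 # content provider
        "object_data": object_data,         # expiration/version info
        "origin_url": "http://" + origin,   # fetched on a cache miss
    }

print(parse_arl("http://a836.g.akamaitech.net/7/836/123/"
                "e358f5db004e9/www.foo.com/images/logo.gif"))
```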
Akamai's DNS Extensions
• *.g.akamai.net mapped onto an IP address
• Two-level hierarchy (see the sketch below)
– HLDNS: redirects the lookup to an LLDNS "close" to the client
» Recomputes its network map every O(10 minutes)
» Resolutions have a TTL of 30 minutes
– LLDNS: redirects to the "optimally located" Akamai server for the client
» Recomputes its network map every O(10 seconds)
» Resolutions have a TTL of 30 seconds
– Map generation based on:
» Internet congestion
» System load
» User demands
» Server status
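A toy model of the two-level resolution, with hypothetical names and loads; the real system's map generation is far richer, but the division of labor (coarse and slow HLDNS, fine and fast LLDNS) looks like this:

```python
# Illustrative sketch (assumed behavior, not Akamai code) of two-level
# DNS redirection: a high-level server picks an LLDNS near the client;
# the low-level server picks the best content server right now.
import random

REGION_LLDNS = {"eu": "lldns.eu.example", "us": "lldns.us.example"}  # hypothetical
SERVERS = {"lldns.eu.example": ["193.0.2.10", "193.0.2.11"],
           "lldns.us.example": ["198.51.100.7"]}
LOAD = {ip: random.random() for ips in SERVERS.values() for ip in ips}

def hldns_resolve(client_region: str):
    # HLDNS: coarse map, long TTL (30 min); just picks a nearby LLDNS
    return REGION_LLDNS[client_region], 30 * 60

def lldns_resolve(lldns: str):
    # LLDNS: fine-grained map, short TTL (30 s); picks the least-loaded server
    return min(SERVERS[lldns], key=LOAD.get), 30

lldns, ttl1 = hldns_resolve("eu")
ip, ttl2 = lldns_resolve(lldns)
print(f"client -> {lldns} (TTL {ttl1}s) -> {ip} (TTL {ttl2}s)")
```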
Akamai Fault Tolerance
• Machine failures
– Buddy system with paired back-up servers
– Recovery time is 1 second once detected
• Network outages/data center outages
– Continuous monitoring
– Set a site's reported response time to infinity when it is out, driving it out of the network maps (sketched below)
– Recovery time is 1-2 minutes, thanks to frequent map updates
• Content provider home site must be robust!
• 7x24x365 NOC
• Geoflow Monitoring Software/Traffic Analyzer
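A one-screen sketch of the "response time set to infinity" trick; the data-center names and latencies are made up.

```python
# Sketch (assumed behavior) of how monitoring drives failed sites out of
# the map: a dead data center's reported response time becomes infinity,
# so map generation never selects it; frequent map updates bound recovery.
import math

measured = {"dc-east": 42.0, "dc-west": math.inf, "dc-eu": 65.0}  # ms; inf = down

def pick_server(latencies: dict) -> str:
    # Map generation favors the lowest measured response time.
    return min(latencies, key=latencies.get)

print(pick_server(measured))   # dc-west is never chosen while it is down
```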
Internet Cache Protocols
• Internet Cache Protocol (ICP)
– Peer-to-peer: check whether missing content is in a nearby cache
• Cache Array Resolution Protocol (CARP)
– Confederation of caches forming a larger, unified cache (hash routing sketched below)
• Cache Digest Protocol
– Exchange descriptions of what each cache contains
– Used to manage peered caches
– Stale cached data can be an issue
• Web Cache Coordination Protocol (WCCP)
– Intercept HTTP and redirect it to a cache
– Cisco Cache Engine: WCCP manages router redirection
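As an illustration of CARP's idea (simplified; the real protocol also weights members by a load factor), hash-based routing deterministically maps each URL to one member of the cache array, so the array behaves as one large cache with no duplicated objects:

```python
# Sketch of CARP-style hash routing (simplified from the real CARP spec):
# each URL maps deterministically to one cache in the array via a
# combined member+URL hash ("rendezvous" hashing). Hostnames are made up.
import hashlib

CACHES = ["cache1.isp.net", "cache2.isp.net", "cache3.isp.net"]  # hypothetical

def carp_select(url: str, caches=CACHES) -> str:
    def score(cache: str) -> str:
        # Highest combined hash wins; adding/removing a member only
        # remaps the URLs that scored highest on that member.
        return hashlib.md5((cache + url).encode()).hexdigest()
    return max(caches, key=score)

print(carp_select("http://www.foo.com/images/logo.gif"))
```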
Impediments to Caching
• Cache busting
– Server actively prevents content from being cached
– E.g., set the EXPIRES field to a value in the past, or CACHE-CONTROL: no-cache or no-store (see the example below)
– Responses
» Hit metering: inform the origin server of the # of users accessing cached content
» Ad insertion: the proxy server inserts ads, freeing the origin server from doing so
• Replication
– Mirror sites
– In a sense, content distribution is selective mirroring!
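For concreteness, a tiny illustrative origin server that "cache-busts" by emitting an Expires date in the past plus explicit Cache-Control directives; the port and body are placeholders.

```python
# Illustrative cache-busting origin server: shared caches will not store
# or reuse this response because of the Expires/Cache-Control headers.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoCacheHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"fresh on every request\n"
        self.send_response(200)
        # Expires in the past + explicit no-cache/no-store directives
        self.send_header("Expires", "Thu, 01 Jan 1970 00:00:00 GMT")
        self.send_header("Cache-Control", "no-cache, no-store, must-revalidate")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8080), NoCacheHandler).serve_forever()  # uncomment to run
```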
Outline
• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition
Example CDN Application: Internet Broadcast
• Media Distribution
– Application-level multicast
– Enforceable "content QoS"
• Content Peering
– Channel peering
– Redirection peering
McCanne, FFNets
Media Distribution
[Figure: application-level multicast nodes and a redirection service deployed across access networks and backbones]
McCanne, FFNets
Congested Peering Points
[Figure: inter-provider traffic funnels through congested public peering points]
McCanne, FFNets
CDN Quality of Service
• How to overcome congestion at peering points?
• Hard, because peering policies evolve and hot spots move
• A few existing approaches
– Route around hot spots
– Satellite bypass
– Dispersity routing (Maxemchuk, 1977)
• A new alternative
– Provision the overlay network
» Seemingly intractable (QoS across ISP boundaries)
» But in reality, a good approximation is not so hard to achieve…
McCanne, FFNets
CDN Quality of Service
• Build on intradomain SLAs
– A given ISP typically offers a strong SLA for on-net destinations reached from a "transit/colo" connection
– But all bets are off when you cross a peering point
McCanne, FFNets
CDN Quality of Service
• Solution
– Create private, content-level peering points
– Bypass congested Internet peering points
– Enforce application-level QoS policies
[Figure: overlay nodes co-located with each ISP's transit links, privately interconnected]
McCanne, FFNets
Congested Peering Points
[Figure: P-NAPs attached by transit links to multiple backbones]
McCanne, FFNets
Bypassing Congestion
[Figure: overlay traffic routed through the P-NAPs, avoiding the congested peering points]
Broadcast from Anywhere
[Figure: content injected at any point reaches clients through the P-NAP-based overlay]
McCanne, FFNets
Content-level QoS
• Mark and police traffic at the injection point (see the socket sketch below)
• Signal QoS policies across the overlay network
• Ensure content QoS on each overlay hop
– Map content QoS to the underlying network QoS
– e.g., diffserv, RSVP, MPLS
• No need for ubiquitous, end-to-end QoS in the network
• No need to modify apps or end hosts
[Figure: ingress policing at the injection point; individual overlay hops ride an ATM PVC, MPLS, a unicast mesh, IP multicast with diffserv, or DSL unicast]
McCanne, FFNets
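The per-hop mapping to diffserv can be as simple as marking the DSCP on outgoing overlay packets. A sketch, assuming a Linux host and placeholder addresses; AF41 is just one plausible class for video, not something the slide prescribes.

```python
# Minimal sketch of per-hop content QoS: an overlay node marks outgoing
# packets with a DSCP so a diffserv-capable underlying network gives them
# the desired treatment. Address/port below are placeholders.
import socket

DSCP_AF41 = 34              # an "assured forwarding" class often used for video

def open_marked_socket(dscp: int = DSCP_AF41) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The IP TOS byte carries the 6-bit DSCP in its upper bits.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s

s = open_marked_socket()
s.sendto(b"media chunk", ("203.0.113.5", 5004))   # next overlay hop (example)
```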
Content-level QoS
[Figure: the application-level multicast core and redirection fabric are completely managed and provisioned; the broadcast edge is moved as close as possible to the user]
McCanne, FFNets
Channel Peering
• Establish data peering relationships at "content exchange points"
– Easy with application-level multicast
• Enforce QoS across the peering point
• The catch
– How to do settlement?
– Same problem as with IP multicast peering (providers don't want to turn it on because of the lost revenue stream)
• The solution: audience tracking
– As in EXPRESS multicast (Holbrook & Cheriton)
McCanne, FFNets
Audience Tracking
[Figure: audience information collected across the overlay]
A provider can now be a transit carrier for broadcast traffic, which is not viable with vanilla multicast
McCanne, FFNets
Audience Tracking
Given such information, the channel-peering component of content peering becomes viable…
McCanne, FFNets
Redirection Peering
[Figure: a three-tier hierarchy of content aggregators (broadcasters), broadcast transit providers, and access networks (affiliates), with redirection peering between the tiers]
McCanne, FFNets
Redirection Peering
• Need a common architecture that lets different vendors build different pieces that work with one another (yet still compete)
• The challenges
– Define the redirection architecture
– New client/infrastructure protocol & API (a la DNS)
– Do so in a backward-compatible way
– Others…
• One of the next big architectural issues for the Internet…
McCanne, FFNets
Summary
• The "Broadcast Internet" is upon us
– Media distribution with app-level multicast
– Content peering
• Lots of intelligence in the network
– At odds with end-to-end?
• Ultimately, these technologies will emerge as the "BGP for Internet broadcast" and truly catalyze convergence
McCanne, FFNets
Outline
• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition
Alternative Broadband Content Delivery Models
• Push Model
– DirecTV, Broadcast.com
• Pull Model
– Web browsing
• Interactive
– Push-Pull Model
» Mix of broadcast data and on-demand requests
» e.g., WebTV, OpenTV, …
– Interactive Game Model
Content-Deliver-Present: XM Radio
• Content: 100 channels from a dedicated publisher (content push)
• Distribution: dedicated satellite + XM Radio-managed terrestrial repeaters
• Presentation: dedicated XM Radio terminal
End-to-End Controlled QoS
Case Study: XM Radio
• Wide-area, unidirectional, high-bandwidth distribution-based access to vehicles and homes
• Framework for multimedia content dissemination beyond real-time audio
• Localization services at redirection points
Content-Deliver-Present: DirecTV
• Content: independent channels (content push)
• Distribution: dedicated satellite
• Presentation: dedicated terminal (set-top box + TV)
End-to-End Controlled QoS
Content-Deliver-Present: Internet
• Content: publishers (Web sites), with both content push and content pull
• Distribution: the Internet plus application-specific overlay networks, with computing/storage in the net providing supporting services like Web caching and content distribution (controlled QoS)
• Presentation: access networks & terminals (controlled QoS): TV, PC, cell phone, …
Content Delivery Service: Cidera (formerly SkyCache)
• Satellite-based broadcast overlay network to improve movement of Internet information
– Web pages, software updates, streaming media
• Customers
– Content publishers: quick access to the network edges, redundancy
– Enterprises: virtual VSAT network, redundancy
– ISPs: quick access to nationwide POPs, redundancy
• Distribution ONLY; not servers, not content, not access
Content Dissemination and Caching: Edgix
• Accelerated content delivery, worldwide
• "One router hop away from the end customer"
• For ISPs and large corporate customers
• Satellite bypass of Akamai
Content-Deliver-Present: WebTV
• Content: Web sites and publishers' channels
• Distribution: the Internet through the WebTV service (Web-page transformation, e-mail) over dial-up access; channels over cable/coax access
• Presentation: WebTV set-top + TV (access networks & terminals, controlled QoS)
Content-Deliver-Present: Internet Access over Cellular
• Content: publishers (Web sites)
• Distribution: the Internet through a cellular data service (Web-page transformation, e-mail), reached via the PSTN and the cellular access network
• Presentation: cellular handset/modem (access networks & terminals, controlled QoS)
In-Vehicle Service Scenario
Revenue model: subscription fees and equipment purchase vs. advertiser pays for targeted ad insertion based on location, activity, vehicle-owner demographics, etc.
[Figure: hybrid networking combining a broadband downlink (radio/TV/digital media plus info content such as news/maps) with a narrowband uplink; a vehicle LAN connects computers, displays, audio out, etc.; access network, ISP, backbone, and portal tiers with scalable servers and caches host a vehicle portal (info, repair records, ads); the Web-based interface is available in-vehicle, at home, and at work]
In-Home Service Scenario
Revenue model: subscription fees and equipment purchase vs. advertiser pays for targeted ad insertion based on location, activity, home-owner demographics, etc.
[Figure: hybrid networking combining a broadband downlink (LMDS/MMDS, cable, DSL carrying radio/TV/digital media plus info content such as news/maps) with a home LAN connecting a set-top box, digital recorder, and home control; access network, ISP, backbone, and portal tiers with scalable servers and caches host a home portal (info, repair records, ads); the Web-based interface is available at home, at work, and in-vehicle]
Outline
• Rationale for Content Distribution
• Web Caching Issues and Architectures
• Akamai Architecture
• Streaming Content Distribution
• Business Issues in Content Distribution
• Service Composition
Service Composition
• Assumptions
– Providers deploy services throughout the network
– Portals constructed via service composition
» Quickly enable new functionality on new devices
» Possibly through SLAs
– Code is initially non-mobile
» Service placement is managed: fixed locations, evolving slowly
– New services created via composition
» Across service providers in the wide area: a service-level path
Service Composition
[Figure: two example service-level paths, with each service replicated across instances: a video-on-demand server (provider A) composed with a transcoder (provider B) to reach a thin client, and an email repository (provider R) composed with a text-to-speech service (provider Q) to reach a cellular phone]
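A toy rendering of the email-to-speech path in the figure: a service-level path is just an ordered chain of service instances, each consuming the previous one's output. All names and payloads here are stand-ins, not part of the lecture's system.

```python
# Toy sketch of service composition (names hypothetical): a service-level
# path is an ordered chain of services, e.g. email -> text-to-speech -> phone.
from typing import Callable, List

Service = Callable[[bytes], bytes]

def compose_path(services: List[Service]) -> Service:
    def path(data: bytes) -> bytes:
        for svc in services:          # data flows through each service in order
            data = svc(data)
        return data
    return path

# Hypothetical stand-ins for deployed service instances:
email_fetch = lambda _: b"You have 2 new messages."
text_to_speech = lambda text: b"<audio>" + text + b"</audio>"

phone_session = compose_path([email_fetch, text_to_speech])
print(phone_session(b"user=alice"))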
Architecture for Service Composition and Management
• Application plane: composed services
• Logical platform: peering relations, overlay network
– Service-level path creation: service location, network performance
– Handling failures: detection, recovery
• Hardware platform: service clusters
Architecture
[Figure: a source and a destination connected across the Internet through service clusters, with peering between clusters providing monitoring & cascading; composed services form the application plane, peering relations and the overlay network form the logical platform, and service clusters form the hardware platform]
• Service cluster: a compute cluster capable of running services
• Overlay nodes are clusters
– Compute platform
– Hierarchical monitoring
– The overlay network provides the context for service-level path creation & failure handling
Service-Level Path Creation
• Connection-oriented network
– Explicit session setup plus state at intermediate nodes
– Connection-less protocol for connection setup
• Three levels of information exchange
– Network path liveness
» Low overhead, but very frequent
– Performance metrics: latency/bandwidth
» Higher overhead, not so frequent
» Bandwidth changes only once in several minutes
» Latency changes appreciably only once an hour
– Information about service location in clusters
» Bulky, but does not change very often
» Also uses an independent service-location mechanism
Service-Level Path Creation
• Link-state algorithm for info exchange
– Reduced measurement overhead: finer time-scales
– Service-level path created at the entry node (see the sketch below)
– Allows all-pairs-shortest-path calculation on the graph
– Path caching
» Remember what previous clients used
» Another use of clusters
– Dynamic path optimization
» Since session transfer is a first-order feature
» The first path created need not be optimal
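A sketch of what path creation at the entry node might look like under this model: Floyd-Warshall over the link-state latencies, then constraining the path through a node that hosts the needed service. The topology, latencies, and the transcoder placement are invented for illustration.

```python
# Sketch (assumed model) of service-level path selection at the entry
# node: all-pairs shortest paths over the link-state graph, then pick
# the cheapest route through a node hosting the required service.
import itertools

INF = float("inf")

def all_pairs_shortest(nodes, latency):
    # Floyd-Warshall; latency[(u, v)] comes from link-state announcements
    d = {(u, v): (0 if u == v else latency.get((u, v), INF))
         for u in nodes for v in nodes}
    for k, i, j in itertools.product(nodes, repeat=3):
        if d[i, k] + d[k, j] < d[i, j]:
            d[i, j] = d[i, k] + d[k, j]
    return d

nodes = ["entry", "A", "B", "exit"]
latency = {("entry", "A"): 10, ("A", "B"): 5, ("B", "exit"): 12,
           ("entry", "B"): 30, ("A", "exit"): 40}
d = all_pairs_shortest(nodes, latency)

# Cheapest path through a transcoder hosted only at A or B (hypothetical):
best = min(["A", "B"], key=lambda s: d["entry", s] + d[s, "exit"])
print(best, d["entry", best] + d[best, "exit"])
```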
Session Recovery: Design Tradeoffs
• End-to-end:
– Pre-establishment possible
– But failure information has to propagate
– Performance of the alternate path could have changed
• Local-link:
– No need for information to propagate
– But additional overhead
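A toy contrast of the two recovery styles under an assumed overlay: end-to-end recovery falls back to a pre-established alternate path (once the failure news reaches the entry node), while local-link recovery splices a detour around the failed hop in place. Paths and detours are invented.

```python
# Toy comparison (assumed model) of the two session-recovery styles.
primary   = ["entry", "A", "B", "exit"]
alternate = ["entry", "C", "exit"]          # pre-established end-to-end backup
detours   = {("A", "B"): ["A", "D", "B"]}   # per-link local detours

def end_to_end_recover(failed_link):
    # Failure info must propagate back to the entry node first.
    return alternate

def local_recover(path, failed_link):
    # Patch around the failed hop without telling the entry node.
    i = path.index(failed_link[0])
    return path[:i] + detours[failed_link] + path[i + 2:]

print(end_to_end_recover(("A", "B")))       # ['entry', 'C', 'exit']
print(local_recover(primary, ("A", "B")))   # ['entry', 'A', 'D', 'B', 'exit']
```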
The Overlay Topology: Design Factors
• How many nodes?
– A large number of nodes reduces latency overhead
– But raises scaling concerns
• Where to place nodes?
– Close to the edges, so hosts have nearby points of entry and exit
– Close to the backbone, to take advantage of good connectivity
• Who to peer with?
– Nature of connectivity
– Least sharing of physical links among overlay links
Problem: Internet Badly Suited to Mission-Critical Applications
[Figure: the Internet carries traffic from A to B over a bad path even though better alternate paths exist]
• Commercial peering architecture:
– Directly conflicts with robustness
– Ignores many existing alternate paths
• The Internet's global scale:
– Prevents sophisticated algorithms
– Route selection uses fixed, simple metrics
– Routing isn't sensitive to path quality
MIT RON Project
Proposed Solution: Resilient Overlay Network (RON)
[Figure: where the underlying network gives A and B a bad path, RON forwards the traffic through an intermediate overlay node on a better path]
• One RON per distributed app
• RON nodes in different ASes form an overlay network
• RON nodes run an application-specific routing protocol among themselves (see the sketch below)
• Application data is tunneled over reliable, secure transport between RON nodes
MIT RON Project
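A sketch of application-directed route selection in the RON spirit (not the MIT implementation): probe results per overlay link, with the application choosing whether latency or loss is the metric. The topology and numbers are invented.

```python
# Illustrative RON-style path selection: each node probes its overlay
# links; the application supplies the metric (minimize latency or loss).
probes = {  # measured overlay links: (src, dst) -> latency (ms) and loss rate
    ("A", "B"): {"lat": 80.0, "loss": 0.30},   # direct path is lossy
    ("A", "R"): {"lat": 25.0, "loss": 0.01},
    ("R", "B"): {"lat": 30.0, "loss": 0.01},
}

def path_metric(path, key):
    hops = list(zip(path, path[1:]))
    if key == "lat":                       # latencies add per hop
        return sum(probes[h]["lat"] for h in hops)
    if key == "loss":                      # survival probabilities multiply
        p = 1.0
        for h in hops:
            p *= 1.0 - probes[h]["loss"]
        return 1.0 - p

candidates = [("A", "B"), ("A", "R", "B")]
for key in ("lat", "loss"):
    best = min(candidates, key=lambda p: path_metric(p, key))
    print(key, "->", best)                 # both metrics prefer the relay here
```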
Advantages
• Better robustness
– Less susceptible to DoS attacks
– Wider choice of routes
– Route selection tailored to application needs
• Better security
– Traffic is encrypted and authenticated
– The routing protocol is authenticated
– Single administrator for the entire RON
• Better responsiveness
– Application-specific routing metrics for QoS
MIT RON Project
Research Questions
• How to design overlay networks?
– Self-configuration
– Understanding the performance of the underlying net
• How to design robust, responsive routing protocols?
– Fast fail-over
– Sophisticated metrics
– Application-directed path selection
• Solutions take advantage of RON properties
– Just one RON per application
– Each RON run by a single administrator
MIT RON Project
Building the RON Prototype
[Figure: RON node architecture: on the control path, an active prober feeds a performance database used by the topology module and the RON manager; on the data path, the router forwards application traffic over TCP/UDP through a resource manager down to IP]
• Explore end-to-end Internet performance
• Simulate RON path selection algorithms
• Deploy a realistic RON using:
– Co-located hosts on different backbones
– End-system API for application-directed routing
• Test the prototype using multi-party, secure videoconferencing
MIT RON Project