
Broadcast Federation
Untangling the Internet Multicast Landscape
Yatin Chawathe
AT&T Labs-Research, Menlo Park
Joint work with Mukund Seshadri
Overview
• The problem:
 - No global multicast/broadcast solution
 - Non-interoperable broadcast technologies
• The missing piece:
 - An internetworking architecture
• Our approach:
 - Overlay of “peering gateways” with explicit peering agreements
The Problem
[Figure: diverse Broadcast Networks (BNs): a broadcast CDN, an ISP’s multicast network, an SSM-only network, and yet another CDN. Callouts: “Too many non-interoperable broadcast protocols” and “How do clients in one network access content being broadcast in another network?”]
No single solution is viable
• IP multicast
 - No viable inter-domain protocol
 - Address scarcity
• SSM
 - Better semantics and business model
 - But, restricted service model
• Overlay CDNs
 - Easier to deploy, but less efficient; need more infrastructure
An Interconnection Architecture
• Composition of diverse broadcast networks
 - Equivalent of BGP in the unicast IP world
• Requirements:
 - Support range of net- & app-layer protocols
 - Scale up in size (# of sessions, # of clients)
 - Support explicit service agreements
Our Approach: Broadcast Federation
[Figure: Broadcast Gateways (BGs) at the edges of diverse Broadcast Networks (BNs), connected by unicast pipes with explicit service agreements; a Federation JOIN request travels across the overlay.]
Build an overlay network across broadcast networks: i.e., a Broadcast Federation
Service Model
• Federation session “owned” by single BN
 - Convenient rendezvous point
 - Distribution trees rooted at owner BN
• Independent of intra-network protocols
• URL-style session names (see the sketch below):
 - bfed://owner_bn/native_session_name?pmtr=value&…
   • e.g., bfed://multicast.att.net/224.4.4.4:4444
 - Parameters provide session-specific information
   • e.g., sources=multiple, metric=bandwidth
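A minimal sketch of parsing such a session name into an owner BN, a native name, and a parameter map; the struct and function names here are illustrative assumptions, not the prototype's actual types:

```cpp
// Hypothetical sketch: parse bfed://owner_bn/native_session_name?pmtr=value&...
#include <iostream>
#include <map>
#include <string>

struct SessionName {
    std::string owner_bn;                       // e.g., "multicast.att.net"
    std::string native_name;                    // e.g., "224.4.4.4:4444"
    std::map<std::string, std::string> params;  // e.g., {"sources","multiple"}
};

bool ParseSessionName(const std::string& url, SessionName* out) {
    const std::string scheme = "bfed://";
    if (url.compare(0, scheme.size(), scheme) != 0) return false;
    std::string rest = url.substr(scheme.size());

    // Owner BN is everything up to the first '/'.
    size_t slash = rest.find('/');
    if (slash == std::string::npos) return false;
    out->owner_bn = rest.substr(0, slash);
    rest = rest.substr(slash + 1);

    // Split off the optional "?pmtr=value&..." list.
    size_t qmark = rest.find('?');
    out->native_name = rest.substr(0, qmark);
    if (qmark != std::string::npos) {
        std::string query = rest.substr(qmark + 1);
        size_t pos = 0;
        while (pos < query.size()) {
            size_t amp = query.find('&', pos);
            std::string pair = query.substr(pos, amp - pos);
            size_t eq = pair.find('=');
            if (eq != std::string::npos)
                out->params[pair.substr(0, eq)] = pair.substr(eq + 1);
            pos = (amp == std::string::npos) ? query.size() : amp + 1;
        }
    }
    return true;
}

int main() {
    SessionName s;
    if (ParseSessionName("bfed://multicast.att.net/224.4.4.4:4444"
                         "?sources=multiple&metric=bandwidth", &s)) {
        std::cout << "owner=" << s.owner_bn << " native=" << s.native_name
                  << " sources=" << s.params["sources"] << "\n";
    }
}
```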
Protocol Layers
I. Routing
 - Propagate reachability information
II. Tree-building
 - Handle session JOINs and LEAVEs
III. Data-forwarding
 - Construct transport channels for data packets
IV. NativeNet
 - Customize lower layers for specific BN
I. Routing Layer
• Session-agnostic:
 - Routing from BN to BN
 - For finer-grained routes: maintain routes to BN via all reachable BGs
• “Content-aware” routing (see the sketch below):
 - Maintain multiple routing tables
   • Real-time vs bulk-data
   • Single-source vs multi-source
   • Latency vs bandwidth
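One way to realize the multiple routing tables is to key each table by traffic class and optimization metric, so a real-time session's JOIN can take a different BN-to-BN path than a bulk-data one. A minimal sketch under that assumption; all type and member names are illustrative:

```cpp
// Hypothetical sketch of "content-aware" routing: one table per
// (traffic class, metric) pair, each mapping destination BN -> route.
#include <map>
#include <string>
#include <utility>
#include <vector>

enum class Metric { Latency, Bandwidth };
enum class Traffic { RealTime, BulkData };

struct Route {
    std::vector<std::string> bn_path;  // sequence of BNs to the destination
    std::string next_hop_bg;           // BG to forward the JOIN to
};

class RoutingLayer {
public:
    void Install(Traffic t, Metric m, const std::string& dest_bn, Route r) {
        tables_[{t, m}][dest_bn] = std::move(r);
    }
    // Look up a route in the table matching the session's requirements.
    const Route* Lookup(Traffic t, Metric m, const std::string& dest_bn) const {
        auto table = tables_.find({t, m});
        if (table == tables_.end()) return nullptr;
        auto route = table->second.find(dest_bn);
        return route == table->second.end() ? nullptr : &route->second;
    }
private:
    std::map<std::pair<Traffic, Metric>,
             std::map<std::string, Route>> tables_;
};
```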
II. Tree-building Layer
• One tree per session:
 - Reverse shortest path, rooted at owner BN
 - Single-source → uni-directional tree
 - Multi-source → bi-directional tree (see the sketch below)
• Two components:
 - Mediator
   • How does client send JOIN to its “access BN”?
 - SROUTEs
   • How does access BN pick best upstream node?
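A sketch of how the single- vs multi-source distinction could drive forwarding at a gateway, assuming each BG keeps one upstream parent and a set of downstream children per session; all names are illustrative:

```cpp
// Hypothetical per-session tree state at a BG. A single-source session
// forwards data only downstream (uni-directional tree); a multi-source
// session also forwards toward the parent (bi-directional tree).
#include <set>
#include <string>

struct SessionTree {
    bool multi_source = false;       // from sources=multiple in the name
    std::string parent_bg;           // upstream node toward the owner BN
    std::set<std::string> children;  // downstream BGs/mediators that JOINed

    // Compute where to forward a packet arriving from `from`.
    std::set<std::string> ForwardSet(const std::string& from) const {
        std::set<std::string> out = children;
        out.erase(from);  // never echo back to the sender
        if (multi_source && from != parent_bg && !parent_bg.empty())
            out.insert(parent_bg);  // bi-directional: push up the tree too
        return out;
    }
};
```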
Mediator
• Abstract interface to clients (see the sketch below)
 - Clients send JOINs to Mediator
 - Mediator forwards them on
 - Implemented in BN’s native fabric or integrated in BGs
• For example:
 - CDN: mediator is part of edge servers
 - IP multicast network: well-known multicast group
[Figure: a client in BN2 sends a JOIN to the Mediator, which forwards it across the Federation toward BN1.]
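A sketch of the Mediator's relay role, assuming it simply wraps a client's request into a Federation JOIN and hands it to the local Broadcast Gateway; the class and callback here are hypothetical, not the prototype's API:

```cpp
// Hypothetical Mediator: protocol-neutral relay of client JOINs into
// the Federation.
#include <functional>
#include <iostream>
#include <string>

class Mediator {
public:
    // `forward` hands a Federation JOIN to the local BG.
    explicit Mediator(std::function<void(const std::string&)> forward)
        : forward_(std::move(forward)) {}

    // Called when a client in the native network asks for a session.
    void OnClientJoin(const std::string& federation_session) {
        forward_("JOIN " + federation_session);
    }
private:
    std::function<void(const std::string&)> forward_;
};

int main() {
    Mediator m([](const std::string& msg) { std::cout << msg << "\n"; });
    m.OnClientJoin("bfed://multicast.att.net/224.4.4.4:4444");
}
```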
SROUTEs: Session-specific Routes
[Figure: BN1 with exit-BGs A and B; BN2 with BGs C and D and a Mediator. Two possible routes for connecting to the mediator: (1) the default best route to BN1 via BG C, and (2) an alternative route to BN1 via exit-BG A, discovered through an SROUTE request/response; a REDIRECT is sent toward the Mediator.]
• JOIN handling:
 - Client sends JOIN request to Mediator
 - Mediator forwards JOIN to its default local BG (D)
 - If default BG has no session-specific route, JOIN propagates up the default route toward owner BN1
 - BG sends SROUTE request to owner BN, which returns session-specific routes and costs from BN to BN
 - REDIRECT response is sent toward the Mediator
• All messages are soft-state (see the sketch below)
 - Distribution tree automatically adapts to route changes
• Pros and cons:
 - SROUTEs stored only along distribution tree path
 - Increased setup latency
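A sketch of the soft-state idea, assuming each JOIN installs or refreshes a child entry with a fixed lifetime; the 30-second lifetime and all names are assumptions. A branch whose JOINs stop arriving simply expires, which is what lets the tree adapt to route changes:

```cpp
// Hypothetical soft-state child table at a BG: JOINs (initial and
// refresh) extend a child's expiry; unrefreshed children time out.
#include <chrono>
#include <iterator>
#include <map>
#include <string>

using Clock = std::chrono::steady_clock;

class SoftStateChildren {
public:
    // Called for both initial JOINs and periodic refreshes.
    void OnJoin(const std::string& child) {
        expiry_[child] = Clock::now() + kJoinLifetime;
    }
    void OnLeave(const std::string& child) { expiry_.erase(child); }

    // Periodically drop children whose JOINs were not refreshed.
    void Expire() {
        auto now = Clock::now();
        for (auto it = expiry_.begin(); it != expiry_.end();)
            it = (it->second < now) ? expiry_.erase(it) : std::next(it);
    }
    bool Empty() const { return expiry_.empty(); }  // prune this branch

private:
    static constexpr std::chrono::seconds kJoinLifetime{30};  // assumption
    std::map<std::string, Clock::time_point> expiry_;
};
```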
III. Data Forwarding Layer
[Figure: a JOIN for bfed://BN1/CDN-URL travels hop-by-hop from the Mediator in BN2 (IP multicast) through an SSM network to BN1 (a CDN), where it becomes JOIN: CDN-URL. TRANSLATE messages flow back along the path: TRANSLATE: udp://IP:port on the unicast peering links, TRANSLATE: ssm://S,G:port inside the SSM network, and TRANSLATE: multicast://G:port inside BN2.]
• Hop-by-hop TRANSLATE messages establish data path (see the sketch below)
 - Map Federation session names into local network addresses
 - External peers use unicast; within a BN, use native broadcast
• Provides flexible data path allocation
 - E.g., cluster-based BGs assign different backend nodes for different sessions
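A sketch of the TRANSLATE step, assuming each BG records the upstream channel it learns from its peer and reports its own local channel downstream. The channel formats follow the figure; the addresses are placeholders and the method names are illustrative:

```cpp
// Hypothetical data-forwarding state at a BG: map Federation session
// names to upstream and local channel addresses.
#include <iostream>
#include <map>
#include <string>

class DataForwarder {
public:
    // Record the upstream channel learned from a peer's TRANSLATE.
    void OnTranslate(const std::string& session, const std::string& channel) {
        upstream_[session] = channel;
    }
    // Allocate our own local channel and report it downstream: external
    // peers get a unicast pipe; inside the BN we hand out the native
    // broadcast address.
    std::string TranslateFor(const std::string& session, bool external_peer) {
        std::string channel = external_peer
            ? "udp://192.0.2.1:5000"           // placeholder unicast address
            : "multicast://224.4.4.4:4444";    // placeholder native channel
        local_[session] = channel;
        return channel;
    }
private:
    std::map<std::string, std::string> upstream_, local_;
};

int main() {
    DataForwarder bg;
    bg.OnTranslate("bfed://BN1/CDN-URL", "udp://198.51.100.7:5000");
    std::cout << bg.TranslateFor("bfed://BN1/CDN-URL", /*external_peer=*/false)
              << "\n";
}
```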
IV. NativeNet Layer
• Customization API for each BG (see the sketch below):
 - allocate_channel
 - subscribe/refresh/unsubscribe
 - reclaim_channel
 - get_sroutes
 - send_data
 - recv_join/recv_leave
 - recv_data
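The operation names above are from the slide; below is a sketch that renders them as an abstract C++ interface. All signatures are assumptions, since the slide lists only the names:

```cpp
// Hypothetical rendering of the NativeNet customization API.
#include <functional>
#include <string>
#include <vector>

class NativeNet {
public:
    virtual ~NativeNet() = default;

    // Channel management inside the native broadcast network.
    virtual std::string allocate_channel(const std::string& session) = 0;
    virtual void reclaim_channel(const std::string& channel) = 0;

    // Membership in a native channel.
    virtual void subscribe(const std::string& channel) = 0;
    virtual void refresh(const std::string& channel) = 0;
    virtual void unsubscribe(const std::string& channel) = 0;

    // Session-specific routes known to the native network.
    virtual std::vector<std::string> get_sroutes(const std::string& session) = 0;

    // Data path in and out of the native network.
    virtual void send_data(const std::string& channel,
                           const std::string& payload) = 0;
    virtual void recv_data(
        std::function<void(const std::string&)> handler) = 0;

    // Upcalls for native clients joining/leaving through the Mediator.
    virtual void recv_join(std::function<void(const std::string&)> on_join) = 0;
    virtual void recv_leave(
        std::function<void(const std::string&)> on_leave) = 0;
};
```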
Status
• We have a prototype implementation
 - Linux/C++ user-level application
• NativeNet implementations for:
 - IP multicast, source-specific multicast, and HTTP-based CDN
 - Each is 400-600 lines of code
• Preliminary results:
 - Single BG can handle load on a 100 Mbps network; 4 BG nodes sufficient for 1 Gbps
Conclusion
• Fragmented broadcasting landscape
 - Many non-interoperable broadcast protocols
• Loosely-coupled Federation architecture
 - Internetwork of diverse broadcast technologies
 - Application-layer Broadcast Gateways
   • Explicit peering agreements
   • Overlay of unicast and broadcast connections
Open Questions
• Automated mediator discovery:
 - How do clients discover their “access BN”?
• Transport mismatch:
 - Multiple routing tables avoid problematic paths, e.g., real-time video via TCP-based BNs
 - What if the only path has a transport mismatch?
• Complex routing queries:
 - E.g., a combination of bandwidth and system load