Multicast Forwarding and Application State Scalability in the Internet
Tina Wong
Dissertation Seminar
Computer Science Division
University of California, Berkeley
October 16, 2000
1
Challenge
“… in the long run, the biggest issue facing multicast
deployment is likely to be the scalability of
multicast forwarding state as the number of
multicast groups increases.”
--Thaler and Handley 2000
The memory required to store multicast forwarding
entries at a router with 32 interfaces is 1024 TB
for IPv6, assuming 50% address space utilization
--Radoslavov, Govindan and Estrin 1999
2
Outline
• Introduction, background, motivation
• Multicast state scaling trends in the Internet
• Preference clustering protocol
• Application-driven tunable reliability
• Conclusions and future work
3
IP Multicast
[Figure: source S delivering packets to receivers R over a multicast tree]
• Efficient point-to-multipoint delivery mechanism
• Packets travel on common parts of the network only once
4
Multicast Routing
[Figure: DVMRP broadcast phase, where source S floods data toward all routers and receivers R]
• Per-source reverse shortest path tree
• Broadcast-and-prune
• MBone
5
Multicast Routing
[Figure: DVMRP prune phase, where routers with no downstream receivers send prunes back toward S]
• Per-source reverse shortest path tree
• Broadcast-and-prune
• MBone
6
Multicast Routing
[Figure: DVMRP forwarding phase, where data flows from S only along branches leading to receivers R]
• Per-source reverse shortest path tree
• Broadcast-and-prune
• MBone
7
Multicast Routing
• PIM-Dense Mode / Sparse Mode
– Unidirectional shared tree
– Explicit joins
– Core location a problem
• Core Based Trees (CBT)
– Bi-directional shared tree
– More efficient data paths
– Supported by few router vendors
8
Multicast Forwarding State
• Router maintains membership state to achieve forwarding
• State scales linearly with number of concurrent groups
• No natural aggregation (see the sketch below)
[Figure: router with incoming interface iif0 and outgoing interfaces oif0, oif1, oif2, holding the entry (S = 0.0.0.0/0, G = 224.0.1.2/32) → accept on iif0, forward on oif1, oif2]
• Number of concurrent multicast groups limited by router memory
• Heartbeat messages to maintain state incur processing costs
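To make the per-group state concrete, here is a minimal, hypothetical Python sketch of a multicast forwarding table keyed on (source, group). It mirrors the entry in the figure above; it is an illustration of why the state grows linearly and cannot be aggregated, not the data structure of any particular router.

```python
# Minimal sketch (not router code): per-group multicast forwarding state.
# Each concurrent group needs its own (source prefix, group) entry; entries
# cannot be merged the way unicast prefixes can, so memory grows linearly
# with the number of active groups.
from dataclasses import dataclass, field

@dataclass
class ForwardingEntry:
    iif: str                                  # expected incoming interface (RPF check)
    oifs: set = field(default_factory=set)    # interfaces to replicate onto

class MulticastFib:
    def __init__(self):
        # Keyed on (source prefix, group address): one entry per group/source,
        # which is exactly the linear, non-aggregatable state described above.
        self.entries = {}

    def add(self, src, group, iif, oifs):
        self.entries[(src, group)] = ForwardingEntry(iif, set(oifs))

    def forward(self, src, group, arrived_on):
        entry = self.entries.get((src, group)) or self.entries.get(("0.0.0.0/0", group))
        if entry is None or entry.iif != arrived_on:
            return []                          # no state or failed RPF check: drop
        return sorted(entry.oifs)              # replicate onto these interfaces

fib = MulticastFib()
fib.add("0.0.0.0/0", "224.0.1.2", iif="iif0", oifs={"oif1", "oif2"})
print(fib.forward("10.0.0.1", "224.0.1.2", arrived_on="iif0"))  # ['oif1', 'oif2']
```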
9
Motivation
Lots of simultaneously active multicast
groups on the Internet?
• Many small, group-based applications
– Few participants form a single multicast group
– E.g., Internet video conferencing, games, event notifications, etc.
• Few large-scale applications
– Many users form many multicast groups
– E.g., content delivery, stock quotes, DIS, etc.
10
Related Work
• Multicast state reduction
– Leaky and non-leaky state aggregation
– Tunneling in backbone (MPLS, DCM)
– Non-branching state elimination (DTM, REUNITE)
• Application-level multicast
– End System Multicast, YOID, Scattercast
11
Contributions
• Comprehensive analysis of multicast state
– Understand scaling trends in the Internet
– Predict future growth
– Estimate potential for reduction
– Apply to network provisioning, protocol and application design
• Mechanisms for network and end-host state scalability in large-scale applications
– Interest-based content delivery
– Application-driven loss recovery
12
Outline
• Introduction, background, motivation
• Multicast state scaling trends in the Internet
• Preference clustering protocol
• Application-driven tunable reliability
• Conclusions and future work
13
Questions: Scaling Trends
Much research and engineering effort has gone into making IP multicast widely deployed...
• How do multiplying peering agreements among
parallel backbone networks affect multicast state
scalability?
• How do rising subscriptions to individual
applications increase multicast state?
• What are the state scaling properties when more
and more applications use multicast?
14
Questions: State Concentration
An intuition: multicast state scalability is
most critical at “core” routers…
• How concentrated is multicast state at "core" routers?
• How much benefit from tunneling?
15
Questions: State Reduction
An intuition: delivery trees of sparse multicast groups tend to have a large number of non-branching routers...
[Figure: sparse delivery tree from source S to receivers R, with long non-branching paths]
• How prominent are non-branching routers?
• Are these routers stateful?
16
Basic Model
Local state
• Fraction of the concurrent multicast groups for which a router keeps state
True local state
• Local state counting only the groups for which the router actually performs multicast forwarding
Both metrics are independent of address space size and the number of concurrent groups.
[Figure: router with interfaces iif0, oif0, oif1, oif2]
Example: out of 5 concurrent groups, a router keeping state for 2 of them and forwarding for only 1 has (see the sketch below)
Local state = 2/5
True local state = 1/5
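A minimal sketch of how these two per-router metrics can be computed, given for each concurrent group the set of routers holding state and the subset actually forwarding data. The reading of "true local state" as forwarding-only state follows the definition above; the example data reproduces only the 2/5 and 1/5 figures from this slide.

```python
# Minimal sketch of the two per-router metrics. For each concurrent group we
# are given the routers that hold any state for it and the subset that
# actually forward data for it; each router's metric is the fraction of all
# concurrent groups falling into each category.
from collections import Counter

def state_metrics(groups):
    """groups: list of (stateful_routers, forwarding_routers) pairs of sets."""
    n_groups = len(groups)
    stateful, forwarding = Counter(), Counter()
    for stateful_routers, forwarding_routers in groups:
        stateful.update(stateful_routers)
        forwarding.update(forwarding_routers)
    local = {r: c / n_groups for r, c in stateful.items()}
    true_local = {r: forwarding.get(r, 0) / n_groups for r in stateful}
    return local, true_local

# Example matching the slide: 5 concurrent groups; router "A" holds state for
# 2 of them but forwards data for only 1, giving local state 2/5 and true 1/5.
groups = [({"A", "B"}, {"B"}), ({"A", "C"}, {"A", "C"}),
          ({"B"}, {"B"}), ({"C"}, {"C"}), ({"B", "C"}, {"C"})]
local, true_local = state_metrics(groups)
print(local["A"], true_local["A"])   # 0.4 0.2
```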
17
Methodology
• Simulations
– Extend the SGB (Stanford GraphBase) package
• Parameters
– Topology
– Session density
– Membership model
18
Topology
• 4 AS graphs from Nov97 to Jan00
– Connectivity among Internet autonomous systems
– Study multicast state at inter-domain level
– Over a 3-year timespan
• MBone graph from Feb99
– Study multicast state at intra-domain level
• Generated graphs
– TIERS
– Transit-stub
19
Session Density
• Graphs have different numbers of nodes, from 1000 to 6474
– Use session density instead of absolute size
– 0.1% to 0.9%, 1% to 9%, 10% to 90%
– E.g., a session with 0.1% density in AS-Jan00 (6500 nodes) involves 7 domains
– E.g., a session with 10% density in MBone (4200 nodes) involves 420 routers
20
Membership Taxonomy
Two classification axes: topological correlation within one group (no/yes), and subscription correlation across multiple groups (no/yes). Six membership models (see the sketch below):
• Neither correlation: (1) random
• Topological correlation only: (2) clusters, (3) affinity/disaffinity
• Subscription correlation only: (4) interest
• Both correlations: (5) distr interest, (6) layered interest
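The precise generative definitions of these membership models are given in the dissertation; as a simplified illustration only, the sketch below shows two stand-in generators, one with no correlation and one with topological correlation. The sampling rules (uniform choice vs. neighborhood-biased growth) are assumptions, not the thesis models.

```python
# Simplified illustration of two membership models from the taxonomy above.
# The real models are defined in the dissertation; the sampling rules here
# are stand-ins used only to show the distinction.
import random

def random_membership(nodes, density):
    """No correlation: each member drawn uniformly from all nodes."""
    k = max(1, int(len(nodes) * density))
    return set(random.sample(sorted(nodes), k))

def clustered_membership(graph, density, seed_node=None):
    """Topological correlation: grow the group outward from a seed node,
    so members end up near each other in the topology (BFS-style)."""
    k = max(1, int(len(graph) * density))
    seed = seed_node if seed_node is not None else random.choice(sorted(graph))
    members, frontier = set(), [seed]
    while frontier and len(members) < k:
        node = frontier.pop(0)
        if node in members:
            continue
        members.add(node)
        frontier.extend(graph[node])   # graph: mapping node -> neighbours
    return members
```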
21
Experiments
• For each experiment, fix topology, session density and membership model
– (1) Pick a set of nodes with these parameters
– (2) Build a shortest path tree rooted at a random node from this set
– Repeat (1) & (2) 1000 times
– Calculate local state and true local state on each node in the topology (see sketch below)
• All combinations of parameters used, yielding 945 experiments and results!
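The actual simulator was built on SGB; the following is a compact re-sketch of the experiment loop using networkx instead, with assumptions called out in the comments. In particular, the rule for which on-tree routers count toward true local state (branching routers and receivers) is my reading of the Basic Model slide, not the simulator's exact rule. The pick_members argument could be one of the membership sketches shown earlier.

```python
# Re-sketch of the experiment loop: pick members, build a shortest-path tree
# rooted at a random member, record per-node state, repeat, then normalise.
import random
from collections import defaultdict
import networkx as nx

def run_experiment(graph, density, pick_members, trials=1000):
    state_count = defaultdict(int)        # groups for which a node keeps state
    true_state_count = defaultdict(int)   # groups for which it truly forwards
    for _ in range(trials):
        members = pick_members(graph, density)
        root = random.choice(sorted(members))
        paths = nx.single_source_shortest_path(graph, root)
        tree = nx.DiGraph()
        for m in members:
            nx.add_path(tree, paths[m])   # union of root->member shortest paths
        for node in tree.nodes:
            state_count[node] += 1
            # Assumption: branching routers and member nodes count as
            # "truly forwarding"; relay-only routers do not.
            if tree.out_degree(node) > 1 or node in members:
                true_state_count[node] += 1
    # Normalise by the number of concurrent groups (= trials) to obtain the
    # per-node "local state" and "true local state" fractions.
    return ({n: c / trials for n, c in state_count.items()},
            {n: c / trials for n, c in true_state_count.items()})
```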
22
Answers: Scaling Trends 1
• How do multiplying peering agreements
among parallel backbone networks affect
multicast state scalability?
– More state at a handful of core routers
– Offset by reduced state in the majority of routers
24
Topological Properties
25
Hypothesis
• In a more connected network
– Trees have larger fanouts and shorter heights
– Only a few highly peered routers involved in
most concurrent multicast trees
26
Hypothesis
• In a less connected network
– Trees have smaller fanouts and taller heights
– Backbone routers share responsibility of
multicast forwarding -- “load balancing”?
27
Path Lengths
28
Node Degrees
[Plots: node degree distributions for AS-Nov97 and MBone]
29
Past and Future Scaling Trends
• Implication
– If the Internet continues to evolve as it has, multicast memory requirements at most border routers actually decline, all else being equal
• Evidence
– Peering has increased over the past 3 years
– Maximum domain degree grew from 605 to 1459, roughly 50% expansion each year
– Slight decrease in state for the majority of nodes
– Slight increase for the rest of the nodes
30
Answers: Scaling Trends 2
• How do rising subscriptions to individual applications affect multicast state?
– Follows a power law
• The fraction of stateful routers grows proportional to some constant power of the multicast group size (see sketch below)
– Exponents within each membership model similar for the Internet over the past 3 years
– Predictive of future state growth
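The power-law claim can be written as f(m) ≈ c·m^k, where m is the group size, f(m) is the fraction of routers holding state, and k is the membership-model-specific exponent. A minimal sketch of how such an exponent could be estimated from simulation output follows; no exponent values from the dissertation are reproduced here.

```python
# Minimal sketch: estimate the power-law exponent k in f(m) ~ c * m**k by a
# least-squares fit in log-log space, where m is group size and f(m) is the
# measured fraction of stateful routers. Inputs would come from simulation
# output such as run_experiment above.
import numpy as np

def fit_power_law(group_sizes, stateful_fraction):
    logm = np.log(np.asarray(group_sizes, dtype=float))
    logf = np.log(np.asarray(stateful_fraction, dtype=float))
    k, log_c = np.polyfit(logm, logf, 1)   # slope = exponent k, intercept = log c
    return k, np.exp(log_c)
```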
32
Answers: State Concentration
• How concentrated is multicast state at the
“core” routers?
– State concentration does not follow the "10/90" rule even when session density is 0.1%
– Application-driven membership significantly impacts state distribution and concentration
– Tunneling is useful for multicast applications with very sparse and spread-out membership
35
Answers: State Reduction
• How prominent are non-branching routers?
Are these routers stateful?
– Very prominent
– Up to 2 orders of magnitude reduction is
possible even at top 10% most stateful nodes
– Substantial even at 90% session density
– A promising approach (see the sketch below)
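Non-branching state elimination (the DTM/REUNITE-style idea cited in Related Work) can be illustrated on a delivery tree: a router with exactly one downstream branch and no local receiver merely relays packets and, in principle, need not keep per-group state. The rough sketch below only counts such candidate routers on a tree like those built by the experiment loop; the actual elimination mechanisms in DTM and REUNITE differ in detail.

```python
# Rough sketch: count on-tree routers that neither branch nor serve a local
# member. These relay-only routers are the candidates for non-branching
# state elimination; real protocols (DTM, REUNITE) are more involved.
import networkx as nx

def relay_only_routers(tree: nx.DiGraph, members: set) -> set:
    candidates = set()
    for node in tree.nodes:
        if node in members:
            continue                      # serves a local receiver: keep state
        if tree.out_degree(node) == 1:    # exactly one downstream branch
            candidates.add(node)          # pure relay: state could be elided
    return candidates
```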
36
Outline
• Introduction, background, motivation
• Multicast state scaling trends in the Internet
• Preference clustering protocol
• Application-driven tunable reliability
• Conclusions and future work
37
Large-scale Applications
• Large-scale applications: many receivers,
many sources, rich data types, UI
• Multicast uses one data stream to satisfy
potentially heterogeneous receivers
• Leads to preference heterogeneity
– Users differ in their interest in the application data
– E.g., content delivery, news dissemination, stock quotes, network games, DIS, etc.
38
Example: Stock Quotes Service
[Figure: www.StockCentral.com carries quotes for a long list of tickers (AABC, BACU, CABL, DAGR, EACO, FACO, ...). Each user watches a different subset: Amy watches INTC, DELL, CSCO, MSFT; Bob watches AAPL, AMZN, EWEB, MSFT, GABC, QCOM; Cathy watches PWBC, SISI, YHOO, ...]
39
Example: Network Games
A player's position in the virtual environment drives its preferences for entity updates
40
Preference Heterogeneity
• Assign each logical data stream a unique multicast address?
+ No superfluous data
– Multicast routing state scalability
– Multicast address allocation and scarcity
– End-host connection maintenance
• 100% reliability not necessary
– Different levels of reliability desired
– Helps reduce NACK implosion
41
The Clustering Concept
[Figure: spectrum from complete heterogeneity (many small groups: UNICAST) to complete similarity (MULTICAST), with CLUSTER in between]
CLUSTER: group approximately similar sources and receivers into like groups (see the sketch below)
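The clustering algorithm itself is specified in the dissertation; as a rough stand-in only, the sketch below greedily groups receivers whose preference sets exceed a Jaccard-similarity threshold. It captures the "approximately similar" idea but is not the actual protocol (which is online and adaptive). The example reuses the ticker sets from the stock-quotes figure.

```python
# Rough stand-in for preference clustering: merge a receiver into an existing
# cluster if its preference set is similar enough (Jaccard index), otherwise
# start a new cluster. Static and greedy, purely for illustration.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 1.0

def cluster_by_preference(prefs: dict, threshold: float = 0.5):
    """prefs maps receiver -> set of preferred data items (e.g. tickers)."""
    clusters = []   # each cluster: {"members": [...], "profile": set(...)}
    for receiver, wanted in prefs.items():
        best, best_sim = None, threshold
        for c in clusters:
            sim = jaccard(wanted, c["profile"])
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append({"members": [receiver], "profile": set(wanted)})
        else:
            best["members"].append(receiver)
            best["profile"] |= wanted     # cluster carries the union of interests
    return clusters

prefs = {"Amy": {"INTC", "DELL", "CSCO", "MSFT"},
         "Bob": {"AAPL", "AMZN", "EWEB", "MSFT", "GABC", "QCOM"},
         "Cathy": {"PWBC", "SISI", "YHOO"}}
for c in cluster_by_preference(prefs, threshold=0.2):
    print(sorted(c["members"]), sorted(c["profile"]))
```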
42
Preference Clustering Protocol
• Clustering algorithm
– On-line and adaptive to changes in preferences
– Customizable to different application and data types
• Signaling protocol
– Coordinates clustering within an application
– Scalable, fault-tolerant and reliable through decentralization, soft state and sampling
• API (see the sketch below)
• Detailed evaluation
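The slide lists an API without detail. Purely as a hypothetical illustration, an application-facing interface for such a clustering layer might look like the skeleton below; every name in it (PreferenceClusteringSession, join, update_preferences, on_regroup) is an invented placeholder, not the interface defined in the dissertation.

```python
# Hypothetical API sketch for a preference clustering layer. All names are
# invented placeholders; only the shape of the interface is illustrated.
from typing import Callable, Set

class PreferenceClusteringSession:
    def __init__(self, app_name: str, similarity: Callable[[Set, Set], float]):
        self.app_name = app_name
        self.similarity = similarity      # application-supplied, per "customizable"
        self.preferences: Set = set()
        self.regroup_callbacks = []

    def join(self, initial_preferences: Set) -> None:
        """Announce this receiver and its initial preferences to the session."""
        self.preferences = set(initial_preferences)

    def update_preferences(self, preferences: Set) -> None:
        """Report changed preferences; the protocol may move us to another cluster."""
        self.preferences = set(preferences)

    def on_regroup(self, callback: Callable[[str], None]) -> None:
        """Register a callback invoked with the new cluster's multicast group."""
        self.regroup_callbacks.append(callback)
```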
43
App-Level Tunable Reliability
• Consider application semantics in loss recovery decisions
– Meta-data to describe data content
– Temporal: statistics on update frequency
– Semantic: magnitude or importance of change
– Policy-driven by individual receivers (see the sketch below)
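To make "policy-driven by individual receivers" concrete, here is a small hypothetical sketch of a receiver deciding whether a lost update is worth a retransmission request, based on advertised temporal and semantic meta-data. The field names and thresholds are illustrative assumptions, not values from the dissertation.

```python
# Hypothetical receiver-side recovery policy: use sender-advertised meta-data
# (update frequency, magnitude of change) to decide whether a lost update is
# worth a NACK. Field names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class UpdateMetadata:
    stream_id: str
    updates_per_second: float    # temporal: how soon a newer value will arrive
    change_magnitude: float      # semantic: how important this update is

def should_request_repair(meta: UpdateMetadata, interest_level: float) -> bool:
    """Return True if this receiver should send a NACK for the lost update."""
    if meta.updates_per_second > 1.0:
        return False             # a fresher update will arrive shortly anyway
    # Recover only changes that matter to this receiver's stated interest.
    return meta.change_magnitude * interest_level >= 0.5

# Example: a slowly updated, significant change on a stream we care about.
meta = UpdateMetadata("MSFT", updates_per_second=0.1, change_magnitude=0.9)
print(should_request_repair(meta, interest_level=0.8))   # True
```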
44
Outline
• Introduction, background, motivation
• Multicast state scaling trends in the Internet
• Preference clustering protocol
• Application-driven tunable reliability
• Conclusions and future work
45
Conclusions
• Comprehensive study of multicast state scalability
– Scaling trends confirmed against the past 3 years of data
– State distribution and concentration
– Potential for reduction
• Mechanisms to address the problem for large-scale applications
– Customizable and adaptive preference clustering protocol
– Tunable reliable multicast protocol
46
Future Directions
• Compare and contrast IP multicast and application-level multicast using the same methodology
– Parameters: topology, session density, membership
– Applications: few-to-few, one-to-many
– Metrics: bandwidth, latency, complexity, etc.
• Placement of service agents in the Internet
– Spawning of new agents
– Coalescing based on topology, user population, network measurements
47