Internet Multicasting
Copyright M. Faloutsos
1
The Multicast Problem
GIVEN: Network
GIVEN: User Requests (and quality requirements)
ESTABLISH: Efficient distribution trees
U. of Toronto
Michalis Faloutsos
2
What is The Need for Multicasting?
Use bandwidth efficiently
[Figure: one source, three receivers; with unicast, the shared link carries 3 times the traffic]
• Replace many unicast connections with one multicast connection
• Duplicate packets only when necessary
Simply put:
• Send one packet per link instead of multiple
• Some links cannot support multiple flows of the same data
3
What Is Efficient?
Scalable
• Multicast state
• Number of control messages
Use network resources efficiently
Meet end-to-end performance requirements
• E2e Delay: from source to destinations
4
Some Theory on Multicast Routing
5
Multicast and the Steiner Tree
Steiner tree problem:
• Given an undirected weighted graph G(V,E), and a
set of nodes S, find the minimum cost tree that spans
the nodes in S
It is similar to a multicast problem which is
• Centralized: (link state)
• Static: we know all group members ahead of time
• Undirected graph: same edge weight either direction
6
A Steiner Tree Example
[Figure: a weighted undirected graph with edge weights from 1 to 10; the group consists of the green and red nodes]
Which is the Steiner tree for the green and red nodes?
7
A Steiner Tree Example: Solution
[Figure: the same weighted graph with the Steiner tree highlighted]
Shortest Path tree ≠ Steiner Tree
14 + 13 ≠ 16
8
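For a graph this small, the exact Steiner tree can be found by brute force. A minimal Python sketch (function names and the adjacency-dict encoding are illustrative assumptions, not from the slides): try every subset of non-member nodes and keep the cheapest spanning tree over the chosen nodes that connects the whole group.

```python
from itertools import combinations

def mst_on(graph, nodes):
    """Kruskal MST restricted to `nodes`; (None, None) if disconnected."""
    elist = sorted((w, u, v) for u in nodes for v, w in graph[u].items()
                   if v in nodes and u < v)
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    cost, edges = 0, []
    for w, u, v in elist:
        if find(u) != find(v):
            parent[find(u)] = find(v)
            cost += w
            edges.append((u, v))
    if len(edges) != len(nodes) - 1:
        return None, None
    return cost, edges

def steiner_brute_force(graph, terminals):
    """Exhaustive search, feasible only for tiny graphs: for every
    subset of optional (Steiner) nodes, the cheapest tree spanning
    exactly terminals + subset is the MST of the induced subgraph."""
    optional = [v for v in graph if v not in terminals]
    best_cost, best_edges = float("inf"), None
    for k in range(len(optional) + 1):
        for extra in combinations(optional, k):
            cost, edges = mst_on(graph, set(terminals) | set(extra))
            if cost is not None and cost < best_cost:
                best_cost, best_edges = cost, edges
    return best_cost, best_edges
```

For example, with terminals a and c and an intermediate node b (a-b and b-c of weight 1, a-c of weight 3), the brute force correctly routes through b at cost 2 instead of using the direct edge of cost 3.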
Multicast Routing Algorithms
[Figure: trees from the source to the destinations built by Shortest Paths, Greedy, and the Optimal algorithm]
NP-complete problem [Dreyfus72]
Approximation algorithms have practical interest
9
The Greedy Algorithm
Start from the source
(or pick one group member)
While not all members are connected:
• Find the destination closest to the partial tree
• Add that destination and its connecting path
Note: tree-node distance = min dist(any-tree-node, node)
10
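The greedy loop above can be sketched in Python. This is a minimal sketch; the function name and the adjacency-dict graph encoding are illustrative assumptions, and the tree-node distance is computed with a multi-source Dijkstra from the partial tree.

```python
import heapq

def greedy_multicast_tree(graph, source, members):
    """Greedy heuristic: repeatedly attach the member closest to the
    partial tree. `graph` is {node: {neighbor: weight}} (undirected)."""
    def dijkstra_from(starts):
        # shortest distances from the partial tree (multi-source)
        dist = {s: 0 for s in starts}
        prev = {}
        pq = [(0, s) for s in starts]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph[u].items():
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    prev[v] = u
                    heapq.heappush(pq, (d + w, v))
        return dist, prev

    tree = {source}
    edges = []
    remaining = set(members) - tree
    while remaining:
        dist, prev = dijkstra_from(tree)
        # destination closest to the partial tree
        nxt = min(remaining, key=lambda m: dist.get(m, float("inf")))
        # add the destination and its connecting path
        node = nxt
        while node not in tree:
            edges.append((prev[node], node))
            tree.add(node)
            node = prev[node]
        remaining.discard(nxt)
    return edges
```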
The KMB Algorithm
A Greedy Variation [Kou, Markowsky, Berman 81]
Greedy: connect to nearest point on the tree
KMB: connect to nearest group member
Seemingly: KMB is less efficient than Greedy
Surprisingly: KMB is just as efficient!
11
How KMB Works
[Figure: the graph G with its group members; the complete distance graph G' over the members with its MST T'; and the translated tree T back in G]
Create complete distance graph of group members G’
Calculate Minimum Spanning Tree, T’, in G’
Translate T’ to a tree T in G
12
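The three KMB steps can be sketched in Python (a hedged sketch: names are illustrative, and the graph is assumed connected and encoded as an adjacency dict of edge weights):

```python
import heapq
from itertools import combinations

def kmb_tree(graph, members):
    """KMB sketch: (1) build the complete distance graph G' over the
    group members, (2) take an MST T' of G', (3) translate each T'
    edge back to a shortest path in G."""
    def shortest_paths(src):
        dist, prev, pq = {src: 0}, {}, [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph[u].items():
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(pq, (d + w, v))
        return dist, prev

    paths = {m: shortest_paths(m) for m in members}
    # Step 1: complete distance graph G' (edge = member-to-member distance)
    gprime = sorted((paths[u][0][v], u, v)
                    for u, v in combinations(members, 2))
    # Step 2: Kruskal MST on G'
    parent = {m: m for m in members}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree_edges = set()
    for w, u, v in gprime:
        if find(u) != find(v):
            parent[find(u)] = find(v)
            # Step 3: translate the G' edge to its path in G
            node = v
            while node != u:
                tree_edges.add((paths[u][1][node], node))
                node = paths[u][1][node]
    return tree_edges
```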
The Performance of Algorithms
Competitive Ratio of an Algorithm:
• r = max( T_algorithm / OPT )
• Over all possible problem instances (worst case)
Intuitively,
• Given any problem instance, the algorithm's tree will never cost more than r × OPT
Express ratio as a function of problem size
• N number of nodes in graph
• M number of group members (not counting source)
13
Competitive Ratio of Shortest Paths
Never worse than M:
• T ≤ M · OPT
Let Dmax be the maximum distance between the source and the destinations:
• OPT ≥ Dmax
• T ≤ M · Dmax
Hence T ≤ M · Dmax ≤ M · OPT
14
KMB Has Good Performance
We can prove that the cost of the KMB tree is less than 2 times
the cost of the optimal Steiner tree
KMB < 2 OPT
Where does this 2 come from?
15
Competitive Ratio of KMB
[Figure: the optimal tree over members 1-4 and a walk around it; D(1,2) is the distance between members 1 and 2]
Take the optimal Steiner tree, with cost OPT
There is a Walk of the OPT tree (traversing each edge twice):
• Walk = 2 OPT
• The Walk defines a sequence of members v_i
Split the Walk into segments W(v_i, v_i+1)
Intuitively, KMB will create a tree of cost less than the Walk
16
Competitive Ratio of KMB (cont.)
[Figure: the walk segments W(v_i, v_i+1) and the corresponding distances D(v_i, v_i+1) in the complete distance graph of KMB]
Intuitively, T_KMB ≤ Walk:
• Split the Walk into segments W(v_i, v_i+1)
• The W() segments form a tree T_w < 2 OPT
• T_w corresponds to a tree T_w' in the complete distance graph
- T_w' ≤ T_w, since D(v_i, v_i+1) ≤ W(v_i, v_i+1) (D() is a shortest-path distance)
• T_KMB is the MST in that graph, so T_KMB ≤ T_w'
Therefore T_KMB ≤ T_w < Walk = 2 OPT
17
Multicasting on Directed Graphs
[Figure: two opposite directed edges between a pair of nodes, with weights w1 and w2]
Real Networks: Directed
Maximum Asymmetry:
• Ratio of the weights of two opposite edges
• A = max( w1 / w2 )
• Over all pairs of nodes
Undirected graphs: A = 1
Goal: Minimum-Cost Directed Tree
Joint work: R. Pankaj, K.C. Sevcik
18
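The asymmetry definition translates directly to code. A hedged sketch, assuming the directed weights are given as a dict keyed by (u, v) pairs (an illustrative encoding, not from the slides):

```python
def max_asymmetry(weights):
    """Maximum asymmetry A = max over node pairs of w(u,v) / w(v,u).
    `weights` maps directed edges (u, v) to positive weights.
    For undirected (symmetric) graphs this returns 1."""
    ratios = [weights[(u, v)] / weights[(v, u)]
              for (u, v) in weights if (v, u) in weights]
    return max(ratios) if ratios else 1.0
```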
Multicast Bounds
Bounds for Greedy and Sh. Paths w.r.t.:
• Maximum asymmetry: A
• Number of join requests: M
• Competitive Ratio of tree cost: Algorithm/Optimal
Problem Variations:
• Static: members are known ahead of time
• Join-Only: members join randomly
• Join-Leave: members join and leave randomly
19
Performance On Undirected Graphs
Shortest Path: tightly bounded by M
• Where M is the number of destinations
Greedy and KMB:
• Static: bounded by 2
• Join-only: log M
• Join-leave: 2^M (or N)
20
Overview of Theoretical Results
Short. Paths:
• Tight bound in all variations
Greedy:
• Static: Tight bound (2 A)
• Join-Only: Near-optimal online bound (A log M)
- Greedy is no more than log M from any online algorithm
• Join-Leave: Large lower bound (A or 2^M)
[Faloutsos et al SIROCCO’97]
21
Our Results in Detail
                 Static       Join-Only                    Join-Leave
Sh. Paths        M (tight)    M (tight)                    M (tight) *
Greedy (upper)   min(A, M)    min(M, A log(M))
Greedy (lower)   min(A, M)    min(M, A log(M)/log(A+1))    max(A, 2^M)

Join-Only:
• The lower bound holds for any online algorithm
• Greedy is at most log(M) from any online algorithm
22
A Graphical Overview
[Figure: bar chart of competitive ratios for Greedy and SP over the Static, Join-Only, and Join-Leave variations; SP stays at M, while Greedy is at 2A, A log M, and A or 2^M respectively]
Static and Join-only:
• Greedy is good (near-optimal)
Join-leave:
• Greedy is bad in the worst case
Why not combine them into an adaptive algorithm?
23
Desirable Protocol Properties
Scalable
Easy to implement and deploy
Provide high QoS
Use network resources efficiently
• Create efficient trees
• Use little bandwidth
• Minimize required state
Adaptive - Flexible
25
Internet Multicasting: Protocols
26
Breaking the Rules
[Figure: a multicast tree whose routers keep per-group forwarding state for groups G1 and G2]
Internet: no state per flow
Multicasting has state per multicast group!
27
What Are The Issues?
Routing
• Intra-domain
• Inter-domain
Reliable multicasting
Secure multicasting
Address space allocation
Multicast state aggregation
Modeling and Simulating: realistic models
28
What Is The Real Issue
ISPs do not want/care to deploy multicast
Chicken and egg problem
• Customers do not ask for multicasting,
because there are no multicast applications,
because no one builds multicast applications,
because there is no support for multicasting
Multicast researchers:
• Be careful in what topic you choose
29
MBone: The multicast backbone
Some routers are multicast enabled
These routers connect to each other with
virtual edges (encapsulation)
Multicast protocols and applications can be
deployed on the MBone
30
Multicast Routing
Router Assisted: routers duplicate packets
• MOSPF: Link-state multicast routing
• DVMRP: flood and prune
• PIM/CBT: core based, shared tree
• Source Specific Multicast: each source creates a tree
• QoS aware protocols: QoSMIC, QMRP
Application-level
• Only end-points (members) are aware of the tree
• Packets are duplicated at the members only!
31
MOSPF: Link State multicasting
Link-state: advertise all your links
Each router advertises the groups for which it has local receivers
Each router executes the same algorithm:
• They establish a tree accordingly
Scalability issues:
• Membership change: new advertisements are sent
IETF Standard: RFC 1584
32
The PIM Protocol
[Figure: a new member joins toward the core; the source sends to the group through the core]
Most deployed protocol
Based on a Core (Rendezvous Point)
Based on Shortest Path Routing
Issue: selecting the core
Not adaptive
33
Border Gateway Multicast Protocol
[Figure: a new member joins the source over the reverse shortest path]
Designed to be interdomain
Border Gateway Multicast Protocol [Thaler 97]
• Reverse Shortest Paths Routing
• Domains "own" multicast addresses
- Allocating addresses needs care: MASC
34
Yet Another Multicast Protocol: YAM
[Figure: a new member floods its neighborhood to find the tree]
Yet Another Multicast protocol [Carlberg97]
Multiple paths
Use of static information
Non-scalable:
• "flood" of control messages
35
Our Protocol: QoSMIC
QoS Multicast Internet protoCol [SIGCOMM’98]
Supports QoS
Uses dynamic information
Scalable
36
QoSMIC: The Overview
The protocol has three phases:
1. Search: for Candidates
2. Bid: Candidates “Bid”
3. Select: New chooses path
• using dynamic QoS info.
[Figure: a new member searching for candidate routers to connect to]
Joint work with A. Banerjea, R. Pankaj
37
QoSMIC: Search and Bid
[Figure: the Manager coordinates the two searches; candidate routers send Bid messages to the new member]
I. Local Search (costly)
II. Multicast Tree Search
Bid messages collect dynamic information
38
QoSMIC: Flexible-Adaptable
[Figure: the Manager steers the search]
In Searching:
• Local and/or Multicast Tree Search
In Routing:
• Greedy and/or Shortest Paths
At run time, according to application needs
39
Some Simulations
Compare routing efficiency with BGMP
• Tree cost = Sum of link-weights
Compare message complexity with YAM
• Number of control messages of search phase
40
The Simulation Set-up
Build a simulator
Real Graph: Major Internet routers [Casner93]
Weighted asymmetric graph
Simulation Precision:
• 95%-confidence interval <10% of shown values
41
The Simulated Algorithms
Offline: greedy for reference
• At any snapshot, run greedy from scratch
SP: Shortest Paths
BGMP: Reverse Shortest Paths
QoSMIC
42
Tree Cost Vs Group Size
[Plot: tree cost vs. group size (0.1 to 1.0 of the nodes) for SP, Offline, QoSMIC, and BGMP; asymmetry = 10; costs range roughly from 0.8 to 1.8]
QoSMIC up to 40% better than BGMP
43
Tree Cost Vs Asymmetry
[Plot: tree cost (0.8 to 1.7) vs. asymmetry (1 to 46) for SP, Offline, QoSMIC, and BGMP]
QoSMIC up to 60% better than BGMP
44
Joining Distance Distribution
[Plot: fraction of paths (0 to 0.35) vs. number of hops (1 to 11); group density 1-15%]
Even a small Local Search is good (2 hops cover 55% of the joins)
A large Local Search (8 hops) is needed to cover ALL cases
45
Search Message Complexity
[Plot: messages per join (0 to 2000) vs. average degree (2.5 to 5) for YAM and QoSMIC; both curves are over-estimates]
YAM has to have a large Local Search
QoSMIC keeps the Local Search small, hence it is scalable
46
QoSMIC: The Claim
<< QoSMIC routes in a QoS-sensitive way,
and is more efficient even than the
non QoS-sensitive protocols >>
47
Why Isn’t QoSMIC the Standard?
Implementation complexity
Scalability and message complexity
Needs industry and IETF support
Timeliness:
• Now people look at other issues
48
Generalizing QoSMIC
QoSMIC as a set of searching procedures
• Local flooding: receiver driven
• Indirect search: through designated router
A mechanism to search for points of interest
• Servers, caches etc
Possible applications
• Web server location
• Ad-hoc networks: searching for a destination
49
Multicasting: The Resurrection
Can we do multicasting without the need for router support?
50
Application Level Multicasting
Push the “complication” to
the end-users
Several schemes (NARADA,
YOID)
Group members keep track
of each other
• A new node joins at other
members
• Network is unaware of
multicast
Trade-off:
• We need to keep track of
members
• We may waste bandwidth
(early duplication)
51
Application Layer Multicast:
Performance
A-L multicast implements the KMB version of
the greedy algorithm!
Thus routing performance is not that bad!
• In theory: cost of an edge is fixed
The real concern:
• We overload the last-mile of the network
E.g., imagine 3 copies of a video being uploaded
52
Application Layer Multicasting:
Pros and Cons
Pros:
Easy to deploy
Adaptive to change
Good routing
Cons:
Need to track group members
Member-changes alter the tree
Bad bandwidth utilization
More message overhead
53
Application Layer Multicasting:
Applications
Ad hoc networks:
• Paths are changing rapidly, faster than members
Nodes need to keep dynamic routing state
anyway
54
Aggregating Multicast State
55
Large Routing State is a Problem
Internet scalability: based on stateless
routing
IP multicast: state on each tree router per
group (group = connection)
Routing State: proportional to # groups
56
Previous Attempts to Reduce State
General idea:
• First, allocate resources: addresses and trees
• Then, try to aggregate state on routers
- IF addresses are consecutive AND the topology matches
• MASC/BGMP: Thaler, Estrin, Handley et al., SIGCOMM
• Thaler, Handley, INFOCOM'00
• Radoslavov, Estrin, Govindan, USC-TR 99-697
57
Example of Traditional Aggregation
[Figure: a tree with numbered links 1-5; G1 and G2 flow on links 1, 3, 5, while link 4 carries only G1]
The router finds that groups G1, G2 have:
• Consecutive addresses
• Same outgoing links
It aggregates the groups
Leaky aggregation: not all links match exactly
• Keep one entry for G1, G2: links 1, 3, 5
• Leaky: send G2 data to link 4!
58
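The aggregation test the router performs can be sketched as follows. This is a hedged sketch: the entry format (`addr`, `links`) is an assumption for illustration, not the protocol's actual data structure, and the shared entry is taken to be the union of the per-group outgoing links (which is what makes the aggregation leaky).

```python
def can_aggregate(entries):
    """Return a shared forwarding entry for `entries` if the groups
    have consecutive addresses, else None. The shared entry uses the
    UNION of the outgoing links; it is 'leaky' when the per-group
    link sets differ (some traffic goes to unwanted links)."""
    addrs = sorted(e["addr"] for e in entries)
    consecutive = all(b == a + 1 for a, b in zip(addrs, addrs[1:]))
    if not consecutive:
        return None
    union = set().union(*(e["links"] for e in entries))
    leaky = any(set(e["links"]) != union for e in entries)
    return {"links": union, "leaky": leaky}
```

In the slide's example, G1 (links 1, 3, 4, 5) and G2 (links 1, 3, 5) aggregate into one entry, and the aggregation is leaky: G2 data is also sent to link 4.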
Our Idea: Forced Aggregation
Force groups to share a distribution tree
• First aggregate
• Then route
Trade-off bandwidth for routing state
Limitation: know the group members
Target: intra-domain use
• Joint work: A. Fei, J. Cui, and M. Gerla, UCLA
59
Benefits of aggregation
State reduction
• Less memory
• Faster look-up time
Traffic engineering: (not discussed here)
• Manage and Control traffic
• Support Quality of Service
60
Roadmap
Introduction
Our approach in more detail
Implementation issues
Analytical results
Simulations results
Conclusions
61
A Simple Version of the Problem
Given:
• a network
• a set of trees (trunks)
• a new multicast group
Find:
• the trunk that covers the group with minimum bandwidth overhead
• or create a new trunk if the bandwidth overhead is above a threshold
[Figure: a trunk covering the group; white non-member nodes also receive packets = bandwidth overhead; a group-specific block stops unwanted traffic]
62
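The matching step can be sketched as a simple scan over the existing trunks. A hedged sketch: the trunk representation and the overhead measure (fraction of tree nodes that are not group members) are illustrative assumptions, not the paper's exact cost model.

```python
def pick_trunk(trunks, group, threshold):
    """Choose the existing trunk that covers all group members with
    minimum bandwidth overhead; return None (create a new trunk) when
    no covering trunk stays under the overhead threshold."""
    best, best_overhead = None, float("inf")
    for trunk in trunks:
        nodes = trunk["nodes"]
        if not group <= nodes:
            continue  # the trunk must cover every group member
        overhead = len(nodes - group) / len(nodes)
        if overhead < best_overhead:
            best, best_overhead = trunk, overhead
    if best is None or best_overhead > threshold:
        return None  # bandwidth overhead above the threshold
    return best
```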
Subtle Points and Variations
We need to know the group members at least
approximately
Off-line versus on-line version of the problem
• Know all groups ahead of time
Perfect versus leaky match:
• Trunk and group have the same leaves
Group specific blocks in the trunk
• Block unwanted traffic on a per source/group basis
63
Quick Reconfiguration - Fault
Tolerance
Quick error recovery
Imagine having multiple trunks that cover the same nodes
Initially: the red group uses the yellow tree
If the yellow tree fails, we switch to the green tree
Map the group's packets to a trunk:
• Change "labels" at the end points
64
Implementation Issues
We need controlled and centralised
operation
• Some nodes should keep track of trunks and groups
How can we force a group on a tree?
• IP Encapsulation
• Multi Protocol Label Switching (MPLS)
• Home-made solutions are being considered
65
Aggregation Performance Metrics
Average aggregation degree:
• #groups / #trunks
Routing state reduction ratio, r_as:
• 1 – N_agg /N_non-agg
Non-reducible state:
• the state that leaf nodes maintain
Reducible state reduction ratio, r_rs:
• As r_as without considering the non-reducible state
66
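The three metrics can be computed together. A hedged sketch: the parameter names are illustrative, and the reducible-state ratio is modeled by simply subtracting the non-reducible (leaf) state from both counts, which is one plausible reading of the slide's definition.

```python
def aggregation_metrics(n_groups, n_trunks, state_agg,
                        state_non_agg, state_leaf):
    """Metrics from the slide:
    - average aggregation degree = #groups / #trunks
    - routing state reduction ratio r_as = 1 - N_agg / N_non_agg
    - reducible state reduction ratio r_rs: like r_as, but ignoring
      the non-reducible state that leaf nodes keep anyway."""
    degree = n_groups / n_trunks
    r_as = 1 - state_agg / state_non_agg
    r_rs = 1 - (state_agg - state_leaf) / (state_non_agg - state_leaf)
    return degree, r_as, r_rs
```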
Preliminary Simulation Results
Goal: examine trade-off
• aggregation vs bandwidth overhead
Used Abilene network topology
Random selection of group members
• No assumption of locality
We did not consider group-specific blocking
Group size: 2-10 nodes uniformly
67
How Many Groups per Tree?
[Plot: average aggregation degree (about 5 to 45) vs. number of groups, for bandwidth-overhead thresholds bo = 0.1, 0.2, 0.3]
Average aggregation degree vs. number of groups
13 groups per trunk for 10% bandwidth overhead
68
How Much State Aggregation?
[Plot: reducible state reduction ratio (0.1 to 1) vs. number of groups, for bo = 0.1, 0.2, 0.3]
Reducible state reduction ratio vs. #groups
Up to 70% for 10% bandwidth overhead
69
Summary
A novel idea, “forced” aggregate multicast:
• First aggregate, then route
We examined the trade-off:
• Aggregation versus bandwidth overhead
Initial simulations are promising
70
Future Work
Analytical bounds
Simulation with large ISP topologies
Consider user locality
• Find a realistic model for membership
• Examine its effect on aggregation (positive)
Consider group-specific blocks
Evaluate different implementations
71
End of Multicast
72
Total State Reduction
[Plot: total state reduction (0 to 0.2) vs. number of groups, for bo = 0.1, 0.2, 0.3]
State reduction vs. number of groups
14-19% for many groups
73
Aggregate Multicast for Traffic
Engineering
ISP network
Customer wants QoS
guarantees through a
network
Allocate a tree +
resources
Customer can utilize
the resource as she
wants
74
QoSMIC: More QoS support
[Figure: a group with two sources; a member connects directly to source-1 and sends "stop s-1" so it no longer receives source-1's data over the shared tree]
Multiple sources per group
Connect directly to the source to improve end-to-end QoS
75
Then, why is QoSMIC not deployed?
Need for industrial support
Complicated implementation
Timeliness: too late
The QoS taint: some people are “against” QoS
76
QoSMIC II: The return of the living
dead
QoSMIC as a collection of procedures for
searching
• Flood locally
• Indirection globally
Possible applications
• Wireless ad hoc networks
• Peer-to-peer networks
77