
Resilient Peer-to-Peer
Streaming
Paper by:
Venkata N. Padmanabhan
Helen J. Wang
Philip A. Chou
Discussion Leader:
Manfred Georg
Presented by:
Christoph Jechlitschek
The Problem
Distribution of live streaming media to a
potentially large and highly dynamic
population of hosts
Motivation
 Flash crowds
Often due to an event of widespread interest …
… but not always (e.g. the Webcast of a birthday party)
Can affect relatively obscure sites (e.g. www.cricket.org)
 Site becomes unreachable precisely when it is popular!
 Streaming server can quickly be overwhelmed
Network bandwidth is the bottleneck
Other solutions
Content distribution networks
In-house server farms
Increase bandwidth
Peer-to-Peer networks
IP multicast
Why other solutions do not work
Not cost effective
Too expensive
Do not scale
Force user to dedicate bandwidth
Cannot handle high churn rates
Not widely supported
CoopNet
Cooperative Networking
Clients help the server distribute content
In return, the overall quality of the content increases
Places only minimal demands on peers
Challenges
Unreliable peers
can disconnect/crash without notice
Constrained and asymmetric bandwidth
Last hop is often bottleneck
Median broadband bandwidth: 900 Kbps downstream / 212 Kbps upstream
Congestion due to competing applications
Reluctant users
Some ISPs charge based on usage
Design Decisions
P2P for scalability
Peers forward data only while tuned in
Peers upload no more than they download
Redundancy in network paths
Redundancy in data
Redundancy in network paths
A single distribution tree is vulnerable to node failure
Create multiple distribution trees
Split the data and send it, with or without redundancy, over multiple trees
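As an illustration of the striping idea, descriptions can be spread over the trees round-robin, so losing any one tree loses only a fraction of the descriptions. The mapping below is an assumption for illustration, not taken from the slides:

```python
# Sketch: stripe MDC descriptions across multiple distribution trees.
# Assumption (illustrative): description d is carried by tree d mod T.

def stripe(num_descriptions, num_trees):
    """Return {tree_index: [description indices carried by that tree]}."""
    assignment = {t: [] for t in range(num_trees)}
    for d in range(num_descriptions):
        assignment[d % num_trees].append(d)
    return assignment

# With 16 descriptions over 4 trees, each tree carries 4 descriptions,
# so the failure of one tree removes only a quarter of the stream.
print(stripe(16, 4))
```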
Example – Before…
Example – After…
Tree Construction
Nodes inform the server when they join
Also send delay coordinates
Server constructs and repairs the tree
Nodes report losses to server
Aggregate reports to avoid overloading server
Design Decisions
 Short trees
Minimizes chance of disruption
 Tree diversity vs. efficiency
Diversity: minimizes chance of disruption
Efficiency: matches underlying network topology
 Quick join and leave
Number of round-trips should be small
 Scalability
Preferably an algorithm that uses O(1) round-trips
Tree Construction algorithms
Randomized
Search the tree top-down for a node with enough capacity and appoint it as the parent of the new node
Alternatively, also search 1 or 2 levels below and make a better-informed choice
Deterministic
Each node is fertile in only one tree and sterile
in all other trees
Allows shorter and more diverse trees
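A minimal sketch of the randomized top-down join described above (the Node fields and the capacity model are assumptions for illustration; it presumes the walk eventually reaches a node with spare capacity):

```python
import random

# Sketch of the randomized join: walk down from the root, picking a
# random child at each level, until a node with spare forwarding
# capacity is found; that node becomes the newcomer's parent.

class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # max number of children (assumed model)
        self.children = []

def find_parent(root, rng=random):
    node = root
    while len(node.children) >= node.capacity:
        node = rng.choice(node.children)  # descend one level at random
    return node

def join(root, new_node):
    parent = find_parent(root)
    parent.children.append(new_node)
    return parent
```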
Random vs. Deterministic
Repairing a tree
If a peer leaves, it stops forwarding packets to its children
Peers do not need to notify the root
The root has to find and replace departed nodes
A 1-second repair interval gives good results
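The server-driven repair pass might be sketched as follows. This is assumed logic for illustration, not the paper's exact procedure: one pass drops departed nodes and re-attaches their orphaned children to any node with spare capacity, simplifying by assuming the orphans themselves are still alive.

```python
# Sketch of one repair pass run by the root every repair interval.

class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # max children (assumed model)
        self.children = []

def first_fit(root):
    """Find any node with spare capacity (breadth-first)."""
    queue = [root]
    while queue:
        node = queue.pop(0)
        if len(node.children) < node.capacity:
            return node
        queue.extend(node.children)
    return None  # tree is full (sketch assumes this does not happen)

def repair(root, departed):
    """Drop departed nodes, then re-attach their orphaned children."""
    orphans, stack = [], [root]
    while stack:
        node = stack.pop()
        alive = [c for c in node.children if c not in departed]
        orphans.extend(gc for c in node.children
                       if c in departed for gc in c.children)
        node.children = alive
        stack.extend(alive)
    for orphan in orphans:
        first_fit(root).children.append(orphan)
```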
Redundancy in data
Multiple Description Coding
An old idea, dating back to the 1970s
Voice-splitting work at Bell Labs
No ordering of the descriptions
Any subset of descriptions is decodable
The more descriptions received, the better the image quality
Increases video stream size
Multiple Description Coding
Order bits by their importance
Split the bits in the range [Ri-1, Ri) into i blocks
Send 1 block from each range
If there are no more blocks to send, send error correction instead
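A rough sketch of this layered packetization (the ranges and the error-correction step are simplified; repeating blocks stands in for the real FEC applied to the shallow layers):

```python
# Sketch: layer i (1-based, most-important first) is split into i blocks,
# and every description carries one block per layer. Layer 1 therefore
# appears in every description, while deeper, less important layers are
# spread across more descriptions.

def build_descriptions(layers, num_descriptions):
    """layers: byte strings ordered most- to least-important.
    Returns one list of blocks per description."""
    descriptions = [[] for _ in range(num_descriptions)]
    for i, layer in enumerate(layers, start=1):
        blocks = [layer[j::i] for j in range(i)]   # i interleaved blocks
        for d in range(num_descriptions):
            # block choice d % i repeats blocks across descriptions;
            # this repetition stands in for real FEC coding
            descriptions[d].append(blocks[d % i])
    return descriptions
```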
Scalable Client Feedback
Individual feedback from each client could overwhelm the server
Instead, each peer reports to its parent
The parent periodically combines that report with its own and passes the result up to its parent
Not all trees have to carry feedback
Feedback is used to increase or decrease
redundancy in data
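The in-tree aggregation step can be sketched as follows (the report fields are assumed for illustration; the slides do not specify a report format):

```python
# Sketch: each node merges its children's loss reports with its own
# before forwarding a single combined report to its parent.

def aggregate(own_report, child_reports):
    """Each report: {'nodes': n, 'packets_lost': l, 'packets_sent': s}."""
    merged = dict(own_report)          # copy, leaving inputs untouched
    for r in child_reports:
        merged['nodes'] += r['nodes']
        merged['packets_lost'] += r['packets_lost']
        merged['packets_sent'] += r['packets_sent']
    return merged
```

Applied recursively up a tree, the root receives one report per subtree instead of one per client.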
Evaluation
Simulated client joins and leaves based on a flash crowd trace from September 11, 2001
The original video clip had to be substituted
Root bandwidth:      20 Mbps
Peer bandwidth:      160 Kbps
Stream bandwidth:    160 Kbps
Packet size:         1250 bytes
GOF duration:        1 second
# descriptions:      16
# trees:             1, 2, 4, 8, 16
Reporting interval:  1 second
Repair interval:     1, 5, 10 seconds
Flash Crowd Trace
Multiple Trees
Randomized vs. Deterministic Tree
Construction
MDC versus FEC
Conclusions
P2P streaming is attractive because it is self-scaling
Resilience is provided by multiple
distribution trees and MDC
Experiments show promising results
Questions?
References
http://www.research.microsoft.com/projects/CoopNet/papers/icnp2003.pdf
Figures were taken from
http://www.research.microsoft.com/~padmanab/talks/resilientP2Pstreamingmar03.pdf