Peer-To-Peer Multimedia Streaming Using BitTorrent
Purvi Shah, Jehan-François Pâris
University of Houston
Houston, TX
Problem Definition
• One server, many customers:
transferring videos is a resource-intensive task!
• Objectives
– Customer satisfaction:
Minimize customer waiting time
– Cost effectiveness:
Reduce operational costs (mostly hardware costs)
Transferring Videos
• Video download:
– Just like any other file
– Simplest case: the file is downloaded using a
conventional protocol
– Playback does not overlap with the transfer
• Video streaming from a server:
– Playback of video starts while video is
downloaded
– No need to wait until download is completed
– New challenge: ensuring on-time delivery of
data
• Otherwise the client cannot keep playing the video
Why use P2P Architecture?
• Infrastructure-based approach
(e.g. Akamai)
– Most commonly used
– Client-server architecture
– Expensive: Huge server farms
– Best effort delivery
– Client upload capacity completely unutilized
– Not suitable for flash crowds
Why use P2P Architecture?
• IP Multicast
– Highly efficient bandwidth usage
– Several drawbacks so far
• Requires infrastructure-level changes, which make
most administrators reluctant to provide it
• Security flaws
• No effective and widely accepted transport protocol
on top of the IP multicast layer
P2P Architecture
• Leverage power of P2P networks
– Multiple solutions are possible
• Tree-based structured overlay networks
– Leaf clients’ bandwidth unutilized
– Less reliable
– Complex overlay construction
– Content bottlenecks
– Fairness issues
Our Solution
• Mesh based unstructured overlay
– Based on widely-used BitTorrent content
distribution protocol
– A P2P protocol introduced around 2002
– Linux distributors such as Lindows offer
software updates via BT
– Blizzard uses BT to distribute game patches
– Films started to be distributed through BT this
year
BitTorrent (I)
BitTorrent (II)
• Has a central tracker
– Keeps information on peers
– Responds to requests for that information
– Service subscription
• Built-in incentives: rechoking (sketched below)
– Gives preference to cooperative peers: tit-for-tat exchange of content chunks
– Random search: optimistic unchoke
• When all chunks are downloaded, peers
can reconstruct the whole file
– Not tailored to streaming applications
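The rechoking mechanism lends itself to a short illustration. Below is a minimal sketch of rate-based unchoking plus one optimistic unchoke; the single-round structure and all names are our own simplification, not the actual BT client code:

```java
import java.util.*;

// Minimal sketch of BT-style rechoking: unchoke the neighbors that have
// uploaded to us fastest (tit-for-tat) plus one random neighbor
// (optimistic unchoke). Single-round structure and names are illustrative.
public class Rechoke {
    record Peer(String id, double uploadRateToUs) {}

    static Set<String> rechoke(List<Peer> neighbors, int slots, Random rng) {
        // Tit-for-tat: prefer the peers observed to upload to us fastest.
        List<Peer> byRate = new ArrayList<>(neighbors);
        byRate.sort(Comparator.comparingDouble(Peer::uploadRateToUs).reversed());
        Set<String> unchoked = new LinkedHashSet<>();
        for (int i = 0; i < Math.min(slots, byRate.size()); i++)
            unchoked.add(byRate.get(i).id());
        // Optimistic unchoke: one random extra peer gets a free try,
        // which lets newcomers prove themselves.
        List<Peer> rest = neighbors.stream()
                .filter(p -> !unchoked.contains(p.id())).toList();
        if (!rest.isEmpty())
            unchoked.add(rest.get(rng.nextInt(rest.size())).id());
        return unchoked;
    }

    public static void main(String[] args) {
        List<Peer> n = List.of(new Peer("a", 50), new Peer("b", 10),
                               new Peer("c", 80), new Peer("d", 0));
        System.out.println(rechoke(n, 2, new Random(1))); // e.g. [c, a, b]
    }
}
```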
Evaluation Methodology
• Simulation-based
– Answers depend on many parameters
– Hard to control in measurements or to
model
• Java-based discrete-event simulator
– Models queuing delay and transmission
delay
– Remains faithful to BT specifications
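For readers unfamiliar with discrete-event simulation, the loop at the heart of such a simulator looks roughly like the skeleton below. It is an illustration under our own assumptions (a 256 KB chunk on a 4 Mbps link), not the authors' simulator:

```java
import java.util.PriorityQueue;

// Skeleton of a discrete-event simulation loop: events are processed in
// timestamp order, and handling one event may schedule future ones
// (e.g., a chunk transmission completing after queuing plus transmission
// delay). Purely illustrative.
public class DesCore {
    interface Event { double time(); void fire(DesCore sim); }

    private final PriorityQueue<Event> agenda =
            new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));
    double now = 0;

    void schedule(Event e) { agenda.add(e); }

    void run(double horizon) {
        while (!agenda.isEmpty() && agenda.peek().time() <= horizon) {
            Event e = agenda.poll();
            now = e.time();      // advance the simulation clock
            e.fire(this);
        }
    }

    public static void main(String[] args) {
        DesCore sim = new DesCore();
        double chunkBits = 256 * 1024 * 8;   // 256 KB chunk
        double linkBps = 4e6;                // 4 Mbps link
        double txDelay = chunkBits / linkBps;
        sim.schedule(new Event() {
            public double time() { return txDelay; }
            public void fire(DesCore s) {
                System.out.printf("chunk delivered at t=%.2fs%n", s.now);
            }
        });
        sim.run(1000);
    }
}
```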
BT Limitations
• BT does not account for the
real-time needs of streaming
applications
– Chunk selection
• Peers do not download chunks in sequence
– Neighbor selection
• Incentive mechanism makes too many peers
wait too long before joining the swarm
Chunk Selection Policy
• Replace BT rarest first policy by
a sliding window policy
– The forward-moving window size is equal to the
viewing delay
[Figure: a download window sliding over the chunk sequence; labels mark the playback start, the playback delay, received chunks, missed chunks, and chunks not yet received]
Two Options
• Sequential policy
– Peers download first the chunks at the
beginning of the window
– Limits the opportunity to exchange chunks
between peers
• Rarest-first policy
– Peers download first the chunks within the
window that are least replicated among their
neighbors
– Preserves the feasibility of swarming by diversifying
the chunks available among peers (see the sketch after this list)
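A minimal sketch of window-constrained rarest-first selection follows. The array-based interface and replica counts are our own illustration; in a real client the counts would be derived from neighbors' bitfield and have messages:

```java
import java.util.*;

// Sketch of sliding-window, rarest-first chunk selection: consider only
// the chunks inside [playbackPos, playbackPos + window) that we still
// lack, and pick the one with the fewest copies among our neighbors.
public class WindowRarestFirst {
    static int pickChunk(boolean[] have, int[] neighborCopies,
                         int playbackPos, int window) {
        int best = -1, bestCopies = Integer.MAX_VALUE;
        int end = Math.min(playbackPos + window, have.length);
        for (int c = playbackPos; c < end; c++) {
            if (!have[c] && neighborCopies[c] < bestCopies) {
                best = c;
                bestCopies = neighborCopies[c];
            }
        }
        return best; // -1 means nothing useful is left in the window
    }

    public static void main(String[] args) {
        boolean[] have = {true, false, false, false, false};
        int[] copies  = {9,    4,     1,     7,     2};
        // Window covers chunks 1..3; chunk 2 is rarest (1 copy).
        System.out.println(pickChunk(have, copies, 1, 3)); // prints 2
    }
}
```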
[Figure: success ratio (percent) vs. window size, comparing the sliding-window rarest-first policy (best), the original BT rarest-first policy, and the sequential policy (worst)]
[Figure: success ratio (percent) vs. video consumption rate (Mbps), comparing the sliding-window rarest-first policy with the original BT rarest-first policy]
Discussion
• Switching to a sliding window policy
greatly increases quality of service
– Must use a rarest-first policy inside the window
– This change alone does not suffice to achieve a
satisfactory quality of service
Neighbor Selection Policy
• BT tit-for-tat policy
– Peers select other peers according to their
observed behaviors
– A significant number of peers suffer from a
slow start
• Randomized tit-for-tat policy
– At the beginning of playback, each peer
selects its neighbors at random
– Rapid diffusion of new chunks among peers
– Gives a larger number of peers in the swarm
free tries at downloading chunks
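A sketch of the randomized variant follows; the interface and the two-phase split are our own illustration of the idea, not the protocol's actual code:

```java
import java.util.*;

// Sketch of randomized tit-for-tat: during the startup phase (around
// playback start) neighbors are chosen uniformly at random; afterwards
// selection falls back to rate-based tit-for-tat. Illustrative only.
public class RandomizedTft {
    static List<String> selectNeighbors(List<String> candidates, int slots,
                                        boolean startupPhase,
                                        Map<String, Double> rates, Random rng) {
        List<String> pool = new ArrayList<>(candidates);
        if (startupPhase) {
            // Random selection spreads "free tries" over the whole swarm,
            // speeding the diffusion of new chunks.
            Collections.shuffle(pool, rng);
        } else {
            // Regular tit-for-tat: fastest uploaders first.
            pool.sort(Comparator.comparingDouble(
                    (String p) -> rates.getOrDefault(p, 0.0)).reversed());
        }
        return pool.subList(0, Math.min(slots, pool.size()));
    }

    public static void main(String[] args) {
        List<String> peers = List.of("a", "b", "c", "d", "e");
        Map<String, Double> rates = Map.of("a", 80.0, "b", 10.0, "c", 50.0);
        System.out.println(selectNeighbors(peers, 2, true, rates, new Random(7)));
        System.out.println(selectNeighbors(peers, 2, false, rates, new Random(7)));
        // second call prints [a, c]: the two fastest uploaders
    }
}
```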
[Figure: success ratio (percent) vs. playback delay (s), comparing BT tit-for-tat with BT randomized tit-for-tat]
[Figure: normalized network throughput vs. time (s), comparing BT tit-for-tat with BT randomized tit-for-tat]
Discussion
• We should combine our neighbor selection
policy with our sliding window chunk
selection policy
• We can then achieve an excellent QoS with
playback delays as short as 30 s, as long
as the video consumption rate does not
exceed 60 % of the network link bandwidth
Comparison with
Client-Server Solutions
[Figure: average playback delay (s, log scale) vs. number of peers, comparing modified BT with a client-server solution using five mirrors]
Chunk size selection
• Small chunks
– Result in faster chunk downloads
– Incur more processing overhead
• Larger chunks
– Cause a slow start for every sliding window
• Our simulations indicate that 256 KB is a
good compromise
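For scale, a 256 KB chunk holds about 2.1 Mb of data (256 × 1024 × 8 bits), i.e. roughly half a second of video at the 4 Mbps consumption rate considered below.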
Resource-critical region (4 Mbps)
[Figure: success ratio (percent) vs. chunk size (KB), comparing randomized tit-for-tat and tit-for-tat at playback delays of 30 s and 120 s]
Premature Departures
• Peer departures before the end of
the session
– Can be voluntary or result from
network failures
– When a peer leaves the swarm, it
tears down the connections to its
neighbors
– Each of its neighbors loses one of
its active connections (see the sketch below)
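One plausible recovery path when a neighbor departs is sketched below; the tracker interface shown is hypothetical, not part of the BT specification:

```java
import java.util.*;

// Sketch of departure handling: when a neighbor leaves (voluntarily or
// via a network failure), each of its neighbors drops the dead
// connection and asks the tracker for a replacement peer.
public class DepartureHandler {
    interface Tracker { Optional<String> requestPeer(Set<String> exclude); }

    static void onNeighborLeft(Set<String> neighbors, String departed,
                               Tracker tracker) {
        neighbors.remove(departed);         // tear down the lost connection
        tracker.requestPeer(neighbors)      // ask for a peer we don't have yet
               .ifPresent(neighbors::add);  // restore the neighbor count
    }

    public static void main(String[] args) {
        Set<String> neighbors = new HashSet<>(Set.of("a", "b", "c"));
        Tracker tracker = exclude -> Optional.of("d");   // stub tracker
        onNeighborLeft(neighbors, "b", tracker);
        System.out.println(neighbors);      // [a, c, d] in some order
    }
}
```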
[Figure: success ratio (percent) vs. percentage of departing peers; the protocol can tolerate the loss of at least 60 % of the peers]
Future Work
• Current work
– On-demand streaming
• Robustness
– Detect malicious and selfish peers
– Incorporate a trust management
system into the protocol
• Performance evaluation
– Conduct a comparison study
Thank You – Questions?
Contact: [email protected]
[email protected]
Extra slides
nVoD
• Dynamics of client participation, i.e.
churn
– Clients do not synchronize their viewing
times
• Serve many peers even if they arrive
according to different patterns
Admission control policy (I)
• Determine if a new peer request can be
accepted without violating the QoS
requirements of the existing customers
• Based on a server-oriented staggered
broadcast scheme
– Combining P2P streaming with staggered
broadcasting ensures high QoS
– Beneficial for popular videos
Admission control policy (II)
• Use the tracker to batch clients arriving
close in time into a session
– Closeness is determined by a threshold θ
• Service latency, though server-oriented,
is independent of the number of clients
– Can handle flash crowds
• Dedicating η channels to each video makes the
worst-case service latency w = D/η, where D is
the video duration
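The batching rule and the latency bound lend themselves to a tiny sketch; the variable names are ours, with D (video duration) taken from the staggered-broadcast setting above:

```java
// Sketch of the admission-control arithmetic: clients arriving within
// threshold theta of each other join the same session, and eta staggered
// channels per video bound the worst-case service latency by w = D / eta.
public class AdmissionControl {
    static double worstCaseLatency(double videoDurationSec, int channels) {
        return videoDurationSec / channels;              // w = D / eta
    }

    static boolean sameSession(double arrivalA, double arrivalB, double theta) {
        return Math.abs(arrivalA - arrivalB) <= theta;   // batch close arrivals
    }

    public static void main(String[] args) {
        // e.g. a 2-hour video on 240 staggered channels starts within 30 s
        System.out.println(worstCaseLatency(7200, 240));  // 30.0
        System.out.println(sameSession(10.0, 95.0, 120)); // true for theta = 120 s
    }
}
```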
Results
[Figure: average playback delay (s, log scale) vs. threshold (seconds), comparing modified BT with admission control against a client-server solution]
• We use the M/D/η queuing model to
estimate the effect on the playback
delay experienced by the peers
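As an illustration of how such an estimate can be computed, the sketch below evaluates the Erlang-C waiting probability for an M/M/η system and applies the common approximation that the M/D/c mean wait is about half the M/M/c mean wait (exact for c = 1). This is our own sketch with made-up parameters, not necessarily the analysis used in the paper:

```java
// Rough M/D/eta delay estimate via Erlang C plus the standard
// half-wait approximation for deterministic service times.
public class MDetaDelay {
    // Erlang-C probability that an arrival must wait; a = lambda/mu.
    static double erlangC(int c, double a) {
        double term = 1.0, sum = 1.0;          // k = 0 term of sum a^k/k!
        for (int k = 1; k < c; k++) { term *= a / k; sum += term; }
        double last = term * a / c;            // a^c / c!
        double rho = a / c;                    // utilization, must be < 1
        return last / ((1 - rho) * sum + last);
    }

    // Approximate mean queuing delay for M/D/c.
    static double meanWait(double lambda, double serviceTime, int c) {
        double a = lambda * serviceTime;       // offered load in Erlangs
        double wqMMc = erlangC(c, a) / (c / serviceTime - lambda);
        return wqMMc / 2;                      // M/D/c ~ half the M/M/c wait
    }

    public static void main(String[] args) {
        // e.g. 0.05 arrivals/s, 30 s deterministic service, eta = 2 channels
        System.out.printf("mean wait ~ %.2f s%n", meanWait(0.05, 30.0, 2));
    }
}
```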