
P2PMoD: Peer-to-Peer Movie-on-Demand

Group members: Cheung Chui Ying, Lui Cheuk Pan, Wong Long Sing
Supervised by Professor Gary Chan
Computer Engineering, Hong Kong University of Science & Technology
Presentation flow
1. Introduction
2. System Design
3. Results
4. Conclusion
5. Q&A and Demo
GCH1 P2PMoD
Technical Challenges
• Asynchronous play times
– Movie-on-Demand is not TV program broadcast
– Viewers start watching at different times
• Peer dynamics
– The network topology may change over time
– Viewers may go on and off
• Interactivity
– Support for pause and jump
Related Work
• Traditional server-to-client
– Server load grows linearly → not scalable
• Multicasting
– Special network support needed
– Interactivity is not supported
• BitTorrent
– Unpredictable download order → playback cannot start before the download finishes
– Interactivity is not supported
What is P2PMoD?
It is a peer-to-peer (P2P) based interactive movie streaming system that brings movies to your home.
• Scalable
– Low server bandwidth requirement
– Decentralized control
• Support for user interactivity
• Resilience to node/link failure
• Short playback delay
Why is P2PMoD important?
• Overcomes the limitation of the server-to-client movie streaming architecture
• Shapes the future of the movie watching experience
• Commercial deployment: helps combat illegal movie downloading via BitTorrent
System Architecture: PRIME
[Architecture diagram: a GUI and an off-the-shelf media player talk to the built-in RTSP server over RTSP/RTP; the internal logic comprises the Director, DHT, Buffering/Buffer, Communication, Statistics, and Control modules.]
RTSP Server
RFC 2326
[Diagram: the off-the-shelf media player sends RTSP commands to the RTSP server; the server retrieves movie data through the Director and streams it back over RTP.]
• Can use any RTSP-compatible media player
RTP Packetizer
RFC 2250
[Diagram: the packetizer converts a movie in a compatible format into a packetized RTP stream (frames 0, 1, 2, …) plus an index file.]
Example index file (playback time → byte offset):
0ms    → 0
1000ms → 164452
2000ms → 299501
• Can play on any RTP-compatible media player
• Abstraction
– No change is needed in PRIME to support different movie formats
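The index file above maps playback timestamps to byte offsets, which lets the server seek straight to the right frame. A minimal sketch of parsing and using such an index, assuming a one-pair-per-line "<time_ms> <byte_offset>" format inferred from the example entries (the slides do not show PRIME's exact file layout):

```python
# Hypothetical index-file lookup, assuming "<time_ms> <byte_offset>" lines.
import bisect

def parse_index(lines):
    """Return sorted (time_ms, byte_offset) pairs."""
    entries = []
    for line in lines:
        time_part, offset_part = line.split()
        entries.append((int(time_part.rstrip("ms")), int(offset_part)))
    return sorted(entries)

def offset_for(entries, seek_ms):
    """Byte offset of the latest indexed point at or before seek_ms."""
    times = [t for t, _ in entries]
    i = bisect.bisect_right(times, seek_ms) - 1
    return entries[max(i, 0)][1]

index = parse_index(["0ms 0", "1000ms 164452", "2000ms 299501"])
print(offset_for(index, 1500))  # seeking to 1.5s starts from offset 164452
```

Because only indexed points are addressable, a seek lands on the nearest earlier entry; finer-grained seeking would need denser index entries.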
Director Backend
• Responsible for the actual movie data retrieval process
• Provides a programming interface for stream management and interactivity control
• Implementation goals
– Scalable and fast: collaboration between peers
– Efficient: minimize control communication overhead
Director Backend Implementation
• Use the concept of a virtual time slot to find potential parents
• Use a DHT to achieve decentralized control communication
Moving virtual timeslot
[Diagram: a movie of length 00:42:39 is divided into virtual time slots (slots 1–7) with boundaries at 3-minute intervals (00:00:00, 00:03:00, 00:06:00, …); the boundaries advance with the time since publishing.]
• The slot boundaries keep advancing along with real time.
• A peer stays in the same slot once playback starts, unless the user seeks to another position.
• Peers in the same or an earlier virtual time slot can help us in streaming.
• How do we identify these potential parents? The DHT comes into play.
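The moving-slot idea can be sketched as follows. The 180-second slot width is an assumption read off the 3-minute boundaries in the slide diagram; the key property is that a peer's lag behind the publishing clock is constant during normal playback, so its slot never changes until it seeks:

```python
# Sketch of the moving virtual time slot. SLOT_SECONDS = 180 is an
# assumption taken from the 3-minute boundaries shown in the diagram.
SLOT_SECONDS = 180

def slot_of(playback_position, time_since_publishing):
    """Virtual slot a peer occupies, given its playback position
    (seconds into the movie) and the time since the movie was published.
    The lag behind the publishing clock stays constant while playing,
    so the peer remains in the same slot until the user seeks."""
    lag = time_since_publishing - playback_position
    return lag // SLOT_SECONDS + 1

# A peer that started watching right at publishing stays in slot 1;
# a peer that joined 10 minutes late sits in a later slot.
print(slot_of(playback_position=300, time_since_publishing=300))  # 1
print(slot_of(playback_position=300, time_since_publishing=900))  # 4
```

Peers in the same or a smaller slot number have already buffered the frames a later peer needs, which is why they qualify as potential parents.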
DHT Key
• We construct <movie hash, virtual time slot, random number> as the DHT key
<titanic, 1, 91>  <titanic, 2, 34>  <titanic, 2, 72>  <titanic, 3, 23>
<mi3, 1, 2>       <mi3, 5, 65>      <mi3, 6, 99>
<matrix, 2, 2>    <matrix, 3, 12>   <matrix, 3, 82>   <matrix, 4, 71>
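Key construction can be sketched as below. Serializing the tuple as a string and hashing it with SHA-1 (Pastry uses 160-bit IDs) is an assumption; the slides do not show PRIME's exact encoding:

```python
# Hypothetical <movie hash, slot, random> key construction for the DHT.
import hashlib
import random

def make_dht_key(movie_hash, slot, rand=None):
    if rand is None:
        # The random part spreads the records of one slot across keys.
        rand = random.randint(0, 99)
    raw = "%s,%d,%d" % (movie_hash, slot, rand)
    # SHA-1 gives a 160-bit ID, matching Pastry's ID space (assumption).
    return hashlib.sha1(raw.encode()).hexdigest()

# Two peers in the same slot of the same movie publish under different
# keys, so their records land on different DHT nodes.
print(make_dht_key("titanic", 2, 34))
print(make_dht_key("titanic", 2, 72))
```

The random component prevents all peers of a popular slot from hammering a single DHT node, at the cost of a few extra lookups when searching a slot.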
How to retrieve the data?
• Implemented two versions of the director
• Both use FreePastry as the DHT
– Initial version
• Movie data are carried over Scribe
• Scribe: an application-level multicast infrastructure built on top of FreePastry
– Revised version
• Out-of-band transfer
• Employs a multiple-parents scheme to transfer movie data
Director: Initial Version
[Diagram: the publisher and clients sit on the DHT ring. Clients subscribe to the slots they are interested in, i.e. the slots covered by their pre-buffer range. Each slot (e.g. Slot 6 at 00:15:00, Slot 7 at 00:18:00) has a topic root node determined by its ID; by the nature of the DHT, slot root nodes are uniformly distributed around the ring. Also by the nature of the DHT, it usually takes several hops for node A to contact node B, which means data sometimes has to pass through off-topic nodes.]
Director: Revised Version
• Direct data connections, in contrast to the multi-hop transfer overlay in Scribe
– Less prone to problems induced by link failures
– Faster, due to reduced IP and processing overhead
• If a parent jumps, the child can still stream smoothly from its other parents – unaffected
• A peer can schedule frame requests intelligently to achieve load balancing
[Diagram: the peer obtains the IPs of potential parents from the DHT, then streams movie data directly from Parents 1, 2, and 3.]
Finding Parents
• Recall that each node carries an IP list of its N immediate neighbors.
• By routing a message to the <movie, slot> key, the responsible node can return us a list of potential parents.
Director: Scheduling
• Using a buffer map that shows the frame availability of each node:
• Continuity: fetch the frames with the closest playback deadline first
– Keeps the streaming smooth
• Load sharing: fetch the frames possessed by the fewest nodes first
– Obtains the rare pieces for redistribution
– Shares the load of the peers holding these pieces
• Efficiency: stream from multiple parents at the same time
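The scheduling policy above can be sketched as follows, under assumed data structures: each parent advertises a buffer map (here, the set of frame numbers it holds). Frames near the playback deadline go first; among the rest, the rarest frames are preferred, and each request goes to the currently least-loaded holder. The URGENT_WINDOW constant and the least-loaded assignment rule are illustrative assumptions, not PRIME's exact algorithm:

```python
# Sketch of deadline-first + rarest-first scheduling over buffer maps.
URGENT_WINDOW = 10  # frames within this distance of the playhead are urgent

def schedule(missing_frames, parent_maps, playhead):
    """Return {parent: [frames to request]} for the missing frames."""
    # Rarity = how many parents hold each frame (lower = rarer).
    rarity = {f: sum(f in m for m in parent_maps.values())
              for f in missing_frames}
    # Continuity first (urgent frames), then rarest-first for the rest.
    ordered = sorted(
        missing_frames,
        key=lambda f: (f - playhead >= URGENT_WINDOW, rarity[f], f))
    plan = {p: [] for p in parent_maps}
    for f in ordered:
        holders = [p for p, m in parent_maps.items() if f in m]
        if holders:
            # Load sharing: assign to the least-loaded parent holding f.
            best = min(holders, key=lambda p: len(plan[p]))
            plan[best].append(f)
    return plan

parents = {"A": {1, 2, 3, 4}, "B": {3, 4, 5, 6}, "C": {4, 5, 6, 7}}
print(schedule(missing_frames=[3, 4, 5, 6, 7], parent_maps=parents,
               playhead=3))
# -> {'A': [3, 4], 'B': [5, 6], 'C': [7]}
```

Note how frame 7, held only by parent C, is claimed first within the urgent window, while the widely-held frame 4 is deferred: this is the rarest-first bias that keeps scarce pieces circulating.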
Graphical User Interface
Results
• Deployment of P2PMoD on 71 nodes in PlanetLab
• Configuration: 1 server and 70 peers
– 40 KBps stream for 10 minutes
• Measurement Metrics:
– User Experience
– Efficiency
Results – User Experience
• Measures Continuity
– Playback delay
• Time required to start the stream
– Stall Occurrences
• Number of times the stream pauses to buffer more data
– Stall Ratio
• Ratio of paused time to streaming time
Results – User Experience
[Chart: cumulative distribution of playback delay of peers; playback delay (seconds, 1–10) on the x-axis, cumulative % (0–100%) on the y-axis.]
Results – User Experience
[Chart: cumulative distribution of stall occurrences of peers; number of stall occurrences (0–3) on the x-axis, cumulative % on the y-axis.]
Results – User Experience
[Chart: cumulative distribution of stall ratio of peers; stall ratio (0%–5%) on the x-axis, cumulative % on the y-axis.]
Results – User Experience
• Playback delay
– Over 90% of peers have < 6 seconds of delay
• Stall occurrences
– Over 90% of peers have < 2 occurrences
• Stall ratio
– Over 90% of peers stall for < 3% of total time
Results – Efficiency
• Peer
– Overhead caused by control messages
• Server
– Bandwidth required
Results – Efficiency
[Chart: cumulative distribution of data efficiency of peers; data efficiency (86%–92%) on the x-axis, cumulative % on the y-axis.]
Results – Efficiency
• Peer
– Ratio of stream data to all data received: 90%
• Server
– Data output rate: 275 KBps
– Output bandwidth equivalent to 7 streams
– Uses only 10% of the bandwidth of the traditional server-client model
Practical Issues
• Network traversal
– Routers and NATs are common
– Until IPv6 lands…
– Universal Plug and Play, hole punching
• RTSP and RTP compatibility
– Glitches are common and expected
Network Positioning
• GNP and Vivaldi could potentially be used
• Map network latency to coordinates in Rⁿ
– Even as n → ∞, never perfect due to triangle inequality violations
• GNP: landmark selection and reselection
• Vivaldi: no fixed reference; coordinates are updated continuously (spinning)
• Ping time does not reflect transfer rate
Future Work
• Fixed data cache instead of a moving slot
– Parent interactivity would not affect availability
– Searching for / refreshing next-slot parents could be slow
• Frame popularity
• Support for more movie formats and handheld devices
• Error correction codes
Conclusion
• Peer-to-peer is the way to go: it makes use of users' increasing bandwidth while reducing server resources
• PRIME: a working P2P MoD implementation
• Workload reduced by adopting open standards and using an off-the-shelf player
Thank You
• Questions?
• Demonstration
Pastry: Ring
[Diagram: a Pastry ring with node IDs 0x0002, 0x22AF, 0x3529, 0x591A, 0x62C8, 0x7F52, 0x8392, 0x9A92, 0xA125, 0xCB95, and 0xDF41 arranged around the ring.]
Pastry: Routing Knowledge
[Diagram: node 0x0002's leaf set contains its N immediate neighboring nodes on the ring; its routing table covers nodes farther around the ring.]
Pastry: Object Storage
[Diagram: an object with key 0x3530 is stored at the numerically closest node (0x3529) and duplicated to that node's N immediate neighboring nodes.]
PRIME?
• PRIME stands for Peer-to-peer Interactive
Media-on-demand