Continuous Scheduling for Data-Driven
Peer-to-Peer Streaming
Jyrki Akkanen
Peer-to-peer team, Internet Laboratory
Nokia Research Center, Helsinki, Finland
Agenda
• Data-driven (pull-based mesh) P2P streaming
• Continuous data-driven P2P streaming
• Early experimental results
Our original motivation
• We wanted a practical P2P streaming solution
  • For Symbian/S60 mobile phones
  • No "perfect" solution exists yet
• We also wanted to improve joining time
• On-going work; results are preliminary
Peer-to-peer streaming: two basic questions
• How to manage the overlay topology?
• Centralized tracker
• Random topology, 4-8 neighbors
• How to disseminate data?
• Data-driven / pull-based mesh
• Well-known, much used in practice
Data-driven (pull-based) stream delivery
• Mesh topology
  • Each node has a small set of partners
• Stream split into pieces
  • Nodes collect pieces in a buffer
  • Window of interest / request window at the front
  • Playing from the back
• Notify – Request – Response pattern (see the sketch below)
  • Buffer content advertised to partners (Notify)
  • Scheduling: what to fetch, when, and from where
  • Requests sent to partners (Request)
  • Partners return pieces one by one (Response)
• Periodic operation
  • 1-2 second scheduling/request period
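To make the periodic round concrete, here is a minimal C++ sketch of one scheduling round. The data structures (Partner, advertised-piece sets) and the least-loaded-partner rule are my illustrative assumptions for the generic pull-based pattern, not the implementation behind these slides.

#include <cstdint>
#include <map>
#include <set>
#include <vector>

struct Partner {
    int id;
    std::set<uint32_t> advertised;  // pieces this partner has notified us about
};

// One periodic round: walk the window of interest and assign every
// missing piece to the least-loaded partner that advertises it.
// Returns piece -> partner id; all requests are then sent as one batch.
std::map<uint32_t, int> scheduleRound(const std::vector<Partner>& partners,
                                      const std::set<uint32_t>& have,
                                      uint32_t windowStart, uint32_t windowEnd) {
    std::map<uint32_t, int> requests;
    std::map<int, int> load;  // requests assigned per partner in this round
    for (uint32_t piece = windowStart; piece < windowEnd; ++piece) {
        if (have.count(piece)) continue;               // already buffered
        const Partner* best = nullptr;
        for (const Partner& p : partners) {
            if (!p.advertised.count(piece)) continue;
            if (!best || load[p.id] < load[best->id]) best = &p;
        }
        if (best) {
            requests[piece] = best->id;
            ++load[best->id];
        }
    }
    return requests;  // the next round runs 1-2 seconds later
}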
Challenges of periodic operation
• Buffer lag
  • Advertising + scheduling delay of 1-2 sec
  • The receiving node lags the sending partner
  • With the same timing, partners exchange few pieces; with different timing, traffic is unidirectional
  • Partners who lag cannot contribute
• Slow reaction, slow re-transmit
  • Must wait for the next scheduling round
• Consequences (see the delay estimate below)
  • Long end-to-end delay
  • Long window of interest (30 sec or more)
  • Hard to find partners that don't lag and have enough capacity
  • Low delivery ratio
• Can we run it faster?
  • Better gain, shorter delay
  • But increased overhead
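A back-of-the-envelope estimate (my addition, not from the slides) makes the first consequence concrete: each overlay hop must wait for at least one advertise-schedule-request round, so

\[
d_{\mathrm{e2e}} \;\gtrsim\; h \cdot T_{\mathrm{round}},
\]

where $h$ is the overlay hop count and $T_{\mathrm{round}}$ the scheduling period. With $h = 5$ and $T_{\mathrm{round}} = 1.5$ s this is already about 7.5 s before any transmission or queuing delay is counted.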
Continuous Data-driven Streaming Protocol
• Speed up the basic data-driven protocol
  • Up to the point where it no longer operates periodically
• Small piece size
  • 1250 bytes = a single IP packet
• Narrow window of interest
  • 256 packets = 8.5 sec at 200 kbps
• Incremental notifications sent continuously
  • As soon as possible after a piece is received
• Scheduler runs continuously (see the sketch below)
  • Notifications fed in one by one whenever they arrive
  • Maintains a plan for fetching pieces from partners
• Requests sent continuously
  • One by one, at the "last possible" moment
  • Until then, requests wait in the scheduler
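A minimal sketch of the continuous counterpart, assuming a hypothetical ContinuousScheduler interface of my own naming: notifications update the fetch plan one by one, and a concrete request is produced only when rate control opens a transmission opportunity on that link.

#include <cstdint>
#include <deque>
#include <optional>

class ContinuousScheduler {
public:
    // Called immediately when a partner advertises one new piece.
    void onNotify(int partnerId, uint32_t piece) {
        plan_.push_back({piece, partnerId});  // incremental plan update
    }
    // Called whenever the link to `partnerId` may carry a request;
    // this is the "last possible" moment mentioned in the slide.
    std::optional<uint32_t> nextRequest(int partnerId) {
        for (auto it = plan_.begin(); it != plan_.end(); ++it) {
            if (it->partnerId == partnerId) {  // request waited here until now
                uint32_t piece = it->piece;
                plan_.erase(it);
                return piece;
            }
        }
        return std::nullopt;  // nothing planned for this partner yet
    }
private:
    struct Planned { uint32_t piece; int partnerId; };
    std::deque<Planned> plan_;  // fetch plan, maintained continuously
};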
Communication over UDP
• A rare choice: TCP is usually preferred
• Better timing, faster reaction
• Control over re-transmission
• AIMD-based rate control (sketched below); could also use TFRC
• May lead to difficulties with firewalls etc.
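A minimal AIMD rate controller per UDP link, in the spirit of the slide: additive increase on success, multiplicative decrease on loss. The step size, start rate, and floor are my illustrative assumptions, not values from the slides.

#include <algorithm>

class AimdRate {
public:
    void onAck()  { rateKbps_ += 2.0; }  // additive increase per acked packet
    void onLoss() { rateKbps_ = std::max(minKbps_, rateKbps_ * 0.5); }  // halve
    double rateKbps() const { return rateKbps_; }
private:
    double rateKbps_ = 32.0;  // conservative starting rate
    double minKbps_  = 8.0;   // floor so the link never fully stalls
};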
Fighting against overhead
• Lots of small pieces of control data must be sent all the time
• Piggybacking
  • Control data piggybacked onto stream data whenever possible
  • The ability to mutually exchange data also helps
  • Less need to send "control" packets without stream data
• In practice (see the sketch below)
  • Rate control regularly gives each link a transmission opportunity
  • At each opportunity, assemble a packet from whatever needs to be sent
  • Incremental notifications sent in small groups (1-10)
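A sketch of the packet-assembly step under an assumed 1400-byte MTU (the constant and names are mine): stream data goes in first, and pending control items are piggybacked into the remaining space, so a pure control packet is only sent when there is no stream data to carry it.

#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

constexpr std::size_t kMtu = 1400;  // assumed payload budget per UDP packet

struct ControlItem { std::vector<uint8_t> bytes; };  // request, notification, ...

std::vector<uint8_t> assemblePacket(std::deque<std::vector<uint8_t>>& pieces,
                                    std::deque<ControlItem>& control) {
    std::vector<uint8_t> packet;
    if (!pieces.empty()) {            // stream payload, if any is due
        packet = pieces.front();
        pieces.pop_front();
    }
    // Piggyback control data into whatever space remains.
    while (!control.empty() &&
           packet.size() + control.front().bytes.size() <= kMtu) {
        const auto& b = control.front().bytes;
        packet.insert(packet.end(), b.begin(), b.end());
        control.pop_front();
    }
    return packet;
}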
Source fan-out policy
• Needed to create the initial packet distribution at the first-hop peers
• Selective advertising (see the figure and sketch below)
  • Simple fan-out policy
  • Density of advertised packets decreases across the window
  • Packets selected at random
[Figure: density of advertised packets (0-100%) vs. position in the advertised window of interest (0-40)]
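A sketch of one possible selective-advertising rule consistent with the figure. The linear decay is an assumption on my part; the slides only show the advertised density falling across the window. Each first-hop peer draws independently, so different peers receive different random subsets and must exchange pieces among themselves.

#include <cstdint>
#include <random>

// Decide whether the source advertises a piece, at the given position
// in its window, to one particular first-hop peer.
bool advertiseToPeer(uint32_t positionInWindow, uint32_t windowLen,
                     std::mt19937& rng) {
    // Assumed linear decay: ~100% at the front, ~0% at the back.
    double density = 1.0 - static_cast<double>(positionInWindow) / windowLen;
    std::bernoulli_distribution coin(density);
    return coin(rng);  // independent draw per peer
}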
Simple scheduler
• A packet is scheduled or re-scheduled whenever
  • a notification is received
  • a request is lost
  • a packet is re-notified
• Maintain a request queue per partner (see the sketch below)
  • holds the assigned requests in randomized order
  • sorted by seqno + X, where X is uniformly random
  • each request inserted into the shortest queue
• Pick requests from the queues when needed
  • window-based flow control
  • minimize the number of outstanding requests
• The scheduling policy matters only for bandwidth-constrained peers
  • others have almost empty queues
  • prefers rare and urgent packets
  • virtual-time sorting increases diversity
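A compact sketch of the scheduler described above, with assumed class and method names: requests are keyed by seqno + X ("virtual time"), inserted into the shortest per-partner queue, and picked from the front whenever flow control admits another outstanding request.

#include <cstddef>
#include <cstdint>
#include <map>
#include <random>
#include <vector>

class SimpleScheduler {
public:
    SimpleScheduler(std::size_t partners, uint32_t spread)
        : queues_(partners), spread_(spread) {}

    // Called when a piece is (re)scheduled: notification, loss, re-notify.
    void schedule(uint32_t seqno) {
        std::size_t shortest = 0;
        for (std::size_t i = 1; i < queues_.size(); ++i)
            if (queues_[i].size() < queues_[shortest].size()) shortest = i;
        std::uniform_int_distribution<uint32_t> x(0, spread_);
        queues_[shortest].emplace(seqno + x(rng_), seqno);  // randomized order
    }

    // Called when partner p may take one more outstanding request.
    bool pick(std::size_t p, uint32_t& seqnoOut) {
        if (queues_[p].empty()) return false;
        seqnoOut = queues_[p].begin()->second;  // smallest virtual time first
        queues_[p].erase(queues_[p].begin());
        return true;
    }
private:
    std::vector<std::multimap<uint32_t, uint32_t>> queues_;  // vtime -> seqno
    uint32_t spread_;   // range of the random offset X
    std::mt19937 rng_;
};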
Implementation status
• A portable C++ implementation
• Simulation in Linux
• Demonstration with Linux PCs
• Demonstration with Symbian phones
Simulations on a Linux PC
• Simple network simulator (link model sketched below)
  • centralized router with tail-drop queues
  • 500 kbps links with 50 ms latency
• Artificial media stream
  • 200 kbps, 20 packets/sec
• Simple dynamic scenario
  • random overlay topology, 4 partners per node
  • initially 100 nodes
  • after 120 seconds, half of the nodes drop out
  • remaining peers re-connect so that all have 4 partners again
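A sketch of the stated link model: 500 kbps with 50 ms latency and a tail-drop queue. The queue capacity is my assumption; the slides do not give it.

#include <cstddef>
#include <deque>

struct Link {
    static constexpr double kRateBps     = 500'000.0;  // 500 kbps
    static constexpr double kLatencySec  = 0.050;      // 50 ms
    static constexpr std::size_t kQueueCap = 50;       // packets (assumed)

    std::deque<std::size_t> queue;  // queued packet sizes in bytes

    // Returns false when the queue is full: the packet is tail-dropped.
    bool enqueue(std::size_t bytes) {
        if (queue.size() >= kQueueCap) return false;
        queue.push_back(bytes);
        return true;
    }
    // Delivery time for the head-of-line packet once it starts service.
    double serviceTime() const {
        return queue.empty() ? 0.0
                             : queue.front() * 8.0 / kRateBps + kLatencySec;
    }
};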
End-to-end delay, simulated
[Figure: end-to-end delay (0-10 sec) vs. time (60-180 sec) at 1, 3, and 5 hops; no frames are lost; 50% of peers lost at 120 sec]
Recent simulation results (not in paper)
• Dynamic scenarios with a Poisson join/leave process (sketched below)
  • Exponentially distributed random dwelling time, 50 or 100 sec
  • Each newcomer joins 6 randomly selected peers
[Figure, left: delivery ratio (95-100%) vs. end-to-end latency (0-15 s); right: share of peers (0-100%) vs. time to play (0-10 s); scenarios: 150 nodes stable, 50 and 100 nodes at 60 jpm, 150 and 300 nodes at 180 jpm (joins per minute)]
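A sketch of a churn generator matching this scenario (illustrative, not the actual simulator). The expected steady-state population is the join rate times the mean dwelling time, which reproduces the node counts in the legend, e.g. 1 join/s times 100 s dwelling gives about 100 nodes at 60 jpm.

#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    const double jpm = 60.0;          // joins per minute
    const double meanDwell = 100.0;   // seconds
    std::exponential_distribution<double> interJoin(jpm / 60.0);  // per second
    std::exponential_distribution<double> dwell(1.0 / meanDwell);

    double t = 0.0;
    for (int i = 0; i < 5; ++i) {     // print the first few churn events
        t += interJoin(rng);
        double leave = t + dwell(rng);
        std::printf("peer %d joins at %.1f s, leaves at %.1f s\n", i, t, leave);
    }
    return 0;
}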
Live video streaming
• Live network of 4 Nokia N93i smart phones
  • Ad-hoc WLAN network
  • Full-mesh overlay topology
• Video from the phone camera
  • H.263 profile 0, level 20
  • 352 x 288 pixels, 16M colors, 15 fps
  • variable data rate of 64-315 kbps
• The other three phones viewed the video
  • The player learned the optimal playback point
  • End-to-end latency was 10 sec
  • No frames were lost
Video rate at source, live
[Figure: source video rate (0-500 kbps) vs. time (46:00-58:00 min:sec)]
Window of interest and player jitter
[Figure: buffer offset (0-350 packets) vs. time (43:00-58:00 min:sec), showing the window of interest and its end, the jitter window, the play position, and misses]
Network overhead, live
• Due to heavy piggybacking, data and control packets cannot be distinguished
• Total protocol overhead of 14.4% (see the formula below)
  • all transmitted bytes, including IP/UDP headers, relative to the received stream data
  • live network, 3 peers
• For comparison
  • periodic data-driven overlay streaming: ~10%
  • unidirectional HTTP downloading: 7.2% (our measurement)
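One natural reading of this definition, as a formula (an assumption on my part, since the slide states it only in words):

\[
\text{overhead} \;=\; \frac{B_{\mathrm{tx}} - B_{\mathrm{stream}}}{B_{\mathrm{stream}}},
\]

where $B_{\mathrm{tx}}$ is all transmitted bytes including IP/UDP headers and $B_{\mathrm{stream}}$ is the received stream data. An overhead of 14.4% then means roughly 1.144 bytes on the wire per useful stream byte.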
Total network overhead, simulated (not in paper)
[Figure: total overhead (0-25%) vs. average number of neighbors (4.0-10.0) for 50, 100, 150, 200, and 300 nodes, with a breakdown: IP/UDP 5.81%, stream 6.40%, rate control 2.91%, tracker 0.11%, loss 0.01%]
Continuous scheduling approach: summary
• Work in progress
  • Not yet tested in real large-scale networks
• Overhead is fairly large, but not intolerable
  • not yet optimized
• Seems to provide
  • small latency
  • high delivery ratio
  • good resilience to dynamics
• Difficulties with bandwidth constraints
  • congestion close to the source
  • a few critical, bandwidth-constrained overlay links
• Better fan-out and scheduling policies are needed
  • new packets must diffuse quickly far into the network