Peer-to-Peer Networks (3) - IPTV
Hongli Luo
CEIT, IPFW
Internet Video Broadcasting
References:
“Opportunities and Challenges of Peer-to-Peer Internet Video Broadcast” by Liu et al.
“Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System” by Hei et al.
Background
Large-scale video broadcast over Internet
Real-time video streaming
Applications:
• Internet TV
• Broadcast of sports events
• Online games
• Distance education
Need to support large numbers of viewers
• AOL Live 8 broadcast peaked at 175,000 (July 2005)
• CBS NCAA broadcast peaked at 268,000 (March 2006)
Very high data rate
• TV-quality video encoded with MPEG-4 (roughly 1.5 Mbps per stream) would require about 150 Tbps of aggregate capacity for 100 million viewers
• NFL Super Bowl 2007 had 93 million viewers in the U.S. (Nielsen Media Research)
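A quick back-of-the-envelope check of this aggregate-capacity figure; the ~1.5 Mbps per-stream rate is the usual MPEG-4 TV-quality assumption and is used here only for illustration:

```python
# Back-of-the-envelope aggregate capacity for server-based unicast delivery.
# Assumes roughly 1.5 Mbps per MPEG-4 TV-quality stream (illustrative value).
stream_rate_bps = 1.5e6           # bits per second per viewer
viewers = 100_000_000             # simultaneous viewers

aggregate_bps = stream_rate_bps * viewers
print(f"aggregate capacity = {aggregate_bps / 1e12:.0f} Tbps")   # -> 150 Tbps
```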
Possible Solutions
Broadcasting is straightforward over the air, in cable networks, or in local area networks
Possible solutions for broadcasting over Internet
Single server - unicast
IP multicast
Multicast overlay networks
Content delivery networks (CDNs)
Application end points (pure P2P)
Single Server
Application-layer solution
Single media server unicasts to all clients
Needs very high capacity to serve a large number of clients:
CPU
Main memory
Bandwidth
Impractical for millions of simultaneous viewers
IP Multicast
Network-layer solution
Routers responsible for multicasting
Efficient bandwidth usage
Requires per-group state in routers
High complexity
Scalability concern
Violates end-to-end design principle
IP Multicast
[Figure: unicast vs. IP multicast delivery from a server (S) to a multicast group of clients (C)]
IP Multicast
End-to-end design principle: a functionality should be pushed to higher layers if possible, unless implementing it at the lower layer achieves significant performance benefits that outweigh the cost of the additional complexity
Slow deployment
IP multicast is often disabled in routers
Difficult to support higher layer functionality
Error control, flow control, and congestion control
Needs changes at the infrastructure level
IP Multicast
[Figure: IP multicast as a “smart network” with per-group router state, connecting a source to receivers at Gatech, Stanford, Purdue, and Berkeley. Source: Sanjay Rao’s lecture from Purdue]
Multicast Overlay Network
Consists of user hosts and possibly dedicated servers scattered through the Internet
Hosts, servers, and the logical links between them form an overlay network, which multicasts traffic from the source to users
In IP multicast, routers are responsible for forwarding packets and applications run only on the end systems; in an overlay, the applications themselves make the forwarding decisions.
A logical network implemented on top of a physical network
Consists of application-layer links
An application-layer link is a logical link consisting of one or more links in the underlying network
Each node in the overlay processes and forwards packets in an application-specific way
Used by both CDNs and pure P2P systems
MBone
The multicast backbone (MBone) is an overlay network that implements IP multicast.
The MBone was an experimental backbone for IP multicast traffic across the Internet
It connects multicast-capable networks over the existing Internet infrastructure
One of the popular applications running on top of the MBone is Vic
Vic
Supports multiparty videoconferencing
Used to broadcast both seminars and meetings across the Internet, e.g., IETF meetings
Content distribution networks (CDNs)
Content replication
Challenging to stream large files (e.g., video) from a single origin server in real time
Solution: replicate content at hundreds of servers throughout the Internet
Content downloaded to CDN servers ahead of time
Placing content “close” to the user avoids the impairments (loss, delay) of sending content over long paths
CDN server typically in edge/access network
[Figure: origin server in North America, CDN distribution node, and CDN servers in S. America, Europe, and Asia]
Content distribution networks (CDNs)
Content replication
The CDN’s (e.g., Akamai’s) customer is the content provider (e.g., CNN)
CDN places CDN servers close to ISP access networks and the clients
CDN replicates customers’ content in CDN servers
When the provider updates content, the CDN updates its servers
Content distribution networks (CDNs)
When a client requests content, the content is provided by the CDN server that can best deliver it to that specific client
The closest CDN server to the client
A CDN server with a congestion-free path to the client
A CDN server typically contains objects from many content providers
CDN example
[Figure: CDN example]
1. Client sends an HTTP request for www.foo.com/sports/sports.html to the origin server
2. Client sends a DNS query for www.cdn.com to the CDN’s authoritative DNS server
3. Client sends an HTTP request for www.cdn.com/www.foo.com/sports/ruth.gif to a CDN server near the client
Origin server (www.foo.com)
distributes HTML
replaces http://www.foo.com/sports/ruth.gif
with http://www.cdn.com/www.foo.com/sports/ruth.gif (see the rewrite sketch below)
CDN company (cdn.com)
distributes gif files
uses its authoritative DNS server to route redirect requests
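A minimal sketch of the URL rewrite the origin server applies to its HTML, assuming a plain string substitution that prefixes object URLs with the CDN host; the helper name is invented, and the hosts follow the example above for illustration only:

```python
# Illustrative sketch of CDN URL rewriting in origin-server HTML (not a real CDN API).
CDN_HOST = "http://www.cdn.com/"       # CDN host prefix from the example above

def rewrite_for_cdn(html: str, origin_host: str = "www.foo.com") -> str:
    """Point embedded object URLs at the CDN instead of the origin server."""
    return html.replace(f"http://{origin_host}/", f"{CDN_HOST}{origin_host}/")

page = '<img src="http://www.foo.com/sports/ruth.gif">'
print(rewrite_for_cdn(page))
# <img src="http://www.cdn.com/www.foo.com/sports/ruth.gif">
```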
More about CDNs
Routing requests
CDNs make use of DNS redirection in order to guide browsers to the correct server.
The browser does a DNS lookup on www.cdn.com, which is forwarded to the CDN’s authoritative DNS server.
The CDN’s DNS server returns the IP address of the CDN server that is likely the best for the requesting browser.
When the query arrives at the authoritative DNS server:
the server determines the ISP from which the query originates
uses a “map” to determine the best CDN server (a server-selection sketch follows below)
CDN nodes create an application-layer overlay network
CDN: bring content closer to clients
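A minimal sketch of the server-selection step inside the CDN’s authoritative DNS server, assuming a static map from client-resolver prefixes to “best” CDN servers; the prefixes and server addresses are made up for illustration:

```python
import ipaddress

# Hypothetical map from client-resolver prefixes to "best" CDN servers.
SERVER_MAP = {
    ipaddress.ip_network("203.0.113.0/24"): "198.51.100.10",  # ISP A -> nearby edge server
    ipaddress.ip_network("192.0.2.0/24"):   "198.51.100.20",  # ISP B -> another edge server
}
DEFAULT_SERVER = "198.51.100.30"  # fallback edge server

def resolve(resolver_ip: str) -> str:
    """Return the CDN server IP to hand back for a DNS query from this resolver."""
    addr = ipaddress.ip_address(resolver_ip)
    for prefix, server in SERVER_MAP.items():
        if addr in prefix:
            return server
    return DEFAULT_SERVER

print(resolve("203.0.113.42"))  # -> 198.51.100.10
```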
Why P2P?
Previous problems
Sparse deployment of IP multicast
High bandwidth cost of server-based unicast and CDNs
These limit video broadcasting to only a subset of Internet content publishers
Why P2P?
Every node is both a server and a client
Easier to deploy applications at endpoints
No need to build and maintain expensive routers and expensive infrastructure
Potential for both performance improvement and additional robustness
Additional clients create additional servers for scalability
Performance penalty
Cannot completely prevent multiple overlay edges from traversing the same physical link
• Redundant traffic on physical links
Increased latency
Peer-to-peer Video Broadcasting
Characteristics of video broadcasting
Large scale, corresponding to tens of thousands of users simultaneously participating in the broadcast.
Performance-demanding,
• involving bandwidth requirements of hundreds of kilobits per second and even more.
Real-time constraints, requiring timely and continuous streaming delivery.
• While interactivity may not be critical and minor delays can be tolerated through buffering, it is critical to receive the video uninterrupted.
Gracefully degradable quality,
• enabling adaptive and flexible delivery that accommodates bandwidth heterogeneity and dynamics.
Peer-to-peer Video Broadcasting
Stringent real-time performance requirement
Bandwidth and latency
On-demand streaming – users are asynchronous
Audio/video conferencing – interactive, latency more critical
File downloading
No time constraint; segments of content can arrive out of order
Needs efficient indexing and search
P2P video broadcasting
Simultaneously supports a large number of participants
Dynamic changes to participant membership
High bandwidth requirement of the video
Needs efficient data communication
Overlay construction
Criteria:
Overlay efficiency
Scalability and load balancing
Self-organizing
Honor per-node bandwidth constraint
System considerations
Approaches
Tree-based
Data-driven randomized
P2P Overlay
Tree-based
Peers are organized into trees for delivering data
Each packet is disseminated using the same structure
Parent-child relationships
Push-based (see the sketch after this list)
• When a node receives a data packet, it forwards copies of the packet to each of its children.
Node failures result in poor transient performance.
Uplink bandwidth is not utilized at leaf nodes
• Data can be divided and disseminated along multiple trees (e.g., SplitStream)
The structure should be optimized to offer good performance to all receivers.
Must be repaired and maintained to avoid interruptions
Example: End System Multicast (ESM)
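A minimal sketch of push-based forwarding in a tree overlay; the node names and packet size are made up, and this illustrates only the parent-to-children push idea, not the ESM protocol itself:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TreeNode:
    """One peer in a tree-based overlay (hypothetical, for illustration only)."""
    name: str
    children: List["TreeNode"] = field(default_factory=list)

    def receive(self, packet: bytes) -> None:
        # Deliver locally (e.g., hand the packet to the player)...
        print(f"{self.name} received {len(packet)} bytes")
        # ...then push a copy to each child, so every packet follows the same tree.
        for child in self.children:
            child.receive(packet)

# Source -> relay -> two leaves; leaves forward nothing (unused uplink bandwidth).
leaf1, leaf2 = TreeNode("leaf1"), TreeNode("leaf2")
relay = TreeNode("relay", [leaf1, leaf2])
source = TreeNode("source", [relay])
source.receive(b"\x00" * 1316)    # push one video packet down the tree
```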
P2P Overlay
Data-driven
Does not construct and maintain an explicit structure for delivering data
Uses gossip algorithms to distribute data
• A node sends a newly generated message to a set of randomly selected nodes.
• These nodes do similarly in the next round, and so do other nodes, until the message has spread to all.
Pull-based (see the sketch after this list)
• Nodes maintain a set of partners
• Periodically exchange data availability with random partners and retrieve new data
• Redundancy is avoided since a node pulls data only if it does not already have it.
Similar to BitTorrent, but must consider real-time constraints
• A scheduling algorithm schedules the segments that must be downloaded to meet the playback deadlines
Examples: CoolStreaming, PPLive
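A minimal sketch of the pull-based exchange, assuming each peer advertises a buffer map of the segments it holds and requests missing segments closest to their playback deadline first; the segment numbering, window size, and partner selection are illustrative, not CoolStreaming’s or PPLive’s actual protocol:

```python
import random

class Peer:
    """Hypothetical data-driven peer: keeps a buffer map, pulls missing segments."""
    def __init__(self, name, segments=()):
        self.name = name
        self.buffer = set(segments)     # segment ids this peer already holds
        self.partners = []              # randomly chosen partner peers

    def pull_round(self, playback_point, window=10):
        """One periodic round: compare buffer maps with partners and request
        missing segments, earliest playback deadline first."""
        needed = [s for s in range(playback_point, playback_point + window)
                  if s not in self.buffer]
        for seg in sorted(needed):      # earliest deadline first
            suppliers = [p for p in self.partners if seg in p.buffer]
            if suppliers:
                supplier = random.choice(suppliers)   # pick a partner to pull from
                self.buffer.add(seg)                  # "download" seg from supplier

a = Peer("a", segments=range(0, 8))
b = Peer("b", segments=range(4, 12))
c = Peer("c")
c.partners = [a, b]
c.pull_round(playback_point=0)
print(sorted(c.buffer))                 # c now holds segments 0..9 within its window
```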
Tree-based vs. Data-driven
Data-driven
Simple
Suffers from a latency-overhead trade-off
Tree-based
No latency-overhead trade-off
Instability
Bandwidth under-utilization
A combination of both can be used
P2P live streaming system
CoolStreaming
X. Zhang, J. Liu, B. Li, and T. S. P. Yum. CoolStreaming/DONet: A data-driven overlay network for efficient live media streaming. In Proceedings of IEEE INFOCOM’05, March 2005.
PPLive
http://www.pplive.com
PPStream
http://www.ppstream.com
UUSee
http://www.uusee.com
AnySee
X. Liao, H. Jin, Y. Liu, L. M. Ni, and D. Deng. AnySee: Peer-to-peer live streaming. In Proceedings of IEEE INFOCOM’06, April 2006.
Joost
http://www.joost.com/
Case Study: PPLive
PPLive: free P2P-based IPTV
As of January 2006, the PPLive network provided 200+ channels with 400,000 daily users on average.
Typically over 100,000 simultaneous users
It now covers over 120 Chinese TV stations, 300 live channels, and 20,000 VOD (video-on-demand) channels (from Wikipedia)
The company claimed more than 200 million user installations and a 105 million monthly active user base (as of Dec 2010) (from Wikipedia)
The bit rates of video programs mainly range from 250 Kbps to 400 Kbps, with a few channels as high as 800 Kbps.
The channels are encoded in two video formats: Windows Media Video (WMV) or Real Video (RMVB).
The encoded video content is divided into chunks and distributed to users through the PPLive P2P network.
Employs proprietary signaling and video delivery protocols
Case Study: PPLive
BitTorrent is not a feasible video delivery architecture
It does not account for the real-time needs of IPTV
PPLive bears strong similarities to BitTorrent, but
Each video chunk has a playback deadline
No reciprocity mechanism is deployed to encourage sharing between peers
Two major application-level protocols
A gossip-based protocol
• Peer management
• Channel discovery
A P2P-based video distribution protocol
• High-quality video streaming
Data-driven P2P streaming
Case Study: PPLive
Channel and peer discovery:
1. User starts the PPLive software and becomes a peer node.
2. Contacts the channel server for the list of available channels.
3. Selects a channel.
4. Sends a query to the root server to retrieve an online peer list for this channel.
5. Finds active peers on the channel to share video chunks (a sketch of this join sequence follows below).
(from “Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System” by Hei et al.)
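A minimal sketch of this join sequence; the server objects, channel names, and peer addresses are all invented for illustration and do not reflect PPLive’s proprietary protocol or message formats:

```python
# Hypothetical illustration of a PPLive-style join sequence (steps 2-5 above).
CHANNEL_SERVER = {"CCTV3": "ch-3", "CCTV10": "ch-10"}          # channel name -> channel id
ROOT_SERVER = {"ch-3": ["10.0.0.1:4004", "10.0.0.2:4004"],     # channel id -> online peer list
               "ch-10": ["10.0.0.9:4004"]}

def join_channel(wanted_name):
    channels = CHANNEL_SERVER                      # step 2: get channel list
    channel_id = channels[wanted_name]             # step 3: select a channel
    peer_list = ROOT_SERVER[channel_id]            # step 4: query root server for peers
    active_peers = [p for p in peer_list]          # step 5: probe peers (stubbed as all alive)
    return channel_id, active_peers

print(join_channel("CCTV3"))    # -> ('ch-3', ['10.0.0.1:4004', '10.0.0.2:4004'])
```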
TV Engine
Downloads video chunks from the PPLive network
Streams the downloaded video to a local media player
The streaming process traverses two buffers (see the sketch below):
the PPLive TV engine buffer
the media player buffer
Cached content can be uploaded to other peers watching the same channel.
This peer may also upload cached video chunks to multiple peers.
A peer may also download media content from multiple active peers.
Received video chunks are reassembled in order and buffered in the queue of the PPLive TV engine, forming a local streaming file in memory.
When the streaming file length crosses a predefined threshold, the PPLive TV engine launches the media player, which downloads video content from the local HTTP streaming server.
After the buffer of the media player fills up to the required level, the actual video playback starts.
When PPLive starts, the PPLive TV engine downloads media content from peers aggressively to minimize the playback start-up delay.
PPLive uses TCP for both signaling and video streaming.
When the media player has received enough content and starts to play the media, the streaming process gradually stabilizes.
The PPLive TV engine streams data to the media player at the media playback rate.
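A minimal sketch of the two-buffer pipeline described above; the chunk size and threshold values are invented for illustration and are not measured PPLive parameters:

```python
from collections import deque

CHUNK = 16 * 1024                 # assumed chunk size in bytes (illustrative)
ENGINE_THRESHOLD = 50 * CHUNK     # engine launches the player past this level (assumed)
PLAYER_THRESHOLD = 20 * CHUNK     # player starts playback past this level (assumed)

engine_buffer = deque()           # PPLive TV engine buffer (chunks reassembled in order)
player_buffer = deque()           # media player buffer
player_running = False
playing = False

def on_chunk_from_peers(chunk: bytes) -> None:
    """Chunks arrive (over TCP) from peers and are queued in the engine buffer."""
    global player_running
    engine_buffer.append(chunk)
    if not player_running and sum(map(len, engine_buffer)) >= ENGINE_THRESHOLD:
        player_running = True     # engine launches the media player

def stream_to_player() -> None:
    """Engine feeds the player (via the local HTTP server) at the playback rate."""
    global playing
    if player_running and engine_buffer:
        player_buffer.append(engine_buffer.popleft())
    if not playing and sum(map(len, player_buffer)) >= PLAYER_THRESHOLD:
        playing = True            # playback actually starts

for _ in range(80):               # simulate 80 chunks arriving
    on_chunk_from_peers(b"\x00" * CHUNK)
    stream_to_player()
print(player_running, playing)    # -> True True
```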
Measurement setup
One residential and one campus PC “watched” channel CCTV3
The other residential PC and campus PC “watched” channel CCTV10
Each of these four traces lasted about 2 hours.
From the PPLive web site, CCTV3 is a popular channel with a 5-star popularity grade and CCTV10 is less popular with a 3-star popularity grade.
Start-up delays
A peer searches for peers and downloads data from active peers.
Two types of start-up delay:
the delay from when a channel is selected until the streaming player pops up;
the delay from when the player pops up until the playback actually starts.
The player pop-up delay is in general 10-15 seconds.
The player buffering delay is around 10-15 seconds.
Therefore, the total start-up delay is around 20-30 seconds.
Nevertheless, some less popular channels have a total start-up delay of up to 2 minutes.
Overall, PPLive exhibits reasonably good start-up user experience.
Video Traffic Redundancy
It is possible to download the same video blocks more than once
Transmission of redundant video is a waste of network bandwidth
Define the redundancy ratio as the ratio between the redundant traffic and the estimated media segment size (see the worked example below).
The traffic redundancy in PPLive is limited
Partially due to the long buffering period
Peers have enough time to locate peers in the same channel and exchange content availability information.
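As a concrete reading of this definition; the byte counts below are made-up numbers, not values from the measurement study:

```python
# Redundancy ratio = redundant traffic / estimated media segment size.
redundant_bytes = 3_200_000        # hypothetical bytes received more than once
media_segment_bytes = 160_000_000  # hypothetical estimated media segment size
redundancy_ratio = redundant_bytes / media_segment_bytes
print(f"redundancy ratio = {redundancy_ratio:.2%}")   # -> 2.00%
```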
Video Buffering
Estimated size of the media player buffer:
at least 5.37 MBytes
Estimated size of the PPLive engine buffer:
7.8 MBytes to 17.1 MBytes
Total buffer size in PPLive streaming:
10 - 30 MBytes
A commodity PC can easily meet this buffer requirement
PPLive Peering Statistics
A campus peer has many more active video peer neighbors than a residential peer due to its high-bandwidth access network.
A campus peer maintains a steady number of active TCP connections for video traffic exchanges.
Peers watching a less popular channel have difficulty finding enough peers for streaming the media.
If the number of active video peers drops, the peer searches for new peers for additional video download.
A peer constantly changes its upload and download neighbors.