
Teaching material based on Distributed Systems: Concepts and Design, Edition 3, Addison-Wesley 2001.
Copyright © George Coulouris, Jean Dollimore, Tim Kindberg 2001
email: [email protected]
This material is made available for private study and for direct use by individual teachers. It may not be included in any product or employed in any service without the written permission of the authors.
Viewing: These slides must be viewed in slide show mode.
Distributed Systems Course
Distributed Multimedia Systems
Chapter 15:
15.1 Introduction
15.2 Characteristics of multimedia data
15.3 Quality of service management
15.4 Resource management
15.5 Stream adaptation
15.6 Case study: Tiger video file server
15.7 Summary
Learning objectives
 To understand the nature of multimedia data and the
scheduling and resource issues associated with it.
 To become familiar with the components and design
of distributed multimedia applications.
 To understand the nature of quality of service and
the system support that it requires.
 To explore the design of a state-of-the-art, scalable
video file service, illustrating a radically novel design
approach for quality of service.
A distributed multimedia system
Figure 15.1 (diagram): a video camera and mike, two local networks joined by a wide area gateway, a video server, and a digital TV/radio server.
 Applications:
– non-interactive: net radio and TV, video-on-demand, e-learning, ...
– interactive: voice & video conference, interactive TV, tele-medicine, multi-user
games, live music, ...
Multimedia in a mobile environment
 Applications:
– Emergency response systems, mobile commerce, phone service,
entertainment, games, ...
Characteristics of multimedia applications
 Large quantities of continuous data
 Timely and smooth delivery is critical
– deadlines
– throughput and response time guarantees
 Interactive MM applications require low round-trip delays
 Need to co-exist with other applications
– must not hog resources
 Reconfiguration is a common occurrence
– varying resource requirements
 Resources required, at the right time and in the right quantities:
– processor cycles in workstations and servers
– network bandwidth (+ latency)
– dedicated memory
– disk bandwidth (for stored media)
Application requirements
 Network phone and audio conferencing
– relatively low bandwidth (~64 kbits/s), but delay times must be short (<250 ms round-trip)
 Video-on-demand services
– high bandwidth (~10 Mbits/s), critical deadlines, latency not critical
 Simple video conference
– many high-bandwidth streams to each node (~1.5 Mbits/s each), low latency (<100 ms round-trip), synchronised streams
 Music rehearsal and performance facility
– high bandwidth (~1.4 Mbits/s), very low latency (<100 ms round-trip), highly synchronised media (sound and video < 50 ms)
System support issues and requirements
 Scheduling and resource allocation in most current OSs
divides the resources equally amongst all comers (processes)
– no limit on load
– ∴ can't guarantee throughput or response time
 MM and other time-critical applications require resource
allocation and scheduling to meet deadlines
– Quality of Service (QoS) management:
 Admission control: controls demand
 QoS negotiation: enables applications to negotiate admission and reconfigurations
 Resource management: guarantees availability of resources for admitted applications
– real-time processor and other resource scheduling
Characteristics of typical multimedia streams
Figure 15.3

                                        Data rate        Sample or frame
                                        (approximate)    size                                 Frequency
Telephone speech                        64 kbps          8 bits                               8000/sec
CD-quality sound                        1.4 Mbps         16 bits                              44,000/sec
Standard TV video (uncompressed)        120 Mbps         up to 640 x 480 pixels x 16 bits     24/sec
Standard TV video (MPEG-1 compressed)   1.5 Mbps         variable                             24/sec
HDTV video (uncompressed)               1000–3000 Mbps   up to 1920 x 1080 pixels x 24 bits   24–60/sec
HDTV video (MPEG-2 compressed)          10–30 Mbps       variable                             24–60/sec
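As a quick sanity check on the uncompressed figures in Figure 15.3, the raw rates can be computed directly; the small Python sketch below (not part of the original slides) does the arithmetic.

```python
# Sanity check on the uncompressed data rates in Figure 15.3.
def raw_rate_mbps(width, height, bits_per_pixel, frames_per_sec):
    """Raw (uncompressed) video data rate in Mbps."""
    return width * height * bits_per_pixel * frames_per_sec / 1e6

# Standard TV: 640 x 480 pixels x 16 bits at 24 frames/sec
print(raw_rate_mbps(640, 480, 16, 24))    # ~118 Mbps, i.e. the table's ~120 Mbps

# HDTV: 1920 x 1080 pixels x 24 bits at 60 frames/sec
print(raw_rate_mbps(1920, 1080, 24, 60))  # ~2986 Mbps, the top of the 1000-3000 Mbps range

# CD-quality sound: 16 bits x 44,000 samples/sec (x 2 channels for stereo)
print(16 * 44_000 * 2 / 1e6)              # ~1.4 Mbps
```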
Typical infrastructure components for multimedia applications
Figures 15.4 & 15.5

Figure 15.4 (diagram): two PCs/workstations linked by network connections; arrows denote multimedia streams flowing between a camera, microphones, codecs, a mixer, the window system, a video file system with video store, and the screen. Letters A–M label the components. White boxes represent media processing components, many of which are implemented in software, including:
– codec: coding/decoding filter
– mixer: sound-mixing component

Figure 15.5: QoS specifications for the components

Component            Bandwidth                                         Latency       Loss rate   Resources required
Camera               Out: 10 frames/sec, raw video 640x480x16 bits     Zero          —           —
Codec                In: 10 frames/sec, raw video; Out: MPEG-1 stream  Interactive   Low         10 ms CPU each 100 ms; 10 Mbytes RAM
Mixer                In: 2 x 44 kbps audio; Out: 1 x 44 kbps audio     Interactive   Very low    1 ms CPU each 100 ms; 1 Mbytes RAM
Window system        In: various; Out: 50 frames/sec framebuffer       Interactive   Low         5 ms CPU each 100 ms; 5 Mbytes RAM
Network connection   In/Out: MPEG-1 stream, approx. 1.5 Mbps           Interactive   Low         1.5 Mbps, low-loss stream protocol
Network connection   In/Out: audio 44 kbps                             Interactive   Very low    44 kbps, very low-loss stream protocol

Notes: this application involves multiple concurrent processes in the PCs; other applications may also be running concurrently on the same computers; they all share processing and network resources.
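Summing the per-component budgets in Figure 15.5 gives a feel for one PC's share of the load; the sketch below (not from the slides) just totals the table's CPU and RAM columns.

```python
# Totalling the per-component budgets from Figure 15.5 for one PC
# (codec + mixer + window system); figures come straight from the table.
components = {
    # name: (CPU ms per 100 ms, RAM in Mbytes)
    "codec": (10, 10),
    "mixer": (1, 1),
    "window system": (5, 5),
}
cpu_percent = sum(cpu for cpu, _ in components.values())   # ms per 100 ms ~= % CPU
ram_mbytes = sum(ram for _, ram in components.values())
print(cpu_percent, ram_mbytes)   # 16% of one CPU, 16 Mbytes of RAM
```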
Quality of service management
 Allocate resources to application processes
– according to their needs in order to achieve the desired quality of multimedia delivery
 Scheduling and resource allocation in most current OSs
divides the resources equally amongst all processes
– no limit on load
– ∴ can't guarantee throughput or response time
 Elements of Quality of Service (QoS) management
– Admission control: controls demand
– QoS negotiation: enables applications to negotiate admission and reconfigurations
– Resource management: guarantees availability of resources for admitted applications
– real-time processor and other resource scheduling
The QoS manager’s task
Figure 15.6
Figure 15.6 (flowchart) – admission control and QoS negotiation:
1. Application components specify their QoS requirements to the QoS manager (as a flow spec).
2. The QoS manager evaluates the new requirements against the available resources. Sufficient?
– Yes: reserve the requested resources, issue a resource contract and allow the application to proceed.
– No: negotiate reduced resource provision with the application. If agreement is reached, reserve and proceed; if not, do not allow the application to proceed.
3. The application runs with resources as per the resource contract. If it notifies the QoS manager of increased resource requirements, the cycle repeats.
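A minimal sketch of the admission loop in Figure 15.6 (not from the slides; QoSManager, FlowSpec and the negotiate callback are hypothetical names):

```python
# Hypothetical sketch of the Figure 15.6 admission loop; names are illustrative.
from dataclasses import dataclass

@dataclass
class FlowSpec:
    bandwidth_kbps: int   # requested rate
    latency_ms: int       # acceptable end-to-end delay

class QoSManager:
    def __init__(self, capacity_kbps):
        self.available = capacity_kbps   # resources not yet reserved

    def admit(self, spec, negotiate):
        """Evaluate a flow spec; reserve resources or negotiate a reduction."""
        while True:
            if spec.bandwidth_kbps <= self.available:   # Sufficient?
                self.available -= spec.bandwidth_kbps   # reserve: resource contract
                return spec                             # application may proceed
            spec = negotiate(spec, self.available)      # ask app to reduce its demand
            if spec is None:                            # no agreement
                return None                             # do not allow app to proceed

manager = QoSManager(capacity_kbps=5_000)
# The application accepts any reduced rate of at least 1 Mbps.
contract = manager.admit(
    FlowSpec(bandwidth_kbps=8_000, latency_ms=100),
    negotiate=lambda spec, avail: FlowSpec(avail, spec.latency_ms) if avail >= 1_000 else None,
)
print(contract)   # FlowSpec(bandwidth_kbps=5000, latency_ms=100)
```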
QoS Parameters
 Bandwidth
– rate of flow of multimedia data
 Latency
– time required for the end-to-end transmission of a single data element
 Jitter
– variation in latency: dL/dt
 Loss rate
– the proportion of data elements that can be dropped or delivered late

Figure 15.8 The RFC 1363 Flow Spec
Protocol version
Maximum transmission unit
Bandwidth:  Token bucket rate          (burstiness)
            Token bucket size
            Maximum transmission rate  (maximum rate)
Delay:      Minimum delay noticed      (acceptable latency)
            Maximum delay variation    (acceptable jitter)
Loss:       Loss sensitivity           (percentage per T)
            Burst loss sensitivity     (maximum consecutive loss)
            Loss interval              (T)
Quality of guarantee                   (value)
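The flow spec can be pictured as a record; the sketch below (not from the slides) uses the field names of Figure 15.8, with units and types that are assumptions:

```python
# Hypothetical representation of the RFC 1363 flow spec fields shown in
# Figure 15.8; the field names follow the figure, the types/units are assumptions.
from dataclasses import dataclass

@dataclass
class FlowSpec1363:
    protocol_version: int
    maximum_transmission_unit: int     # bytes
    # Bandwidth
    token_bucket_rate: int             # bytes/sec; bounds the average rate
    token_bucket_size: int             # bytes; bounds burstiness
    maximum_transmission_rate: int     # bytes/sec; peak rate
    # Delay
    minimum_delay_noticed: int         # ms; acceptable latency
    maximum_delay_variation: int       # ms; acceptable jitter
    # Loss
    loss_sensitivity: int              # acceptable losses per loss_interval
    burst_loss_sensitivity: int        # maximum consecutive losses
    loss_interval: int                 # ms; the interval T
    # Guarantee
    quality_of_guarantee: int          # how firm the guarantee must be
```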
Managing the flow of multimedia data
 Flows are variable
– video compression methods such as MPEG (1–4) are based on similarities between consecutive frames
– can produce large variations in data rate
 Burstiness
– Linear bounded arrival process (LBAP) model (see the sketch below):
 maximum flow per interval t = Rt + B (R = average rate, B = max. burst)
– buffer requirements are determined by burstiness
– latency and jitter are affected (buffers introduce additional delays)
 Traffic shaping
– method for scheduling the way a buffer is emptied
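A flow conforms to the LBAP model if it never exceeds Rt + B over any interval t; the sketch below (not from the slides) checks a trace of (time, bytes) arrivals against that bound:

```python
# Minimal LBAP conformance check (illustrative): a flow conforms if, over
# every interval of length t, it delivers at most R*t + B bytes
# (R = average rate, B = maximum burst).
def conforms_to_lbap(arrivals, rate, burst):
    """arrivals: list of (time_sec, nbytes), sorted by time."""
    for i, (t_start, _) in enumerate(arrivals):
        total = 0
        for t, nbytes in arrivals[i:]:
            total += nbytes
            if total > rate * (t - t_start) + burst:
                return False
    return True

# A 1 Mbyte/s flow with bursts of up to 64 Kbytes:
trace = [(0.0, 64_000), (0.2, 50_000), (0.4, 60_000)]
print(conforms_to_lbap(trace, rate=1_000_000, burst=64_000))   # True
```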
Traffic shaping algorithms – leaky bucket algorithm
Figure 15.7 (a) Leaky bucket

Figure 15.7a (diagram): process 1 fills a buffer in bursts; process 2 drains it at a steady rate.

Analogue of a leaky bucket:
– process 1 places data into a buffer in bursts
– process 2 is scheduled to remove data regularly in smaller amounts
– the size of the buffer, B, determines:
 the maximum permissible burst without loss
 the maximum delay
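A minimal leaky-bucket sketch (not from the slides; in a real system the drain step would run on a timer):

```python
# Illustrative leaky bucket: bursty writes in, steady drain out.
from collections import deque

class LeakyBucket:
    def __init__(self, capacity_bytes, drain_bytes_per_tick):
        self.capacity = capacity_bytes          # B: bounds burst and delay
        self.drain = drain_bytes_per_tick       # steady output rate
        self.queue = deque()
        self.level = 0

    def put(self, packet_bytes):
        """Process 1: add a burst; data beyond the bucket's capacity is lost."""
        if self.level + packet_bytes > self.capacity:
            return False                        # overflow: packet dropped
        self.queue.append(packet_bytes)
        self.level += packet_bytes
        return True

    def tick(self):
        """Process 2: runs at regular intervals, emits up to `drain` bytes.
        Assumes individual packets are no larger than `drain`."""
        budget = self.drain
        out = 0
        while self.queue and self.queue[0] <= budget:
            n = self.queue.popleft()
            budget -= n
            self.level -= n
            out += n
        return out                              # bytes sent this tick
```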
Traffic shaping algorithms – token bucket algorithm
Figure 15.7 (b) Token bucket

Figure 15.7b (diagram): process 1 supplies data; process 2 (the token generator) issues tokens, i.e. permits to place x bytes into the output buffer; process 3 combines the two to produce output.

Implements LBAP:
– process 1 delivers data in bursts
– process 2 generates tokens at a fixed rate
– process 3 receives tokens and uses them to deliver output as quickly as it gets data from process 1
Result: bursts in output can occur when some tokens have accumulated
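A matching token-bucket sketch (not from the slides), with token rate R and bucket size B as in the LBAP model:

```python
# Illustrative token bucket: output is limited to R*t + B over any interval t.
class TokenBucket:
    def __init__(self, rate_bytes_per_sec, bucket_size_bytes):
        self.rate = rate_bytes_per_sec     # R: token generation rate
        self.size = bucket_size_bytes      # B: maximum tokens that can accumulate
        self.tokens = bucket_size_bytes
        self.last = 0.0

    def try_send(self, nbytes, now):
        """Process 3: send nbytes if enough tokens have accumulated."""
        # Process 2: tokens accrue at a fixed rate, capped at the bucket size.
        self.tokens = min(self.size, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True       # sent immediately; accumulated tokens allow a burst
        return False          # not enough tokens yet; caller must wait or buffer

tb = TokenBucket(rate_bytes_per_sec=1_000_000, bucket_size_bytes=64_000)
print(tb.try_send(64_000, now=0.0))   # True: a full burst B goes out at once
print(tb.try_send(64_000, now=0.01))  # False: only ~10,000 tokens have accrued
```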
Admission control
Admission control delivers a contract to the application guaranteeing:
 for each network connection: bandwidth, latency
 for each computer: CPU time available at specific intervals, memory
 for disks, etc.: bandwidth, latency
Before admission, it must assess resource requirements and reserve them for the application
– flow specs provide some information for admission control, but not all – assessment procedures are needed
– there is an optimisation problem:
 clients don't use all of the resources that they requested
 flow specs may permit a range of qualities
– the admission controller must negotiate with applications to produce an acceptable result
Resource management
 Scheduling of resources to meet the existing guarantees, e.g. for each computer:
– CPU time, available at specific intervals
– memory
 Fair scheduling allows all processes some portion of the resources based on fairness:
– e.g. round-robin scheduling (equal turns), fair queuing (keep queue lengths equal)
– not appropriate for real-time MM because there are deadlines for the delivery of data
 Real-time scheduling: traditionally used in special OSs for system control applications, e.g. avionics. RT schedulers must ensure that tasks are completed by a scheduled time.
 Real-time MM requires real-time scheduling with very frequent deadlines. Suitable types of scheduling are:
– Earliest deadline first (EDF): each task specifies a deadline T and CPU seconds S to the scheduler for each work item (e.g. video frame). The EDF scheduler schedules the task to run at least S seconds before T (and pre-empts it after S if it hasn't yielded). It has been shown that EDF will find a schedule that meets the deadlines, if one exists. (But for MM, S is likely to be a millisecond or so, and there is a danger that the scheduler may have to run so frequently that it hogs the CPU.)
– Rate-monotonic: assigns priorities to tasks according to their rate of data throughput (or workload). Uses less CPU for scheduling decisions. Has been shown to work well where the total workload is < 69% of the CPU.
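The 69% figure is the Liu and Layland utilisation bound n(2^(1/n) − 1), which tends to ln 2 ≈ 0.693 for large n; a small schedulability test (not from the slides):

```python
# Rate-monotonic schedulability test (Liu & Layland bound).
# A task set with CPU demand C_i every period T_i is schedulable under
# rate-monotonic priorities if total utilisation <= n * (2**(1/n) - 1);
# this bound tends to ln 2 ~= 0.69 as n grows -- the slide's "69% of CPU".
def rm_schedulable(tasks):
    """tasks: list of (cpu_seconds, period_seconds) per work item."""
    n = len(tasks)
    utilisation = sum(c / t for c, t in tasks)
    return utilisation <= n * (2 ** (1 / n) - 1)

# e.g. a 25 fps video decoder (8 ms per frame every 40 ms) plus an audio
# mixer (1 ms every 10 ms): utilisation = 0.2 + 0.1 = 0.3 -> schedulable.
print(rm_schedulable([(0.008, 0.040), (0.001, 0.010)]))   # True
```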
Scaling and filtering
Figure 15.9

Figure 15.9 (diagram): a source feeding targets over high-, medium- and low-bandwidth paths, with filters reducing the flow at intermediate nodes.

 Scaling reduces the flow rate at the source (see the sketch below)
– temporal: skip frames or audio samples
– spatial: reduce frame size or audio sample quality
 Filtering reduces the flow at intermediate points
– RSVP is a QoS negotiation protocol that negotiates the rate at each intermediate node, working from the targets to the source.
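A minimal temporal-scaling sketch (not from the slides): drop frames so the output approximates a target rate:

```python
# Illustrative temporal scaling: forward only enough frames to meet a
# target frame rate, dropping the rest at the source (or at a filter node).
def temporal_scale(frames, source_fps, target_fps):
    """Yield a subset of frames so the output approximates target_fps."""
    step = source_fps / target_fps     # e.g. 30 -> 10 fps keeps every 3rd frame
    next_keep = 0.0
    for i, frame in enumerate(frames):
        if i >= next_keep:
            next_keep += step
            yield frame

kept = list(temporal_scale(range(30), source_fps=30, target_fps=10))
print(len(kept))   # 10 frames out of 30
```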
QoS and the Internet
 Very little QoS in the Internet at present
– new protocols to support QoS have been developed, but their implementation raises some difficult issues about the management of resources in the Internet
 RSVP
– network resource reservation
– doesn't ensure enforcement of reservations
 RTP
– real-time data transmission over IP
 Need to avoid adding undesirable complexity to the Internet
 IPv6 has some hooks for QoS

IPv6 header layout (diagram): Version (4 bits), Priority (4 bits), Flow label (24 bits), Payload length (16 bits), Next header (8 bits), Hop limit (8 bits), Source address (128 bits), Destination address (128 bits).
Tiger design goals
 Video on demand for a large number of users
 Quality of service
 Scalable and distributed
 Low-cost hardware
 Fault tolerant

(Diagram: clients connected to Tiger over a network.)
Tiger architecture
 Storage organization
– striping
– mirroring
 Distributed schedule
 Tolerate failure of any single computer or disk
 Network support
 Other functions
– pause, stop, start
Tiger video file server hardware configuration
Figure 15.10

Figure 15.10 (diagram): a controller and cubs 0 to n, all standard PCs, joined by a low-bandwidth network; each cub holds disks (cub 0: disks 0 and n+1; cub 1: disks 1 and n+2; ...; cub n: disks n and 2n+1) and connects to a high-bandwidth ATM switching network for video distribution to clients. Start/stop requests from clients go to the controller.

Each movie is stored in 0.5 Mbyte blocks (~7000 of them) across all disks in the order of the disk numbers, wrapping around after n+1 blocks.
Block i is mirrored in smaller blocks on disks i+1 to i+d, where d is the decluster factor.
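A sketch of this placement rule (not from the slides; the modulo arithmetic and the num_disks parameter are assumptions consistent with the wrap-around description):

```python
# Illustrative Tiger block placement: block i of a movie is striped to
# disk i mod num_disks, and mirrored in d smaller fragments on the next
# d disks (the "decluster factor"), so a failed disk's load is shared.
def primary_disk(block, num_disks):
    return block % num_disks                 # striping with wrap-around

def mirror_disks(block, num_disks, d):
    p = primary_disk(block, num_disks)
    return [(p + k) % num_disks for k in range(1, d + 1)]

# Block 10 on an 8-disk array with decluster factor 3:
print(primary_disk(10, 8))        # 2
print(mirror_disks(10, 8, 3))     # [3, 4, 5]
```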
Tiger schedule
Figure 15.11

Figure 15.11 (diagram): a circular schedule of slots (slot 0 .. slot 7); each slot is either free or holds a viewer's state (viewer 0 .. viewer 4). Block service time t is the time to service one slot; block play time T is the time one block takes to play. viewer ≡ client.

Stream capacity of a disk = T/t (typically ~5)
Stream capacity of a cub with n disks = n x T/t

Viewer state:
– network address of the client
– fileID of the current movie
– number of the next block
– viewer's next play slot

Cub algorithm, performed for each occupied slot in time t:
1. Read the next block into buffer storage at the cub.
2. Packetize the block and deliver it to the cub's ATM network controller with the address of the client computer.
3. Update the viewer state in the schedule to show the new next block and play sequence number, and pass the updated slot to the next cub.
Clients buffer blocks and schedule their display on screen.
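A quick check of the capacity formula (not from the slides; the values of T and t are made up to match the "typically ~5" figure):

```python
# Stream capacity per the schedule formula: a disk can feed T/t streams,
# since each stream only needs t seconds of service per T seconds of playback.
# The numbers below are illustrative, chosen to match "typically ~5".
block_play_time_T = 0.5      # seconds one block lasts at the stream rate
block_service_time_t = 0.1   # seconds to read and ship one block

disk_capacity = block_play_time_T / block_service_time_t
print(disk_capacity)          # 5.0 streams per disk
print(4 * disk_capacity)      # 20.0 streams for a cub with 4 disks
```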
Tiger performance and scalability
 1994 measurements:
– 5 cubs: 133 MHz Pentium, Windows NT, 3 x 2 GB disks each, ATM network
– supported streaming movies to 68 clients simultaneously without lost frames
– with one cub down, frame loss rate 0.02%
 1997 measurements:
– 14 cubs: 4 disks each, ATM network
– supported streaming 2 Mbps movies to 602 clients simultaneously with a loss rate of < 0.01%
– with one cub failed, loss rate < 0.04%
 The designers suggested that Tiger could be scaled to 1000 cubs supporting 30,000 clients.
Summary
 MM applications and systems require new system
mechanisms to handle large volumes of time-dependent data
in real time (media streams).
 The most important mechanism is QoS management, which
includes resource negotiation, admission control, resource
reservation and resource management.
 Negotiation and admission control ensure that resources are
not over-allocated; resource management ensures that
admitted tasks receive the resources they were allocated.
 Tiger file server: a case study in the scalable design of a stream-oriented service with QoS.