Broken Hope, Shattered Dreams


The Reality and Mythology of
QoS and H.323
[email protected]
[email protected]
Overview
• H.323 bounds testing
• QoS models
• Implications of applying models
• Engineering to the need
Test Motivation
• Abilene is trying to provide DiffServ EF.
• Is H.323 a suitable candidate for APS?
• DiffServ lacks hard bounds; it is entirely probabilistic.
• What would really help? What are the performance bounds?
Video Artifacts
• Spatial Augmentation – Video artifacts are added to the picture: objects appear that are not in the captured video, such as video tiles.
• Spatial Depreciation – Parts of the picture, or objects in the picture, are missing.
• Temporal Distortion – Over time the “flow” of an event is distorted by missing data, resulting in inter-frame jerkiness in mild cases and in video freezing in more severe cases.
Video Artifacts
• Audio Augmentation – Audio artifacts are added to the audio stream, such as pops, clicks, and hiss.
• Audio Depreciation – Parts of the audio are missing.
Scope of H.323 Bounds Testing
– What network conditions can be mapped to particular qualities of video?
– Quality assessment can be highly subjective.
– We did not want to run a cognitive-science experiment.
– We needed a simple, reproducible test procedure.
Test Procedure
• Still office scene: count the number of defects over a 60-second sample.
• Motion in the scene: count the number of seconds needed to recover.
• Tested in a variety of setups:
– Point-to-point
– MCU
– Cascaded MCUs
– Isolated latency, loss, and jitter
Network Emulator
• Operating System: Linux Mandrake 7.2, kernel recompiled and optimized for the device to act as a router.
• CPU: Pentium III 733 MHz
• Memory: 256 MB
• Motherboard: Asus CUSLC2-C AGP4X
• NICs: Intel EtherPro 10/100
• Emulator software: Nistnet 2.1.0
Used to test H.323
• Verified the Nistnet system prior to testing.
– Tested the platform with SmartBits.
– All parameters were met within ±1 ms (actual resolution ~0.5 ms).
– With SmartBits we could also verify switches, etc., to further validate our findings; worst case, total accuracy was within ±3 ms.
Point-to-Point tests
• Latency does not matter. (holds true for all
scenarios)
[Chart: Drop Errored Seconds – errored seconds per one-minute sample vs. percentage of packets dropped (Appliance vs. NIC)]
[Chart: Recovery Times – seconds for recovery vs. percentage of dropped packets (Appliance vs. NIC)]
[Chart: Jitter Errored Seconds – errored seconds per one-minute sample vs. IP delay variation, 10–60 ms (Appliance vs. NIC)]
[Chart: Jitter Recovery Time – recovery time in seconds vs. IP delay variation (Appliance vs. NIC)]
[Chart: MCU to Client Loss Test – errored seconds per one-minute sampling period vs. percentage of dropped packets (Appliance vs. NIC)]
[Chart: MCU to Client Recovery Time for Loss – recovery time in seconds vs. percentage of dropped packets (Appliance vs. NIC)]
[Chart: MCU to Client Jitter Errored Seconds – errored seconds per one-minute sampling period vs. IP delay variation (Appliance vs. NIC)]
[Chart: MCU to Client Recovery Time for Jitter – recovery time (60 s max) vs. IP delay variation (Appliance vs. NIC)]
[Chart: One Way Loss Test – number of errors per one-minute sample vs. percentage of dropped packets (Appliance vs. NIC)]
[Chart: Recovery Time for One Way Packet Loss – recovery in seconds per one-minute sample vs. percentage of dropped packets (Appliance vs. NIC)]
[Chart: One Way Jitter with MCU – errored seconds per one-minute sample vs. IP delay variation (Appliance vs. NIC)]
[Chart: One Way Jitter with MCU – recovery time per one-minute sample vs. IP delay variation (Appliance vs. NIC)]
[Chart: Two Way Loss Via MCU – errored seconds per one-minute sample vs. percentage of dropped packets (Appliance vs. NIC)]
[Chart: Recovery Time for Two Way Loss Via MCU – recovery time per one-minute sample vs. percentage of dropped packets (Appliance vs. NIC)]
[Chart: Two Way Jitter with MCU – errored seconds per one-minute sample vs. IP delay variation (Appliance vs. NIC)]
[Chart: Two Way Jitter with MCU Recovery Times – recovery time per one-minute sample vs. IP delay variation (Appliance vs. NIC)]
[Chart: Cascaded MCU One Way Test – errored seconds per one-minute sample vs. percentage of dropped packets (Appliance vs. NIC)]
[Chart: Cascaded MCU One Way Recovery Times – recovery times vs. percentage of lost packets (Appliance vs. NIC)]
[Chart: Cascaded MCU Jitter One Way Test – errored seconds per one-minute sample vs. IP delay variation (Appliance vs. NIC)]
[Chart: Cascaded MCU Jitter One Way Test – recovery time per one-minute sample vs. IP delay variation (Appliance vs. NIC)]
Note: network disruption was injected into the MCU from the NIC side.
End-to-end Delay Components
• Sender side: compression delay, transmission delay, electronic delay
• Network: propagation delay, processing delay, queuing delay
• Receiver side: resynchronization delay, decompression delay, presentation delay
Delay Values
• Transmission delay + electronic delay:
– Modem delay = 40 ms
– Transmission delay for 10 characters over 56 kbps = 80 bits / 56,000 bps ≈ 1.4 ms
• Switch propagation delay: < 2 ms
• Presentation delay = 17 ms
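The transmission-delay arithmetic above can be checked with a short script (a sketch; the 10-character payload and 56 kbps link are the slide's example values):

```python
# Serialization (transmission) delay: bits on the wire divided by link speed.
BITS_PER_CHAR = 8
LINK_BPS = 56_000  # 56 kbps modem link from the slide

def transmission_delay_ms(chars: int, link_bps: int) -> float:
    """Milliseconds to serialize `chars` characters onto a link of `link_bps`."""
    bits = chars * BITS_PER_CHAR
    return bits / link_bps * 1000

print(f"{transmission_delay_ms(10, LINK_BPS):.1f} ms")  # 80 / 56000 bps -> 1.4 ms
```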
Encode and Decode Latency
[Diagram: a metronome (pulse generator) drives Endpoint 1's mic input; Endpoint 1 connects to Endpoint 2 through a switch (and through MCUs in the MCU tests); the oscilloscope takes input A from the metronome and input B from Endpoint 2's audio output.]
Oscilloscope Waveforms
[Figure: oscilloscope traces – channel A: metronome input; channel B: Endpoint 2 audio output]
Experiment and Results
• Dialing speeds: 256K, 384K, 512K, 768K
• Metronome setting: 113
• Propagation delay + switch delay ~ 0
• Encode + decode delay ~ 240 ms
(independent of dialing speed)
• Delay through MCU ~120ms to ~200ms
(delay increasing with dialing speed)
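The MCU transit delay follows by subtracting the other measured components from the scope reading. A sketch: the 240 ms encode+decode figure and ~0 propagation are the slide's measured values, but the `reading` values below are hypothetical scope readings chosen to reproduce the slide's MCU range.

```python
# Back out the MCU transit delay from the end-to-end oscilloscope measurement.
ENCODE_DECODE_MS = 240   # measured; independent of dialing speed
PROP_PLUS_SWITCH_MS = 0  # measured ~0 in this setup

def mcu_delay_ms(measured_ms: float) -> float:
    """End-to-end scope reading minus the non-MCU delay components."""
    return measured_ms - ENCODE_DECODE_MS - PROP_PLUS_SWITCH_MS

for reading in (360, 440):  # hypothetical scope readings at low/high dialing speeds
    print(mcu_delay_ms(reading))  # -> 120, then 200 (the slide's MCU delay range)
```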
Network Requirements
• Latency – users may find it annoying, but it does not break the protocol.
• Loss – Can tolerate some loss; it must stay below 1% point-to-point and 0.75% through an MCU.
• Jitter – Very jitter-intolerant: for 30 fps, jitter must stay below ~33 ms. Seems especially intolerant in the cascaded-MCU scenario.
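The ~33 ms jitter bound is simply the inter-frame interval at 30 fps: a packet delayed by more than one frame time misses its display slot. A minimal check:

```python
# Inter-frame interval: the natural jitter budget for a fixed-rate video stream.
def frame_interval_ms(fps: float) -> float:
    """Time between frames, in milliseconds, at a given frame rate."""
    return 1000.0 / fps

print(f"{frame_interval_ms(30):.1f} ms")  # -> 33.3 ms
```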
Network Calculus 101
• All functions are cumulative, i.e. wide-sense increasing.
• Uses min-plus algebra.
• Uses classes of primitive functions to describe various network behaviors.
• Employs convolution and deconvolution with these primitives to arrive at meaningful conclusions.
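The min-plus convolution the slide refers to is (f ⊗ g)(t) = inf over 0 ≤ s ≤ t of f(s) + g(t − s). A sketch on a discrete time grid, using the two primitive shapes mentioned later (affine arrival curve, rate-latency service curve); the example parameters are illustrative, not from the talk:

```python
# Discrete min-plus convolution of two curves given as functions of integer time.
def min_plus_conv(f, g, horizon):
    """(f (x) g)(t) = min over 0<=s<=t of f(s) + g(t-s), for t in [0, horizon)."""
    return [min(f(s) + g(t - s) for s in range(t + 1)) for t in range(horizon)]

gamma = lambda t: 5 + 2 * t          # affine curve gamma_{r,b}: r = 2, b = 5
beta = lambda t: 4 * max(t - 3, 0)   # rate-latency curve beta_{R,T}: R = 4, T = 3

print(min_plus_conv(gamma, beta, 6))  # -> [5, 5, 5, 5, 7, 9]
```

The flat prefix of the result reflects the service curve's latency T: nothing beyond the initial burst is guaranteed until t > T.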
Models
• IntServ – Has the necessary per-flow state but is not here yet.
– It also probably has many unforeseen maintenance and administrative issues (see next section).
– Experience from ATM SVCs suggests many scalability issues; possible solutions include MPLS or policy routing.
Models
• Any end-to-end solution has a scalability problem, in the sense that in packet-switched networks the solution vector depends on more than the number of hops, delay, etc.
• x⃗ ≤ A·x⃗ + α⃗
• In other words, it is also a function of topology. (More under DiffServ.)
Source: Network Calculus: A Theory of Deterministic Queuing Systems for the Internet, by Jean-Yves Le Boudec & Patrick Thiran, Springer-Verlag, Berlin/Heidelberg, 2001.
Models
• DiffServ lacks the per-flow state necessary for tight performance bounds because…
• β1*(t) = [β(t) − α2(t)]⁺, where β is the rate-latency function (service curve) β_{R,T}(t) = R·[t − T]⁺.
• b1* = b1 + r1·T + r1·(b2 + r2·T)/(R − r2), where b is a component of the affine function γ_{r,b}(t) = b + r·t for t > 0.
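The burstiness-increase formula can be evaluated directly. A sketch of b1* = b1 + r1·T + r1·(b2 + r2·T)/(R − r2); the numeric values are illustrative, not from the talk:

```python
# Output burstiness of flow 1 after sharing a rate-latency server beta_{R,T}
# with a competing affine-constrained flow (b2, r2).
def output_burst(b1, r1, b2, r2, R, T):
    """b1* = b1 + r1*T + r1*(b2 + r2*T) / (R - r2)."""
    assert R > r2, "aggregate service rate must exceed the competing flow's rate"
    return b1 + r1 * T + r1 * (b2 + r2 * T) / (R - r2)

print(output_burst(b1=1.0, r1=1.0, b2=2.0, r2=2.0, R=4.0, T=0.5))  # -> 3.0
```

Note how b1* grows without bound as r2 approaches R: without per-flow state, one flow's burstiness inflates the other's bound.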
Models
• ν ≈ 0.564 for bounded delay, so when ν0 converges to ν the latency bound explodes to infinity. Here νl = Σ_{i∋l} ri/Cl, where ν = link utilization, i = flow, r = flow rate, and C = service rate.
Engineering to the need
• What realistically can we do?
– It depends on one's network.
– Appropriate queuing on congested links, for perhaps a single flow to only a few flows.
– Packet shaping at the receiver with a greedy packet shaper: a greedy shaper will not increase latency or buffering requirements if and only if the network was previously lossless.
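A greedy shaper releases each packet at the earliest instant its shaping curve σ(t) = b + r·t allows, which for the affine curve is a token bucket. A minimal sketch (packet sizes and arrival times are illustrative):

```python
# Greedy shaper for the affine shaping curve sigma(t) = b + r*t, i.e. a token
# bucket of rate `rate` and depth `burst`: packets are delayed just enough to conform.
def greedy_shaper(arrivals, sizes, rate, burst):
    """Return release times for packets given arrival times and sizes."""
    tokens, last, out = burst, 0.0, []
    for t, size in zip(arrivals, sizes):
        tokens = min(burst, tokens + (t - last) * rate)  # refill up to the burst cap
        last = t
        if tokens >= size:
            release = t                                  # conformant: send immediately
        else:
            wait = (size - tokens) / rate                # wait for the missing tokens
            release = t + wait
            tokens += wait * rate
            last = release
        tokens -= size
        out.append(release)
    return out

# Three 5-unit packets arriving at once against a (rate=10, burst=10) bucket:
print(greedy_shaper([0.0, 0.0, 0.0], [5, 5, 5], rate=10.0, burst=10.0))  # -> [0.0, 0.0, 0.5]
```

The first two packets fit in the initial burst allowance; the third is held 0.5 time units, which is exactly the "delayed just enough" behavior the slide's lossless-network condition relies on.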