Information and Telecommunication Technology Center
How is performance measured?
#3
Victor S. Frost
Dan F. Servey Distinguished Professor
Electrical Engineering and Computer Science
University of Kansas
2335 Irving Hill Dr.
Lawrence, Kansas 66045
Phone: (785) 864-4833  FAX: (785) 864-7789
e-mail: [email protected]
http://www.ittc.ku.edu/
All material copyright 2006
Victor S. Frost, All Rights Reserved
#3 1
Outline
• What is ideal?
• Application types
• Barriers to achieving the ideal
• Performance metrics
• Network Performance Perspective
• What performance can the network guarantee?
Goals
• What is the ideal?
– Meet customer expectations
• Service experience satisfies customer requirements
• Motivate customers to stay with service provider
• Motivate customers to even recommend the service provider
– With finite resources
• Capacity
• Power
• Processing
• Buffers
• Other considerations
– Unequal cost for resources
– Sometimes stated as: satisfy customer expectations while “maximizing” network utilization. However, that considers only one dimension of the problem.
What is the service experience?
• Web page download in X sec, where X ∈ [5, 10]
• For voice, no network busy signal
• For IM, message delivered in X min, where X ∈ [5, 15]
• For e-mail, delivery in X min, where X ∈ [10, 30]
• Noticeable video impairments greater than X min apart, where X ≈ 30 min
• Noticeable voice impairments greater than X min apart, where X ≈ 10 min
Another perspective on application requirements
• Elastic
– Non-real-time; does not need a fixed amount of capacity
– No deadline
– Examples
• File transfer
• E-mail
• MP3 download
• Video download
What is the contradiction?
• Loss-tolerant/Loss-intolerant
– All packets must be delivered, e.g., file transfer
– Occasional packet loss may not impact the service, e.g., voice and video
• Real-time
– Deadlines
– May require playout timing
– Late packets as bad as lost packets
– Examples
• Real-time viewing of a sporting event
• Interactive speech
More examples
From: Computer Networking: A Top-Down Approach Featuring the Internet, 2nd edition, Jim Kurose and Keith Ross, Addison-Wesley, July 2002.
Barriers to the Ideal
• Many of the barriers are in the access (last mile)
segment of the network
• Propagation delay, which causes a lack of accurate knowledge of the current network state
• Efficiency of access network protocols
• Access connection conditions and variability, e.g.,
– Noise
• Thermal
• Interference
• Intentional jamming
– Multipath fading
• Interaction of end-to-end and link protocols
• Limited network capacity
Barriers to the ideal
• Limited capacity of the end server: even with an ideal network, the application server may be:
– Overloaded
– Mis-configured
• Issues with the application server are
not considered here
What is the perceived QoS for this
end-to-end path?
Network Performance Criteria
Or: how do we rate how close to the ideal we can get?
• Response Time
• Throughput (b/s)
• Channel Utilization
• Channel Efficiency
• Channel Capacity (not Shannon’s information theory capacity)
• Blocking Probability
– Packet
– Call
• Fairness
• Security
• Reliability
Network Performance Criteria
Response time, TR: the time to “correctly” transmit a packet from source to destination.
“Correctly” implies the response time includes acknowledgments.
[Figure: Source Host → Network Interface Card → Network → Network Interface Card → Destination Host]
Network Performance Criteria: Response Time
• Time from the source application to the NIC
• Waiting time in NIC to enter the network:
buffering time
• Time to transmit the packet: clock the packet into
the network
• Time for the network to deliver the packet to the
destination’s NIC
• Time for destination’s NIC to generate an
acknowledgment
• Time for the acknowledgment to reach the source
host: repeating the above steps
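The components listed above can be combined in a simple sketch; all of the delay values and the function name below are illustrative assumptions, not measurements from the slides:

```python
# Illustrative response-time budget for a single packet.
# All numeric values and the function name are assumptions for this sketch.
def response_time(packet_bits, link_bps, queueing_s, network_s, ack_s):
    """Sum the response-time components listed above (in seconds)."""
    transmit_s = packet_bits / link_bps   # clock the packet into the network
    return queueing_s + transmit_s + network_s + ack_s  # "correctly" => include the ACK

# 1500-byte packet on an assumed 10 Mb/s access link.
t = response_time(packet_bits=1500 * 8,
                  link_bps=10e6,
                  queueing_s=0.002,   # waiting in the NIC buffer
                  network_s=0.020,    # network delivery to the destination NIC
                  ack_s=0.021)        # ACK generation plus the return trip
print(f"response time = {t * 1000:.1f} ms")
```

Note that the transmission time (1.2 ms here) is only one small term; queueing and network delivery typically dominate.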
Network Performance Criteria:
Response Time Dependencies
• State of the network
– Current topology
– Active nodes
– Active links
• State of the other users
– Congestion
• Errors
• State of source/destination host
• Link speeds
• Message sizes
• Message priorities
Network Performance Criteria:
Response Time Statistics
• Response time, TR, is a random variable
• A probability density function characterizes TR
• % packets observed with delays
greater than T
• Variance
• Mean
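These statistics can be estimated from measured delays; a minimal sketch using Python's `statistics` module, with an assumed sample of response times:

```python
import statistics

# Assumed sample of measured response times T_R, in milliseconds.
samples_ms = [12.1, 13.4, 11.8, 25.0, 12.9, 14.2, 55.3, 13.1, 12.5, 16.0]

mean_ms = statistics.mean(samples_ms)
var_ms2 = statistics.variance(samples_ms)   # sample variance
threshold_ms = 20.0                         # the "T" from the slide; value assumed
pct_late = 100 * sum(t > threshold_ms for t in samples_ms) / len(samples_ms)

print(f"mean = {mean_ms:.2f} ms, variance = {var_ms2:.1f} ms^2, "
      f"{pct_late:.0f}% of packets exceed {threshold_ms} ms")
```

The single 55.3 ms outlier pulls the mean and variance up sharply, which is why the tail percentage is often a more useful service metric than the mean alone.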
Network Performance Criteria
• Network designers focus on the
components of response time that are a
function of the network
• Find the one-way delay as a function of:
– traffic load
– packet length
– topology
• Focus on average response time or delay in
the access network
Network Performance Criteria
• Throughput in b/s, packets/sec,
cells/sec
• Normalized throughput S = R / C
where
R = average error-free rate (b/s) passing a reference point in the network
C = link capacity (b/s)
S = % of time the network is carrying error-free packets (goodput)
Network Performance Criteria
• Channel (or link) utilization:
– The % of time the channel (or link) is busy
• Channel Efficiency
– The % of time the channel is carrying user information (impact of overhead)
Let
D = # user data bits / packet
H = # network overhead bits / packet
then
Channel efficiency = S × D / (D + H)
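Putting the two definitions together, with assumed numbers (a 10 Mb/s link carrying 6 Mb/s of error-free traffic, and 1500-byte packets of which 40 bytes are header):

```python
# Normalized throughput S = R / C and channel efficiency = S * D / (D + H).
# The traffic rate, link capacity, and packet sizes below are assumptions.
def normalized_throughput(r_bps, c_bps):
    """S = average error-free rate R over link capacity C."""
    return r_bps / c_bps

def channel_efficiency(s, data_bits, overhead_bits):
    """Fraction of time the channel carries user data: S * D / (D + H)."""
    return s * data_bits / (data_bits + overhead_bits)

S = normalized_throughput(6e6, 10e6)            # 6 Mb/s error-free on a 10 Mb/s link
eff = channel_efficiency(S, 1460 * 8, 40 * 8)   # 1460 data bytes + 40 header bytes
print(f"S = {S:.2f}, channel efficiency = {eff:.3f}")
```

Efficiency is always below S, since some of the busy time is spent on overhead bits rather than user data.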
Network Performance Criteria
• Channel Capacity, Smax, is the maximum
obtainable throughput over the entire
range of input traffic intensities, i.e.,
offered load.
[Figure: throughput vs. offered load. In the ideal case, throughput rises linearly with offered load up to Smax = 1 at an offered load of 1.]
Network Performance Criteria:
Other Throughput Metrics
• Maximum lossless throughput
• Peak throughput
• Full load throughput
(Measured by transferring from local to remote host memory as fast as possible.)
Network Performance Criteria:
Case Study
From: “ATM WAN performance tools, experiments, and results,” L. A. DaSilva, J. B. Evans, D. Niehaus, V. S. Frost, R. Jonkman, Beng Ong Lee, and G. Y. Lazarou, IEEE Communications Magazine, Vol. 35, No. 8, August 1997, pp. 118–125.
Network Performance Criteria: Case
Study
Network Performance Criteria:
Case Study
With DS3 access lines
Network Performance Criteria
• Reliability: the reliability of a network can be defined as the probability that the functioning nodes are connected by working links.
Reliability = 1 - Prob[network failure]
• Here let’s assume all nodes are working and analyze simple ring and tree networks.
Network Performance Criteria
[Figure: two 5-node topologies: a ring network (5 links, so every node has two paths) and a tree network (4 links).]
Network Performance Criteria
• Reliability for a 5-node tree network
• If any of the 4 links fails, the network is down
• Let p = probability of link failure, and assume failures are statistically independent
• Then Prob[no link failure] = (1 - p)^4
• Prob[network failure] = 1 - (1 - p)^4
Network Performance Criteria
• But (1 - p)^4 = 1 - 4p + 6p^2 - 4p^3 + p^4
• Prob[network failure] = 4p - 6p^2 + 4p^3 - p^4
• Assuming p is small, for the 5-node tree network
Prob[network failure] ≈ 4p
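A quick numerical check of the expansion and of the small-p approximation (a sketch; the function names are mine):

```python
# Check the expansion of 1 - (1 - p)^4 and the small-p approximation 4p.
def tree_failure(p, links=4):
    """Exact Prob[network failure] for the 4-link tree: 1 - (1 - p)^links."""
    return 1 - (1 - p) ** links

def tree_failure_expanded(p):
    """The same quantity via the expansion 4p - 6p^2 + 4p^3 - p^4."""
    return 4 * p - 6 * p**2 + 4 * p**3 - p**4

for p in (0.01, 0.001):
    print(f"p = {p}: exact = {tree_failure(p):.8f}, "
          f"expanded = {tree_failure_expanded(p):.8f}, 4p = {4 * p}")
```

Even at p = 0.01, the exact value (0.03940399) differs from 4p by less than 2%, which is why dropping the higher-order terms is safe for small p.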
Network Performance Criteria
• Reliability for a 5-node ring network
• The ring network has 5 links
• The ring network can have one link failure and still be working; if one more link fails, the network is down
• Let q = 1 - p = probability a link is good
• Prob[network good] = Prob[all links good, or one link failed and 4 good] = q^5 + 5pq^4
since Σ_{j=1}^{5} Prob[link j failed and all other links good] = 5pq^4
• So Prob[network failure] = 1 - q^5 - 5pq^4
Network Performance Criteria
• Expanding, Prob[network failure] = 10p^2 q^3 + 10p^3 q^2 + 5p^4 q + p^5
• The dominant term (assuming p is small) is 10p^2 q^3

p        Tree: 4p    Ring: 10p^2 q^3
0.01     0.04        0.00097
0.001    0.004       10^-5
10^-5    4×10^-5     10^-9
10^-7    4×10^-7     10^-13
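The tree-vs-ring comparison can be reproduced from the exact failure expressions (a sketch; the helper names are assumed):

```python
# Reproduce the tree-vs-ring failure-probability comparison.
def tree_failure(p):
    return 1 - (1 - p) ** 4             # tree is down if any of its 4 links fails

def ring_failure(p):
    q = 1 - p                           # ring is down only if 2+ of its 5 links fail
    return 1 - q**5 - 5 * p * q**4

print(f"{'p':>8} {'tree':>10} {'ring':>10}")
for p in (1e-2, 1e-3, 1e-5, 1e-7):
    print(f"{p:>8g} {tree_failure(p):>10.2g} {ring_failure(p):>10.2g}")
```

The ring's failure probability scales as p^2 rather than p, so the redundancy advantage grows dramatically as links become more reliable.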
Network Performance Perspective:
User-Oriented
• Minimum application response time
(Delay guarantee)
• Maximum application throughput
(Throughput (b/s) guarantee)
• Low loss (Maximum packet loss guarantee)
• Highly reliable (Availability guarantee)
• Very flexible
• Secure
• Low cost
Network Performance Perspective:
Network Manager/Provider
• Maximum throughput for all users
• Effective congestion control
• Power = Throughput / Delay
• Ease of management
• Highly reliable
• Fairness
• Ease of billing
• Low cost
Network Performance Perspective:
Network Designer/Developer/Vendor
• Simple design
• Robust
• Scales
– Number of users
– Geographical distribution
– Speed
• Efficient use of resources, CPU, links and
memory
• Evolvable
Network Performance:
What Can the Network Guarantee?
• Quality of Service (QoS): absolute/contractual performance guarantees
Examples:
– Sustainable rate
– Peak rate
– Packet delay (average and standard deviation)
– Packet/Cell loss rate
• Network must reserve resources to
provide QoS
• ATM is designed to provide QoS
Network Performance:
What Can the Network Guarantee?
• Class of Service (CoS): relative performance guarantees
Examples:
– Best Effort (lowest priority)
[Current Internet is Best Effort]
• e-mail
• ftp
– Gold (medium priority)
• Point of sales transaction
– Platinum (highest priority)
• Voice
• Video
• Network performs packet ‘labeling’ and priority queueing to
provide CoS
• Differential Services (IP-DiffServ) provides CoS in the
Internet
• IEEE 802.1p is a LAN packet prioritization mechanism to
provide CoS
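The “labeling” plus priority-queueing idea can be sketched with a binary heap; the class names and numeric priorities below are illustrative assumptions, not actual DiffServ code points or 802.1p values:

```python
import heapq
import itertools

# Assumed priority labels (lower number = served first); illustrative only,
# not real DiffServ code points or IEEE 802.1p priority values.
PRIORITY = {"platinum": 0, "gold": 1, "best_effort": 2}

queue = []
order = itertools.count()  # tie-breaker: FIFO order within a priority class

def enqueue(packet, cls):
    """Label the packet with its class priority and queue it."""
    heapq.heappush(queue, (PRIORITY[cls], next(order), packet))

def dequeue():
    """Serve the highest-priority (then oldest) packet."""
    return heapq.heappop(queue)[2]

enqueue("email", "best_effort")
enqueue("voice frame", "platinum")
enqueue("POS transaction", "gold")

served = [dequeue() for _ in range(3)]
print(served)  # ['voice frame', 'POS transaction', 'email']
```

Note that this strict-priority service gives no absolute guarantee: a platinum packet is only served before whatever gold and best-effort traffic happens to be queued, which is exactly the relative (CoS) rather than contractual (QoS) distinction made above.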
Some examples
• Push-to-Talk (PTT) over Cellular (PoC)
– Operates half duplex (like a walkie-talkie)
– Start-to-speak delay: from pressing the PTT button until indication of permission to talk
– Voice delay: time from when words are spoken until they are received
• Network gaming services
– Action games (shoot-’em-ups)
• End-to-end delays of 75–100 ms are noticeable in first-person shooter and car racing games
– Real-time strategy games
• End-to-end delays greater than 200 ms are “noticeable” and “annoying” to end users
– Turn-based games
References
• G. Gómez and R. Sánchez, End-to-End Quality of Service over Cellular Networks: Data Services Performance and Optimization in 2G/3G, John Wiley, 2005.
• T. Beigbeder, R. Coughlan, C. Lusher, J. Plunkett, E. Agu, and M. Claypool, “The effects of loss and latency on user performance in Unreal Tournament 2003,” in Proceedings of ACM SIGCOMM 2004 Workshops on NetGames ’04, pp. 144–151, 2004.
• J. Nichols and M. Claypool, “The effects of latency on online Madden NFL football,” in NOSSDAV ’04: Proceedings of the 14th International Workshop on Network and Operating Systems Support for Digital Audio and Video, pp. 146–151, 2004.
• L. Pantel and L. C. Wolf, “On the impact of delay on real-time multiplayer games,” in NOSSDAV ’02: Proceedings of the 12th International Workshop on Network and Operating Systems Support for Digital Audio and Video, pp. 23–29, 2002.