Transcript Slides_2

Chap 2 Network Analysis and Queueing Theory
1
Two approaches to network design
1- "Build first, worry later"
- More ad hoc, less systematic
2- "Analyze first, build later", used extensively for telephone networks
- More systematic, closer to optimal, etc.
• A model is a mathematical abstraction: keep only the details that are relevant
• Mathematical modeling is one step that can provide useful approximations
2
• A mathematical model can be used to:
1- Evaluate the system performance
- Average queue length
- Average waiting time
- Loss probability due to buffer overflow
2- Improve the system performance
- Determine the service rate with a tolerable waiting time (upgrading the system with more capacity incurs investment cost, but long waiting times would be annoying to users)
- Provide a guaranteed packet loss probability with a large enough buffer (How large is enough?)
3
Roadmap
• Randomness in computer networks
• A review of probability
• Queuing theory
4
Randomness at the source (1/2)
• Communication is full of randomness during data generation at the source and data transmission in the network
• Random data generation at the source:
– data: users randomly start downloads at any time (e.g., web browsing)
– voice: data flow changes dynamically based on the interactivity of the conversation
– video: variable video rate
5
Randomness at the source (2/2)
• Voice communication: interactive communication (figure: alternating talk spurts, "Bob speaks", "Dave speaks")
• Video: variable length of video frames (figure: Frames 1–4 of varying size)
• Video = moving pictures (30 pictures/s)
– picture (frame) size depends on motion in the pictures
6
Randomness at the network
• Data transmission encounters random delays and varying data rates
– randomly changing physical channels (noise in wireless communication)
– congestion in the backbone network (statistical multiplexing of data packets)
– contention for transmission among users in the access network
7
Buffers
• Buffers are used to absorb the randomness of the network
– Senders, routers, and receivers all have buffers
(figure: buffer at the receiver, where random arrivals from the IP network are reorganized and sent to the upper layers)
8
Roadmap
• Randomness in computer networks
• A review of probability
• Queuing theory
9
A review of probability
• Definitions
• Random Variables
• Distribution of RVs
• Statistical Characterizations
• Useful Distributions
10
Definitions
• Experiment: a physical experiment with an observable outcome
• Random Experiment: an experiment with a non-deterministic outcome (e.g., packet transmission with random loss)
• Sample Space (S): the set of all possible well-defined outcomes of an experiment
• Event (E): a subset of the sample space
• The probability of an event E can be defined in terms of its relative frequency as:
  $P(E) = \lim_{n \to \infty} \frac{n(E)}{n}$
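As an illustration of the relative-frequency definition, the following minimal Python sketch (an illustrative example, not from the slides) estimates the probability that a transmitted bit is corrupted by counting occurrences over many trials:

```python
import random

def estimate_probability(trials=100_000, p_corrupt=0.01):
    """Estimate P(E) as n(E)/n for the event 'a bit is corrupted'."""
    corrupted = sum(1 for _ in range(trials) if random.random() < p_corrupt)
    return corrupted / trials

print(estimate_probability())  # approaches 0.01 as the number of trials grows
```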
11
Conditional Probability
• The conditional probability of an event E, assuming that an event F occurs, denoted by P(E|F), is defined as:
  $P(E \mid F) = \frac{P(E \cap F)}{P(F)}$
• Probability of joint events:
  $P(E \cap F) = P(E \mid F)\, P(F)$
• If A and B are independent: $P(A \cap B) = P(A)\, P(B)$
Total Probability Theorem
Let $E_1, E_2, \dots, E_n$ be mutually disjoint events whose union is the sample space S. If A is an arbitrary event in S, the total probability theorem is given as:
  $P(A) = \sum_{i=1}^{n} P(A \mid E_i)\, P(E_i)$
Proof: $A = \bigcup_i (A \cap E_i)$ with the $A \cap E_i$ disjoint, so $P(A) = \sum_i P(A \cap E_i) = \sum_i P(A \mid E_i)\, P(E_i)$.
(figure: sample space partitioned into events E1–E7, with event A overlapping several of them)
Random Variables
• A random variable (r.v.) X(𝜔) is a function that
maps an arbitrary outcome 𝜔 of a random event
into a real value
14
Distribution of Random Variables
• Cumulative Distribution Function (CDF)
  $F(x) = P(X \le x)$
  specifying, for all real $x \in (-\infty, \infty)$, the probability that the r.v. (random variable) X is less than or equal to x.
• Probability Density Function (pdf) of a continuous r.v.
  $f(x) = \frac{dF(x)}{dx}$
• Probability mass function (pmf): a function that gives the probability that a discrete random variable is exactly equal to some value.
15
Discrete versus Continuous Random Variables
Discrete Random Variable
– Takes discrete values, e.g. {0, 1, 2, 3}
– Probability Mass Function (PMF): $p(x_i) = P(X = x_i)$
  1. $p(x_i) \ge 0$, for all $i$
  2. $\sum_{i=1}^{\infty} p(x_i) = 1$
– Cumulative Distribution Function (CDF): $P(X \le x) = \sum_{x_i \le x} p(x_i)$

Continuous Random Variable
– Infinite (continuous) sample space, e.g. [0, 1], [2.1, 5.3]
– Probability Density Function (PDF): $f(x)$
  1. $f(x) \ge 0$, for all $x$ in $R_X$
  2. $\int_{R_X} f(x)\,dx = 1$
  3. $f(x) = 0$, if $x$ is not in $R_X$
– Cumulative Distribution Function (CDF): $P(X \le x) = \int_{-\infty}^{x} f(t)\,dt$
– $P(a \le X \le b) = \int_{a}^{b} f(x)\,dx$
16
Statistical Characterizations
• Expectation (Mean Value) of a random variable:
  $E[X] = \sum_i x_i\, p(x_i)$ (discrete) or $E[X] = \int_{-\infty}^{\infty} x f(x)\,dx$ (continuous)
• Variance of X:
  $\mathrm{Var}(X) = E[(X - E[X])^2] = E[X^2] - (E[X])^2$
17
1- Binomial Distribution
Binomial distribution (n, p)
- A fixed number of trials, n, e.g., 15 tosses of a coin;
- A binary outcome, called “success” and “failure”, e.g., head or
tail in each toss of a coin
Probability of success is p, probability of failure q = 1 – p
- Constant probability for each observation (independent trials),
e.g., probability of getting a tail is the same each time we
toss the coin
X is a r. v. that refers to the number of successes in n trials
  $P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$, where $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$
  Mean value: $E[X] = np$
18
Example:
- Every packet has n bits. There is a probability $p_B$ that a bit gets corrupted. What is the probability that a packet has exactly 1 corrupted bit?
  $P(X = 1) = \frac{n!}{1!\,(n-1)!}\, p_B (1-p_B)^{n-1} = n\, p_B (1-p_B)^{n-1}$
- What is the probability that a packet is not corrupted?
  $P(X = 0) = \frac{n!}{0!\,n!}\, p_B^0 (1-p_B)^n = (1-p_B)^n$
- What is the probability that a packet is corrupted (at least one bit is corrupted)?
  $p_E = 1 - P(X = 0) = 1 - (1-p_B)^n$
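As a quick numerical check, here is a short Python sketch (with illustrative values of n and p_B, not taken from the slides) that evaluates these three probabilities:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for a Binomial(n, p) random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p_B = 1000, 1e-4                        # packet length in bits, per-bit corruption probability (assumed)
p_one_bit = binomial_pmf(1, n, p_B)        # exactly one corrupted bit
p_clean   = binomial_pmf(0, n, p_B)        # packet not corrupted
p_E       = 1 - p_clean                    # at least one corrupted bit
print(p_one_bit, p_clean, p_E)
```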
19
2- Geometric Distribution
• Independent trials with a binary outcome, called "success" and "failure", e.g., head or tail in each toss of a coin
• Probability of success is p, probability of failure q = 1 – p
• Constant probability for each observation (independent trials), e.g., the probability of getting a tail is the same each time we toss the coin
• X is a r.v. that refers to the number of trials needed to obtain the first success:
  $P(X = k) = (1-p)^{k-1} p$, $k = 1, 2, \dots$
• Example: what is the probability that we need k coin tosses to obtain a head? With $p = 1/2$: $P(X = k) = (1/2)^{k-1}(1/2) = (1/2)^k$
20
Example:
• A wireless network protocol uses a stop-and-wait transmission policy. Each packet has a probability $p_E$ of being corrupted or lost.
• What is the probability that the protocol will need 3 transmissions to send a packet successfully?
– Solution: 3 transmissions means 2 failures followed by 1 success, therefore:
  $P(X = 3) = (1 - p_E)\, p_E^2$
• What is the average number of transmissions needed per packet?
  $E[X] = \sum_{i=1}^{\infty} i\, P(X = i) = \sum_{i=1}^{\infty} i (1 - p_E)\, p_E^{\,i-1} = (1 - p_E) \sum_{i=1}^{\infty} i\, p_E^{\,i-1} = (1 - p_E)\, \frac{1}{(1 - p_E)^2} = \frac{1}{1 - p_E}$
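A small Python sketch (with an assumed value of p_E, for illustration only) that checks the mean number of transmissions against the closed form 1/(1 - p_E):

```python
import random

def transmissions_needed(p_E):
    """Simulate stop-and-wait: count attempts until one succeeds."""
    attempts = 1
    while random.random() < p_E:   # the attempt fails with probability p_E
        attempts += 1
    return attempts

p_E = 0.2
trials = 100_000
avg = sum(transmissions_needed(p_E) for _ in range(trials)) / trials
print(avg, 1 / (1 - p_E))   # simulated mean vs. closed form 1/(1 - p_E)
```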
21
3- Poisson Distribution
1. A discrete RV X follows the Poisson distribution with parameter $\lambda$ if its probability mass function is:
  $P(X = k) = \frac{\lambda^k}{k!} e^{-\lambda}$, $k = 0, 1, 2, \dots$
2. The Poisson distribution gives the probability that a random event occurs k times during a given interval. $\lambda$ is a positive real number, equal to the expected number of occurrences during the given interval.
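For concreteness, a minimal Python sketch (with an assumed value of λ, not from the slides) that evaluates this pmf:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with mean lam."""
    return lam**k * exp(-lam) / factorial(k)

lam = 4.0   # assumed expected number of arrivals in the interval
print([round(poisson_pmf(k, lam), 4) for k in range(8)])
```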
22
4- Exponential Distribution
• Exponential distribution: a continuous RV X follows the exponential distribution with parameter $\lambda$ if its probability density function is:
  $f(x) = \lambda e^{-\lambda x}$ for $x \ge 0$, and $f(x) = 0$ for $x < 0$
• Cumulative distribution function (CDF): $F(x) = P(X \le x) = 1 - e^{-\lambda x}$, $x \ge 0$
• Mean and Variance: $E[X] = \frac{1}{\lambda}$, $\mathrm{Var}(X) = \frac{1}{\lambda^2}$
23
4- Exponential Distribution
• Memoryless property: past history has no influence on the future. The probability of having to wait at least x additional seconds, given that one has already waited t seconds, equals the probability of waiting at least x seconds from the start:
  $P(X > t + x \mid X > t) = P(X > x)$
  Proof: $P(X > t + x \mid X > t) = \frac{P(X > t + x)}{P(X > t)} = \frac{e^{-\lambda(t+x)}}{e^{-\lambda t}} = e^{-\lambda x} = P(X > x)$
• Exponential is the only continuous distribution with the memoryless property
• Example: let X be the amount of time a customer spends in a bank; X is exponentially distributed with mean 10 minutes.
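The specific question of the bank example did not survive the transcript, so the following Python sketch is an illustrative completion under the stated assumption (mean 10 minutes, so λ = 1/10): the probability of staying more than 5 additional minutes, given 10 minutes already spent, equals the unconditional probability of staying more than 5 minutes.

```python
from math import exp

lam = 1 / 10          # rate parameter: mean time in the bank is 10 minutes
x, t = 5, 10          # 5 additional minutes after already spending 10

p_unconditional = exp(-lam * x)                          # P(X > 5)
p_conditional   = exp(-lam * (t + x)) / exp(-lam * t)    # P(X > 15 | X > 10)
print(p_unconditional, p_conditional)                    # both equal exp(-0.5) ≈ 0.6065
```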
24
5. Poisson Process with rate λ
• Random process: a collection of random variables indexed by a set T ("time").
• A counting process where the number of arrivals in any time interval of length t follows a Poisson distribution with parameter λt, and the inter-arrival times are independent, identically distributed, and follow an exponential distribution with parameter λ
25
5. Poisson Process with rate λ
• Inter-arrival times for a Poisson process are independent and follow an exponential distribution with parameter λ
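A minimal Python sketch (assumed rate λ and observation window) that generates a Poisson process by drawing exponential inter-arrival times and checks the empirical arrival rate:

```python
import random

def poisson_arrival_times(lam, horizon):
    """Generate arrival times in [0, horizon) from exponential inter-arrival gaps."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(lam)   # inter-arrival time ~ Exp(lam)
        if t >= horizon:
            return arrivals
        arrivals.append(t)

lam, horizon = 3.0, 1000.0             # assumed values
arrivals = poisson_arrival_times(lam, horizon)
print(len(arrivals) / horizon)         # empirical arrival rate, close to lam
```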
26
Merging & Splitting Poisson Processes
- A1, …, Ak are independent Poisson processes with rates λ1, …, λk
- Their merge is a single Poisson process with rate λ = λ1 + … + λk
- A Poisson process with rate λ can be split into processes A1 and A2 independently, with rates pλ and (1 − p)λ (each arrival is routed to A1 with probability p and to A2 with probability 1 − p)
27
Roadmap
• Randomness in computer networks
• A review of probability
• Queuing theory
28
Queuing theory
• Introduction to queuing theory
– What is queuing theory?
– Why study queues?
– Notations of queues
• Queuing Models
– M/M/1 Queue
– M/M/1/N Queue
– M/M/m/m Queue
– M/M/m Queue
– Network of queues
29
What is queuing theory used for?
• A random variable describes the behavior of one random
event
– User arrivals in unit time; bandwidth capacity of a link; transmission
power of a sender ...
• How to model and analyze a system where multiple random
events co-exist?
– e.g., a router where multiple flows, with different rates, are injected
and served with different policies
• Queuing theory is the mathematical study of waiting lines
30
Why study queues?
• Whenever there is irregular demand for some service that takes a random time, queues unavoidably appear. Only fully deterministic demand and fully deterministic service (which never happens!) would eliminate the queueing effect.
31
Queue is everywhere
Packet switching relies on queues
– Queues are everywhere: at the sender, at the receiver
32
Queue is everywhere
• A computer network is a network of queues!
(figure: a router queues inbound flows before forwarding them on outbound links)
33
Goals
• Evaluate the system performance
– Average queue length
– Average waiting time
– Loss probability due to buffer overflow
• Improve the system performance
– Determine the service rate with a tolerable waiting time (upgrading the system with more capacity incurs investment cost, but long waiting times would be annoying to users)
– Provide a guaranteed packet loss probability with a large enough buffer (How large is enough?)
– Control the queue to offer differentiated service to users
34
Queuing Representation
1- Single Server Queuing Model
- The server is usually the transmission facility
- The arrivals and the service time are random.
- The service time: how long a packet will remain in service,
directly related to the length of a packet
- The buffer is the available space in the system.
35
Queuing Representation
2- Multi-Server Queue
Arriving packets first go to empty servers. If more than
one server is empty, any server is chosen randomly.
36
Queues are represented via the notation: A/S/C/K
- A: The arrival process of packets. M stands for Markovian
(Poisson) process; λ is the arrival rate (the average number
of packets arriving per unit time). The interarrival time is
exponentially distributed with mean 1/λ.
- S: The packet departure process. In case of M, the
interdeparture time (the service time) is exponentially
distributed with average service time 1/μ, where μ is the
service rate.
- C: the number of parallel servers in the system.
- K: max. number of packets in the system (the max. number of packets that can be accommodated in the buffer plus the number of servers). If K is missing, K = infinity.
Ex: M/M/1 or M/M/1/N queue (Poisson arrivals, Exponential
service time)
37
M/M/1 Queue
- Packet arrivals and departures follow a Poisson distribution
- Inter-arrival and inter-departure times follow an exponential distribution
- Why Poisson? Suitable for most real-world scenarios, where arrivals are independent

A simple queueing model (M/M/1 queue)
1 server, infinite buffer size
Assumptions:
1) Customers arrive according to a Poisson process with parameter λ, where λ is the average number of arrivals per unit time. Let X be the number of arrivals in [0, t]; then
  $P(X = k) = \frac{(\lambda t)^k}{k!} e^{-\lambda t}$, $k = 0, 1, 2, \dots$
39
A simple queueing model (M/M/1 queue)
Suppose we observe an arrival at time t and the next arrival at time t + τ, where τ is a random variable. Then
  $P(\tau > x) = P(\text{no arrival in } (t, t+x]) = e^{-\lambda x}$
i.e., $F_\tau(x) = 1 - e^{-\lambda x}$: the inter-arrival time τ has an exponential distribution with parameter λ.
40
A simple queueing model (M/M/1 queue)
(2) The single server takes a random length of time, S, to serve each customer, and these times are independent random variables for different customers. S has an exponential distribution with parameter μ:
  $f_S(x) = \mu e^{-\mu x}$, $x \ge 0$
where μ is the average number of customers that can be served per unit time.
41
A simple queueing model (M/M/1 queue)
Property of the exponential distribution:
  $P(S \le t + \Delta t \mid S > t) = P(S \le \Delta t) = 1 - e^{-\mu \Delta t} \approx \mu \Delta t$ for small $\Delta t$
i.e., the probability of the completion of a service in the next Δt seconds is a constant, independent of how long the service has been going on! The service time has no memory.
Both random variables τ and S have no memory!
Summary: a single-server queue with Poisson arrivals and exponential service times is denoted as an M/M/1 queue (where M stands for Markov, a process with no memory).
42
Analysis of the M/M/1 queue
State n: there are n customers in the system (including the one being served, if any)
The process is called a "birth-death" process
43
M/M/1 queue
Let $P_n(t)$ denote the probability that the system is in state n at time t. Then
  $P_n(t + \Delta t) = P_n(t)(1 - \lambda \Delta t)(1 - \mu \Delta t) + P_{n-1}(t)\,\lambda \Delta t + P_{n+1}(t)\,\mu \Delta t$, $n > 0$
Letting Δt → 0, we have
  $\frac{dP_n(t)}{dt} = -(\lambda + \mu) P_n(t) + \lambda P_{n-1}(t) + \mu P_{n+1}(t)$, $n > 0$
44
M/M/1 queue
This is the fundamental recursive relationship of the "birth-death" process with "birth" parameter λ and "death" parameter μ.
Steady state solution:
  $P_n = \rho^n P_0$, $n \ge 0$
where $\rho = \lambda / \mu$ is called the traffic intensity or utilization factor.
45
M/M/1: State Balance Equation
flow in rate = flow out rate
• Pn is the steady-state probability of state n
• Steady state: the statistics (mean, variance, ...) of the r.v. no longer change with time
• State balance equations:
  $P_n(\lambda + \mu) = P_{n+1}\,\mu + P_{n-1}\,\lambda$, $n \ge 1$
  $P_0\,\lambda = P_1\,\mu$
• At each state: the rate at which the process leaves = the rate at which it enters
M/M/1 queue
• Find P0:
  Since $\sum_{n=0}^{\infty} P_n = 1$ and $P_n = \rho^n P_0$, we have $P_0 \sum_{n=0}^{\infty} \rho^n = \frac{P_0}{1-\rho} = 1$ (for $\rho < 1$), so $P_0 = 1 - \rho$.
  Then $P_n = (1 - \rho)\,\rho^n$, a geometric distribution.
• The average number of customers in the system is:
  $N = \sum_{n=0}^{\infty} n P_n = \frac{\rho}{1-\rho} = \frac{\lambda}{\mu - \lambda}$
• The average number of customers in the queue is:
  $N_Q = N - \rho = \frac{\rho^2}{1-\rho}$
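A short Python sketch (with illustrative λ and μ values) that evaluates these M/M/1 steady-state quantities:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 quantities (requires lam < mu)."""
    rho = lam / mu
    p0 = 1 - rho                       # probability that the system is empty
    N = rho / (1 - rho)                # average number in the system
    Nq = rho**2 / (1 - rho)            # average number waiting in the queue
    return p0, N, Nq

print(mm1_metrics(lam=8.0, mu=10.0))   # rho = 0.8 -> P0 = 0.2, N = 4, Nq = 3.2
```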
47
M/M/1 queue
The time a customer must wait in the queue is $n/\mu$ if there are n customers in the system when the new customer arrives (by the memoryless property, the residual service time of the customer in service is still exponential with mean $1/\mu$).
Let W denote the mean time that a customer has to wait from the moment he arrives until he departs; then
  $W = \frac{N}{\mu} + \frac{1}{\mu} = \frac{1}{\mu}\left(\frac{\rho}{1-\rho} + 1\right) = \frac{1}{\mu - \lambda}$
The mean number of customers in the system is
  $N = \lambda W = \frac{\lambda}{\mu - \lambda}$
48
Little's Law
• The long term average number of customers in a queuing
system is
– E (N ) = λE (T )
– where E (N ) is the average number of customers in the
system, λ is the average arrival rate of customers and E (T )
is the average time a customer spends in the system
– T = time waiting in the buffer + time receiving service
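As a quick numerical illustration (assumed rates, not from the slides), both sides of Little's Law can be computed from the M/M/1 closed forms derived above:

```python
def little_check(lam, mu):
    """Compare E[N] with lam * E[T] for an M/M/1 queue (requires lam < mu)."""
    E_N = lam / (mu - lam)        # average number in the system
    E_T = 1 / (mu - lam)          # average time in the system (wait + service)
    return E_N, lam * E_T         # the two values coincide

print(little_check(lam=12.0, mu=20.0))   # (1.5, 1.5)
```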
M/M/1 queue
“Little’s Law” says that for any work-conserving1 queueing system,
the average occupancy of the system must equal the average delay
for the system multiplied by the average arrival rate.
As the utilization increases, the delay increases correspondingly. At ρ = 0.5, the delay is twice the transmission time.
The probability that the number of customers in the system is at least a specified number k is
  $P(N \ge k) = \rho^k$
1 Work-conserving: any traffic ready to be served must be served.
50
M/M/1: System Performance
• The average number of customers in the whole system: $E[N] = \frac{\rho}{1-\rho} = \frac{\lambda}{\mu - \lambda}$
• The average time customers spend in the whole system (applying Little's Law): $E[T] = \frac{E[N]}{\lambda} = \frac{1}{\mu - \lambda}$
– Increasing the mean service rate μ or reducing the mean arrival rate λ will reduce the sojourn time of customers in the system
M/M/1 queue
• Example: In an M/M/1 queue, customers arrive at the rate of λ = 15 per hour. What is the minimum service rate μ to ensure that
1. the server is idle at least 10% of the time?
2. the expected value of the queue length does not exceed 10?
3. the probability that there are at least 20 people in the queue is at most 50%?
Solution: λ = 15
Note: for at least 20 people to be in the queue, there must be at least 21 people in the system.
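The slide's worked solution did not survive the transcript, so the following Python sketch is an illustrative reconstruction using the M/M/1 formulas above (P0 = 1 − ρ, mean queue length ρ²/(1 − ρ), P(at least k in the system) = ρ^k); it solves each condition for the minimum μ:

```python
lam = 15.0   # arrivals per hour

# 1) idle at least 10% of the time: P0 = 1 - rho >= 0.1  ->  rho <= 0.9
mu1 = lam / 0.9

# 2) expected queue length <= 10: rho**2 / (1 - rho) <= 10
#    -> rho <= (-10 + sqrt(100 + 40)) / 2
rho2 = (-10 + (100 + 40) ** 0.5) / 2
mu2 = lam / rho2

# 3) P(at least 21 in the system) = rho**21 <= 0.5  ->  rho <= 0.5 ** (1 / 21)
mu3 = lam / 0.5 ** (1 / 21)

print(round(mu1, 2), round(mu2, 2), round(mu3, 2))
```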
52
M/M/1 queue
• Example 2.2: Consider a packet transmission system whose arrival rate (in packets/sec) is kλ (k > 1) and whose departure rate is kμ (the service time is 1/(kμ)). What is the average number of packets in the system? What is the average delay per packet?
• Solution: i) the utilization is ρ = kλ/(kμ) = λ/μ, so the average number of packets in the system is N = ρ/(1 − ρ) = λ/(μ − λ), unchanged;
• ii) applying Little's Law, the average delay per packet is T = N/(kλ) = 1/(k(μ − λ))
(Increasing the arrival and transmission rates by the same factor k reduces the average delay by that factor.)
53
M/M/1/N: Finite Buffer Queue
– Arriving packets are dropped when the buffer is full
– PB: blocking probability (prob. that buffer is full)
• State transition diagram
M/M/1/N: Blocking Probability PB
  $P_B = P_N = \frac{(1 - \rho)\,\rho^N}{1 - \rho^{N+1}}$
M/M/1/N queue
Arrival rate: λ packets/sec
Blocking probability: $P_B$
Departure rate: μ packets/sec
Throughput: $\gamma = \lambda (1 - P_B)$
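A small Python sketch (illustrative λ, μ, N values) evaluating the M/M/1/N blocking probability and the resulting throughput:

```python
def mm1n_blocking(lam, mu, N):
    """Blocking probability P_B = P_N for an M/M/1/N queue."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (N + 1)           # limiting case rho -> 1
    return (1 - rho) * rho**N / (1 - rho**(N + 1))

lam, mu, N = 9.0, 10.0, 5
p_B = mm1n_blocking(lam, mu, N)
throughput = lam * (1 - p_B)           # accepted packets per second
print(round(p_B, 4), round(throughput, 4))
```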
56
Queues with dependence on state of system
i) Multiserver situation;
ii) Customer arrival rate decreasing with queue occupancy to keep the
average occupancy down.
• Multiserver situation: λ and μ are functions of n, i.e., λn and μn.
The global balance equations for the steady-state probabilities are
  $\lambda_{n-1} P_{n-1} = \mu_n P_n$, $n \ge 1$
57
Queues with dependence on state of system
Derivation:
The global balance equations for the steady-state probabilities give
  $P_n = P_0 \prod_{i=1}^{n} \frac{\lambda_{i-1}}{\mu_i}$, with $P_0$ determined from $\sum_n P_n = 1$
58
Queues with dependence on state of system
Special case (M/M/N, i.e., N parallel servers):
With the condition $\lambda_n = \lambda$ for all n, and $\mu_n = n\mu$ for $n \le N$, $\mu_n = N\mu$ for $n > N$, we obtain
  $P_n = P_0 \frac{(\lambda/\mu)^n}{n!}$ for $n \le N$, and $P_n = P_0 \frac{(\lambda/\mu)^n}{N!\,N^{\,n-N}}$ for $n > N$
59
Queue with discouragement (M/M/1): as the queue size increases, the arrival rate drops accordingly (e.g., $\lambda_n = \lambda/(n+1)$); it can be derived that $P_n = P_0 \frac{(\lambda/\mu)^n}{n!}$.
A finite birth-death, or state-dependent, queueing system with maximum queue size N.
The blocking probability: $P_B = P_N$
Throughput: $\gamma = \mu (1 - P_0)$
60
Example 2.3: Consider an M/M/m/m system (this model is in wide use in telephony and circuit-switched networks). In this context, customers in the system correspond to active telephone conversations, and the m servers represent a single transmission line consisting of m circuits. The principal quantity of interest here is the blocking probability, i.e., the steady-state probability that all circuits are busy, in which case an arriving call is refused service and the blocked calls are lost.
State representation: n = number of calls in progress (0 ≤ n ≤ m)
The balance condition is
  $\lambda P_{n-1} = n\mu P_n$, $n = 1, \dots, m$, so $P_n = P_0 \frac{(\lambda/\mu)^n}{n!}$
61
Solving for $P_0$ in the normalization equation $\sum_{n=0}^{m} P_n = 1$:
  $P_0 = \left[\sum_{k=0}^{m} \frac{(\lambda/\mu)^k}{k!}\right]^{-1}$
Then,
  $P_B = P_m = \frac{(\lambda/\mu)^m / m!}{\sum_{k=0}^{m} (\lambda/\mu)^k / k!}$
This equation is known as the Erlang B formula and finds wide use in evaluating the blocking probability of telephone systems.
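A short Python sketch (with an illustrative offered load and circuit count) that evaluates the Erlang B blocking probability:

```python
from math import factorial

def erlang_b(offered_load, m):
    """Erlang B blocking probability for offered load a = lambda/mu and m circuits."""
    a = offered_load
    numerator = a**m / factorial(m)
    denominator = sum(a**k / factorial(k) for k in range(m + 1))
    return numerator / denominator

print(round(erlang_b(offered_load=8.0, m=10), 4))   # e.g., 8 Erlangs offered to 10 circuits
```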
62
M/M/m/m: Multiple Server Finite Buffer Queue
• Telephone network: at most m calls can be connected at each time instant
• Arriving calls are rejected if all m servers are busy
– PB: blocking probability that an arriving call is rejected
M/M/m: m Parallel Servers with
Infinite Buffer
• Performance metric: the probability that an arrival has to wait in the queue (PQ)
M/M/m: Steady State Probability
With $\rho = \lambda/(m\mu) < 1$:
  $P_n = P_0 \frac{(m\rho)^n}{n!}$ for $n \le m$, and $P_n = P_0 \frac{m^m \rho^n}{m!}$ for $n > m$, where $P_0 = \left[\sum_{k=0}^{m-1} \frac{(m\rho)^k}{k!} + \frac{(m\rho)^m}{m!\,(1-\rho)}\right]^{-1}$
M/M/m: Queuing Probability PQ
• Probability that all servers in the system are busy, so an arrival must wait (Erlang's C formula):
  $P_Q = \frac{(m\rho)^m}{m!\,(1-\rho)}\, P_0$
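A minimal Python sketch (illustrative λ, μ, m values) evaluating the Erlang C queueing probability under the formulas above:

```python
from math import factorial

def erlang_c(lam, mu, m):
    """Probability that an arrival must wait in an M/M/m queue (requires lam < m*mu)."""
    rho = lam / (m * mu)
    a = m * rho                                   # offered load lambda/mu
    p0 = 1 / (sum(a**k / factorial(k) for k in range(m)) + a**m / (factorial(m) * (1 - rho)))
    return a**m / (factorial(m) * (1 - rho)) * p0

print(round(erlang_c(lam=18.0, mu=1.0, m=20), 4))
```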
Network of M/M/1 Queues
(figure: three M/M/1 queues with service rates μ1, μ2, μ3 in a network; the aggregate arrival rates are λ1 = γ1 + γ2, λ2 = γ1 + γ2 + γ3, λ3 = γ1 + γ3, where γ1, γ2, γ3 are the external flow rates)
67
68
The time through the system is the sum of the times through each queueing component.
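A small Python sketch (illustrative external flow rates γ and service rates μ, following the network topology as reconstructed above, and assuming flow γ1 traverses queues 1, 2, and 3) that sums the per-queue M/M/1 delays to get the end-to-end delay of a flow:

```python
def mm1_delay(lam, mu):
    """Average time through a single M/M/1 queue (requires lam < mu)."""
    return 1 / (mu - lam)

# assumed external flow rates (packets/sec) and per-queue service rates
g1, g2, g3 = 2.0, 3.0, 1.0
mu = {1: 10.0, 2: 12.0, 3: 9.0}
lam = {1: g1 + g2, 2: g1 + g2 + g3, 3: g1 + g3}   # aggregate arrivals per queue

# end-to-end delay of flow g1, which traverses queues 1, 2, and 3
delay_g1 = sum(mm1_delay(lam[q], mu[q]) for q in (1, 2, 3))
print(round(delay_g1, 4))
```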
69
Summary of Chapter 2
With queueing theory, we construct models so that queue lengths and waiting times can be predicted. The results can be used to decide the resources needed to provide a service.
– Determine the service rate with tolerable
waiting time
– Provide guaranteed packet loss probability
with large enough buffer
– Control the queue to offer differentiated
service to users
70