Transcript simulation

Simulation Tutorial
By Bing Wang
Assistant professor,
CSE Department, University of Connecticut
Network Simulation
Motivation:
- learn fundamentals of evaluating network performance via simulation
Overview:
- fundamentals of discrete event simulation
- analyzing simulation outputs
- ns-2 simulation
The evaluation spectrum
[figure: the evaluation spectrum, spanning numerical models, simulation, emulation, prototypes, and operational systems]
What is simulation?
[figure: two views side by side]
- "real" life: exogenous inputs from the environment cross the system boundary into the system under study (which has deterministic rules governing its behavior), watched by an observer
- "simulated" life: pseudo-random inputs (modeling the environment) cross the program boundary into a computer program that simulates the deterministic rules governing behavior, watched by an observer
Why Simulation?
- goal: study system performance, operation
- real system not available, or is complex/costly or dangerous (e.g., space simulations, flight simulations)
- quickly evaluate design alternatives (e.g., different system configurations)
- evaluate complex functions for which closed-form formulas or numerical techniques are not available
Simulation: advantages/drawbacks
- advantages:
- drawbacks/dangers:
Programming a simulation
What's in a simulation program?
- simulated time: internal (to the simulation program) variable that keeps track of simulated time
- system "state": variables maintained by the simulation program define the system "state"
  - e.g., may track the number (possibly order) of packets in the queue, the current value of the retransmission timer
- events: points in time when the system changes state
  - each event has an associated event time
    - e.g., arrival of a packet to the queue, departure from the queue
  - it is precisely at these points in time that the simulation must take action (change state, possibly causing new future events)
- model for the time between events (probabilistic), caused by the external environment
Discrete Event Simulation
- simulation program maintains and updates a list of future events: the event list
- simulator structure (a sketch follows below)
Need:
- well-defined set of events
- for each event: simulated system action, updating of the event list
[flowchart: initialize event list; get the next (nearest-future) event from the event list; set time = event time; process the event (change state values, add/delete future events from the event list); update statistics; repeat until done]
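A minimal event-loop sketch in Python, assuming a heap-ordered event list (the Simulator class and its schedule/run names are illustrative, not part of the tutorial):

```python
import heapq
import itertools

class Simulator:
    """Minimal discrete-event core: simulated time plus a time-ordered event list."""
    def __init__(self):
        self.now = 0.0                     # simulated time
        self.events = []                   # event list (min-heap keyed on event time)
        self._seq = itertools.count()      # tie-breaker so the heap never compares handlers

    def schedule(self, delay, handler, *args):
        """Add a future event (event time, handler, arguments) to the event list."""
        heapq.heappush(self.events, (self.now + delay, next(self._seq), handler, args))

    def run(self, until=float("inf")):
        """Main loop: take the nearest-future event, advance time, process it."""
        while self.events:
            time, _, handler, args = heapq.heappop(self.events)
            if time > until:
                break
            self.now = time       # simulated time jumps to the event time
            handler(*args)        # may change state and schedule new future events
```

Event handlers do the per-event work (change state values, gather statistics, schedule new events); the two-link router example on the next slides shows what they look like.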
Simulation: example
- packets arrive (avg. interarrival time: 1/λ) to a router (avg. execution time 1/μ) with two outgoing links
- arriving packet joins link i with probability f_i
[figure: arrivals at rate λ feed two queues with service rates μ1 and μ2]
- state of system: size of each queue
- system events:
  - job arrivals
  - service time completions
- define the performance measures to be gathered
Simulation: example
[figure: arrivals at rate λ feed two queues with service rates μ1 and μ2]
Simulator actions on arrival event (see the sketch below):
- choose a link i
  - if link idle {place pkt in service, determine service time (random number drawn from the service time distribution), add a future event onto the event list for pkt transfer completion, set number of pkts in queue to 1}
  - if buffer full {increment # dropped packets, ignore arrival}
  - else increment number in queue where queued
- create an event for the next arrival (generate an interarrival time), stick the event on the event list
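A sketch of these arrival-event actions, building on the Simulator sketch above; the rate LAMBDA, the service rates MU, the split probabilities F, the buffer size, and the exponential distributions are illustrative assumptions, and `departure` is defined with the next slide:

```python
import random

LAMBDA = 1.0        # arrival rate (1 / avg. interarrival time); illustrative value
MU = [1.5, 1.5]     # service rates of the two outgoing links; illustrative values
F = [0.5, 0.5]      # probability of joining link 1 vs. link 2
BUFFER = 10         # buffer size per link; illustrative value

class LinkQueue:
    def __init__(self):
        self.length = 0      # number of pkts at this link (including one in service)
        self.dropped = 0     # number of pkts dropped because the buffer was full

queues = [LinkQueue(), LinkQueue()]

def arrival(sim):
    """Simulator actions on a packet-arrival event."""
    i = 0 if random.random() < F[0] else 1          # choose a link with probability f_i
    q = queues[i]
    if q.length == 0:
        # link idle: place pkt in service, draw a service time, schedule its completion
        q.length = 1
        sim.schedule(random.expovariate(MU[i]), departure, sim, i)
    elif q.length >= BUFFER:
        q.dropped += 1            # buffer full: count the drop, ignore the arrival
    else:
        q.length += 1             # otherwise the pkt simply joins the queue
    # create the next arrival: generate an interarrival time, put the event on the event list
    sim.schedule(random.expovariate(LAMBDA), arrival, sim)
```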
Simulation: example
[figure: arrivals at rate λ feed two queues with service rates μ1 and μ2]
Simulator actions on departure event (see the sketch below):
- remove the event, update simulation time, update performance statistics
- decrement the counter of the number of pkts in the queue
- if (number of pkts in queue > 0) put the next pkt into service; schedule its completion event (generate a service time for the pkt)
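A matching sketch of the departure-event actions for the same illustrative example; removing the event and advancing simulated time is already handled by the Simulator's run loop above:

```python
def departure(sim, i):
    """Simulator actions on a service-completion (departure) event at link i."""
    q = queues[i]
    q.length -= 1                          # decrement the number of pkts at this queue
    # (performance statistics, e.g. the departing pkt's delay, would be updated here)
    if q.length > 0:
        # queue still non-empty: put the next pkt into service, schedule its completion
        sim.schedule(random.expovariate(MU[i]), departure, sim, i)

# drive the example: one initial arrival seeds the event list, then run for a while
sim = Simulator()
sim.schedule(random.expovariate(LAMBDA), arrival, sim)
sim.run(until=10_000.0)
print("dropped per link:", [q.dropped for q in queues])
```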
Gathering Performance Statistics
- avg delay at queue i: record $D_{ij}$, the delay of customer j at queue i, and let $N_i$ be the number of customers passing through queue i:
  $T_i = \frac{1}{N_i} \sum_{j=1}^{N_i} D_{ij}$
- throughput at queue i:
  $\gamma_i = \frac{N_i}{\text{total simulated time}}$
- average queue length at i (Little's Law):
  $\bar{N}_i = \gamma_i T_i$
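One way these per-queue statistics could be accumulated, as a small illustrative helper following the formulas above (the class and method names are assumptions, not from the slides):

```python
class QueueStats:
    """Accumulates per-queue delay samples to compute T_i, gamma_i and N-bar_i."""
    def __init__(self):
        self.delays = []                     # D_ij: delay of each customer j at this queue

    def record(self, delay):
        self.delays.append(delay)

    def report(self, total_simulated_time):
        n = len(self.delays)                       # N_i: customers through the queue
        avg_delay = sum(self.delays) / n           # T_i = (1/N_i) * sum_j D_ij
        throughput = n / total_simulated_time      # gamma_i = N_i / total simulated time
        avg_length = throughput * avg_delay        # N-bar_i = gamma_i * T_i (Little's Law)
        return avg_delay, throughput, avg_length
```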
Analyzing Output Results
Each time we run a simulation (using different random number streams), we will get different output results!
[figure: one distribution of random numbers (interarrival, service times) drives M runs: random number sequence 1 -> input -> simulation -> output results 1, ..., random number sequence M -> input -> simulation -> output results M]
Analyzing Output Results
[figure: arrivals at rate λ feed two queues with service rates μ1 and μ2]
- $W_{2,n}$: delay of the n-th departing customer from queue 2
Analyzing Output Results
[figure: arrivals at rate λ feed two queues with service rates μ1 and μ2]
- each run shows variation in customer delay
- one run differs from the next
- a statistical characterization of delay must be made:
  - expected delay of the n-th customer
  - behavior as n approaches infinity
  - average over n customers
Transient Behavior
- simulation outputs that depend on the initial conditions (i.e., output value changes when the initial conditions change) are called transient characteristics
  - "early" part of the simulation
  - later part of the simulation is less dependent on the initial conditions
[figure: arrivals at rate λ feed two queues with service rates μ1 and μ2]
Effect of initial conditions
[figure: two histograms of the delay of the 20th customer over 1000 runs: one given an initially empty system, one given initially non-empty conditions]
Simulation: example
- packets arrive (avg. interarrival time: 1/λ) to a router (avg. execution time 1/μ) with two outgoing links
- arriving packet joins link i with probability f_i
[figure: arrivals at rate λ feed two queues with service rates μ1 and μ2]
Steady state behavior
- output results may converge to a limiting "steady state" value if the simulation is run "long enough"
[figure: avg delay of packets over windows [n, n+10], averaged over 5 simulations, settling toward a limiting value]
- discard statistics gathered during the transient phase, e.g., ignore the first $n_0$ measurements of delay at queue 2:
  $T_i = \frac{1}{N_i - n_0} \sum_{j=n_0+1}^{N_i} D_{ij}$
- pick $n_0$ so the statistic is "approximately the same" for different random number streams and remains the same as n increases
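A small illustrative helper for the deletion step (the function name and arguments are assumptions):

```python
def steady_state_avg(delays, n0):
    """Average delay with the first n0 (transient) measurements discarded:
    T_i = (1 / (N_i - n0)) * sum over j > n0 of D_ij."""
    kept = delays[n0:]
    return sum(kept) / len(kept)

# n0 would be chosen so this statistic is roughly the same across independent
# random-number streams and stays stable as more measurements are included.
```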
Example: Random Waypoint Model
Simplest random waypoint model (see the sketch below):
- mobile picks next waypoint $M_n$ uniformly in the area, independent of past and present
- mobile picks next speed $V_n$ uniformly in $[v_{min}, v_{max}]$, independent of past and present
- mobile moves towards $M_n$ at constant speed $V_n$
[figure: trajectory from waypoint M_n to M_{n+1}]
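A sketch of one mobile following this model; the area size, speed range, and time step below are illustrative defaults:

```python
import random

def random_waypoint(num_legs, area=1000.0, vmin=1.0, vmax=10.0, dt=1.0):
    """Trace one mobile under the simplest random waypoint model: pick waypoint M_n
    uniformly in the area, pick speed V_n uniformly in [vmin, vmax], move toward M_n
    at constant speed V_n, then repeat."""
    x, y = random.uniform(0, area), random.uniform(0, area)
    trace = [(x, y)]
    for _ in range(num_legs):
        mx, my = random.uniform(0, area), random.uniform(0, area)   # next waypoint M_n
        v = random.uniform(vmin, vmax)                              # next speed V_n
        while (x, y) != (mx, my):
            dx, dy = mx - x, my - y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist <= v * dt:          # waypoint reached within this time step
                x, y = mx, my
            else:                       # keep moving toward M_n at constant speed V_n
                x += v * dt * dx / dist
                y += v * dt * dy / dist
            trace.append((x, y))
    return trace
```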
Issue with RWP Model: Decay
- distributions of node speed, position, distances, etc. change with time
[figure: speed (m/s) vs. time (s), for a 100-user average and for 1 user, showing the decay]
Confidence Intervals
- run the simulation: get $X_1$ as an estimate of the performance metric of interest
- repeat the simulation M times (each with a new set of random numbers), get $X_2, \ldots, X_M$: all different!
- which of $X_1, \ldots, X_M$ is "right"?
- intuitively, the average of the M samples should be "better" than choosing any one of the M samples:
  $\bar{X} = \frac{1}{M} \sum_{j=1}^{M} X_j$
How "confident" are we in $\bar{X}$?
Confidence Intervals
- cannot get a perfect estimate of the true mean $\mu$ with a finite number of samples
- look for bounds: find $c_1$ and $c_2$ such that
  $P(c_1 < \mu < c_2) = 1 - \alpha$
  $[c_1, c_2]$: confidence interval
  $100(1-\alpha)$: confidence level
Confidence Intervals: Central Limit Thm
- Central Limit Theorem: if samples $X_1, \ldots, X_M$ are independent and from the same population with population mean $\mu$ and standard deviation $\sigma$, then the sample mean
  $\bar{X} = \frac{1}{M} \sum_{j=1}^{M} X_j$
  is approximately normally distributed with mean $\mu$ and standard deviation $\sigma / \sqrt{M}$
Confidence Intervals .. more
- don't know the population standard deviation; estimate it using the sample (observed) standard deviation:
  $s_X^2 = \frac{1}{M-1} \sum_{m=1}^{M} (X_m - \bar{X})^2$
- given $\bar{X}$ and $s_X$, we find the upper and lower tails of the normal distribution, each containing $\frac{\alpha}{2} \cdot 100\%$ of the mass
Confidence Intervals .. the recipe
Given samples $X_1, \ldots, X_M$ (e.g., having repeated the simulation M times), compute
  $\bar{X} = \frac{1}{M} \sum_{j=1}^{M} X_j$
  $s_X^2 = \frac{1}{M-1} \sum_{m=1}^{M} (X_m - \bar{X})^2$
95% confidence interval: $\bar{X} \pm \frac{1.96 \, s_X}{\sqrt{M}}$
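The recipe as a short Python helper (illustrative; 1.96 is the normal quantile the slide uses for a 95% interval):

```python
import math

def confidence_interval_95(samples):
    """95% confidence interval for the mean of samples X_1..X_M:
    X-bar +/- 1.96 * s_X / sqrt(M)."""
    m = len(samples)
    mean = sum(samples) / m
    var = sum((x - mean) ** 2 for x in samples) / (m - 1)   # sample variance s_X^2
    half_width = 1.96 * math.sqrt(var / m)
    return mean - half_width, mean + half_width
```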
Interpretation of Confidence Interval
If we calculate confidence intervals as in the recipe, 95% of the confidence intervals thus computed will contain the true (unknown) population mean.
Generating confidence intervals for steady state measures
- independent replications with deletions
- method of batch means
- autoregressive method
- regenerative method
Independent replications
1. generate n independent replications with m samples each; remove the first $l_0$ samples from each to obtain $\bar{X}_1(m, l_0), \ldots, \bar{X}_n(m, l_0)$
2. calculate the sample mean and variance from the $\bar{X}_i(m, l_0)$
3. use the t-distribution to compute confidence intervals
4. can combine with a sequential stopping rule to obtain a confidence interval of specified width (a sketch follows below)
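A sketch of independent replications with deletions; the `run_replication(seed, m)` callback is a stand-in for one full simulation run returning m observations, and the t quantile is left as a placeholder argument:

```python
import math
import statistics

def independent_replications(run_replication, n, m, l0, t_crit=2.0):
    """Confidence interval from n independent replications of m samples each,
    deleting the first l0 samples of each replication.
    t_crit should be the t-distribution quantile for n - 1 degrees of freedom
    and the desired confidence level (2.0 is only a placeholder)."""
    means = []
    for seed in range(n):
        samples = run_replication(seed, m)        # one independent replication
        kept = samples[l0:]                       # delete the first l0 (transient) samples
        means.append(sum(kept) / len(kept))       # X-bar_i(m, l0)
    grand_mean = statistics.mean(means)
    s = statistics.stdev(means)                   # sample std. dev. across replications
    half_width = t_crit * s / math.sqrt(n)
    return grand_mean - half_width, grand_mean + half_width
```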
Other methods
- batch means: take a single run, delete the first $l_0$ observations, divide the remainder into n groups and obtain $\bar{X}_i$ for the i-th group, i = 1, ..., n (see the sketch after this list)
  - follow the procedure for independent replications
  - complication due to the non-independence of the $\bar{X}_i$'s
  - potential efficiency gain due to deletion of only $l_0$ observations
- autoregressive method: spectrum analysis, based on studying the correlation of observations
- regenerative method: applicable to systems with regeneration points
  - regeneration point: future independent of the past
  - can construct observations for intervals between regeneration points that will be i.i.d.
  - use of the CLT provides confidence intervals
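A sketch of the batch-means variant under the same placeholder t quantile: one long run, the first l0 observations deleted, and the remainder split into batches whose means are treated as approximately independent samples:

```python
import math
import statistics

def batch_means(observations, l0, n_batches, t_crit=2.0):
    """Confidence interval via the method of batch means (illustrative helper)."""
    kept = observations[l0:]          # delete only the first l0 observations
    size = len(kept) // n_batches     # split the remainder into n groups
    means = [sum(kept[i * size:(i + 1) * size]) / size for i in range(n_batches)]
    grand_mean = statistics.mean(means)
    s = statistics.stdev(means)       # batch means are only approximately independent
    half_width = t_crit * s / math.sqrt(n_batches)
    return grand_mean - half_width, grand_mean + half_width
```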
Comparing two different systems
Example: want to compare the mean response times of two queues where the arrival process remains unchanged but the speeds of the servers are different.
- run each system n times (n sufficiently large) to get $\{X_{1,j}\}$ and $\{X_{2,j}\}$, and take $Z_j = X_{1,j} - X_{2,j}$ as the observations used to determine confidence intervals for the mean difference (see the sketch below)
- method of common random numbers:
  - using the same streams to generate random variables for the j-th runs of both systems usually results in a smaller sample variance of $\{Z_j\}$
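A sketch of the paired comparison with common random numbers; the `simulate(config, seed)` callback is hypothetical, and reusing the same seed makes the j-th runs of both systems share a random number stream:

```python
import math

def compare_systems(simulate, config_1, config_2, n, t_crit=2.0):
    """Confidence interval for the mean difference E[Z], Z_j = X_{1,j} - X_{2,j},
    using common random numbers (t_crit is a placeholder t quantile for n - 1 d.o.f.)."""
    z = []
    for seed in range(n):
        x1 = simulate(config_1, seed)     # same random number stream for both systems
        x2 = simulate(config_2, seed)
        z.append(x1 - x2)
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / (n - 1)
    half_width = t_crit * math.sqrt(var / n)
    return mean - half_width, mean + half_width
```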
ns-2, the network simulator
- discrete event simulator
- modeling network protocols:
  - wired, wireless, satellite
  - TCP, UDP, multicast, unicast
  - web, telnet, ftp
  - ad hoc, sensor nets
  - infrastructure: stats, tracing, error models, etc.
- prepackaged protocols and modules, or create your own
Our goal:
- a flavor of ns: simple example, modification, execution and trace analysis
“ns” components
- ns, the simulator itself (this is all we'll have time for)
- nam, the Network AniMator
  - visualize ns (or other) output
  - GUI input for simple ns scenarios
- pre-processing:
  - traffic and topology generators
- post-processing:
  - simple trace analysis, often in Awk, Perl, or Tcl
- tutorial: http://www.isi.edu/nsnam/ns/tutorial/index.html
- ns by example: http://nile.wpi.edu/NS/