
Chapter 1: The Queueing Paradigm
• The behavior of complex electronic systems → statistical prediction
• Paradigms are fundamental models that abstract out the
essential features of the system being studied.
• Types of predictions:
• 1. How many terminals can one connect to a time-sharing computer and still maintain a reasonable response time?
• 2. What percentage of calls will be blocked on the outgoing lines of a small business's telephone system? What improvement will result if extra lines are added?
• 3. What improvement in efficiency can one expect in adding a second processor to a computer system? Would it be better to spend the money on a second hard disc?
• 4. What is the best architecture for a space
division packet switch?
• 5. Other? (e-mail servers, Web servers,…)
1.2 Queueing Theory
• Queueing theory has its roots early in the twentieth century, in the studies of the Danish mathematician A. K. Erlang on telephone networks and in the creation of Markov models by the Russian mathematician A. A. Markov.
1.3 Queueing Model
• Customers
• Calls
• Jobs
• Events
• Connections
• Requests
• Tasks
• Packets
• Queue: buffering
• Server: service
There are three servers in parallel: an M/M/3 queue.
• The characters in the first two positions
indicate:
• M: Markov statistics,
• D: Deterministic timing,
• G: General (Arbitrary) Statistics,
• Geom: Geometric statistics
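For an M/M/c queue such as the M/M/3 above, the probability that an arriving customer must wait can be computed with the Erlang C formula. The following sketch is illustrative (the function name and interface are our own):

```python
from math import factorial

def erlang_c(c, a):
    """Erlang C formula: probability that an arriving customer must wait
    in an M/M/c queue, where a = lambda/mu is the offered load in erlangs.
    (Illustrative sketch; names and interface are assumptions.)"""
    assert a < c, "stability requires utilization rho = a/c < 1"
    numer = a**c / (factorial(c) * (1 - a / c))
    denom = sum(a**k / factorial(k) for k in range(c)) + numer
    return numer / denom

# For an M/M/3 queue with offered load a = 2 erlangs (rho = 2/3),
# roughly 44% of arrivals find all three servers busy:
p_wait = erlang_c(3, 2)   # ≈ 0.444
```

For c = 1 this reduces to the familiar M/M/1 result that the probability of waiting equals the utilization rho.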
Network of Queues
• An “open” queueing network accepts customers from, and loses customers to, the outside world. Thus the total number of customers in an open network varies with time.
• A “closed” network does not connect with the outside world and has a constant number of customers circulating throughout it.
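The time-varying population of an open network can be seen in a tiny event-driven simulation of a single M/M/1 queue (a sketch; the function name and parameters are illustrative):

```python
import random

def mm1_population(lam, mu, horizon, seed=1):
    """Event-driven simulation of an open M/M/1 queue; returns the number
    in system sampled at each event, which rises and falls over time.
    In a closed network, by contrast, this count would stay constant."""
    rng = random.Random(seed)
    t, n, samples = 0.0, 0, []
    next_arr = rng.expovariate(lam)
    next_dep = float("inf")
    while t < horizon:
        if next_arr < next_dep:          # next event is an arrival
            t = next_arr
            n += 1
            if n == 1:                   # server was idle: start service
                next_dep = t + rng.expovariate(mu)
            next_arr = t + rng.expovariate(lam)
        else:                            # next event is a departure
            t = next_dep
            n -= 1
            next_dep = t + rng.expovariate(mu) if n else float("inf")
        samples.append(n)
    return samples
```

Plotting the returned samples shows the population of the open system fluctuating between empty and busy periods.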
A customer that circulates in this queueing network represents the control of one of the computer terminals.
Service Discipline
• First In First Out (FIFO)
• Last In First Out (LIFO)
• LIFO service discipline with pre-emptive
resume (LIFOPR)
• Processor sharing (PS) discipline
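The difference between the first two (non-pre-emptive) disciplines is just which end of the waiting line feeds the server. A toy sketch (names are our own):

```python
from collections import deque

def service_order(jobs, discipline):
    """Order in which waiting jobs reach the server under FIFO or LIFO.
    (Illustrative sketch of the non-pre-emptive disciplines above.)"""
    q = deque(jobs)
    order = []
    while q:
        # FIFO serves the oldest waiting job; LIFO serves the newest.
        order.append(q.popleft() if discipline == "FIFO" else q.pop())
    return order
```

For example, jobs arriving in the order A, B, C are served A, B, C under FIFO but C, B, A under LIFO.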
State Description
• The state of a queueing network is a vector
indicating the total number of customers in
each queue at a particular time instant.
• Several classes of customers → states?
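As a concrete sketch (the representation is our own, not the text's notation), the state of a three-queue network, and its multiclass extension, might be written as:

```python
# State of a network of three queues holding 2, 0, and 5 customers:
state = (2, 0, 5)

# With several customer classes, each queue's entry becomes a per-class
# count, e.g. (class-1, class-2) customers at each of the three queues:
multiclass_state = ((1, 1), (0, 0), (2, 3))

# The single-class state is recovered by summing over classes:
collapsed = tuple(sum(q) for q in multiclass_state)   # (2, 0, 5)
```

The multiclass state space is much larger, which is why multiple customer classes complicate the analysis.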
• “Design and Implementation of the VAX
Distributed File Service” by William G.
Nichols and Joel S. Emer in the Digital
Technical Journal No. 9, June 1989
1.4 Case Study I: Performance Model of a Distributed File Service
• Important operational characteristics + system parameter estimation → Model → Analysis and simulation of the system performance
• The server resources being modeled are:
• 1. The network interface
• 2. The CPU
• 3. The disk subsystem at the server.
The response time breakdown of
a Web service
Design Alternatives
• Design alternatives → Models (adjustable) → Parameters → Analysis, simulation, measurement (artificial workload) → Performance comparison
• The model distinguishes two types of
requests: control operations and data access
operations.
Design Alternatives
• 1. Control operations are operations such as
open and close file, which have a high
computational component.
• 2. Data access operations are simple reads and writes. Data access operations usually have low computational requirements but require larger data transfers.
Design Alternatives
• To keep the model tractable, assume the service time at each service center is exponentially distributed.
• Product-form solution (for single class)
• Approximate solution (for multiple classes, via the mean value analysis technique)
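For the single-class product-form case, the exact mean value analysis (MVA) recursion can be sketched as follows (interface and names are illustrative; a center's service demand D_k is its visit ratio times its mean service time):

```python
def mva(demands, n_customers):
    """Exact single-class MVA for a closed product-form network of
    queueing centers. demands[k] is the service demand D_k at center k.
    Returns (throughput, per-center residence times) at full population.
    (Illustrative sketch, not the paper's own code.)"""
    q = [0.0] * len(demands)        # mean queue lengths at population 0
    for n in range(1, n_customers + 1):
        # residence time: own demand plus queueing behind those already there
        r = [d * (1 + qk) for d, qk in zip(demands, q)]
        x = n / sum(r)              # system throughput at population n
        q = [x * rk for rk in r]    # Little's law applied per center
    return x, r

# Balanced two-center network with one customer: throughput = 1/(D1+D2)
x, r = mva([1.0, 1.0], 1)
```

Each pass of the loop raises the population by one customer, so the recursion runs in O(N·K) time for N customers and K centers.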
1.5 Case Study II: Single-bus
Multiprocessor Modeling
• “Modeling and Performance Analysis of
Single-Bus Tightly-Coupled
Multiprocessors” by B. L. Bodnar and A. C.
Liu in the IEEE Transactions on Computers,
Vol. 38, No. 3, March 1989.
Description of the Multiprocessor
Model
• A tightly-coupled multiprocessor (TCMP) is
defined as a distributed computer system where all
the processors communicate through a single
(global) shared memory. A typical physical layout
for such a computer structure is shown in Fig. 1.6
• Figure 1.7 illustrates our queueing model for this single-bus tightly-coupled multiprocessor (SBTCMP) architecture.
Description of the Multiprocessor
Model
• PE: Processing element
• BIU: Bus interface unit
• Each CPU and BIU has mean service rate u(i,1) and u(i,2), respectively.
• We also assume the CPU and BIU operate independently of each other, and that all the BIUs in the multiprocessor can be lumped together into a single “equivalent BIU”.
Description of the Multiprocessor
Model
• The branching probabilities are p(i,1), p(i,2), and p(i,3).
• The branching probability p(i,3) is interpreted as
the probability that a task associated with PE i will
join the CPU queue at PE i after using the BIU.
• 1-p(i, 3) is then the probability that a task
associated with PE i will wait for an interrupt
acknowledgment at PE i.
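The role of p(i,3) can be sketched as a single routing decision (the queue labels are our own, not the paper's notation):

```python
import random

def route_after_biu(p_i3, rng=None):
    """Where a task at PE i goes after finishing at the BIU, per the
    branching probability p(i,3) described above. (Illustrative sketch;
    the destination names are assumed labels.)"""
    rng = rng or random.Random()
    if rng.random() < p_i3:
        return "cpu_queue"          # rejoins the CPU queue at PE i
    return "interrupt_wait"         # waits for an interrupt acknowledgment
```

With p(i,3) = 1 every task returns to the CPU queue; with p(i,3) = 0 every task waits for an interrupt acknowledgment.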
Queueing Model of
Multiprocessor
• Interrupt mechanism: Intertask communication
interrupt
• The states of a task: sleep, ready, execution, …
• Tasks sleep in the task pool.
• If a task requests bus usage, it will enter the BIU queue. If the bus is free, the task will begin using the bus; otherwise, it will wait until the bus is released.
Queueing Model of
Multiprocessor
• We assume that only one of the preceding events can occur at a given moment. That is, CPU and interrupt processes cannot occur concurrently.
• We also assume that a task queued for bus access will not be using the CPU.
• Tasks waiting for an interrupt or undergoing interrupt servicing will be referred to as “interrupt-driven tasks”.
Queueing Model of
Multiprocessor
• Tasks waiting for interrupts are modeled by
a queue feeding a “lock”. The lock is drawn
using a triangle to signify that it is enabled
via an external stimulus. The purpose of the
lock is to allow only one task to pass by it in
response to an interrupt.
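The lock's one-task-per-interrupt behavior can be sketched as a tiny class (illustrative; not the cited paper's notation):

```python
class InterruptLock:
    """Sketch of the 'lock' element: blocked tasks queue behind it, and
    each external interrupt stimulus releases exactly one of them.
    (Assumed interface for illustration only.)"""
    def __init__(self):
        self._waiting = []

    def wait(self, task):
        self._waiting.append(task)      # task blocks at the lock

    def interrupt(self):
        # one interrupt lets exactly one task pass the lock
        return self._waiting.pop(0) if self._waiting else None
```

Two interrupts are needed to release two waiting tasks; an interrupt arriving at an empty lock releases nothing.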
Queueing Model of
Multiprocessor
• If the task that was forced to release the CPU was an interrupt-driven task, then this pre-empted task will become the first entry on a last-come-first-served queue (i.e., a stack).
• If the pre-empted task was not an interrupt-driven task, then it will become the first entry on the ready list.
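The two pre-emption rules above can be sketched as follows (the data structures and names are our own):

```python
from collections import deque

interrupt_stack = deque()   # LCFS stack of pre-empted interrupt-driven tasks
ready_list = deque()        # ready list for ordinary pre-empted tasks

def preempt(task, interrupt_driven):
    """Place a pre-empted task at the head of the appropriate structure,
    per the two rules above. (Illustrative sketch.)"""
    if interrupt_driven:
        interrupt_stack.appendleft(task)   # becomes first entry on the stack
    else:
        ready_list.appendleft(task)        # becomes first entry on the ready list
```

Because both insertions go to the front, the most recently pre-empted task in each category is always the next candidate to resume.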
Queueing Model of
Multiprocessor
• The model not only considers the bus contention caused by multiple processors attempting to access the shared memory, but also attempts to include realistically the local behavior of the various tasks running on these same processors.
1.6 Case Study III: TeraNet, a Lightwave Network
1.7 Case Study IV: Performance Model of a
Shared Medium Packet Switch
PlaNET Switching Configuration
Model of Switch