Operating Systems
Part III: Process Management (CPU Scheduling)
CPU-I/O Burst Cycle
CPU burst - CPU executes process
I/O burst - process does I/O
Processes go back & forth between 2 states
Frequency curve of CPU burst lengths is exponentially decaying:
– Large number of short CPU bursts, small number of long CPU bursts
CPU-bound: few but long CPU bursts
I/O-bound: many very short CPU bursts
Preemptive & Non-preemptive Scheduling
CPU scheduling decisions are needed when:
– switching from running to waiting (I/O request)
– switching from running to ready (timer interrupt)
– switching from waiting to ready (I/O done)
– a process terminates
1st and 4th: no choice in terms of scheduling; a new process must be selected for execution (non-preemptive)
2nd and 3rd: there is a choice; scheduling here can be preemptive
Preemptive & Non-preemptive Scheduling
Non-preemptive scheduling
– Once the CPU is allocated to a process, it is released only by termination or waiting
– Does not need special hardware (timer)
Preemptive & Non-preemptive Scheduling
Preemptive scheduling
– Processes may be interrupted at any time
– Costly because:
A process may be preempted while updating shared data
A kernel process may be preempted (e.g. the medium-term scheduler)
Solution: wait for the system call to finish -> not good for real-time computing
Dispatcher
Gives control of the CPU to the process selected by the short-term scheduler
Does:
– Context switching
– Switching to user mode
– Jumping to the proper location in the user program to restart it
Dispatch latency - time it takes the dispatcher to stop one process and start another running
Scheduling Criteria
CPU utilization - keep CPU busy (40% - 100%)
Throughput - number of processes completed per unit time
Maximize these.
Scheduling Criteria
Waiting time - sum of time spent in ready queue
Turnaround time - time it takes to execute process (from submission to completion)
Response time - time it takes to start responding (from submission to first response); does not include time to output response
Minimize the above.
Scheduling Algorithms
Scheduling: which process in the ready queue is to be run (allocated to the CPU)?
First Come First Served (FCFS)
– Easily managed by a FIFO queue
– Simple to write and understand
– Waiting time can be long, depending on the order of the processes (see the sketch below)
– Non-preemptive
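
A minimal FCFS sketch in C, using made-up burst lengths (24, 3, and 3 ms) rather than numbers from the slides; it shows how the average waiting time depends entirely on arrival order:

    /* FCFS: each process waits for every burst queued ahead of it. */
    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};          /* hypothetical CPU bursts (ms) */
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            total_wait += wait;            /* waiting time of process i */
            wait += burst[i];              /* later processes wait this much longer */
        }
        printf("average waiting time = %.2f ms\n", (double)total_wait / n);
        return 0;
    }

With this arrival order the average waiting time is 17 ms; if the two short bursts arrived first it would be only 3 ms.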
Scheduling Algorithms
Shortest Job First (SJF)
– CPU is assigned to the process that has the shortest next CPU burst
– FCFS is used to break ties among processes whose next bursts have the same length
– Can be proved to be optimal (minimum average waiting time); see the sketch below
– Difficulty is determining the length of the next CPU burst
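
One quick way to see the optimality claim is to serve the same made-up workload from the FCFS sketch shortest-first; sorting the bursts before the same loop minimizes the average waiting time:

    /* SJF sketch: same hypothetical bursts as the FCFS example, sorted ascending. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int burst[] = {24, 3, 3};              /* hypothetical bursts (ms) */
        int n = sizeof burst / sizeof burst[0];
        qsort(burst, n, sizeof burst[0], cmp); /* shortest job first */

        int wait = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += wait;
            wait += burst[i];
        }
        printf("average waiting time = %.2f ms\n", (double)total_wait / n);
        return 0;
    }

The average waiting time drops from 17 ms (FCFS order) to 3 ms.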
Scheduling Algorithms
Shortest Job First (continued)
– Determining the length of the next CPU burst: approximate it with the exponential average of the process's previous CPU bursts
– With tn the length of the nth (most recent) CPU burst, τn+1 the predicted value for the next CPU burst, and 0 ≤ α ≤ 1:
τn+1 = α tn + (1 − α) τn
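
A short sketch of this exponential average; the initial guess τ0 = 10 ms, the weight α = 0.5, and the burst history are illustrative values, not taken from the slides:

    /* Predicts the next CPU burst as a weighted mix of the last measured
       burst and the previous prediction: tau = alpha*t + (1 - alpha)*tau. */
    #include <stdio.h>

    int main(void) {
        double alpha = 0.5;                    /* weight of the most recent burst */
        double tau = 10.0;                     /* initial guess tau0 (ms) */
        double t[] = {6, 4, 6, 4, 13, 13, 13}; /* measured bursts t(n) (ms) */
        int n = sizeof t / sizeof t[0];

        for (int i = 0; i < n; i++) {
            tau = alpha * t[i] + (1.0 - alpha) * tau;
            printf("after burst %d: predicted next burst = %.2f ms\n", i + 1, tau);
        }
        return 0;
    }

With α = 0.5 each prediction gives equal weight to the most recent burst and to the entire earlier history.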
Scheduling Algorithms
Shortest Job First (continued)
– Long-term scheduling (batch systems): uses the process time limit specified by the user
– Short-term scheduling:
True SJF cannot be implemented, since there is no way to know the length of the next CPU burst
Implementation relies on predicting the length of the next CPU burst, using the previous formula
Scheduling Algorithms
Shortest Job First (continued)
– May be:
Preemptive - also called shortest-remaining-time-first
Non-preemptive - allows the current process to finish its CPU burst
Scheduling Algorithms
Priority Scheduling
– A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS (or SJF) order.
– SJF is a special case of priority scheduling: the process with the shortest next CPU burst has the highest priority, and the one with the longest next burst has the lowest.
Scheduling Algorithms
Priority Scheduling (continued)
– Priority can be defined either:
Internally - measurable quantities used to compute the priority (memory requirements, time limits, number of open files, etc.)
Externally - set by criteria outside the O/S (importance of the process, funded projects, etc.)
– Scheduling is also either preemptive or non-preemptive
Scheduling Algorithms
Priority Scheduling (continued)
– Indefinite blocking or starvation - can leave low-priority processes waiting indefinitely for the CPU
– Solution: aging - a low-priority process gradually increases in priority until it is finally allocated the CPU (see the sketch below)
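
A minimal aging sketch, assuming that a smaller number means higher priority; the process table and the per-tick decrement are made up for illustration:

    /* Aging: every scheduling tick, raise the priority of each process
       still waiting in the ready queue so it cannot starve forever. */
    #include <stdio.h>

    #define NPROC 3

    struct pcb { int pid; int priority; };     /* assumed minimal PCB */

    static void age(struct pcb ready[], int n) {
        for (int i = 0; i < n; i++)
            if (ready[i].priority > 0)
                ready[i].priority--;           /* 0 = highest priority */
    }

    int main(void) {
        struct pcb ready[NPROC] = { {1, 5}, {2, 20}, {3, 127} };
        for (int tick = 0; tick < 10; tick++)  /* ten ticks of waiting */
            age(ready, NPROC);
        for (int i = 0; i < NPROC; i++)
            printf("pid %d now has priority %d\n", ready[i].pid, ready[i].priority);
        return 0;
    }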
Scheduling Algorithms
Round-Robin Scheduling
– Designed specifically for time-sharing systems
– Similar to FCFS but with preemption
– Time quantum or time slice is defined (typically between 10 and 100 milliseconds)
– CPU goes around the ready queue allocating 1 time quantum to each process
Scheduling Algorithms
Round-Robin Scheduling (continued)
– Two things can happen:
Process uses up its full time quantum and is preempted, or
Process gives the CPU up voluntarily (its burst ends before the quantum expires)
Scheduling Algorithms
Round-Robin Scheduling (continued)
– Performance depends heavily on the size of the time quantum q (see the sketch below):
q very small - called processor sharing (each of n processes appears to have its own processor running at 1/n the speed) -> a smaller quantum means more context switches -> make the time quantum much larger than the context-switch time (which should be roughly 10% of the quantum)
q very large - scheduling degenerates to the FCFS policy
Rule of thumb: 80% of CPU bursts should be shorter than the time quantum
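
A round-robin sketch with an assumed quantum of 4 ms and the same made-up bursts used earlier; each process runs at most one quantum per pass and is re-queued until it finishes:

    /* Round-robin: repeatedly sweep the ready queue, giving each
       unfinished process at most one time quantum per pass. */
    #include <stdio.h>

    int main(void) {
        int quantum = 4;                      /* time slice (ms), illustrative */
        int remaining[] = {24, 3, 3};         /* hypothetical bursts (ms) */
        int n = sizeof remaining / sizeof remaining[0];
        int unfinished = n, clock = 0;

        while (unfinished > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;
                int run = remaining[i] < quantum ? remaining[i] : quantum;
                clock += run;                 /* runs one quantum or less */
                remaining[i] -= run;
                if (remaining[i] == 0) {
                    printf("process %d finishes at t = %d ms\n", i + 1, clock);
                    unfinished--;
                }
            }
        }
        return 0;
    }

With these values the two short processes finish at 7 ms and 10 ms while the long one completes at 30 ms, instead of the short ones waiting 24 ms and 27 ms under FCFS.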
Scheduling Algorithms
Multilevel Queue Scheduling
– Ready queue is partitioned into several separate queues
– Processes are permanently assigned to a queue depending on priority, memory size, etc. (e.g. separate queues used for foreground and background processes)
– Examples of scheduling among the queues:
Absolute priority: System -> Interactive -> Batch -> Low-priority
Time slice: 80% foreground, 20% background
Scheduling Algorithms
Multilevel Feedback Queue Scheduling
– Allows processes to move between queues
– Separates processes with different CPU-burst characteristics:
Too much CPU time -> moved to a lower-priority queue
I/O-bound and interactive -> kept in a higher-priority queue
Aging: a process that waits too long in a low-priority queue is moved to a higher-priority queue to prevent starvation
Scheduling Algorithms
Multilevel Feedback Queue (continued)
– General parameters (see the sketch below):
Number of queues
Scheduling algorithm for each queue
Method used to upgrade a process to a higher-priority queue
Method used to demote a process to a lower-priority queue
Method used to determine which queue a process will enter
– Considered the most general scheme (can be configured to fit a specific system), but also the most complex
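
One way to picture the parameter list above is as a configuration record; the C structure and the three-queue layout below (RR 8 ms, RR 16 ms, then FCFS) are assumptions for illustration, not a design given in the slides:

    /* Assumed configuration record for a multilevel feedback queue. */
    #include <stdio.h>

    #define NQUEUES 3

    struct mlfq_config {
        int nqueues;                   /* number of queues */
        int quantum_ms[NQUEUES];       /* per-queue quantum; 0 means FCFS */
        int demote_on_full_quantum;    /* demote a process that uses its whole slice */
        int promote_after_wait_ms;     /* aging: promote after waiting this long */
        int entry_queue;               /* queue a new process enters */
    };

    int main(void) {
        struct mlfq_config cfg = {
            .nqueues = NQUEUES,
            .quantum_ms = {8, 16, 0},  /* RR(8), RR(16), then FCFS at the bottom */
            .demote_on_full_quantum = 1,
            .promote_after_wait_ms = 1000,
            .entry_queue = 0,          /* new processes start at the top */
        };
        for (int q = 0; q < cfg.nqueues; q++)
            printf("queue %d quantum = %d ms\n", q, cfg.quantum_ms[q]);
        return 0;
    }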
Multiprocessor Scheduling
The scheduling process becomes more complex
Many possibilities tried, no one best solution
Heterogeneous system
– Processors are different (e.g. a distributed system)
– A process can only run on the processor for which it was compiled
Multiprocessor Scheduling
Homogeneous system
– Identical CPUs within a multiprocessor system
– A process can run on any CPU
– Can have load sharing:
Separate queue for each processor
May leave one processor busy while another is idle (empty queue)
Remedied with a common ready queue
Multiprocessor Scheduling
Homogeneous system (continued)
– Load sharing - 2 possibilities:
Self-scheduling (symmetric scheduling, SMP) - each processor examines the common ready queue and selects a process to execute -> requires careful programming to ensure that no two processors choose the same process (see the sketch below)
Master-slave (asymmetric scheduling) - one processor is appointed as the scheduler -> in some systems, other system activities are also performed by the master and the slaves only execute user code
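
A small sketch of self-scheduling over a common ready queue, using POSIX threads to stand in for processors; the queue contents and the thread count are made up. The shared lock is what prevents two "processors" from selecting the same process:

    /* Each thread plays the role of a CPU doing self-scheduling:
       it locks the shared ready queue, takes the next pid, unlocks. */
    #include <pthread.h>
    #include <stdio.h>

    #define NPROC 6
    #define NCPU  2

    static int ready[NPROC] = {1, 2, 3, 4, 5, 6};   /* pids in the common queue */
    static int head = 0;
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

    static void *cpu(void *arg) {
        long id = (long)arg;
        for (;;) {
            pthread_mutex_lock(&qlock);             /* examine the queue safely */
            if (head == NPROC) {                    /* queue is empty */
                pthread_mutex_unlock(&qlock);
                return NULL;
            }
            int pid = ready[head++];                /* select a process */
            pthread_mutex_unlock(&qlock);
            printf("cpu %ld dispatches pid %d\n", id, pid);
        }
    }

    int main(void) {
        pthread_t t[NCPU];
        for (long i = 0; i < NCPU; i++)
            pthread_create(&t[i], NULL, cpu, (void *)i);
        for (int i = 0; i < NCPU; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

Compile with -pthread; without the mutex, two CPUs could read the same queue head and dispatch the same process.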
Real-Time Scheduling
Hard real-time
– Resource reservation -> the scheduler either admits a process with a guaranteed response time or rejects it
– Such a guarantee is impossible on systems with virtual memory or secondary storage
– Composed of special software running on hardware dedicated to critical processes
Soft real-time -> less restrictive
– Requires only that critical processes receive priority
– Can support multimedia & high-speed graphics
Algorithm Analysis
Analytic Evaluation
– Uses the algorithm and the system workload to produce a formula or number that evaluates performance
– Deterministic modeling:
Takes a particular predetermined workload and defines the performance of each algorithm for that (sample) workload
Simple, fast, and gives exact numbers for the given input
Requires too specific and too exact knowledge to be useful in general -> the processes that run from day to day vary greatly
Algorithm Analysis
Queueing Models
– CPU-burst distribution is used instead of a predetermined workload
Simulations
– Uses software to simulate the major system components
– Expensive -> requires hours of computer time
– Design, coding, and debugging of the simulator are complex
Algorithm Analysis
Implementation
– Simulations have limited accuracy
– The only way to find out for sure is to implement the algorithm
– Puts the algorithm to the test in a real environment
– Disadvantages:
Cost
Reaction of users to a constantly changing O/S