Chapter 4 - Processor Management


Chapter 4
Processor Management
2015 - Semester 2
Objectives


Overview
Scheduling sub-managers
– Job Scheduler
– Process Scheduler



Process States
Process Control Block
Process Scheduling Policies:
– Two types: preemptive and non-preemptive scheduling

Process Scheduling Algorithms
– First-Come First-Served
– Shortest Job Next
– Priority Scheduling
– Shortest Remaining Time
– Round Robin
– Multiple-Level Queues
– Earliest Deadline First
Interrupts
Overview
 Program (job)
– Inactive unit. E.g. File stored on a disk
– A unit of work submitted by a user (not a process)
 Process (task)
– A program in execution
– Active entity
• Requires resources to perform function. E.g. Processor, special registers, etc.
 Process management
– Keeping track of processes and the states they are in
 Thread
– Portion of a process
– Runs independently
 Processor (CPU)
– Performs calculations and executes programs
Overview
 Multiprogramming environment
– Multiple processes competing to be run by a single processor
(CPU).
– Requires fair and efficient CPU allocation for each job
 Multitasking
– One user running many programs
– Still, resources must be shared by several programs
 Interrupt
– Call for help
– Activates higher-priority program
 Context Switch
– Saving job processing information when interrupted
Scheduling Sub-managers
 Processor Manager:
– Made up of two sub-managers
 Job Scheduler: higher-level scheduler
– Job scheduling responsibilities
– Job initiation based on certain criteria
 Process Scheduler: lower-level scheduler
– Process scheduling responsibilities
– Determines execution steps
– Process scheduling based on certain criteria
Job Scheduling
 Job Scheduler functions
– Selects incoming job from queue
– Places the job in process queue
– Decides on job initiation criteria
 Goal
– Sequence jobs
• Efficient system resource utilization
– Balance I/O interaction and computation
– Keep most system components busy most of the time
Process Scheduling
 Process Scheduler functions:
– Determines job to get CPU resource
• When and how long
– Decides interrupt processing
– Determines queues for job movement during
execution
– Recognizes job conclusion / termination
Process Scheduling (cont.)
 Make use of common computer program traits
– Programs alternate between two cycles
• CPU and I/O cycles
• Frequency and CPU cycle duration vary
 General tendencies exist:
– I/O-bound job
• Performs lots of I/O operations
• Many brief CPU cycles and long I/O cycles (printing
documents)
– CPU-bound job
• Performs lots of computation and does little I/O
• Many long CPU cycles and shorter I/O cycles (math
calculation)
Process States
 CPU / Processor Scheduling: determining which process in
the ready state should be moved to the running state
– Many processes may be in the ready state
– Only one process can be in the running state, actually running at any
one time
Process States
(figure 4.2)
A typical job (or process) changes status as it moves through the system from
HOLD to FINISHED.
© Cengage Learning 2014
Process Control Block (PCB)
 The operating system must manage a large amount of data
for each active process in the system.
 Usually that data is stored in RAM in a data structure
called a process control block (PCB)
 The OS maintains one PCB for each process
Process Control Block Attributes
 Unique ID
 Processor status information (job state, e.g.
READY, RUNNING, WAITING)
 Processor state information (main memory
info, priority, resources)
 Accounting (CPU time, I/O operations,
performance measurements)
Control Blocks and Queuing
 Job PCB
– Created when Job Scheduler accepts job
– Updated as job executes
– Queues use PCBs to track jobs
– Contains all necessary job management processing data
– PCBs linked to form queues (jobs not linked)
 PCBs require orderly management of queues
– Determined by process scheduling policies and
algorithms
PCB and Queuing
Queues use PCBs to track jobs
(figure 4.5)
Queuing paths from HOLD to FINISHED. The Job and Processor schedulers
release the resources when the job leaves the RUNNING state.
© Cengage Learning 2014
CPU Context Switch
 There is only one CPU and therefore only one set of CPU
registers
– These registers contain the values for the currently
executing process
 Each time a process is moved to the running state:
– Register values for the currently running process
are stored into its PCB
– Register values of the new running state are loaded
into the CPU
– This exchange of information is called a context switch
Process Scheduling Policies
 Multiprogramming environment
– More jobs than resources at any given time
 Operating system pre-scheduling task
– Resolve three system limitations:
• Finite number of resources (disk drives, printers,
tape drives)
• Some resources cannot be shared once allocated
(printers)
• Some resources require operator intervention before
reassigning (tape drives)
“Some” definitions
 CPU utilization – keep the CPU as busy as possible
 Turnaround time – the amount of time between when a
process first arrives in the ready state and when it exits the
running state for the last time
 Throughput – the number of processes that complete their
execution per time unit
 Waiting time – amount of time a process has been waiting
in the ready queue
 Response time – amount of time it takes from when a
request was submitted until the first response is produced,
not output (for time-sharing environment)
Process Scheduling Policies (cont.)
 Good process scheduling policy criteria:
– Maximize throughput
• Run as many jobs as possible in given amount of time
• Minimize overhead (context switch time) as well as efficient
use of resources (CPU, disk, memory, etc.)
– Minimize response time
• Elapsed time to do an operation (job)
• Response time is what the user sees
• Time to echo a keystroke in the editor
• Time to compile a program
• Quickly turn around interactive requests
– Minimize turnaround time
• amount of time to execute a particular process
• Move entire job in and out of system quickly
Process Scheduling Policies (cont.)
 Good process scheduling policy criteria (cont.):
– Minimize waiting time
• amount of time a process has been waiting in the ready
queue
• Move job out of READY queue quickly
– Maximize CPU efficiency
• Keep CPU busy 100 % of time
– Ensure fairness for all jobs
• Give every job equal CPU and I/O time
 The final policy decision lies in the hands of the system
designer or administrator!
Process Scheduling Policies (cont.)
 Problem
– Job claims CPU for very long time before I/O
request issued
• Builds up READY queue and empties I/O queues
• Creates unacceptable system imbalance
 Corrective measure
– Interrupt
• Used by Process Scheduler upon predetermined
expiration of time slice
• Current job activity suspended
• Reschedules job into READY queue
Process Scheduling Policies (cont.)
 Two types of scheduling policies:
 Non-preemptive scheduling The currently executing
process gives up the CPU voluntarily
– Once CPU has been allocated to a process, the process keeps the
CPU until
• Process exits OR
• Process switches to waiting state
 Preemptive scheduling The operating system decides to
favor another process, preempting the currently executing
process. (Interruption).
– Transfers CPU to another job
– Used in time-sharing environments
Process Scheduling Algorithms
 Seven algorithm types
– First-come, first-served (FCFS)
– Shortest job next (SJN)
– Priority scheduling
– Shortest remaining time (SRT)
– Round robin
– Multiple-level queues
– Earliest deadline first (EDF)
 Most current systems emphasize interactive use and
response time (and therefore use preemptive
policies)
First Come First Served (FIFO)
 Non-preemptive
 Job handled based on arrival time
– Earlier job arrives, earlier served
 Simple algorithm implementation
– Uses first-in, first-out (FIFO) queue
 Good for batch systems
 Unacceptable in interactive systems
– Unpredictable turnaround time
 Disadvantages
– Average turnaround time varies widely and is seldom minimized
First Come First Served (FIFO)
What is the average turnaround time?
(140+215+535+815+940)/5 = 529
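The completion times in this example imply five jobs with CPU cycles of 140, 75, 320, 280, and 125 ms, all arriving at time 0 in that order. A minimal FCFS sketch reproduces the average of 529:

```python
def fcfs_turnaround(cpu_cycles):
    """Average turnaround time under FCFS, all jobs arriving at time 0."""
    clock, total = 0, 0
    for burst in cpu_cycles:   # serve jobs strictly in arrival order
        clock += burst         # job completes at the running total
        total += clock         # turnaround = completion time - arrival (0)
    return total / len(cpu_cycles)

print(fcfs_turnaround([140, 75, 320, 280, 125]))  # → 529.0
```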
Shortest Job Next (SJN)
 Non-preemptive
 Also known as shortest job first (SJF)
 Job handled based on length of CPU cycle time
 Easy implementation in batch environment
– CPU time requirement known in advance
 Does not work well in interactive systems
 Optimal algorithm
– All jobs are available at same time
– CPU estimates available and accurate
Shortest Job Next (SJN)
What is the average turnaround time?
(75+200+340+620+940)/5 = 435
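SJN differs from FCFS only in serving the shortest CPU cycle first. Using the same five jobs (CPU cycles 140, 75, 320, 280, 125, all arriving at 0), this sketch reproduces the average of 435:

```python
def sjn_turnaround(cpu_cycles):
    """Average turnaround under SJN: shortest CPU cycle first, all arriving at 0."""
    clock, total = 0, 0
    for burst in sorted(cpu_cycles):  # the only change from FCFS: sort by burst
        clock += burst
        total += clock
    return total / len(cpu_cycles)

print(sjn_turnaround([140, 75, 320, 280, 125]))  # → 435.0
```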
Priority Scheduling
 Non-preemptive
 Preferential treatment for important jobs
– Highest priority programs processed first
– No interrupts until CPU cycles completed or
natural wait occurs
 READY queue may contain two or more jobs
with equal priority
– Uses FCFS policy within priority
 System administrator or Processor Manager use
different methods of assigning priorities
Priority Scheduling (cont.)
 Processor Manager priority assignment
methods:
– Memory requirements
• Jobs requiring large amounts of memory
• Allocated lower priorities (vice versa)
– Number and type of peripheral devices
• Jobs requiring many peripheral devices
• Allocated lower priorities (vice versa)
Priority Scheduling (cont.)
 Processor Manager priority assignment
methods (cont.)
– Total CPU time
• Jobs having a long CPU cycle
• Given lower priorities (vice versa)
– Amount of time already spent in the system
(aging)
• Total time elapsed since job accepted for processing
• Increase priority if job in system unusually long time
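Aging can be sketched as a periodic priority bump for jobs that have waited too long. The threshold and increment below are arbitrary illustrative values, and this sketch assumes a lower number means higher priority:

```python
def apply_aging(jobs, clock, wait_threshold=100, boost=1):
    """Raise the priority of any job that has waited longer than the threshold.
    Lower number = higher priority (an assumption of this sketch)."""
    for job in jobs:
        if clock - job["arrival"] > wait_threshold:
            job["priority"] = max(0, job["priority"] - boost)
    return jobs

jobs = [{"name": "A", "arrival": 0, "priority": 5},
        {"name": "B", "arrival": 90, "priority": 2}]
apply_aging(jobs, clock=150)
print(jobs[0]["priority"], jobs[1]["priority"])  # → 4 2
```

Job A has been in the system 150 time units and gets boosted; job B has only waited 60 and is unchanged.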
Shortest Remaining Time (SRT)
 Preemptive version of SJN
 Processor allocated to job closest to completion
– Preempts the running job if a newer job has a shorter completion time
 Often used in batch environments
– Short jobs given priority
 Cannot implement in interactive system
– Requires advance CPU time knowledge
 Involves more overhead than SJN
– System monitors CPU time for READY queue jobs
– Performs context switching
Shortest Remaining Time (SRT)
Job:           A   B   C   D
Arrival Time:  0   1   2   3
CPU Cycle:     6   3   1   4
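A minimal SRT simulation of jobs A–D (arrival times 0–3, CPU cycles 6, 3, 1, 4) is sketched below, using the common convention that ties go to the earliest-listed job; it yields an average turnaround of 6.25.

```python
def srt_turnaround(jobs):
    """Preemptive SRT: each tick, run the arrived job with least remaining time.
    jobs: list of (name, arrival_time, cpu_cycle)."""
    remaining = {name: burst for name, _, burst in jobs}
    arrival = {name: arr for name, arr, _ in jobs}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1
            continue
        job = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[job] -= 1                           # run for one time unit
        clock += 1
        if remaining[job] == 0:
            del remaining[job]
            finish[job] = clock
    turnarounds = [finish[n] - arrival[n] for n, _, _ in jobs]
    return sum(turnarounds) / len(jobs)

print(srt_turnaround([("A", 0, 6), ("B", 1, 3), ("C", 2, 1), ("D", 3, 4)]))  # → 6.25
```

C (the shortest job) preempts B, which preempted A; individual turnarounds are A=14, B=4, C=1, D=6.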
Round Robin
 Preemptive
 Distributes the processing time equitably among all ready
processes
 The algorithm establishes a particular time slice
(quantum), which is the amount of time each process
receives before being preempted and returned to the ready
state to allow another process its turn
 Time quantum size:
– Crucial to system performance
– Varies from 100 ms to 1-2 seconds
 CPU equally shared among all active processes
– Not monopolized by one job
 Used extensively in interactive systems
Round Robin (cont.)
Suppose the time slice is 50
What is the average turnaround time?
(515+325+940+920+640)/5 = 668
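Feeding the same five jobs (CPU cycles 140, 75, 320, 280, 125, all arriving at 0 in that order) through a round-robin sketch with a 50 ms slice reproduces the average of 668:

```python
from collections import deque

def rr_turnaround(cpu_cycles, quantum):
    """Round robin with all jobs arriving at time 0, in list order."""
    queue = deque(enumerate(cpu_cycles))    # (job index, remaining time)
    clock, total = 0, 0
    while queue:
        job, rem = queue.popleft()
        run = min(quantum, rem)             # run one time slice (or less)
        clock += run
        if rem > run:
            queue.append((job, rem - run))  # preempted: back of the READY queue
        else:
            total += clock                  # turnaround = completion (arrival 0)
    return total / len(cpu_cycles)

print(rr_turnaround([140, 75, 320, 280, 125], 50))  # → 668.0
```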
Round Robin (cont.)
Suppose the time quantum is 4
Job:           A   B   C   D
Arrival Time:  0   1   2   3
CPU Cycle:     8   4   9   5
(figure 4.11)
Timeline for job sequence A, B, C, D using the preemptive
round robin algorithm with time slices of 4 ms.
© Cengage Learning 2014
Round Robin (cont.)
 Efficiency
– Depends on time quantum size
• In relation to average CPU cycle
 Quantum too large (larger than most CPU cycles)
– Response time suffers
– Algorithm reduces to FCFS scheme
 Quantum too small
– Throughput suffers
– Context switching occurs
• Job execution slows down
• Overhead dramatically increased
 Information saved in PCB
Round Robin (RR) (cont.)
 Best quantum time size:
– Depends on system
• Interactive: response time key factor
• Batch: turnaround time key factor
– General rules of thumb
• Long enough for 80% of CPU cycles to complete
• At least 100 times longer than context switch time
requirement
Process Scheduling Algorithms
Are they preemptive or non-preemptive? Explain.
 First-Come, First-Served?
 Shortest Job Next?
 Shortest Remaining Time?
 Round Robin?
Multiple-Level Queues
 A class of scheduling algorithms that categorise processes by
some characteristic and can be used in conjunction with
other policies.
– Processes can be categorised by: memory size, process type, their
response time, their externally assigned priorities, etc.
 A multilevel queue-scheduling algorithm divides the ready
queue into several separate queues to which processes are
assigned.
– Each queue is associated with one particular scheduling algorithm
 Examples:
– Interactive processes: Round Robin
– Background processes: FCFS and SRT
Multiple-Level Queues
 Not really a separate scheduling algorithm
 Works in conjunction with several other schemes
 Works well in systems with jobs that can be grouped
according to common characteristics
– Priority-based
• Different queues for each priority level
– CPU-bound jobs in one queue and I/O-bound jobs in another queue
– Hybrid environment
• Batch jobs in background queue
• Interactive jobs in foreground queue
 Scheduling policy based on predetermined scheme
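A hybrid setup like the one above might be sketched as a foreground round-robin queue plus a background FCFS queue, with the foreground always dispatched first. The class and method names below are illustrative, not from the textbook:

```python
from collections import deque

class MultiLevelQueue:
    """Illustrative two-level scheduler: interactive jobs get round robin,
    batch jobs get FCFS, and the foreground queue always wins."""
    def __init__(self, quantum=4):
        self.foreground = deque()   # interactive: round robin
        self.background = deque()   # batch: first-come, first-served
        self.quantum = quantum

    def add(self, name, burst, interactive):
        (self.foreground if interactive else self.background).append([name, burst])

    def next_slice(self):
        """Return (job name, time it runs) for the next dispatch, or None."""
        if self.foreground:
            job = self.foreground.popleft()
            run = min(self.quantum, job[1])
            job[1] -= run
            if job[1] > 0:
                self.foreground.append(job)   # preempted: back of its own queue
            return job[0], run
        if self.background:
            job = self.background.popleft()   # runs to completion (FCFS)
            return job[0], job[1]
        return None

mlq = MultiLevelQueue(quantum=4)
mlq.add("editor", 6, interactive=True)
mlq.add("payroll", 10, interactive=False)
print(mlq.next_slice())  # → ('editor', 4)
print(mlq.next_slice())  # → ('editor', 2)
print(mlq.next_slice())  # → ('payroll', 10)
```

This corresponds to Case 1 (no movement between queues); the other cases would add logic to demote or promote jobs between levels.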
Multiple-Level Queues (cont.)
 Four primary methods of moving jobs:
– Case 1: No Movement Between Queues
– Case 2: Movement Between Queues
– Case 3: Variable Time Quantum Per Queue
– Case 4: Aging
Find out about these cases on your own ...
Earliest Deadline First (EDF)
 Dynamic priority algorithm
 Preemptive
 Addresses critical processing requirements of
real-time systems: deadlines
 Job priorities can be adjusted while moving
through the system
 Primary goal:
– Process all jobs in order most likely to allow each
to run to completion before reaching their
respective deadlines
Earliest Deadline First (EDF) (cont.)
 Initial job priority: inversely proportional to its
absolute deadline
– Jobs with same deadlines: another scheme applied
 Priority can change as more important jobs enter
the system
 Problems
– Missed deadlines: total time required for all jobs
greater than allocated time until final deadline
– Impossible to predict job throughput: changing priority
nature
– High overhead: continual evaluation of deadlines
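EDF's core rule, always dispatch the ready job with the nearest absolute deadline, can be sketched with a heap. This simplified sketch assumes all jobs are ready at time 0 and ignores preemption and new arrivals; the job data is illustrative:

```python
import heapq

def edf_order(jobs):
    """Return job names in the order EDF would dispatch them,
    assuming all jobs are ready at time 0 and run to completion.
    jobs: list of (name, absolute_deadline)."""
    heap = [(deadline, name) for name, deadline in jobs]
    heapq.heapify(heap)                       # earliest deadline on top
    order = []
    while heap:
        deadline, name = heapq.heappop(heap)  # nearest deadline first
        order.append(name)
    return order

print(edf_order([("A", 30), ("B", 10), ("C", 20)]))  # → ['B', 'C', 'A']
```

In a full preemptive EDF, the heap would be re-examined whenever a new job arrives, which is the source of the continual-evaluation overhead noted above.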
Interrupts
 Interrupt Types
– Page interrupt (memory manager)
• Accommodate job requests
– Time quantum expiration interrupt
– I/O interrupt
• Result from READ or WRITE command issuance
– Internal interrupt
• Synchronous
• Result from arithmetic operation or job instruction
– Illegal arithmetic operation interrupt
• Dividing by zero; bad floating-point operation
Interrupts
 Interrupt Types (cont.):
– Illegal job instruction interrupt
• Protected storage or non-existent access attempt
• Attempts to use an undefined operation code, operating on
invalid data, etc.
 Interrupt handler
– Control program that handles the interruption event
sequence. When the OS detects a non-recoverable error,
the interrupt handler follows this sequence:
• Type of interrupt is described and stored to be passed on to the
user as an error message
• State of the interrupted process is saved
• The interrupt is processed
• Processor resumes normal operation.
Homework
Read chapter 4 in the textbook!!
End