Transcript Schedulers

Operating Systems and Multiprogramming
(Styresystemer og Multiprogrammering)
Block 3, 2005
Processes, Threads, and
Scheduling
Robert Glück
Today’s Plan
• Processes:
– Process Concept
– Context Switch
– Process Family
– Interprocess Communication
• Threads:
– A light form of parallelism
• Scheduling:
– How to utilize the system resources best?
– How to ensure short waiting times?
2
What is a Process?
• A process is a program in execution:
– Program code
– State of execution (registers, open files, ...)
• Process execution proceeds in a sequential fashion
• The operating system executes a variety of
user processes and system processes
• A program's functionality is specified by
a series of instructions:
– Executable code
– High-level programming language
3
What’s in a Process?
• Information associated with each process:
– Program code (same for all instances)
– Program data
– CPU registers
– System resources:
• Memory
• Open files
– Accounting information:
• Process ID
• Resource usage
4
Process Control Block
5
Life Cycle of a Process
As a process P executes, it changes state:
new: P is being created
running: P’s instructions are executed
waiting: P is waiting for some event
ready: P is ready to be executed,
but waits for CPU time
terminated: P has finished execution
6
Process Life Cycle
Schedulers: manage queues
7
Switch from Process to Process
8
Context Switch
• Context switch is expensive (typical: ≤10msec)
– Direct costs:
• Save and load execution state (CPU registers, counters,...)
• Memory-management information (e.g., virtual memory)
– Indirect costs:
• CPU cache filled with data of old process
• Other caches have similar problems
• Hardware supported context switch:
– Single instruction to load and store registers
– CPU has several register sets (example: 10 sets)
– Intel hyperthreading (2 active threads per CPU)
9
Process Scheduling Queues
Processes that are not active are located in:
– Ready queue: ready, waiting for CPU
– Waiting queues: process waits for an event:
• I/O device: waiting for an I/O device
• Synchronization: limited access to a
system resource requires waiting
10
Scheduling Queues
[Figure: ready queue feeding the CPU; waiting queues for I/O]
11
Representation of Process Scheduling
12
Schedulers
• Long-term: select which processes should
be brought into the ready queue
• Short-term: selects which process should
be executed next and allocates CPU
• Medium-term: swap-out and swap-in to
reduce the degree of multiprogramming when
the system is overcommitted, or to improve the process mix
13
Scheduling Time Horizons
[Figure: time horizons — long-term: up to minutes; medium-term: in between; short-term: up to 10 msec]
14
Family Life of Processes
• Process (parent) can start new processes (children)
• A child can inherit (parts of) the system resources of
the parent
– Open files
– Address space
• A child can be an instance of another program
(typically: inherit nothing of the parent)
• Execution of child:
– Parent waits until child terminates
– Parent and child execute concurrently
• When parent terminates:
– Kill all children
– Rights to the children are passed to an ancestor (in UNIX: init)
15
UNIX Example
• fork() duplicates a process:
– Copy address space of the parent
– Return value:
• 0 for the child
• Process_id of the child is returned to parent
• wait() waits for the termination of a child
• execv(prog) executes a new program "prog"
16
Forking Separate Process in C
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed\n");
        exit(-1);
    }
    else if (pid == 0) { /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else { /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete\n");
        exit(0);
    }
}
17
UNIX Example: pstree
[Figure: pstree output showing the process tree]
18
UNIX Example: ps
[Figure: ps output — one process with one child; another with four children]
19
Cooperating Processes
• Independent process if it cannot affect or be
affected by other processes.
• Cooperating process if it can affect or be
affected by other processes.
• Reasons for providing process cooperation:
– Share data (example: a shared file)
– Computation speedup (only if multiple processing
elements such as CPU or I/O channels)
– Modularity (divide system functions into modules)
– Convenience: several processes are natural for a
task (example: edit, print, compile in parallel)
20
Interprocess Communication (IPC)
• Linking input and output streams:
– UNIX pipes, example: gunzip -v text.txt.gz | less
• Message Passing:
– Direct:
• send(453, message): send msg to process with ID 453
• receive(id, message): receive msg from some process
– Indirect (via mailboxes):
• Easier to establish many-to-many relations
• Named mailboxes abstract from process ID
• Shared memory
21
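The pipe mechanism behind a shell pipeline like gunzip -v text.txt.gz | less can be sketched in C. In this minimal example (the helper name pipe_roundtrip is illustrative, not from the slides) a message is pushed through a pipe within a single process; in a real pipeline, fork() would put the write end and the read end in different processes:

```c
#include <string.h>
#include <unistd.h>

/* write msg into a pipe and read it back; returns 1 on a faithful round trip */
int pipe_roundtrip(const char *msg)
{
    int fd[2];                     /* fd[0]: read end, fd[1]: write end */
    char buf[64] = {0};
    size_t len = strlen(msg);

    if (len >= sizeof buf) return 0;
    if (pipe(fd) != 0) return 0;   /* create the pipe */
    if (write(fd[1], msg, len) != (ssize_t)len) return 0;
    if (read(fd[0], buf, sizeof buf - 1) != (ssize_t)len) return 0;
    close(fd[0]);
    close(fd[1]);
    return strcmp(buf, msg) == 0;
}
```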
Producer-Consumer Problem
• Paradigm for cooperating processes: a
producer process produces information that is
consumed by a consumer process
• Buffer:
– Unbounded = no practical limit on the size
– Bounded = synchronize P’s and C’s speed
• Shared memory:
– buffer: a circular array
– in: index of the next free slot
– out: index of the next available element
22
Shared-Memory Solution
Producer:

item nextProduced;
while (1) {
    while (((in + 1) % BUFFER_SIZE) == out)
        yield(); /* buffer FULL condition: do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer:

item nextConsumed;
while (1) {
    while (in == out)
        yield(); /* buffer EMPTY condition: do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}
23
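The shared-memory solution above can be packaged as a testable bounded buffer. This sketch (the names buf_put and buf_get are illustrative) returns a status instead of busy-waiting, and sacrifices one slot so that the FULL and EMPTY conditions can be told apart, exactly as the in/out tests above do:

```c
#define BUFFER_SIZE 5

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;        /* in == out means EMPTY */

/* returns 0 when the buffer is FULL (one slot is left unused) */
int buf_put(int item)
{
    if ((in + 1) % BUFFER_SIZE == out) return 0;
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    return 1;
}

/* returns 0 when the buffer is EMPTY */
int buf_get(int *item)
{
    if (in == out) return 0;
    *item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return 1;
}
```

In real producer-consumer use, the two functions run in different processes or threads and need synchronization; this single-threaded form only illustrates the index arithmetic.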
Threads
• Process can have several parallel tasks:
– Web browser: display images, layout text,
retrieve data from the network
– Word processor: display graphics, check spelling,
read keystrokes from the user
• Threads give a process the possibility to have
several active program sequences that share
the resources of the process:
– threads are "lightweight" (low overhead)
– processes are "heavyweight" (high overhead)
24
Single- and Multiple-Threaded
[Figure: a single-threaded process vs. a multithreaded process with three threads]
25
User and Kernel Threads
• User Threads:
– Implemented ”inside” a single process
– Administration of threads inside user process
– Low overhead when creating, switching, and
terminating threads
• Kernel Threads:
– Calls kernel for administration (expensive)
– Kernel can switch threads when a system call
blocks
– Can exploit multiple processors (parallelism)
26
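As a concrete illustration of threads sharing the resources of one process, here is a small POSIX-threads sketch (not from the slides; compile with -pthread). Two kernel threads update disjoint slots of a shared array, which lives in the common address space of the process:

```c
#include <pthread.h>

/* shared data: all threads of a process see the same address space */
static int counter[2];

static void *worker(void *arg)
{
    int idx = *(int *)arg;
    for (int i = 0; i < 1000; i++)
        counter[idx]++;            /* disjoint slots: no data race */
    return NULL;
}

/* starts two threads, waits for both, and returns the combined count */
int run_two_threads(void)
{
    pthread_t t[2];
    int ids[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return counter[0] + counter[1];
}
```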
CPU Scheduling
• Goal of CPU scheduling:
”Best utilization of system resources”
• Different scheduling strategies:
– First come, first served
– Shortest job first
– Round robin
– Priority based
– Multilevel queue
27
Goals of Scheduling
• Maximize utilization of system resources:
– CPU
– I/O devices
• Process execution consists of:
– CPU execution
– I/O requests
• Ensure low waiting times for user!
28
Usage of CPU versus I/O
• CPU-bound: uses the CPU more
than I/O devices
• I/O-bound: uses I/O devices more
than the CPU
29
Histogram of CPU-burst Times
30
CPU Scheduler
• CPU scheduler selects a process for
execution from a set of ready processes
• Scheduling decisions may take place when a:
1. running process switches to waiting state (I/O req)
2. running process switches to ready state (interrupt)
3. waiting process switches to ready state (I/O done)
4. process terminates
• Nonpreemptive (voluntary switch): 1 & 4
• Preemptive (forced switch): 2 & 3
31
Criteria for CPU Scheduling
• CPU utilization: MAXIMIZE
– Percentage of time the CPU is busy (typically 40-90%)
• Throughput: MAXIMIZE
– Number of processes completed per minute
• Turnaround time: MINIMIZE
– Time it takes to complete a process
• Waiting time: MINIMIZE
– Time spent waiting in the ready queue
• Response time: MINIMIZE
– Time from a request to the first response
32
First Come, First Served (FCFS)
Process   CPU burst time (msec)
P1        24
P2        3
P3        3

• Processes arrive in the order: P1, P2, P3
• Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

• Waiting times: P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
33
First Come, First Served (cont’d)
• Suppose that the processes arrive in the order
P2 , P3 , P1
• Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30
• Waiting times: P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case!
• Convoy effect: short process behind long process
34
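The waiting-time arithmetic of the two FCFS examples can be checked mechanically. A small helper (the name fcfs_avg_wait is illustrative) sums each process's waiting time, which under FCFS is simply the total burst time of everyone ahead of it:

```c
/* average waiting time under FCFS for bursts served in the given order */
double fcfs_avg_wait(const int *burst, int n)
{
    int wait = 0;      /* start time of the next process */
    int total = 0;     /* sum of all waiting times */
    for (int i = 0; i < n; i++) {
        total += wait;
        wait += burst[i];
    }
    return (double)total / n;
}
```

For bursts of 24, 3, 3 msec this gives 17.0 in arrival order P1, P2, P3 and 3.0 in order P2, P3, P1, matching the slides.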
Shortest Job First
• CPU assigned to the process that has the
smallest next CPU burst:
– Preemptive: when a new process with a shorter
CPU burst arrives at the ready queue
– Nonpreemptive: running process is not interrupted
• Shortest waiting time for a set of processes
• Problem: need to know the execution time
• Heuristics predict the length of the next CPU burst:
– Exponential average:
τ_{n+1} = α · t_n + (1 − α) · τ_n
where t_n is the length of the nth CPU burst,
τ_n is the previous prediction, and 0 ≤ α ≤ 1
35
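The exponential average is one line of code. In this sketch (the function name is illustrative), tau is the previous prediction, t the length of the burst just measured, and alpha weights recent history against the past:

```c
/* exponential average: tau_next = alpha * t + (1 - alpha) * tau */
double predict_next_burst(double alpha, double t, double tau)
{
    return alpha * t + (1.0 - alpha) * tau;
}
```

With alpha = 0.5, a previous prediction of 10 and a measured burst of 6 yield a prediction of 8.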
Round Robin (RR)
• Each process gets a small unit of CPU time
(time quantum), usually q=10-100 msec.
• After this time has elapsed, the process is
preempted and added to the end of the ready
queue.
• If there are n processes in the ready queue,
then each process gets 1/n of the CPU time,
and no process waits longer than (n−1)q time units.
• Performance:
– q large → long waiting times (same as FCFS)
– q small → overhead too high unless q is large
with respect to the context-switch time
36
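The round-robin policy can be simulated in a few lines of C. This sketch (the name rr_finish_times and the fixed MAXP limit are illustrative assumptions; context-switch cost is ignored) cycles over the processes, giving each at most q units per turn and recording when each one completes:

```c
#include <string.h>

#define MAXP 16    /* illustrative fixed limit on the number of processes */

/* simulate round robin with quantum q; writes completion times into finish[] */
void rr_finish_times(const int *burst, int n, int q, int *finish)
{
    int rem[MAXP];
    memcpy(rem, burst, (size_t)n * sizeof(int));
    int t = 0, left = n;
    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;
            int slice = rem[i] < q ? rem[i] : q;   /* at most one quantum */
            rem[i] -= slice;
            t += slice;
            if (rem[i] == 0) { finish[i] = t; left--; }
        }
    }
}
```

For bursts 24, 3, 3 with q = 4, the short processes finish at times 7 and 10 while the long one finishes at 30.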
Priority Scheduling
• A priority number is associated with each
process. Example: 0 (high) - 7 (low)
• The CPU is allocated to the process with the
highest priority (equal priorities are served FCFS)
• SJF is a priority scheduling where the priority
is the predicted next CPU burst time
• Problem: Starvation – low priority processes
may never execute
• Solution: Aging – as time progresses increase
the priority of the process
37
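Aging can be sketched in a few lines. Assuming priorities where 0 is highest (as in the slide's 0-7 example; the function name is illustrative), each clock tick moves every waiting process one step toward the top, so no process starves forever:

```c
/* aging: each tick, raise the priority of every waiting process by one step
   (0 is the highest priority, as in the 0-7 example) */
void age_priorities(int *prio, const int *waiting, int n)
{
    for (int i = 0; i < n; i++)
        if (waiting[i] && prio[i] > 0)
            prio[i]--;
}
```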
Multilevel Ready Queue
• Queue is partitioned into separate queues:
– foreground (interactive) processes
– background (batch) processes
• Each queue has its own scheduling algorithm:
– foreground: RR
– background: FCFS
• Scheduling must be done between the
queues:
– Fixed priority scheduling: serve all from
foreground, then from background. Possibility of
starvation
– Time slice: example: 80% to foreground in RR,
20% to background in FCFS.
38
Multilevel Feedback Queue
• A process can move between the
various queues; e.g., implement aging
• Observe processes and place them into
queue according to their behavior:
– A foreground process that often (or once)
exceeds its time quantum is placed into the
background queue
39
Example of Multilevel Feedback Queue
40
Summary
• Process: program in execution, process
switch, process queues
• Threads: parallelism within a process
• Scheduling: maximize CPU utilization,
minimize waiting times, forced vs voluntary
process switch, several scheduling
strategies
41
Source
• These slides are based on SGG04
and the slides provided by the authors.
42