Multiprocessor and Real-Time Scheduling
Chapter 10
Classifications of
Multiprocessor Systems
• Loosely coupled or distributed multiprocessor,
or cluster
– Each processor has its own memory and I/O
channels
• Functionally specialized processors
– Such as I/O processor or Graphics Processor
– Controlled by a master processor
• Tightly coupled multiprocessing
– Processors share main memory
– Controlled by operating system
Independent Parallelism
• Separate application or job
• No synchronization among processes
• Example is time-sharing system
– Each user has his or her own processor to
run their programs.
– But still sharing main memory and I/O
resources with the other users.
Independent Parallelism
• Can be scheduled much like a single-processor system.
– It doesn’t matter which process runs on which processor, or when.
• Main concern is keeping a process on the same processor across runs so it can reuse cache memory.
Coarse and Very Coarse-Grained Parallelism
• Synchronization among processes at a very
gross level
– The processes rarely have to talk to one another.
– Example: Grocery shopping by having each
person go find one item.
• Good for concurrent processes running on a multiprogrammed uniprocessor.
– Very little swapping needed
– Can be supported on a multiprocessor with little change
Medium-Grained Parallelism
• Single application is a collection of threads.
– On a uniprocessor, the threads would be swapped in and out to run.
– On a multiprocessor, each thread can run on a separate processor.
• Threads usually interact frequently
– Many more communication issues than in the previous cases.
– How often one thread runs may have a strong effect on the other threads of the process.
• One thread can potentially overwhelm the other threads with requests because it is running constantly.
Medium-Grained Parallelism
• Unlike independent parallelism, the application needs multiple processors at once.
Fine-Grained Parallelism
• Highly parallel applications
– Taking a group of instructions and
attempting to execute them all at once.
– Graphics processors utilize this with
multiple pipelines to render images quickly.
• Specialized and fragmented area
– Translation: Not really discussed in this
book.
Scheduling
• Assignment of processes to processors
• Use of multiprogramming on individual
processors
• Actual dispatching of a process
Assignment of Processes to
Processors
• Treat processors as a pooled resource and
assign process to processors on demand
– Single queue for all processes
– Multiple queues are used for priorities
– All queues feed to the common pool of processors
• Permanently assign process to a processor
– Known as group or gang scheduling
– Dedicate short-term queue for each processor
– Less overhead
– Processor could be idle while another processor has a backlog
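The pooled-processor idea above can be sketched in a few lines. This is a minimal illustration, not any real dispatcher; the names `dispatch`, `ready_queue`, and the CPU labels are invented for the example.

```python
from collections import deque

def dispatch(ready_queue, processors):
    """Assign queued processes to processors on demand, from one global queue."""
    assignments = {}
    for cpu in processors:
        if not ready_queue:
            break  # a processor may sit idle when no work is queued
        assignments[cpu] = ready_queue.popleft()
    return assignments

queue = deque(["P1", "P2", "P3"])
print(dispatch(queue, ["CPU0", "CPU1"]))  # {'CPU0': 'P1', 'CPU1': 'P2'}
print(list(queue))                        # ['P3'] still waits in the common pool
```

With permanent (per-processor) queues instead, P3 could be stuck behind a backlog even while another CPU idles, which is exactly the downside the slide notes.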
Assignment of Processes to
Processors
• Global queue
– Schedule to any available processor
• Master/slave architecture
– Key kernel functions always run on a particular processor
– Master is responsible for scheduling
– Slaves send service requests to the master
– Disadvantages: the master can become a bottleneck, and failure of the master brings down the whole system
Assignment of Processes to
Processors
• Peer architecture
– Operating system can execute on any
processor
– Each processor does self-scheduling
– Complicates the operating system
• Make sure two processors do not choose the
same process.
• No central point to synchronize resources.
Thread Scheduling
• A thread executes separately from the rest of its process
• An application can be a set of threads that cooperate and execute concurrently in the same address space
• Running threads on separate processors can yield a dramatic gain in performance
– A medium-grained application can likewise suffer a big slowdown if confined to a single processor.
Multiprocessor Thread
Scheduling
• Load sharing
– Processes are not assigned to a particular processor
– Downside: cache memory is not as useful.
– Goal: maximizing processor use
• Gang scheduling
– A set of related threads is scheduled to run on a set of processors at the same time.
– A poor fit for processes with only one thread.
– Goal: maximizing process efficiency
Multiprocessor Thread
Scheduling
• Dedicated processor assignment
– Threads are assigned to a specific processor
– Makes good use of cache memory
• Dynamic scheduling
– Number of threads can be altered during
course of execution
– If process needs fewer/more threads while
running.
Multiprocessor Thread
Scheduling
• We can track the resident thread count for a process.
– How many threads it needs running at once for smooth communication
• A medium-term scheduler for the processors then moves processes in and out.
Load Sharing
• Load is distributed evenly across the
processors
• No centralized scheduler required
• Use global queues
Disadvantages of Load
Sharing
• Central queue needs mutual exclusion
– May be a bottleneck when more than one processor looks for work at the same time
• Preempted threads are unlikely to resume execution on the same processor
– Cache use is less efficient
• If all threads are in the global queue, all threads of a program will not gain access to the processors at the same time
Gang Scheduling
• Simultaneous scheduling of threads that
make up a single process
• Useful for applications where
performance severely degrades when
any part of the application is not running
• Threads often need to synchronize with
each other
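Gang scheduling can be sketched as building time slices in which every thread of one process runs simultaneously. This is a toy illustration under simplifying assumptions (one slice per gang, gangs no larger than the machine); `gang_schedule` and the thread labels are invented names.

```python
def gang_schedule(gangs, n_cpus):
    """gangs: list of (process_name, thread_count).
    Returns one time slice per gang: all of its threads run together,
    leftover CPUs are left idle (the cost of small gangs)."""
    slices = []
    for name, n_threads in gangs:
        assert n_threads <= n_cpus, "toy model: a gang must fit the machine"
        slice_ = [f"{name}.t{i}" for i in range(n_threads)]
        slice_ += ["idle"] * (n_cpus - n_threads)
        slices.append(slice_)
    return slices

print(gang_schedule([("A", 4), ("B", 1)], 4))
# [['A.t0', 'A.t1', 'A.t2', 'A.t3'], ['B.t0', 'idle', 'idle', 'idle']]
```

The second slice shows why one-thread processes fare poorly under gang scheduling: three of the four CPUs sit idle for that slice.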
Scheduling Groups
Dedicated Processor
Assignment
• When application is scheduled, its
threads are assigned to a processor
• Some processors may be idle
• No multiprogramming of processors
Dynamic Scheduling
• Number of threads in a process is altered dynamically by the application
• Operating system adjusts the load to improve utilization
– Assign idle processors
– New arrivals may be assigned to a processor that is used by a job currently holding more than one processor
– Hold the request until a processor is available
– Assign a processor to any job in the list that currently has no processors (i.e., to all waiting new arrivals)
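One plausible reading of the allocation rules above is: jobs holding no processors get first claim on idle CPUs, and requests that cannot be satisfied are held. A minimal sketch, with `allocate` and all job/CPU names invented for illustration:

```python
def allocate(idle_cpus, jobs):
    """jobs: dict name -> number of processors currently held.
    Returns (new_assignments, requests_held_until_a_cpu_frees)."""
    assignments, waiting = {}, []
    # jobs with zero processors are served first (sorted by holdings)
    for name in sorted(jobs, key=jobs.get):
        if jobs[name] == 0:
            if idle_cpus:
                assignments[name] = idle_cpus.pop()
            else:
                waiting.append(name)  # hold request until a processor is free
    return assignments, waiting

print(allocate(["CPU3"], {"big": 3, "new1": 0, "new2": 0}))
# ({'new1': 'CPU3'}, ['new2'])
```

Here `big` already holds three processors, so the single idle CPU goes to a new arrival, and the other new arrival waits.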
Real-Time Systems
• Correctness of the system depends not
only on the logical result of the
computation but also on the time at
which the results are produced
• Tasks or processes attempt to control or
react to events that take place in the
outside world
• These events occur in “real time” and
tasks must be able to keep up with them
Real-Time Systems
• Control of laboratory experiments
• Process control in industrial plants
• Robotics
• Air traffic control
• Telecommunications
• Military command and control systems
• Your car.
Characteristics of Real-Time
Operating Systems
• Two Main Types of Real-Time Processes
– Soft real-time task: bad to miss, but not the end of the world.
– Hard real-time task: certain death or failure if the deadline is not met.
Characteristics of Real-Time
Operating Systems
• Deterministic
– Operations are performed at fixed,
predetermined times or within
predetermined time intervals
– Concerned with how long the operating system delays before acknowledging an interrupt, and whether there is sufficient capacity to handle all requests within the required time
Characteristics of Real-Time
Operating Systems
• Responsiveness
– How long, after acknowledgment, it takes
the operating system to service the interrupt
– Includes amount of time to begin execution
of the interrupt
– Includes the amount of time to perform the
interrupt
– Time to restart after the interrupt
– Effect of interrupt nesting
Characteristics of Real-Time
Operating Systems
• User control
– User specifies priority
– Specify paging
– What processes must always reside in main
memory
– Disk algorithms to use
– Rights of processes
Characteristics of Real-Time
Operating Systems
• Reliability
– Degradation of performance may have
catastrophic consequences
• Fail-soft operation
– Ability of a system to fail in such a way as
to preserve as much capability and data as
possible
– Stability
Features of Real-Time
Operating Systems
• Fast process or thread switch
• Small size
• Ability to respond to external interrupts
quickly
• Multitasking with interprocess
communication tools such as
semaphores, signals, and events
Features of Real-Time
Operating Systems
• Use of special sequential files that can accumulate data at a fast rate
• Preemptive scheduling based on priority
• Minimization of intervals during which interrupts are disabled
• Ability to delay tasks for a fixed amount of time
• Special alarms and timeouts
Real-Time Scheduling
• Static table-driven
– Determines at run time when a task begins
execution
• Static priority-driven preemptive
– Traditional priority-driven scheduler is used
• Dynamic planning-based
– Feasibility determined at run time
• Dynamic best effort
– No feasibility analysis is performed
Deadline Scheduling
• Information used
– Ready time – When the task will be ready to run.
We are aware of the task before it is ready to start.
– Starting deadline – When the task needs to be
started.
– Completion deadline – When the task needs to be
completed.
– Processing time – How long it takes to do the task.
We know there is enough time to complete, just
need to start it in time.
– Resource requirements
– Priority
– Subtask structure
Deadline Scheduling
• Real-time applications are not concerned with raw speed but with completing tasks on time
• We know when the process has to complete by, even if we don’t know how long it will take.
• We can assume that the real-time system can complete all its tasks in time.
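A common deadline-scheduling policy consistent with the information above is earliest-deadline-first: always run the ready task whose completion deadline is nearest. A minimal, non-preemptive sketch (the function name and task format are assumptions for the example):

```python
def edf_schedule(tasks):
    """tasks: dict name -> (ready_time, completion_deadline, processing_time).
    Runs tasks to completion, earliest deadline first, and returns the order."""
    time, order = 0, []
    pending = dict(tasks)
    while pending:
        ready = [n for n, (r, d, p) in pending.items() if r <= time]
        if not ready:
            # nothing ready yet: advance the clock to the next ready time
            time = min(r for r, d, p in pending.values())
            continue
        name = min(ready, key=lambda n: pending[n][1])  # earliest deadline wins
        order.append(name)
        time += pending.pop(name)[2]
    return order

print(edf_schedule({"A": (0, 10, 2), "B": (0, 4, 2), "C": (1, 7, 2)}))
# ['B', 'C', 'A'] -- B's deadline (4) is tightest, then C (7), then A (10)
```

Note how this uses exactly the ready time, deadline, and processing time fields listed on the previous slide.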
Rate Monotonic Scheduling
• Assigns priorities to tasks on the basis of
their periods
• Highest-priority task is the one with the
shortest period
• Still need to ensure that lower-priority
tasks don’t get held up too long.
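Rate-monotonic priority assignment is simple enough to sketch directly, along with the classic n(2^(1/n) − 1) utilization bound, which gives a sufficient (not necessary) schedulability test. Function names and the task format are illustrative:

```python
def rm_priorities(tasks):
    """tasks: dict name -> (execution_time, period).
    Returns task names from highest to lowest priority (shortest period first)."""
    return sorted(tasks, key=lambda n: tasks[n][1])

def rm_schedulable(tasks):
    """Sufficient test: total utilization <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks.values())
    return utilization <= n * (2 ** (1 / n) - 1)

tasks = {"slow": (2, 100), "fast": (1, 10)}
print(rm_priorities(tasks))   # ['fast', 'slow'] -- shortest period wins
print(rm_schedulable(tasks))  # True: utilization 0.12 is under the ~0.828 bound
```

A task set can fail this test and still be schedulable; the bound only guarantees the lower-priority tasks won’t be held up too long when it passes.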
Priority Inversion
• Can occur in any priority-based
preemptive scheduling scheme
• Occurs when circumstances within the
system force a higher priority task to
wait for a lower priority task
Unbounded Priority Inversion
• Duration of a priority inversion depends on unpredictable actions of other unrelated tasks
Priority Inheritance
• Lower-priority task inherits the priority of any higher-priority task pending on a resource they share
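The inheritance rule can be shown with a toy model of priority values (no real locking here; `Task` and `block_on` are invented for the illustration):

```python
class Task:
    def __init__(self, name, base_priority):
        self.name, self.base, self.inherited = name, base_priority, None

    def priority(self):
        # effective priority: inherited value while boosted, else the base
        return self.inherited if self.inherited is not None else self.base

def block_on(waiter, holder):
    """A waiter blocks on a resource held by `holder`; boost the holder."""
    if waiter.priority() > holder.priority():
        holder.inherited = waiter.priority()

low, high = Task("low", 1), Task("high", 10)
block_on(high, low)
print(low.priority())   # 10: 'low' now outranks any medium-priority task
low.inherited = None    # resource released: boost ends
print(low.priority())   # 1
```

While boosted, the lock holder cannot be preempted by unrelated medium-priority tasks, which is what bounds the duration of the inversion.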
Linux Scheduling
• Scheduling classes
– SCHED_FIFO: First-in-first-out real-time
threads
– SCHED_RR: Round-robin real-time threads
– SCHED_OTHER: Other, non-real-time
threads
• Within each class multiple priorities may
be used
Non-Real-Time Scheduling
• Linux 2.6 uses a new scheduler, the O(1) scheduler
• Time to select the appropriate process and assign it to a processor is constant
– Regardless of the load on the system or the number of processors
UNIX SVR4 Scheduling
• Highest preference to real-time
processes
• Next-highest to kernel-mode processes
• Lowest preference to other user-mode
processes
UNIX SVR4 Scheduling
• Preemptable static priority scheduler
• Introduction of a set of 160 priority
levels divided into three priority classes
• Insertion of preemption points
SVR4 Priority Classes
SVR4 Priority Classes
• Real time (159 – 100)
– Guaranteed to be selected to run before any
kernel or time-sharing process
– Can preempt kernel and user processes
• Kernel (99 – 60)
– Guaranteed to be selected to run before any
time-sharing process
• Time-shared (59 – 0)
– Lowest priority
SVR4 Dispatch Queues
Windows Scheduling
• Priorities organized into two bands or
classes
– Real time
– Variable
• Priority-driven preemptive scheduler
End of Slides
Scheduling of a
Real-Time Process
Two Tasks
Periodic Task Timing Diagram