
3302MEE – Computer Systems
Operating Systems
The warm fuzzy stuff that makes a PC work…
Title Picture: Microsoft Office 2000 Clipart
Outline
What it’s all about...
History
The Computer System
Hardware, Firmware and Software
An Operating System
Operating System Concepts
Foreground / Background Systems
Critical Code
Sharing Resources
Multitasking
Outline
Operating System Concepts
Context Switching
Kernels
Schedulers
Non-preemptive Kernels
Preemptive Kernels
Reentrant Functions
Round-Robin Scheduling
Task Priorities
Mutual Exclusion
Outline
Operating System Concepts
Disabling Interrupts
Flags
Scheduler Techniques
Semaphores
Deadlock
Synchronization
Event Flags
Outline
Intertask Communications
Message Mailboxes
Message Queues
Interrupts
Interrupt Latency
Interrupt Response
Interrupt Recovery
Real-time Operating Systems
Advantages of Real-time Operating Systems
What it’s all about: Why?
[Diagram: five tasks (Task 1 to Task 5) competing for 1 processor, 1 memory space, and 1 physical set of I/O on a single target.]
[Diagram sequence: without coordination, the five tasks contend for the processor, memory, and I/O, and every task loses out.]
[Diagram sequence: an OS is placed between the five tasks and the hardware, granting each task access to the processor, memory, and I/O in turn.]
History
Operating systems have evolved over the past 40 years,
in phases roughly corresponding to the decades.
•
1940’s and 1950’s:
Computers generally had no OS; the software that ran on them was
entered one bit at a time on rows of mechanical switches.
Pictured: IBM 701
On April 29, 1952, IBM President Thomas
J. Watson, Jr., informed his company's
stockholders at the annual meeting that IBM
was building "the most advanced, most
flexible high-speed computer in the world."
History
•
1940’s and 1950’s:
Eventually machines evolved to use
punch cards; programs could be
entered into the computer via a punch
card reader.
General Motors Research Laboratories
implemented the first OS in the early
1950’s for their IBM 701. It ran a single
job at a time and smoothed transition
between jobs to get maximum
utilization of the system (single-stream
batch processing systems).
Pictured: UNIVAC
History
•
1960’s
The systems of the 1960’s were also batch processing systems however
they began to make better use of the computer’s resources by running
several jobs at once (early stages of multitasking).
Computers of this era contained many peripherals such as card readers and
punchers, printers, and tape and disk storage. Rarely did one job utilize all of
the computer's resources.
Users would submit their job into the load table via punch card or paper
tape. The job would sit there for hours or even days before the computer
would be ready to run it. If an error occurred in the program then the error
would have to be corrected and the job resubmitted.
History
•
1960’s
IBM released OS/360 for its System/360 family of computers.
More advanced operating systems were developed to service many users
at a time. Users were connected to the computer via terminals.
By the end of the decade the OS had made a drastic change from being a
single batch system to a multi-user system.
Turnaround times dropped from hours to seconds, users could compile their
programs and correct syntax errors before submitting them.
History
•
1960’s (cont’d)
IBM developed its VM OS.
Higher-level languages were developed.
The creators of UNIX developed the high-level language C specifically to
write UNIX.
TSS, Multics and CP/CMS incorporated the concept of virtual storage, which
allowed programs to address much larger amounts of memory than was
actually provided in primary storage.
Introduction of system calls.
Late 1960’s: UNIX was developed at Bell Labs.
History
•
1970’s
The systems of the 1970’s were multimode timesharing systems
that supported batch processing, timesharing, and real-time applications.
The TCP/IP standard was devised and communication in local area networks
became practical due to the Ethernet standard.
History
•
1980’s
The decade of the personal computer.
Microprocessors evolved to the point that a user could have a computer on
their desk with the same processing power as the mainframes of the 1970’s.
Computing became distributed.
More focus was placed on user applications (home computing).
The 1980’s introduced the concept of device drivers and library functions.
Saw the development of many proprietary versions of UNIX.
History
•
1990’s and 2000’s
The early 1990’s saw the development and spread of a phenomenon
now known as the Internet.
The major consumer OS of the early 1990’s was MS-DOS coupled with
Windows for Workgroups. The mouse changed the way in which we interacted
with the PC.
The early 1990’s also saw the development of a UNIX-like OS for Intel hardware
(Linux).
The OS is continually evolving as the demands of the consumer and hardware
become greater.
The Computer System
A computer system consists of three major components:
Hardware
Firmware
Software
[Diagram: layered stack, from top to bottom: User Applications, OS, BIOS, Hardware.]
The Computer System
Hardware
The physical, touchable components of the PC. Hardware
provides the raw computing power.
The Computer System
Firmware
User
Applications
The hardware level software, most commonly known as
the BIOS. The BIOS:
Provides a self-check functionality,
OS
Sorts out System memory Map,
Handles Plug-and-Pray functionality,
Installs system based drivers that allow higher level software such
as an OS to function (legacy drivers).
Provides a boot-strap loader,
Provides low-level access to the hardware for the OS.
provides a standard interface to the subsystems of the PC, eg.
ACPI, APM, generic video.
BIOS
Hardware
The Computer System
Software
This encompasses the OS and any user-level
applications.
An Operating System
The Operating System
(OS) provides:
Process Management
Memory Management
Hardware Abstraction
(Device Control)
File Systems Support
Networking
A system call interface
Operating System Concepts
Foreground/Background Systems
Small systems of low complexity are
generally referred to as
foreground/background systems or
super-loops.
The application consists of an infinite loop
that calls modules (functions) to perform
specific functions (background). Interrupt
service routines handle asynchronous
events (foreground).
Foreground is referred to as interrupt-level, while background is commonly
referred to as task-level.
Foreground/Background System
Operating System Concepts
Foreground/Background Systems
Critical operations are performed at the
interrupt-level as these must be dealt with
in a timely manner.
Data provided by an interrupt-level task to a
background task is not processed by that
task until it gets its turn to execute (task-level
response).
Microwave ovens, toys, telephones, etc
utilize this type of OS.
Foreground/Background System
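A foreground/background system can be sketched in C. The names below (the flag, the ISR, the counter) are illustrative, not from the slides: an ISR (foreground) sets a flag, and the background super-loop polls the flag and services it.

```c
#include <assert.h>
#include <stdbool.h>

/* Shared state between foreground (ISR) and background (super-loop).
 * 'volatile' because the ISR modifies it asynchronously. */
static volatile bool data_ready = false;
static volatile int  sample     = 0;
static int processed_count     = 0;

/* Foreground: interrupt service routine (called manually here to
 * simulate a hardware interrupt). */
void adc_isr(int value)
{
    sample = value;
    data_ready = true;
}

/* Background: one pass of the super-loop body. On a real target this
 * would sit inside for (;;) { ... } in main(). */
void background_pass(void)
{
    if (data_ready) {
        data_ready = false;
        processed_count++;   /* process 'sample' here */
    }
}

int get_processed_count(void) { return processed_count; }
```

On a real target the background loop never exits; this host-side sketch simulates a single interrupt followed by loop passes.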
Operating System Concepts
Terms
Critical Sections (of Code)
Also known as critical region.
Once a critical section begins it cannot be interrupted; it must
be completed.
Disabling interrupts is one sure way of ensuring that the critical
code completes uninterrupted. Once the code is complete
interrupts can be enabled once more.
Critical Sections also relate to areas of code in which
commonly shared data is modified, and the code must
complete the modification prior to any other process using it.
Operating System Concepts
Terms
Resources
A resource is any item that can be used by a task.
These include memory, CPU, I/O (peripheral).
Operating System Concepts
Terms
Shared Resources
A shared resource is a resource that can be used by
more than one task.
The task must gain exclusive access to the resource in
order to prevent data corruption (mutual exclusion).
Operating System Concepts
Terms
Multitasking
Multitasking refers to the scheduling of multiple tasks
and switching the CPU between several tasks.
A single CPU is seen to switch its attention between
several sequential tasks.
Multitasking is like foreground/background with multiple
backgrounds.
Multitasking is an efficient use of the CPU and
promotes a modular construction of applications.
Operating System Concepts
Tasks
Tasks
Also known as threads.
A task believes it has the CPU all to
itself.
Each task has a priority, its own set of
CPU registers, and its own stack.
Tasks
Operating System Concepts
Tasks
Each task is effectively an infinite loop
that can be in one of five states:
Dormant: resides in memory but is not yet available to the kernel.
Ready: can execute, however its priority is lower than the task currently running.
Running: has control of the CPU.
Waiting (for an event): requires an event to take place first.
ISR (Interrupted): preempted while an ISR services an event.
Tasks
Operating System Concepts
Task States
Operating System Concepts
Context Switches
Context Switches
When the CPU decides to run another task it stores the current
CPU context in the current task's context storage area.
After the operation is complete the new task's context is restored
from its storage area onto the CPU and the new task resumes
execution.
This process is commonly referred to as context switching or task
switching.
Context switching adds overhead to the application. The time
required to perform a context switch depends on the number of
registers within the CPU as they all must be saved and restored.
Operating System Concepts
HS12 Context Switch Example
An example illustrating a context switch on the HS12.
[Diagram sequence, showing the CPU12 registers D, X, Y, PC, CCR and SP:
(1) Context A is running when a timer overflow interrupt (TOI) occurs; interrupts are disabled.
(2) The CPU fetches the TOI ISR address from the interrupt vector table.
(3) The CPU automatically stacks CCR, D, X, Y and PC, and the TOI ISR begins running.
(4) Context Save: the ISR saves task A's stacked context, together with SP, into task A's context storage area.
(5) Context Find: the kernel locates the saved context of the next task, B.
(6) Context Restore: task B's context is copied from its storage area back onto the stack and into the CPU registers.
(7) The ISR returns; Context B is now running and Context A is halted.]
Operating System Concepts
Kernels
Kernels
The kernel is part of a multitasking system responsible for management of
tasks and communication between tasks.
The fundamental service provided by the kernel is context switching.
The kernel adds overhead to the system as it also needs to be executed by the
processor.
The amount of overhead depends on how often you use the services provided
by the kernel.
Operating System Concepts
Kernels
Kernels
The kernel requires space for:
Its own code,
Its own data structures, and
A task structure for each task.
The kernel introduces a strain on the memory resources of the system.
Operating System Concepts
Schedulers
Scheduler
The scheduler is also known as the dispatcher.
Responsible for deciding which task runs next.
Each task is assigned a priority based on its importance.
Control of the CPU is always given to the task with the highest priority that is
ready to run.
When the highest priority task gets the CPU depends on whether the
scheduler is preemptive or non-preemptive.
Operating System Concepts
Schedulers
Non-Preemptive Kernel
Also known as cooperative multitasking.
Non-preemptive kernels require that each task gives up control of the CPU so
another task can execute.
To maintain the illusion of concurrency tasks must give up control of the CPU
on a regular basis.
Asynchronous events are still handled by ISRs.
Operating System Concepts
Schedulers
Non-Preemptive Kernels (cont’d)
An ISR can make a higher priority task ready to run; however, the ISR returns to
the interrupted task, which runs to completion before the higher priority task runs.
Interrupt latency is generally low for non-preemptive kernels.
Also non-preemptive kernels can use non-reentrant functions.
Operating System Concepts
Schedulers
Non-Preemptive Kernels (cont’d)
Non-reentrant functions can be used by each task without the fear of data
corruption, as each task must run to completion before giving up the CPU.
The task-level response of a non-preemptive kernel is much lower than that of a
foreground/background system, as the task-level response is defined by the
time taken to execute the longest task.
There is no need to guard shared data (using semaphores) as only one task
has access to it at any given time.
Operating System Concepts
Schedulers
(1) Task is executing, but is interrupted.
(2) If interrupts are enabled, the CPU vectors to the
ISR.
(3) The ISR handles the event and makes a higher
priority task ready to run.
(4) Upon completion of the ISR the CPU returns
control to the interrupted task.
(5) The task continues from the point it was
interrupted.
(6) When the task completes, it calls a service that
the kernel provides to relinquish control of the
CPU to another task.
(7) The kernel sees that a higher priority task is
ready to run and performs a context switch to
the higher priority task so it can run.
Non-preemptive Kernel
!
The kernel doesn’t know how the task
became ready, just that it now is.
Operating System Concepts
Schedulers
Non-preemptive Kernel (cont’d)
The biggest drawback of a non-preemptive kernel is its
responsiveness.
A higher priority task that has been made ready to run has to wait a
long time for the lower priority task to complete before it can get
control of the CPU.
A non-preemptive kernel allows each task to run until it voluntarily
gives up control of the CPU.
An interrupt (ISR) preempts a task; once the ISR completes, control returns to
the preempted task, which runs to completion before the newly readied task
runs.
Operating System Concepts
Schedulers
Preemptive Kernels
Used when system responsiveness is extremely important.
Real-time kernels fall into this category.
The highest priority task that is ready to run is always given control of the CPU.
When a higher priority task is ready to run the current running task is
preempted (suspended) and the higher priority task is run to completion
(unless interrupted by an even higher priority task) before returning control to
the preempted task.
When an ISR makes a higher priority task ready, the higher priority
task is immediately given control of the CPU and the interrupted
task is suspended.
Operating System Concepts
Schedulers
(1) A low priority task is running and is interrupted.
(2) The CPU services the ISR.
(3) The ISR handles the event and makes a higher
priority task ready to run.
(4) Upon completion of the ISR a kernel service is
invoked that performs a context switch to the
higher priority task, instead of returning to the
suspended task.
(5) The higher priority task is run to completion.
(6) On completion of the higher priority task the
kernel realizes that a lower priority task has
been suspended and therefore performs a
context switch to continue running the
suspended task.
Preemptive Kernel
Operating System Concepts
Schedulers
Preemptive Kernels (cont’d)
Using a preemptive kernel you have more control over when an actual task is
executed.
The task-level response time is therefore minimized.
Application code running under a preemptive kernel should not use
non-reentrant code, unless of course mutual-exclusion mechanisms are used.
Otherwise data corruption will occur if a higher priority task uses the same
code that is being used by a lower priority task.
A preemptive kernel will always execute the highest priority task, suspending
the lower priority task until the higher priority task has completed.
Operating System Concepts
Reentrant Functions
Reentrant Functions
A reentrant function can be used by more than one task at a time without the
fear of data corruption.
A reentrant function can be interrupted at any time and continued later with no
effects on its outcome.
Reentrant functions use only local variables or protected global variables.
Operating System Concepts
Reentrant Functions
Reentrant Function

void strcpy (char *dest, char *src)
{
    while ((*dest++ = *src++))
        ;   /* copies up to and including the NUL terminator */
}

Because this function keeps its parameters and all working state on the stack,
it can be run simultaneously by several tasks without fear of data corruption.
Operating System Concepts
Reentrant Functions
Non-reentrant Function
•
int Temp;
•
...
•
void swap (int *x, int *y){
•
Temp=*x;
•
*x=*y;
•
*y=Temp;
•
}
Since this function uses a unprotected global variable it will result in
data corruption if this function is executed simultaneously.
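For contrast, a reentrant version of swap (my illustration, not from the slides) simply keeps the temporary in a local variable, so each task gets its own copy on its own stack:

```c
#include <assert.h>

/* Reentrant version of swap: the temporary lives on the stack of
 * whichever task calls it, so concurrent calls cannot corrupt it. */
void swap_reentrant(int *x, int *y)
{
    int temp = *x;   /* local variable, not a shared global */
    *x = *y;
    *y = temp;
}
```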
Operating System Concepts
Reentrant Functions
(1) When swap is interrupted
Temp contains 1.
(2) An ISR readies a higher
priority task.
(3) The kernel performs a context
switch to the higher priority
task, which also uses the swap
function.
(4) The high priority task finishes
leaving the value of Temp as
3.
(5) The lower priority task runs
using the new value of Temp
not the value it originally
stored in Temp.
Non-reentrant
Function
Operating System Concepts
Round Robin Scheduling
Round Robin Scheduling
Also known as time-slicing.
When two or more tasks have the same priority, the
kernel allows one task to run for a predetermined
amount of time (quantum) and then selects another
task of the same priority to run.
[Diagram sequence: Task 1 to Task 5, all of the same priority, each receiving the CPU in turn for one quantum.]
Operating System Concepts
Round Robin Scheduling
Round Robin Scheduling
The kernel gives control to the next task if:
The current task has no work to do during this time slice or
The current task completes before the end of the time slice or
The time slice ends.
Some Operating Systems do not support round-robin
scheduling as each task must have a unique priority.
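Picking the next same-priority task can be sketched as a circular scan of a ready bitmap (a toy model, not any particular kernel's implementation):

```c
#include <assert.h>

#define NUM_TASKS 5

/* Pick the next ready task after 'current', scanning circularly.
 * 'ready_mask' has bit i set when task i is ready to run.
 * Returns -1 if no task is ready. All tasks are assumed to share
 * one priority level, as round-robin requires. */
int next_task(int current, unsigned ready_mask)
{
    for (int step = 1; step <= NUM_TASKS; step++) {
        int candidate = (current + step) % NUM_TASKS;
        if (ready_mask & (1u << candidate))
            return candidate;
    }
    return -1;   /* nothing ready: an idle task would run here */
}
```

A real kernel would call this (or an equivalent table lookup) at the end of each quantum, or early if the current task blocks.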
Operating System Concepts
Priorities
Task Priorities
A priority is assigned to each task: the higher the priority, the more important
the task. With most kernels you are responsible for assigning the priority
values.
Static Priorities
Static priorities are assigned to tasks at compile time and do not change during
the running of the application.
Operating System Concepts
Priorities
Dynamic Priorities
Task priorities are referred to as dynamic if the priority can be changed during
the running of the application.
Priority Inversion
Priority inversion occurs when a low priority task acquires a
semaphore guarding a shared resource and is then preempted by a
higher priority task that also wants access to that resource. The
higher priority task cannot gain access to the resource until the
lower priority task releases the semaphore.
Operating System Concepts
Priority Inversion
(1) Task 1 and 2 are waiting for an
event to take place. Task 3 is
running.
(2) Task 3 acquires a semaphore.
(3) Task 3 performs some operation
with the resource.
(4) The event task 1 was waiting for
takes place. Task 3 is suspended
and task 1 gets control of the
CPU.
(5) Task 1 executes.
(6) Task 1 attempts to acquire
semaphore for the same
resource.
Priority Inversion Example
Operating System Concepts
Priority Inversion
(7) Task 1 is placed on a list of tasks
waiting for a semaphore.
(8) Task 3 resumes.
(9) The event Task 2 was waiting for
occurs, CPU is given to Task 2.
(10) Task 2 completes and returns
control back to Task 3.
(11) Task 3 finishes with the resource
and releases the semaphore.
(12) At this point the kernel gives
control to Task 1.
Priority Inversion Example
Operating System Concepts
Priority Inversion / Priority Inheritance
Priority Inversion (cont’d)
Due to priority inversion, the priority of Task 1 was effectively reduced to
the priority of Task 3.
This could have been avoided by raising the priority of Task 3
to that of Task 1 while Task 3 held the semaphore.
Altering a task's priority takes time, and the kernel needs to be
able to alter task priorities at runtime.
Altering a task's priority to prevent this condition from
happening is commonly referred to as priority inheritance.
Operating System Concepts
Priority Inheritance Example
Priority Inheritance Example
Operating System Concepts
Assigning Task Priorities
In most systems, not all tasks are critical.
Non-critical tasks should be given low priority.
A method of determining the priorities of each task is rate monotonic
scheduling (RMS).
RMS assigns tasks priorities based on how often the task executes.
Tasks with the highest rate of execution have the highest priority.
Operating System Concepts
Assigning Task Priorities
RMS makes a number of assumptions:
All tasks are periodic (they occur at regular intervals).
Tasks do not synchronize with one another, share resources, or exchange
data.
The CPU must execute the highest priority task that is ready to run
(preemptive scheduling).
Operating System Concepts
Assigning Task Priorities
Given a set of n tasks that are assigned RMS priorities, the basic
RMS theorem states that all task hard real-time deadlines are met
if the following inequality is verified:

    E1/T1 + E2/T2 + ... + En/Tn <= n(2^(1/n) - 1)

where Ei is the execution time of task i, Ti is the execution period of task i, and n is
the number of tasks.
As the number of tasks increase the amount of usable CPU time
decreases, due to kernel overhead.
Operating System Concepts
Mutual Exclusion
The easiest way for tasks to communicate with each other is through
shared data structures.
Task intercommunication is easy when the tasks exist in the same
address space and can reference elements such as global variables,
pointers, linked lists, etc.
Even though sharing data simplifies the exchange of data, care must
be taken to ensure that each task has exclusive access to the data to
avoid contention and data corruption.
Operating System Concepts
Mutual Exclusion
Common methods of gaining exclusive access to shared resources
are:
Disabling and enabling interrupts,
Flags,
Scheduling Techniques, and
Semaphores
Operating System Concepts
Mutual Exclusion: Disabling and Enabling Interrupts
The easiest and fastest way to gain exclusive access to a shared
resource is by disabling and enabling interrupts.
While interrupts are disabled it is impossible for the kernel to
preempt the current task.
Care must be taken to ensure interrupts are not disabled for too long,
as doing so will affect the interrupt latency of your system.
You should use this method only when accessing a few variables.
This method is the only way of sharing data with an ISR.
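A sketch of the technique in C. The DISABLE/ENABLE macros are placeholders: on a real HS12 target they would expand to the SEI and CLI instructions; here they are stubbed so the sketch compiles on a host.

```c
#include <assert.h>

/* Placeholder interrupt-control macros: on a real HS12 target these
 * would be asm("sei") and asm("cli"); stubbed for illustration. */
static int interrupts_enabled = 1;
#define DISABLE_INTERRUPTS() (interrupts_enabled = 0)
#define ENABLE_INTERRUPTS()  (interrupts_enabled = 1)

/* A multi-byte value shared with an ISR: reading it takes several
 * instructions on an 8/16-bit CPU, so an interrupt could land
 * halfway through and leave the reader with a torn value. */
static volatile long shared_counter = 0;

/* Read the shared value with interrupts disabled; keep the
 * critical section as short as possible. */
long read_shared_counter(void)
{
    long copy;
    DISABLE_INTERRUPTS();
    copy = shared_counter;   /* critical section */
    ENABLE_INTERRUPTS();
    return copy;
}

void isr_increment(void) { shared_counter++; }   /* simulated ISR */
```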
Operating System Concepts
Mutual Exclusion: Flags
Flags are also known as Test-and-Set operations.
Used in situations where a kernel is not being used.
Two tasks could share a resource by first checking the
condition of a global flag: if the flag is set the resource is in use,
otherwise the resource is free to be used by the task.
The task would then set the flag to prevent any other task from
accessing the same shared resource.
The flag must either be tested and set atomically by the processor
itself, or interrupts must be disabled when testing and then setting the flag.
This ensures that no other task sets the flag before the
test-and-set operation completes.
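The test-and-set idea can be sketched as follows (the interrupt-control macros are stubbed placeholders, and all names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Global lock flag guarding a shared resource. */
static volatile bool resource_in_use = false;

/* Stand-ins for disabling/enabling interrupts on a real target. */
#define DISABLE_INTERRUPTS() ((void)0)
#define ENABLE_INTERRUPTS()  ((void)0)

/* Try to claim the resource: test the flag and set it in one
 * uninterruptible step. Returns true if the resource was acquired. */
bool acquire_resource(void)
{
    bool acquired = false;
    DISABLE_INTERRUPTS();        /* make the test-and-set atomic */
    if (!resource_in_use) {
        resource_in_use = true;  /* claim it */
        acquired = true;
    }
    ENABLE_INTERRUPTS();
    return acquired;
}

void release_resource(void)
{
    resource_in_use = false;
}
```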
Operating System Concepts
Mutual Exclusion: Scheduler Techniques
If your task is not sharing data with an ISR then you can disable and
enable scheduling.
By disabling scheduling you do not disable interrupts, so an ISR can still
interrupt your task.
However, on completion of the ISR, control of the CPU is returned to
the interrupted task even if a higher priority task is now ready to run.
Because the kernel returns to the interrupted task, a preemptive kernel
behaves like a non-preemptive kernel while scheduling is disabled.
When scheduling is enabled once more, control of the CPU will be given to
a higher priority task if one has been made ready while scheduling was
disabled.
Operating System Concepts
Mutual Exclusion: Semaphores
Semaphores were invented during the mid-1960s.
Semaphores are a protocol mechanism offered by most multitasking
kernels.
Semaphores are used to:
Control access to a shared resource (mutual exclusion),
Signal the occurrence of an event, and
Allow two tasks to synchronize their activities.
Operating System Concepts
Mutual Exclusion: Semaphores
A semaphore is a key that a task acquires in order to continue
execution. If the semaphore is already in use then the requesting task
suspends until the semaphore is free.
There exists two types of semaphores:
Binary Semaphores
•
Binary semaphores can only take the value 0 or 1.
Counting Semaphores
•
Counting semaphores allow any value between 0 and 255, 65,535, or more, depending on how many bits the kernel uses for the count.
Operating System Concepts
Mutual Exclusion: Semaphores
Generally only three operations can be performed on a semaphore:
INITIALIZE (CREATE),
WAIT (PEND) and,
SIGNAL (POST)
A task desiring the semaphore performs a WAIT operation. If the
semaphore is available (the semaphore value is greater than 0) then
the semaphore is decremented and the task continues execution.
Operating System Concepts
Mutual Exclusion: Semaphores
If the semaphore's value was 0 when the WAIT operation was
performed, then the task is placed on a waiting list.
Most kernels allow you to specify a timeout; once it is reached,
the requesting task is made ready to run and an error code is
returned to the caller.
A Task releases the semaphore by performing a SIGNAL operation. If
no task is waiting for the semaphore, the semaphore value is simply
incremented.
If a Task is waiting then the semaphore is not incremented, it is
simply given to one of the waiting tasks.
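The WAIT/SIGNAL semantics can be sketched with a toy counting semaphore. Real kernels suspend the caller and maintain a waiting list; here WAIT is non-blocking and simply reports whether the caller would have had to wait (an illustration, not a kernel API):

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    int count;    /* semaphore value                */
    int waiting;  /* tasks parked on the wait list  */
} semaphore_t;

void sem_init(semaphore_t *s, int initial)
{
    s->count = initial;
    s->waiting = 0;
}

/* WAIT (PEND): if the value is greater than 0, decrement it and
 * proceed; otherwise the caller joins the waiting list.
 * Returns true if the caller may proceed. */
bool sem_wait(semaphore_t *s)
{
    if (s->count > 0) {
        s->count--;
        return true;
    }
    s->waiting++;     /* a real kernel would suspend the task here */
    return false;
}

/* SIGNAL (POST): hand the semaphore to a waiting task if there is
 * one (the count is NOT incremented), otherwise increment the count. */
void sem_signal(semaphore_t *s)
{
    if (s->waiting > 0)
        s->waiting--; /* a real kernel would ready a waiting task */
    else
        s->count++;
}
```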
Operating System Concepts
Mutual Exclusion: Semaphores
Depending on the kernel, the task that receives the semaphore is
either:
The highest priority task waiting for the semaphore or
The first task that requested the semaphore (first in, first out).
Generally kernels allow you to choose which scheme to adopt.
If the readied task has a higher priority than the one releasing the
semaphore, a context switch occurs (with preemptive kernels) and
the higher priority task resumes execution.
The lower priority task is suspended until the higher priority task
completes.
Operating System Concepts
Mutual Exclusion: Semaphores
Semaphores are especially useful when tasks share I/O peripherals.
Using the printer as an example, can you imagine the mess that would occur if two tasks
tried to print at the same time?
In this case using a binary semaphore would ensure that only one task had access to the
printer at any given time.
Each task must be aware of the existence of the semaphore.
Semaphores with shared resources.
Operating System Concepts
Mutual Exclusion: Semaphores
In some cases it is better to encapsulate the semaphore.
Each task wouldn’t need to know it is acquiring a semaphore when accessing the resource.
The function CommSendCmd() would take care of acquiring and releasing the semaphore.
Therefore any task that wants to communicate to this device would have to do so via this common system call.
If a task calls CommSendCmd() while another task is using it, the hidden
semaphore will suspend the second task.
Encapsulated Semaphores.
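The encapsulation can be sketched like this. CommSendCmd() is the slides' example name; the hidden semaphore is reduced to a plain flag, and the "send" is a stub, so the sketch runs on a host:

```c
#include <assert.h>
#include <string.h>

/* Hidden binary semaphore, initialised to 1 (device free). */
static int  comm_sem = 1;
static char last_cmd[32];
static int  suspended_calls = 0;

/* Encapsulated access to the comms device: callers never touch the
 * semaphore directly. Returns 0 on success, -1 if the caller would
 * have been suspended waiting for the semaphore. */
int CommSendCmd(const char *cmd)
{
    if (comm_sem == 0) {        /* device busy */
        suspended_calls++;      /* real kernel: suspend the caller */
        return -1;
    }
    comm_sem = 0;               /* acquire the hidden semaphore */
    strncpy(last_cmd, cmd, sizeof last_cmd - 1);  /* "send" command */
    comm_sem = 1;               /* release the hidden semaphore */
    return 0;
}
```

Every task that talks to the device goes through this one call, so mutual exclusion is enforced without each task needing to know a semaphore exists.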
Operating System Concepts
Operating System Concepts
Mutual Exclusion: Semaphores
A counting semaphore is used when a resource can be used by more than one task at the
same time.
A counting semaphore can be used in the management of a buffer pool.
For example, suppose a buffer pool originally contains 10 buffers.
A buffer is acquired by calling the function BufReq(), and released by calling the function
BufRel().
Each time BufReq() is called the semaphore is acquired, and hence decremented.
Each time BufRel() is called the semaphore is released, and hence incremented.
Counting Semaphores
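A buffer-pool manager along these lines might look as follows (a host-side sketch: the counting semaphore is a plain counter, and blocking is reduced to returning NULL):

```c
#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 10
#define BUF_BYTES 64

static char buffers[POOL_SIZE][BUF_BYTES];
static int  in_use[POOL_SIZE];
static int  buf_sem = POOL_SIZE;  /* counting semaphore: free buffers */

/* BufReq: acquire the counting semaphore, then take a free buffer.
 * Returns NULL when the pool is exhausted (a real kernel would
 * suspend the caller until a buffer is released). */
char *BufReq(void)
{
    if (buf_sem == 0)
        return NULL;
    buf_sem--;                    /* WAIT on the semaphore */
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return buffers[i];
        }
    }
    return NULL;                  /* unreachable if counts agree */
}

/* BufRel: return a buffer to the pool and SIGNAL the semaphore. */
void BufRel(char *buf)
{
    int i = (int)((buf - &buffers[0][0]) / BUF_BYTES);
    in_use[i] = 0;
    buf_sem++;
}
```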
Operating System Concepts
Deadlock
Deadlock
Also known as deadly embrace.
A deadlock occurs when two tasks are unknowingly waiting for resources held
by the other.
For example
Task 1 is using resource 1 and waiting to get hold of resource 2.
Task 2 is using resource 2 and waiting to use resource 1.
Neither of these two tasks can continue; they are deadlocked.
Operating System Concepts
Deadlock
Deadlock (cont’d)
The easiest way to ensure this doesn’t happen is to have each task:
Acquire all the resources it requires before proceeding,
Acquire the resources in the same order, and
Release the resources in the reverse order.
Most kernels allow you to specify a timeout when attempting to acquire a
semaphore.
Deadlocks generally occur in large multitasking systems and are unusual in
embedded systems.
Operating System Concepts
Synchronisation
A task can be synchronized with an ISR or another Task (when no data is
being transferred) by the use of a semaphore.
The semaphore is used to flag the occurrence of an event instead of
acquiring exclusive access to a resource.
Semaphores as Flags (Unilateral Rendezvous).
Operating System Concepts
Synchronisation
When used as a synchronization mechanism, the semaphore is initialized to
‘0’ (unilateral rendezvous).
A task initiates an I/O operation and waits for the semaphore. Once the
operation is complete an ISR or other task can release the semaphore
allowing the original task to continue.
Semaphores as Flags (Unilateral Rendezvous).
Operating System Concepts
Synchronisation
If the kernel supports counting semaphores, the semaphore can then
accumulate events that have not yet occurred.
More than one task can be waiting for an event to occur.
The kernel signals the occurrence of the event either to the highest priority
task or the first task waiting for the event.
Task Synchronization using Semaphores (Bilateral Rendezvous).
Operating System Concepts
Synchronisation
Depending on the application more than one ISR or task can signal the
occurrence of the event.
Two tasks can synchronize their activities by using two semaphores, a
procedure commonly referred to as a bilateral rendezvous.
A bilateral rendezvous differs from a unilateral one in that both tasks must
synchronize with each other before continuing.
Task Synchronization using Semaphores (Bilateral Rendezvous).
Operating System Concepts
Event Flags
An event flag is used when a task needs to synchronize with the
occurrence of multiple events.
In disjunctive synchronization, the task is resumed when any one
of the events takes place (OR); in conjunctive synchronization, the
task is resumed only when all of the events have taken place (AND).
Synchronization using
Event Flags.
Operating System Concepts
Event Flags
Common events can be used to
synchronize multiple tasks.
Events are generally grouped; depending on the kernel, each group
consists of 8, 16, or 32 events, each
represented by a bit.
Tasks or ISRs can set or clear any event
within a group. A task is resumed once
all the events it requires have occurred.
Kernels provide services to:
set event flags,
clear event flags, and
wait for event flags.
Event Flags.
Operating System Concepts
Intertask Communications
There are times in which a task or ISR needs to communicate
information to another task.
There are two possible ways of achieving this:
Using Global variables, or
Sending Messages.
If global variables are used, care must be taken to ensure each task
has exclusive access to the variable.
Operating System Concepts
Intertask Communications
If an ISR wants exclusive access to a global variable, it must ensure that
interrupts are disabled while the variable is accessed.
If a task requires exclusive access to a global variable, it must either
disable interrupts or use a semaphore.
Note that data can only be conveyed to an ISR using global variables.
A task is not aware that a global variable has been changed by an ISR
unless a semaphore is used.
This limitation can be overcome using message mailboxes or message queues.
Operating System Concepts
Intertask Communications: Message Mailboxes
A message is generally a pointer-sized variable, sent and received through kernel services.
The kernel provides a service that enables a message (pointer)
to be deposited into a mailbox by either an ISR or another Task.
Similarly the kernel provides a service that enables one or more
tasks to receive the message through a mailbox.
Message Mailbox.
Operating System Concepts
Intertask Communications: Message Mailboxes
Both the sender and receiver must agree on what the pointer actually
points to.
Each mailbox has a waiting list associated with it. A task desiring a
message from an empty mailbox is suspended and placed on the
waiting list until the mailbox is no longer empty.
As with semaphores, a timeout can be specified; once it is exceeded, the
requesting task resumes and receives an error code describing what
went wrong.
Message Mailbox.
Operating System Concepts
Intertask Communications: Message Mailboxes
When one or more tasks are waiting for a message to
appear in a mailbox, either the message is given to
the highest priority task or the first task that requested
the message.
A message mailbox can simulate binary semaphores.
Message Mailbox.
Operating System Concepts
Intertask Communications: Message Mailboxes
Kernels typically provide the following mailbox services:
Initialize the contents of a mailbox.
Deposit a message into the mailbox (POST).
Wait for a message to appear (PEND).
Get a message from the mailbox (ACCEPT).
Message Mailbox.
Operating System Concepts
Intertask Communications: Message Queues
A message queue is used to send multiple messages to a task.
A message queue can be thought of as an array of mailboxes.
As with mailboxes, tasks or ISRs can deposit a message into a
message queue via a kernel service.
Also one or more tasks can read messages posted to a message
queue.
Operating System Concepts
Intertask Communications: Message Queues
Like mailboxes, both the sender and receiver must agree on what the
pointer is pointing to.
Generally the first message inserted into the queue is the first
message extracted from the queue (FIFO).
Some kernels also allow you to extract messages in a stack-like fashion (Last-In-First-Out).
Also the message queue has a waiting list and timeout associated
with it.
Operating System Concepts
Intertask Communications: Message Queues
A kernel usually provides the following message queue services:
Initialize the queue.
Deposit a message onto the queue (POST).
Wait for a message to be deposited onto the queue (PEND).
Get a message from the queue (ACCEPT).
Message Queue.
Operating System Concepts
Interrupts
Interrupts
Interrupts are a hardware mechanism used to inform the CPU of an
asynchronous event.
When an interrupt is triggered the entire CPU context is saved on the stack and
restored once the interrupt has been serviced.
Nested Interrupts.
Operating System Concepts
Interrupts
Interrupts
On completion of an ISR, CPU control is returned to:
A background task for a foreground/background system,
The interrupted task for non-preemptive kernel, or
The highest priority task for a preemptive kernel.
Nested Interrupts.
Operating System Concepts
Interrupts
Interrupts
Using interrupts allows the CPU to deal with events once they
occur instead of wasting valuable CPU time polling registers.
CPUs allow you to disable and enable interrupts through the use
of specialized commands.
In a real-time system, interrupts should be disabled for as short a
time as possible.
Many CPUs allow you to use nested interrupts.
Operating System Concepts
Interrupts: Interrupt Latency, Response and Recovery
Foreground / Background System
Operating System Concepts
Interrupts: Interrupt Latency, Response and Recovery
Non-preemptive Kernel
Operating System Concepts
Interrupts: Interrupt Latency, Response and Recovery
Preemptive Kernel
Operating System Concepts
Interrupts: ISR Processing Time
Even though ISRs should be made as short as possible, there
are no hard limits on how long an ISR can be.
If the ISR is the most important code that needs to run, it can
be as long as it needs to be, provided it can execute completely
between interrupts.
Generally an interrupt need only signal that an event has
taken place so that a task can then process the data.
However, if processing the data within the ISR takes less time
than posting to a mailbox or semaphore, it should be done within
the ISR, after interrupts have been re-enabled.
Operating System Concepts
Interrupts: Nonmaskable Interrupts
Sometimes an interrupt needs to be serviced as quickly as possible,
without the interrupt latency imposed by a kernel. In these situations
using the non-maskable interrupt (NMI) may be advantageous.
Because an NMI cannot be disabled, its interrupt latency, response, and
recovery are minimal:
Interrupt latency = time to execute the longest instruction + time to
initiate the NMI ISR
Interrupt response = interrupt latency + time to save the CPU's context
Interrupt recovery = time to restore the CPU's context + time to
execute the return-from-interrupt (RTI) instruction
Operating System Concepts
Clock Tick
A clock tick is a special interrupt that occurs periodically.
It can be seen as the system's heartbeat.
The clock tick interrupt allows the kernel to delay a task
by a predefined amount of time and to provide timeouts for
semaphores and message queues.
All kernels allow tasks to be delayed by a number of clock
ticks. However, a delay resolution of one clock tick does not
mean its accuracy is one clock tick.
Real-time Operating Systems
“Real-time Systems are characterized by the severe
consequences that result if logical as well as timing
correctness of the system are not met.” [1]
Real-time Operating Systems
There are two types of real-time systems:
Soft: tasks run as quickly as possible and perform
correctly, but need not finish on time.
Hard: tasks must both run correctly and finish on time.
Most real-time systems are a combination of both soft
and hard, and are almost always embedded.
Real-time Operating Systems
The Advantages and Disadvantages
Functionality can be added to your system without the need for major
system rewrites.
If additional low-priority tasks are added to your system, they will not
affect the performance of a preemptive RTOS.
Using an RTOS simplifies system development, as the application is broken
into smaller, simpler tasks.
However, using an RTOS greatly increases the amount of RAM
required by your system.
Also, there is generally a cost associated with using an RTOS, ranging
from USD $5 to USD $500 per seat.
References
[1] Labrosse, J. J., "MicroC/OS-II: The Real-Time Kernel", CMP Books, 2003.
[2] Deitel, H. M., "Operating Systems", Addison-Wesley, 1990.
[3] Simon, D. E., "An Embedded Software Primer", Addison-Wesley, 1999.