Transcript RTOSx

Real Time Operating System
But first: what is a real-time system (RTS)?
• A system is said to be real-time if it is required to complete its work and deliver its services on time.
• An RTS is a system where the correctness of computing depends not only on the logical result of the computation but also on the time at which the result is delivered.
• In practice it is difficult and costly to guarantee this for every deadline, so real-time systems are classified into the following types.
Classification of RTS
Real-time systems are classified into three types: Hard real-time, Firm real-time, and Soft real-time.
Hard real-time:
• Missing an individual deadline results in catastrophic failure of the system, which can also cause great financial loss.
• Examples of hard real-time systems:
 Air traffic control
 Nuclear power plant control
Firm real-time:
• Missing a deadline results in an unacceptable reduction in quality. Technically there is no difference from hard real-time, but economically the disaster risk is limited.
• Examples of firm real-time systems:
 Failed ignition in an automobile
 Failure to open a safe
Soft real-time:
• A deadline may occasionally be missed, and the system can recover from it. The reduction in system quality and performance stays at an acceptable level.
• Examples of soft real-time systems:
 Multimedia transmission and reception
 Networking and telecom (mobile) networks
 Websites and services
 Computer games
What is an RTOS?
REAL-TIME OPERATING SYSTEM (RTOS):
• “The ability of the operating system to provide a required level of service in a bounded response time.”
Or
• A real-time operating system (RTOS) is a program that schedules execution in a timely manner, manages system resources, and provides a consistent basis for developing application code.
RTOS Concepts
What is an RTOS?
• Multiple events are handled by a single processor.
• Events may occur simultaneously, so the processor must handle multiple, often competing events.
• There is a wide range of RTOS designs, from simple polling through multiple interrupt-driven systems.
• Each system activity is designated as a task.
• An RTOS is a multitasking system in which multiple tasks run concurrently: the system shifts from task to task, and must remember the key registers of each task, called its context.
RTOS Concepts (contd.)
• The RTOS is responsible for all activities related to a task:
• scheduling and dispatching
• inter-task communication
• memory system management
• input/output system management
• timing
• error management
• message management
Misconceptions about RTOS
a) An RTOS must be fast
• False: an RTOS is judged by its deterministic behavior, not by its processing speed.
• The ability of an RTOS to respond to events within a deadline does not imply that it is fast.
b) An RTOS introduces a considerable amount of overhead on the CPU
• An RTOS typically requires only between 1% and 4% of CPU time.
c) All RTOSes are the same
• RTOSes are generally designed for the 3 types of real-time systems (i.e. hard, firm and soft).
Features of RTOS
1. Multitasking and Pre-emptibility:
• To support multiple tasks in real-time applications, an RTOS
must be multi-tasked and pre-emptible.
• The scheduler should be able to preempt any task in the
system and allocate the resource to the task that needs it
most even at peak load.
2. Task Priority
• Preemption requires the capability to identify the task that needs a resource the most and to give it control so that it can obtain the resource.
• In an RTOS, this capability is achieved by assigning each task an appropriate priority level.
Contd…
3. Reliable and Sufficient Inter Task Communication
Mechanism:
•
For multiple tasks to communicate in a timely manner and
to ensure data integrity among each other, reliable and
sufficient inter-task communication and synchronization
mechanisms are required.
4. Priority Inheritance
• Priority inheritance lets a lower-priority task that holds a resource temporarily inherit the priority of a higher-priority task blocked on that resource, bounding priority inversion.
• In addition, to allow applications with stringent priority requirements to be implemented, an RTOS must have a sufficient number of priority levels when using priority scheduling.
Contd…
5. Predefined Short Latencies
An RTOS needs accurately defined, short timing for its system calls. The behavior metrics are:
• Task-switching latency:
The time needed to save the context of the currently executing task and switch to another task; this should be short.
• Interrupt latency:
The time elapsed between execution of the last instruction of the interrupted task and the first instruction of the interrupt handler.
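These latencies are properties of the RTOS and the target hardware; a desktop OS gives no bound on them. As an illustration only (not a real measurement methodology), the following Python sketch times the handoff between two threads, a rough analogue of task-switching latency; all names are invented for the demo:

```python
import threading
import time

def measure_handoff_latency():
    """Time the gap between one thread signaling an event and another
    thread waking up -- a rough analogue of task-switching latency.
    A desktop OS gives no hard upper bound on this value."""
    ready = threading.Event()
    done = threading.Event()
    result = {}

    def waiter():
        ready.set()                # tell the signaler we are about to wait
        done.wait()                # block until signaled
        result["wake"] = time.perf_counter()

    t = threading.Thread(target=waiter)
    t.start()
    ready.wait()                   # make sure the waiter thread started
    time.sleep(0.01)               # give it time to block in done.wait()
    start = time.perf_counter()
    done.set()                     # the "context switch" to the waiter
    t.join()
    return result["wake"] - start  # elapsed handoff time in seconds

latency = measure_handoff_latency()
print(f"handoff latency: {latency * 1e6:.1f} us")
```

On a general-purpose OS this number varies from run to run, which is exactly the non-determinism an RTOS is designed to avoid.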
6. Control of Memory Management
• To ensure a predictable response to an interrupt, an RTOS should provide a way for a task to lock its code and data into physical memory.
Desktop OS versus RTOS
Difference between a Desktop OS and an RTOS:
1. Desktop OS: large memory footprint. RTOS: small memory footprint.
2. Desktop OS: large user-interface management. RTOS: limited user interface.
3. Desktop OS: network protocols are usually built in. RTOS: included only if required.
4. Desktop OS: programs have a definite exit. RTOS: tasks usually run in an infinite loop.
5. Desktop OS: always plug-and-play. RTOS: PnP support requires many changes.
ISR in RTOS Environment
Interrupt Routines in an RTOS Environment
In a system, an ISR should function as follows:
• ISRs have higher priority than the RTOS functions and the tasks.
• An ISR should not wait for a semaphore, mailbox message or queue message.
• There are three alternative ways in which a system can respond to hardware source calls from interrupts.
First Way: Direct Call to an ISR
• On an interrupt, the process running on the CPU is interrupted.
• The ISR corresponding to that source starts executing. (STEP 1)
• The hardware source calls the ISR directly. The ISR just sends an "ISR enter" message to the RTOS (STEP 2), informing the RTOS that an ISR has taken control of the CPU.
• The routine sends the ISR enter message to the RTOS, which is stored in memory allotted for OS messages. (STEP 3)
• When the ISR finishes, it sends an "ISR exit" message to the RTOS, and control returns to the OS functions or task. (STEP 4)
Second Way: RTOS First Intercepting the Interrupt, Then Calling the Corresponding ISR
• On interrupt of a task, say the k-th task, the RTOS first takes the hardware source call itself and initiates the corresponding ISR after saving the present processor status (context).
• During execution, the ISR can then post one or more outputs for the events and messages into the mailboxes or queues.
RTOS first intercepting the interrupt (contd.)
• The ISR must be short; it must simply post the messages for another task.
• That task runs the remaining code whenever it is scheduled.
• The RTOS schedules only the tasks (processes) and switches contexts between tasks only.
• An ISR executes only during a temporary suspension of a task.
Third Way: RTOS First Intercepting the Interrupt, Then the RTOS Initiating the ISR, and Then an IST
• On an interrupt, the RTOS first takes the hardware source call (STEP 1) and initiates the corresponding ISR after finishing other sections and saving the processor status (context). (STEP 2)
• The ISR services the device. (STEP 3)
• During execution, the ISR can send one or more outputs for the events and messages into mailboxes or queues for the IST (Interrupt Service Thread). (STEP 4)
• Just before the end, the ISR unmasks (enables) further pre-emption from the same or other hardware sources. (STEP 5)
RTOS first intercepting the interrupt, then initiating the ISR (contd.)
• The ISTs in memory that have received messages from the ISRs execute (STEP 6), as per their priorities, on return from the ISR.
• When no ISR or IST is pending execution, the interrupted task resumes on return. (STEP 7)
RTOS Calling the Corresponding ISR, the ISR Sending Messages to an IST
• An RTOS can provide two levels of interrupt service routines: a fast-level ISR (FLISR) and a slow-level ISR (SLISR).
• The FLISR can also be called the hardware-interrupt ISR, and the SLISR the software-interrupt ISR.
• The FLISR is called simply "the ISR" in an RTOS; the SLISR is called an interrupt service thread (IST).
Contd.
• The ISR must be short, run only critical and necessary code, and then simply send the initiate call or messages to the ISTs in memory.
• The main function of an IST is to run the remaining code as per the schedule. ISTs are SLISRs running device-independent code, as per the device priorities, on signals (SWIs) from the ISR.
• The ISTs run in kernel space.
• The system priorities are, in order: ISR, then IST, then task.
Interprocess/Task/Thread
Communication (IPC)
Introduction
What is IPC?
• Inter-process communication (IPC) is a set of methods or rules for the exchange of data among multiple threads in one or more processes.
• IPC may also be referred to as inter-thread communication and inter-application communication.
What are the different classifications of IPC?
•Synchronization communication
•Communication with/without data
•Uni-directional/bi-directional transfer
•Structured/un-structured data
•Destructive/non-destructive read
Mutual Exclusion & Synchronization
•Semaphore
•Binary semaphore
•Counting semaphore
•Mutex
Communication without data
• Event register
• Signal
• Condition variable
Communication with data
• Structured data, destructive read
• Message queue
• Unstructured data
• Uni-directional, destructive read
• Named pipe (FIFO)
• Unnamed pipe
• Bi-directional, non-destructive read
• Shared memory
Semaphore Introduction
• A semaphore is a variable or abstract data type that provides a simple but useful abstraction for controlling access by multiple processes or threads to a common resource.
• Semaphore useful for
• Synchronize execution of multiple tasks
• Coordinate access to a shared resource
• RTOS provides a semaphore object and associated semaphore
management services
• There are three types of semaphore
1. Binary Semaphore
2. Counting Semaphore
3. Mutex
Binary Semaphores
• Have a value of either 0 or 1
• 0: the semaphore is considered unavailable
• 1: the semaphore is considered available
• When a binary semaphore is first created, it can be initialized to either available or unavailable (1 or 0, respectively).
• When created as global resources
• Shared among all tasks that need them
• Any task could release a binary semaphore even if the
task did not initially acquire it
The State Diagram of a Binary Semaphore
Counting Semaphores
• Use a counter to allow it to be acquired or
released multiple times.
• The semaphore count assigned when it was first
created denotes the number of semaphore tokens
it has initially.
• Global resources that can be shared by all tasks
that need them.
The State Diagram of a Counting Semaphore
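As a sketch, the token semantics of a counting semaphore can be mimicked with Python's `threading.Semaphore`, a desktop analogue of the RTOS primitive (all names here are illustrative):

```python
import threading

pool = threading.Semaphore(2)    # created with 2 tokens: 2 acquisitions succeed

def try_acquire(n):
    """Attempt n non-blocking acquisitions; return how many tokens were taken."""
    got = 0
    for _ in range(n):
        if pool.acquire(blocking=False):
            got += 1
    return got

taken = try_acquire(3)                # only 2 tokens exist, the 3rd attempt fails
print(taken)                          # -> 2
pool.release()                        # returning a token makes the semaphore
print(pool.acquire(blocking=False))   # acquirable again -> True
```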
Mutual Exclusion (Mutex) Semaphores
• A special binary semaphore that supports:
• Ownership
• Recursive access
• Task-deletion safety
• One or more protocols for avoiding problems inherent to mutual exclusion
• The states of a mutex: locked (1) and unlocked (0)
• A mutex is initially created in the unlocked state (initial count value = 0).
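Ownership and recursive access can be illustrated with Python's reentrant lock, `threading.RLock` (a desktop analogue; a real RTOS mutex comes from the kernel API and may add priority-inheritance protocols that RLock lacks). The function names are invented for the demo:

```python
import threading

lock = threading.RLock()   # reentrant: only the owning thread may release it,
                           # and that owner may re-acquire it recursively

def outer():
    with lock:             # first acquisition by this thread
        return inner()     # call inner() while still holding the mutex

def inner():
    with lock:             # recursive acquisition by the same owner succeeds;
        return "ok"        # a non-recursive lock would deadlock here

print(outer())
```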
Difference between Mutex and Semaphore
• Assume we have a buffer of 4096 bytes.
• In a process there are two threads, T1 and T2.
• Thread T1 collects data and writes it to the buffer; thread T2 processes the collected data from the buffer.
• The objective is that both threads should not work on the buffer at the same time.
Using a mutex:
• A mutex provides mutual exclusion: either T1 or T2 can hold the key (mutex) and proceed with its work.
• As long as the buffer is being filled by thread T1, thread T2 needs to wait, and vice versa.
• At any point in time, only one thread can work with the entire buffer.
Using a semaphore:
• A semaphore is a generalized mutex. In place of a single buffer, we can split the 4 KB buffer into four 1 KB buffers (identical resources).
• A counting semaphore can be associated with these four buffers, so the producer and consumer can work on different buffers at the same time.
A mutex is associated with an entity or process, while a semaphore is associated with a resource.
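A sketch of the four 1 KB buffer scheme in Python: a counting semaphore meters how many buffers are free, while a mutex protects the shared free-list itself. All names are invented for the demo; a real system would use the RTOS's own semaphore and mutex services:

```python
import threading

NUM_BUFFERS = 4
buffers = [bytearray(1024) for _ in range(NUM_BUFFERS)]  # four 1 KB buffers
free = threading.Semaphore(NUM_BUFFERS)   # one token per identical buffer
free_list = list(range(NUM_BUFFERS))      # indices of currently free buffers
free_list_lock = threading.Lock()         # mutex protecting the index list

def acquire_buffer():
    """Take a token (may block if all 4 are in use), then pick a free index."""
    free.acquire()
    with free_list_lock:
        return free_list.pop()

def release_buffer(i):
    """Return buffer i to the pool and release its token."""
    with free_list_lock:
        free_list.append(i)
    free.release()

a = acquire_buffer()
b = acquire_buffer()      # two distinct buffers held at the same time
print(a != b)             # -> True
release_buffer(a)
release_buffer(b)
```

With a single mutex instead of the semaphore, `acquire_buffer` could be held by only one thread at a time, serializing access to the whole 4 KB region.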
The State Diagram of a Mutual Exclusion (Mutex) Semaphore
Typical Semaphore Use
• Semaphores are useful for:
• Synchronizing execution of multiple tasks
• Coordinating access to a shared resource
• Examples:
• Wait-and-signal synchronization
• Multiple-task wait-and-signal synchronization
• Credit-tracking synchronization
• Single shared-resource-access synchronization
• Recursive shared-resource-access synchronization
• Multiple shared-resource-access synchronization
Wait and Signal Synchronization
• Two tasks can communicate for the purpose of synchronization without exchanging data.
• Example: a binary semaphore can be used between two tasks to coordinate the transfer of execution control.
• The binary semaphore is initially unavailable.
• tWaitTask has higher priority and runs first; it tries to acquire the semaphore and is blocked.
• tSignTask then has a chance to run; it releases the semaphore and thus unblocks tWaitTask.
Wait and Signal Synchronization Between Two
Tasks.
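The sequence above can be sketched with Python threads and a semaphore created unavailable (count 0). Thread priorities cannot be set in this desktop analogue, so a short sleep stands in for "tWaitTask runs first"; the task names mirror the slides:

```python
import threading
import time

sync = threading.Semaphore(0)   # initially unavailable (count 0)
events = []

def t_wait_task():
    events.append("wait: blocking on semaphore")
    sync.acquire()              # blocks until t_sign_task releases
    events.append("wait: resumed")

def t_sign_task():
    events.append("sign: releasing semaphore")
    sync.release()              # unblocks t_wait_task

waiter = threading.Thread(target=t_wait_task)
waiter.start()
time.sleep(0.05)                # let the waiter block first
t_sign_task()
waiter.join()
print(events)
```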
Wait and Signal Synchronization (when more than 2 tasks want the shared resource)
• To coordinate the synchronization of more than two tasks, use the flush operation on the task-waiting list of a binary semaphore.
• Example:
• The binary semaphore is initially unavailable.
• The tWaitTask instances have higher priority and run first; each tries to acquire the semaphore and is blocked.
• tSignTask then has a chance to run; it invokes a flush operation and thus unblocks all three tWaitTask instances.
Wait and Signal Synchronization
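Python's semaphore has no flush operation, so this sketch substitutes a condition variable whose `notify_all()` wakes every waiter at once, which is the effect a flush has on the task-waiting list (names invented for the demo):

```python
import threading
import time

cond = threading.Condition()
released = False
woken = []

def t_wait_task(name):
    with cond:
        while not released:      # guard against spurious wakeups
            cond.wait()
        woken.append(name)

waiters = [threading.Thread(target=t_wait_task, args=(f"task{i}",))
           for i in range(3)]
for t in waiters:
    t.start()
time.sleep(0.05)                 # let all three tasks block on the condition

with cond:                       # the "flush": wake every waiting task at once
    released = True
    cond.notify_all()
for t in waiters:
    t.join()
print(sorted(woken))
```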
Message Queues
Message Queues Introduction (1/2)
• To provide inter-task data communication, kernels
provide
• a message queue object and message queue
management services.
• A message queue
• a buffer-like object through which tasks and ISRs send
and receive messages to communicate and
synchronize with data
• The message queue itself consists of a number of
elements, each of which can hold a single message.
Message Queues Introduction (2/2)
• When a message queue is first created, it is assigned
• a queue control block (QCB)
• a message queue name
• a unique ID
• memory buffers
• a queue length
• a maximum message length
• task-waiting lists
• Kernel takes developer-supplied parameters to determine
how much memory is required for the message queue:
• queue length and maximum message length
The associated parameters, and supporting data
structures
Message Queue States (1/2)
Message Queue States (2/2)
• When a task attempts to send a message to a full message queue, the kernel may implement either of two behaviors:
• the sending function returns an error code to that task, or
• the sending task is blocked and moved into the sending task-waiting list.
Message Queue Content (1/3)
• Message queues can be used to send and receive a
variety of data.
• Some examples:
• a temperature value from a sensor
• a bitmap to draw on a display
• a text message to print to an LCD
• a keyboard event
• a data packet to send over the network
Message Queue Content (2/3)
• When a task sends a message to another task, the
message normally is copied twice
• from sender’s memory area to the message queue’s
memory area
• from the message queue’s memory area to receiver’s
memory area
• Copying data can be expensive in terms of
performance and memory requirements
Message copying and memory use for
sending and receiving messages
Message Queue Content (3/3)
• Keep copying to a minimum in a real-time embedded system:
• by keeping messages small (length-wise)
• by sending a pointer to the data instead of the data itself
• When a queue becomes full, there may be a need for error handling and user code for blocking the task(s); there may not be self-blocking.
Message Queue Storage (1/2)
• Message queues may be stored in a system pool or in private buffers.
• System pool:
• the messages of all queues are stored in one large shared area of memory
• Advantage: saves memory, since only one area is needed
Message Queue Storage (2/2)
Private Buffers
• Separate memory areas for each message queue
• Downside: uses up more memory
• requires enough reserved memory area for the full capacity of
every message queue that will be created
• Advantage: better reliability
• ensures that messages do not get overwritten and that room is
available for all messages
Typical Message Queue Operations
1. Creating and deleting message queues
2. Sending and receiving messages
3. Obtaining message queue information
1. Creating and deleting message queues
• When created, message queues are treated as global objects
and are not owned by any particular task.
• When creating a message queue, a developer needs to
decide
• Message queue length
• The maximum message size
• The blocked tasks waiting order
2. Sending and receiving messages
Sending Messages
• Tasks can send messages with different blocking policies:
• not block (ISRs and tasks)
• If a message queue is already full, the send call returns
with an error, the sender does not block
• block with a timeout (tasks only)
• block forever (tasks only)
• The blocked task is placed in the message queue’s task-waiting list
• FIFO or priority-based order
a) Sending messages in FIFO or LIFO order
b) FIFO and priority based task waiting lists
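The three sending policies map naturally onto Python's `queue.Queue`, used here as a stand-in for an RTOS message queue; the "return an error code" policy appears as the `queue.Full` exception:

```python
import queue

mq = queue.Queue(maxsize=2)          # a message queue of length 2

mq.put("m1")
mq.put("m2")                         # the queue is now full

# Policy 1: do not block (ISRs and tasks) -- the send fails immediately
try:
    mq.put_nowait("m3")
except queue.Full:
    print("send failed: queue full")

# Policy 2: block with a timeout (tasks only) -- give up after 0.1 s
try:
    mq.put("m3", timeout=0.1)
except queue.Full:
    print("send timed out")

# Policy 3: block forever (tasks only) -- mq.put("m3") would wait until a
# receiver frees a slot, so this demo drains one message first
mq.get()                             # receiver consumes "m1"
mq.put("m3")                         # now the send succeeds
print(mq.qsize())                    # -> 2
```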
Receiving Messages (1/2)
• Tasks can receive messages with different blocking
policies:
• not blocking
• blocking with a timeout
• blocking forever
• Due to the empty message queue, the blocked task
is placed in the message queue’s task-waiting list
• FIFO or priority-based order
FIFO and priority based task waiting lists
Receiving Messages (2/2)
•Messages can be read from the head of a
message queue in two different ways:
• Destructive read
• removes the message from the message queue’s storage buffer after it is successfully read
• Non-Destructive read
• without removing the message
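Modeled minimally with a deque (illustrative only; an RTOS would expose receive calls with a peek option):

```python
from collections import deque

mq = deque(["msg-a", "msg-b"])     # a minimal model of a message queue

def destructive_read(q):
    """Read and remove the message at the head of the queue."""
    return q.popleft()

def nondestructive_read(q):
    """Read the head message, leaving it in the queue."""
    return q[0]

peeked = nondestructive_read(mq)   # head message is still queued afterwards
taken = destructive_read(mq)       # same message, now removed from the queue
print(peeked, taken, list(mq))
```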
Obtaining Message Queue Information
Obtain information about a message queue:
•message queue ID,
•task-waiting list queuing order (FIFO or
priority-based), and
•the number of messages queued.
Broadcast Communication (1/3)
• Allows developers to broadcast a copy of the same message to multiple tasks.
• Message broadcasting is a one-to-many-tasks relationship.
• tBroadcastTask() sends the message on which multiple tSinkTask() instances are waiting.
Broadcast Communication (2/3)
Broadcast Communication (3/3)
Pipes
Pipe Introduction (1/2)
• Pipes provide unstructured data exchange and facilitate synchronization among tasks.
• A pipe is a unidirectional data exchange facility.
• It has two descriptors or functions, one end for reading and one for writing.
• Data is written via one descriptor or function and read via the other.
• A reader becomes blocked when the pipe is empty, and a writer becomes blocked when the pipe is full.
A common pipe — Unidirectional pipe
• The data remains in the pipe as an unstructured byte
stream.
• Data is read from the pipe in FIFO order.
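POSIX pipes behave the same way, so a minimal sketch with Python's `os.pipe` shows the unidirectional, unstructured, FIFO byte stream (POSIX-only; an RTOS would expose its own pipe create/read/write calls):

```python
import os

# os.pipe() returns two descriptors: a read end and a write end of a
# unidirectional byte stream that is read in FIFO order.
r, w = os.pipe()

os.write(w, b"first ")
os.write(w, b"second")    # both writes accumulate as one unstructured stream
os.close(w)               # writer done; the reader sees EOF after the data

data = os.read(r, 64)     # destructive read in FIFO order
print(data.decode())
os.close(r)
```

Note that message boundaries are not preserved: the two writes arrive as a single run of bytes, unlike a message queue.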
Pipe Introduction (2/2)
• Typically used to exchange data between a data-producing task and a data-consuming task.
• A pipe allows several writers with multiple readers on it.
Pipes vs. Message Queue
• A pipe does not store multiple messages.
• data that it stores is not structured, but consists of a
stream of bytes
• The data in a pipe cannot be prioritized
• the data flow is strictly FIFO
• Pipes support the powerful select operation, and
message queues do not.
Pipe vs. Queue
• Pipe: pipes are a layer over message queues. Queue: a message queue is managed by the kernel, with all queue memory allocated at creation.
• Pipe: a technique for passing information from one process to another. Queue: a method by which a process can pass data using an interface.
• Pipe: two processes, one feeding the pipe with data while the other extracts the data at the other end. Queue: can be created by one process and used by multiple processes that read/write messages to the queue.
• Pipe: a linear array of bytes, as is a file, but used solely as an I/O stream. Queue: not a streaming interface.
• Pipe: supports destructive reading (once read, the data vanishes). Queue: datagram-like behavior; reading an entry removes it from the queue, and if you don't read the entire data, the rest is lost.
• Pipe: one-way communication only. Queue: has a maximum number of elements, and each element has a maximum size.
Pipe Control Blocks (1/2)
• Pipes can be dynamically created or destroyed.
• The kernel creates and maintains pipe-specific information in a pipe control block:
• A kernel-allocated data buffer for the pipe’s input and output operations; the buffer size is fixed when the pipe is created.
• Current data byte count: the amount of readable data currently in the pipe.
• Current input and output positions: specify the next write/read position in the buffer.
Pipe Control Blocks (2/2)
• Two task-waiting lists are associated with each pipe
Pipe States
• Corresponds to the data transfer state between the
reader and the writer of the pipe
Signals
Signal Introduction (1/3)
• A signal is a software interrupt that is generated
when an event has occurred.
• Signals notify tasks of events that occurred during
the execution of other tasks or ISRs
• these events are asynchronous to the notified task
Signals
Signal Introduction (2/3)
• The number and type of signals defined is both system-dependent and RTOS-dependent.
• each signal is associated with an event
• Unintentional
• such as an illegal instruction encountered during program
execution
• Intentional
• such as a notification to one task from another that it is about
to terminate
• a task can specify the particular actions to undertake when a
signal arrives
• the task has no control over when it receives signals
Signal Introduction (3/3)
• When a signal arrives,
• the task is diverted from its normal execution path, and
• the corresponding signal routine is invoked.
• Signal routine, signal handler, asynchronous event
handler, and asynchronous signal routine (ASR)
• Each signal is identified by an integer value, which is
the signal number or vector number.
Signal number or Vector number
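A sketch of associating a signal number with a handler, using POSIX signals via Python's `signal` module (a desktop analogue; the available signal numbers and delivery rules in an RTOS are system-dependent, and SIGUSR1 here is POSIX-only):

```python
import os
import signal

events = []

def handler(signum, frame):
    # the asynchronous signal routine (ASR): execution is diverted here
    # when the signal is delivered
    events.append(signum)

# associate the signal number SIGUSR1 with the handler
signal.signal(signal.SIGUSR1, handler)

# intentionally raise the signal toward our own process
os.kill(os.getpid(), signal.SIGUSR1)
print(events)
```

The task chooses what action to take (the handler) but has no control over when the signal arrives, matching the description above.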
Basic Design of RTOS
• An embedded system with a single CPU can run only one process at an instance; that process may be an ISR, a kernel function, or a task.
• The RTOS runs the user threads in kernel space so that they execute fast.
• It provides effective handling of the ISRs, ISTs, tasks or threads.
• It allows disabling and enabling of interrupts in user-mode critical-section code.
• It provides memory allocation and de-allocation functions in fixed blocks of memory.
Cntd….
• Provides for the use of semaphore(s) by the tasks, or for the shared resources in a task or OS functions.
• Provides for effective scheduling, running and blocking of the tasks in cases of multiple tasks.
• I/O Management with devices, files, mailboxes, pipes and sockets
becomes simple using an RTOS.
Design Principles in RTOS Environment
1. Design with the ISRs and Tasks.
2. Each ISR design consisting of shorter code.
3. Design using Interrupt Service Threads or Interrupt Service Tasks.
4. Design Each Task with an infinite loop.
5. Design in the form of tasks for the Better and Predictable
Response Time Control.
6. Design in the form of tasks for modular design.
7. Design in the form of tasks for data encapsulation.
8. Design with taking care of the time spent in the system calls
9. Design with Limited number of tasks
1. Design with the ISRs and Tasks.
• The embedded system's hardware source call generates interrupts.
• The ISR can post (send) the message for the RTOS and the task’s parameters.
• No ISR instruction should block any task; therefore, the ISR should not use mutex lock functions.
• Only the RTOS initiates actions according to the ISR-posted signal, semaphore, queue and pipe messages.
On an interrupt (if it is not masked):
1. The current process context is saved on the stack.
2. The corresponding ISR executes.
3. The interrupt is handled by one of the three methods explained before.
Contd.
• The RTOS provides for nesting of ISRs: a running ISR can be interrupted by a higher-priority ISR.
• The higher-priority ISR starts executing, blocking the running of the low-priority ISR after saving all related information on the stack.
• When the high-priority interrupt service completes, it returns to the low-priority interrupt after retrieving the saved context from the stack.
2. Each ISR design consisting of shorter code.
• Since ISRs have higher priorities over the tasks, the ISR
code should be made short so that the tasks don’t
wait longer to execute.
• A design principle is that the ISR code should be
optimally short and the detailed computations be
given to an IST or task by posting a message or
parameters for that.
3. Design using Interrupt Service Threads or Interrupt Service Tasks (ISTs)
• In certain RTOSes, for servicing the interrupts,
• There are two levels, fast level ISRs and slow level
ISTs,
• The priorities are first for the ISRs, then for the ISTs
and then the task
4. Design Each Task with an infinite loop
• Each task has a while loop which never terminates.
• A task waits for an IPC or signal to start.
• The task, which gets the signal or takes the IPC for which it is
waiting, runs from the point where it was blocked or
preempted.
• In a preemptive scheduler, the high-priority task can be delayed for some period to let a low-priority task execute.
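The infinite-loop task pattern can be sketched with a Python thread blocking on a message queue as its IPC. The shutdown sentinel exists only so the demo terminates; a real RTOS task never exits its loop:

```python
import queue
import threading

inbox = queue.Queue()      # the IPC object this task waits on
log = []

def task():
    """A task body: an endless loop that blocks on an IPC, then runs."""
    while True:
        msg = inbox.get()      # wait for a signal/message to start
        if msg is None:        # shutdown sentinel, for this demo only;
            break              # a real RTOS task loop never terminates
        log.append(f"processed {msg}")   # resume from the blocking point

t = threading.Thread(target=task)
t.start()
inbox.put("event-1")
inbox.put("event-2")
inbox.put(None)            # stop the demo thread
t.join()
print(log)
```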
5. Design in the form of tasks for the Better
and Predictable Response Time Control
• Provide the control over the response time of the different
tasks.
• The different tasks are assigned different priorities, and those tasks which the system needs to execute with a faster response are separated out.
Response Time Control
• For example, in a mobile phone there is a need for a faster response to the call-receiving task than to user key input.
6. Design in the form of tasks for Modular Design
• A system of multiple tasks makes the design modular: the tasks themselves provide the modular structure.
7. Design in the form of tasks for Data Encapsulation
• A system of multiple tasks encapsulates the code and data of one task from the others, instead of sharing them through global variables.
8. Design with taking care of the time spent in
the system calls
• Expected time in general depends on the specific
target processor of the embedded system and the
memory access times.
9. Design with a Limited Number of Tasks
• Limit the number of tasks and select the appropriate number of tasks to improve response time, gain better control over shared resources, and reduce the memory requirement for stacks.
• Tasks that share data with a number of other tasks can be designed as one single task.