OS: the state of the art
Module 2
Scheduling and memory management
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 1
Content of module 2
a. Introduction: what is an operating
system
b. RTOS architectures
c. Scheduling
d. Memory Management
e. Conclusions
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 2
Documentation
• This presentation
• Evaluation reports
– http://www.dedicated-systems.com
• This course website
– http://vub.dedicated-systems.com
– Username: vubuser
– Code: vubdse1!
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 3
www.dedicated-systems.com
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 4
OS: a definition
• GENERAL PURPOSE OS (GPOS):
– A GPOS is a collection of programs that acts
as an intermediary between the hardware and
its user(s), providing a high-level interface to
low level hardware resources, such as the
CPU, memory, and I/O devices.
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 9
GPOS: PURPOSE
• To provide an environment with facilities
and services that make the use of the
hardware
– Convenient
– Efficient
– Safe
– Secure
• Sharing of resources among users and/or
programs
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 10
GPOS: facilities & services
• Memory management
• Process management
• Communication facilities
• A command language interpreter or GUI
• A file system
• ....
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 11
OS TYPES
In 1980
[Classification diagram: OS types placed along two axes, response requirements and protection requirements: off-line (batch, remote batch), time sharing, personal computing, on-line (data-base, real-time); user-programmable vs non-programmable; multi-user vs single-user.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 12
Migration of OS concepts and
features
[Timeline 1940–2000: OS concepts migrate from mainframes (no software, compilers, batch, resident monitors, time shared, multi-user, MULTICS, distributed systems) to minicomputers (no software, compilers, resident monitors, time shared, multi-user, UNIX) to microcomputers (no software, compilers, resident monitors, interactive, multi-user).]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 13
GPOS - RTOS
• GPOS
– Multiple applications on one system
– Maximum resource usage
– Average performance is important
• RTOS
– A single application on a system
– Reliability
– Predictability = time constraints on individual
events
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 14
Real-Time OS:
too many or too few?
• An enormous number of RTOSs is
available today
• Free – commercial – with/without source –
experimental…
• Some are competitive, others are
complementary
• Choosing the wrong RTOS may have
dramatic consequences for a project
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 15
Some Embedded (Real-Time) OS
• AMX (KADAK)
• CExecutive (JMI)
• CHORUS (Jaluna C5)
• CX/UX (HARRIS)
• eCos
• FlexOS (NOVELL)
• iRMX (Intel)
• LynxOS (Lynx)
• Linux flavours
• OSE (ENEA DATA)
• OS9 (Microware)
• PDOS (Eyring Research)
• (pSOS+) (WindRiver)
• QNX (QNX)
• Real/IX (Modcomp AEG)
• SPECTRA (VRTX) (Mentor Graphics)
• SMX (Micro Digital)
• SunOS (Sun)
• Symbian
• VxWorks (Wind River)
• Windows CE
• Windows XP embedded
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 16
Thesis
General purpose systems are migrating from mainframe to
distributed Client-Server technologies. A Real-Time System is
distributed by nature and has been using this technology for
some years already.
Real-Time & General Purpose OS technology
is merging.
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 17
Scenario for the presentations (1)
• What are the requirements for a good
RTOS?
Requirement
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 18
Scenario (2)
• Examples:
– Of real product situations
– Some examples show product problems which
are in most cases resolved by now
• In new release
• Except for NTe – XPe – Linux (because non RT!)
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 19
Testing scenario
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 20
evaluation test platform
• Pentium 200 MMX
– Reference platform
– Not really an embedded processor
– Is faster than most embedded processors
• PPC
– ATX platform with MPC7410 PPC 400 MHz
• ARM9
– ATX module: Integrator AP/CM920 T
• Hardware logic analyser for measurements
(10 ns resolution)
– No tricks with on-board timers
– No resolution problems with these timers
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 21
Development Platforms
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 22
Pictures
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 23
Test examples
• Platform calibration
– To compare results with different processors
• Performance measurements
– Thread creation, switching, deletion
– Interrupt latencies under load conditions
– Semaphore/mutex creation, release, ...
• Behaviour
– Queues: Task/thread, Semaphore/mutex
– Application simulation
• Endurance tests
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 24
Test procedure
• Start time: before starting an OS call, write a
trace to the PCI bus – capture with (PCI) analyser
• Start the call
• Stop time: at the end of the call, write again a
trace to the PCI bus – capture with analyser
• Do this as many times as possible – limited by
the size of the analyser trace buffer
• Do the same test in different circumstances
– No other activity
– A lot of other activity (processor load)
• Event = system activity between start and stop
• Event time = stop time - start time
(see the sketch after this slide)
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 25
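To make the procedure concrete, here is a minimal C sketch of the start/stop trace idea, under the assumption of an x86 target where a write to an I/O address decoded on the PCI bus is visible to the analyser; the port address and marker values are hypothetical, not those of the actual test suite.

/* Minimal sketch of the start/stop trace idea (assumptions: x86/Linux,
 * analyser triggering on writes to a hypothetical I/O address). */
#include <sys/io.h>           /* iopl(), outl() - x86/Linux specific */

#define TRACE_PORT 0xE000u    /* hypothetical address decoded on the PCI bus */
#define MARK_START 0x1u
#define MARK_STOP  0x2u

static void trace(unsigned int mark)
{
    outl(mark, TRACE_PORT);   /* one bus write = one time-stamped capture */
}

/* measure one OS call: event time = stop time - start time on the analyser */
void measure_call(void (*os_call)(void))
{
    trace(MARK_START);        /* start time */
    os_call();                /* the OS primitive under test */
    trace(MARK_STOP);         /* stop time */
}

int trace_init(void)
{
    return iopl(3);           /* allow user-mode port writes (needs root) */
}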
Measurements:
sample diagram
[Scatter plot: event duration (ns), roughly 3 000 to 12 000 ns, versus absolute time (µs), 0 to 50 000 µs.]
All public tests on 200 MHz Pentium MMX motherboard platform
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 26
Measurements:
histogram
[Histogram: number of events (log scale, 1 to 1 000 000) versus event duration (ns), 3 000 to 12 000 ns.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 27
End of M2a
Introduction
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 28
Module 2 b:
RTOS Architecture
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 29
Good RTOS – REQ: OS structure
A good RTOS needs a Client/Server architecture
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 30
OS STRUCTURE
• Simple or monolithic structure
• Layered Approach
• Client/Server model
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 31
Software Layers (1)
Applications
Implemented in software
System software
Hardware
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 32
The RT way
Applications
System software
Hardware
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 33
Software Layers (2)
Applications
System software
Language
translators
Operating
system
Utility
Programs
Hardware
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 34
Software Layers (3)
Applications
System software
Language
translators
Operating
system
Utility
Programs
Hardware
Command interpreter
Long-term scheduler
Resource manager
Short-term scheduler
File manager
I/O system
Memory manager
Kernel
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 35
Monolithic OS
Application
Program
. . .
Application
Program
User Mode
Kernel Mode
System Services
Operating
System
Procedures
Hardware
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 36
Client/server OS = REQ
Client
Application
Memory
Server
Process
Server
Network
Server
File
Server
Display
Server
User Mode
Kernel Mode
Microkernel
Send
Reply
Hardware
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 37
Client/server OS: advantages
• OS components are small and self-contained
• a single server may fail and be restarted
(user mode) without corrupting the rest of
the OS
• different servers may run on different
processors = easy distributed
environments
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 38
Client/server examples
• New approach
– Chorus (now Jaluna C5) (EU)
– QNX neutrino = QNX 6.x (CANADA)
– VxWorks 6.x ??? (US)
• Inherent (old) approach
– OSE (EU)
– VxWorks 5.x (US)
– VRTX (US)
– ...
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 39
Good RTOS – REQ:
OS architecture
Whatever architecture is used,
it should be clearly documented and published.
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 40
Example of different architectures
• Basic NT architecture
• RTX
• INTIME
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 41
RTX 4.1 (version 2) their view
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 42
INtime - their view
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 43
Simplified Basic NT architecture
[Diagram: several Win32 application processes with their threads run in user mode; the NT kernel, I/O manager, device drivers and HAL run in kernel mode on top of the hardware.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 44
RTX - our view
[Diagram: Win32 processes and their threads run in ring 3; the RT-API processes run on a dedicated RTX scheduler in ring 0 next to the NT kernel, I/O manager and device drivers; the HAL is extended for timer and IRQ handling.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 45
INtime - our view
[Diagram: the Win32 processes, NT kernel, I/O manager, device drivers and HAL on one side, and the INtime RT processes with their RT-APIs, scheduler and timer/IRQ handling on the other; NTX drivers and shared memory (MEM) connect the two sides, split across ring 0 and ring 3.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 46
End of M2b
RTOS architecture
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 47
Module 2 c:
Processor(s) management:
scheduling
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 48
M2c: Processor(s) Management
Single processor issues
• Scheduling in general
• A simple introduction to OS-scheduling
• Scheduling for RT-Systems
Multiple processor issues (later – Module x)
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 49
Scheduling in general
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 50
Scheduling in general
• Scheduling = resource sharing
• Resource
– Processor: “scheduling”
– Disk: “disk scheduling”
– Bus: “arbitration”
– Network
– ....
– Human, material: “project planning”
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 51
Scheduling
= Arbitration = Disk Scheduling =
Project Planning
Common Resource
[Diagram: Objects 1–4 all want to use the resource simultaneously = need for arbitration or scheduling.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 52
Scheduling
Processor
[Diagram: Tasks 1–4 all want to use the processor simultaneously = need for scheduling.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 53
IEEE’ s Scheduler
• IEEE 610.12-1990 (standard glossary of
software engineering terminology)
– A computer program, usually part of an
operating system, that schedules, initiates
and terminates jobs.
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 54
Arbitration
Bus
[Diagram: Bus Masters 1–4 all want to use the bus simultaneously = need for arbitration.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 55
IEEE’ s Arbitration
• IEEE 959 - 1987 (SBX)
– The process of determining which requested device
will gain access to a resource
• IEEE 1000 - 1987 (STEbus)
– The means whereby masters compete for control of
the bus and the process by which a master is granted
control of the bus.
• IEEE 1196 - 1987 (VSB)
– A collection of mechanisms that allow masters to
access the bus without conflicting with each other.
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 56
Arbitration = Scheduling
• For n levels of priority
• (Pre-emptive) Priority (for Real-Time
Systems)
– n > n-1 > .......... > 1
• Round Robin
– n = n-1 = .......... = 1
• Mixed modes
Number of priority levels: VMEbus: 4 – PCI: # slots – RTOS: 256
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 57
Disk Scheduling
Disk
different requesters want to use the disk simultaneously = need for scheduling
[Diagram: Access Requests 1–4 all target the disk.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 58
JOB and PROCESSOR SCHEDULING
• Scheduling levels
• Scheduling objectives
• Scheduling criteria
• Pre-emptive vs non pre-emptive
• The interval timer
• Priorities
• Discussion of scheduling methods
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 59
Scheduling levels
[Diagram: jobs waiting for entry, job entry, jobs waiting for initiation, high-level scheduling, job initiation, active processes; intermediate-level scheduling suspends and activates processes; low-level scheduling dispatches ready processes to running; running processes block, time out (back to ready) or complete; a wakeup moves blocked processes back to ready.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 60
Scheduling levels - GPOS
• High-level scheduling = job scheduling
– job or admission scheduling
– compete for the resources of the system
– jobs become processes or groups of processes
• Intermediate-level scheduling
– which process competes for the CPU(s)
– is a buffer between admission of jobs and assigning
them to the CPU
• Low-level scheduling = process/thread
scheduling
– performed by the dispatcher, which resides in
primary storage at all times
– decides which ready process will be assigned the CPU
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 61
Possible Scheduling objectives
• Fair
• Maximize throughput
• Maximum number of interactive users
• Be predictable and respect deadlines
• Minimize overhead
• Balance resource use
• Balance response and utilization
• Avoid indefinite postponement
• Enforce priorities
• Preference to processes holding key resources
• Better service to processes with desirable behavior
• Graceful degradation under heavy loads
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 62
Scheduling criteria in general
• Consider
– That a process can be
• I/O bounded
• CPU bounded
• batch or interactive
– How urgent is a fast response
– Process priority
– how frequently a process
• is generating page faults
• has been pre-empted by a higher priority process
– how much
• real execution time has been received
• more time is needed to finish
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 63
Processor Scheduling
a simple introduction
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 64
Scheduling mechanisms
a simple introduction
• Single event system
• Multiple event system
• Use scheduling with RR and a ticker
• Use priorities
• Pre-emption
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 65
Single event system
[Flowchart: poll "? event"; if yes, respond to the event; loop. See the sketch after this slide.]
Deadline = respond-to-event time + jitter on event detection
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 66
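As a rough illustration only, the single-event loop could look like the C sketch below; event_pending() and handle_event() are hypothetical placeholders for reading a device status flag and servicing the device.

#include <stdbool.h>

extern bool event_pending(void);   /* hypothetical: poll a device status flag */
extern void handle_event(void);    /* hypothetical: respond to the event */

/* the whole "system" is one endless polling loop */
void single_event_system(void)
{
    for (;;) {
        if (event_pending())       /* "? event" */
            handle_event();        /* "respond to event" */
        /* deadline = response time + jitter on event detection (poll period) */
    }
}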
Multiple event system 1
[Flowchart: wait for event A, respond to event A; wait for event B, respond to event B; wait for event C, respond to event C; loop.]
A, B & C ordered
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 67
Multiple event system 2
[Flowchart: poll event A (1 ms), if yes respond to event A (40 ms); poll event B (1 ms), if yes respond to event B (35 ms); poll event C (1 ms), if yes respond to event C (30 ms); loop.]
total loop time: 108 ms
A, B & C random
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 68
• Exec times: A: 40 ms, B: 35 ms, C: 30 ms
• Deadlines: A: 100 ms, B & C: 150 ms
[Timeline: the polled loop serves A (40 ms), B (35 ms), A (40 ms), C (30 ms) in sequence, with 1 ms polls in between.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 69
• Exec times: A: 40 ms, B: 35 ms, C: 30 ms
• Deadlines: A: 100 ms, B: 130 ms, C: 150 ms
[Timeline: with B's deadline tightened to 130 ms, the polling loop is rearranged so that event B is polled a second time within the cycle.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 70
• Exec times: A: 40 ms, B: 35 ms, C: 30 ms
• Deadlines: A: 100 ms, B: 130 ms, C: 150 ms
[Timeline: the rearranged polled loop, with event B checked twice per cycle.]
Total loop time: 149 ms
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 71
Basic ingredients for scheduling
• Ticker
• A state machine concept
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 72
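The two ingredients can be sketched in C roughly as below; the task table, the 41-tick timeslice and the dispatcher are illustrative only, and a real executive would of course also save and restore the task context when it switches.

#include <stddef.h>

enum state { READY, RUNNING, BLOCKED };

struct task {
    enum state state;
    int        ticks_left;       /* remaining timeslice in ticks */
};

#define NTASKS 3
static struct task tasks[NTASKS];
static size_t current = 0;

/* executive: pick the next READY task, round robin */
static void dispatch(void)
{
    for (size_t i = 1; i <= NTASKS; i++) {
        size_t next = (current + i) % NTASKS;
        if (tasks[next].state == READY) {
            tasks[current].state = READY;
            current = next;
            tasks[current].state = RUNNING;
            tasks[current].ticks_left = 41;   /* new timeslice */
            return;
        }
    }
}

/* ticker: repetitive timer interrupt, every X ms */
void ticker_isr(void)
{
    if (--tasks[current].ticks_left <= 0)
        dispatch();               /* timeslice exhausted: switch task */
}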
[Diagram: Tasks A, B and C, each a "? event / respond to event" loop, running under an EXECUTIVE.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 73
Repetitive Interrupt
[Diagram: a TIMER (TICKER) interrupts the CPU every X ms.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 74
Is it better?
Solution with just one program
[Timeline: the same workload A (40 ms), B (35 ms), A (40 ms), C (30 ms) run as separate tasks under time slicing.]
time slice: 41 ms = best solution
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 75
Total time = f (timeslice)
[Plot: total time (about 150 to 290 ms) as a function of the timeslice (about 2 to 57 ms).]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 76
Why is it worse?
[Flowchart: the exec wakes up the task; the task checks "has the event occurred?" (it only occasionally has); respond to the event; execute a trap instruction; the exec puts the task back to sleep.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 77
THE solution
[Flowchart: the exec wakes up the task only when the event has occurred; the task responds to the event, executes a trap instruction, and the exec puts it back to sleep.]
BUT: who is detecting the event?? The OS = the device driver!
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 78
[Diagram: Tasks A, B and C, each responding to its own event (A, B or C), running under an EXECUTIVE.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 79
Process states & transitions
[State diagram: Ready –Dispatch→ Running; Running –Block→ Blocked; Running –Timer-runout→ Ready; Blocked –Wakeup→ Ready.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 80
Queues – 4 processors
[Same state diagram, with a ready queue and a blocked queue feeding 4 processors.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 81
Queues – 1 processors
[Same state diagram, with a ready queue and a blocked queue feeding 1 processor.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 82
Different scheduling algorithms
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 83
Some scheduling mechanisms
• http://en.wikipedia.org/wiki/Category:Scheduling_algorithms
• FIFO
• Round Robin
• Multilevel feedback queues
• Shortest job first (SJF)
• Shortest remaining time
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 84
FIFO scheduling
Ready list
[Diagram: processes A, B and C wait in the ready list and are given the CPU in arrival order, each running to completion.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 85
Round Robin scheduling (RR)
Ready list
[Diagram: processes A, B and C take turns on the CPU; a pre-empted process re-enters the ready list at the tail until it completes.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 86
Multilevel feedback queues
[Diagram: three queue levels (Level 1, 2, 3), each feeding the CPU and leading to completion.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 87
Shortest-job-first
• non pre-emptive scheduling
• waiting job with smallest estimated runtime-to-completion is run next
• requires precise knowledge of how long a
job will last
• once a job is started it runs to completion
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 88
Shortest-remaining-time
• Pre-emptive counterpart of SJF
• job with shortest estimated runtime to
completion runs next
• should keep track of elapsed time (creates
more overhead)
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 89
Good RTOS – REQ:
A deadline scheduler
A good RTOS needs a deadline scheduling mechanism.
This technology is NOT (yet?) available.
Pre-emptive scheduling is an acceptable replacement
if RMA is used
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 90
Deadline scheduling problem
• Precise resource requirements should be known
in advance
• The system must carefully plan its resource
requirements through to the deadline
• Knowledge of task execution time is needed.
• What if many deadline jobs exist together?
• scheduling algorithm complexity introduces
(serious) overhead
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 91
Waiting for deadline scheduling
(if ever)
• Pre-emptive priority scheduling
• Rate Monotonic Analysis
• Earliest deadline
• Least slack
• ..
• Time triggered
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 92
Priority Scheduling
[Timeline: ISR A and ISR B release TASK A (40), TASK B (35) and TASK C (30); timeslice: 25.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 93
Pre-emptive Priority Scheduling
[Timeline: ISR A and ISR B release TASK A (40), TASK B (35) and TASK C (30); the higher priority task pre-empts the running one; timeslice: 25.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 94
Pre-emptive Priority Scheduling
[Timeline: the same tasks, with ISR B arriving before ISR A; timeslice: 25.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 95
Rate-Monotonic Scheduling
• Liu and Layland [1973]: RMS = optimal fixed-priority scheduling
– If a successful scheduling for the RMS cannot be
found, then no other fixed priority mechanism will
avail.
• The higher the request rate, the higher the
priority assigned to the process request
• As long as the processor utilisation remains
BELOW a certain level RMS will assure the
meeting of the deadlines of the tasks
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 96
Static RMS
• the process set is schedulable by the static
rate monotonic priority assignment
scheme if
u = Σ (i = 1..n) ci · fi  ≤  n · (2^(1/n) - 1)
u: processor utilisation
ci : task execution time
fi : task frequency (1 / period)
n: number of tasks
n:       1    2    3    4    5    10   100  1000
max (u): 1   .83  .78  .76  .74  .72  .70  .69
(a schedulability check is sketched after this slide)
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 97
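As an illustration of how the bound is applied (not code from the course), a small C check can compute u and compare it with n · (2^(1/n) - 1); the example task set is the one from the next slide.

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Static RMS test: u = sum(ci * fi) <= n * (2^(1/n) - 1) */
bool rms_schedulable(const double exec[], const double period[], int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += exec[i] / period[i];              /* ci * fi, with fi = 1/period */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("u = %.2f, bound = %.2f\n", u, bound);
    return u <= bound;
}

int main(void)
{
    /* task set of "Periodic tasks: example 1": periods 2 s and 3 s,
     * execution times 1 s and 0.9 s -> u = 0.80, bound = 0.83 */
    double exec[]   = { 1.0, 0.9 };
    double period[] = { 2.0, 3.0 };
    printf("schedulable: %d\n", rms_schedulable(exec, period, 2));
    return 0;
}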
Periodic tasks: example 1
[Timeline legend: run / wait / idle]
Task   Period   Exec time
CY_1   2 s      1 s
CY_2   3 s      .9 s
Total over 6 s: 4.8 s = 80 %
[Two timelines: CY_1 Pr 110 / CY_2 Pr 100, and CY_1 Pr 100 / CY_2 Pr 110.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 98
Periodic tasks: example 2
Task   Period   Exec time
CY_1   2 s      1 s
CY_2   3 s      1.1 s
Total over 6 s: 5.2 s = 87 %
[Two timelines: CY_1 Pr 110 / CY_2 Pr 100, and CY_1 Pr 100 / CY_2 Pr 110.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 99
Periodic tasks: example 3
Task   Period   Exec time
CY_1   2 s      1 s
CY_2   3 s      1.1 s
Total over 6 s: 5.2 s = 87 %
[Timeline: CY_1 Pr 110 / CY_2 Pr 100, with CY_2 temporarily raised to Pr 120 and back to Pr 100.]
Change priorities dynamically
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 100
Periodic + non periodic tasks
• earliest deadline (d)
• least slack (minimal laxity)
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 101
Earliest Deadline
• http://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling
• the process set is schedulable on a single
processor by the dynamic earliest deadline
scheme if
Σ (i = 1..n) ci · fi  ≤  1
• ci : the computation time of task i
• fi : the task's frequency
(a check is sketched after this slide)
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 102
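A corresponding check for the earliest-deadline test is a few lines of C; again this is only a sketch of the formula above, not code from the course.

#include <stdbool.h>

/* EDF test: schedulable on one processor if sum(ci * fi) <= 1 */
bool edf_schedulable(const double exec[], const double period[], int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += exec[i] / period[i];   /* ci * fi, with fi = 1 / period */
    return u <= 1.0;
}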
Least Slack Time
• http://en.wikipedia.org/wiki/Least_slack_time_scheduling
• slack (Pi , t) = max (di - ci - t , 0)
• optimal in single processor systems only
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 103
Some supplementary issues
• Queue response predictability
• Cache introduces unpredictability
• Hard RT today with TTP
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 104
Scheduling queues
[Diagram: the ready queue header has head and tail pointers; each entry is a PCB (PCB a, PCB b, ...) holding the saved registers.]
For predictability: thread switch time should be independent of
the number of threads waiting in the queue.
Always order the queue when a thread goes into it: a high priority
thread goes in at the top (see the sketch after this slide).
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 105
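A minimal sketch of that rule in C: insertion keeps the ready queue sorted by priority, so the dispatcher just takes the head and the switch time no longer depends on the queue length (the ordering cost is paid at wakeup instead). The TCB layout is illustrative, not that of any particular RTOS.

#include <stddef.h>

struct tcb {
    int         priority;    /* higher value = higher priority */
    struct tcb *next;
};

/* insert in descending priority; FIFO among equal priorities */
void ready_enqueue(struct tcb **head, struct tcb *t)
{
    struct tcb **pp = head;
    while (*pp != NULL && (*pp)->priority >= t->priority)
        pp = &(*pp)->next;
    t->next = *pp;
    *pp = t;
}

/* dispatch: O(1), independent of the number of ready threads */
struct tcb *ready_dequeue(struct tcb **head)
{
    struct tcb *t = *head;
    if (t != NULL)
        *head = t->next;
    return t;
}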
Cache???
• RMA is based on the availability of a fixed
processor performance capability
• Cache, pipelining & other mechanism
provides us with a variable performance
engine. They give an AVERAGE
enhancement of processor performance
Today - there is no real solution how to deal with this!
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 106
TTA & TTP
• Time Triggered Architecture
• Time Triggered Protocol
• Time Triggered OS
– OSEK
– OSEKtime
• http://www.ttagroup.org/
• http://www.tttech.com
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 107
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 108
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 109
Good RTOS – REQ:
have enough thread priority
levels
A good RTOS needs enough thread priority levels (> 128)
so that the
application of RMA or similar theories is easy.
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 110
NT priority levels
Not enough levels for a serious real-time design!
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 111
CE 2.0 Fixed Priority levels
• 0-1 (interrupt level)
– real-time processing and device drivers
– 0: time-critical - no time slicing
• 2-4 (main level)
– normal applications
• 5-7 (idle level)
– non RT threads
– pre-emption available
• same priority level: round-robin
Not enough levels
for a serious real-time design!
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 112
CE 3.0 Fixed Priority levels
• 0-246 (real-time priority interrupt level)
• 247-255 (system level)
• same priority level: round-robin
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 113
QNX
• 32 levels
• May be just enough!?
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 114
End of Scheduling
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 115
Module 2 d:
Memory management
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 116
M2d: Storage management
As some people use non-RT systems for
embedded applications, we are obliged to study
both GPOS and RTOS memory management
techniques.
• Memory Management (Real storage)
• Virtual Memory (Virtual storage)
• Storage in RT-Systems
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 117
MEMORY MANAGEMENT
• Background
• Swapping
• Single-Partition Allocation
• Multiple-Partition Allocation
• Multiple Base Registers
• Paging
• Segmentation
• Paged Segmentation
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 118
Memory Management: intro
• multiple processes: share memory
• different ways of managing memory
– primitive bare machine
– paging
– segmentation
• selection of memory management =
hardware design issue
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 119
Background
• Memory is central to the operation of a
modern computer system
• In this section we may ignore HOW a
memory address is generated by a
program via the CPU or any other device
• We are only interested in the sequence of
memory addresses generated by the
running program
• In a first approach: the whole program in
memory to be able to run
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 120
Background
• Address Binding
• Dynamic Loading
• Dynamic Linking
• Overlays
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 121
B: Address Binding 1
[Diagram: source program, compiler or assembler, object module (compile time).]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 122
B: Address Binding 2
[Diagram: object module + other object modules, linkage editor, load module (load time).]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 123
B: Address Binding 3
[Diagram: load module + system library, loader, image in memory (load time); dynamically loaded system libraries are bound at execution time (run time).]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 124
Address Binding during
development
[Diagram: source program, compiler or assembler, object module; object modules + library, linkage editor, load module, loader, image in memory; the resident system library and dynamically loaded system library are bound at load or run time.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 125
Dynamic loading
• loading a routine when needed
• routines are kept on disk
• advantages:
– unused routines are never loaded
– no special support from OS
• examples: error routines
NON-RT
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 126
Dynamic linking
• example: having the library loaded in
memory all the time
• advantage:
– memory usage
– easy replacement of library by new one (with
less bugs)
• other name: shared libraries
RT-OK
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 127
Overlays
• problem: limitation of program size
(historical issue)
[Example: a two-pass assembler with symbol table 20K, common routines 30K and overlay driver 10K; pass 1 (70K) and pass 2 (80K) overlay each other in the same memory area.]
NON-RT
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 128
Swapping
• swapping out to a backing store
[Diagram: process P1 is swapped out of user space to the backing store while process P2 is swapped in; the operating system stays resident.]
NON-RT
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 129
Swapping
• swapping back to the same place or not
depending on the address binding
• backing store is fast disk
• context switch time may be very high if
swapping is needed
• problem: swapping a process with pending I/O
– solution 1: wait for I/O completion
– solution 2: I/O via OS buffers
• today swapping is only used in very few systems
NON-RT
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 130
Single-partition-allocation 1
• simple memory management = NONE
– user has complete control over entire memory
• advantages:
– maximum flexibility to the user
– maximum simplicity and minimum cost
– no need for special hardware
– no real need for OS software
• limitations:
– no service
– OS has no control over interrupts
– no mechanism to process system calls or errors
– no space for multi-programming
RT-OK
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 131
Single-partition allocation 2
• divide memory in 2 partitions
– one for the user
– one for the OS (if used)
0
OS
user
512K
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
RT-OK
p. 132
SPA: memory protection
[Diagram: every CPU address is checked against the base register (address >= base) and against base + limit (address < base + limit); addresses outside the range trap to the OS monitor as an addressing error. A software sketch follows this slide.]
RT-OK
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 133
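In software the check amounts to the comparison below; on a real system it is of course done by the protection hardware on every access, not by code, and the structure here is illustrative.

#include <stdbool.h>
#include <stdint.h>

struct partition {
    uint32_t base;    /* start of the user partition */
    uint32_t limit;   /* size of the partition */
};

/* true = access allowed; false = trap to the OS monitor (addressing error) */
bool address_ok(const struct partition *p, uint32_t addr)
{
    return addr >= p->base && addr < p->base + p->limit;
}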
SPA: loading problem
• user program does not start at 0
• what if OS uses transient code
– OS code size changes during execution
– Transient code is no longer needed with
modern processors
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 134
SPA: loading - solution 1
• process loaded in high memory
0
operating
system
user
512K
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 135
SPA: loading - solution 2
• dynamic relocation via a base (relocation) register
[Diagram: logical address 346 + base register 14000 = physical address 14346 in memory.]
• the user thinks the process runs at location 0
RT-OK
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 136
Multiple-Partition-Allocation
• multi-programming
• fixed-sized partitions
– example: IBM OS/360
[Memory map: operating system in 0–400K; user partitions up to 2560K.]
Job Queue:
Process   Memory   Time
P1        600K     10
P2        1000K    5
P3        300K     20
P4        700K     8
P5        500K     15
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 137
External fragmentation & compaction
[Series of memory maps from 0 to 2560K: the operating system occupies 0–400K; processes P1–P5 are allocated and terminate over time, leaving scattered holes (external fragmentation) that compaction would merge.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 139
MPA: dynamic storage allocation
• First-fit
– first hole that is big enough
• Best-fit
– smallest hole that is big enough
• Worst-fit
– largest hole
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 140
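A sketch in C of first-fit and best-fit over a free-hole list (worst-fit only flips the comparison); the hole-list layout is illustrative.

#include <stddef.h>

struct hole {
    size_t       start;
    size_t       size;
    struct hole *next;
};

/* first-fit: return the first hole big enough, NULL if none */
struct hole *first_fit(struct hole *holes, size_t request)
{
    for (struct hole *h = holes; h != NULL; h = h->next)
        if (h->size >= request)
            return h;
    return NULL;
}

/* best-fit: smallest hole that is still big enough */
struct hole *best_fit(struct hole *holes, size_t request)
{
    struct hole *best = NULL;
    for (struct hole *h = holes; h != NULL; h = h->next)
        if (h->size >= request && (best == NULL || h->size < best->size))
            best = h;
    return best;
}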
MPA: external fragmentation
• enough space is left but is not contiguous
• one third of memory may be unusable
EXTERNAL FRAGMENTATION
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 141
MPA: memory protection
[Diagram: the logical address from the CPU is compared with the limit register; if smaller, it is added to the base register to form the physical address, otherwise it traps to the OS monitor as an addressing error.]
RT-OK
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 142
MPA: internal fragmentation
[Diagram: between the operating system, P7 and P43 there is a hole of 18,000 bytes; the next request is for 17,000 bytes, so the remaining 1,000 bytes are wasted inside the allocated partition.]
INTERNAL FRAGMENTATION
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 143
MPA: compaction
• shuffle all free memory together in one
large block
• combine compaction with swapping to
make it work
NON-RT
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 144
[Memory maps before and after compaction: operating system 0–400K; P5, P4 and P3 are shuffled together, leaving one contiguous hole of 660K.]
NON-RT
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 145
[Three compaction alternatives for the same memory map (operating system 0–300K with P1, P2, P3, P4): moving 600K, 400K or 200K of data, each ending with the free memory in one block.]
NON-RT
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 146
Multiple Base Registers
• problem of variable-sized partition scheme
is external fragmentation
• solution: break down the memory a
process needs into several parts
• multiple base registers per process must
be provided to do logical to physical
address translation
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 147
MBR: solution 1
• memory divided into 2 disjoint parts by
using the high order address bit
• use 2 pairs of base registers
• compilers and assemblers put
– read-only values in high memory
– variables in low memory
• mechanism permits shared use of read-only segments
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 148
MBR: solution 2
• separate a program into 2 parts: code and
data
• programs may be shared
– compilers
– editors
– ...
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 149
PAGING
• problem: external fragmentation
• solution:
– Compaction – paging
• paging can also be used on the backing
store (= virtual memory)
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 150
Paging Hardware
[Diagram: the CPU issues a logical address (p, d); the page table maps page number p to frame number f; the physical address (f, d) selects the byte in physical memory. A sketch of the translation follows this slide.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 151
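The translation step can be written out as in the sketch below; a 4 KB page (12 offset bits) and a flat one-level page table are assumptions made here for illustration.

#include <stdint.h>

#define OFFSET_BITS 12u
#define PAGE_SIZE   (1u << OFFSET_BITS)

/* page_table[p] holds the frame number f for page p */
uint32_t translate(const uint32_t page_table[], uint32_t logical)
{
    uint32_t p = logical >> OFFSET_BITS;        /* page number */
    uint32_t d = logical & (PAGE_SIZE - 1u);    /* offset within the page */
    uint32_t f = page_table[p];                 /* frame number */
    return (f << OFFSET_BITS) | d;              /* physical address */
}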
P: hardware 2
• page size
– 512 to 2048 bytes per page
– depends on the number of bits of “d”
• page table = table of base registers
[Example: logical memory of pages 0–3 mapped by the page table (0→1, 1→4, 2→3, 3→7) into an 8-frame physical memory.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 152
Paging mechanism
[Diagram: a new process with pages 0–3 takes frames from the free frame list (14, 13, 18, 20); afterwards only frame 15 remains free and the page table maps 0→14, 1→13, 2→18, 3→20.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 153
P: implementation of page table
• performance issue
• dedicated registers
– high speed logic
– part of the context
• in main memory
– via page-table base register for fast context switch
– speed reduction by 2
• special small hardware memory
– called associative registers or translation look-aside
buffers (TLBs)
– hit ratio problem
SLOWER-RT-OK
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 154
[Diagram: the same translation, but the (p, f) pairs are kept in an associative map (TLB); a hit gives the frame number f without touching the page table in memory.]
RT-OK
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 155
[Diagram: when the associative map misses, the page-table origin register plus page number p addresses the page-table entry in main memory to obtain the frame number f ("only if no match in assoc. map").]
NON-RT ?
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 156
P: shared pages
• if the code is re-entrant
– non self-modifying code!
– also called pure code
[Diagram: processes P1, P2 and P3 share the editor pages ed 1, ed 2 and ed 3 (frames 3, 4 and 6) through their page tables; each process has its own data page (data 1, data 2, data 3).]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 157
P: protection
• protection bits associated with each frame
– read
– write
– read-only
– read-write
– execute only
• protection via page-table length register
– limited number of pages given to a process
(not all pages are possible)
– if other pages are used: bus error for the OS
RT-OK
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 158
P: two views of memory
• user’s view - OS view
• logical - physical address
• logical to physical address translation for I/O
operations
• Paging solves problem of external fragmentation
but creates internal fragmentation!
• Page size is small. Many pages are needed.
The associative map is limited. RT-problem
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 159
SEGMENTATION
• User’s view of memory
– user does not think of memory as a linear
array
[Diagram: the user sees memory as a collection of segments: subroutine, stack, symbol table, main program.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 160
Segmentation 2
• segmentation supports user’s view
• example: Pascal compiler
– global variables
– the procedure call stack
– the code of the program itself
– the local variables of each procedure and function
• segments are numbered
• user refers to a piece of code or data via a
segment number and an offset in the segment
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 161
S: hardware
[Diagram: the CPU issues (s, d); entry s of the segment table gives a limit and a base; if d < limit the physical address is base + d, otherwise an address error trap is raised.]
RT-OK
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 162
[Example: segments 0, 2, 3 and 5 of a program (main program, subroutine, stack, symbol table) placed in physical memory between 1400 and 5700:]
segment   limit   base
0         1000    1400
2          400    4300
3         1100    3200
5         1000    4700
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 163
S: implementation of segment
tables
• Segment table – part of the context (will
be in the Task Control Block)
• Context is larger – thread switching time is
higher = slower RT-OK
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 164
S: protection and sharing
• protection: corresponds better to the user idea
– read
– write
– read-only
– execute only
• sharing
– at the segment level: any information can be shared
if it is defined at the segment level
– examples: share entire program, share a subroutine,
share a data portion
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 165
S: fragmentation
• a lot of external fragmentation
– to be reduced by using smaller segments
– but then more segment table space is lost
+ performance degrades
• no internal fragmentation
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 166
Paged Segmentation
• 68000 family: flat address space
– First MMU was external chip (segmented)
– Then new external chip (paged)
– 68030: paged MMU inside the chip
• 8086 family: segmentation
– Created problems with OS/2
– Today’s solution for INTEL processors:
page the segments
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 167
VIRTUAL MEMORY
• Motivation
• Demand Paging
• Performance of Demand Paging
• Page replacement
• Page replacement algorithms
• Allocation of Frames
• Thrashing
• Other Considerations
• Demand Segmentation
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 168
VM: motivation
• in previous chapter: process entirely in memory
• the entire program is not always needed in
memory
– error code rarely used
– over-consumption of memory for arrays, lists etc.
– some program options are never used (by most users)
• not all the time needed in memory:
– program larger than physical memory
– each user takes less physical memory
– less I/O to load and swap each program into memory
• overlays are no longer needed
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 169
VM: Demand Paging
[Diagram: pages are paged in from and paged out to a backing store of numbered blocks (0–23) as main memory needs them.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 170
VM: Demand Paging 2
• hardware support
– valid-invalid bit in page table
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 171
[Example: logical memory with pages A–H; the page table marks pages A, C and F as valid (loaded in frames 4, 6 and 9) and all other pages as invalid (still on the backing store).]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 172
[Page fault handling:
1: reference (load M) hits an invalid page-table entry
2: trap to the OS
3: the page is on the disk
4: get the missing page into a free frame
5: reset the page table
6: restart the instruction]
NON-RT!!
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 173
VM: performance & demand paging
• the following sequence occurs
– trap to the OS
– save the user registers and process state
– determine that the interrupt was a page fault
– check that the page reference was legal and locate the page on disk
– issue a read from disk to a free frame
• wait in the queue for this device until the read is serviced
• wait for device seek and/or latency time
• begin the transfer of the page to the free frame
– while waiting, allocate the CPU to another user (optional CPU scheduling)
– disk I/O completion
– save registers and process state for the other user
– determine that the interrupt came from the disk
– correct the page tables and other tables
– wait for the CPU to be allocated to this process again
– restore user registers, process state and new page table, then resume
the interrupted instruction
NON-RT
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 174
VM: page replacement
• limit the number of pages in memory for
one user
• replace an old (non used) page with a
new one
• Different page replacement algorithms
(pm)
• problem of how many frames a process
gets
• a process is thrashing if it spends more
time paging than executing
NON-RT
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 175
VM: other considerations
• page size (hardware issue)
• program structure and behaviour under
paging is NON-RT
• ......
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 176
VM: Demand segmentation
• invented especially for the 80286 not
including page features
• used by OS/2
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 177
Good RTOS – REQ:
Predictable Mem Mgt
A good RTOS needs a predictable memory mgt system.
It will therefore be as simple as possible.
No memory leaks are accepted. Garbage collection is damned.
Memory protection becomes an important issue.
If virtual memory is used, firm memory locking should be possible.
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 178
MM in RTS
• performance issue
– CPU power used for MM
– if MMU is used outside processor: 10% lost
– if MMU inside processor 3% lost
– MMU: protection against programmer’s errors
• predictability issue
– ask for memory - when do I get it, if I get it!
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 179
Mem Mgt Conclusions for RT
•
•
•
•
Never usable
Not used
Sometimes used
Always used
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 180
Never usable in RTS
• Dynamic loading of DLL.
• Virtual memory on disk.
• Compaction or garbage collection due to
external fragmentation.
• ..
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 181
Not Used in RTS
• Dynamic relocation via an MMU
• Instead good CROSS compilers are used
generating “pure code” and position
independent code (PIC)
• But could be used in slower RT
systems
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 182
Sometimes Used in RTS
• Protection of segments via an MMU
• Use of paging MMU is difficult due to the
large number of pages involved
• Used in application where security is
important
– Nuclear power plant control
– Telecom billing systems
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 183
Always Used in RTS
• KISS (Keep It Simple and Stupid)
• Static memory allocation
• Minimal Dynamic Memory Allocation
– Try to allocate fixed size areas!
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 184
Dynamic Paging & RT
without MMU
2K
2K
5K
7K
2K
2K
5K
2K
7K
2K
2K
Pools subdivided in blocks
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 185
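A fixed-size block pool of this kind can be sketched in a few lines of C; because every block in a pool has the same size, allocation and release are O(1) and predictable, and there is no external fragmentation. The structure and names are illustrative, not those of a specific RTOS.

#include <stddef.h>
#include <stdint.h>

struct pool {
    void  *free_list;     /* singly linked list threaded through free blocks */
    size_t block_size;    /* e.g. 2K, 5K or 7K as in the figure */
};

void pool_init(struct pool *p, void *mem, size_t block_size, size_t nblocks)
{
    p->block_size = block_size;
    p->free_list = NULL;
    uint8_t *b = mem;
    for (size_t i = 0; i < nblocks; i++, b += block_size) {
        *(void **)b = p->free_list;     /* link the block into the free list */
        p->free_list = b;
    }
}

void *pool_alloc(struct pool *p)            /* O(1), predictable */
{
    void *b = p->free_list;
    if (b != NULL)
        p->free_list = *(void **)b;
    return b;
}

void pool_free(struct pool *p, void *b)     /* O(1), predictable */
{
    *(void **)b = p->free_list;
    p->free_list = b;
}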
Naming in RTOS
Partitions, Pools or Regions (the name differs per RTOS)
are subdivided in
Blocks, Buffers or Segments
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 186
Dynamic paging in RTS with
segmented MMU
• use a segmented MMU
• group segments in pools
• never render pools to the system
• segments used by a task should be part of
the context
[Same pool diagram: 2K, 5K and 7K blocks grouped in pools.]
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 187
Dynamic paging in RTS with paged
MMU
• associative map = part of the context
• larger than 2K pages indicated
• a lot of internal fragmentation
• difficult to use
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 188
Take Care
• Object oriented languages are difficult to
use due to the dynamic behavior of the
memory scheme.
• Forget C++ and Java for HARD real-time
systems (for time being?) !!!!!!
Just stick to C.
• If you use a non-RT OS with virtual
memory management for (soft) RT
applications, don’t forget to LOCK the
(soft) RT tasks in memory.
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 189
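On a POSIX system with virtual memory the locking advice translates to mlockall(); the fragment below is a minimal sketch of that step for a soft RT process.

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* lock all current and future pages of this process in RAM */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");     /* typically needs the right privileges */
        return 1;
    }
    /* ... soft real-time work, no paging of already-locked memory ... */
    return 0;
}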
Memory Leaks
• Some OS do not release all memory.
• Garbage is building up – you need to
restart the system in order to “clean
memory”
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 190
Good RTOS – REQ:
known memory footprint
The memory footprint should be known
for different configurations.
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 191
CE Memory Footprint
• Minimum:
– low end system
• kernel + communications + stacks - no display +
application
– 500 KB ROM - 350 KB RAM
• Typical footprint
– Handheld PC
– 2 MB ROM - 512 KB RAM
Dedicated Systems Experts – 2005 – Martin TIMMERMAN
p. 192
End of
RTOS Memory management
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 193
Module 2 Conclusions
Dedicated Systems Experts 2005 - Martin TIMMERMAN
p. 194