Transcript Lecture 5

Lecture 5
Memory Management
Part I
Lecture Highlights
• Introduction to Memory Management
  • What is memory management
  • Related Problems of Redundancy, Fragmentation and Synchronization
• Memory Placement Algorithms
• Continuous Memory Allocation Scheme
  • Parameters Involved
  • Parameter-Performance Relationships
  • Some Sample Results
Introduction
What is memory management
• Memory management primarily deals with space multiplexing.
• All the processes need to be scheduled in such a way that all the users
  get the illusion that their processes reside in RAM.
• The job of the memory manager is:
  • to keep track of which parts of memory are in use and which parts are
    not in use
  • to allocate memory to processes when they need it and deallocate it
    when they are done
  • to manage swapping between main memory and disk when main memory is
    not big enough to hold all the processes (a minimal sketch of this
    bookkeeping follows below).
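Here is a minimal sketch, in Python, of that bookkeeping: a fixed-size memory, a list of free holes, and allocate/release operations. This is only an illustration, not the design required later in the assignment; all names and numbers are made up.

```python
# A minimal sketch of memory-manager bookkeeping: track free holes and
# allocations over a fixed-size memory. Sizes/addresses are in MB here.
class SimpleMemoryManager:
    def __init__(self, size):
        self.size = size
        self.free = [(0, size)]   # list of (start, size) holes; one big hole initially
        self.used = {}            # pid -> (start, size)

    def allocate(self, pid, req):
        # First-fit search over the free list.
        for i, (start, hole) in enumerate(self.free):
            if hole >= req:
                self.used[pid] = (start, req)
                leftover = hole - req
                if leftover:
                    self.free[i] = (start + req, leftover)
                else:
                    del self.free[i]
                return True
        return False              # no hole fits; a real manager would swap something out

    def release(self, pid):
        start, size = self.used.pop(pid)
        self.free.append((start, size))
        self.free.sort()          # keep holes ordered; real code would also coalesce them

if __name__ == "__main__":
    mm = SimpleMemoryManager(16)              # 16 MB, as in the example that follows
    print(mm.allocate("P1", 2), mm.allocate("P2", 6))
    mm.release("P1")
    print(mm.free)                            # two holes remain
```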
What is memory management
Visual Representation
Figure: main memory is divided between the operating system and user space;
processes are swapped between user space and the hard disc (e.g. process P1
is swapped out while process P2 is swapped in).
Memory Management
An Example
• This example illustrates the basic concept of memory management. We
  consider a mickey mouse system where:
  • Memory size: 16 MB
  • Transfer rate: 2 MB/ms
  • RR time quantum: 2 ms
• We'll use the process mix on the next slide and follow the RAM
  configuration before and after each time slot, as well as the action
  taking place during the time slot, for five time slots.
Memory Management
An Example – The Process Mix
Process ID | Execution Time (ms) | Size (MB) | Transfer Time Needed (ms)
-----------|---------------------|-----------|--------------------------
P1         | 4                   | 2         | 1
P2         | 2                   | 6         | 3
P3         | 6                   | 4         | 2
P4         | 8                   | 4         | 2
P5         | 2                   | 2         | 1
P6         | 10                  | 4         | 2
P7         | 2                   | 2         | 1
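The transfer-time column follows directly from the 2 MB/ms transfer rate given earlier; a quick check of the arithmetic:

```python
# Reproducing the "transfer time needed" column: size (MB) / transfer rate (MB/ms).
transfer_rate = 2  # MB per ms, from the system parameters above
process_mix = {"P1": 2, "P2": 6, "P3": 4, "P4": 4, "P5": 2, "P6": 4, "P7": 2}  # sizes in MB
for pid, size in process_mix.items():
    print(pid, size / transfer_rate, "ms")
```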
Memory Management
An Example – Time Slot 1
Before (RAM configuration): P1 (4ms), P2 (2ms), P3 (6ms), P4 (8ms)
During Time Slot 1:
• P1 executes
After (RAM configuration): P1 (2ms), P2 (2ms), P3 (6ms), P4 (8ms)
Memory Management
An Example – Time Slot 2
Before (RAM configuration): P1 (2ms), P2 (2ms), P3 (6ms), P4 (8ms)
During Time Slot 2:
• P1 spooled out in 1ms
• P5 spooled in in 1ms
• P2 executes
• P2 done
After (RAM configuration): P5 (2ms), P2 (0ms), P3 (6ms), P4 (8ms)
Memory Management
An Example – Time Slot 3
Before (RAM configuration): P5 (2ms), P2 (0ms), P3 (6ms), P4 (8ms)
During Time Slot 3:
• P2 spooled out in 2ms
• P3 executes
After (RAM configuration): P5 (2ms), P2 (0ms), P3 (4ms), P4 (8ms)
Memory Management
An Example – Time Slot 4
Before (RAM configuration): P5 (2ms), P2 (0ms), P3 (4ms), P4 (8ms)
During Time Slot 4:
• P2 spooled out in 1ms
• P6 spooled in in 1ms
• P4 executes
After (RAM configuration): P5 (2ms), P6 (10ms), 2MB hole, P3 (4ms), P4 (6ms)
Memory Management
An Example – Time Slot 5
Before (RAM configuration): P5 (2ms), P6 (10ms), 2MB hole, P3 (4ms), P4 (6ms)
During Time Slot 5:
• P6 spooled in in 1ms
• P7 spooled in in 1ms
• P5 executes
• P5 done
After (RAM configuration): P5 (0ms), P6 (10ms), P7 (2ms), P3 (4ms), P4 (6ms)
Memory Management
An Example
• The previous slides gave a stepwise walk-through of the mickey mouse
  system for the first five time slots.
• Try to complete the walk-through from this point on.
Related Problems
Synchronization problem in spooling
• Spooling enables the transfer of a process while another process is in
  execution. It aims at preventing the CPU from being idle, thus making
  CPU utilization more efficient.
• The processes being transferred to main memory can be of different
  sizes. When a very big process is transferred, it is possible that the
  transfer time exceeds the combined execution time of the processes in
  RAM. This leaves the CPU idle, which is exactly the problem spooling was
  invented to avoid.
• The above problem is termed the synchronization problem. It arises
  because the variance in process sizes does not guarantee that transfers
  and execution stay synchronized.
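A small worked example of the problem, with purely illustrative numbers (not taken from the lecture's process mix):

```python
# Illustrative numbers only: a 12 MB process spooled in at 2 MB/ms takes 6 ms.
# If the processes already in RAM have only 4 ms of execution left between
# them, the CPU sits idle for the remaining 2 ms: the synchronization problem.
transfer_rate = 2                 # MB per ms
incoming_size = 12                # MB (hypothetical large process)
remaining_exec_in_ram = 4         # ms (hypothetical)
transfer_time = incoming_size / transfer_rate
idle_time = max(0, transfer_time - remaining_exec_in_ram)
print(f"transfer: {transfer_time} ms, CPU idle: {idle_time} ms")
```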
Related Problems
Redundancy Problem
• Usually the combined size of all processes is much bigger than the RAM
  size, and for this reason processes are swapped in and out continuously.
• One issue arising from this is: what is the use of transferring the
  entire process when only part of its code is executed in a given time
  slot?
• This problem is termed the redundancy problem.
Related Problems
Fragmentation
• Fragmentation is encountered when free memory space is broken into
  little pieces as processes are loaded into and removed from memory.
• Fragmentation is of two types:
  • External fragmentation
  • Internal fragmentation
• In the present context, we are concerned with external fragmentation and
  shall explore it in greater detail in the following slides.
Generation of Holes In A System
An Example
Figure: P5 of size 500K cannot be allocated in part (c)
(a) The OS occupies 0-400K, P1 occupies 400K-1000K, P2 occupies
    1000K-2000K, P3 occupies 2000K-2300K; 2300K-2560K is free.
(b) P2 terminates, leaving a 1000K hole between 1000K and 2000K.
(c) P4 is allocated at 1000K-1700K, leaving holes at 1700K-2000K (300K)
    and 2300K-2560K (260K).
Generation of Holes In A System
An Example
• In the previous visual presentation, we see that initially P1, P2, and
  P3 are in RAM, and the remaining 260K is not enough for P4 (700K).
  (part a)
• When P2 terminates, it is spooled out, leaving behind a hole of size
  1000K. So now we have two holes of sizes 1000K and 260K respectively.
  (part b)
• At this point, we have a hole big enough to spool in P4, which leaves us
  with two holes of sizes 300K and 260K. (part c)
Thus, we see that holes are generated because the size of the spooled-out
process is not the same as the size of the process waiting to be spooled
in.
Related Problems
Fragmentation – External Fragmentation
• External fragmentation exists when enough total memory space exists to
  satisfy a request, but it is not contiguous; storage is fragmented into
  a large number of small holes.
• Referring to the figure of the scheduling example on the next slide, two
  such cases can be observed.
Related Problems
Fragmentation – External Fragmentation
Figure: P5 of size 500K cannot be allocated due to external fragmentation
(the same memory maps, parts (a)-(c), as in the "Generation of Holes"
example above).
Related Problems
Fragmentation – External Fragmentation
From the figure on the last slide, we see:
• In part (a), there is a total external fragmentation of 260K, a space
  that is too small to satisfy the requests of either of the two remaining
  processes, P4 and P5.
• In part (c), however, there is a total external fragmentation of 560K.
  This space would be large enough to run process P5, except that this
  free memory is not contiguous. It is fragmented into two pieces, neither
  one of which is large enough, by itself, to satisfy the memory request
  of process P5.
Related Problems
Fragmentation – External Fragmentation
This fragmentation problem can be severe.
In the worst case, there could be a block of
free (wasted) memory between every two
processes. If all this memory were in one
big free block, a few more processes could
be run. Depending on the total amount of
memory storage and the average process
size, external fragmentation may be either a
minor or major problem.
Related Problems
Fragmentation – External Fragmentation
• One solution to the problem of external fragmentation is compaction.
• The goal is to shuffle the memory contents to place all free memory
  together in one large block.
• The simplest compaction algorithm is to move all processes toward one
  end of memory and all holes toward the other, producing one large hole
  of available memory.
• This scheme can be quite expensive.
• The figure on the following slide shows different ways to compact
  memory.
• Selecting an optimal compaction strategy is quite difficult.
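A minimal sketch of the simplest compaction algorithm mentioned above, sliding every process toward the low end of memory. The layout used in the example call is illustrative and only loosely based on the figure on the next slide.

```python
def compact(processes, memory_size, base=0):
    """processes: dict pid -> (start, size). Slide every process toward `base`
    in address order, leaving a single hole at the top of memory."""
    next_free = base
    new_layout = {}
    for pid, (start, size) in sorted(processes.items(), key=lambda kv: kv[1][0]):
        new_layout[pid] = (next_free, size)   # the process would be copied down to next_free
        next_free += size
    hole = (next_free, memory_size - next_free)
    return new_layout, hole

# Illustrative layout (addresses in KB); the OS is assumed to occupy 0-300K.
layout = {"P1": (300, 200), "P2": (500, 100), "P3": (1000, 200), "P4": (1500, 400)}
print(compact(layout, 2100, base=300))        # P3 and P4 slide down; one 900K hole remains
```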
Related Problems
Fragmentation – External Fragmentation
Different Ways To Compact Memory
Figure: the same memory image (the OS plus processes P1, P2, P3, and P4
with scattered holes) compacted in three different ways, moving 600K,
400K, or 200K of process data; each alternative ends with one large 900K
hole.
Related Problems
Fragmentation – External Fragmentation
• As mentioned earlier, compaction is an expensive scheme. The following
  example gives a more concrete idea of the cost.
• Given the following:
  • RAM size = 128 MB
  • Access speed of 1 byte of RAM = 10 ns
• Each byte will need to be accessed twice during compaction. Thus,
  • Compaction time = 2 x 10 x 10^-9 x 128 x 10^6 s
                    = 2560 x 10^-3 s = 2560 ms ≈ 3 s
  • Supposing we are using RR scheduling with a time quantum of 2 ms, the
    compaction time is equivalent to 1280 time slots.
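The same arithmetic, spelled out:

```python
# Reproducing the compaction-time estimate above: every byte is read once and
# written once (two accesses), at 10 ns per access, over 128 MB (taken as 128 x 10^6 bytes).
accesses_per_byte = 2
access_time_s = 10e-9
ram_bytes = 128e6
compaction_time_s = accesses_per_byte * access_time_s * ram_bytes
print(compaction_time_s, "s")                              # 2.56 s, roughly 3 s
print(compaction_time_s / 2e-3, "RR time slots of 2 ms")   # 1280.0
```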
Related Problems
Fragmentation – External Fragmentation
Compaction is usually governed by the following two thresholds:
• Memory hole size threshold: if the sizes of all the holes are at most a
  predefined hole size, then the main memory undergoes compaction. This
  predefined hole size is termed the hole size threshold.
  e.g. if we have two holes of size x and size y respectively, and the
  hole size threshold is 4 KB, then compaction is done provided x <= 4 KB
  and y <= 4 KB.
• Total hole percentage: the total hole percentage refers to the
  percentage of the total hole size over the memory size. Compaction is
  undertaken only if it exceeds the designated threshold.
  e.g. taking the two holes of size x and size y respectively, with a
  total hole percentage threshold of 6%, then for a RAM size of 32 MB,
  compaction is done only if (x + y) >= 6% of 32 MB.
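A sketch of the combined test, assuming (as the assignment specification later in this lecture does) that compaction is triggered only when both conditions hold; the threshold values in the example call are illustrative:

```python
# A sketch of the two-threshold compaction test described above.
def should_compact(holes, memory_size, hole_size_threshold, total_hole_fraction):
    """holes: list of hole sizes; all sizes in the same unit as memory_size."""
    all_small = all(h <= hole_size_threshold for h in holes)
    big_enough_total = sum(holes) >= total_hole_fraction * memory_size
    return all_small and big_enough_total

# e.g. 45 holes of 50 KB in a 32 MB memory, with a 50 KB hole-size threshold
# and a 6% total-hole threshold (the assignment's values) -> True
print(should_compact([50] * 45, 32 * 1024, hole_size_threshold=50, total_hole_fraction=0.06))
```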
Related Problems
Fragmentation – External Fragmentation
• Another possible solution to the external fragmentation problem is to
  permit the physical address space of a process to be noncontiguous, thus
  allowing a process to be allocated physical memory wherever the latter
  is available. One way of implementing this solution is through the use
  of a paging scheme.
• Paging entails division of physical memory into many small equal-sized
  frames. Logical memory is also broken into blocks of the same size
  called pages. When a process is to be executed, its pages are loaded
  into any available memory frames. With a paging scheme, external
  fragmentation can be eliminated totally.
• Paging is discussed in detail in the next lecture.
Related Problems
Fragmentation – Internal Fragmentation
• Consider a hole of 18,464 bytes as shown in the figure. Suppose that the
  next process requests 18,462 bytes. If we allocate exactly the requested
  block, we are left with a hole of 2 bytes. The overhead to keep track of
  this hole will be substantially larger than the hole itself. The general
  approach is to allocate very small holes as part of the larger request.
Figure: internal fragmentation. Memory contains the operating system, P7,
and P43, with a hole of 18,464 bytes; the next request is for 18,462
bytes.
Related Problems
Fragmentation – Internal Fragmentation
• As illustrated in the previous slide, the allocated memory may be
  slightly larger than the requested memory. The difference between these
  two numbers is internal fragmentation: memory that is internal to a
  partition but is not being used.
• In other words, unused memory within allocated memory is called internal
  fragmentation.
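A sketch of the policy from the example: if an exact allocation would leave a sliver smaller than some minimum bookkeeping size (the 16-byte cutoff here is purely illustrative), the whole hole is granted and the difference becomes internal fragmentation.

```python
MIN_LEFTOVER = 16  # bytes; illustrative threshold, not specified in the lecture

def allocate_from_hole(hole_size, request):
    leftover = hole_size - request
    if 0 <= leftover < MIN_LEFTOVER:
        return hole_size, leftover   # grant the whole hole; leftover is internal fragmentation
    return request, 0                # exact allocation; leftover stays a tracked hole

granted, internal_frag = allocate_from_hole(18_464, 18_462)
print(granted, internal_frag)        # 18464 2 -> 2 bytes of internal fragmentation
```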
Memory Placement Algorithms
• As seen earlier, while swapping processes in and out of RAM, holes are
  created. In general, there is at any time a set of holes, of various
  sizes, scattered throughout memory.
• When a process arrives and needs memory, we search the set of holes for
  a hole that is best suited for the process.
• The following slide describes three algorithms that are used to select a
  free hole.
Memory Placement Algorithms
The three placement algorithms are:
• First-fit: allocate the first hole that is big enough.
• Best-fit: allocate the smallest hole that is big enough.
• Worst-fit: allocate the largest hole.
Simulations have shown that both first-fit and best-fit are better than
worst-fit in terms of decreasing both time and storage utilization.
Neither first-fit nor best-fit is clearly the best in terms of storage
utilization, but first-fit is usually faster.
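A minimal sketch of the three policies over a list of (start, size) holes; the hole list in the example is illustrative and chosen so that the three policies pick different holes.

```python
def first_fit(holes, request):
    # Return the index of the first hole large enough, or None.
    for i, (_, size) in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    # Smallest hole that still fits.
    candidates = [(size, i) for i, (_, size) in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    # Largest hole.
    candidates = [(size, i) for i, (_, size) in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None

# Illustrative hole list: (start, size) pairs.
holes = [(0, 800), (1000, 300), (1500, 1200)]
for policy in (first_fit, best_fit, worst_fit):
    print(policy.__name__, "->", policy(holes, 300))   # picks hole 0, 1, and 2 respectively
```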
Continuous Memory Allocation Scheme
• The continuous memory allocation scheme entails loading processes into
  memory in sequential order.
• When a process is removed from main memory, a new process is loaded if
  there is a hole big enough to hold it.
• This scheme is easy to implement; however, it suffers from the drawback
  of external fragmentation. Compaction, consequently, becomes an
  inevitable part of the scheme.
Continuous Memory Allocation Scheme
Parameters Involved
• Memory size
• RAM access time
• Disc access time
• Compaction thresholds
  • Memory hole-size threshold
  • Total hole percentage
• Memory placement algorithms
• Round robin time slot
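For a simulator, these parameters are naturally grouped into one configuration record; a hypothetical sketch, with defaults taken from the assignment specification that appears later in this lecture:

```python
from dataclasses import dataclass

# Hypothetical grouping of the parameters listed above; field names are
# illustrative, defaults follow the assignment specification given later.
@dataclass
class SimulationParameters:
    memory_size_mb: int = 32
    ram_access_time_s: float = 14e-9
    disc_base_access_ms: float = 1.0        # disc access = 1 ms + size/500000 ms
    hole_size_threshold_kb: int = 50
    total_hole_percentage: float = 0.06
    placement_algorithm: str = "first_fit"  # or "best_fit", "worst_fit"
    rr_time_slot_ms: int = 2
```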
Continuous Memory Allocation Scheme
Effect of Memory Size
• As anticipated, the greater the amount of memory available, the higher
  the system performance.
Continuous Memory Allocation Scheme
Effect of RAM and disc access times
• RAM access time and disc access time together define the transfer rate
  in a system.
• A higher transfer rate means less time is needed to move processes
  between main memory and secondary memory, thus increasing the efficiency
  of the operating system.
• Since compaction involves accessing the entire RAM twice, a lower RAM
  access time translates to lower compaction times.
Continuous Memory Allocation Scheme
Effect of Compaction Thresholds
• Optimal values of the hole size threshold largely depend on the sizes of
  the processes, since it is these processes that have to fit in the
  holes.
• Thresholds that lead to frequent compaction can bring performance down
  at an accelerating rate, since compaction is quite expensive in terms of
  time.
• Threshold values also play a key role in determining the state of
  fragmentation present.
• Their effect on system performance is not very straightforward and has
  seldom been the focus of studies in this field.
Continuous Memory Allocation Scheme
Effect of Memory Placement Algorithms
• Simulations have shown that both first-fit and best-fit are better than
  worst-fit in terms of decreasing both time and storage utilization.
• Neither first-fit nor best-fit is clearly the best in terms of storage
  utilization, but first-fit is generally faster.
Continuous Memory Allocation Scheme
Effect of Round Robin Time Slot
• As depicted in the figures on the next slide, the best choice for the
  time slot value corresponds to the transfer time of a typical process.
  For example, if most of the processes require 2 ms to be transferred,
  then a time slot of 2 ms would be ideal: while one process completes
  execution, another can be transferred.
• However, the transfer times of the processes in consideration seldom
  follow a normal or uniform distribution, because there are many
  different types of processes in a system. The variance depicted in the
  figure is too large in a real system, which makes the choice of time
  slot a difficult one.
Continuous Memory Allocation Scheme
Effect of Round Robin Time Slot
Figure: ideal vs. realistic process size distributions (# of processes vs.
process size). In the ideal graph most processes cluster around one size,
and the time slot is chosen to match the transfer time of that size; the
realistic graph shows a much larger spread.
Continuous Memory Allocation Scheme
• Average waiting time
• Average turnaround time
• CPU utilization
• CPU throughput
• Memory fragmentation percentage over time
  • This is a new performance measure and it quantifies compaction cost.
  • It is calculated as the percentage of time spent in compaction versus
    the total time.
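A sketch of how this last measure could be computed, following the description above (function and variable names are illustrative):

```python
# Fraction of total simulated time spent doing compaction, as a percentage.
def compaction_cost_percentage(compaction_durations_ms, total_time_ms):
    return 100.0 * sum(compaction_durations_ms) / total_time_ms

# e.g. two compactions of 2560 ms each over a 100 s run -> 5.12%
print(compaction_cost_percentage([2560, 2560], 100_000))
```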
Continuous Memory Allocation
Implementation
• As part of Assignment 3, you'll implement a memory manager system within
  an operating system satisfying the given requirements. (For complete
  details refer to Assignment 3.)
• We'll see a brief explanation of the assignment in the following slides.
Continuous Memory Allocation
Implementation Details
Following are some specifications of the memory manager system you'll
implement:
• A continuous memory allocation scheme is used.
• The PCBs are to be executed based on a round robin mechanism.
• The main memory size is 32 MB.
• The job sizes vary between 20 KB and 2 MB (uniform random distribution,
  multiples of 20 KB).
• The disc capacity is 500 MB, initially 50% full with jobs.
Continuous Memory Allocation
Implementation Details
• Use first fit, best fit, and worst fit techniques (the choice should be
  a variable).
• Do compaction when fragmentation is more than 6% and all holes are 50 KB
  or less (assume memory access time = 14 x 10^-9 seconds).
• Use a varying time slot (a variable parameter, multiple of 1 ms).
• Disc access time = 1 ms + (job size (in bytes) / 500000) ms.
• Job execution time ranges between 2 ms and 10 ms (multiple of 1 ms).
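A sketch of these job parameters in code, following the stated distributions and the disc access time formula (helper names are illustrative):

```python
import random

def disc_access_time_ms(job_size_bytes):
    # Disc access time = 1 ms + (job size in bytes / 500000) ms
    return 1 + job_size_bytes / 500_000

def random_job():
    size_kb = random.randrange(20, 2 * 1024 + 1, 20)   # 20 KB to 2 MB, multiples of 20 KB
    exec_ms = random.randrange(2, 11)                   # 2 to 10 ms, multiples of 1 ms
    return size_kb, exec_ms

size_kb, exec_ms = random_job()
print(size_kb, "KB,", exec_ms, "ms, disc access:", disc_access_time_ms(size_kb * 1024), "ms")
```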
Continuous Memory Allocation
Implementation Details
Once you're done with the implementation, think of the problem from an
algorithmic design point of view. The implementation involves many
parameters such as:
• Memory size
• Disc access time
• Time slot for RR
• Compaction thresholds
• RAM access time
• Fitting algorithm
Continuous Memory Allocation
Implementation Details
• The eventual goal would be to optimize the several performance measures
  listed earlier.
• Perform several test runs and write a summary indicating how sensitive
  some of the performance measures are to some of the above parameters.
Continuous Memory Allocation
Sample Screenshots of Simulation
Setting variable parameters
Continuous Memory Allocation
Sample Screenshots of Simulation
Initial Hard Disc Configuration
Continuous Memory Allocation
Sample Screenshots of Simulation
Initial RAM Configuration
Continuous Memory Allocation
Sample Screenshots of Simulation
Memory Manager In Execution
Continuous Memory Allocation
Sample Screenshots of Simulation
Compaction Scenario
Continuous Memory Allocation
Sample Screenshots of Simulation
Final Performance Measures For The Run
Continuous Memory Allocation
Sample tabulated data from simulation
TABLE: Round Robin Time Quantum vs. Performance Measures

Time Slot | Avg Waiting Time | Avg Turnaround Time | CPU Utilization | Throughput Measure | Memory Fragmentation %
    2     |        3         |          4          |       5%        |         5          |          29%
    3     |        4         |          4          |       2%        |         8          |          74%
    4     |        5         |          6          |       3%        |        12          |          74%
    5     |       12         |         12          |       1%        |        17          |          90%
Continuous Memory Allocation
Sample tabulated data from simulation
TABLE: Memory Placement Algorithms vs. Performance Measures
(values given as First fit / Best fit / Worst fit)

RR Time Slot | Avg Turnaround Time | Avg Waiting Time | CPU Utilization | Throughput   | Fragmentation %
      2      |      4 / 3 / 3      |     3 / 2 / 2    |  1% / 1% / 1%   |  5 / 5 / 5   | 82% / 74% / 74%
      3      |      4 / 4 / 4      |     4 / 4 / 4    |  2% / 2% / 2%   |  8 / 8 / 8   | 74% / 74% / 74%
      4      |      6 / 6 / 6      |     5 / 6 / 6    |  3% / 2% / 2%   | 12 / 11 / 11 | 74% / 74% / 74%
      5      |     12 / 6 / 6      |    12 / 5 / 5    |  1% / 2% / 2%   | 17 / 14 / 14 | 90% / 79% / 79%
Continuous Memory Allocation
Sample Graph (using data from simulation)
Graph: Effect of Round Robin Time Quantum over Performance Measures.
Average waiting time, average turnaround time, CPU utilization,
throughput, and memory fragmentation percentage plotted against time
slots 2-5.
Continuous Memory Allocation
Sample Graph (comparing memory algorithms)
Graph: Comparing Memory Placement Algorithms: Average Turnaround Time.
Average turnaround time vs. round robin time slot (2-5) for first-fit,
best-fit, and worst-fit.
Continuous Memory Allocation
Sample Graph (comparing memory algorithms)
Graph: Comparing Memory Placement Algorithms: Average Waiting Time.
Average waiting time vs. round robin time slot (2-5) for first-fit,
best-fit, and worst-fit.
Continuous Memory Allocation
Sample Graph (comparing memory algorithms)
Graph: Comparing Memory Placement Algorithms: CPU Utilization.
CPU utilization vs. round robin time slot (2-5) for first-fit, best-fit,
and worst-fit.
Continuous Memory Allocation
Sample Graph (comparing memory algorithms)
Graph: Comparing Memory Placement Algorithms: Throughput.
Throughput vs. round robin time slot (2-5) for first-fit, best-fit, and
worst-fit.
Continuous Memory Allocation
Sample Graph (comparing memory algorithms)
Graph: Comparing Memory Placement Algorithms: % Fragmentation.
Fragmentation percentage vs. round robin time slot (2-5) for first-fit,
best-fit, and worst-fit.
Continuous Memory Allocation
Fragmentation percentage over time
Graph: Fragmentation percentage over time. Fragmentation percentage
(0-15%) plotted against successive time windows for time slots 2, 3, 4,
and 5.
Continuous Memory Allocation
Conclusions from the sample simulation
• The following conclusions emerged from the study of the parameters:
  • There is an optimal value of the round robin quantum.
  • None of the memory placement algorithms could be termed optimal.
  • Studying the fragmentation percentage over time gave us the probable
    time windows where compaction was undertaken.
Lecture Summary
• Introduction to Memory Management
  • What is memory management
  • Related Problems of Redundancy, Fragmentation and Synchronization
• Memory Placement Algorithms
• Continuous Memory Allocation Scheme
  • Parameters Involved
  • Parameter-Performance Relationships
  • Some Sample Results
Preview of next lecture
The following topics shall be covered in the next lecture:
• Introduction to Paging
  • Paging Hardware & Page Tables
  • Paging Model of Memory
  • Page Size
  • Paging versus Continuous Allocation Scheme
  • Multilevel Paging
  • Page Replacement & Page Anticipation Algorithms
  • Parameters Involved
  • Parameter-Performance Relationships
  • Sample Results