Memory Management
Operating Systems
Fall 2002
OS Fall’02
Background
- A compiler creates executable code
- The executable code must be brought into main memory to run
- The operating system maps the executable code onto main memory
- The mapping depends on the hardware
- The hardware accesses memory locations in the course of execution
Memory management
- Principal operation: bringing programs into memory for execution by the processor
- Requirements and issues:
  - relocation
  - protection
  - sharing
  - logical and physical organization
Relocation
- Compiler-generated addresses are relative
- A process can be loaded at different memory locations and swapped in/out
- The actual physical mapping is not known at compile time
- Simple if processes are allocated contiguous memory ranges
- Complicated if paging is used
Protection
- Processes must be unable to reference the addresses of other processes or those of the operating system
- Cannot be enforced at compile time (because of relocation)
- Each memory access must be checked for validity
- Enforced by hardware
Sharing
- Protection must be flexible enough to allow controlled sharing of the same portion of main memory
- E.g., all running instances of a text editor can share the same code
Logical organization
- Main memory is organized as a linear, one-dimensional address space
- Programs are typically organized into modules
  - modules can be written/compiled independently, have different degrees of protection, and be shared
- A mechanism is desirable for supporting the logical structure of user programs
Physical organization
- Two-level hierarchical storage:
  - main memory: fast, expensive, scarce, volatile
  - secondary storage: slow, cheap, abundant, persistent
- Memory management involves bidirectional information flow between the two levels
Memory management techniques
- Partitioning (fixed, dynamic)
  - now obsolete, but useful to demonstrate basic principles
- Paging and segmentation
- Paging and segmentation with virtual memory
Fixed partitioning
- Main memory is divided into a number of static partitions
  - either all partitions of the same size, or a collection of different sizes
- A process can be loaded into a partition of equal or greater size
- A process cannot be scattered among multiple partitions
Fixed partitioning
[Diagram: two fixed-partitioning schemes, each following an 8M operating system region: equal-size partitions (several 8M partitions) vs. unequal-size partitions (2M, 4M, 6M, 8M, ..., 12M)]
- Internal fragmentation: wasted space is internal to an allocated region
- Executing big programs requires explicit memory management by the programmer (overlays)
Placement algorithms
[Diagram: new processes being assigned to fixed partitions, shown for two queueing schemes]
Replacement algorithms
- If all partitions are occupied, some process must be swapped out
- Select a process that:
  - occupies the smallest partition that will hold the incoming process
  - is blocked
Dynamic partitioning
- Partitions are created dynamically
- Each process is loaded into a partition of exactly the same size as that process
Dynamic partitioning example
[Diagram: memory with a 128K operating system region and 896K initially free; Process 1 (320K) and Process 2 (224K) are loaded in turn, leaving free regions of 576K and then 352K]
Dynamic partitioning example (cont.)
[Diagram: Process 3 (288K) is loaded into the 352K hole, leaving 64K; Process 2 (224K) is swapped out and Process 4 (128K) is loaded into its place, leaving a 96K hole]
Dynamic partitioning example (cont.)
[Diagram: Process 1 (320K) is swapped out and Process 2 (224K) is swapped back in; memory now holds Process 2, Process 4 (128K), and Process 3 (288K), with free fragments of 96K, 96K, and 64K scattered between them]
External fragmentation
- The situation when the memory external to the allocated partitions becomes fragmented
- Can be reduced using compaction
  - compaction wastes processor time
Placement algorithms
- Free regions are organized into a linked list ordered by memory address
  - simplifies coalescing of free regions
  - increases insertion time
- First-fit: allocate the first region large enough to hold the process
  - optimization: use a roving pointer (next-fit)
- Best-fit: allocate the smallest region that is large enough to hold the process
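The first-fit and best-fit policies can be sketched over a simple address-ordered free list. The (start, size) representation and the sample regions below are illustrative assumptions:

```python
# First-fit and best-fit over an address-ordered free list.
# Regions are (start, size) pairs; the sizes are made up for illustration.

def first_fit(free_list, request):
    """Return the start of the first region large enough, or None."""
    for start, size in free_list:
        if size >= request:
            return start
    return None

def best_fit(free_list, request):
    """Return the start of the smallest region large enough, or None."""
    candidates = [(size, start) for start, size in free_list if size >= request]
    return min(candidates)[1] if candidates else None

free_list = [(0, 8), (16, 22), (50, 12)]  # ordered by memory address
print(first_fit(free_list, 10))  # 16 (first region of at least 10)
print(best_fit(free_list, 10))   # 50 (smallest region of at least 10)
```

Next-fit differs from first-fit only in starting the scan from a roving pointer left at the previous allocation rather than from the head of the list.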
An example
[Diagram: memory before and after allocating a 16K block; first-fit splits the first sufficiently large free block (22K, leaving 6K), best-fit takes the 18K block (leaving 2K), and next-fit splits the 36K block following the last allocated block (leaving 20K)]
Placement algorithms discussion
- First-fit:
  - efficient and simple
  - tends to create more fragmentation near the beginning of the list
    - mitigated by next-fit
- Best-fit:
  - less efficient (might search the whole list)
  - tends to create small unusable fragments
Buddy systems
- Approximate the best-fit principle
- Efficient search
  - first (best) fit has linear complexity
- Simplified coalescing
- Several variants exist
- The binary buddy system allocates memory in blocks whose sizes are powers of 2
Binary buddy system
- Memory blocks are available in sizes 2^K, where L <= K <= U and:
  - 2^L = smallest size of block allocated
  - 2^U = largest size of block allocated (generally, the entire available memory)
- Initially, all available space is treated as a single block of size 2^U
Creating buddies
- A request to allocate a block of size s:
  - if 2^(U-1) < s <= 2^U, allocate the entire block
  - otherwise, split the block into two equal buddies of size 2^(U-1)
  - if 2^(U-2) < s <= 2^(U-1), allocate the request to one of the buddies; otherwise, split further
  - proceed until the smallest sufficient block is allocated
Maintaining buddies
- Existing non-allocated buddies (holes) are kept on several lists
  - holes of size 2^i are kept on the i-th list
  - a hole is removed from the (i+1)-th list by splitting it into two buddies and putting them onto the i-th list
  - whenever two adjacent buddies on the i-th list are freed, they are coalesced and moved to the (i+1)-th list
Finding a free buddy
get_hole(i):
    if (i == U+1) then failure;
    if (list i is empty) {
        get_hole(i+1);
        split hole into two buddies;
        put buddies on list i;
    }
    take first hole on list i;
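The get_hole pseudocode can be turned into a minimal runnable sketch. The list-of-lists representation and the concrete sizes (L = 0, U = 4, i.e. a 16-unit memory) are illustrative assumptions, not from the slides:

```python
# Minimal sketch of binary buddy allocation. Holes of size 2^i are kept
# on free_lists[i]; addresses are offsets from the start of memory.

def get_hole(i, free_lists, U):
    """Return a hole of size 2^i, splitting a larger hole if needed."""
    if i > U:
        raise MemoryError("no block large enough")
    if not free_lists[i]:
        addr = get_hole(i + 1, free_lists, U)  # take a hole of size 2^(i+1)...
        free_lists[i].append(addr)             # ...and split it into two
        free_lists[i].append(addr + 2 ** i)    # buddies of size 2^i
    return free_lists[i].pop(0)

U = 4
free_lists = [[] for _ in range(U + 1)]
free_lists[U] = [0]             # initially: one free block of size 2^U = 16
a = get_hole(2, free_lists, U)  # request a block of size 2^2 = 4
print(a)                        # 0
print(free_lists)               # [[], [], [4], [8], []]
```

Freeing reverses the process: when both buddies of size 2^i are free, they are coalesced into a single hole on list i+1.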
Remarks
- The buddy system is a reasonable compromise between fixed and dynamic partitioning
- Internal fragmentation is a problem
  - the expected waste is about 28%, which is high
- The buddy system is not used for memory management by operating systems nowadays
  - it is used for memory management by user-level libraries (malloc)
Relocation with partitioning
[Diagram: a process image (text, data, stack) in main memory; the relative address is added to the base register to produce the absolute address, and a comparator checks it against the bounds register; an out-of-bounds access raises an interrupt to the operating system]
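The base/bounds datapath can be sketched in a few lines; the register values below are illustrative assumptions:

```python
# Base/bounds relocation: every relative address is checked against the
# bounds register, then added to the base register. Values are made up.

BASE = 0x4000    # base register: where the partition starts
BOUNDS = 0x1000  # bounds register: length of the partition

def translate(relative_addr):
    """Comparator + adder: check the access, then relocate it."""
    if relative_addr >= BOUNDS:
        # in hardware, this raises an interrupt to the operating system
        raise MemoryError("address out of bounds")
    return BASE + relative_addr

print(hex(translate(0x0123)))  # 0x4123
```

Because the check and the addition happen on every access, both must be done in hardware; doing them in software would be far too slow.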
Paging
- The most efficient and flexible method of memory allocation
- Process memory is divided into fixed-size chunks, all of the same size, called pages
- Pages are mapped onto frames in main memory
- Internal fragmentation is possible with paging (but negligible)
Paging support
- Process pages can be scattered all over main memory
- A per-process page table maintains the mapping of the process's pages onto frames
- Relocation becomes complicated
- Hardware support is needed to translate the relative addresses within a program into memory addresses
Paging example
0
1
A.0
0
0
0
---
A.1
1
1
1
---
2
A.2
2
2
2
---
3
A.3
3
3
4
D.0
5
D.1
6
D.2
7
C.0
8
C.1
9
C.2
10
C.3
11
D.3
12
D.4
13
Process B
Process A
0
7
0
4
13
1
8
1
5
14
2
9
2
6
3
10
3
11
4
12
Process C
Process D
14
OS Fall’02
Free Frame List
Address translation
- Page (frame) size is a power of 2
- With page size 2^r, a logical address of l+r bits is interpreted as a tuple (l, r):
  - l = page number, r = offset within the page
- The page number is used as an index into the page table
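As a sketch of this split-and-lookup, assuming a 1K page size (r = 10) and a made-up page table:

```python
# Paged address translation: the high-order bits select the page-table
# entry, and the low-order r bits pass through unchanged as the offset.

R = 10                                  # page size = 2^10 = 1024 bytes
PAGE_TABLE = {0: 7, 1: 8, 2: 9, 3: 10}  # page number -> frame number (made up)

def translate(logical_addr):
    page = logical_addr >> R                # l high-order bits: page number
    offset = logical_addr & ((1 << R) - 1)  # r low-order bits: offset
    frame = PAGE_TABLE[page]                # index into the page table
    return (frame << R) | offset            # concatenate frame and offset

print(translate(2 * 1024 + 37))  # page 2 maps to frame 9: 9*1024 + 37 = 9253
```

Because the page size is a power of 2, translation needs only a shift, a mask, and a table lookup; no full-width addition is required.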
Hardware support
[Diagram: a register holds the page table pointer; the page number from the logical address indexes the page table to obtain a frame number, which is combined with the offset to address the page frame in main memory]
Segmentation
- A program can be divided into segments
- Segments are of variable size
- Segments reflect the logical (modular) structure of a program
  - text segment, data segment, stack segment, ...
- Similar to dynamic partitioning, except that the segments of one process can be scattered
  - subject to external fragmentation
Address translation
- The maximum segment size is always a power of 2
- The process's segment table maps segment numbers to their base addresses in memory
- With a maximum segment size of 2^r, a logical address of l+r bits is interpreted as a pair (l, r):
  - l = segment number, r = offset within the segment
  - l is used as an index into the segment table
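A sketch of segmented translation, assuming a 4K maximum segment size (r = 12) and a made-up segment table:

```python
# Segmented address translation: the segment number selects a (length,
# base) entry; the offset is length-checked, then added to the base.

R = 12                               # maximum segment size = 2^12
SEGMENT_TABLE = {0: (1000, 0x4000),  # segment number -> (length, base),
                 1: (500, 0x8000)}   # values made up for illustration

def translate(logical_addr):
    seg = logical_addr >> R                 # l high-order bits: segment number
    offset = logical_addr & ((1 << R) - 1)  # r low-order bits: offset
    length, base = SEGMENT_TABLE[seg]
    if offset >= length:
        raise MemoryError("offset beyond segment length")
    return base + offset                    # a full addition, unlike paging

print(hex(translate((1 << R) + 100)))  # segment 1, offset 100 -> 0x8064
```

Note the two differences from paging: the offset must be compared against the segment's actual length, and the base is a full address, so an addition replaces simple concatenation.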
Hardware support
[Diagram: a register holds the segment table pointer; the segment number from the virtual address indexes the segment table to obtain the segment's length and base; the offset d is checked against the length and added to the base to address the segment in main memory]
Remarks
- The price of paging/segmentation is sophisticated hardware
- But the advantages far exceed the cost
- Paging decouples address translation from memory allocation
- Not all logical addresses are necessarily mapped into physical memory at every given moment: virtual memory
- Paging and segmentation are often combined to benefit from both worlds
Next: Virtual memory