OS Lecture 10-13: Paging and Segmentation


Memory Management
G.Anuradha
Outline

- Background
- Logical versus Physical Address Space
- Swapping
- Contiguous Allocation
- Paging
- Segmentation
- Segmentation with Paging
Background

- A program must be brought into memory and placed within a process for it to be executed.
- Input queue: the collection of processes on disk that are waiting to be brought into memory for execution.
- User programs go through several steps before being executed.
Virtualizing Resources

- Physical reality: processes and threads share the same hardware.
  - Need to multiplex the CPU (CPU scheduling)
  - Need to multiplex the use of memory (today's topic)
- Why worry about memory multiplexing?
  - The complete working state of a process and/or the kernel is defined by its data in memory (and registers).
  - Consequently, we cannot just let different processes use the same memory.
  - We probably don't want different processes to even have access to each other's memory (protection).
Important Aspects of Memory Multiplexing

- Controlled overlap:
  - Processes should not collide in physical memory.
  - Conversely, we would like the ability to share memory when desired (for communication).
- Protection:
  - Prevent access to the private memory of other processes.
  - Different pages of memory can be given special behavior (read-only, invisible to user programs, etc.).
  - Kernel data is protected from user programs.
- Translation:
  - The ability to translate accesses from one address space (virtual) to a different one (physical).
  - When translation exists, the process uses virtual addresses and physical memory uses physical addresses.
Names and Binding

Symbolic names → Logical names → Physical names

- Symbolic names: known in a context or path
  - file names, program names, printer/device names, user names
- Logical names: used to label a specific entity
  - inodes, job numbers, major/minor device numbers, process id (pid), uid, gid, ...
- Physical names: the address of the entity
  - inode address on disk or in memory
  - entry point or variable address
  - PCB address
Binding of Instructions and Data to Memory

Address binding of instructions and data to memory addresses can happen at three different stages:

- Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
- Load time: relocatable code must be generated if the memory location is not known at compile time.
- Execution time: binding is delayed until runtime if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).
Binding Time Tradeoffs

- Early binding (compiler)
  - produces efficient code
  - allows checking to be done early
  - allows estimates of running time and space
- Delayed binding (linker, loader)
  - produces efficient code, allows separate compilation
  - portability and sharing of object code
- Late binding (VM, dynamic linking/loading, overlaying, interpreting)
  - code is less efficient, checks are done at runtime
  - flexible, allows dynamic reconfiguration
Multi-step Processing of a Program for Execution

- Preparation of a program for execution involves components at:
  - compile time (i.e., "gcc")
  - link/load time (the Unix "ld" does linking)
  - execution time (e.g., dynamic libraries)
- Addresses can be bound to final values anywhere in this path.
  - Depends on hardware support
  - Also depends on the operating system
Dynamic Libraries

- Linking is postponed until execution.
- A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
- The stub replaces itself with the address of the routine and executes the routine.
Dynamic Loading

- A routine is not loaded until it is called.
- Better memory-space utilization; an unused routine is never loaded.
- Useful when large amounts of code are needed to handle infrequently occurring cases.
- No special support from the operating system is required; it is implemented through program design.
Dynamic Linking

- Linking is postponed until execution time.
- A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
- The stub replaces itself with the address of the routine and executes the routine.
- The operating system is needed to check whether the routine is in the process's memory address space.
Overlays

- Keep in memory only those instructions and data that are needed at any given time.
- Needed when a process is larger than the amount of memory allocated to it.
- Implemented by the user with no special support from the operating system; programming the design of the overlay structure is complex.
Overlaying
Memory Partitioning

- An early method of managing memory
  - Pre-virtual-memory; not used much now
  - But it will clarify the later discussion of virtual memory if we look first at partitioning
- Virtual memory has evolved from the partitioning methods.
Types of Partitioning

- Fixed partitioning
- Dynamic partitioning
- Simple paging
- Simple segmentation
- Virtual memory paging
- Virtual memory segmentation
Fixed Partitioning: Equal-size Partitions

- Any process whose size is less than or equal to the partition size can be loaded into an available partition.
- The operating system can swap a process out of a partition if none are in a ready or running state.
Fixed Partitioning Problems

- A program may not fit in a partition.
  - The programmer must design the program with overlays.
- Main memory use is inefficient.
  - Any program, no matter how small, occupies an entire partition.
  - This results in internal fragmentation.

Fragmentation generally happens when memory blocks have been allocated and are then freed randomly. This results in the splitting of a partitioned memory (on disk or in main memory) into smaller non-contiguous fragments.
Fixed Partitioning: Unequal-size Partitions

- Lessens both problems, but doesn't solve them completely.
- In the figure:
  - Programs up to 16M can be accommodated without overlays.
  - Smaller programs can be placed in smaller partitions, reducing internal fragmentation.
Placement Algorithm

- Equal-size partitions: placement is trivial (no options).
- Unequal-size partitions:
  - Each process can be assigned to the smallest partition within which it will fit.
  - A queue is kept for each partition.
  - Processes are assigned so as to minimize wasted memory within a partition.
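The "smallest partition that fits" rule can be sketched as follows; the partition sizes here are made-up values for illustration:

```python
# Sketch: with unequal fixed partitions, each arriving process is assigned
# to the queue of the smallest partition that can hold it.
import bisect

PARTITIONS_MB = [2, 4, 6, 8, 12, 16]  # hypothetical sizes, sorted ascending

def queue_for(process_mb):
    """Return the size of the smallest partition >= process_mb, or None."""
    i = bisect.bisect_left(PARTITIONS_MB, process_mb)
    return PARTITIONS_MB[i] if i < len(PARTITIONS_MB) else None

print(queue_for(5))   # -> 6 (smallest partition that fits a 5M process)
print(queue_for(17))  # -> None (no partition is big enough; needs overlays)
```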
Fixed Partitioning

Remaining Problems with Fixed Partitions

- The number of active processes is limited by the system, i.e., limited by the pre-determined number of partitions.
- A large number of very small processes will not use the space efficiently, in either the fixed-length or variable-length partition method.

Fixed partitioning is obsolete; it was used by early IBM mainframe operating systems (OS/MFT).
Dynamic Partitioning

- Partitions are of variable length and number.
- A process is allocated exactly as much memory as it requires.
Dynamic Partitioning Example

(Figure: an 8M OS area followed by processes P1 (20M), P2 (14M), P3 (18M), and P4 (8M) being loaded and swapped in and out of a 56M user area, leaving small empty holes of 4M-6M. Refer to Figure 7.4.)

External Fragmentation

- Memory external to all processes is fragmented.
- Can be resolved using compaction:
  - The OS moves processes so that they are contiguous.
  - This is time-consuming and wastes CPU time.
Dynamic Partitioning

- The operating system must decide which free block to allocate to a process.
- Best-fit algorithm:
  - Chooses the block that is closest in size to the request.
  - Worst performer overall.
  - Since the smallest adequate block is found for the process, the smallest amount of fragmentation is left.
  - Memory compaction must be done more often.
Dynamic Partitioning

- First-fit algorithm:
  - Scans memory from the beginning and chooses the first available block that is large enough.
  - Fastest.
  - May leave many processes loaded at the front end of memory that must be scanned over when trying to find a free block.
Dynamic Partitioning

- Next-fit algorithm:
  - Scans memory from the location of the last placement.
  - More often allocates a block of memory at the end of memory, where the largest block is found.
  - The largest block of memory is thus broken up into smaller blocks.
  - Compaction is required to obtain a large block at the end of memory.
Dynamic Partitioning

- Worst-fit algorithm:
  - Allocates the largest hole.
  - Produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
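Next-fit is the least obvious of the four policies; a minimal sketch with a roving pointer follows (variants differ on exactly where the next scan resumes, so this is one reasonable choice, not the definitive version):

```python
# Sketch of next-fit over a list of hole sizes: scanning resumes at the
# index of the previous placement instead of from the beginning.
def make_next_fit(holes):
    last = [0]  # roving start index, in a list so the closure can update it
    def alloc(request):
        n = len(holes)
        for k in range(n):
            i = (last[0] + k) % n  # wrap around the end of memory
            if holes[i] >= request:
                holes[i] -= request  # leftover stays as a smaller hole
                last[0] = i          # next search starts here
                return i
        return None  # no hole is large enough
    return alloc

holes = [100, 500, 200, 300, 600]
alloc = make_next_fit(holes)
print(alloc(212), alloc(417), alloc(112), alloc(426))  # -> 1 4 4 None
```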
Memory Allocation Policies

Example: parking space management. A scooter, a car, and a truck are looking for parking space, and they arrive in that order. Parking spaces are available for each of them as per their size: a truck space can accommodate a car and a scooter, or even two scooters. Similarly, two scooters can be parked in a car space.

(Figure: the parking area partitioned into truck, car, and scooter spaces; when a scooter, a car, and a truck arrive in order, space is allocated according to the worst-fit, best-fit, and next-fit policies.)
Memory Allocation Policies

Now take another theoretical example. Given partitions of 100K, 500K, 200K, 300K, and 600K as shown, the different algorithms place processes of 212K, 417K, 112K, and 426K in order.

- The request for 426K is rejected under the next-fit and worst-fit algorithms because no remaining free partition is at least 426K.

(Figure: next-fit placement of 212K, 417K, and 112K, with the 426K request rejected; the other two policies are shown similarly. Legend: 212K green, 417K blue, 112K pink, 426K yellow, external fragmentation gray, unused partitions white. Panels: next fit, best fit, worst fit.)
(Figure: an interactive demo in which the user selects worst fit, best fit, and next fit in turn; each panel shows the 100K-600K partitions filling up under the chosen policy for a user-entered 300K process.)
Dynamic Memory Partitioning with User Interactivity

(Figure: an empty memory is filled with user-entered processes of 300K, 400K, 500K, and 800K. A new 450K process is then placed under each policy: with best fit it goes into the 500K hole, leaving external fragmentation; with worst fit it goes into the 800K hole, leaving 350K of external fragmentation; with next fit it goes into the 500K hole, again leaving external fragmentation.)
Memory Allocation Policies

- Which is the best placement algorithm with respect to fragmentation? The worst-fit algorithm, because it results in the least fragmentation.
- Which is the worst placement algorithm with respect to time complexity? Best-fit, because it scans the entire memory space, which takes more time.
Best-fit, First-fit, and Worst-fit Memory Allocation for Fixed Partitioning

List of jobs:

Job      Size   Turnaround
Job 1    100k   3
Job 2    10k    1
Job 3    35k    2
Job 4    15k    1
Job 5    23k    2
Job 6    6k     1
Job 7    25k    1
Job 8    55k    2
Job 9    88k    3
Job 10   100k   3

Memory blocks (jobs 1 to 5 are submitted and processed first):

Block     Size
Block 1   50k
Block 2   200k
Block 3   70k
Block 4   115k
Block 5   15k
BEST-FIT

Best-fit memory allocation makes the best use of memory space but is slower in making allocations. After the first cycle, jobs 2 and 4, located in blocks 5 and 3 respectively and both having a turnaround of one, are replaced by jobs 6 and 7, while jobs 1, 3, and 5 remain in their designated blocks. In the third cycle, job 1 remains in block 4, while jobs 8 and 9 replace jobs 7 and 5 respectively.
FIRST-FIT

First-fit memory allocation is faster in making allocations but leads to memory waste. It scans memory from the beginning and chooses the first available block that is large enough.

WORST-FIT

Worst-fit memory allocation is the opposite of best-fit: it allocates the largest free available block to the new job. It is not the best choice for an actual system.
Example

Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how would each of the first-fit, best-fit, and worst-fit algorithms place processes of 212K, 417K, 112K, and 426K (in order)? Which algorithm makes the most efficient use of memory?

First-fit:
- 212K is put in the 500K partition
- 417K is put in the 600K partition
- 112K is put in the 288K partition (new partition: 288K = 500K - 212K)
- 426K must wait

Best-fit:
- 212K is put in the 300K partition
- 417K is put in the 500K partition
- 112K is put in the 200K partition
- 426K is put in the 600K partition

Worst-fit:
- 212K is put in the 600K partition
- 417K is put in the 500K partition
- 112K is put in the 388K partition (388K = 600K - 212K)
- 426K must wait

In this example, best-fit turns out to be the best.
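The worked example above can be checked with a short simulation. This is a sketch: `place` returns, for each request, the size of the hole it was placed in, or `None` if the request must wait.

```python
# Sketch simulating first-fit, best-fit, and worst-fit placement with
# dynamic splitting of partitions into smaller holes.
def place(requests, partitions, strategy):
    holes = list(partitions)  # remaining hole sizes, in memory order
    placed = []
    for req in requests:
        fits = [i for i, h in enumerate(holes) if h >= req]
        if not fits:
            placed.append(None)  # request must wait
            continue
        if strategy == "first":
            i = fits[0]
        elif strategy == "best":
            i = min(fits, key=lambda j: holes[j])
        else:  # "worst"
            i = max(fits, key=lambda j: holes[j])
        placed.append(holes[i])  # record the hole size chosen
        holes[i] -= req          # leftover becomes a new, smaller hole
    return placed

reqs, parts = [212, 417, 112, 426], [100, 500, 200, 300, 600]
print(place(reqs, parts, "first"))  # -> [500, 600, 288, None]
print(place(reqs, parts, "best"))   # -> [300, 500, 200, 600]
print(place(reqs, parts, "worst"))  # -> [600, 500, 388, None]
```

The output matches the hand-traced placements above, including the 426K request waiting under first-fit and worst-fit.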
Comparison Between Algorithms

- Which algorithm is best depends on the exact sequence of process swappings that occur and the sizes of those processes.
- First-fit: best and fastest.
- Next-fit: slightly worse results than first-fit; a free block tends to be produced at the end of memory.
- Best-fit: worst performer.
Buddy System: a Compromise Between Fixed and Dynamic Partitioning

- The entire space available is treated as a single block of size 2^U.
- For a request of size s where 2^(U-1) < s <= 2^U, the entire block is allocated.
- Otherwise the block is split into two equal buddies.
  - The process continues until the smallest block greater than or equal to s is generated.
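The splitting rule can be sketched as below. This is a minimal illustration only: free-list bookkeeping on release and coalescing of buddies are omitted, and the total size is assumed to be a power of two.

```python
# Sketch of buddy-system allocation: a single block of size 2**U is split
# into equal halves until the smallest power-of-two block >= s is produced.
def next_pow2(n):
    p = 1
    while p < n:
        p *= 2
    return p

class BuddyAllocator:
    def __init__(self, total):          # total must be a power of two (2**U)
        self.free = {total: [0]}        # block size -> list of start addresses

    def alloc(self, s):
        size = next_pow2(s)
        # smallest free block size that can satisfy the request
        usable = [sz for sz, blocks in self.free.items() if sz >= size and blocks]
        if not usable:
            return None
        sz = min(usable)
        start = self.free[sz].pop(0)
        while sz > size:                # split into buddies until it fits
            sz //= 2
            self.free.setdefault(sz, []).append(start + sz)  # upper half freed
        return (start, sz)

buddy = BuddyAllocator(1024)
print(buddy.alloc(100))  # -> (0, 128): 1024 split into 512, 256, 128 buddies
print(buddy.alloc(200))  # -> (256, 256): the free 256 buddy is reused whole
```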
(Figures: an example of the buddy system and its tree representation.)
Replacement Algorithm

When all of the processes in main memory are in a blocked state and there is insufficient memory even after compaction, the OS swaps one of the processes out of main memory to accommodate a new process.
Swapping

- A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
- Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
- Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
- The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
- Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows).
- The system maintains a ready queue of ready-to-run processes which have memory images on disk.

Schematic View of Swapping
Swapping contd…

- Context switch time is fairly high.
- E.g., for a 100MB user process and a transfer rate of 50MB/sec (2 sec per transfer), if latency is 8 ms then the swap time to and fro is 4016 ms.
- Never swap a process with pending I/O.
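The swap-time arithmetic above can be checked directly:

```python
# Verifying the slide's numbers: a 100MB process, a 50MB/sec transfer
# rate, and 8ms of latency per transfer; a swap is one transfer out
# plus one transfer back in.
process_mb = 100
rate_mb_per_s = 50
latency_ms = 8

one_way_ms = process_mb / rate_mb_per_s * 1000 + latency_ms  # 2000 + 8 = 2008 ms
round_trip_ms = 2 * one_way_ms                               # out and back in
print(round_trip_ms)  # -> 4016.0
```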
Relocation

- When a program is loaded into memory, the actual (absolute) memory locations are determined.
- A process may occupy different partitions, which means different absolute memory locations during execution, due to:
  - swapping
  - compaction
Addresses

- Logical: a reference to a memory location independent of the current assignment of data to memory.
- Relative: an address expressed as a location relative to some known point.
- Physical or absolute: the actual location in main memory.
Relocation

Registers Used during Execution

- Base register: the starting address of the process.
- Bounds register: the ending location of the process.
- These values are set when the process is loaded or when the process is swapped in.
Registers Used during Execution

- The value of the base register is added to a relative address to produce an absolute address.
- The resulting address is compared with the value in the bounds register.
- If the address is not within bounds, an interrupt is generated to the operating system.
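The base/bounds check can be sketched directly; the register values here are hypothetical, and the out-of-bounds trap is modeled as a raised exception:

```python
# Sketch of base/bounds translation: add the base register to the relative
# address, then trap if the result exceeds the bounds register.
BASE = 30000    # hypothetical starting physical address of the process
BOUNDS = 42000  # hypothetical ending physical address of the process

def translate(relative_addr):
    physical = BASE + relative_addr
    if not (BASE <= physical <= BOUNDS):
        # out of bounds: generate an interrupt to the operating system
        raise MemoryError("addressing error trap")
    return physical

print(translate(100))  # -> 30100
```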