csc432-week15x
Rensselaer Polytechnic Institute
CSC 432 – Operating Systems
David Goldschmidt, Ph.D.
A noncontiguous memory allocation scheme
avoids the external fragmentation problem
Slice up physical memory into
fixed-sized blocks called frames
▪ Sizes typically range from 2^16 and up!
Slice up logical memory into
fixed-sized blocks called pages
Allocate pages into frames
▪ Note that frame size equals page size
When a process of size n pages is ready to
run, the operating system finds n free frames
The OS keeps
track of pages
via a page table
[Figure: pages of process Pi mapped into frames of main memory; frames marked "in use" or "free"]
Page tables map logical memory
addresses to physical memory
addresses
Covers a logical address space of size 2^m
with page size 2^n
A logical address is split into a page number p
(the high-order m – n bits) and a page offset d
(the low-order n bits)
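As a concrete sketch of this split (using illustrative sizes not given on the slides: a 32-bit logical address with 4 KB pages, so n = 12 and m – n = 20), the page number and offset can be extracted with a shift and a mask:

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative sizes (not from the slides): 32-bit logical address,
 * 4 KB pages, so n = 12 offset bits and m - n = 20 page-number bits. */
#define OFFSET_BITS 12
#define PAGE_SIZE   (1u << OFFSET_BITS)

int main(void) {
    uint32_t logical = 0x00403A7Cu;           /* arbitrary example address */
    uint32_t p = logical >> OFFSET_BITS;      /* page number: high (m - n) bits */
    uint32_t d = logical & (PAGE_SIZE - 1);   /* page offset: low n bits */
    printf("logical 0x%08X -> page %u, offset %u\n", logical, p, d);
    return 0;
}
```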
Use page table
caching at the
hardware level
to speed address
translation
Hardware-level
translation look-aside buffer
(TLB)
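A minimal sketch of the idea: before walking the page table, translation first checks a small cache of recent (page, frame) pairs. The tiny fully-associative TLB below, its round-robin replacement, and the page-table contents are all hypothetical, chosen only to show the hit/miss/refill flow:

```c
#include <stdio.h>

#define TLB_ENTRIES 4

/* Hypothetical page table and tiny fully-associative TLB. */
static int page_table[16] = { 7, 3, 9, 0, 5, 2, 8, 1, 6, 4, 11, 10, 13, 12, 15, 14 };
static int tlb_page[TLB_ENTRIES]  = { -1, -1, -1, -1 };
static int tlb_frame[TLB_ENTRIES];
static int tlb_next = 0;                       /* simple round-robin refill */

static int translate(int page) {
    for (int i = 0; i < TLB_ENTRIES; i++)      /* TLB hit: no page-table walk */
        if (tlb_page[i] == page)
            return tlb_frame[i];
    int frame = page_table[page];              /* TLB miss: consult page table */
    tlb_page[tlb_next] = page;                 /* refill the TLB */
    tlb_frame[tlb_next] = frame;
    tlb_next = (tlb_next + 1) % TLB_ENTRIES;
    return frame;
}

int main(void) {
    int refs[] = { 2, 2, 5, 2, 7, 5 };
    for (int i = 0; i < 6; i++)
        printf("page %d -> frame %d\n", refs[i], translate(refs[i]));
    return 0;
}
```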
Segmentation is a memory-management
scheme that divides memory according
to the logical segments of a
user program
A segment resides in a
contiguous block of memory
Segments are numbered
and referred to by a
<segment-number, offset> pair
Logical segments map to physical memory
A segment table maps a <segment-number,
offset> pair to a physical address
Each table entry has:
A segment base containing
the starting physical address
where the segment resides
A segment limit specifying
the length of the segment
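The lookup described above can be sketched as a bounds-checked table access; the base/limit values below are made up for illustration, and a real MMU would raise a trap rather than return -1:

```c
#include <stdio.h>

/* One entry per segment: base = starting physical address, limit = length. */
struct segment_entry { unsigned base; unsigned limit; };

/* Hypothetical segment table (values are illustrative only). */
static struct segment_entry seg_table[] = {
    { 1400, 1000 },   /* segment 0 */
    { 6300,  400 },   /* segment 1 */
    { 4300,  400 },   /* segment 2 */
};

/* Translate <segment-number, offset>; -1 stands in for a limit-violation trap. */
static long translate(unsigned s, unsigned offset) {
    if (offset >= seg_table[s].limit)
        return -1;                          /* offset beyond segment length */
    return (long)seg_table[s].base + offset;
}

int main(void) {
    printf("<2, 53>  -> %ld\n", translate(2, 53));    /* 4300 + 53 = 4353 */
    printf("<1, 500> -> %ld\n", translate(1, 500));   /* beyond limit: -1 */
    return 0;
}
```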
Only part of a process needs to be loaded
into memory for it to start its execution
Virtual memory further separates logical memory
and physical memory
Logical (or virtual) address space can be larger
than physical address space
Allows physical address space to be shared
by several processes
Enables quicker process creation
Unused pages are stored on disk
Multiple processes can share common
libraries or data by mapping virtual pages to
shared physical
pages
More efficient
use of physical
memory space
When a page of memory is requested and it’s
not in physical memory,
load the page from disk
i.e. on demand
Less I/O required
Less physical memory needed
Faster user response
times (usually)
More user processes supported
Virtual memory policies include:
The fetch policy governs when a page
should be loaded into memory
The placement policy specifies where a
page is loaded into physical memory
The replacement policy determines which
existing page in memory should be replaced
Page allocation:
In a static allocation scheme,
the number of frames per process is fixed
In a dynamic allocation scheme,
the number of frames per process varies
In an equal allocation scheme, all processes
have an equal number of frames
In a proportional allocation scheme, processes
are allocated frames in proportion to size,
priority, etc.
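As a small worked example of the proportional scheme (all numbers made up: 64 free frames shared by processes of 10 and 127 pages), each process receives roughly (s_i / S) × total frames, so here about 4 and 59 frames respectively:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers only: 64 free frames, two processes of
     * 10 and 127 pages.  Proportional share = (s_i / S) * total_frames. */
    int frames = 64;
    int sizes[] = { 10, 127 };
    int n = 2, total_size = 0;

    for (int i = 0; i < n; i++)
        total_size += sizes[i];

    for (int i = 0; i < n; i++) {
        int share = sizes[i] * frames / total_size;   /* integer truncation */
        printf("process %d (size %3d pages) gets about %d frames\n",
               i, sizes[i], share);
    }
    return 0;
}
```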
Associate a valid/invalid bit with
each page table entry (alongside the frame #)
Initially set all entries to i
During address translation,
if valid/invalid bit is v, page
is already in memory
Otherwise, if bit is i,
a page fault occurs
[Figure: page table with a frame # and a valid/invalid bit per entry; resident pages marked v, all other entries marked i]
Page faults are trapped by the OS:
When an invalid reference occurs in the page table
When a page is not yet in the page table
Page fault recovery:
Get free frame from physical memory
Swap desired page into free frame
Reset page table entry
Set validation bit to v
Restart instruction that caused the page fault
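A minimal sketch tying the valid/invalid bit to these recovery steps; get_free_frame and read_page_from_disk are hypothetical stand-ins for the real frame allocator and backing store, not an actual OS interface:

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_PAGES 8

struct pte { int frame; bool valid; };          /* one page-table entry */
static struct pte page_table[NUM_PAGES];        /* all entries start invalid (i) */
static int next_free_frame = 0;

/* Stand-ins for the real mechanisms (hypothetical helpers). */
static int  get_free_frame(void) { return next_free_frame++; }
static void read_page_from_disk(int p, int f) {
    printf("  swapping page %d into frame %d\n", p, f);
}

static int access_page(int p) {
    if (!page_table[p].valid) {                 /* bit is i: page fault */
        printf("page fault on page %d\n", p);
        int f = get_free_frame();               /* 1. get a free frame         */
        read_page_from_disk(p, f);              /* 2. swap the page in         */
        page_table[p].frame = f;                /* 3. reset page-table entry   */
        page_table[p].valid = true;             /* 4. set validation bit to v  */
    }                                           /* 5. rerun the access         */
    return page_table[p].frame;
}

int main(void) {
    int refs[] = { 0, 3, 0, 5, 3 };
    for (int i = 0; i < 5; i++)
        printf("page %d -> frame %d\n", refs[i], access_page(refs[i]));
    return 0;
}
```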
Page faults
are costly!
The page fault rate p is in the range [0.0, 1.0]:
If p is 0.0, no page faults occur
If p is 1.0, every page request is a page fault
Typically p is very low....
The effective memory-access time is
(1 – p) x physical-memory-access +
p x ( page-fault-overhead +
swap-page-out + swap-page-in +
restart-overhead )
Given:
Memory access time is 200 nanoseconds
Average page-fault service time is 8 milliseconds
The effective memory access time is
(1 – p) x 200ns + p x 8ms =
200ns – (200ns x p) + (8,000,000ns x p) =
200ns + (7,999,800ns x p)
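A quick check of the arithmetic, computed for a few sample fault rates (the values of p are chosen here for illustration): even one fault per thousand accesses pushes the effective access time from 200ns to roughly 8,200ns, a slowdown of about 40x.

```c
#include <stdio.h>

int main(void) {
    const double mem_ns   = 200.0;        /* memory access time           */
    const double fault_ns = 8000000.0;    /* 8 ms page-fault service time */
    double rates[] = { 0.0, 0.001, 0.0001 };   /* sample fault rates */

    for (int i = 0; i < 3; i++) {
        double p   = rates[i];
        double eat = (1.0 - p) * mem_ns + p * fault_ns;  /* = 200 + 7,999,800 p */
        printf("p = %.4f -> effective access time = %.1f ns (%.1fx slowdown)\n",
               p, eat, eat / mem_ns);
    }
    return 0;
}
```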
If a process does not have enough pages, the
page-fault rate is high, leading to thrashing
Process is busy swapping pages
in and out of memory
Low CPU utilization
Operating system might think
it needs to increase the degree
of multiprogramming!
▪ More processes added, further degrading performance
Remember the principle of locality:
Future memory references in a given
process will likely be local to previous
memory references
A process executes in
a series of phases, spending
a finite amount of time performing
memory references in each phase
Example graph of page faults versus total
number of allocated frames
Operating system should allocate enough
frames for the current locality of a process:
What happens when too few
frames are allocated?
What happens when
too many frames are allocated?
Example of a
single process:
Dynamic page fault frequency scheme
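One way to read the page-fault-frequency idea as code: measure each process's fault rate and adjust its frame allocation between an upper and a lower bound. The thresholds, step size, and sample measurements below are made-up illustrative values:

```c
#include <stdio.h>

/* Hypothetical thresholds for a page-fault-frequency controller. */
#define UPPER_BOUND 0.10   /* too many faults: give the process another frame */
#define LOWER_BOUND 0.01   /* very few faults: take a frame away              */

static int adjust_frames(int frames, double fault_rate) {
    if (fault_rate > UPPER_BOUND)
        return frames + 1;             /* under-allocated: risk of thrashing */
    if (fault_rate < LOWER_BOUND && frames > 1)
        return frames - 1;             /* over-allocated: frames are wasted  */
    return frames;                     /* fault rate is in the target band   */
}

int main(void) {
    int frames = 4;
    double observed[] = { 0.15, 0.12, 0.05, 0.004 };   /* sample measurements */
    for (int i = 0; i < 4; i++) {
        frames = adjust_frames(frames, observed[i]);
        printf("fault rate %.3f -> %d frames\n", observed[i], frames);
    }
    return 0;
}
```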
How do we
identify the victim?
Page replacement algorithms include:
First-in-first-out (FIFO)
Optimal (OPT)
Least recently used (LRU)
Least frequently used (LFU)
Page fault frequency scheme (introduced earlier)
Working set
We apply these algorithms to a
page reference stream
[Figure: FIFO page replacement trace for a reference stream using a 3-frame memory]
How many page faults occur if we use
a 4-frame memory instead?
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
How many page faults occur with a 3-frame
memory?
How many page faults occur with a 4-frame
memory?
Belady’s Anomaly:
More frames may lead to more page faults!
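The simulation below, a sketch using FIFO replacement on the reference string above, answers both questions: these references produce 9 page faults with 3 frames but 10 with 4 frames, which is Belady's anomaly in action:

```c
#include <stdio.h>

/* Count FIFO page faults for a reference string using nframes frames. */
static int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16], head = 0, used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];          /* a free frame is still available */
        } else {
            frames[head] = refs[i];            /* evict the oldest resident page  */
            head = (head + 1) % nframes;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int n = sizeof refs / sizeof refs[0];
    printf("3 frames: %d page faults\n", fifo_faults(refs, n, 3));  /* 9  */
    printf("4 frames: %d page faults\n", fifo_faults(refs, n, 4));  /* 10 */
    return 0;
}
```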
Replace pages that will not be used for
the longest amount of time
[Figure: FIFO and OPT replacement traces compared on the reference string]
Replace pages that have not been used for
the longest amount of time
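A minimal LRU sketch in the same style: each resident page records the time of its last use, and the page with the oldest timestamp is the victim. Unlike FIFO, LRU never suffers Belady's anomaly; on the reference string above it yields 10 faults with 3 frames and 8 with 4.

```c
#include <stdio.h>

/* Count LRU page faults: evict the resident page whose last use is oldest. */
static int lru_faults(const int *refs, int n, int nframes) {
    int page[16], last_use[16], used = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int found = -1;
        for (int j = 0; j < used; j++)
            if (page[j] == refs[t]) { found = j; break; }
        if (found >= 0) { last_use[found] = t; continue; }    /* hit */
        faults++;
        if (used < nframes) {                  /* a free frame is available */
            page[used] = refs[t]; last_use[used] = t; used++;
        } else {
            int victim = 0;                    /* least recently used page  */
            for (int j = 1; j < nframes; j++)
                if (last_use[j] < last_use[victim]) victim = j;
            page[victim] = refs[t]; last_use[victim] = t;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int n = sizeof refs / sizeof refs[0];
    printf("LRU, 3 frames: %d page faults\n", lru_faults(refs, n, 3));
    printf("LRU, 4 frames: %d page faults\n", lru_faults(refs, n, 4));
    return 0;
}
```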
Similar to LRU, but replace
least frequently used pages
Requires usage counts
Initial page replacements are
swapped out quickly because
their usage counts are 1
For each process, maintain a working set of
pages referenced by the process during the
most recent w page references
How do we choose w?
Let d be the sum of the sizes of the working
sets of all active processes
Let F be the total number of frames
If d < F, then the operating system can allow
additional processes to enter the system
If d > F, then the operating system must suspend
one or more active processes
▪ Otherwise thrashing will occur!
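A sketch of this bookkeeping: compute each process's working set over its most recent w references, sum the sizes to get d, and compare against the total frame count F. The window size, reference streams, and frame count below are illustrative only:

```c
#include <stdio.h>

#define W 4   /* working-set window: most recent w page references (illustrative) */

/* Working-set size = number of distinct pages in the last W references. */
static int working_set_size(const int *refs, int n) {
    int start = n > W ? n - W : 0, distinct = 0, seen[64] = { 0 };
    for (int i = start; i < n; i++)
        if (!seen[refs[i]]) { seen[refs[i]] = 1; distinct++; }
    return distinct;
}

int main(void) {
    /* Recent reference streams of two active processes (made-up data). */
    int p0[] = { 1, 2, 1, 3, 1, 2 };
    int p1[] = { 7, 7, 8, 9, 9, 8, 5 };
    int F = 8;   /* total frames in the system (illustrative) */

    int d = working_set_size(p0, 6) + working_set_size(p1, 7);
    printf("d = %d, F = %d\n", d, F);
    if (d < F)
        printf("room to admit another process\n");
    else
        printf("suspend a process or thrashing will occur\n");
    return 0;
}
```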
Windows XP uses demand paging with
clustering (which loads pages surrounding
the page fault)
Processes are assigned a working set minimum
and a working set maximum
The working set minimum is the minimum
number of pages a process is guaranteed to
have in physical memory
Likewise, a process may be assigned pages up to
its working set maximum
When the amount of free memory in the
system falls below a threshold, automatic
working set trimming is performed
Removes pages from
processes that have pages
in excess of their working
set minimum
This increases the amount
of free memory