Memory Management

Chapter 4
Memory Management
4.1 Basic memory management
4.2 Swapping
4.3 Virtual memory
4.4 Page replacement algorithms
4.5 Modeling page replacement algorithms
4.6 Design issues for paging systems
4.7 Implementation issues
4.8 Segmentation
Page Replacement Algorithms
• A page fault forces a choice
– which page must be removed
– to make room for the incoming page
• A modified page must first be saved
– an unmodified one is just overwritten
• Better not to choose an often-used page
– it will probably need to be brought back in soon
• The “page replacement” problem also occurs in:
– memory caches
– caching of web pages (e.g. in web servers)
Optimal Page Replacement Algorithm
• When a page fault occurs, some set of pages is in memory
– replace the page that will not be needed until the farthest point in the future
• Optimal but unrealizable (the OS cannot know which pages will be referenced next)
• It can be approximated by:
– logging page use on previous runs of a process and using the results on subsequent runs
– although this is impractical: the log holds for one program and one specific set of inputs
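As a minimal illustration (not from the slides: the function name opt_faults and the reference string are invented), the Python sketch below shows why the optimal policy needs future knowledge: on each fault it evicts the resident page whose next use lies farthest ahead in the reference string.

# Minimal sketch of the optimal (Belady) policy: evict the resident page
# whose next use lies farthest in the future. It needs the whole reference
# string in advance, which is exactly why the policy is unrealizable.
def opt_faults(refs, frames):
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) == frames:
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            resident.remove(max(resident, key=next_use))   # farthest next use
        resident.add(page)
    return faults

print(opt_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], frames=3))   # 6 faults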
Not Recently Used Page Replacement Algorithm
• Each page has a Reference (R) bit and a Modified (M) bit
– the bits are set by the hardware when the page is referenced or modified
– if the hardware lacks them, they must be simulated in software
• Pages are classified:
– Class 0: not referenced, not modified
– Class 1: not referenced, modified (looks impossible, but occurs when a clock interrupt clears the R bit of a modified page)
– Class 2: referenced, not modified
– Class 3: referenced, modified
• NRU removes a page at random
– from the lowest-numbered non-empty class
– easy to understand and implement, and quite efficient (see the sketch below)
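A small sketch of the NRU selection step, assuming the OS already has each page's R and M bits; the page numbers and bit values below are made up for illustration.

import random

# NRU sketch: the class number is 2*R + M, and the victim is drawn at random
# from the lowest-numbered non-empty class.
def nru_victim(pages):
    # pages: dict mapping page number -> (referenced, modified) booleans
    def page_class(bits):
        r, m = bits
        return 2 * int(r) + int(m)
    lowest = min(page_class(bits) for bits in pages.values())
    candidates = [p for p, bits in pages.items() if page_class(bits) == lowest]
    return random.choice(candidates)

print(nru_victim({0: (True, True), 1: (False, True), 2: (True, False)}))   # 1 (the only class-1 page)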
FIFO Page Replacement Algorithm
• Maintain a linked list of all pages
– in the order they came into memory: the oldest page is at the head of the list, the most recent at the tail
• The page at the head of the list (the oldest) is replaced
• Disadvantage
– the page that has been in memory the longest may still be heavily used
• Modification (this becomes second chance, next slide)
– inspect the R bit of the oldest page
• if R = 0, the page is old and unused => replace it
• if R = 1, the page is old but used => clear R and move the page to the end of the list
Second Chance Page Replacement Algorithm
• Operation of second chance
– pages are kept sorted in FIFO order
– the figure shows the page list when a fault occurs at time 20 and page A has its R bit set (the numbers above the pages are loading times)
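A minimal sketch of the second-chance scan, assuming the OS keeps (page, R bit) pairs in a FIFO queue; the page names are illustrative.

from collections import deque

# Second chance: inspect the oldest page; if its R bit is set, clear it and
# move the page to the tail as if newly loaded, otherwise evict it.
def second_chance_victim(fifo):
    # fifo: deque of [page, r_bit] pairs, oldest at the left
    while True:
        page, r = fifo[0]
        fifo.popleft()
        if r:
            fifo.append([page, 0])   # give the page a second chance
        else:
            return page

queue = deque([["A", 1], ["B", 0], ["C", 1]])
print(second_chance_victim(queue))   # A is spared, B is evicted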
The Clock Page Replacement Algorithm
• Second chance is unnecessarily inefficient: it is constantly moving pages around on its list
• The clock algorithm is the same policy with a different implementation: keep the page frames in a circular list and advance a “hand”, instead of relinking list entries
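A sketch of the same policy driven by a clock hand, assuming the resident pages and their R bits are held in two parallel lists; nothing is relinked, only the hand index moves.

# Clock algorithm: advance the hand, clearing R bits, until a page with R = 0
# is found; that frame is the one to reuse.
def clock_victim(frames, r_bits, hand):
    while True:
        if r_bits[hand]:
            r_bits[hand] = 0                  # clear R and advance the hand
            hand = (hand + 1) % len(frames)
        else:
            return frames[hand], hand         # caller reuses this frame slot

frames, r_bits = ["A", "B", "C", "D"], [1, 1, 0, 1]
print(clock_victim(frames, r_bits, hand=0))   # ('C', 2): A and B get second chances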
Least Recently Used (LRU)
• Assume pages used recently will be used again soon
– throw out the page that has been unused for the longest time
– realizable, but not cheap
• Must keep a linked list of pages, sorted by usage
– most recently used at the front, least recently used at the rear
– the list must be updated on every memory reference: find the page in the list, delete it, and move it to the front
• Hardware approach 1: keep a counter in each page table entry
– equip the hardware with a 64-bit counter, incremented after each instruction
– after each memory reference, the counter value is copied into the entry of the referenced page
– at a page fault, choose the page with the lowest counter value
– periodically zero the counter
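A tiny software illustration of the counter idea; here the 64-bit hardware counter is just a Python integer bumped on every simulated reference.

# Counter-based LRU: every reference records the current value of a global
# counter; the page with the smallest recorded value is least recently used.
class CounterLRU:
    def __init__(self):
        self.clock = 0
        self.last_use = {}            # page -> counter value at last reference

    def touch(self, page):
        self.clock += 1
        self.last_use[page] = self.clock

    def victim(self):
        return min(self.last_use, key=self.last_use.get)

lru = CounterLRU()
for p in [0, 1, 2, 1, 0]:
    lru.touch(p)
print(lru.victim())   # 2: pages 0 and 1 were touched more recently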
Least Recently Used (LRU)
• Hardware approach 2: with n page frames, keep a matrix of n x n bits, initially all 0s
• When page k is referenced, set all bits of row k to 1, then set all bits of column k to 0
• At any instant, the row with the lowest binary value belongs to the least recently used page
• Figure: LRU using a matrix, with pages referenced in the order 0, 1, 2, 3, 2, 1, 0, 3, 2, 3
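A sketch of the bit-matrix method, driven by the same reference string as the figure (0, 1, 2, 3, 2, 1, 0, 3, 2, 3); the helper names are invented.

# n x n bit-matrix LRU: on a reference to page k, set row k to all 1s, then
# clear column k. The row with the smallest binary value is the LRU page.
def reference(matrix, k):
    n = len(matrix)
    matrix[k] = [1] * n
    for row in matrix:
        row[k] = 0

def lru_page(matrix):
    values = [int("".join(map(str, row)), 2) for row in matrix]
    return values.index(min(values))

n = 4
matrix = [[0] * n for _ in range(n)]
for page in [0, 1, 2, 3, 2, 1, 0, 3, 2, 3]:
    reference(matrix, page)
print(lru_page(matrix))   # 1: the least recently used page after this string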
Simulating LRU in Software
• Few machines have this special hardware => simulate LRU in software
• NFU (Not Frequently Used):
– a software counter is associated with each page, initially 0
– at each clock interrupt, the R bit of each page is added to its counter
– at a page fault, the page with the lowest counter is chosen
– problem: NFU never forgets anything (e.g. pages heavily used in an early pass of a compilation keep high counts long afterwards)
• Aging: a modification of NFU (see the sketch below)
– before adding the R bit, shift each counter right by 1 bit (divide by 2)
– the R bit is added to the leftmost bit, not the rightmost
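A sketch of aging with 8-bit counters; the per-tick R bits below are invented so that an old reference can be seen fading away.

# Aging: at each clock tick every counter is shifted right one bit and the
# page's R bit is placed in the leftmost position, so recent references
# dominate. The page with the lowest counter is evicted.
def age_tick(counters, r_bits):
    for page in counters:
        counters[page] = (counters[page] >> 1) | (r_bits[page] << 7)

def victim(counters):
    return min(counters, key=counters.get)

counters = {0: 0, 1: 0, 2: 0}
r_bits_per_tick = {0: [1, 0, 0], 1: [1, 1, 0], 2: [0, 0, 1]}   # sampled R bits
for t in range(3):
    age_tick(counters, {p: r_bits_per_tick[p][t] for p in counters})
print(victim(counters))   # 0: it was referenced only in the oldest tick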
Simulating LRU in Software
• The aging algorithm simulates LRU in software
• Note 6 pages for 5 clock ticks, (a) – (e)
Simulating LRU in Software
• Aging differs from LRU in two ways:
– counters are incremented at clock ticks, not on every memory reference, so we lose the ability to distinguish references early in a clock interval from those occurring later (e.g. pages 3 and 5 at (e))
– counters have a finite number of bits (8 in this example): if two counters are 0 we pick one at random, yet one page may have been referenced 9 ticks ago and the other 1000 ticks ago
The Working Set Page Replacement Algorithm
• Demand paging: pages are loaded only as they are needed
• Locality of reference: during any phase of execution a process references only a small fraction of its pages
• Working set: the set of pages a process is currently using
• If the whole working set is not in memory => thrashing
• Keeping track of each process’s working set and loading it before letting the process run => prepaging
• w(k,t): the set of pages used by the k most recent page references at instant t; its size is a monotonically nondecreasing function of k
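A one-function illustration of w(k,t) on a made-up reference string: the working set is simply the distinct pages among the k most recent references up to position t.

# w(k, t): the set of pages touched by the k most recent references up to
# "time" t, where time is just an index into the reference string.
def working_set(refs, t, k):
    return set(refs[max(0, t - k + 1): t + 1])

refs = [1, 2, 1, 3, 4, 4, 3, 3, 4, 1]
print(working_set(refs, t=7, k=4))   # {3, 4}: only pages 3 and 4 occur in the last 4 refs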
The Working Set Page Replacement Algorithm
• The working set is the set of pages used by the k most recent memory references
• The figure plots the size of w(k,t), the working set at time t, as a function of k
The Working Set Page Replacement Algorithm
The working set algorithm
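The figure itself is not reproduced here; the sketch below captures the core of the working set algorithm under the usual assumptions: each page table entry carries an R bit and a time of last use, and tau is the working set window. All names and numbers are illustrative.

# Working set replacement: a page whose age exceeds tau is outside the
# working set and can be evicted; if every page is in the working set,
# fall back to the oldest one.
def working_set_victim(entries, current_time, tau):
    # entries: dict page -> {"R": 0 or 1, "last_use": virtual time}
    oldest, victim = None, None
    for page, e in entries.items():
        if e["R"]:
            e["R"], e["last_use"] = 0, current_time   # referenced during this tick
        elif current_time - e["last_use"] > tau:
            return page                               # outside the working set
        if oldest is None or e["last_use"] < oldest:
            oldest, victim = e["last_use"], page
    return victim

entries = {0: {"R": 1, "last_use": 90}, 1: {"R": 0, "last_use": 40}, 2: {"R": 0, "last_use": 85}}
print(working_set_victim(entries, current_time=100, tau=50))   # 1: its age (60) exceeds tau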
Review of Page Replacement Algorithms
Belady's Anomaly
• FIFO with 3 page frames
• FIFO with 4 page frames
• The P’s mark the page references that cause page faults
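The slide's figure is not reproduced, but the anomaly is easy to demonstrate; the classic reference string below produces 9 faults with 3 frames and 10 faults with 4.

from collections import deque

# FIFO simulation: with this reference string, adding a fourth frame
# *increases* the number of page faults (Belady's anomaly).
def fifo_faults(refs, frames):
    resident, fifo, faults = set(), deque(), 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(resident) == frames:
            resident.remove(fifo.popleft())   # evict the oldest page
        resident.add(page)
        fifo.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))   # 9 10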
Modeling Page Replacement Algorithms
Stack Algorithms
• Belady’s anomaly led to a theory of paging algorithms
• Every paging system can be characterized by three items:
– the reference string: the sequence of memory references the process makes as it runs
– the page replacement algorithm
– the number of page frames, m
Design Issues for Paging Systems
Local versus Global Allocation Policies
• Original configuration
• Local page replacement: a fixed number of frames is allocated to each process
• Global page replacement: frames are allocated dynamically among processes
Design Issues for Paging Systems
Local versus Global Allocation Policies
• In general, global algorithms work better
• Continuously decide how many page frames to allocate to each process, based on its working set
• Algorithms for allocating page frames to processes (see the sketch below):
– equal share
– proportional to each process’s size
– the PFF (page fault frequency) algorithm: measure each process’s fault rate as a running mean and adjust its allocation to keep the rate acceptable
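A sketch of the two static allocation rules named above, equal share and proportional share; the process names and sizes are invented.

# Split total_frames among processes whose virtual sizes are given in pages.
def equal_share(total_frames, sizes):
    return {p: total_frames // len(sizes) for p in sizes}

def proportional(total_frames, sizes):
    total = sum(sizes.values())
    return {p: max(1, total_frames * s // total) for p, s in sizes.items()}

sizes = {"A": 10, "B": 40, "C": 150}            # virtual sizes in pages
print(equal_share(60, sizes))                   # {'A': 20, 'B': 20, 'C': 20}
print(proportional(60, sizes))                  # {'A': 3, 'B': 12, 'C': 45}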
Design Issues for Paging Systems
Local versus Global Allocation Policies
Page fault rate as a function of the number of
page frames assigned
Design Issues for Paging Systems
Page Size
Small page size
• Advantages
– less internal fragmentation (on average, half of the last page is wasted)
– better fit for small data structures and code sections
– less unused program held in memory
• Disadvantages
– programs need many pages, hence larger page tables
– more transfers, and more time spent loading the page table
Design Issues for Paging Systems
Page Size
• Overhead due to the page table and to internal fragmentation:
overhead = s·e/p + p/2
(the first term, s·e/p, is the page table space; the second, p/2, is the average internal fragmentation)
• Where
– s = average process size in bytes
– p = page size in bytes
– e = bytes per page table entry
• The overhead is minimized when p = √(2·s·e)
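To see where the optimum comes from, differentiate the overhead with respect to p and set the derivative to zero: d/dp (s·e/p + p/2) = -s·e/p² + 1/2 = 0, which gives p = √(2·s·e). As an illustrative case (the numbers are not from the slides): with s = 1 MB and e = 8 bytes per entry, p = √(2 · 2²⁰ · 8) = 4096 bytes, i.e. a 4 KB page.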
Implementation Issues
Operating System Involvement with Paging
Four times when the OS is involved with paging:
1. Process creation
 determine the program size, create and initialize the page table
 allocate space in the swap area so pages can be brought in and out
 store information about the page table and swap area (in the process table)
2. Process execution
 MMU reset for the new process and TLB flushed
 the new process’s page table made current
 optionally, prepaging
Implementation Issues
Operating System Involvement with Paging
Four times when the OS is involved with paging:
3. Page fault time
 read the hardware registers to determine the virtual address that caused the fault
 swap the target page out and the needed page in
 back up the program counter and re-execute the faulting instruction
4. Process termination time
 release the page table, the pages, and the swap area
Implementation Issues
Page Fault Handling
1. The hardware traps to the kernel, saving the PC
2. General registers are saved; the OS is called from an assembly-language routine
3. The OS determines which virtual page is needed
4. The OS checks the validity and protection of the address. If OK, it looks for a page frame (a free one, or one to replace). Otherwise it sends a signal to the process or kills it
5. If the selected frame is dirty, the page is written to disk and the process is suspended in the meantime
Implementation Issues
Page Fault Handling
6. The OS brings the new page in from disk
7. The page tables are updated and the entry is marked as valid
8. The faulting instruction is backed up to the state it had when it began
9. The faulting process is scheduled and the OS returns to the assembly-language routine
10. Registers are restored and the program continues
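A toy, self-contained simulation of the middle of this sequence (roughly steps 3 to 7); plain dictionaries stand in for the page table and frame list, FIFO is used only to make victim selection concrete, and none of the names correspond to a real kernel API.

from collections import deque

page_table = {}                 # virtual page -> physical frame
dirty = set()                   # frames holding modified pages
free_frames = deque([0, 1])     # two physical frames in this toy
fifo = deque()                  # load order of resident virtual pages

def handle_fault(vpage):
    if free_frames:
        frame = free_frames.popleft()
    else:
        victim = fifo.popleft()               # choose a victim page
        frame = page_table.pop(victim)
        if frame in dirty:
            print(f"write frame {frame} (page {victim}) back to disk")
            dirty.discard(frame)
    print(f"read page {vpage} from disk into frame {frame}")
    page_table[vpage] = frame                 # update page table: now valid
    fifo.append(vpage)

for vpage in [7, 8, 9]:                       # three faults; the last evicts page 7
    handle_fault(vpage)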
Implementation Issues
Instruction Backup
• The instruction causing the fault is stopped partway through
• After the page is fetched, the instruction must be restarted
• The OS must determine where the first byte of the instruction is
An instruction causing a page fault
Implementation Issues
Instruction Backup
• The OS can have difficulty determining where the instruction starts
• Autoincrement and autodecrement addressing modes change registers as a side effect of executing an instruction, and those side effects may have to be undone before restarting
• Some CPUs help: the PC is copied into an internal register before each instruction is executed
• If this is not available, the OS must jump through hoops to recover
Implementation Issues
Locking Pages in Memory
• Virtual memory and I/O occasionally interact
• A process issues a call to read from a device into a buffer
– while it waits for the I/O, another process starts up
– that process has a page fault
– the first process’s buffer may be chosen to be paged out
• Some pages therefore need to be locked in memory
– exempted from being candidates for removal (pinning)
• Alternatively, do all I/O to kernel buffers and copy to user pages later
Implementation Issues
Backing Store
• A special swap area on disk: as new processes are started, swap space is allocated for them
• The process table keeps the disk address of the swap area; calculating the disk address of a page is then a simple offset computation
• Complication: processes can grow during execution
• Alternatively, allocate nothing in advance: allocate disk space for a page when it is swapped out and deallocate it when it is swapped back in. For each page not in memory, a disk map must record where it is (see the sketch below)
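A sketch contrasting the two bookkeeping schemes; PAGE_SIZE, the swap base and the block numbers are arbitrary.

PAGE_SIZE = 4096

# Static swap area: the disk address of a page is a fixed offset from the
# process's swap base, so no per-page table is needed.
def static_swap_addr(swap_base, vpage):
    return swap_base + vpage * PAGE_SIZE

# Dynamic allocation: a per-page disk map records where each non-resident
# page lives; the entry appears on page-out and disappears on page-in.
class DynamicBackingStore:
    def __init__(self):
        self.disk_map = {}
        self.next_free_block = 0

    def page_out(self, vpage):
        self.disk_map[vpage] = self.next_free_block
        self.next_free_block += 1
        return self.disk_map[vpage]

    def page_in(self, vpage):
        return self.disk_map.pop(vpage)       # the block is freed again

print(static_swap_addr(swap_base=1_000_000, vpage=3))   # 1012288
store = DynamicBackingStore()
store.page_out(7)
print(store.page_in(7))                                  # block 0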
Implementation Issues
Backing Store
(a) Paging to static swap area
(b) Backing up pages dynamically
Segmentation
• For many problems, two or more separate virtual address spaces may be better than one
• In a one-dimensional address space with growing tables, one table may bump into another (e.g. a compiler’s symbol table growing into the space used by another table)
Segmentation
• Provide the machine with many completely independent address spaces, called segments
• Different segments have different lengths, which may change during execution
• To specify an address in this two-dimensional memory, supply a segment number and an address within the segment
• Advantages:
– simpler linking
– sharing between processes
– protection per segment
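A sketch of (segment, offset) translation with a per-process segment table; the segment contents, bases and limits are invented.

# Two-dimensional addressing: an address is (segment number, offset within
# segment); an offset beyond the segment's length is a protection fault.
segment_table = {
    0: {"base": 0,      "limit": 50_000},   # e.g. program text
    1: {"base": 60_000, "limit": 4_000},    # e.g. symbol table
}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("segment overflow (protection fault)")
    return entry["base"] + offset

print(translate(1, 100))          # 60100
try:
    translate(1, 5_000)
except MemoryError as err:
    print("fault:", err)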
Segmentation
Comparison of paging and segmentation