Virtual Memory
CSE451
Andrew Whitaker
Review: Address Translation
[Figure: address translation. The virtual address is split into a virtual page number and an offset; the page table maps the virtual page number to a page frame number, which is combined with the offset to form the physical address into physical memory (page frames 0, 1, 2, 3, …, Y). Note: each process has its own page table!]
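
To make the picture concrete, here is a minimal sketch of the translation step in C, assuming a single-level page table, 4 KB pages, and a 32-bit virtual address space. The names (PAGE_SHIFT, page_table, translate) are illustrative, not from the slides.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT  12                         /* 4 KB pages */
    #define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)
    #define NUM_VPAGES  (1u << (32 - PAGE_SHIFT))  /* entries in the table */

    typedef struct {
        bool     valid;   /* is this mapping present in physical memory? */
        uint32_t frame;   /* page frame number                           */
    } pte_t;

    /* One page table per process (here, just one process). */
    static pte_t page_table[NUM_VPAGES];

    /* Translate a virtual address; returns false on a page fault. */
    static bool translate(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;     /* virtual page number */
        uint32_t offset = vaddr & OFFSET_MASK;     /* offset within page  */

        if (!page_table[vpn].valid)
            return false;                          /* raise a page fault  */

        *paddr = (page_table[vpn].frame << PAGE_SHIFT) | offset;
        return true;
    }

    int main(void)
    {
        page_table[5].valid = true;                /* map virtual page 5  */
        page_table[5].frame = 42;                  /* ...to page frame 42 */

        uint32_t paddr;
        if (translate((5u << PAGE_SHIFT) | 0x123, &paddr))
            printf("physical address: 0x%x\n", (unsigned)paddr);
        return 0;
    }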
Problem: Physical Memory Scarcity
- Q: Which is bigger:
  A 64-bit address space?
  Or the number of hydrogen atoms in a star?
- A: It's the star
  2^64 bytes in an address space
  ~10^57 hydrogen atoms in a star
  57 * log 10 > 64 * log 2, so 10^57 is far larger than 2^64 (about 1.8 * 10^19)
- But the fact that we have to ask is significant!
Solution: Virtual Memory
- Physical memory stores a subset of the virtual address space
- The rest is stored on disk
- Main memory acts as a page cache
[Figure: pages of the virtual address space, some resident in physical memory and the rest on disk]
- Implemented transparently by the OS and hardware
- The application can't tell which pages are in memory
How Does Virtual Memory Work?
[Figure: page table entry layout: V (1 bit), R (1 bit), M (1 bit), prot (2 bits), page frame number (20 bits)]
The page table entry contains a valid bit
  Says whether the mapping is valid
If the valid bit is not set, the system raises a page fault
  Behaves like an (involuntary) system call
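
As a sketch, the PTE layout above can be written as a C struct with bit fields; the field names (valid, ref, mod, prot, pfn) are illustrative.

    #include <stdio.h>
    #include <stdint.h>

    /* A sketch of the 25-bit PTE layout above using C bit fields. */
    typedef struct {
        uint32_t valid : 1;   /* V: mapping is present in physical memory */
        uint32_t ref   : 1;   /* R: page has been referenced              */
        uint32_t mod   : 1;   /* M: page has been modified (dirty)        */
        uint32_t prot  : 2;   /* protection bits (e.g. read/write)        */
        uint32_t pfn   : 20;  /* page frame number                        */
    } pte_t;

    int main(void)
    {
        pte_t pte = { .valid = 1, .prot = 1, .pfn = 0x12345 };

        /* The MMU (or a software TLB-miss handler) checks the valid bit on
         * every translation; a clear bit triggers a page fault. */
        printf("valid=%u pfn=0x%x (PTE fits in %zu bytes)\n",
               (unsigned)pte.valid, (unsigned)pte.pfn, sizeof(pte_t));
        return 0;
    }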
What Happens on a Page Fault?
- Hardware raises an exception
- OS page fault handler runs
  First, make sure the address is valid
  - If not, raise a segmentation fault
  Second, verify the access type
  - Not allowed to write to a read-only page!
- If the access is legal, allocate a new page frame
  What happens if physical memory is scarce? Stay tuned!
- OS reads the page in from disk
  The process/thread is blocked during this operation
- Once the read completes, the OS updates the PTE and resumes the process/thread
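
A toy, user-level model of those handler steps is sketched below. All sizes and names are illustrative, page replacement is deliberately left out ("stay tuned"), and a real kernel's handler looks quite different.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define NUM_PAGES  8     /* virtual pages                     */
    #define NUM_FRAMES 8     /* enough frames that we never evict */
    #define PAGE_SIZE  64    /* tiny pages keep the demo small    */

    typedef struct {
        bool valid;
        bool writable;
        int  frame;
    } pte_t;

    static pte_t   page_table[NUM_PAGES];
    static uint8_t phys_mem[NUM_FRAMES][PAGE_SIZE];
    static uint8_t disk[NUM_PAGES][PAGE_SIZE];   /* backing store on "disk" */
    static int     next_free_frame = 0;

    static bool handle_page_fault(int vpn, bool is_write)
    {
        /* 1. Make sure the address is valid. */
        if (vpn < 0 || vpn >= NUM_PAGES) {
            puts("segmentation fault");
            return false;
        }
        /* 2. Verify the access type. */
        if (is_write && !page_table[vpn].writable) {
            puts("protection fault: write to a read-only page");
            return false;
        }
        /* 3. Allocate a new page frame. */
        int frame = next_free_frame++;
        /* 4. Read the page in from disk; a real thread would block here. */
        memcpy(phys_mem[frame], disk[vpn], PAGE_SIZE);
        /* 5. Update the PTE and resume the faulting thread. */
        page_table[vpn].frame = frame;
        page_table[vpn].valid = true;
        return true;
    }

    int main(void)
    {
        for (int i = 0; i < NUM_PAGES; i++)
            page_table[i].writable = true;       /* everything writable in the demo */

        int vpn = 3;
        if (!page_table[vpn].valid)              /* translation finds an invalid PTE */
            handle_page_fault(vpn, /*is_write=*/true);

        phys_mem[page_table[vpn].frame][0] = 42; /* the access now succeeds */
        printf("virtual page %d is resident in frame %d\n",
               vpn, page_table[vpn].frame);
        return 0;
    }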
Demand Paging
- What happens when a process first starts?
- Two choices:
  Eagerly bring in pages
  Lazily fault in pages
- The lazy approach is called demand paging
- Most OSes prefer demand paging
  Doesn't require guessing or maintaining history
- A slightly smarter approach: clustering
  Bring in the faulting page and several subsequent pages
  This is used by Windows XP
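
Clustering could be sketched on top of the toy handler from the previous example: on a fault for page vpn, also fault in the next few pages on the guess that they will be needed soon. CLUSTER and fault_in_cluster are illustrative names, not part of any real OS API.

    /* Extends the toy handler above (reuses NUM_PAGES, page_table,
     * and handle_page_fault). */
    #define CLUSTER 4   /* faulting page plus up to 3 subsequent pages */

    static void fault_in_cluster(int vpn)
    {
        for (int i = 0; i < CLUSTER && vpn + i < NUM_PAGES; i++) {
            if (!page_table[vpn + i].valid)
                handle_page_fault(vpn + i, /*is_write=*/false);
        }
    }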
Dealing With Memory Scarcity
Physical memory is over-allocated
  We can run out!
The OS needs a page replacement strategy
Before diving into particular strategies, let's look at some theory…
Locality of Reference
Programs tend to reuse data and instructions they have used recently
These "hot" items are a small fraction of total memory
Rule of thumb: 90% of execution time is spent in 10% of the code
This is what makes caching work!
The Working Set Model
The working set represents those pages currently in use by a program
The working set contents change over time
  So does the size of the working set
To get reasonable performance, a program must be allocated enough pages for its working set
[Figure: requests per second of throughput vs. number of page frames allocated to the process, for a hypothetical web server; the working set is marked, and the distribution is bimodal]
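
One common formalization (Denning's working set model) measures the working set as the distinct pages referenced in the last τ references. A minimal sketch, with an illustrative trace and window size:

    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_PAGES 16
    #define TAU       4     /* window size, in references */

    /* Size of the working set at time 'end': distinct pages referenced
     * in the last TAU references of the trace. */
    static int working_set_size(const int *trace, int len, int end)
    {
        bool seen[MAX_PAGES] = { false };
        int count = 0;
        int start = (end - TAU + 1 > 0) ? end - TAU + 1 : 0;

        for (int i = start; i <= end && i < len; i++) {
            if (!seen[trace[i]]) {
                seen[trace[i]] = true;
                count++;
            }
        }
        return count;
    }

    int main(void)
    {
        /* A toy page-reference trace: the program's "hot" pages shift
         * from {1,2} to {3,4} partway through. */
        int trace[] = { 1, 2, 1, 2, 1, 3, 4, 3, 4, 3 };
        int len = sizeof trace / sizeof trace[0];

        for (int t = 0; t < len; t++)
            printf("t=%d working set size=%d\n",
                   t, working_set_size(trace, len, t));
        return 0;
    }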
Thrashing
- Programs with an allocation beneath their working set will thrash
  Each page fault evicts a "hot" page
  That page will be needed soon…
  The evicted page takes a page fault
  [repeat]
- Net result: no work gets done
  All time is devoted to paging operations
Over-allocation
Giving a program more than its working set does very little good
Page eviction strategies take advantage of this
Generalizing to Multiple Processes
Let W be the sum of the working sets of all active processes
Let M be the amount of physical memory
If W > M, the system as a whole will thrash
  No page replacement policy will work :-(
If W <= M, the system might perform well
  Key issue: is the page replacement policy smart enough to identify working sets?
Which of These Eviction Strategies Is Good at Capturing Working Sets?
Random page eviction
FIFO page eviction
Least-recently-used (LRU) page eviction
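
Of the three, LRU is the one that tracks recency, which is what working sets are about. As a point of reference, here is a minimal, self-contained sketch of LRU eviction over a page-reference trace; the trace, frame count, and bookkeeping are illustrative, and real kernels only approximate LRU.

    #include <stdio.h>

    #define NUM_FRAMES 3
    #define EMPTY      -1

    int main(void)
    {
        int trace[] = { 1, 2, 3, 1, 2, 4, 1, 2, 5 };
        int len = sizeof trace / sizeof trace[0];

        int frames[NUM_FRAMES];      /* which page each frame holds       */
        int last_used[NUM_FRAMES];   /* when each frame was last accessed */
        int faults = 0;

        for (int f = 0; f < NUM_FRAMES; f++)
            frames[f] = EMPTY;

        for (int t = 0; t < len; t++) {
            int page = trace[t];
            int hit = -1, victim = 0;

            /* Hit check: is the page already resident? */
            for (int f = 0; f < NUM_FRAMES; f++)
                if (frames[f] == page)
                    hit = f;

            if (hit >= 0) {
                last_used[hit] = t;          /* refresh recency */
                continue;
            }

            /* Miss: fill an empty frame, or evict the least recently used. */
            faults++;
            for (int f = 0; f < NUM_FRAMES; f++) {
                if (frames[f] == EMPTY) { victim = f; break; }
                if (last_used[f] < last_used[victim])
                    victim = f;
            }
            frames[victim] = page;
            last_used[victim] = t;
        }

        printf("page faults: %d\n", faults);
        return 0;
    }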