PPT - Surendar Chandra

Review
- Chapter 9: Memory management
  - Swapping
  - Contiguous Allocation
  - Paging
  - Segmentation
  - Segmentation with Paging
- Chapter 10: Virtual Memory
  - Demand Paging
  - Process Creation
  - Page Replacement
  - Allocation of Frames
  - Thrashing
  - Demand Segmentation

4/11/2016
CSE 542: Graduate Operating Systems
page 1
How do we run Process1 and Process2 in memory?
(Figure: physical memory from address 0 to 4G, with Process1 and Process2 placed within it)
Binding of Instructions and Data to Memory
- Address binding of instructions and data to memory addresses can happen at three different stages:
  - Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes
  - Load time: relocatable code must be generated if the memory location is not known at compile time
  - Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (e.g., base and limit registers)
Logical vs. Physical Address Space
- The concept of a logical address space that is bound to a separate physical address space is central to proper memory management
  - Logical address – generated by the CPU; also referred to as a virtual address
  - Physical address – the address seen by the memory unit
- Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme
Memory-Management Unit (MMU)
- Hardware device that maps virtual to physical addresses
  - Without hardware support, every access would slow down dramatically
- In the simplest MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory
- The user program deals with logical addresses; it never sees the real physical addresses
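A minimal sketch of the relocation-register scheme; the register values (14000 and 3000) are made-up illustrative numbers, not from the slides:

```python
# Relocation + limit register translation (hypothetical register values).
RELOCATION = 14000   # smallest physical address of the process
LIMIT = 3000         # size of the process's logical address space

def translate(logical_addr):
    """Map a logical address to a physical one; trap if out of range."""
    if not (0 <= logical_addr < LIMIT):
        raise ValueError("trap: logical address out of range")
    return logical_addr + RELOCATION

print(translate(346))   # -> 14346
```

Every legal logical address is simply shifted by the relocation base; the limit check is what protects other processes.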
Loading routines
- Dynamic Loading
  - A routine is not loaded until it is called
  - Better memory-space utilization; an unused routine is never loaded
  - Useful when large amounts of code are needed to handle infrequently occurring cases
  - No special support from the operating system is required; implemented through program design
- Dynamic Linking
  - Linking is postponed until execution time
  - A small piece of code, the stub, is used to locate the appropriate memory-resident library routine
  - The stub replaces itself with the address of the routine, and executes the routine
  - The operating system is needed to check whether the routine is in the process's memory address space
Swapping
- A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
- Backing store – a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images
- The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped
Contiguous Allocation
- Single-partition allocation
  - A relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses – each logical address must be less than the limit register
- Multiple-partition allocation
  - Hole – a block of available memory; holes of various sizes are scattered throughout memory
  - When a process arrives, it is allocated memory from a hole large enough to accommodate it
  - The operating system maintains information about: a) allocated partitions, b) free partitions (holes)
Dynamic Storage-Allocation Problem
- How to satisfy a request of size n from a list of free holes:
  - First-fit: allocate the first hole that is big enough
  - Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole
  - Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole
- First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
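The three placement strategies can be sketched as follows; the hole sizes (in KB) are made-up example values:

```python
# First-fit, best-fit, and worst-fit over a list of free-hole sizes.

def first_fit(holes, n):
    """Index of the first hole that can hold a request of size n."""
    for i, h in enumerate(holes):
        if h >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the smallest hole that still fits n (full scan)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Index of the largest hole (full scan)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]     # free holes in KB (hypothetical)
print(first_fit(holes, 212))  # -> 1 (the 500 KB hole)
print(best_fit(holes, 212))   # -> 3 (the 300 KB hole)
print(worst_fit(holes, 212))  # -> 4 (the 600 KB hole)
```

Note that best-fit and worst-fit both pay for a full scan of the free list, which is why first-fit is often competitive in practice.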
Fragmentation
- External fragmentation – enough total memory space exists to satisfy a request, but it is not contiguous
- Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used
- Reduce external fragmentation by compaction
  - Shuffle memory contents to place all free memory together in one large block
  - Compaction is possible only if relocation is dynamic, and is done at execution time
- I/O problem:
  - Latch the job in memory while it is involved in I/O, or
  - Do I/O only into OS buffers
Paging
- The logical address space of a process can be noncontiguous; the process is allocated physical memory wherever it is available
- Divide physical memory into fixed-sized blocks called frames (size is a power of 2, typically between 512 bytes and 8192 bytes)
- Divide logical memory into blocks of the same size called pages
- Keep track of all free frames
- To run a program of size n pages, find n free frames and load the program
- Set up a page table to translate logical to physical addresses
- Paging still suffers from internal fragmentation (in the last page of a process)
Address Translation Scheme
- The address generated by the CPU is divided into:
  - Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory
  - Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit
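Because the page size is a power of 2, the split into page number and offset is simple arithmetic. A sketch with an assumed 4 KB page size and a hypothetical page table:

```python
# Splitting a logical address into (p, d) and translating it to a
# physical address. Page size and page-table contents are made up.

PAGE_SIZE = 4096                      # 2^12 bytes (assumed)
page_table = {0: 5, 1: 9, 2: 1}       # page number -> frame number

def split(logical_addr):
    """Return (page number, offset) for a logical address."""
    return logical_addr // PAGE_SIZE, logical_addr % PAGE_SIZE

def translate(logical_addr):
    p, d = split(logical_addr)
    frame = page_table[p]             # a missing p would be a page fault
    return frame * PAGE_SIZE + d

print(split(8200))      # -> (2, 8): page 2, offset 8
print(translate(8200))  # -> 4104: frame 1 (for page 2), offset 8
```

In hardware the division is just taking the high-order bits as p and the low-order bits as d.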
Address Translation Architecture
(Figure: CPU emits logical address (p, d); p indexes the page table to obtain frame f; physical address (f, d) goes to memory)
Implementation of Page Table
- The page table is kept in main memory
- The page-table base register (PTBR) points to the page table
- The page-table length register (PTLR) indicates the size of the page table
- In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction itself
- The two-memory-access problem can be solved by a special fast-lookup hardware cache called associative memory or a translation look-aside buffer (TLB)
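The cost of the extra memory access can be quantified as an effective access time (EAT); a sketch with assumed timings (20 ns TLB lookup, 100 ns memory access):

```python
# Effective access time with a TLB, using made-up timing numbers.
# With hit ratio a, TLB lookup time e, and memory access time m:
#   EAT = a * (e + m) + (1 - a) * (e + 2m)

def eat(hit_ratio, tlb_ns=20, mem_ns=100):
    hit = tlb_ns + mem_ns          # TLB hit: one memory access
    miss = tlb_ns + 2 * mem_ns     # TLB miss: page-table access + data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(eat(0.80))  # about 140 ns
print(eat(0.98))  # about 122 ns
```

A higher hit ratio pulls the EAT toward the single-access cost, which is why even a small TLB is so effective.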
Paging Hardware With TLB
(Figure: the TLB is consulted first; a hit yields the frame number directly, a miss falls back to the in-memory page table and the translation is cached in the TLB)
Memory Protection
- Memory protection is implemented by associating a protection bit with each frame
- A valid–invalid bit is attached to each entry in the page table:
  - "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page
  - "invalid" indicates that the page is not in the process's logical address space
Page Table Structure – page tables can be large (per process, and sparse)
- Hierarchical Paging
  - Break up the logical address space into multiple levels of page tables
  - A simple technique is a two-level page table
- Hashed Page Tables (common in address spaces > 32 bits)
  - The virtual page number is hashed into a page table; each table entry contains a chain of elements hashing to the same location
  - Virtual page numbers are compared along this chain searching for a match; if a match is found, the corresponding physical frame is extracted
- Inverted Page Tables
Inverted Page Table
- One entry for each real page (frame) of memory
- Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page
- Decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs
- Use a hash table to limit the search to one – or at most a few – page-table entries
Shared Pages
- Shared code
  - One copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems)
  - Shared code must appear in the same location in the logical address space of all processes
- Private code and data
  - Each process keeps a separate copy of the code and data
  - The pages for the private code and data can appear anywhere in the logical address space
Virtual memory
- Separation of user logical memory from physical memory
  - Only part of the program needs to be in memory for execution
  - The logical address space can therefore be much larger than the physical address space
  - Allows address spaces to be shared by several processes
  - Allows for more efficient process creation
Demand Paging
- Bring a page into memory only when it is needed
  - Less I/O needed
  - Less memory needed
  - Faster response
  - More users
- A page is needed → reference it
  - invalid reference → abort
  - not in memory → bring it into memory
Valid-Invalid Bit
- With each page-table entry a valid–invalid bit is associated (1 → in memory, 0 → not in memory)
- Initially the valid–invalid bit is set to 0 on all entries
- (Figure: example page-table snapshot with frame numbers and valid–invalid bits; resident pages are marked 1, the rest 0)
- During address translation, if the valid–invalid bit in the page-table entry is 0 → page fault
Page Fault
- The first reference to a page that is not in memory traps to the OS → page fault
- The OS looks at another table to decide:
  - Invalid reference → abort
  - Just not in memory → page it in:
    - Get an empty frame
    - Swap the page into the frame
    - Reset the tables, set the validation bit = 1
    - Restart the instruction. Restarting is tricky for, e.g.:
      - block-move instructions
      - auto increment/decrement addressing modes
What happens if there is no free frame?
- Page replacement – find some page in memory that is not really in use, and swap it out
  - Performance – we want an algorithm that results in the minimum number of page faults
  - The same page may be brought into memory several times
- Page-fault rate p, with 0 ≤ p ≤ 1.0
  - if p = 0, no page faults
  - if p = 1, every reference is a fault
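The impact of the page-fault rate p can be sketched as an effective access time; the 200 ns memory access and 8 ms fault-service time are assumed example values:

```python
# Effective access time under demand paging (made-up timing numbers):
#   EAT = (1 - p) * memory_access + p * page_fault_service_time

def eat_ns(p, mem_ns=200, fault_service_ns=8_000_000):
    """p is the page-fault rate; fault service is ~8 ms here."""
    return (1 - p) * mem_ns + p * fault_service_ns

print(eat_ns(0.0))    # -> 200.0 ns (no faults)
print(eat_ns(0.001))  # about 8200 ns: one fault per 1000 accesses
                      # slows memory down by roughly a factor of 40
```

With these numbers, even a fault rate of 1 in 1000 dominates the access time, which is why keeping p tiny matters so much.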
Process Creation
- Virtual memory allows other benefits during process creation:
- Copy-on-Write
  - Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory
  - Only if either process modifies a shared page is the page copied. COW allows more efficient process creation, as only modified pages are copied
- Memory-Mapped Files
  - Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory
  - Simplifies file access by treating file I/O through memory rather than read()/write() system calls
  - Also allows several processes to map the same file, allowing the pages in memory to be shared
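Memory-mapped file I/O can be tried directly with Python's mmap module; a minimal sketch (the temp-file name is a throwaway example):

```python
# Treat a file as ordinary memory via mmap: reads and writes on the
# mapping go through the page cache instead of read()/write() calls.
import mmap
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "mmap_demo.bin")
with open(path, "wb") as f:
    f.write(b"hello, paging")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:    # map the whole file
        print(m[:5])                       # read like memory -> b'hello'
        m[:5] = b"HELLO"                   # write through the mapping

with open(path, "rb") as f:
    data = f.read()
print(data)  # -> b'HELLO, paging': the store went to the file
os.remove(path)
```

The slice assignment never issues an explicit write() call; the OS pages the dirty mapping back to disk.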
Page Replacement
- Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement
- Use a modify (dirty) bit to reduce the overhead of page transfers – only modified pages are written back to disk
- Page replacement completes the separation between logical memory and physical memory – a large virtual memory can be provided on a smaller physical memory
Basic Page Replacement
1. Find the location of the desired page on disk.
2. Find a free frame:
   - If there is a free frame, use it.
   - If there is no free frame, use a page-replacement algorithm to select a victim frame.
3. Read the desired page into the (newly) free frame. Update the page and frame tables.
4. Restart the process.
Page Replacement Algorithms
- We want the lowest page-fault rate
- Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string
- In all our examples, the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
First-In-First-Out (FIFO) Algorithm
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- With 3 frames (3 pages can be in memory at a time per process): 9 page faults
- With 4 frames: 10 page faults
- FIFO replacement exhibits Belady's Anomaly: adding frames can increase the number of page faults – "more frames → fewer page faults" does not always hold
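The two counts above can be reproduced with a short FIFO simulation:

```python
# FIFO page replacement on the lecture's reference string,
# reproducing Belady's Anomaly (more frames, more faults).
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:       # no free frame: evict oldest
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # -> 9
print(fifo_faults(refs, 4))  # -> 10 (Belady's Anomaly)
```

The deque records arrival order, so the victim is always the page that has been resident longest, regardless of how recently it was used.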
FIFO Illustrating Belady's Anomaly
(Figure: number of page faults vs. number of frames for FIFO; the curve is not monotonically decreasing)
Optimal Algorithm
- Replace the page that will not be used for the longest period of time
- 4-frames example, reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: 6 page faults
- How do you know the future reference string? You don't – the optimal algorithm cannot be implemented online
- Used as a yardstick for measuring how well a realizable algorithm performs
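A sketch of the optimal (OPT) algorithm; note that it consults the future of the reference string, which is exactly why it cannot be implemented in a real system:

```python
# Optimal (Belady's OPT) replacement: evict the resident page whose
# next use lies farthest in the future (or that is never used again).

def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            def next_use(q):
                return future.index(q) if q in future else len(refs)
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 4))  # -> 6
```

Any real algorithm is judged by how close its fault count comes to this lower bound.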
Least Recently Used (LRU) Algorithm
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (with 4 frames: 8 page faults)
- Counter implementation
  - Every page-table entry has a counter; every time the page is referenced through this entry, copy the clock into the counter
  - When a page needs to be replaced, look at the counters to find the page with the oldest timestamp
LRU Algorithm (Cont.)
- Stack implementation – keep a stack of page numbers in doubly linked form:
  - When a page is referenced, move it to the top
    - requires up to 6 pointer updates
  - No search is needed for replacement – the victim is at the bottom of the stack
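The stack idea maps naturally onto an ordered dictionary; a sketch that, on the lecture's reference string with 4 frames, yields 8 page faults:

```python
# LRU replacement; an OrderedDict plays the role of the doubly linked
# "stack" of page numbers (most recently used at the end).
from collections import OrderedDict

def lru_faults(refs, nframes):
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)          # referenced: move to top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)    # victim is at the bottom
            stack[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))  # -> 8
```

As the slide notes, the stack form trades a few pointer updates per reference for a search-free eviction.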
LRU Approximation Algorithms
- Reference bit
  - With each page, associate a bit, initially 0
  - When the page is referenced, the bit is set to 1
  - Replace a page whose bit is 0 (if one exists); we do not know the order of use, however
- Second chance
  - Needs the reference bit; also called clock replacement
  - If the page to be replaced (in clock order) has reference bit = 1, then:
    - set the reference bit to 0
    - leave the page in memory
    - inspect the next page (in clock order), subject to the same rules
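The second-chance scan can be sketched as a clock hand sweeping over an array of reference bits (the bit values below are made-up):

```python
# Second-chance (clock) victim selection over per-frame reference bits.

def clock_victim(ref_bits, hand):
    """Advance the clock hand; pages with bit 1 get a second chance.
    ref_bits is mutated in place; returns (victim index, new hand)."""
    n = len(ref_bits)
    while ref_bits[hand]:
        ref_bits[hand] = 0          # clear the bit, spare the page
        hand = (hand + 1) % n
    victim = hand
    return victim, (hand + 1) % n

bits = [1, 1, 0, 1]                 # hypothetical reference bits
victim, hand = clock_victim(bits, 0)
print(victim)  # -> 2: first page found with reference bit 0
print(bits)    # -> [0, 0, 0, 1]: pages passed over had their bits cleared
```

If every bit is 1, the hand sweeps a full circle clearing bits and then evicts the page it started from, so the loop always terminates.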
Counting Algorithms
- Keep a counter of the number of references that have been made to each page
- LFU algorithm: replaces the page with the smallest count
- MFU algorithm: replaces the page with the largest count, based on the argument that the page with the smallest count was probably just brought in and has yet to be used
Thrashing
- If a process does not have "enough" pages, the page-fault rate is very high. This leads to:
  - low CPU utilization
  - the operating system thinking it needs to increase the degree of multiprogramming
  - another process being added to the system, making things worse
- Thrashing ≡ a process is busy swapping pages in and out rather than doing useful work
Thrashing (Cont.)
- Why does paging work? The locality model:
  - A process migrates from one locality to another
  - Localities may overlap
- Why does thrashing occur?
  - The combined size of the localities exceeds the total memory size
Demand Segmentation
- Used when there is insufficient hardware to implement demand paging
- OS/2 allocates memory in segments, which it keeps track of through segment descriptors
- A segment descriptor contains a valid bit to indicate whether the segment is currently in memory
  - If the segment is in main memory, the access continues
  - If not in memory, a segment fault occurs