Chapter 3 : Memory Management, Recent Systems
• Paged Memory Allocation
• Demand Paging
• Page Replacement Policies
• Segmented Memory Allocation
• Segmented/Demand Paged Memory Allocation
• Virtual Memory
Memory Management
• Early schemes were limited to storing an entire program in memory. Problems:
– Fragmentation.
– Overhead due to relocation.
• More sophisticated memory schemes now:
– Eliminate the need to store programs contiguously.
– Eliminate the need for the entire program to reside in memory during execution.
More Recent Memory Management
Schemes
• Paged Memory Allocation
• Demand Paging Memory Allocation
• Segmented Memory Allocation
• Segmented/Demand Paged Allocation
Paged Memory Allocation
• Divides each incoming job into pages of equal size.
• Works well when page size = memory block size (page frame size) = disk section size (sector or block).
Before executing a program, the memory manager:
1. Determines number of pages in program.
2. Locates enough empty page frames in main memory.
3. Loads all of the program’s pages into them.
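A minimal sketch of these three steps, assuming hypothetical names and a simple free-frame list (not the book's implementation):

    import math

    def load_job(job_size, page_size, free_frames):
        # 1. Determine the number of pages in the program.
        num_pages = math.ceil(job_size / page_size)
        # 2. Locate enough empty page frames in main memory.
        if len(free_frames) < num_pages:
            raise MemoryError("not enough free page frames")
        # 3. Load the program's pages into them and record the mapping.
        return {page: free_frames.pop(0) for page in range(num_pages)}

    # Example: a 350-line job with 100-line pages needs 4 page frames.
    print(load_job(350, 100, [8, 10, 5, 11, 2]))   # {0: 8, 1: 10, 2: 5, 3: 11}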
Programs Are Split Into Equal-Sized Pages (Figure 3.1)
(Figure 3.1: Job 1, 350 lines long, is divided into four pages — Pages 0, 1, and 2 hold 100 lines each and Page 3 holds the remaining 50 lines. The pages are loaded into noncontiguous page frames of main memory alongside the operating system; the unused half of Page 3's frame is wasted space.)
Job 1 (Figure 3.1)
At compilation time every job is divided into pages:
– Page 0 contains the first hundred lines.
– Page 1 contains the second hundred lines.
– Page 2 contains the third hundred lines.
– Page 3 contains the last fifty lines.
• Program has 350 lines.
– Referred to by system as line 0 through line 349.
Paging Requires 3 Tables to Track a
Job’s Pages
1. Job table (JT) - 2 entries for each active job.
– Size of job & memory location of its page map table.
– Dynamic – grows/shrinks as jobs loaded/completed.
2. Page map table (PMT) - 1 entry per page.
– Page number & corresponding page frame memory
address.
– Page numbers are sequential (Page 0, Page 1 …)
3. Memory map table (MMT) - 1 entry for each page frame.
– Location & free/busy status.
Job Table Contains 2 Entries for Each Active Job (Table 3.1)

Job Table (a)           Job Table (b)           Job Table (c)
Job Size  PMT Location  Job Size  PMT Location  Job Size  PMT Location
400       3096          400       3096          400       3096
200       3100                                  700       3100
500       3150          500       3150          500       3150

(a) The JT initially has three entries, one for each job in process.
(b) When the second job ends, its entry in the table is released.
(c) That entry is then replaced by information about the next job to be processed.
Job 1 Is 350 Lines Long & Divided
Into 4 Pages (Figure 3.2)
Displacement (Figure 3.2)
• Displacement (offset) of a line: how far away the line is from the beginning of its page.
– Used to locate that line within its page frame.
– It is a relative value.
• For example, lines 0, 100, 200, and 300 are first lines for
pages 0, 1, 2, and 3 respectively so each has displacement
of zero.
To Find the Address of a Given
Program Line
• Divide the line number to be located by the page size: the quotient is the page number, and the remainder (kept as an integer) is the displacement.
(Diagram: long division of the line number by the page size, yielding the page number as the quotient and the displacement as the remainder.)
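For instance, assuming Job 1's 100-line pages and a hypothetical reference to line 214:

    page_number, displacement = divmod(214, 100)   # quotient and remainder in one step
    print(page_number, displacement)               # 2 14 -> 14 lines past the start of page 2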
Address Resolution
Each time an instruction is executed or a data value is used, the OS (or the hardware) must translate the job space address (relative) into a physical address (absolute).
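A minimal sketch of that translation under paging, assuming a hypothetical PMT (page number -> page frame number) and page frames the same size as pages:

    def resolve(line_number, page_size, pmt):
        # Relative (job space) address -> page number and displacement.
        page_number, displacement = divmod(line_number, page_size)
        frame = pmt[page_number]                  # page frame currently holding that page
        return frame * page_size + displacement   # absolute (physical) address

    print(resolve(214, 100, {0: 8, 1: 10, 2: 5, 3: 11}))   # frame 5 * 100 + 14 = 514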
Pros & Cons of Paging
• Allows jobs to be allocated in non-contiguous memory
locations.
– Memory used more efficiently; more jobs can fit.
• Size of page is crucial (not too small, not too large).
• Increased overhead occurs.
• Reduces, but does not eliminate, internal fragmentation.
Demand Paging
• Brings a page into memory only when it is needed, so less I/O and less memory are needed.
– Faster response.
• Takes advantage of the fact that programs are written sequentially, so not all pages are needed at once. For example:
– User-written error handling modules.
– Mutually exclusive modules.
– Certain program options that are either mutually exclusive or not always accessible.
– Many tables are assigned a fixed amount of address space even though only a fraction of each table is actually used.
Demand Paging - 2
• Demand paging made virtual memory widely available.
– Gives the appearance of an almost infinite amount of physical memory.
• Requires a high-speed direct access storage device that can work directly with the CPU.
• How and when the pages are passed (or “swapped”) depends on predefined policies that determine when to make room for needed pages and how to do so.
Tables in Demand Paging
• Job Table.
• Page Map Table (with 3 new fields).
1. Determines if the requested page is already in memory.
2. Determines if page contents have been modified.
3. Determines if the page has been referenced recently.
– Used to determine which pages should remain in main memory and which should be swapped out.
• Memory Map Table.
Page Map Table

Page   Status bit   Referenced bit   Modified bit   Page frame
0      1            1                1              5
1      1            0                0              9
2      1            0                0              7
3      1            1                0              12
Hardware Instruction Processing Algorithm

1. Start processing instruction.
2. Generate data address.
3. Compute page number.
4. If page is in memory
      Then
         get data and finish instruction
         advance to next instruction
         return to step 1
      Else
         generate page interrupt
         call page fault handler
Page Fault Handler Algorithm
1. If there is no free page frame
Then
Select page to be swapped out using page removal algorithm
Update job’s page map table
If content of page had been changed then
Write page to disk
End if
End if
2. Use the page number computed in step 3 of the hardware instruction processing
algorithm to get the disk address where the requested page is stored.
3. Read page into memory.
4. Update job’s page map table.
5. Update memory map table.
6. Restart interrupted instruction.
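A rough sketch that combines the two algorithms, assuming a FIFO removal policy and simplified tables (the removal policy and the disk transfer routines read_page_from_disk and write_page_to_disk are stand-ins, not part of the text):

    from collections import deque

    memory = {}                  # page frame -> page contents
    pmt = {}                     # page number -> {"frame": int, "modified": bool}
    free_frames = deque([0, 1])  # two free page frames, for illustration
    loaded_order = deque()       # FIFO queue of resident page numbers

    def access(page_number):
        if page_number in pmt:                        # page is in memory: finish the instruction
            return memory[pmt[page_number]["frame"]]
        # --- page fault handler ---
        if not free_frames:                           # no free page frame
            victim = loaded_order.popleft()           # page removal algorithm (FIFO assumed)
            entry = pmt.pop(victim)                   # update the job's page map table
            if entry["modified"]:
                write_page_to_disk(victim, memory[entry["frame"]])
            free_frames.append(entry["frame"])        # update the memory map table
        frame = free_frames.popleft()
        memory[frame] = read_page_from_disk(page_number)        # read page into memory
        pmt[page_number] = {"frame": frame, "modified": False}
        loaded_order.append(page_number)
        return memory[frame]                          # restart the interrupted instruction

    def read_page_from_disk(page_number):
        return "contents of page %s" % page_number    # placeholder for the disk read

    def write_page_to_disk(page_number, contents):
        pass                                          # placeholder for the disk write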
Thrashing Is a Problem
With Demand Paging
• Thrashing – an excessive amount of page swapping back
and forth between main memory and secondary storage.
– Operation becomes inefficient.
– Caused when a page is removed from memory but is
called back shortly thereafter.
– Can occur across jobs, when a large number of jobs are
vying for a relatively small number of free pages.
– Can happen within a job (e.g., in loops that cross page
boundaries).
• Page fault – a failure to find a page in memory.
Page Replacement Policies
• Policy that selects page to be removed is crucial to system
efficiency.
– Selection of algorithm is critical.
• First-in first-out (FIFO) policy* – best page to remove is
one that has been in memory the longest.
• Least-recently-used (LRU) policy* – chooses pages least
recently accessed to be swapped out.
• Most recently used (MRU) policy.
• Least frequently used (LFU) policy.
* Most well known policies
FIFO policy.
When program calls for Page C, Page A is moved out of 1st page
frame to make room for it (solid lines). When Page A is needed again,
it replaces Page B in 2nd page frame (dotted lines).
(Figure: two page frames, initially holding Page A and Page B. The requested pages C, A, and D are swapped in one at a time, each replacing the page that has been in memory the longest.)
How each page requested is swapped into the 2 available page frames using FIFO. When the program is ready to be processed, all 4 pages are on secondary storage. Throughout the program, 11 page requests are issued. When the program calls a page that isn’t already in memory, a page interrupt is issued (shown by *). 9 page interrupts result.

Page requested:  A   B   A   C   A   B   D   B   A   C   D
Page Frame 1:    A   A   A   C   C   B   B   B   A   A   D
Page Frame 2:        B   B   B   A   A   D   D   D   C   C
Interrupt:       *   *       *   *   *   *       *   *   *
Time:            1   2   3   4   5   6   7   8   9   10  11
FIFO
• The high failure rate shown in the previous example is caused by:
– the limited amount of memory available.
– the order in which pages are requested by the program (which can’t be changed).
• There is no guarantee that buying more memory will
always result in better performance (FIFO anomaly or
Belady's anomaly).
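A small sketch that replays the reference string from the example with two page frames under FIFO and counts the page interrupts (9, as above); the function name is made up for illustration:

    from collections import deque

    def fifo_interrupts(reference_string, num_frames):
        frames = deque()                      # oldest resident page at the left
        interrupts = 0
        for page in reference_string:
            if page not in frames:            # page interrupt
                interrupts += 1
                if len(frames) == num_frames:
                    frames.popleft()          # remove the page that has been resident longest
                frames.append(page)
            # on a hit, FIFO does not reorder anything
        return interrupts

    print(fifo_interrupts("ABACABDBACD", 2))  # 9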
LRU Policy
For the program in Figure 3.8: throughout the program, 11 page requests are issued, but they cause only 8 page interrupts.
Page requested:  A   B   A   C   A   B   D   B   A   C   D
Page Frame 1:    A   A   A   A   A   A   D   D   A   A   D
Page Frame 2:        B   B   C   C   B   B   B   B   C   C
Interrupt:       *   *       *       *   *       *   *   *
Time:            1   2   3   4   5   6   7   8   9   10  11
LRU
• The efficiency of LRU is only slightly better than that of FIFO.
• LRU is a stack algorithm removal policy – increasing main memory causes either a decrease in, or the same number of, page interrupts.
– LRU doesn’t exhibit the same anomaly that FIFO does.
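A matching sketch for LRU on the same reference string and two page frames; it reports 8 interrupts, as in the example (again, the name is illustrative):

    def lru_interrupts(reference_string, num_frames):
        frames = []                             # least recently used page at the left
        interrupts = 0
        for page in reference_string:
            if page in frames:
                frames.remove(page)             # a hit makes the page most recently used
            else:
                interrupts += 1                 # page interrupt
                if len(frames) == num_frames:
                    frames.pop(0)               # swap out the least recently used page
            frames.append(page)
        return interrupts

    print(lru_interrupts("ABACABDBACD", 2))     # 8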
Mechanics of Paging :
Page Map Table
• Status bit indicates if page is currently in memory or not.
• Referenced bit indicates if page has been referenced recently.
– Used by LRU to determine which pages should be swapped out.
• Modified bit indicates if page contents have been altered
– Used to determine if page must be rewritten to secondary storage
when it’s swapped out.
Page   Status bit   Referenced bit   Modified bit   Page frame
0      1            1                1              5
1      1            0                0              9
2      1            0                0              7
3      1            1                0              12
Four Possible Combinations of
Modified and Referenced Bits
          Modified   Referenced   Meaning
Case 1    0          0            Not modified AND not referenced
Case 2    0          1            Not modified BUT was referenced
Case 3    1          0            Was modified BUT not referenced (impossible?)
Case 4    1          1            Was modified AND referenced
Page Replacement : The Working Set
• Working set – set of pages residing in memory that can be
accessed directly without incurring a page fault.
– Improves performance of demand page schemes.
• Locality of reference occurs with well-structured
programs.
– During any phase of its execution program references only a small
fraction of its pages.
• System must decide:
– How many pages comprise the working set?
– What’s the maximum number of pages the operating system will
allow for a working set?
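One common way to approximate a working set is the set of distinct pages referenced within a sliding window of recent references; a toy sketch, with the window size as an assumed parameter rather than a value from the text:

    def working_set(reference_string, window_size):
        # Distinct pages touched in the last `window_size` references.
        return set(reference_string[-window_size:])

    print(working_set(list("ABACABDBACD"), 4))   # the pages {'A', 'B', 'C', 'D'} (in some order)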
Pros & Cons of Demand Paging
• First scheme in which a job was no longer constrained by
the size of physical memory (virtual memory).
• Uses memory more efficiently than previous schemes
because sections of a job used seldom or not at all aren’t
loaded into memory unless specifically requested.
• Increased overhead caused by tables and page interrupts.
Segmented Memory Allocation
• Based on common practice by programmers of structuring
their programs in modules (logical groupings of code).
– A segment is a logical unit such as: main program,
subroutine, procedure, function, local variables, global
variables, common block, stack, symbol table, or array.
• Main memory is not divided into page frames because size
of each segment is different.
– Memory is allocated dynamically.
Segment Map Table (SMT)
• When a program is compiled, segments are set up
according to program’s structural modules.
• Each segment is numbered and a Segment Map Table
(SMT) is generated for each job.
– Contains segment numbers, their lengths, access rights,
status, and (when each is loaded into memory) its
location in memory.
Tables Used in Segmentation
• Memory Manager needs to track segments in memory:
1. Job Table (JT) lists every job in process (one for the whole system).
2. Segment Map Table lists details about each segment (one for each job).
3. Memory Map Table monitors allocation of main memory (one for the whole system).
Pros & Cons of Segmentation
• Compaction is required.
• External fragmentation can occur.
• Secondary storage handling is more complex because segments vary in size.
• Memory is allocated dynamically.
Segmented/Demand Paged
Memory Allocation
• Evolved from combination of segmentation and demand
paging.
– Logical benefits of segmentation.
– Physical benefits of paging.
• Subdivides each segment into pages of equal size, smaller
than most segments, and more easily manipulated than
whole segments.
• Eliminates many problems of segmentation because it uses
fixed length pages.
4 Tables Are Used in
Segmented/Demand Paging
1. Job Table lists every job in process (one for whole
system).
2. Segment Map Table lists details about each segment (one
for each job).
– E.g., protection data, access data.
3. Page Map Table lists details about every page (one for
each segment).
– E.g., status, modified, and referenced bits.
4. Memory Map Table monitors allocation of page frames in
main memory (one for whole system).
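A minimal sketch of address resolution through these tables in a segmented/demand paged scheme — segment number, then page within the segment, then displacement. The table contents and page size are illustrative assumptions:

    PAGE_SIZE = 100                              # assumed page size

    # Segment Map Table: segment number -> that segment's Page Map Table.
    # Each PMT maps page number -> page frame (None = page not in memory).
    smt = {0: {0: 8, 1: 3},
           1: {0: 5, 1: None}}

    def resolve(segment, offset_in_segment):
        page, displacement = divmod(offset_in_segment, PAGE_SIZE)
        frame = smt[segment][page]
        if frame is None:
            raise RuntimeError("page fault: load the page, then retry")
        return frame * PAGE_SIZE + displacement  # absolute address

    print(resolve(0, 114))   # segment 0, page 1 -> frame 3 * 100 + 14 = 314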
Pros & Cons of Segment/Demand
Paging
• Overhead required for the extra tables
• Time required to reference segment table and page table.
• Logical benefits of segmentation.
• Physical benefits of paging
• To minimize number of references, many systems use
associative memory to speed up the process.
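A toy illustration of the associative-memory idea: cache recent (segment, page) -> frame translations so that most references skip the segment-table and page-table lookups. This is a conceptual sketch, not a model of the actual hardware registers:

    PAGE_SIZE = 100
    smt = {0: {0: 8, 1: 3}, 1: {0: 5}}      # segment -> page -> frame, as in the sketch above
    translation_cache = {}                   # (segment, page) -> frame

    def resolve_cached(segment, offset_in_segment):
        page, displacement = divmod(offset_in_segment, PAGE_SIZE)
        key = (segment, page)
        if key not in translation_cache:     # cache miss: walk the SMT, then the PMT
            translation_cache[key] = smt[segment][page]
        return translation_cache[key] * PAGE_SIZE + displacement

    print(resolve_cached(0, 114))   # 314 (miss: fills the cache)
    print(resolve_cached(0, 120))   # 320 (hit: no table lookups needed)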
Virtual Memory (VM)
• Even though only a portion of each program is stored in
memory, virtual memory gives appearance that programs
are being completely loaded in main memory during their
entire processing time.
• Shared programs and subroutines are loaded “on demand,”
reducing storage requirements of main memory.
• VM is implemented through demand paging and
segmentation schemes.
Comparison of VM With Paging and
With Segmentation
Virtual memory with paging                         Virtual memory with segmentation
Allows internal fragmentation within page frames   Doesn’t allow internal fragmentation
Doesn’t allow external fragmentation               Allows external fragmentation
Programs are divided into equal-sized pages        Programs are divided into unequal-sized segments
Absolute address calculated using page             Absolute address calculated using segment
number and displacement                            number and displacement
Requires PMT                                       Requires SMT
Advantages of VM
• Works well in a multiprogramming environment because most programs spend a lot of time waiting.
• Job’s size is no longer restricted to the size of main memory (or the free space within main memory).
• Memory is used more efficiently.
• Allows an unlimited amount of multiprogramming.
• Eliminates external fragmentation when used with paging and eliminates internal fragmentation when used with segmentation.
• Allows a program to be loaded multiple times, occupying a different memory location each time.
• Allows the sharing of code and data.
• Facilitates dynamic linking of program segments.
Disadvantages of VM
• Increased processor hardware costs.
• Increased overhead for handling paging interrupts.
• Increased software complexity to prevent thrashing.
Key Terms
• address resolution
• associative memory
• demand paging
• displacement
• FIFO anomaly
• first-in first-out (FIFO) policy
• Job Table (JT)
• least-recently-used (LRU) policy
• locality of reference
• Memory Map Table (MMT)
• page
• page fault
• page fault handler
• page frame
• Page Map Table (PMT)
• page replacement policy
• page swap
• paged memory allocation
• reentrant code
• segment
• Segment Map Table (SMT)
Key Terms - 2
• segmented memory allocation
• segmented/demand paged
memory allocation
• thrashing
• virtual memory
• working set