Address Translation


Address Translation
• Memory Allocation
– Linked lists
– Bit maps
• Options for managing memory
– Base and Bound
– Segmentation
– Paging
• Paged page tables
• Inverted page tables
– Segmentation with Paging
Memory Management with Linked Lists
• Hole – block of available memory; holes of various
sizes are scattered throughout memory.
• When a process arrives, it is allocated memory from
a hole large enough to accommodate it.
• Operating system maintains two linked lists for
a) allocated partitions b) free partitions (hole)
[Figure: successive snapshots of memory. The OS occupies the bottom; processes 5, 8, 2, 9, and 10 arrive and depart, leaving holes of various sizes that the two lists must track.]
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes.
• First-fit: Allocate the first hole that is big enough.
• Best-fit: Allocate the smallest hole that is big enough;
must search entire list, unless ordered by size.
Produces the smallest leftover hole.
• Worst-fit: Allocate the largest hole; must also search
entire list. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of
speed and storage utilization.
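A minimal sketch of the three policies over an assumed free list of (start, size) holes; the function names and list representation are illustrative, not from the slides:

```python
def first_fit(holes, n):
    """Return the index of the first hole with size >= n, or None."""
    for i, (_, size) in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Return the index of the smallest sufficient hole (scans the whole list)."""
    fits = [(size, i) for i, (_, size) in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Return the index of the largest sufficient hole (also scans the whole list)."""
    fits = [(size, i) for i, (_, size) in enumerate(holes) if size >= n]
    return max(fits)[1] if fits else None
```

Note that best-fit and worst-fit must examine every hole, while first-fit can stop at the first match.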
Memory Allocation with Bit Maps
• Part of memory with 5 processes, 3 holes
– tick marks show allocation units
– shaded regions are free
• Corresponding bit map
• Same information as a list
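The bitmap can be searched directly for a run of free units; a sketch, with illustrative names, using the example bitmap from the next slide's discussion of paging:

```python
def find_free_run(bitmap, k):
    """Return the start index of the first run of k free ('0') units, or None."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == "0" else 0  # a '1' (allocated) resets the run
        if run == k:
            return i - k + 1
    return None
```
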
Fragmentation
• External Fragmentation
– Have enough memory, but not contiguous
– Can’t satisfy requests
– Ex: first fit wastes 1/3 of memory on average
• Internal Fragmentation – allocated memory may be
slightly larger than requested memory; this size
difference is memory internal to a partition, but not being
used.
• Reduce external fragmentation by compaction
– Shuffle memory contents to place all free memory together in
one large block.
Segmentation
• A segment is a region of logically contiguous
memory.
• Idea is to generalize base and bounds by
allowing a table of base-and-bound pairs.
• Break virtual memory into segments
• Use contiguous physical memory per segment
• Divide each virtual address into:
– segment number
– segment offset
– Note: compiler does this, not hardware
Segmentation – Address Translation
• Use segment # to index into segment table
– segment table is table of base/limit pairs
• Compare to limit, Add base
– exactly the same as base/bound scheme
• If the offset is greater than the limit (or the access is illegal)
– segmentation fault
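The compare-and-add step can be sketched in a few lines, assuming the segment table is simply a list of (base, limit) pairs:

```python
def translate(seg_table, seg_num, offset):
    """Translate (segment #, offset) to a physical address, or fault."""
    base, limit = seg_table[seg_num]
    if offset >= limit:
        raise MemoryError("segmentation fault")  # trap to the OS
    return base + offset  # exactly the base/bound scheme, per segment
```
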
Segmentation Hardware
For example, what does it look like with this segment
table, in virtual memory and physical memory?
Assume a 2-bit segment ID and a 12-bit segment offset.
virtual segment #    physical segment start    segment size
code  (0)            0x4000                    0x700
data  (1)            0x0                       0x500
-     (2)            0x0                       0x0
stack (3)            0x2000                    0x1000
virtual memory                  physical memory
  0x0    – 0x6ff   code           0x0    – 0x4ff   data
  0x1000 – 0x14ff  data           0x2000 – 0x2fff  stack
  0x3000 – 0x3fff  stack          0x4000 – 0x46ff  code
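A worked sketch for the example table, with the 2-bit segment number in the top bits and the 12-bit offset below; for instance, virtual address 0x3100 lies in the stack segment and maps to 0x2000 + 0x100 = 0x2100:

```python
# Segment # -> (base, size); segment 2 is unused in the example.
SEG_TABLE = {0: (0x4000, 0x700),   # code
             1: (0x0000, 0x500),   # data
             3: (0x2000, 0x1000)}  # stack

def translate(vaddr):
    seg, off = vaddr >> 12, vaddr & 0xFFF  # 2-bit segment #, 12-bit offset
    if seg not in SEG_TABLE:
        raise MemoryError("segmentation fault")  # addressed a gap
    base, size = SEG_TABLE[seg]
    if off >= size:
        raise MemoryError("segmentation fault")  # past the segment limit
    return base + off
```
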
Segmentation (cont.)
• This should seem a bit strange: the virtual address
space has gaps in it! Each segment gets mapped to
contiguous locations in physical memory, but there may
be gaps between segments.
• But, a correct program will never address gaps; if it does,
trap to kernel and then core dump.
• Minor exception: the stack and heap can grow. In UNIX, sbrk()
increases the size of the heap segment. For the stack, just take a
fault and the system automatically increases the size of the stack.
• Detail: Need protection mode in segmentation table. For
example, code segment would be read-only. Data and
stack segment would be read-write.
Segmentation Pros & Cons
+ Efficient for sparse address spaces
• Can keep segment table in registers
+ Easy to share whole segments (for example, code
segment)
+ Easy for address space to grow
– Complicated memory allocation: still need first fit, best fit,
etc., and re-shuffling to coalesce free fragments if no
single free space is big enough for a new segment
How do we make memory allocation simple and easy?
Paging
• Allocate physical memory in terms of fixed size
chunks of memory, or pages.
• Simpler, because allows use of a bitmap. What
is a bitmap?
001111100000001100
Each bit represents one page of physical
memory – 1 means allocated, 0 means
unallocated. Much simpler than base & bounds
or segmentation.
• OS controls mapping: any page of virtual
memory can go anywhere in physical memory
Paging (cont.)
• Avoids fitting variable-sized memory units
• Break physical memory into frames
– Determined by hardware
• Break virtual memory into pages
– Pages and frames are the same size
• Divide each virtual address into:
– Page number
– Page offset
• Note:
– With paging, hardware splits address
– With segmentation, compiler generates segmented
code.
Paging – Address Translation
• Index into page table with high order bits
– Get physical frame
• Append that to offset
• Now present new address to memory
Note: the kernel keeps track of free frames;
this can be done with a bitmap.
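The steps above can be sketched as follows, assuming 4 KB pages (12-bit offset) and a hypothetical page table:

```python
PAGE_SHIFT = 12                 # 4 KB pages -> 12-bit offset
PAGE_TABLE = {0: 5, 1: 2}       # page # -> frame #; mappings are illustrative

def translate(vaddr):
    page = vaddr >> PAGE_SHIFT                 # high-order bits index the table
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)   # low-order bits pass through
    frame = PAGE_TABLE[page]                   # a real MMU faults on a miss
    return (frame << PAGE_SHIFT) | offset      # append frame # to offset
```
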
Address Translation Architecture
Paging Example
virtual memory (pages 0–2, holding bytes A–L):
  page 0: A B C D
  page 1: E F G H
  page 2: I J K L

page table: page 0 → frame 4, page 1 → frame 3, page 2 → frame 1

physical memory:
  frame 1: I J K L
  frame 3: E F G H
  frame 4: A B C D

Where is virtual address 6? 9?
Note: page size is 4 bytes.
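The translations asked for above can be computed mechanically: with a 4-byte page, virtual address 6 is page 1, offset 2, and virtual address 9 is page 2, offset 1.

```python
PAGE_SIZE = 4
PAGE_TABLE = [4, 3, 1]  # page # -> frame #, from the example

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    return PAGE_TABLE[page] * PAGE_SIZE + offset
```

So virtual address 6 (byte G) is at physical address 14, and virtual address 9 (byte J) is at physical address 5.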
Paging Issues
• Fragmentation
• Page Size
• Protection and Sharing
• Page Table Size
• What if the page table is too big for main memory?
Fragmentation and Page Size
• Fragmentation
– No external fragmentation
• Page can go anywhere in main memory
– Internal fragmentation
• Average: ½ page per address space
• Page Size
– Small size?
• Reduces (on average) internal fragmentation
– Large size?
• Better for page table size
• Better for disk I/O
• Better for number of page faults
– Typical page sizes: 4K, 8K, 16K
Protection and Sharing
• Page protection – use a bit
– code: read only
– data: read/write
• Sharing
– Just “map pages in” to your address space
• Ex: if “vi” is at frames 0 through 10, all processes
can adjust their page tables to “point to” those
frames.
Page Tables can be Large
• Page table is kept in main memory.
• Page-table base register (PTBR) points to the
page table.
• Page-table length register (PTLR) indicates the size
of the page table.
• In this scheme every data/instruction access
requires two memory accesses: one for the
page table and one for the data/instruction.
• Page table can be huge (a million or more entries)
• Use multi-level page table
– 2 page numbers and one offset
Two-Level Paging Example
• A logical address (on 32-bit machine with 4K page size) is
divided into:
– a page number consisting of 20 bits.
– a page offset consisting of 12 bits.
• Since the page table is paged, the page number is further
divided into:
– a 10-bit page number.
– a 10-bit page offset.
• Thus, a logical address is as follows:
  |    page number    | page offset |
  |   p1   |   p2     |      d      |
     10       10            12
where p1 is an index into the outer page table, and p2 is the
displacement within the page of the outer page table.
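Extracting the three fields is simple bit manipulation; a sketch:

```python
def split(vaddr):
    """Split a 32-bit logical address into (p1, p2, d): 10 + 10 + 12 bits."""
    d  = vaddr & 0xFFF          # 12-bit page offset
    p2 = (vaddr >> 12) & 0x3FF  # 10-bit index into an inner page table
    p1 = (vaddr >> 22) & 0x3FF  # 10-bit index into the outer page table
    return p1, p2, d
```
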
Address-Translation Scheme
• Address-translation scheme for a two-level 32-bit
paging architecture
Inverted Page Table (“Core Map”)
• One entry for each real page of memory.
• Entry consists of the virtual address of the page
stored in that real memory location, with
information about the process that owns that
page.
• Decreases memory needed to store each page
table, but increases time needed to search the
table when a page reference occurs.
• Use hash table to limit the search to one — or at
most a few — page-table entries.
• Good for page replacement
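A sketch of the idea, using a Python dict of buckets as a stand-in for the hardware hash table; all names and sizes are illustrative:

```python
from collections import defaultdict

N_FRAMES = 8
core_map = [None] * N_FRAMES        # frame # -> (pid, virtual page #), or None
hash_buckets = defaultdict(list)    # hash of (pid, vpage) -> candidate frames

def map_page(pid, vpage, frame):
    core_map[frame] = (pid, vpage)
    hash_buckets[hash((pid, vpage)) % N_FRAMES].append(frame)

def lookup(pid, vpage):
    """Find the frame holding (pid, vpage), searching only one hash bucket."""
    for frame in hash_buckets[hash((pid, vpage)) % N_FRAMES]:
        if core_map[frame] == (pid, vpage):
            return frame
    return None  # not resident: page fault
```
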
Inverted Page Table Architecture
Paging the Segments
• Divide address into three parts
– Segment number
– Page number
– Offset
• Segment table contains addr. of page
table
• Use that and page # to get frame #
• Combine frame number and offset
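The steps above can be sketched as follows, with hypothetical 4 KB pages and toy tables; each table lookup here corresponds to one of the memory accesses this scheme can require:

```python
PAGE_SHIFT, PAGE_MASK = 12, 0xFFF   # 4 KB pages (assumed)
SEG_TABLES = {0: [4, 3]}            # segment # -> that segment's page table

def translate(seg, vaddr):
    page_table = SEG_TABLES[seg]                # access 1: segment table
    frame = page_table[vaddr >> PAGE_SHIFT]     # access 2: page table
    return (frame << PAGE_SHIFT) | (vaddr & PAGE_MASK)  # access 3: the data
```
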
What does this buy you?
• Simple management of physical memory
– Paging (just a bitmap)
• Maintain logical structure
– Segmentation
• However,
– Possibly 3 memory accesses to get to
memory!
MULTICS Address Translation Scheme