
Chapter 4: Memory
Instructor: Hengming Zou, Ph.D.
In Pursuit of Absolute Simplicity (求于至简,归于永恒: seek the utmost simplicity, return to the eternal)
1
Content
 Basic memory management
 Swapping
 Virtual memory
 Page replacement algorithms
 Implementation issues
 Segmentation
2
Memory Management
 Ideally, programmers want memory that is
– large
– fast
– non-volatile
3
Memory Hierarchy
 Small amount of fast, expensive memory
– cache
 Some medium-speed, medium price memory
– main memory
 Gigabytes of slow, cheap memory
– disk storage
4
Virtual Memory
 An illusion provided to the applications
 An address space can be larger than
– the amount of physical memory on the machine
– This is called virtual memory
 Memory manager handles the memory hierarchy
5
Basic Memory Management
 Mono-programming
– without Swapping or Paging
 Multi-programming with Fixed Partitions
– Swapping may be used
6
Uni-programming
 One process runs at a time
– One process occupies memory at a time
 Always load process into the same memory spot
 And reserve some space for the OS
7
Uni-programming
 Three simple ways of organizing memory for
– an operating system with one user process
[Figure: (a) OS in RAM at the bottom of memory, user program above (addresses up to 0xFFF…); (b) OS in ROM at the top of memory, user program below, starting at 0; (c) OS split between ROM at the top and RAM at the bottom, with the user program in the middle]
8
Uni-programming
 Achieves address independence by
– Loading the process into the same physical memory location
 Problems with uni-programming?
– Must load each process in its entirety (what if there is not enough space?)
– Wastes resources (both CPU and memory)
9
Multi-programming
 More than one process is in memory at a time
 Need to support address translation
– Address from instruction may not be the final address
 Need to support protection
– A process must not be able to access other processes’ space
10
Multiprogramming with Fixed Partitions
 Two options exist for fixed memory partitions
 Separate input queues for each partition
– Incoming processes are allocated to a fixed partition
 Single input queue
– Incoming processes can go to any partition
– Assuming the partition is big enough to hold the process
11
Multiprogramming with Fixed Partitions
12
Benefit of Multiprogramming
 Improve resource utilization
– All resource can be kept busy with proper care
 Improve response time
– No need to wait for the previous process to finish before receiving feedback from the computer
13
CPU Utilization of Multiprogramming
 Assume processes spend 80% of their time waiting for I/O
 CPU utilization for uni-programming: 20%
 CPU utilization for two processes: 36%
– Rough estimate only (i.e. ignores overhead)
– CPU utilization = 1 − 0.8×0.8 = 0.36
 CPU utilization for three processes: 48.8%
14
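The estimate generalizes to utilization = 1 − p^n for n processes, each waiting for I/O a fraction p of the time. A minimal C sketch of this calculation (illustrative; overhead ignored, as on the slide):

#include <math.h>
#include <stdio.h>

/* CPU utilization estimate: 1 - p^n, where p is the fraction of time
   a process spends waiting for I/O and n is the degree of
   multiprogramming (overhead ignored). */
int main(void) {
    double p = 0.8;                                   /* 80% I/O wait */
    for (int n = 1; n <= 4; n++)
        printf("n = %d: utilization = %.1f%%\n",
               n, 100.0 * (1.0 - pow(p, n)));
    return 0;                    /* prints 20.0, 36.0, 48.8, 59.0 */
}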
CPU Utilization of Multiprogramming
[Figure: CPU utilization as a function of the degree of multiprogramming]
15
Multiprogramming System Performance
 Given:
– Arrival and work requirements of 4 jobs
– CPU utilization for 1 – 4 jobs with 80% I/O wait
 Plot:
– Sequence of events as jobs arrive and finish
– Show amount of CPU time jobs get in each interval
16
Multiprogramming System Performance
17
Multiprogramming Issues
 Cannot be sure where program will be loaded
– Addresses of variables and code routines cannot be absolute
 Solution:
– Use base and limit values
– The address is added to the base value to map it to a physical address
– Addresses larger than the limit value are an error
18
Multiprogramming Issues
 Protecting processes from each other
– Must keep a program out of other processes’ partitions
 Solution:
– Address translation
– Must translate addresses issued by a process so they don’t
conflict with addresses issued by other processes
19
Address Translation
 Static address translation
– Translate addresses before execution
– Translation remains constant during execution
 Dynamic address translation
– Translate addresses during execution
– Translation may change during execution
20
Address Translation
 Is it possible to:
– Run two processes at the same time (both in memory), and provide address independence with only static address translation?
– Yes: relocate each process’s addresses at load time
 Does this achieve the other address space abstractions?
– No (i.e. it does not offer address protection)
21
Address Translation
 Achieving all the address space abstractions (protection and
independence) requires doing some work on every memory
reference
 Solution:
– Dynamic address translation
22
Dynamic Address Translation
 Translate every memory reference from virtual address to
physical address
 Virtual address:
– An address viewed by the user process
– The abstraction provided by the OS
 Physical address
– An address viewed by the physical memory
23
Dynamic Address Translation
User process → virtual address → translator (MMU) → physical address → physical memory
24
Benefit of Dynamic Translation
 Enforces protection
– One process can’t even refer to another process’s address
space
 Enables virtual memory
– A virtual address only needs to be in physical memory when
it’s being accessed
– Change translations on the fly as different virtual addresses
occupy physical memory
25
Dynamic Address Translation
 Does dynamic address translation require hardware support?
– Hardware support is preferable, but not absolutely necessary (translation can also be done in software, at a cost)
26
Implement Translator
 Lots of ways to implement the translator
 Tradeoffs among:
– Flexibility (e.g. sharing, growth, virtual memory)
– Size of translation data
– Speed of translation
27
Base and Bounds
 The simplest solution
 Load each process into contiguous regions of physical
memory
 Prevent each process from accessing data outside its region
28
Base and Bounds
if (virtual address >= bound) {    // addresses run [0, bound), so an address equal to bound is out of range
    trap to kernel; kill process (core dump)
} else {
    physical address = virtual address + base
}
29
Base and Bounds
 Process has illusion of running on its own dedicated
machine with memory [0, bound)
[Figure: virtual addresses [0, bound) map to physical addresses [base, base + bound)]
30
Base and Bounds
 This is similar to linker-loader
– But also protects processes from each other
 Only kernel can change base and bounds
 During context switch, must change all translation data
(base and bounds registers)
31
Base and Bounds
 What to do when address space grows?
32
Pros of Base and Bounds
 Low hardware cost
– 2 registers, adder, comparator
 Low overhead
– Add and compare on each memory reference
33
Cons of Base and Bounds
 Hard for a single address space to be larger than physical
memory
 But sum of all address spaces can be larger than physical
memory
– Swap an entire address space out to disk
– Swap address space for new process in
34
Cons of Base and Bounds
 Can’t share part of an address space between processes
Physical memory
virtual address
(process 1)
data (P2)
virtual address
(process 2)
data (P1)
data (P1)
data (P2)
code
code
code
virtual memory
virtual memory
Does this work under base and bound?
35
Cons of Base and Bounds
 Solution:
 use 2 sets of base and bounds
 one for code section
 one for data section
36
Swapping
 Memory allocation changes as:
 Processes come into memory
 Processes leave memory
 Taking processes out of memory and bringing them back in is called “swapping”
37
Swapping
38
Swapping
 Problem with previous situation?
 Difficult to grow process space
– i.e. stack, data, etc.
 Solution:
– Allocating space for growing data segment
– Allocating space for growing stack & data segment
39
Swapping
40
Memory Mgmt Issues
 Need to keep track of which memory is used and which is available
 Bit map approach
 Linked list approach
41
Memory Mgmt with Bit
 Divide memory into allocation units
 Use one bit to mark if the unit is allocated or not
42
Memory Mgmt with Linked List
 Divide memory into allocation units
 Use a linked list to indicate allocated and available units in
chunks
43
Memory Mgmt with Bit Maps
(a) Memory allocation; (b) bit-map representation; (c) linked-list representation
44
Memory Mgmt with Linked Lists
 What happens when units are released?
 May need to coalesce adjacent free entries in the list (see the sketch below)
45
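A minimal C sketch of this coalescing step (the node layout is assumed for illustration, not taken from the slides):

#include <stdlib.h>

/* Hypothetical free-list node: each node covers a run of allocation
   units that is either a hole or an allocated region. */
struct mem_node {
    int start, length;            /* extent, in allocation units */
    int is_hole;                  /* 1 = free, 0 = allocated */
    struct mem_node *prev, *next;
};

/* Merge node b into node a; a must immediately precede b. */
static void merge(struct mem_node *a, struct mem_node *b) {
    a->length += b->length;
    a->next = b->next;
    if (b->next) b->next->prev = a;
    free(b);
}

/* Releasing a region collapses the list: if either neighbor is also
   a hole, the adjacent holes are combined into one. */
void release(struct mem_node *n) {
    n->is_hole = 1;
    if (n->next && n->next->is_hole) merge(n, n->next);
    if (n->prev && n->prev->is_hole) merge(n->prev, n);
}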
Memory Mgmt with Linked Lists
46
External Fragmentation
 Processes come and go
 Can leave a mishmash of available mem regions
 Some regions may be too small to be of any use
47
External Fragmentation
 P1 start:100 KB (phys. mem. 0-99 KB)
 P2 start:200 KB (phys. mem. 100-299 KB)
 P3 start:300 KB (phys. mem. 300-599 KB)
 P4 start:400 KB (phys. mem. 600-999 KB)
 P3 exits (frees phys. mem. 300-599 KB)
 P5 start:100 KB (phys. mem. 300-399 KB)
 P1 exits (frees phys. mem. 0-99 KB)
 P6 start:300 KB
48
External Fragmentation
 300 KB are free (400-599 KB; 0-99 KB)
– but not contiguous
 This is called “external fragmentation”
– wasted memory between allocated regions
 Can waste lots of memory
49
Strategies to Minimize Fragmentation
 Best fit:
– Allocate the smallest memory region that can satisfy the
request (least amount of wasted space)
 First fit:
– Allocate the memory region that you find first that can satisfy
the request
50
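A short C sketch of both policies over a free list, reusing the hypothetical mem_node layout from the earlier sketch:

/* First fit: take the first hole that is big enough (fast). */
struct mem_node *first_fit(struct mem_node *head, int units) {
    for (struct mem_node *n = head; n != NULL; n = n->next)
        if (n->is_hole && n->length >= units)
            return n;
    return NULL;                    /* no hole large enough */
}

/* Best fit: take the smallest hole that is big enough (least waste
   per request, but it tends to leave many tiny, useless holes). */
struct mem_node *best_fit(struct mem_node *head, int units) {
    struct mem_node *best = NULL;
    for (struct mem_node *n = head; n != NULL; n = n->next)
        if (n->is_hole && n->length >= units &&
            (best == NULL || n->length < best->length))
            best = n;
    return best;
}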
Strategies to Minimize Fragmentation
 In worst case, must re-allocate existing memory regions
– by copying them to another area (memory compaction)
51
Problems of Fragmentation
 Hard to grow address space
– Might have to move to different region of physical memory
(which is slow)
 How to extend more than one contiguous data structure in
virtual memory?
52
Paging
 Allocate physical memory in terms of fixed-size chunks of
memory (called pages)
– fixed unit makes it easier to allocate
– any free physical page can store any virtual page
 Virtual address
– virtual page # (high bits of address, e.g. bits 31-12)
– offset (low bits of addr, e.g. bits 11-0 for 4 KB page)
53
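The split can be expressed with a shift and a mask; a minimal C sketch for the 4 KB case above:

#include <stdint.h>

#define PAGE_SHIFT 12                         /* 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)         /* 4096 bytes */

/* Virtual page number: bits 31-12 of the address. */
static inline uint32_t vpn(uint32_t va)    { return va >> PAGE_SHIFT; }

/* Offset within the page: bits 11-0 of the address. */
static inline uint32_t offset(uint32_t va) { return va & (PAGE_SIZE - 1); }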
Paging
 Processes access memory by virtual addresses
 Each virtual memory reference is translated into physical
memory reference by the MMU
54
Paging
55
Paging
 Page translation process:
if (virtual page is invalid, non-resident, or protected) {
    trap to OS fault handler
} else {
    physical page # = pageTable[virtual page #].physPageNum
}
56
Paging
 What must be changed on a context switch?
– Page table, registers, cache image
– Possibly memory images
 Each virtual page can be in physical memory or paged out to
disk
57
Paging
 How does processor know that virtual page is not in physical
memory?
– Through a valid/invalid bit in page table
 Pages can have different protections
– e.g. read, write, execute
– This information is also kept in the page table
58
Page Table
 Used to keep track of the virtual-to-physical page mapping
 One entry for each virtual page
 Also keeps other relevant information, such as read/write/execute protection and valid bits
 The MMU uses it to perform address translation
61
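One possible C rendering of such an entry (the exact field layout is dictated by the hardware; this struct is only an illustration):

#include <stdint.h>

struct pte {
    uint32_t phys_page  : 20;   /* physical page number */
    uint32_t valid      : 1;    /* page is resident in memory */
    uint32_t readable   : 1;    /* protection bits, set by the OS */
    uint32_t writable   : 1;
    uint32_t executable : 1;
    uint32_t referenced : 1;    /* set by the MMU on any access */
    uint32_t dirty      : 1;    /* set by the MMU on a write */
};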
Page Table
Typical page table entry
62
Paging
 The relation between virtual addresses and physical memory addresses is given by the page table
63
Paging
 The internal operation of the MMU with 16 4-KB pages
64
Paging Pros and Cons
 + simple memory allocation
 + can share lots of small pieces of an addr space
 + easy to grow the address space
– Simply add a virtual page to the page table
– and find a free physical page to hold the virtual page before
accessing it
65
Paging Pros and Cons
 Problems with paging?
 The size of the page table could be enormous
 Take a 32-bit virtual address for example
 Assume the page size is 4 KB
 Then there are 2^20 (over one million) virtual pages
 For a 64-bit virtual address?
66
Paging Pros and Cons
 The solution?
 Use multi-level translation!
 Break page tables into 2 or more levels
 Top-level page table always resides in memory
 Second-level page tables in memory as needed
67
Multi-level Translation
 Standard page table is a simple array
– one degree of indirection
 Multi-level translation changes this into a tree
– multiple degrees of indirection
68
Multi-level Translation
 Example: two-level page table
 Index into the level 1 page table using virtual address bits
31-22
 Index into the level 2 page table using virtual address bits
21-12
 Page offset: bits 11-0 (4 KB page)
69
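A minimal C sketch of this two-level walk (the table structures are assumptions chosen to match the bit split above):

#include <stdint.h>
#include <stddef.h>

struct l2 { uint32_t phys_page[1024]; };   /* indexed by bits 21-12 */
struct l1 { struct l2 *tables[1024]; };    /* indexed by bits 31-22 */

/* Returns 0 and fills *pa on success, -1 to signal a fault. */
int translate(const struct l1 *top, uint32_t va, uint32_t *pa) {
    struct l2 *second = top->tables[va >> 22];              /* bits 31-22 */
    if (second == NULL)
        return -1;                                          /* no level 2 table */
    uint32_t frame = second->phys_page[(va >> 12) & 0x3ff]; /* bits 21-12 */
    *pa = (frame << 12) | (va & 0xfff);                     /* bits 11-0 */
    return 0;
}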
Multi-level Translation
70
Multi-level Translation
 What info is stored in the level 1 page table?
– Information concerning secondary-level page tables
 What info is stored in the level 2 page table?
– Virtual-to-physical page mappings
71
Multi-level Translation
 This is a two-level tree
[Figure: a two-level tree. Level 1 page table entries 0, 1, and 3 point to level 2 page tables; entry 2 is NULL. Each level 2 table is indexed by virtual address bits 21-12 and holds physical page numbers (e.g. 0→10, 1→15, 2→20, 3→2)]
72
Multi-level Translation
 How does this allow the translation data to take less space?
 How can memory be shared when using multi-level page tables?
 What must be changed on a context switch?
73
Multi-level Translation
 Pros and cons
 + space-efficient for sparse address spaces
 + easy memory allocation
 + lots of ways to share memory
 - two extra lookups per memory reference
75
Inverted Page Table
 An alternate solution to big table size problem
 Rather than storing a virtual-to-physical mapping
 We store a physical-to-virtual mapping (one entry per physical page)
 This significantly reduces the page table size
76
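Since the table is indexed by physical page rather than virtual page, finding the entry for a virtual page requires a search, usually via a hash. A C sketch (the layout and hash function are assumptions for illustration):

#include <stdint.h>

enum { PHYS_PAGES = 1024, HASH_SIZE = 1024 };

struct ipte { int pid; uint32_t vpn; int next; };  /* next: chain index, -1 ends */
static struct ipte table[PHYS_PAGES];              /* one entry per physical page */
static int hash_head[HASH_SIZE];                   /* first chain entry, -1 if none
                                                      (initialized elsewhere) */

static unsigned hash(int pid, uint32_t vpn) { return (pid ^ vpn) % HASH_SIZE; }

/* Returns the physical page # holding (pid, vpn), or -1 if not resident. */
int ipt_lookup(int pid, uint32_t vpn) {
    for (int i = hash_head[hash(pid, vpn)]; i != -1; i = table[i].next)
        if (table[i].pid == pid && table[i].vpn == vpn)
            return i;                /* the entry index is the physical page # */
    return -1;
}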
Inverted Page Tables
77
Comparing Basic Translation Schemes
 Base and bound:
– unit (and swapping) is an entire address space
 Segments: unit (and swapping) is a segment
– a few large, variable-sized segments per addr space
 Page: unit (and swapping/paging) is a page
– lots of small, fixed-sized pages per address space
– How to modify paging to take less space?
78
Translation Speed
 Translation when using paging involves 1 or more additional
memory references
– This can be a big issue if not taken care of
 How to speed up the translation process?
 Solution:
– Translation look-aside buffer
79
Translation Look-aside Buffer
 Facility to speed up memory access
 Abbreviated as TLB
 TLB caches translations from virtual page # to physical page #
 TLB conceptually caches the entire page table entry, e.g. dirty bit, reference bit, protection
80
Translation Look-aside Buffer
 If the TLB contains the entry you're looking for
– you can skip all the translation steps above
 On a TLB miss, figure out the translation by
– getting the user's page table entry,
– storing it in the TLB, then restarting the instruction
81
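A software model of the lookup in C (real hardware searches all slots in parallel; struct pte refers to the illustrative entry sketched earlier):

#define TLB_SLOTS 16

struct tlb_entry { uint32_t vpn; struct pte entry; int valid; };
static struct tlb_entry tlb[TLB_SLOTS];

/* Returns the cached page table entry, or NULL on a TLB miss; on a
   miss, the miss handler fetches the entry from the page table,
   installs it in the TLB, and restarts the instruction. */
struct pte *tlb_lookup(uint32_t vpn) {
    for (int i = 0; i < TLB_SLOTS; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return &tlb[i].entry;
    return NULL;
}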
A TLB to Speed Up Paging
82
Translation Look-aside Buffer
 Does this change what happens on a context switch?
83
Replacement
 One design dimension in virtual memory is
– which page to replace when you need a free page?
 Goal is to reduce the number of page faults
– i.e. a page to be accessed is not in memory
84
Replacement
 Modified page must first be saved
– an unmodified page can simply be overwritten
 Better not to choose an often used page
– will probably need to be brought back in soon
85
Replacement Algorithms
 Random replacement
 Optimal replacement
 NRU (not recently used) replacement
 FIFO (first in first out) replacement
 Second chance replacement
86
Replacement Algorithms
 LRU (least recently used) replacement
 Clock replacement
 Working set replacement
 Working set clock (WSClock) replacement
87
Random Replacement
 Randomly pick a page to replace
 Easy to implement, but poor results
88
Optimal Replacement
 Replace page needed at farthest point in future
– i.e. page that won’t be used for the longest time
– this yields the minimum number of misses
– but requires knowledge of the future
 Forecasting the future is difficult, if possible at all
89
NRU Replacement
 Replace page not recently used
 Each page has Reference bit, Modified bit
– bits are set when page is referenced, modified
90
NRU Replacement
 Pages are classified into four classes:
– not referenced, not modified
– not referenced, modified
– referenced, not modified
– referenced, modified
 NRU removes page at random
– from the lowest-numbered non-empty class (class number = 2×R + M)
91
FIFO Replacement
 Replace the page that was brought into memory the longest
time ago
 Maintain a linked list of all pages
– in order they came into memory
 Page at beginning of list replaced
92
FIFO Replacement
 Unfortunately, this can replace popular pages that were brought into memory a long time ago (and have been used frequently since then)
93
Second Chance Algorithm
 A modification to FIFO
 Same as FIFO, but a page is evicted only if its R bit is 0
 If the R bit is 1, the page is moved to the end of the list
– And its R bit is cleared
 i.e. the page brought in longest ago but with its R bit set is given a second chance
94
Second Chance Algorithm
Page list if fault occurs at time 20, A has R bit set
(numbers above pages are loading times)
95
LRU Replacement
 LRU stands for Least Recently Used
 Use past references to predict the future
– temporal locality
 If a page hasn’t been used for a long time
– it probably won’t be used again for a long time
96
LRU Replacement
 LRU is an approximation to OPT
 Can we approximate LRU to make it easier to implement
without increasing miss rate too much?
 Basic idea is to replace an old page
– not necessarily the oldest page
97
LRU Replacement
 Must keep a linked list of pages
– most recently used at front, least at rear
– update this list every memory reference !!
 Alternatively use counter in each page table entry
– choose page with lowest value counter
– periodically zero the counter
98
Implementing LRU with Matrix
 Another option is to use n×n matrix
– Here n is the number of pages in virtual space
 The matrix is set to zero initially
 Whenever a page k is referenced:
– Row k is set to all ones, then column k is set to all zeros
 Whenever need to pick a page to evict
– Pick the one with the smallest number (row value)
99
Implementing LRU with Matrix
Pages referenced in order 0,1,2,3,2,1,0,3,2,3
100
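A C sketch with each matrix row kept in one machine word (assumes n ≤ 32 pages; illustrative only):

#include <stdint.h>

enum { N = 4 };                        /* number of virtual pages */
static uint32_t row[N];                /* row[i] bit j = matrix[i][j] */

/* On a reference to page k: row k := all ones, column k := all zeros. */
void lru_reference(int k) {
    row[k] = (1u << N) - 1;
    for (int i = 0; i < N; i++)
        row[i] &= ~(1u << k);
}

/* Victim: the page whose row has the smallest value. */
int lru_victim(void) {
    int best = 0;
    for (int i = 1; i < N; i++)
        if (row[i] < row[best])
            best = i;
    return best;
}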
Implementing LRU with Aging
 Each page corresponds to a shift register
– The shift register is initially set to zero
 At every clock tick, the register is shifted one bit to the right, and the R bit is inserted as the new leftmost bit
 Whenever need to pick a page to evict
– Pick the one with the smallest number
101
Implementing LRU with Aging
102
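A C sketch of one clock tick of aging with 8-bit counters (sizes are assumptions for illustration):

#include <stdint.h>

enum { PAGES = 4 };
static uint8_t age[PAGES];      /* the "shift register" per page */
static int r_bit[PAGES];        /* R bits, set by the MMU on access */

/* At each tick: shift right, insert R as the new leftmost bit, then
   clear R. The page with the smallest age value is the victim. */
void aging_tick(void) {
    for (int i = 0; i < PAGES; i++) {
        age[i] = (uint8_t)((age[i] >> 1) | (r_bit[i] << 7));
        r_bit[i] = 0;
    }
}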
The Clock Algorithm
 Maintain “referenced” bit for each resident page
– set automatically when the page is referenced
 The reference bit can be cleared by the OS
 The resident pages are organized into a circular list (the clock)
 A clock hand points to one of the pages
103
The Clock Algorithm
 To find a page to evict:
– look at the page pointed to by the clock hand
 reference=0 means the page hasn't been accessed since the last sweep: evict it
 reference=1 means the page has been accessed since the last sweep. What to do? Clear the bit and advance the hand
104
The Clock Algorithm
105
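A C sketch of the eviction sweep over a circular list (the frame structure is assumed for illustration):

struct frame { int page; int referenced; struct frame *next; };

/* Advance the hand: a referenced page gets a second chance (its R
   bit is cleared); the first page found with R == 0 is the victim.
   This cannot loop forever, because the sweep clears bits as it goes. */
struct frame *clock_evict(struct frame **hand) {
    for (;;) {
        struct frame *f = *hand;
        *hand = f->next;               /* the hand always advances */
        if (f->referenced)
            f->referenced = 0;         /* second chance */
        else
            return f;                  /* victim found */
    }
}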
The Clock Algorithm
 Can this loop forever?
 What if it finds all pages referenced since the last sweep?
– No: the hand clears reference bits as it sweeps, so it finds a victim within one full revolution
 New pages are put behind the clock hand, with reference=1
106
The Working Set Algorithm
 The working set is the set of pages used by the k most
recent memory references
 w(k,t) is the size of the working set at time t
108
The Working Set Algorithm
109
The Working Set Algorithm
[Figure: working set size w(k,t) as a function of k (most recent references); it grows with k but then stabilizes]
110
The Working Set Clock Algorithm
 Combines the working set algorithm with the clock algorithm
 Pages are organized into a circular list
 Each page has a time of last use and an R bit
 Whenever a page must be evicted:
– Inspect pages starting from the one pointed to by the clock hand
– The first page with R bit 0 that is outside the working set is evicted
111
112
Page Replacement Algorithm Review
113
Design Issues for Paging Systems
 Thrashing
 Local versus Global Allocation Policies
 Page size
 OS Involvement with Paging
 Page fault handling
114
Thrashing
 What happens when a working set is bigger than the available memory frames?
 Thrashing!
 i.e. constant page faults to bring pages in and out
 Thrashing should be avoided at all costs
115
Local versus Global Allocation
 When evicting a page, do we look only at pages of the same process for possible eviction?
– Local allocation policy
 Or do we look at the whole of memory for a victim?
– Global allocation policy
116
Local versus Global Allocation
[Figure: local vs. global page replacement]
117
Local versus Global Allocation
 With a global allocation policy, PFF can be used to manage the allocation
– PFF = page fault frequency
 If PFF is large, allocate more memory frames
 Otherwise, decrease the number of frames
 Goal is to maintain an acceptable PFF
118
Local versus Global Allocation
Page fault rate as a function of # of page frames assigned
119
Page Size
 What happens if page size is small?
 What happens if page size is really big?
 Could we use a large page size but let other processes use
the leftover space in the page?
 Page size is typically a compromise
– e.g. 4 KB or 8 KB
120
Page Size
 What happens to paging if the virtual address space is
sparse?
– most of the address space is invalid,
– with scattered valid regions
121
Small Page Size
 Advantages
– less internal fragmentation
– better fit for various data structures, code sections
– less unused program in memory
 Disadvantages
– programs need many pages, larger page tables
122
Page Size
 Therefore, to decide a good page size, one needs to
balance page table size and internal fragmentation
123
Page Size
 Overhead due to the page table and internal fragmentation can be calculated as:
overhead = s·e/p + p/2
– s·e/p = page table space, p/2 = internal fragmentation
– s = average process size in bytes
– p = page size in bytes
– e = page table entry size in bytes
124
Page Size
 Overhead is minimized when:
p = √(2se)
125
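As an illustrative check with assumed numbers (not from the slides): for s = 1 MB = 2^20 bytes and e = 8 bytes per entry, p = √(2 × 2^20 × 8) = √(2^24) = 4096 bytes, so a 4 KB page minimizes the combined overhead.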
Fixed vs. Variable Size Partitions
 Fixed size (pages) must be a compromise
– too small a size leads to a large translation table
– too large a size leads to internal fragmentation
126
Fixed vs. Variable Size Partitions
 Variable size (segments) can adapt to the need
– but it’s hard to pack these variable size partitions into physical
memory
– leading to external fragmentation
127
Load Control
 Despite good designs, system may still thrash
 When PFF algorithm indicates:
– some processes need more memory
– but no processes need less
128
Load Control
 Solution :
 Reduce # of processes competing for memory
 swap 1 or more processes to disk, and divide up the pages they held
 reconsider degree of multiprogramming
129
Separate Instruction and Data Spaces
 With combined instruction and data space, programmers
have to fit everything into 1 space
 By separating instruction and data space, we:
– Allow programmers more freedom
– Facilitate sharing of program text (code)
130
Separate Instruction and Data Spaces
131
OS Involvement with Paging
 Four times when the OS is involved with paging
132
OS Involvement with Paging
 Process creation
– determine program size
– create page table
 Process execution
– MMU reset for new process
– TLB flushed
133
OS Involvement with Paging
 Page fault time
– determine the virtual address that causes the fault
– swap target page out, needed page in
 Process termination time
– release page table, pages
134
Page Fault Handling
 Hardware traps to kernel
 General registers saved
 OS determines which virtual page needed
 OS checks validity of address, seeks page frame
 If selected frame is dirty, write it to disk
135
Page Fault Handling
 OS schedules the new page to be brought in from disk
 Page tables updated
 Faulting instruction is backed up to its start
 Faulting process scheduled
 Registers restored
 Program continues
136
Locking Pages in Memory
 Sometimes may need to lock a page in memory
– i.e. prohibit its eviction from memory
138
Locking Pages in Memory
 A process issues a call to read from a device into a buffer
– while waiting for the I/O, another process starts up
– and gets a page fault
– the first process's buffer may be chosen to be paged out
 Need a way to specify that some pages are locked
– exempting them from being chosen as eviction targets
139
Backing Store
 When a page is paged out, where does it go on disk?
 Two options:
 A specially designated area: the swap area
 Or the program's normal location in the file system
140
Backing Store
 When using the swap area, there are two options:
 Allocate swap area for the entire process
– Do this at loading time before execution
 Allocate swap area for part of the process that is currently
paged out to disk
– Load the process into memory first; then, as pages are paged out, allocate swap space for them
141
Backing Store
(a) Paging to a static area
(b) Back up pages dynamically
142
Page Out
 What to do with page when it’s evicted?
– i.e. do we write it back to disk or simply discard?
 Why not write pages to disk on every store?
– It costs CPU and disk time to do this on every store; writing back once at eviction batches the cost
148
Page Out
 While evicted page is being written to disk, the page being
brought into memory must wait
 May be able to reduce total work by giving preference to
dirty pages
– e.g. could evict clean pages before dirty pages
 If system is idle, might spend time profitably by writing back
dirty pages
149
Page Table Contents
 Data stored in the hardware page table:
 Resident bit:
– true if the virtual page is in physical memory
 Physical page # (if in physical memory)
 Dirty bit:
– set by MMU when page is written
150
Page Table Contents
 Reference bit:
– set by MMU when page is read or written
 Protection bits (readable, writable)
– set by operating system to control access to page
– Checked by hardware on each access
151
Page Table Contents
 Does the hardware page table need to store the disk block
# for non-resident virtual pages?
 Really need hardware to maintain a “dirty” bit?
 How to reduce # of faults required to do this?
 Do we really need hardware to maintain a “reference” bit?
152
MMU
 Memory management unit
 MMU is responsible for checking:
– if the page is resident
– if the page protections allow this access
– and setting the dirty/reference bits
153
MMU
 If page is resident and access is allowed,
– MMU translates virtual address into physical address
– using info from the TLB and page table
 Then MMU issues the physical memory address to the
memory controller
154
MMU
 If page is not resident, or protection bits disallow the access
– the MMU generates an exception (page fault)
155
Segmentation
 In a paging system, each process occupies one virtual
address space
 This may be inconvenient since different sections of the
process can grow or shrink independently
156
Segmentation
 One-dimensional
address space with
growing tables
 One table may
bump into another
157
Segmentation
 The solution is segmentation!
158
Segmentation
 Segment: a region of contiguous memory space
 Segmentation divides both physical and virtual memory into
segments
 Each segment is dedicated to one or more sections of a
process
 In pure segmentation, entire segments are the unit of allocation and swapping
159
Segmentation
Allows each table to grow or shrink independently
160
Segmentation
 Let’s generalize this to allow multiple segments
– described by a table of base & bound pairs
Segment #   Base   Bound   Description
0           4000   700     Code segment
1           0      500     Data segment
2           --     --      Unused
3           2000   1000    Stack segment
161
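A C sketch of translation against this table (the values are the hex base/bound pairs from the example above; illustrative only):

#include <stdint.h>

struct segment { uint32_t base, bound; int valid; };

static const struct segment seg_table[4] = {
    { 0x4000, 0x700,  1 },   /* 0: code   */
    { 0x0000, 0x500,  1 },   /* 1: data   */
    { 0,      0,      0 },   /* 2: unused */
    { 0x2000, 0x1000, 1 },   /* 3: stack  */
};

/* Virtual address = (segment #, offset). Returns -1 to signal a
   fault for an invalid segment or an out-of-bound offset. */
int seg_translate(int seg, uint32_t off, uint32_t *pa) {
    if (seg < 0 || seg > 3 || !seg_table[seg].valid)
        return -1;
    if (off >= seg_table[seg].bound)
        return -1;
    *pa = seg_table[seg].base + off;
    return 0;
}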
Segmentation
[Figure: virtual segment 0 (code, offsets 0-6ff) maps to physical 4000-46ff; virtual segment 1 (data, offsets 0-4ff) maps to physical 0-4ff; virtual segment 3 (stack, offsets 0-fff) maps to physical 2000-2fff]
162
Segmentation
 Note that not all virtual addresses are valid
– e.g. no valid data in segment 2;
– no valid data in segment 1 above 4ff
 Valid means the region is part of the process’s virtual
address space
163
Segmentation
 Invalid means this virtual address is illegal for the process to
access
 Accesses to invalid address will cause the OS to take
corrective measures
– usually a core dump
164
Segmentation
 Protection:
– different segments can have different protection
– e.g. code can be read-only (allows inst. fetch, load)
– e.g. data is read/write (allows fetch, load, store)
 In contrast, base & bounds gives same protection to entire
address space
165
Segmentation
 In segmentation, a virtual address takes the form:
– (virtual segment #, offset)
 Could specify virtual segment # via
– The high bits of the address,
– Or a special register,
– Or implicitly via the instruction opcode
166
Implementation of Pure Segmentation
167
Segmentation
 What must be changed on a context switch?
168
Pros and Cons of Segmentation
 + works well for sparse address spaces
– with big gaps of invalid areas
 + easy to share whole segments without sharing entire
address space
 - complex memory allocation
169
Compare Paging and Segmentation
Consideration                                        Paging   Segmentation
Need the programmer be aware that this
technique is being used?                             No       Yes
How many linear address spaces are there?            1        Many
Can the total address space exceed the size
of physical memory?                                  Yes      Yes
Can procedures and data be distinguished and
separately protected?                                No       Yes
Can tables whose size fluctuates be
accommodated easily?                                 No       Yes
Is sharing of procedures between users
facilitated?                                         No       Yes
Why was this technique invented?
– Paging: to get a large linear address space
  without buying more memory
– Segmentation: to allow programs and data to be
  broken up into logically independent address
  spaces and to aid sharing and protection
170
Segmentation
 Can a single address space be larger than physical memory?
 How to make memory allocation easy and
– allow an address space be larger than physical memory?
171
Segmentation with Paging
172
Segmentation with Paging
 The descriptor segment points to page tables
 Page tables point to physical frames
 MULTICS uses this method
173
SP Example: MULTICS
 A 34-bit MULTICS virtual address
Segment number (18 bits) | Page number (6 bits) | Offset within the page (10 bits)
174
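A C sketch of the bit split (the masks follow from the 18/6/10 division above; illustrative):

#include <stdint.h>

/* A 34-bit MULTICS virtual address: segment (bits 33-16),
   page (bits 15-10), offset (bits 9-0). */
static inline uint32_t mx_segment(uint64_t va) { return (uint32_t)(va >> 16) & 0x3ffff; }
static inline uint32_t mx_page(uint64_t va)    { return (uint32_t)(va >> 10) & 0x3f; }
static inline uint32_t mx_offset(uint64_t va)  { return (uint32_t)va & 0x3ff; }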
SP Example: MULTICS
[Figure: MULTICS address translation: the 18-bit segment number indexes the descriptor segment to locate the segment's page table; the 6-bit page number indexes that page table to find the page frame; the 10-bit offset selects the word within the page]
175
SP Example: MULTICS TLB
176
SP Example: Pentium
 Pentium virtual memory contains two tables:
 Global Descriptor Table:
– Describes system segments, including OS
 Local Descriptor Table:
– Describes segments local to each program
178
SP Example: Pentium
 Pentium selector contains a bit to indicate if the segment is
local or global
A Pentium selector: a 13-bit LDT or GDT entry number, 1 bit selecting the table (0 = GDT, 1 = LDT), and a 2-bit privilege level (0-3)
179
SP Example: Pentium
Pentium code segment descriptor (Data segments differ slightly)
180
SP Example: Pentium
Conversion of a (selector, offset) pair to a linear address
181
Pentium Address Mapping
182
Protection on the Pentium
[Figure: Pentium protection levels (rings 0-3)]
183
Computer Changes Life