OperatingSystems_FA15_8_Memory


Operating Systems
Dr. Jerry Shiao, Silicon Valley University
Fall 2015
SILICON VALLEY UNIVERSITY
CONFIDENTIAL
Memory Management


 Overview
  Memory Management Strategies
   Hardware Support
    Bind Process Logical Address to Physical Address
     Logical Address from CPU must be checked for legality. Cannot be done in software.
    Memory Management Unit ( MMU )
   Dynamic / Static Linking
   Swapping: RR Scheduling Algorithm / Constraints
   Sharing Operating System Space and User Space: Contiguous Memory Allocation Method
    Problems: External and Internal Fragmentation
   Paging: Non-Contiguous Address Space / Fixed-Size Frames
    Structure of the Page Table
   Segmentation: Non-Contiguous Address Space / Variable-Size Frames
    Structure of the Segment Table
   Intel Pentium Segmentation / Paging Architecture
Copyright @ 2009 John Wiley & Sons Inc.
Memory Management


 For Multiprogramming to improve CPU utilization and user response time, the Operating System must keep several processes in memory: memory must be shared.
 Memory Management Strategies (More)
  Hardware Support
   Logical Address from CPU must be checked for legality. Cannot be done in software.
   Base Register and Base-Limit Register pair.
  Performance
   As the memory-management algorithm becomes more complex, the time required to map a logical address to a physical address grows.
   Simple systems, Contiguous Memory Allocation: compare and add Base/Base-Limit Registers.
   Paging (TLB) and Segmentation: a mapping table is required.
  Fragmentation
   Pack more processes into memory.
   Reduce memory waste from internal/external fragmentation.
Memory Management

 Memory Management Strategies (Cont)
  Relocation
   Use compaction to handle the fragmentation problem.
   Program logical addresses relocated dynamically at execution time.
  Swapping
   Allows more processes to be run than can fit in memory at one time.
  Sharing
   Increase multiprogramming level by sharing code and data among users.
   Requires Paging or Segmentation to be used.
   Synchronization and Deadlock issues.
  Protection
   User program address space must be protected.
   Paging and Segmentation can declare different sections to be Execute-Only, Read-Only, or Read-Write.
Memory Management
 Basic Hardware
  CPU Access
   Hardware Registers: one clock cycle to decode and operate on registers.
   Main Memory: many clock cycles per transaction on the memory bus. Cache ( Fast Memory ) sits between CPU and main memory.
  Separate and Protect Memory Space for Processes
   CPU hardware compares every user address with the Base and Limit Registers.
   Base Register: lowest physical memory address.
   Limit Register: size of the range.

   Every address generated in user mode is compared against the Base and Limit Registers.
   Registers loaded by the Operating System in Kernel Mode.
   Trap when a User Mode access falls outside the range.
   Prevents a user program from modifying the code or data structures of either the Operating System or other users.
[Figure: physical memory from 0 to 1024000; Operating System occupies 0 to 256000; one process begins at 300040 with Base = 300040 and Limit = 120900, ending at 420940.]
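The base/limit check above can be sketched in software (a minimal illustration; real systems do this in MMU hardware, and the register values here are the figure's example numbers):

```python
def legal(address: int, base: int, limit: int) -> bool:
    """Hardware-style check: access is legal only if base <= address < base + limit."""
    return base <= address < base + limit

# Example values from the figure: Base = 300040, Limit = 120900.
assert legal(300040, 300040, 120900)      # first byte of the process
assert legal(420939, 300040, 120900)      # last legal byte
assert not legal(420940, 300040, 120900)  # one past the range: trap
assert not legal(256000, 300040, 120900)  # inside the OS region: trap
```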
Memory Management
 Address Binding
  Process in Input Queue; the Operating System loads the process into memory. When does binding occur?
   Compile Time: physical memory location known. Absolute code (MS-DOS).
   Load Time: physical memory location NOT known until the Loader runs. Relocatable code; the Loader performs the final binding.
   Execution Time: physical memory location not known until run time. The common method. NOTE: requires hardware assist.
[Figure: Source Program → Compiler or Assembler (compile time) → Object Module → Linkage Editor (with other object modules and the System Library) → Load Module → Loader (load time) → In-Memory Binary Memory Image; the Dynamically Loaded System Library is bound at execution time.]
Memory Management

 Address Binding
  Linker resolves references between modules; an external reference is not resolved until the external call is made.
  Loader binds relocatable code, translating it to absolute addresses at program-load time.
  The loaded program has relative addresses, converted to absolute addresses with hardware assist.
Copyright @ 2009 Pearson Education, Inc.
Memory Management
 Load Modules
  Initial program has symbolic references to functions or data items.
  Absolute addresses can be resolved by the compiler or by the loader.
  All memory references within a load module are expressed relative to the beginning of the module.
  A load module can therefore be located anywhere in main memory.
Memory Management
 Logical Versus Physical Address Space
  Logical Address Space: set of all logical addresses generated by a process.
   Process addresses run from “0” to “max”.
  Physical Address Space: set of all physical addresses corresponding to the logical addresses ( loaded into the Memory-Address Register ).
   Compile-time and load-time binding generate Logical Addresses identical to Physical Addresses.
   Binding logical addresses to physical addresses can occur at execution time: the Process Logical Address is then a Virtual Address.
  Memory Management Unit ( MMU ): run-time mapping from virtual to physical addresses.
   Relocation Register (i.e., Base Register) added to every memory address generated by a process.
   Logical Address: 0 to max. Physical Address: ( Relocation + 0 ) to ( Relocation + max ).
Memory Management
 Memory Management Unit ( MMU ): run-time mapping of Virtual to Physical addresses.
  Divides the Virtual Address Space into Pages ( 512 Bytes to 16 MBytes ).
  Paging: memory-management scheme that permits the physical address space to be noncontiguous.
  Translation Lookaside Buffer: cache for logical-to-physical address translation.
[Figure: CPU issues logical address <346>; the MMU adds the Relocation Register value 14000 to produce physical address <14346> in memory; a Page Table maps Pages 1–4 to frames.]
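The MMU's relocation step is just an add plus a range check; a minimal sketch using the figure's numbers (the limit value is an assumption for illustration):

```python
def mmu_translate(logical: int, relocation: int, limit: int = 1 << 20) -> int:
    """Relocation-register MMU: physical = relocation + logical, after a limit check."""
    if not 0 <= logical < limit:
        raise MemoryError("trap: addressing error")
    return relocation + logical

assert mmu_translate(346, 14000) == 14346  # the figure's example
```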
Memory Management
 For a process to run, the entire program (plus data) had to be in physical memory.
 Dynamic Loading
  A routine does NOT reside in memory until it is referenced.
  Relocatable Linking Loader then loads it into memory.
 Dynamic Linking and Shared Libraries
  Program uses dynamically loaded system libraries.
  Static Linking
   Modules from system libraries are handled as process modules.
   Combined by the loader into the binary program image.
   Large image, but portable (all modules self-contained).
  Dynamic Linking
   Linking occurs during execution time.
   Stub code placed in the program code resolves the library reference or loads the library containing the referenced code.
   Stub replaced with the address of the referenced routine.
   Shared routine accessed by multiple processes.
  Library Updates
   Once a shared library is replaced, processes will use the updated routine.
Memory Management
 Dynamic Linking Procedure
Static Linking:
Combined by the loader into the
binary program image.
Large image, but portable (all
modules self-contained).
Dynamic Linking :
Linking occurs during execution
time.
Stub code placed in program code
to resolve program library reference
or load the library containing the
referenced code.
Stub replaced with address of the
referenced routine.
:
Copyright @ 2009 John
Wiley & Sons Inc.
SILICON VALLEY UNIVERSITY
CONFIDENTIAL
12
Memory Management
 Swapping
  Process can be temporarily swapped from memory to disk (backing store).
  Round Robin Scheduling Algorithm swaps out a process when its quantum expires and swaps in a higher-priority process from the Operating System Ready Queue.
[Figure: Process P1 swapped out of User Space to the Backing Store ( Disk ); Process P2 swapped in.]
Memory Management
 Swapping
  A process executing in memory does not always stay in memory until it completes execution.
  Swap Considerations:
   Address Binding Method
    Load time or compile time: swap the process back into the same memory location.
    Execution time: the process can be swapped into a different memory location.
   Backing Store ( Disk )
    Copies of all memory images kept in the Swap Partition.
    Operating System Ready Queue: processes with memory images in the Backing Store or in memory.
   Context-Switch Time
    Transfer-time latency is proportional to the amount of memory swapped.
    To reduce swap time, dynamic memory requirements should request only the memory needed and release unused memory.
   Process with pending I/O Operations
    Use Operating System buffers and transfer to process memory when the process is swapped in.
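Since transfer latency is proportional to the memory swapped, a rough estimate is easy to compute (the 100 MB image size and 50 MB/s disk rate below are assumed numbers for illustration, not from the slides):

```python
def swap_time_s(image_mb: float, disk_mb_per_s: float) -> float:
    """Time for one full swap: swap-out plus swap-in of an equal-size process."""
    one_way = image_mb / disk_mb_per_s
    return 2 * one_way  # swap out + swap in

assert swap_time_s(100, 50) == 4.0  # 2 s out + 2 s in
```

This is why requesting only the memory actually needed shrinks context-switch time: halving the image halves the transfer.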
Memory Management
 Contiguous Memory Allocation
  Memory Shared by Operating System and User Processes
  Memory Divided into Two Partitions:
   Resident Operating System: low memory (resides with the Interrupt Vector).
   User Processes: each process in a single contiguous section of memory.
  Memory Mapping and Protection
   MMU maps the logical address dynamically using the relocation register and validates the address range with the limit register.
[Figure: CPU logical address compared with the Limit Register; if smaller, the Relocation Register is added to form the physical address in memory; otherwise, trap: addressing error.]
Memory Management
 Contiguous Memory Allocation
  Memory Allocation: Loading Process into Memory
   Operating System evaluates the memory requirements of the process and the amount of available memory space.
  Partitioning: divide memory into fixed-size partitions.
   Each partition contains one process.
   Degree of multiprogramming bound by the number of partitions.
   OS table keeps track of partitions allocated/available.
  Initially, available memory is one large memory block.
   Eventually, memory contains holes of various sizes.
  Dynamic Storage-Allocation Problem:
   Satisfy a request of size “n” from the list of free memory blocks.
   First Fit: allocate the first memory block large enough for the process.
   Best Fit: allocate the smallest memory block that is large enough.
   Worst Fit: allocate the largest memory block.
   First Fit and Best Fit are faster and manage fragmentation better than Worst Fit.
Memory Management
 Contiguous Memory Allocation
  Fragmentation
   As processes are loaded into and removed from memory, free memory space is broken into non-contiguous blocks.
   External Fragmentation: occurs when enough TOTAL memory space exists to satisfy a request, but the available memory is non-contiguous.
   50-Percent Rule: statistically, given N allocated blocks, 0.5N blocks are lost to fragmentation.
   Compaction for External Fragmentation ( Expensive )
    Shuffle memory contents to place all free memory in one large block.
    Only possible if relocation is dynamic and done at execution time: change the Relocation Register after compaction.
    Another solution: permit the logical address space of a process to be noncontiguous ( Paging and Segmentation ).
   Internal Fragmentation
    Physical memory divided into fixed-size blocks.
    The memory block allocated to a process is larger than the requested memory: the unused memory is internal to the partition.
Memory Management
 Memory Management Techniques
  The principal operation of Memory Management is to bring processes into main memory for execution by the processor.
Memory Management
 Memory Management Techniques
  Fixed Partitioning
   Partitions can be equal-sized or of different sizes. Any process whose size is less than or equal to the partition size can be loaded into any available partition.
   A process that does not fit in a partition must be coded with overlays so that only a portion of the program need be in memory at any time.
   Memory utilization is inefficient.
   Internal Fragmentation: wasted space internal to a partition because the block of data loaded is smaller than the partition.
Memory Management
 Memory Management Techniques
  Fixed Partition Placement Algorithm
   A scheduling queue for each partition holds the swapped-out processes for that partition.
    Minimizes internal fragmentation, but a process may remain queued for a partition when it could have been satisfied by another partition.
   Single queue: a process loaded into memory is placed in the smallest available partition.
Memory Management
 Memory Management Techniques
  Dynamic Partitioning
   Partitions are variable in length and number. When a process is brought into memory, it is allocated exactly as much memory as it requires.
   Process 2 is swapped out and Process 4 swapped in, leaving a hole; Process 1 is swapped out and Process 2 swapped in, leaving another hole.
   Memory becomes more and more fragmented.
   External Fragmentation: memory external to all partitions becomes increasingly fragmented.
Memory Management
 Memory Management Techniques
  Partition Placement Algorithm
   First Fit: scans memory from the beginning and chooses the first available block that is large enough. Best performer and fastest.
   Best Fit: chooses the block closest in size to the request. Worst performer: although each memory request wastes the least amount of memory, the result is memory quickly littered with blocks too small to satisfy any request.
   Next Fit: produces worse results than First Fit; frequently leads to an allocation from a free block at the end of memory.
Memory Management
 Paging
  Permits the Physical Address Space of a Process to be Noncontiguous.
  Avoids memory External Fragmentation and Compaction.
  Avoids Backing Store Fragmentation.
  Frames: Physical Memory partitioned into fixed-size blocks.
  Pages: Logical Memory partitioned into fixed-size blocks of the same size.
[Figure: Logical Memory Pages 0–3 map to Physical Memory Frames 0–3 and to Backing Store Blocks 0–3.]
Memory Management

 Paging
  Paging Hardware
   1) Every Logical Address generated by the CPU contains two parts: Page Number and Page Offset.
   2) The Page Number indexes into the Page Table, which holds the base address (frame number) of each page in Physical Memory.
   3) The frame base address is combined with the Page Offset to generate the Physical Memory address.
[Figure: CPU emits (page, offset); the Page Table maps the page to frame F; the physical address is F concatenated with the offset, spanning F00…00 to F11…11 within the frame.]
Memory Management

 Paging
  Page Table Maps Logical Memory to Physical Memory
[Figure: Logical Memory Pages 0–3; Page Table entries 0→Frame 1, 1→Frame 4, 2→Frame 3, 3→Frame 7; the pages appear scattered among Frames 0–7 of Physical Memory.]
Memory Management
 Paging
  Page Size Defined by Hardware
   Power of 2: typically 4 KBytes or 8 KBytes.
   Logical Address Space = 2^m, Page Size = 2^n, Page Number = high-order m−n bits.
   Example: m = 4, Logical Address Space = 2^4 = 16 bytes; n = 2, Page Size = 2^2 = 4 bytes; Number of Pages = 2^(4−2) = 4.
   Physical Memory = 32 bytes (2^5): Frame Size = 4 (2^2), Frames = 8 (2^3).
   Page Table: page 0 → frame 5, page 1 → frame 6, page 2 → frame 1, page 3 → frame 2.
   Separation of the user’s view of logical contiguous memory and physical non-contiguous memory.
[Figure: Logical Memory holds bytes a–p at addresses 0–15; with the Page Table above, frame 1 (physical 4–7) holds i–l, frame 2 (8–11) holds m–p, frame 5 (20–23) holds a–d, and frame 6 (24–27) holds e–h.]
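The example above can be checked directly: with a 4-byte page, the high two bits of a logical address select the page and the low two bits are the offset (a sketch of the slide's m = 4, n = 2 example):

```python
PAGE_TABLE = [5, 6, 1, 2]   # page -> frame, from the slide's example
PAGE_SIZE = 4               # 2^2 bytes

def translate(logical: int) -> int:
    page, offset = divmod(logical, PAGE_SIZE)   # split into page number and offset
    return PAGE_TABLE[page] * PAGE_SIZE + offset

assert translate(0) == 20    # 'a': page 0 -> frame 5, physical 20
assert translate(4) == 24    # 'e': page 1 -> frame 6, physical 24
assert translate(8) == 4     # 'i': page 2 -> frame 1, physical 4
assert translate(13) == 9    # 'n': page 3, offset 1 -> frame 2, physical 9
```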
Memory Management
 Paging
  Address Translation Hardware: uses the Page Table to reconcile the User Process view of memory (contiguous Logical Address Space) with Physical Memory.
   Prevents a User Process from accessing memory outside its logical address space.
  Operating System Manages Physical Memory
   Frame Table: allocation details of physical memory (frames allocated, frames available, total frames).
    Entry for each physical page frame: allocated or free, and which page of which process is using the frame.
  Operating System maintains a copy of the Page Table for each process.
   Used when the Operating System translates a Process Logical Address to a Physical Address.
   Used to update the Frame Table when a Process is context-switched into memory.
Memory Management
 Paging
  Hardware Support
   Page Table must be fast: every access to memory goes through the Page Table.
   Page Table can be implemented as a set of registers: only for a small Page Table ( 256 entries ).
   Page Table implemented in memory with a Page-Table Base Register (PTBR): handles a large Page Table, but going through memory is slow.
   Page Table in a Fast-Lookup Hardware Cache: Translation Look-Aside Buffer ( TLB ).
    High-speed memory containing a few Page Table entries (64 to 1024 entries).
    TLB Miss: access the Page Table in memory.
    TLB replacement policy ( LRU, random ); some TLB entries are “wired down” (kernel code).
    Address-Space Identifiers (ASIDs) stored in each TLB entry allow multiple processes to use the TLB.
    Hit Ratio: percentage of times that a page is found in the TLB.
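The hit ratio feeds directly into effective memory-access time; a sketch with assumed timings (20 ns TLB lookup, 100 ns memory access — illustrative numbers, not from the slides):

```python
def effective_access_ns(hit_ratio: float, tlb_ns: float = 20, mem_ns: float = 100) -> float:
    """TLB hit: one memory access; TLB miss: page-table access plus the data access."""
    hit = tlb_ns + mem_ns          # frame number found in the TLB
    miss = tlb_ns + 2 * mem_ns     # extra access to the in-memory page table
    return hit_ratio * hit + (1 - hit_ratio) * miss

assert abs(effective_access_ns(0.80) - 140.0) < 1e-9  # 0.8*120 + 0.2*220
assert abs(effective_access_ns(0.98) - 122.0) < 1e-9  # higher hit ratio: closer to 120 ns
```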
Memory Management
 Paging
  TLB Hardware Support
[Figure: CPU emits a logical address (Page Number, Offset); the Page Number is looked up in the Translation Look-Aside Buffer; on a hit, the Frame Number plus Offset forms the Physical Address in Physical Memory; on a TLB Miss, the Page Table in memory is consulted.]
Memory Management
 Paging
  Memory Protection with Protection Bits
   Read-Write, Read-Only, Execute-Only bits per page.
   Valid-Invalid bit:
    Tests whether the associated page is in the process’s logical address space.
    OS allows or disallows access to the page; an invalid access causes a hardware trap to the Operating System.
Memory Management
 Paging
  Shared Pages: Sharing Common Code
   One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers, window systems, run-time libraries).
   Each process Page Table maps onto the same physical frames of the shared code.
   Sharing memory among processes using shared pages.
  Private Code and Data
   Each process has its own copy of registers.
   Each process keeps a separate copy of its code and data.
   The pages for the private code and data can appear anywhere in the logical address space.
Memory Management
 Paging
  Shared Pages: Sharing Common Code
[Figure: three processes share the editor pages ed 1, ed 2, ed 3, each process's Page Table pointing at the same physical frames.]
Memory Management
 Structure of the Page Table
  Hierarchical Page Tables
   Problem:
    Logical Address: 32 bits.
    Page Size: 4 KBytes (2^12 = 4096).
    Page Table Entries: 2^(32 − 12) = 2^20 = 1M entries.
    Page Table Size: 1M entries × 4 bytes/entry = 4 MBytes of Page Table per Process.
   Two-Level Page Table Scheme
    32-bit Logical Address: Page Number = p1 (10 bits) | p2 (10 bits); Page Offset = d (12 bits).
    p1 = index into the Outer Page Table.
    p2 = displacement within the page of the Outer Page Table.
    d = physical page offset.
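Extracting p1, p2, and d from a 32-bit address is pure bit arithmetic; a sketch of the 10/10/12 split described above:

```python
def split_two_level(addr: int) -> tuple[int, int, int]:
    """Return (p1, p2, d) for a 32-bit address with a 10/10/12 split."""
    d = addr & 0xFFF             # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: inner page-table index
    p1 = (addr >> 22) & 0x3FF    # high 10 bits: outer page-table index
    return p1, p2, d

assert split_two_level(0x00000FFF) == (0, 0, 0xFFF)
assert split_two_level(0xFFC00000) == (0x3FF, 0, 0)
assert split_two_level(0x00401003) == (1, 1, 3)
```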
Memory Management
 Structure of the Page Table
  Hierarchical Page Tables
   Two-Level Page Table Scheme
[Figure: Outer Page Table (2^10 = 1024 entries) points to pages of the Page Table (2^10 = 1024 entries each), which point to 4-KByte (2^12 = 4096 byte) pages in memory.]
Memory Management
 Hierarchical Page Tables
  N-Level Page Tables
   64-bit Logical Address: p1 (42 bits) | p2 (10 bits) | d (12 bits).
   A 64-bit Logical Address would leave an Outer Page Table of 2^42 entries: a Two-Level Page Table CANNOT be used; the Outer Page Table must be partitioned further.
Memory Management
 Structure of the Page Table
  Hashed Page Tables
   Handle Logical Address Spaces greater than 32 bits.
   1) The Virtual Page number (p) is hashed into the Hash Page Table.
   2) Each Hash Page Table entry is a chain of elements; the Virtual Page (p) is compared with each element.
   Each Hash Page Table entry contains a linked list of all Virtual Pages that hash to the same index; the matching element supplies the Frame number, which forms the Physical Address.
Memory Management
 Structure of the Page Table
  Inverted Page Tables
   A conventional Page Table has an entry for each page that the process uses, since the process references pages through virtual addresses: Page Tables are large, with millions of entries.
   Inverted Page Table: a table with one entry per real page (frame) of memory.
   Only one table for the whole system, NOT one Page Table per process.
   Virtual Address: < process-id, page-number, offset >; the process-id is the address-space identifier.
   1) < process-id, page-number > is used to search the Inverted Page Table.
   2) When a match is found at entry < i >, then < i, offset > forms the physical address.
   Problems:
    A search for < pid, page > could scan the whole table; use a hash table to minimize the entries searched.
    Cannot easily share memory between processes, because only one virtual page maps to each physical page.
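The inverted-table search can be sketched as a linear scan over frames (the table contents below are illustrative; step 2's physical address is the frame index times the page size plus the offset, with an assumed 4 KB page):

```python
PAGE_SIZE = 4096
# One entry per physical frame: (process-id, virtual page number), or None if free.
INVERTED = [(1, 7), None, (2, 7), (1, 0)]

def translate(pid: int, vpage: int, offset: int) -> int:
    for i, entry in enumerate(INVERTED):       # may scan the whole table
        if entry == (pid, vpage):
            return i * PAGE_SIZE + offset      # frame index i, plus offset
    raise MemoryError("page fault")

assert translate(1, 7, 5) == 5                 # frame 0
assert translate(2, 7, 5) == 2 * 4096 + 5      # same vpage, different process
```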
Memory Management
 Segmentation
  Memory viewed as a collection of variable-sized segments, with no ordering among segments.
  Each segment has a specific purpose: main program, subroutine, stack, symbol table, library function.
  Logical Address Space: collection of segments.
  Segment Logical Address has two parts: < segment-number, offset >
   Segment-number: identifies the segment.
   Offset: offset within the segment.
  Compiler creates segments: code, global variables, heap, stacks, Standard C Library.
  Loader takes the segments and assigns segment numbers: < segment-number, offset >.
Memory Management
 Segmentation
  1) Segments do not have to be contiguous in memory.
  2) Code sharing at segment level ( same base in two segment tables ).
  3) Segments have Protection Bits:
   Read-Only Segment (code).
   Read-Write Segment (data, heap, stack).
  Problems:
   1) Complex memory allocation (i.e., First Fit).
   2) External Fragmentation.
[Figure: segments 1–4 of the user space placed non-contiguously in the physical memory space.]
Memory Management
 Segmentation
  Segment Table
   Segment Base: starting physical address where the segment resides in memory.
   Segment Limit: length of the segment.
   < segment-number, offset >
    Segment-number is the index into the Segment Table.
    Offset is the offset into the segment: must lie between 0 and the Segment Limit.
    Offset added to the Segment Base produces the physical address in memory.
  Segment Table Base Register ( STBR )
   Points to the Segment Table’s location in physical memory; saved in the Process Control Block.
  Segment Table Length Register ( STLR )
   Number of segments used by a program.
Memory Management
 Segmentation
Memory Management
 Segmentation
  Logical Address = < 2, 53 >: Base = 4300, Physical Address = 4300 + 53 = 4353.
  Logical Address = < 3, 852 >: Base = 3200, Physical Address = 3200 + 852 = 4052.
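The two translations above can be reproduced with a tiny segment-table lookup (the bases are the slide's; the limit values below are assumptions for illustration):

```python
# segment-number -> (base, limit); the limits are assumed example values.
SEGMENTS = {2: (4300, 400), 3: (3200, 1100)}

def translate(segment: int, offset: int) -> int:
    base, limit = SEGMENTS[segment]
    if not 0 <= offset < limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

assert translate(2, 53) == 4353
assert translate(3, 852) == 4052
```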
Memory Management
 Segmentation
  Intel Pentium
   Supports Pure Segmentation and Segmentation with Paging.
   Logical Address to Physical Address Translation:
    CPU generates the Logical Address, passed to the Segmentation Unit.
    Segmentation Unit generates the Linear Address, passed to the Paging Unit.
    Paging Unit generates the Physical Address in Memory.
   One Segment Table per Process; one Page Table per Segment.
Memory Management
 Segmentation
  Intel Pentium
   Segment Maximum: 4 GBytes; Segments per Process: 16K.
   Process Logical Address Space in two partitions:
    First Partition (Private): 8K Segments; Local Descriptor Table ( LDT ) describes the partition.
    Second Partition (Shared): 8K Segments; Global Descriptor Table ( GDT ) describes the partition.
   LDT and GDT: Segment Descriptors with Base Location and Limit.
   Logical Address: < Selector, Offset >, where the selector is s (13 bits) | g (1 bit) | p (2 bits).
    s: Segment Number; g: GDT or LDT Segment; p: Protection.
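The 16-bit selector layout above splits cleanly with shifts and masks; a sketch (which flag value means GDT versus LDT is an assumption here):

```python
def split_selector(selector: int) -> tuple[int, int, int]:
    """Return (s, g, p) from a selector laid out as s (13 bits) | g (1 bit) | p (2 bits)."""
    p = selector & 0x3            # low 2 bits: protection
    g = (selector >> 2) & 0x1     # next bit: GDT or LDT segment (encoding assumed)
    s = (selector >> 3) & 0x1FFF  # high 13 bits: segment number
    return s, g, p

assert split_selector(0b0000000000001_1_10) == (1, 1, 2)
assert split_selector(0x0008) == (1, 0, 0)   # segment 1, g = 0, protection 0
```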
Memory Management
 Segmentation
 Intel Pentium
Memory Management
 Segmentation
  Intel Pentium Paging
   Page size of 4 KBytes or 4 MBytes.
   4-KByte Page: Two-Level Paging Scheme.
    p1: Outermost Page Table (Page Directory) Index.
    p2: Inner Page Table Index.
    d: Offset in the 4-KByte Page.
   Page Directory Table
    Page Size Flag: Set = 4-MByte Page Frame; Not Set = 4-KByte Page Frame.
    A 4-MByte Page Frame bypasses the Inner Page Table, and the Offset is 22 bits.
Memory Management
 Segmentation
 Intel Pentium Paging
Memory Management
 Segmentation
  Intel Pentium Segmentation with Paging
   Two Levels of Mapping: Segment Table and Page Tables.
    Process has variable-size segments; each segment is divided into small fixed-size pages.
    Eliminates the External Fragmentation problem of pure segmentation.
    Logical Address: < Segment, Page, Offset >.
    One Segment Table per Process; one Page Table per Segment.
   Two Levels of Sharing: Segment and Page.
    Share a physical frame by having the same frame reference in two page tables.
    Share a segment by having the same base in two segment tables.
    Segment protection bits ( Sharing, ReadOnly, ReadWrite ).
Memory Management
 Segmentation
  Intel Pentium Linux
   Linux Uses 6 Segments:
    Kernel Code Segment; Kernel Data Segment.
    User Code Segment; User Data Segment:
     Shared by all processes in user mode.
     All processes use the same logical address space.
     Segment Descriptors in the Global Descriptor Table ( GDT ).
    Task-State Segment ( TSS ): stores the hardware context of each process during context switches.
    Default Local Descriptor Table ( LDT ).
   Linux Uses Three-Level Paging for 32-Bit and 64-Bit Architectures.
Memory Management
 Segmentation
  Intel Pentium Linux
   Each Task has its own set of Page Tables.
   CR3 Register points to the Global Directory of the currently executing task.
   CR3 Register saved in the TSS Segment of the task during a context switch.
Memory Management
 Summary
  Different Memory-Management Algorithms:
   Contiguous Allocation: each process in a single contiguous section of memory.
   Paging
   Segmentation
   Segmentation with Paging
  Comparison Considerations:
   Hardware Support: Base-Limit Register pair; MMU for Paging and Segmentation.
   Performance: the more complex the scheme, the longer mapping a logical address to a physical address takes. Translation Look-Aside Buffer (TLB): fast-lookup hardware cache mapping page numbers to frame numbers.
   Fragmentation: fixed-size allocation (Paging) has internal fragmentation; variable-size allocation (Segmentation) has external fragmentation.
   Relocation: compaction for external fragmentation.
Memory Management
 Summary
  Comparison Considerations:
   Swapping: essential for allowing more processes to run than can fit into memory at one time.
   Sharing: sharing code and data among different users through shared segments or shared pages.
   Protection: for paging or segmentation, different sections of a user program can be declared execute-only, read-only, or read-write.
  Linux does not rely on segmentation and uses it minimally:
   Segments for Kernel Code and Kernel Data.
   Segments for User Code and User Data.
   Task-State Segment (TSS).
   Default LDT Segment.
  Linux mainly uses paging:
   User Code and User Data Segments shared by all processes.
   All Segment Descriptors are in the Global Descriptor Table.
   TSS stores the hardware context of each process during a context switch.
   Default LDT Segment not used.