
Operating Systems
CS3013 / CS502
WEEK 5
LINKING AND LOADING
MEMORY MANAGEMENT
PAGING AND SEGMENTATION
Agenda
2
 Linking and Loading
 Memory Management
 Paging and Segmentation
Objectives
3
 Understand Linking and Loading
 Differentiate Static and Dynamic Linking
Executable Files
4
 Every OS expects executable files in a specific format
 Header Info
 Code locations
 Data locations
 Code & Data
 Symbol Table
 List of names of things defined in your program and where they are located within your program.
 List of names of things defined elsewhere that are used by your program, and where they are used.

Example
5
#include <stdio.h>

int main() {
    printf("hello, world\n");
}

 Symbols defined in the program and used elsewhere: main
 Symbols defined elsewhere and used in the program: printf
Example
6
#include <stdio.h>

extern int errno;

int main() {
    printf("hello, world\n");
    <check errno>
}

 Symbols defined in the program and used elsewhere: main
 Symbols defined elsewhere and used in the program: printf, errno
Linking and Loading
7
 Linking: Combining a set of programs, including library routines, to create a loadable image
 Resolving symbols defined within the set
 Listing symbols needing to be resolved by the loader
 Loading: Copying the loadable image into memory, connecting it with any other programs already loaded, and updating addresses as needed
 (In all systems) the kernel image is special (own format)
Source
8
 Binding is the act of connecting names to addresses
 Most compilers produce relocatable object code
 Addresses relative to zero
 The linker combines multiple object files and library modules into a single executable file
 Addresses also relative to zero
 The loader reads the executable file
 Allocates memory
 Maps addresses within the file to memory addresses
 Resolves names of dynamic library items
[Diagram: Source (.c, .cc) → Compiler → Object (.o) → Linker (with static libraries (.a) and other objects (.o)) → Executable → Loader (with dynamic libraries (.dll)) → In-memory image]
Static Linking
9
[Diagram: Printf.c and Main.c are each compiled by gcc into Printf.o and Main.o; ar collects objects into a static library; the linker combines Main.o and the static library into a.out; the loader copies a.out into memory]
Classic Unix
10
 Linker lives inside of cc or gcc command
 Loader is part of exec system call
 Executable image contains all object and library
modules needed by program
 Entire image is loaded at once
 Every image contains its own copy of common library routines
 Every loaded program contains a duplicate copy of library routines
Dynamic Linking
11
 Routine is not loaded until it is called
 Better memory-space utilization; unused routines
are never loaded.
 Useful when large amounts of code are needed to
handle infrequently occurring cases.
Linker-Assisted Dynamic Linking
12
 For a function call to a dynamic function
 Call is indirect through a link table
 Each link table entry is initialized with the address of a small stub of code that locates and loads the module
 When loaded, the loader replaces the link table entry with the address of the loaded function
 When unloaded, the loader restores the table entry with the stub address
 Works only for function calls, not static data (see the sketch after the diagrams below)
Linker-Assisted Dynamic Linking
13
[Diagram: before the first call, your program's call to printf(…) goes through a link table entry that points at a stub; the stub's load("IOLib") routine locates and loads the I/O library]
Linker-Assisted Dynamic Linking
14
[Diagram: after loading, the link table entry points directly at printf() inside the loaded IOLib, which also contains read(), scanf(), …]
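A minimal C sketch of this link-table idea – the entry starts out pointing at a stub, and the stub patches the entry the first time it runs. The names (link_table, stub_printf_hello, etc.) are made up for illustration, not part of any real loader:

#include <stdio.h>

typedef void (*func_t)(void);

static void real_printf_hello(void) {          /* stands in for the library routine */
    printf("hello from the loaded module\n");
}

static func_t link_table[1];                   /* one entry per dynamic function */

static void stub_printf_hello(void) {          /* stub: "load" the module, patch the entry, call it */
    link_table[0] = real_printf_hello;         /* a real stub would locate and map the library here */
    link_table[0]();
}

int main(void) {
    link_table[0] = stub_printf_hello;         /* entry initialized with the stub's address */
    link_table[0]();                           /* first call goes through the stub */
    link_table[0]();                           /* later calls go straight to the real function */
    return 0;
}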
Shared Library
15
 Observation – “everyone” links to standard libraries
(libc.a, etc.)
 These consume space in
 every executable image
 every process memory at runtime
 Would it be possible to share the common libraries?
Shared Library
16
 Libraries designated as “shared”
 .so, .dll, etc.
 Supported by corresponding “.a” libraries containing symbol
information
 Linker sets up symbols to be resolved at runtime
 Loader: Is library already in memory?
 If yes, map into new process space
 If not, load and then map
Run Time Dynamic Linking
17
[Diagram: Printf.c and Main.c are each compiled by gcc into Printf.o and Main.o; ar packages Printf.o into a dynamic library; the linker produces a.out from Main.o; the loader brings a.out into memory, and the run-time loader maps the dynamic library when it is needed]
Dynamic Linking
18
 Complete linking postponed until execution time.
 Stub used to locate the appropriate memory-resident
library routine.
 Stub replaces itself with the address of the routine,
and executes the routine.
 Operating system needs to check if routine is in
address space of process
 Dynamic linking is particularly useful for libraries.
Dynamic Shared Library
19
 Static shared libraries require address-space pre-allocation
 Dynamic shared libraries – address binding at runtime
 Code must be position independent
 At runtime, references are resolved as library_relative_address + library_base_address
Linker
20
 Linker – key part of the OS – not in the kernel
 Combines object files and libraries into a “standard” format that the OS loader can interpret
 Resolves references and does static relocation of addresses
 Creates information for the loader to complete the binding process
 Supports dynamic shared libraries
Loader
21
 An integral part of the OS
 Resolves addresses and symbols that could not be
resolved at link-time
 May be small or large
 Small: Classic Unix
 Large: Linux, Windows XP, etc.
 May be invoked explicitly or implicitly
 Explicitly by a stub or by the program itself
 Implicitly as part of exec
Agenda
22
 Linking and Loading
 Memory Management
 Paging and Segmentation
Objectives
23
 Understand Different Memory Management
Strategies
 Differentiate Internal and External Fragmentation
Simple Memory Management
24
 One process in the memory
 Uses the entire memory available
 Each program needs its own I/O drivers
[Diagram: RAM holds a single user program together with its I/O drivers]
Simple Memory Management
25
 Small, protected OS
 “Mono-processing” – one user program at a time
[Diagram: three single-program layouts – OS in RAM below the user program; OS in ROM above the user program; device drivers in ROM with the user program in the middle and the OS in RAM]
Memory Management with Fixed Partitions
26
 Unequal queues
 Waste of large partitions
[Diagram: memory divided into fixed partitions above the OS – e.g., Partition 1 (200K), Partition 2 (300K), Partition 3 (500K), Partition 4 (900K)]
Physical Memory
27
[Diagram: physical address space from 0x00000000 to 0xFFFFFFFF – OS kernel at the bottom, then Process 1, Process 2, Process 3, and empty space at the top]
Process 2 Terminates
28
[Diagram: the same layout after Process 2 terminates – OS kernel, Process 1, a hole where Process 2 was, Process 3, and empty space at the top]
Problem
29
 What happens when Process 4 comes along and requires more space than the largest empty partition?
[Diagram: Process 4 does not fit into either hole – the one above Process 1 or the one above Process 3]
Solution
30
 Virtual Address: an address used by the program that is translated by the computer into a physical address each time it is used

Also called Logical Address
 When the program utters 0x00105C, …
 … the machine accesses 0x01605C
Implementation
31
 Base and Limit registers
 Base automatically added to all addresses
 Limit checked on all memory references
 Introduced in minicomputers of early 1970s
 Loaded by OS at each context switch
[Diagram: the CPU issues a logical address; it is compared against the Limit register – if it is not less than the limit, an error (trap) is raised; otherwise the Base register is added to form the physical address sent to physical memory]
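A small C sketch of the base/limit check-and-add, assuming the register values are simply passed in (real hardware performs this on every memory reference):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Translate a logical address using base and limit register values. */
uint32_t translate(uint32_t logical, uint32_t base, uint32_t limit) {
    if (logical >= limit) {                    /* limit check on every reference */
        fprintf(stderr, "addressing error: 0x%X out of range\n", logical);
        exit(1);                               /* hardware would trap to the OS */
    }
    return base + logical;                     /* relocation: add the base */
}

int main(void) {
    /* The slide's example: the program utters 0x00105C, the machine accesses 0x01605C.
       This assumes a base of 0x015000; the limit value here is also just an assumption. */
    printf("0x%06X\n", translate(0x00105C, 0x015000, 0x100000));
    return 0;
}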
Physical Memory
32
[Diagram: physical address space from 0x00000000 to 0xFFFFFFFF – OS kernel, Process 1, a hole, then Process 3; the Base register points to the start of Process 3 and the Limit bounds its extent]
Advantage
33
 No relocation of program addresses at load time
 All addresses relative to zero!
 Built-in protection provided by Limit
 No physical protection needed per page or block
 Fast execution
 Addition and limit check at hardware speeds within each instruction
 Fast context switch
 Need only change base and limit registers
 Partition can be suspended and moved at any time
 Process is unaware of the change
 Potentially expensive for large processes due to copy costs!
34
[Diagram: the physical address space now holds the OS kernel, Process 1, Process 3, and Process 4 at the top; the Base and Limit registers bound the running process]
Memory Allocation Strategy
35
 First Fit
 First big enough hole
 Best Fit
 Smallest hole that is big enough
 Worst Fit
 Largest hole
Challenge – Memory Allocation
36
 How should we partition the physical memory for
processes?
Fixed-sized Partitions
37
 Fixed Partitions – divide memory into equal-sized pieces (except for the OS)
 Degree of multiprogramming = number of partitions
 Simple policy to implement
 All processes must fit into the partition space
 Find any free partition and load the process
 Problem – what is the “right” partition size?
 Process size is limited
 Internal Fragmentation – unused memory in a partition that is not available to other processes
Internal Fragmentation
38
 “Empty” space for each process
[Diagram: within the space allocated to a process – program, data, and stack, with unused “room for growth” between them – above the OS]
Variable-sized Partitions
39
 Idea: remove “wasted” memory that is not needed in each partition
 Eliminates internal fragmentation
 Memory is dynamically divided into partitions based on process needs
 Definition:
 Hole: a block of free or available memory
 Holes are scattered throughout physical memory
 A new process is allocated memory from a hole large enough to fit it
External Fragmentation
40
 Total memory space exists to satisfy a request, but it is not contiguous
[Diagram: holes and processes of 100K–200K sizes are interleaved among the OS kernel, Process 1, and Process 3; Process 4 cannot find a single contiguous hole large enough]
Analysis of External Fragmentation
41
 Assumption
 First-fit allocation strategy
 System at equilibrium
 N allocated blocks
 ½ N blocks may be lost to fragmentation – the “fifty-percent rule” (so roughly one-third of memory can be unusable)
Compaction
42
[Diagram: (a) scattered holes (e.g., 50K, 75K, 100K, 125K) lie between Process 1, Process 2, and Process 3; (b) compaction squeezes the processes together above the OS kernel, producing one large hole into which Process 4 can fit]
Solutions?
43
 Minimize external fragmentation
 Large Blocks
 But internal fragmentation
 Tradeoff
 Sacrifice some internal fragmentation for reduced external
fragmentation
 Paging!
Agenda
44
 Linking and Loading
 Memory Management
 Paging and Segmentation
Objectives
45
 Understand Paging Mechanism
 Translate Virtual Address Using Paging
 Analyze Different Requirements for Paging
 Differentiate Among Paging Strategies
 Understand Segmentation
Memory Management
46
 Logical address vs. Physical address
 Memory Management Unit (MMU)
 Set of registers and mechanisms to translate virtual addresses to physical addresses
 Processes (and processors) see virtual addresses
 Virtual address space is the same for all processes, usually 0-based
 Virtual address spaces are protected from other processes
 MMU and devices see physical addresses
[Diagram: Processor → logical addresses → MMU → physical addresses → Memory and I/O devices]
Paging
47
 Logical address space is noncontiguous; a process gets memory wherever it is available
 Divide physical memory into fixed-size blocks, called Frames
 Size is a power of 2, between 512 and 8192 bytes
 Divide logical memory into fixed-size blocks of the same size, called Pages
 Address generated by the CPU is divided into:
 Page number (p) – index into the page table
 Page table contains the base address (frame) of each page in physical memory
 Page offset (d) – offset into the page/frame
Paging
48
[Diagram: the logical address is split into a virtual page # and an offset; the page table maps the virtual page # to a page frame # (PFN); the physical address is the frame # concatenated with the unchanged offset, selecting one of page frames 0…Y in physical memory]
Paging Example
49
 Page size: 4 bytes
 Memory size: 32 bytes (8 frames of 4 bytes)
[Diagram: logical memory pages 0–3 are mapped through a 4-entry page table into 4-byte frames scattered across the 8-frame physical memory]
Paging Example
50
[Diagram: logical addresses are split into a 2-bit page number and a binary offset; the page table maps pages 00, 01, 10, and 11 to frame numbers, and the frame number concatenated with the unchanged offset forms the physical address]
Paging
51
 Address space size: 2^m
 Page size (offset range): 2^n
 Number of pages: 2^(m-n)
[Diagram: an m-bit address splits into a page number p (m-n bits) and a page offset d (n bits)]
 Note: no bytes are lost in the split
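A tiny C sketch of that split, assuming m = 16 and n = 12 (4096-byte pages) purely for illustration:

#include <stdint.h>
#include <stdio.h>

#define N 12                                   /* offset bits: page size = 2^12 = 4096 bytes */

int main(void) {
    uint32_t addr = 0x5ABC;                    /* a 16-bit logical address (m = 16) */
    uint32_t p = addr >> N;                    /* page number: the top m-n bits */
    uint32_t d = addr & ((1u << N) - 1);       /* page offset: the low n bits */
    printf("page = %u, offset = 0x%03X\n", p, d);   /* prints: page = 5, offset = 0xABC */
    return 0;
}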
Paging Example
52
 Consider
 Physical memory = 128 bytes
 Physical address space = 8 frames
 How many bits in an address?
 How many bits for page number?
 How many bits for page offset?
 Can a logical address space have only 2 pages? How
big would the page table be?
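One way to work these out, assuming byte-addressable memory: 128 bytes of physical memory needs 7-bit physical addresses; 8 frames of 128 / 8 = 16 bytes each means 4 bits of page offset, leaving 3 bits for the frame number. A logical address space of only 2 pages is fine – its page table would need just 2 entries, each holding a 3-bit frame number.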
Another Paging Example
53
 Consider:
 8 bits in an address
 3 bits for the frame/page number
 How many bytes (words) of physical memory?
 How many frames are there?
 How many bytes is a page?
 How many bits for page offset?
 If a process’ page table is 12 bits, how many logical
pages does it have?
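Working these through the same way: 8-bit addresses give 2^8 = 256 bytes of physical memory; 3 frame-number bits give 2^3 = 8 frames, so each frame (and page) is 256 / 8 = 32 bytes, leaving 5 bits of page offset. A 12-bit page table made of 3-bit frame numbers has 12 / 3 = 4 entries, i.e., 4 logical pages.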
Page Table Example (7 bits)
54
 Addresses are 7 bits: page number p (m-n = 3 bits), page offset d (n = 4 bits)
 Process A page table: page 0 → frame 1, page 1 → frame 4
 Process B page table: page 0 → frame 3, page 1 → frame 7
[Diagram: physical memory frames – frame 1 holds Page 0 (A), frame 3 holds Page 0 (B), frame 4 holds Page 1 (A), frame 7 holds Page 1 (B)]
Paging Tradeoffs
55
 Advantage
 No external fragmentation (no compaction)
 Relocation (now pages, before were processes)
 Disadvantage
 Internal fragmentation
 Consider: 2,048-byte pages, 72,766-byte process
 72,766 bytes = 35 full pages + 1,086 bytes, so the last page wastes 2,048 − 1,086 = 962 bytes
 On average: ½ page wasted per process
 Smaller pages reduce this waste, but…
 Overhead
 Page table per process (context-switch cost + space)
 Lookup cost (especially if the page table itself is paged to disk)
Implementation of Page Table
56
 How would you implement page tables in operating
systems?
Implementation of Page Table
57
 Page table kept in main memory
 Page Table Base Register (PTBR) points to the page table
 Page Table Length register
[Diagram: logical memory pages 0 and 1 map through an in-memory page table (0 → 3, 1 → 7), located via the PTBR, into physical memory]
 2 memory accesses per data access (one for the page-table entry, one for the data)
 Solution?
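A sketch of why an in-memory page table costs two accesses per reference; mem[], PAGE_SIZE, and ptbr are illustrative stand-ins, not a real OS structure:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 16                           /* tiny 2^4-byte pages, just for the example */

uint8_t  mem[256];                             /* pretend physical memory */
uint32_t ptbr = 128;                           /* page table lives at mem[128] */

uint8_t load(uint32_t logical) {
    uint32_t p = logical / PAGE_SIZE;
    uint32_t d = logical % PAGE_SIZE;
    uint32_t frame = mem[ptbr + p];            /* memory access #1: the page-table entry */
    return mem[frame * PAGE_SIZE + d];         /* memory access #2: the data itself */
}

int main(void) {
    mem[ptbr + 0] = 3;                         /* page 0 -> frame 3, as in the diagram */
    mem[3 * PAGE_SIZE + 5] = 42;               /* plant a value at frame 3, offset 5 */
    printf("%d\n", load(5));                   /* logical address 5 -> prints 42 */
    return 0;
}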
Associative Registers
58
[Diagram: the logical address (p, d) is first looked up in the associative registers (TLB); on a hit, the frame number f comes straight from the registers; on a miss, it comes from the page table; the physical address is (f, d)]
Associative Register Performance
59
 Hit Ratio – percentage of times that a page number is found in the associative registers
 Hit time = register time + memory time
 Miss time = register time + 2 × memory time
 Effective access time = hit ratio × hit time + miss ratio × miss time
 Example: 80% hit ratio, register time = 20 ns, memory time = 100 ns
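Plugging those numbers into the formula: hit time = 20 + 100 = 120 ns, miss time = 20 + 2 × 100 = 220 ns, so effective access time = 0.8 × 120 + 0.2 × 220 = 96 + 44 = 140 ns.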
Protection
60
 Protection bits with each frame
 Store in page table
 Expand to more permissions
[Diagram: each page-table entry carries a valid/invalid bit – pages 0–2 map to frames 1, 0, and 3 with bit v (valid), while page 3's entry is marked i (invalid)]
Large Address Spaces
61
 Typical logical address spaces:
 4 GBytes => 2^32 addresses (32-bit, 4-byte addresses)
 Typical page size:
 4 KBytes = 2^12 bytes
 Page table may have:
 2^32 / 2^12 = 2^20 = 1 million entries
 Each entry 3 bytes => 3 MB of page table per process!
 Do not want all of that in RAM
 Solution? Page the page table
 Multilevel paging
Multilevel Paging
62
 A 32-bit address splits into page number fields p1 (10 bits) and p2 (10 bits) and a page offset d (12 bits)
[Diagram: p1 indexes the outer page table, which points to one of the inner page tables; p2 indexes that inner table to find the frame holding the logical page; d is the offset within it]
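A simplified C sketch of the two-level lookup, using made-up table types rather than any real MMU format:

#include <stdint.h>
#include <stdio.h>

#define P1(a)  (((a) >> 22) & 0x3FF)           /* top 10 bits: outer-table index */
#define P2(a)  (((a) >> 12) & 0x3FF)           /* next 10 bits: inner-table index */
#define OFF(a) ((a) & 0xFFF)                   /* low 12 bits: page offset */

typedef struct { uint32_t frame[1024]; } inner_table_t;        /* hypothetical layouts */
typedef struct { inner_table_t *inner[1024]; } outer_table_t;

/* Translate a 32-bit virtual address through the outer and inner page tables. */
uint32_t translate(outer_table_t *outer, uint32_t vaddr) {
    inner_table_t *inner = outer->inner[P1(vaddr)];  /* first-level lookup */
    uint32_t frame = inner->frame[P2(vaddr)];        /* second-level lookup */
    return (frame << 12) | OFF(vaddr);               /* frame number + unchanged offset */
}

int main(void) {
    static inner_table_t it;
    static outer_table_t ot;
    ot.inner[0] = &it;
    it.frame[1] = 7;                                 /* map virtual page 1 -> frame 7 */
    printf("0x%08X\n", translate(&ot, 0x00001ABC));  /* prints 0x00007ABC */
    return 0;
}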
Inverted Page Table
63
 One entry per physical frame, each recording the (pid, p) of the page stored there
[Diagram: the logical address (pid, p, d) is translated by searching the inverted table; if entry i holds (pid, p), the physical address is (i, d)]
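A linear-search sketch of that lookup (real systems hash the (pid, page) pair rather than scanning; the names here are illustrative):

#include <stdint.h>
#include <stdio.h>

#define NFRAMES   8
#define PAGE_SIZE 4096

struct ipt_entry { int pid; uint32_t page; };        /* one entry per physical frame */
struct ipt_entry ipt[NFRAMES];

/* Return the physical address for (pid, page, offset), or -1 if the page is not resident. */
long lookup(int pid, uint32_t page, uint32_t offset) {
    for (uint32_t i = 0; i < NFRAMES; i++)           /* search for the matching entry */
        if (ipt[i].pid == pid && ipt[i].page == page)
            return (long)i * PAGE_SIZE + offset;     /* frame index i, then the offset */
    return -1;                                       /* would trigger a page fault */
}

int main(void) {
    ipt[5].pid = 42; ipt[5].page = 3;                /* pid 42, page 3 lives in frame 5 */
    printf("%ld\n", lookup(42, 3, 100));             /* prints 5 * 4096 + 100 = 20580 */
    return 0;
}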
View of Memory
64
 Paging loses the user's view of memory
Segmentation
65
 Logical address: <segment, offset>
 Segment table – maps the two-dimensional user-defined address into a one-dimensional physical address
 base – starting physical location
 limit – length of the segment
 Hardware support
 Segment Table Base Register
 Segment Table Length Register
Segmentation
66
[Diagram: the virtual address <segment #, offset> indexes the segment register table to fetch that segment's limit and base; if offset < limit, the base is added to the offset to form the physical address of a location inside segments 0–4 in physical memory; otherwise a protection fault is raised]
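A small C sketch of that limit check and base addition, with a made-up two-entry segment table:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct segment { uint32_t base, limit; };            /* one segment-table entry */

struct segment seg_table[] = {
    { 0x1000, 0x0400 },                              /* segment 0: 1 KB starting at 0x1000 */
    { 0x8000, 0x2000 },                              /* segment 1: 8 KB starting at 0x8000 */
};

uint32_t translate(uint32_t segno, uint32_t offset) {
    if (offset >= seg_table[segno].limit) {          /* offset must be less than the limit */
        fprintf(stderr, "protection fault\n");
        exit(1);
    }
    return seg_table[segno].base + offset;           /* physical address = base + offset */
}

int main(void) {
    printf("0x%X\n", translate(1, 0x0100));          /* prints 0x8100 */
    return 0;
}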
Segmentation
67
 Protection. With each entry in the segment table, associate:
 validation bit = 0 means illegal segment
 read/write/execute privileges
 Protection bits are associated with whole segments
 Since segments vary in length, memory management becomes a dynamic storage-allocation problem
 Still have external fragmentation of memory