Introduction to Memory Management
W4118 Operating Systems
Instructor: Junfeng Yang
Logistics
Homework 4 out, due 3:09pm 3/26
You will add a new scheduling policy to Linux
This is likely the most difficult programming
assignment of this course, so start today
Last lecture: Advanced Scheduling
Advanced Scheduling Concepts
Multilevel Queue Scheduling
Multiprocessor Scheduling
Real-time Scheduling
Linux Scheduling
Goals
Data structures: runqueue, task_struct
Procedures: schedule(), scheduler_tick()
Today: Memory Management
Dynamic memory allocation
Stack and Heap
Intro to Memory Management
Motivation for Dynamic Memory Allocation
Static (compile time) allocation is not possible
for all data
• Recursive calls
• Runtime input from user
• Complicated data structures
• …
Two ways of dynamic allocation
Stack allocation
• Restricted, but simple and efficient
Heap allocation
• More general, but less efficient
• More difficult to implement
Stack Organization
Stack: memory is freed in the opposite order from
allocation: Last In, First Out (LIFO)
When useful?
• Memory usage pattern follows LIFO
• Example: function call frames
Implementation
• Pointer separating allocated and free space
• Allocate: increment pointer
• Free: decrement pointer
Advantage
• Simple and efficient
• Keeps all free space contiguous
Disadvantage
• Not for general data structures
Heap Organization
Heap: allocate from arbitrary locations
• new in C++, malloc in C, kmalloc in Linux kernel
Memory consists of allocated areas and free areas (or
holes)
When is it useful?
• Allocation and free are unpredictable
• Complex data structures
Advantage: general, works on arbitrary
allocation and free patterns
Disadvantage: can end up with small chunks of free
space
Fragmentation
Small chunks of free memory, too small for
future allocations
• External: visible to system
• Internal: visible to process (e.g. if allocating at some
granularity)
Goal
• Reduce the number of holes
• Keep holes large
Stack: all free space is together as one big
hole
Heap Implementation
Data structure: free list, a linked list that chains free
blocks together
Allocation
• Choose a block large enough for the request
• Update the free list
Free
• Add the block back to the list
• Merge adjacent free blocks
Best vs. First vs. Worst
Best fit
• Search the whole list on each allocation
• Choose the smallest block that can satisfy the request
• Can stop the search if an exact match is found
First fit
• Choose the first block that can satisfy the request
Worst fit
• Choose the largest block (most leftover space)
Which is better?
Examples
Best algorithm: depends on the sequence of
requests
Example: free list has 2 blocks, of size 20 and
15 bytes
• Allocation requests: 10 then 20. Best fit succeeds: it takes 10
from the 15-byte block, leaving the 20-byte block intact for the
second request. First fit splits the 20-byte block and then fails.
• Allocation requests: 8, 12, then 12. First fit succeeds (8 and 12
from the 20-byte block, 12 from the 15-byte block). Best fit takes
8 from the 15-byte block and 12 from the 20-byte block, leaving
holes of 7 and 8 bytes, and fails on the last request.
Comparison of Allocation Strategies
Best fit
• Tends to leave very large holes and very small holes
• Disadvantage: very small holes may be useless
First fit
• Tends to leave "average" size holes
• Advantage: faster than best fit
Worst fit
• Simulations show that worst fit is the worst in terms of
storage utilization
Today: Memory Management
Dynamic memory allocation
Stack
Heap
Allocation strategies
Intro to Memory Management
Motivation for Memory Management
Simple uniprogramming with a single segment
per process: memory holds the OS and one user process
• Early batch systems
• Early personal computers
Disadvantages
• Only one process can run at a time
• Process can destroy the OS
Multiple Processes in Memory
[Figure: memory holding Process A and Process B at the same time]
Multiprogramming Wish-list
Sharing
• Multiple processes coexist in main memory
Transparency
• Processes are not aware that memory is shared
• Run regardless of the number and/or locations of processes
Protection
• Cannot corrupt the OS or other processes
• Privacy: cannot read data of other processes
Efficiency: should have reasonable performance
• The purpose of sharing is to increase efficiency
• Do not waste CPU or memory resources
Relocation
Transparent relocation
Process can run anywhere in memory (can't
predict locations in advance)
How?
Background
Compiler compiles source files into object files
Linker links object files and system libraries
into a program
Loader loads the program and dynamically linked
libraries (from disk) into memory, placing them
within a process so it can run
Dynamic Linking
Linking postponed until execution time
Implementation
• Small piece of code, a stub, used to locate the
appropriate memory-resident library routine
• Stub replaces itself with the address of the
routine, and executes the routine
Advantage
• Useful for libraries: updating the library
updates all programs that use it (shared
libraries)
• Saves space
Disadvantage
• Difficult to manage dependencies
• Runtime failures
Dynamic Loading
Routine is not loaded until it is called
Better memory-space utilization; unused
routine is never loaded
Useful when large amounts of code are
needed to handle infrequently occurring
cases
When to Relocate?
Compile time
• Hardwire physical locations at compile time (absolute code)
• Problem: each program needs to be written with others in
mind; not really transparent
Load time
• Compile code to be relocatable (relocatable code)
• How? All addresses are relative to a start address; change the
start address to relocate
• Problem: once loaded, can't change or move
Execution time
• Can move during execution
• This is what's generally done, but it needs hardware support
Relocation at Execution Time
[Figure: CPU issues logical addresses; the MMU translates them
into physical addresses sent to memory]
Map program-generated address to hardware
address dynamically at every reference
MMU: Memory Management Unit
• Controlled by OS
Program: logical (virtual) addresses
Hardware: physical (real) addresses
Address space: each process’s view of memory
Address Spaces
[Figure: logical view shows separate address spaces AS1, AS2,
AS3; physical view shows them placed in memory alongside the OS]
Hardware Support
Two operating modes
Privileged (protected, kernel) mode: when the OS runs
• Entered when trapping into the OS (system calls, interrupts, exceptions)
• Allows certain instructions to be executed
• Allows the OS to access all of physical memory
User mode: when user processes run
• Hardware performs translation of logical addresses to
physical addresses
• Protects the OS's and other processes' physical memory
How to implement protection?
Base register: start physical location of the address space
Limit register: last valid address the process may access
• Process appears to have private memory of size equal to the
limit register
Implementation
Translation on every memory access
• Compare the logical address to the limit register
• If greater, generate an exception
• Otherwise, add the base register to the logical address to
generate the physical address
[Figure: the logical address from the CPU is compared against the
limit register; if within the limit it is added to the base
register to form the physical address, otherwise an exception is
raised]
Managing Processes with Base and Limit
Context switch: OS must save and restore the base and limit registers
Protection requirements
• User process cannot change the base and limit registers
• User process cannot change to privileged mode
Pros and Cons of Base and Limit
Contiguous allocation: each process occupies a contiguous
memory block
Advantages
• Supports dynamic relocation of address space
• Supports protection across multiple address spaces
• Cheap: few registers and little logic
• Fast: add and compare can be done in parallel
Disadvantages
• Each process must be allocated contiguously in real memory
• Must allocate memory that may not be used
• Fragmentation: may be unable to allocate memory for a new process
(solution: swapping, next)
• No sharing: cannot share limited parts of the address space,
e.g. cannot share code while keeping data private
Swapping
A process can be swapped temporarily out of memory to
a backing store, and then brought back into memory for
continued execution
Backing store – fast disk large enough to accommodate
copies of all memory images for all users; must provide
direct access to these memory images
Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; a lower-priority process is
swapped out so a higher-priority process can be loaded and
executed
Major part of swap time is transfer time; total transfer
time is directly proportional to the amount of memory
swapped
Modified versions of swapping are found on many
systems (e.g., UNIX, Linux, and Windows)
Schematic View of Swapping