Memory Management
Three levels:
1. hardware
– caches to speed up access
– concurrent access from threads, cores, multiple CPUs
2. programming (language dependent)
• malloc, free (see the sketch after this list)
• new…, garbage collection
3. OS
• number of programs in memory
• swap to/from disk
• size of programs
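As an illustration of level 2, a minimal C sketch of explicit allocation with malloc and free; in languages with new and garbage collection the release step would happen automatically. The sizes and values are arbitrary.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* explicit allocation, as with C's malloc */
    int *values = malloc(100 * sizeof *values);
    if (values == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    for (int i = 0; i < 100; i++)
        values[i] = i * i;
    printf("values[10] = %d\n", values[10]);

    free(values);   /* explicit release: the programmer, not a garbage collector */
    return 0;
}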
OS Memory Management
(Figure: memory holding several programs; an arrow marks a reference to data, e.g. a linked list)
•many programs and the OS (partly) in memory, ready to run when needed or possible
•efficient use of the CPU and the available memory
Requirements
•Relocation: adjustment of references to memory when
program is (re)located in memory
•Protection: against reading or writing of memory locations by other processes during execution
•Sharing: data and code (libraries), communication
between processes
•Logical Organization: programs are written in modules,
compiled independently; different degrees of protection
•Physical Organization: memory available for a program
plus its data may be insufficient; movements to and from
disks
Loading of programs
(Figure: Module 1, Module 2, and a Library are combined by the Linker into a Load Module, which the Loader places into Memory)
Relocatable loading:
addresses are relative to a fixed point (the beginning of the program) in the load module
a list of these addresses (the relocation dictionary) is stored with the load module
the loader adds (load address – fixed point) to those addresses
Swapping is only possible if the program returns to the same position.
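A minimal C sketch of the loader's fix-up step, assuming the fixed point is the beginning of the load module and the relocation dictionary is simply a list of word offsets to patch; the names (relocate, reloc_dict) are illustrative, not taken from a real loader.

#include <stddef.h>
#include <stdint.h>

/* image      : the load module, already copied to memory at load_addr
 * reloc_dict : offsets (in words) of the locations holding relative addresses
 * Since the fixed point is assumed to be offset 0 of the load module,
 * the correction (load address - fixed point) is simply load_addr.        */
void relocate(uint32_t *image, size_t image_words,
              const size_t *reloc_dict, size_t n_entries,
              uint32_t load_addr)
{
    for (size_t i = 0; i < n_entries; i++) {
        size_t off = reloc_dict[i];
        if (off < image_words)
            image[off] += load_addr;   /* address + (load address - fixed point) */
    }
}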
Dynamic Run Time loading
(Figure: dynamic run-time address translation in hardware — the relative address is added (Adder) to the Base Register taken from the Process Control Block, giving the absolute address into the process's code, data, or stack; a Comparator checks it against the Bounds Register and, if it is out of range, raises an interrupt to the operating system. Done in the hardware; can also provide protection.)
A relative address is an example of a "logical" address, which is independent of the place of the program in physical memory.
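A C sketch of what the adder and comparator do, assuming the bounds register holds the last valid absolute address of the process; the function and its types are illustrative only.

#include <stdbool.h>
#include <stdint.h>

/* The adder forms absolute = base + relative; the comparator then checks the
 * result against the bounds register and reports a fault where the hardware
 * would raise an interrupt to the operating system.                         */
bool base_bounds_translate(uint32_t relative, uint32_t base_reg,
                           uint32_t bounds_reg, uint32_t *absolute)
{
    *absolute = base_reg + relative;   /* adder */
    if (*absolute > bounds_reg)        /* comparator */
        return false;                  /* -> interrupt to the operating system */
    return true;
}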
Fixed partitioning
The part of the memory not used by the operating system is divided into parts of equal or different length.
Disadvantage of partitions of equal size:
•if the program is too big for the chosen size, the programmer
must work with “overlays”.
•small programs still use the full partition: “internal
fragmentation” of memory
Advantage:
•placing of a new program in memory is easy: take any free
partition
Variable sized partitions
•Number and sizes of partitions are fixed during system generation
•There may be day/night variation: more small partitions during the day for testing; more large partitions during the night for production
(Figure: 1 M of memory with the OS at the top and fixed partitions of 128 K, 128 K, 256 K, 256 K, and 512 K; arrows show new processes being assigned to partitions)
•two choices for loading new programs: one minimizes internal fragmentation, the other maximizes the number of loaded programs
Dynamic partitioning
•Each process gets exactly the memory it needs (rounded up to 1, 2 or 4 KB)
•Number and size of the partitions are variable
(Figure: successive snapshots of memory under dynamic partitioning — processes P1 (320 K), P2 (224 K), P3 (288 K), and P4 (128 K) are loaded and swapped in the space below the OS, leaving small free holes of 96 K and 64 K scattered through memory)
•Gives "external fragmentation"
•If that gets too big, use "compaction":
•stop all processes
•move them in memory so that all free space is at the end
•need an algorithm for placement of processes (see the sketch below)
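One common placement algorithm is first-fit; a minimal C sketch, assuming the free memory is kept as an array of holes (the structure and names are illustrative; best-fit or next-fit would scan the same data differently).

#include <stddef.h>
#include <stdint.h>

struct hole { uint32_t start; uint32_t size; };   /* one free block of memory */

/* First-fit: place the process in the first hole that is large enough;
 * the remainder of that hole stays free.  Returns the start address, or
 * (uint32_t)-1 when no hole fits -- the external-fragmentation case in
 * which compaction may be needed.                                        */
uint32_t first_fit(struct hole *holes, size_t n, uint32_t request)
{
    for (size_t i = 0; i < n; i++) {
        if (holes[i].size >= request) {
            uint32_t addr = holes[i].start;
            holes[i].start += request;
            holes[i].size  -= request;
            return addr;
        }
    }
    return (uint32_t)-1;   /* no free partition is large enough */
}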
Paging
•Partition memory into small equal-size chunks (frames) and divide
each process into the same size chunks (pages)
•Operating system maintains a page table for each process:
•contains the frame location for each page in the process
•memory address consists of a page number and offset
•A special hardware register points during execution to the page
table of the executing process.
•An extra read access to memory is thus needed; caching (in the CPU) can be used to speed it up.
•Contiguous frames in memory are not necessary; as long as there are at least as many free frames as the process has pages, the process can be loaded and executed.
•No external fragmentation.
•Little internal fragmentation (only in the last page of a process)
Process page table
A 16-bit logical address consists of a 6-bit page number and a 10-bit offset.
Example: logical address 000001 0111011110 refers to page 1 with offset 0111011110.
The process page table maps page 0 → frame 000101, page 1 → frame 000110, page 2 → frame 011001.
The frame number replaces the page number, giving the 16-bit physical address 000110 0111011110.
•a process has at most 64 pages of 1 KB each
•physical memory has at most 64 frames of 1 KB
Add 2 bits to the physical address and to the page table entries:
•there are now 256 frames of 1 KB
•a process can still have at most 64 pages, i.e. be 64 KB long
•there can be more processes in memory
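The translation of the example above, written as a small C sketch; the page-table contents are those of this slide, the code itself is illustrative.

#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 10                        /* 1 KB pages   */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* Translate a 16-bit logical address (6-bit page number, 10-bit offset)
 * using a per-process page table that maps page numbers to frame numbers. */
uint16_t page_translate(uint16_t logical, const uint8_t page_table[64])
{
    uint16_t page   = logical >> OFFSET_BITS;   /* upper 6 bits  */
    uint16_t offset = logical & OFFSET_MASK;    /* lower 10 bits */
    uint16_t frame  = page_table[page];
    return (uint16_t)((frame << OFFSET_BITS) | offset);
}

int main(void) {
    uint8_t page_table[64] = { 5, 6, 25 };  /* frames 000101, 000110, 011001 */
    uint16_t logical = 0x05DE;              /* 000001 0111011110: page 1     */
    /* prints 0x19DE, i.e. 000110 0111011110 */
    printf("physical address = 0x%04X\n",
           (unsigned)page_translate(logical, page_table));
    return 0;
}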
Segmentation
•Process is divided into a number of segments, which can be of different size; there is a maximal size.
•For each process there is a table with the length and starting (base) address of each segment; segments can be non-contiguous in memory.
•Segmentation is often visible to the programmer, who can place functions and data blocks into certain segments.
Example (16-bit logical address: 4-bit segment number, 12-bit offset):
Logical address 0001 001011110011 refers to segment 1 with offset 001011110011.
Process segment table: segment 0 has length 001011101110 and base 0000010000000000; segment 1 has length 011110011110 and base 0010000000100000.
The offset is compared against the segment length (usable for protection) and added to the base address, giving the 16-bit physical address 0010 0011 0001 0011.
•No internal fragmentation, only external
•Placement algorithm is needed
•Base address can be longer, to use more physical memory
•Tables are larger than with paging; more hardware support needed.
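A C sketch of the segment-table lookup from the figure, including the length check that makes it usable for protection; the types and names are illustrative. With the table of this slide (segment 0: length 0x2EE, base 0x0400; segment 1: length 0x79E, base 0x2020), logical address 0x12F3, i.e. segment 1 with offset 0x2F3, translates to physical address 0x2313 = 0010 0011 0001 0011, as shown above.

#include <stdbool.h>
#include <stdint.h>

struct segment { uint16_t length; uint16_t base; };  /* one segment table entry */

/* Translate a 16-bit logical address (4-bit segment number, 12-bit offset).
 * The offset is checked against the segment length; an out-of-range access
 * would raise an interrupt in real hardware.                               */
bool seg_translate(uint16_t logical, const struct segment table[16],
                   uint16_t *physical)
{
    uint16_t seg    = logical >> 12;
    uint16_t offset = logical & 0x0FFF;
    if (offset >= table[seg].length)        /* protection check                */
        return false;
    *physical = table[seg].base + offset;   /* segments need not be contiguous */
    return true;
}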
Virtual memory basis
Two properties of simple paging and segmentation:
•all memory references in a process are logical addresses which
during execution are dynamically converted by hardware to
physical addresses
•a process can be divided into parts (pages or segments) which do not have to be in contiguous physical memory during execution
are the basis for a fundamental breakthrough:
•not all the pages or segments of a process have to be in memory
during execution
•as long as the next instruction and data items needed by it are in
physical memory, the execution can proceed.
•if that is not the case (page or segment fault) those pages (or
segments) must be loaded before execution can proceed
•unused pages or segments can be swapped to disk.
VM
Implications:
•more processes can be in memory: better usage of the CPU,
better response times for interactive users
•a process can be larger than the available physical memory
This gives the name "virtual memory": the memory seen by a process is kept on the swap disk and is not limited by the "real memory".
VM can be based on paging, segmentation, or both.
Needed for virtual memory systems:
•hardware (address translation, usage bits, caches for speed)
•management software (tables, disk I/O, algorithms)
VM is now used on mainframes, workstations, PCs, etc.
It is not used on some "real-time" systems, as the execution time of processes becomes less predictable.
Thrashing, locality principle
Thrashing
•Swapping out a piece of a process just before that piece is
needed
•The processor spends most of its time swapping pieces rather
than executing user instructions
Principle of locality
•Program and data references within a process tend to cluster
•Only a few pieces of a process will be needed over a short period
of time
•Possible to make intelligent guesses about which pieces will be
needed in the future
•This suggests that virtual memory can work efficiently, provided the programmer and compiler care about locality (see the sketch below)
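As an illustration of that last point, two C loops that compute the same sum over a matrix: the first walks consecutive addresses and so touches few distinct pages per time interval, the second strides across rows and touches many more. The example is not from the slides.

#include <stddef.h>

#define N 1024
static double a[N][N];   /* 8 MB of data, spread over many pages */

double sum_row_major(void) {          /* good locality         */
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];             /* consecutive addresses */
    return s;
}

double sum_col_major(void) {          /* poor locality         */
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];             /* stride of N doubles   */
    return s;
}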
Page tables in VM
Control bits:
•P(resent): page in memory or on disk
•M(odified): page modified or not
•time indication of last use
•Two-level, hierarchical page table
•Part of it can be on disk
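A C sketch of a two-level lookup with a present bit, under assumed field sizes (two 10-bit indices and a 12-bit offset for 4 KB pages); the entry layout and names are illustrative, not those of a particular architecture.

#include <stdbool.h>
#include <stdint.h>

struct pte {
    uint32_t frame    : 20;   /* frame number                              */
    uint32_t present  : 1;    /* P: page (or second-level table) in memory */
    uint32_t modified : 1;    /* M: page written since it was loaded       */
    uint32_t last_use : 10;   /* coarse time indication of last use        */
};

/* Returns false on a page fault: either the second-level table or the page
 * itself is not in memory and must be brought in before the access can retry. */
bool vm_lookup(const struct pte *root, struct pte *const *second_level,
               uint32_t vaddr, uint32_t *paddr)
{
    uint32_t top = (vaddr >> 22) & 0x3FF;   /* index into the root table         */
    uint32_t mid = (vaddr >> 12) & 0x3FF;   /* index into the second-level table */
    uint32_t off =  vaddr        & 0xFFF;

    if (!root[top].present)                 /* second-level table may be on disk */
        return false;
    const struct pte *leaf = &second_level[top][mid];
    if (!leaf->present)                     /* page fault                        */
        return false;
    *paddr = ((uint32_t)leaf->frame << 12) | off;
    return true;
}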
Translation Lookaside Buffer
•Contains page table entries that have been most recently used
•Functions same way as a memory cache
•Given a virtual address, processor examines the TLB
•If page table entry is present (a hit), the frame number is retrieved
and the real address is formed
•If page table entry is not found in the TLB (a miss), the page
number is used to index the process page table
•First it is checked whether the page is already in main memory; if it is not, a page fault is issued
•The TLB is updated to include the new page entry
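The steps above as a C sketch; a small fully associative TLB is assumed, and walk_page_table stands in for the page-table lookup of the previous slides (all names are illustrative).

#include <stdbool.h>
#include <stdint.h>

#define TLB_SIZE 16

struct tlb_entry { uint32_t page; uint32_t frame; bool valid; };

extern bool walk_page_table(uint32_t page, uint32_t *frame);  /* may page-fault */

bool tlb_translate(struct tlb_entry tlb[TLB_SIZE], uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page = vaddr >> 12, offset = vaddr & 0xFFF;

    for (int i = 0; i < TLB_SIZE; i++)               /* TLB hit?                 */
        if (tlb[i].valid && tlb[i].page == page) {
            *paddr = (tlb[i].frame << 12) | offset;
            return true;
        }

    uint32_t frame;                                  /* miss: use the page table */
    if (!walk_page_table(page, &frame))
        return false;                                /* page fault               */

    tlb[page % TLB_SIZE] = (struct tlb_entry){ page, frame, true };  /* update TLB */
    *paddr = (frame << 12) | offset;
    return true;
}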
Use of TLB
Combined segmentation and paging
•Paging is transparent to the programmer
•Paging eliminates external fragmentation
•Segmentation is visible to the programmer (and compiler)
•allows for growing data structures, modularity, and support for
sharing and protection
•embedded systems: program in ROM, data in RAM
OS policies
• Fetch: when should a page be brought into memory?
– on-demand paging: only when needed
– pre-paging: bring in more, trying to reduce page faults
• Replacement: which page should be replaced? (see the sketch after this list)
• Frame Locking, to increase efficiency
– kernel of the operating system, control structures, I/O buffers
• Resident Set Size: how many pages per process
– fixed allocation
– variable allocation, with global or local scope
• Cleaning: when to write a modified page back to disk so that its frame can be re-used for another page
– on demand or pre-cleaning
• Load control: number of processes resident in main memory
• Process suspension: scheduling policy
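The slides do not fix a particular replacement policy; as one illustration for the Replacement item above, a C sketch of the clock algorithm, which approximates least-recently-used with a single use bit per frame.

#include <stdbool.h>
#include <stddef.h>

struct frame { int page; bool use_bit; };   /* one frame of the resident set */

/* Clock replacement: sweep the frames in a circle; a frame whose use bit is
 * set gets a second chance (the bit is cleared), and the first frame found
 * with use bit 0 is the victim.  *hand is the clock hand, kept between calls. */
size_t choose_victim(struct frame *frames, size_t n, size_t *hand)
{
    for (;;) {
        size_t i = *hand;
        *hand = (*hand + 1) % n;
        if (!frames[i].use_bit)
            return i;                  /* evict the page in this frame       */
        frames[i].use_bit = false;     /* referenced recently: spare it once */
    }
}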