Principles of Operating Systems
Abhishek Dubey
Daniel Balasubramanian
Slides based on PowerPoint and book material from William Stallings
Fall 2014
Operating System Defined
• 2 common meanings:
1. The entire package consisting of the central software that manages a computer’s resources and all of the accompanying software tools
2. The software that manages and allocates computer resources (CPU, RAM, DVD, display, etc.)
• The second is also called a kernel; this is the definition we will use
• The kernel itself is an executable; on Linux, this executable is located at /boot/vmlinuz
Operating System
• A program that controls the execution of application programs and manages the computer, which is a set of resources for the movement, storage, and processing of data
• An interface between applications and hardware
Figure 2.1 Computer Hardware and Software Infrastructure
Main objectives of an OS:
• Convenience
• Efficiency
• Ability to evolve
• Functions in the same way as ordinary computer software
• A program, or suite of programs, executed by the processor
• Frequently relinquishes control and must depend on the processor to allow it to regain control
What does a kernel do?
• Process scheduling
• Memory management
• Provides file systems
• Creates and terminates processes
• Provides device access
• Networking
• Provides a system call interface
Process Scheduling
• The CPUs execute the instructions of programs
• Linux is a preemptive, multitasking OS:
  – Multiple processes simultaneously reside in memory
  – The rules governing which process receives the CPU, and for how long, are determined by the OS process scheduler
• In other words, the kernel decides when to run a process and for how long (see the sketch below)
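A minimal sketch of how a user process can interact with the scheduler (assuming a POSIX/Linux system): it lowers its own priority with nice() and voluntarily gives up the CPU with sched_yield(); the scheduling decisions themselves remain inside the kernel.

/* Sketch: a user process can only influence the kernel's scheduler,
 * e.g. by lowering its own priority or voluntarily yielding the CPU. */
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    errno = 0;
    if (nice(10) == -1 && errno != 0)   /* ask to be treated as lower priority */
        perror("nice");

    for (int i = 0; i < 5; i++) {
        printf("doing a little work (%d)\n", i);
        sched_yield();                  /* give up the CPU; the scheduler picks the next process */
    }
    return 0;
}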
Key Elements of an Operating System
Memory Management
• A process must reside in main memory to run
  – But RAM is limited, so the OS has to manage how memory is allocated to processes
• Linux virtual memory management provides two things:
  – Processes are isolated from one another and from the kernel
  – Only part of a process is kept in memory at a time, which allows more processes to be in RAM simultaneously
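As a small illustration (a sketch, assuming Linux/glibc), a process can ask the kernel's virtual memory manager for an anonymous page directly with mmap(); allocators such as malloc() are built on calls like this.

/* Sketch: asking the kernel's virtual memory manager for one anonymous page. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;                          /* one page on most systems */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    p[0] = 'x';                                 /* touching the page makes it resident */
    printf("page mapped at %p\n", (void *)p);
    munmap(p, len);
    return 0;
}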
Provides File Systems
• File system: an organized collection of regular files and directories
• A file system provides a way to store, retrieve, update, and delete files
• The kernel must understand a variety of file systems
  – In Linux, you can see which file-system types the kernel knows about by viewing the file /proc/filesystems (see the sketch below)
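A short sketch (assuming a Linux system where /proc is mounted) that prints the file-system types the running kernel knows about by reading /proc/filesystems:

/* Sketch: list the file-system types known to the running Linux kernel. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/filesystems", "r");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    char line[128];
    while (fgets(line, sizeof line, f) != NULL)
        fputs(line, stdout);                    /* e.g. "nodev  proc", "ext4", ... */
    fclose(f);
    return 0;
}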
Create and terminate processes
• The kernel loads new programs into memory and gives them resources
  – Memory, CPU, file access
• An instance of a running program is called a process
• Once a process is finished, the kernel reclaims its resources for subsequent reuse (see the sketch below)
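The classic fork/exec/wait pattern shows this life cycle; the sketch below (assuming a Unix-like system with /bin/ls present) creates a child process, runs a program in it, and lets the kernel reclaim the child when it exits.

/* Sketch: create a process, run a program in it, reclaim it when it exits. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* kernel creates a new process */
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                      /* child: replace our image with /bin/ls */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");                 /* only reached if exec fails */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);            /* parent: kernel reclaims the child here */
    printf("child %d finished\n", (int)pid);
    return 0;
}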
Device access
• The kernel provides a standardized interface to simplify access to devices
• It also arbitrates access to each device by many processes
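Because devices are exposed through the same file interface, the ordinary open/read/close calls work on a device node. A sketch, assuming a Linux system with /dev/urandom:

/* Sketch: devices appear as files, so open/read/close work on a device node. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/urandom", O_RDONLY);    /* a character device, not a regular file */
    if (fd == -1) {
        perror("open");
        return 1;
    }
    unsigned char buf[8];
    ssize_t n = read(fd, buf, sizeof buf);      /* the same read() used for ordinary files */
    for (ssize_t i = 0; i < n; i++)
        printf("%02x", buf[i]);
    printf("\n");
    close(fd);
    return 0;
}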
Networking
• The kernel transmits and receives network messages for user processes
  – This includes routing network packets to the target system
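A minimal sketch of handing network I/O to the kernel through the socket API; the address 127.0.0.1 and port 7 (the echo service) are only illustrative and the connect will fail unless such a server is actually running.

/* Sketch: a user process hands network I/O to the kernel via the socket API. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* ask the kernel for a TCP socket */
    if (fd == -1) {
        perror("socket");
        return 1;
    }
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);                   /* illustrative: the "echo" port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
        const char msg[] = "hello\n";
        write(fd, msg, strlen(msg));            /* the kernel builds and transmits the packets */
    } else {
        perror("connect");
    }
    close(fd);
    return 0;
}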
Provides a system call interface
• A system call is the way a process requests the kernel to perform tasks on its behalf
• Equivalently, a system call is the entry point through which a process enters the kernel to make such a request
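For illustration, the sketch below (assuming Linux/glibc) makes the same request twice: once through the usual write() library wrapper and once directly through the generic syscall() entry point using the SYS_write number.

/* Sketch: the same request made through a libc wrapper and through syscall(). */
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello via the system call interface\n";

    write(STDOUT_FILENO, msg, sizeof msg - 1);                 /* libc wrapper */
    syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);    /* raw entry point */
    return 0;
}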
Processor Modes
• Modern processors have at least two CPU modes: user mode and kernel mode
  – Hardware instructions switch between the two
• Areas of virtual memory are marked as user or kernel space
  – Code running in user mode can access only user-space memory; kernel mode can access both
• Some operations are available only in kernel mode
  – e.g., the halt instruction that stops the system, or access to memory-management hardware
• This hardware design prevents user programs from interfering with the kernel or executing certain instructions
The shell
• A special program that reads the commands you type and executes them in response (a minimal loop of this kind is sketched below)
  – Also called a command interpreter
• A user-level process
  – Therefore it is not considered part of the OS kernel
  – Several different shells exist: Bourne shell (sh), C shell (csh), Korn shell (ksh), Bourne-again shell (bash)
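The essence of a shell is just a user-level loop around the kernel's process-creation calls. The sketch below is a deliberately minimal command loop (single-word commands only, no arguments or pipes), not how any real shell is implemented.

/* Sketch: the core of a shell is a read-and-execute loop running in user space. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];
    for (;;) {
        printf("$ ");
        fflush(stdout);
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                               /* end of input: exit the shell */
        line[strcspn(line, "\n")] = '\0';        /* strip the trailing newline */
        if (line[0] == '\0')
            continue;
        pid_t pid = fork();
        if (pid == 0) {
            execlp(line, line, (char *)NULL);    /* single-word commands only */
            perror("exec");
            _exit(127);
        }
        if (pid > 0)
            waitpid(pid, NULL, 0);               /* wait for the command to finish */
    }
    return 0;
}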
Different Architectural Approaches
• Demands on operating systems require new ways of organizing the OS
• Different approaches and design elements have been tried:
• Microkernel architecture
• Multithreading
• Symmetric multiprocessing
• Distributed operating systems
• Object-oriented design
Microkernel Architecture vs. the Monolithic Kernel Approach
• Assigns only a few essential functions to the kernel:
  – Address spaces
  – Interprocess communication (IPC)
  – Basic scheduling
• The approach:
  – Simplifies implementation
  – Provides flexibility
  – Is well suited to a distributed environment
Key Interfaces
• Instruction set architecture (ISA)
• Application binary interface (ABI)
• Application programming interface (API)
Evolution of Operating Systems
A major OS will evolve over time for a number of reasons:
• Hardware upgrades
• New types of hardware
• New services
• Fixes
Evolution of Operating Systems
(Figure: stages in the evolution of operating systems: Serial Processing, Simple Batch Systems, Multiprogrammed Batch Systems, Time Sharing Systems)
Serial Processing: 1940s – mid 1950s
Earliest Computers:
• No operating system
• Programmers interacted directly with the computer hardware
• Computers ran from a console with display lights, toggle switches, some form of input device, and a printer
• Users had access to the computer “in series”
Problems:
• Scheduling
  – most installations used a hardcopy sign-up sheet to reserve computer time
  – time allocations could run short or long, resulting in wasted computer time
• Setup time
  – a considerable amount of time was spent just setting up a program to run
Simple Batch Systems: 1950s – 1960s
• Early computers were very expensive
  – important to maximize processor utilization
• Monitor
  – the user no longer has direct access to the processor
  – jobs are submitted to a computer operator, who batches them together and places them on an input device
  – a program branches back to the monitor when it finishes
• The monitor controls the sequence of events
• Resident monitor: the part of the monitor that is always in memory
• The monitor reads in a job and gives it control; the job returns control to the monitor when it finishes
• The processor executes instructions from the memory area containing the monitor, then executes the instructions in the user program until it encounters an ending or error condition
• “Control is passed to a job” means the processor is fetching and executing instructions in a user program
• “Control is returned to the monitor” means the processor is fetching and executing instructions from the monitor program
Memory protection for the monitor
• while the user program is executing, it must not alter the memory area containing the monitor
Timer
• prevents a job from monopolizing the system
Privileged instructions
• can only be executed by the monitor
Interrupts
• give the OS more flexibility in controlling user programs
Simple Batch System Overhead
• Processor time alternates between execution of user programs and execution of the monitor
• Sacrifices:
  – some main memory is now given over to the monitor
  – some processor time is consumed by the monitor
• Despite this overhead, the simple batch system improves utilization of the computer
Multiprogrammed Batch Systems
• Even with automatic job sequencing, the processor is often idle
  – I/O devices are slow compared to the processor
• The processor spends a certain amount of time executing until it reaches an I/O instruction; it must then wait until that I/O instruction concludes before proceeding
• There must be enough memory to hold the OS (resident monitor) and one user program
• When one job needs to wait for I/O, the processor can switch to the other job, which is likely not waiting for I/O
Effects on Resource Utilization
Table 2.2 Effects of Multiprogramming on Resource Utilization
Time Sharing Systems
• Can be used to handle multiple interactive jobs
• Processor time is shared among multiple users
• Multiple users simultaneously access the system through terminals, with the OS interleaving the execution of each user program in a short burst or quantum of computation
Table 2.3 Batch Multiprogramming versus Time Sharing
• Operating systems are among the most complex pieces of software ever developed
Major advances in development include:
• Processes/multithreading
• Memory management
• Information protection and security
• Scheduling and resource management
• System structure
Multithreading
• Technique in which a process, executing an application, is divided into threads that can run concurrently (see the sketch below)
Thread
• dispatchable unit of work
• includes a processor context and its own data area to enable subroutine branching
• executes sequentially and is interruptible
Process
• a collection of one or more threads and associated system resources
• the programmer has greater control over the modularity of the application and the timing of application-related events
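A small POSIX threads sketch (compile with -pthread) showing one process divided into two concurrently runnable threads, each a separately dispatchable unit of work:

/* Sketch: one process divided into two concurrently runnable threads. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    const char *name = arg;                /* each thread is a dispatchable unit of work */
    for (int i = 0; i < 3; i++)
        printf("%s: step %d\n", name, i);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);                /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}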
Causes of Errors
• Improper synchronization
  – a program must wait until the data are available in a buffer
  – improper design of the signaling mechanism can result in loss or duplication
• Failed mutual exclusion
  – more than one user or program attempts to make use of a shared resource at the same time
  – only one routine at a time should be allowed to perform an update against the file (see the sketch below)
• Nondeterminate program operation
  – program execution is interleaved by the processor when memory is shared
  – the order in which programs are scheduled may affect their outcome
• Deadlocks
  – it is possible for two or more programs to be hung up waiting for each other
  – may depend on the chance timing of resource allocation and release
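As an illustration of enforcing mutual exclusion, the sketch below (POSIX threads, compile with -pthread) has two threads update a shared counter; the mutex ensures only one routine at a time performs the update, so no increments are lost. Removing the lock/unlock calls makes the final count nondeterministic.

/* Sketch: a mutex enforces mutual exclusion on a shared counter. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);         /* only one routine at a time may update */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (200000 expected)\n", counter);
    return 0;
}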