Functions of the Operating System


TOPIC 1
THE FUNCTIONS OF OPERATING SYSTEMS
CONTENT:
1. Features of operating systems
2. Scheduling
3. Interrupt handling
4. Job queues and priorities
5. Memory management
6. Spooling
7. Modern personal computer operating systems
Functions of the Operating System
The main purpose of an operating system is to make the computer easier to use. That is, the
software provides an interface that is more user-friendly than the underlying hardware. As part
of this process, the operating system manages the resources of the computer in an attempt to
meet overall system goals such as efficiency. The operating system usually hides such
complexities from the user.
• Control program for the computer
• Allocates computer resources, including but not limited to:
  o CPU
  o Memory
  o Disk and tape storage
  o Printers
  o Terminals
  o Modems
  o Other devices attached to the computer
• Schedules tasks to be executed
• Provides the only meaningful way for a user to make their requests known to the computer
• Often divided into a kernel, which is always in memory, and other portions called in by the
  kernel as needed
Types of Operating Systems
The most common ways of classifying operating systems are based on the kind of user
interface provided. Another way of classifying operating systems is by the number of users
the system can support at one time.
A single-job system is one that runs one user job at a time.
A multiprogramming system permits several user jobs to be executed concurrently.
A multiprocessor system is similar to a multiprogramming system, except that more than one
CPU is available.
User Interface
The user interface provided by an operating system is designed to serve the needs of the
various groups of people who must deal with the computer.
• In a simple operating system, such as one designed for a personal computer, the user
  interface is also relatively simple. The typical user of such a system is primarily concerned
  with running programs and managing files. The interface is generally designed to be easy to
  use.
• For more complex systems, there may be a number of different user-interface languages.
  Menus and graphical interfaces are sometimes provided for occasional users of the system.
  There may also be a more complex and more powerful command language that is intended
  for use by professional programmers and system managers. In addition, there is usually a
  special language that is used to communicate with the operators of the computer.
MEMORY MANAGEMENT
Memory management has always been one of the most important and interesting aspects of
any operating system for serious developers; it is an aspect that kernel developers cannot
afford to ignore. Memory management, in essence, provides a thumbnail impression of any
operating system.
Microsoft has introduced major changes in the memory management of each new
operating system they have produced. Microsoft had to make these changes because
they developed all of their operating systems for Intel microprocessors, and Intel
introduced major changes in memory management support with each new
microprocessor they introduced.
Early PCs based on the Intel 8086/8088 microprocessors made only 640K of RAM available
to programs and used the segmented memory model. Consequently, good old DOS allows only
640K of RAM and restricts the programmer to the segmented memory model.
In the segmented model, the address space is divided into segments. Proponents of the
segmented model claim that it matches the programmer's view of memory: a programmer views
memory as different segments containing code, data, stack, and heap. The Intel 8086 supports
only very primitive segmentation. A segment in the 8086 memory model has a predefined base
address, and the length of each segment is fixed at 64K. Some programs find a single 64K
segment too restrictive.
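As a concrete illustration of the segmented model, the sketch below shows how a real-mode
8086 segment:offset pair maps to a 20-bit physical address (physical address = segment * 16 +
offset); because the offset is only 16 bits, a single segment spans at most 64K. This example is
not taken from the text above, and the address values are chosen purely for demonstration.

#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch: real-mode 8086 address formation.
 * physical = segment * 16 + offset; the 16-bit offset is what
 * limits a single segment to 64K. */
static uint32_t phys_addr(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;   /* segment * 16 + offset */
}

int main(void)
{
    /* Example segment:offset pairs, chosen only for demonstration. */
    printf("0x%05X\n", (unsigned)phys_addr(0xB800, 0x0000)); /* 0xB8000 */
    printf("0x%05X\n", (unsigned)phys_addr(0x1234, 0x0010)); /* 0x12350 */
    return 0;
}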
Spooling
Spooling a computer document or file is the process of reading it and storing it in a buffer,
either on a hard disk or in a special area in memory, so it can be printed or otherwise
processed at a more convenient time - for example, after a printer has finished printing
another document.
To understand spooling, think of it as the process of reeling a document or task list onto a spool, like
thread, so it can be unreeled at a more convenient time. Spooling is useful because devices access data
at different rates. The spool buffer provides a waiting station where data can rest while a slower
device, such as a printer, catches up. When the slower device is ready to handle a new job, it can read
another batch of information from the spool buffer.
Since computers operate at a much faster rate than I/O devices such as printers, it was more effective
to store the read-in lines on a magnetic disk until they could be conveniently printed, when the printer
was free and the computer wasn't so busy working on other tasks.
The most common form of spooling is print spooling. Documents that are to be printed are placed in a
print queue and then printed one at a time as the printer becomes ready for them. Most often, they're
printed on a first-come, first-served basis, but some systems allow documents to be prioritized so
more important documents can be printed first. Modern printers do have memory buffers of their own,
but frequently, they aren't large enough to hold entire documents (or multiple documents), requiring
multiple I/O operations with the printer.
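To make the idea concrete, here is a minimal first-come, first-served spool queue, written as an
illustrative sketch rather than a description of any real system's spooler: jobs are appended to a
buffer as they are submitted, and a slower "printer" drains them one at a time later. All names
and sizes in the code are invented for the example.

#include <stdio.h>
#include <string.h>

/* Illustrative sketch of a FIFO print spool queue (not a real spooler). */
#define MAX_JOBS 16
#define NAME_LEN 32

static char spool_buf[MAX_JOBS][NAME_LEN];
static int head = 0, tail = 0;

static int spool(const char *doc)            /* submit a job to the spool buffer */
{
    if (tail - head >= MAX_JOBS)
        return -1;                           /* spool buffer is full */
    strncpy(spool_buf[tail % MAX_JOBS], doc, NAME_LEN - 1);
    spool_buf[tail % MAX_JOBS][NAME_LEN - 1] = '\0';
    tail++;
    return 0;
}

static void run_printer(void)                /* the slower device catches up later */
{
    while (head < tail)
        printf("printing: %s\n", spool_buf[head++ % MAX_JOBS]);
}

int main(void)
{
    spool("report.txt");                     /* the user keeps working after this */
    spool("invoice.txt");
    run_printer();                           /* jobs come out in FIFO order */
    return 0;
}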
The Benefits
The spooling of documents for printing and batch job requests still goes on in
mainframe computers where many users share a pool of resources. With the
proliferation of low-cost printers, however, many users have printers of their own
and need not share them with others. Even in this case, however, print spooling
remains useful, because it allows users to continue working while printing in the
background. Spooling even makes it possible to set up multiple print jobs at once
without having to wait for each job to complete before starting the next.
In complex work environments where many different types of computers with
different operating systems are networked together, it's often possible to set up
shared print spooling to common printers. This can become fairly complicated,
though, since the data will need to be translated to or from several different formats
and often requires third-party software, hardware or consulting services to get
everything working smoothly.
Mail spoolers collect e-mail (or other data, such as Usenet newsgroup postings) for
delivery at a later time so the sender doesn't need to be connected to the Internet just
to compose an e-mail message.
Graphics applications may need to spool data to the hard disk if a computer's RAM
can't hold an entire image at once. Similarly, video compression and decompression
programs that require a lot of memory may spool data to disk.
Interrupts and Interrupt Handling
Linux uses a lot of different pieces of hardware to perform many different tasks. The video
device drives the monitor, the IDE device drives the disks, and so on. You could drive these
devices synchronously; that is, you could send a request for some operation (say, writing a
block of memory out to disk) and then wait for the operation to complete. That method,
although it would work, is very inefficient, and the operating system would spend a lot of time
"busy doing nothing" as it waited for each operation to complete. A better, more efficient way
is to make the request, do other, more useful work in the meantime, and later be interrupted by
the device when it has finished the request. With this scheme, there may be many outstanding
requests to the devices in the system, all happening at the same time.
There has to be some hardware support for the devices to interrupt whatever the CPU is
doing. Most, if not all, general purpose processors such as the Alpha AXP use a similar
method. Some of the physical pins of the CPU are wired such that changing the voltage
(for example changing it from +5v to -5v) causes the CPU to stop what it is doing and to
start executing special code to handle the interruption; the interrupt handling code. One of
these pins might be connected to an interval timer and receive an interrupt every 1000th
of a second, others may be connected to the other devices in the system, such as the SCSI
controller.
Systems often use an interrupt controller to group the device interrupts together before
passing on the signal to a single interrupt pin on the CPU. This saves interrupt pins on the
CPU and also gives flexibility when designing systems. The interrupt controller has mask
and status registers that control the interrupts. Setting the bits in the mask register enables
and disables interrupts, and the status register returns the currently active interrupts in the
system.
Some of the interrupts in the system may be hard-wired; for example, the real-time clock's interval
timer may be permanently connected to pin 3 on the interrupt controller. However, what some of the
pins are connected to may be determined by what controller card is plugged into a particular ISA or
PCI slot. For example, pin 4 on the interrupt controller may be connected to PCI slot number 0
which might one day have an ethernet card in it but the next have a SCSI controller in it. The
bottom line is that each system has its own interrupt routing mechanisms and the operating system
must be flexible enough to cope.
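As a rough sketch of the mask and status registers described above, the fragment below models
a hypothetical memory-mapped interrupt controller. The register addresses and the convention
that a set mask bit disables a line are assumptions made for this example, not a description of
any real controller.

#include <stdint.h>

/* Hypothetical memory-mapped interrupt controller registers; the
 * addresses and bit conventions are assumptions for illustration. */
#define IRQ_MASK_REG   ((volatile uint8_t *)0xFFFF0000u)
#define IRQ_STATUS_REG ((volatile uint8_t *)0xFFFF0004u)

static void irq_disable_line(int line)   /* mask (disable) one interrupt line */
{
    *IRQ_MASK_REG |= (uint8_t)(1u << line);
}

static void irq_enable_line(int line)    /* unmask (enable) one interrupt line */
{
    *IRQ_MASK_REG &= (uint8_t)~(1u << line);
}

static int irq_pending(int line)         /* is this line currently active? */
{
    return (*IRQ_STATUS_REG >> line) & 1;
}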
Most modern general purpose microprocessors handle the interrupts the same way. When a
hardware interrupt occurs the CPU stops executing the instructions that it was executing and jumps
to a location in memory that either contains the interrupt handling code or an instruction branching
to the interrupt handling code. This code usually operates in a special mode for the CPU, interrupt
mode, and, normally, no other interrupts can happen in this mode. There are exceptions though;
some CPUs rank the interrupts in priority and higher level interrupts may happen. This means that
the first level interrupt handling code must be very carefully written and it often has its own stack,
which it uses to store the CPU's execution state (all of the CPU's normal registers and context)
before it goes off and handles the interrupt. Some CPUs have a special set of registers that only exist
in interrupt mode, and the interrupt code can use these registers to do most of the context saving it
needs to do.
When the interrupt has been handled, the CPU's state is restored and the interrupt is dismissed. The
CPU will then continue doing whatever it was doing before being interrupted. It is important that
the interrupt processing code is as efficient as possible and that the operating system does not block
interrupts too often or for too long.
One of the principal tasks of Linux's interrupt handling subsystem is to route the interrupts
to the right pieces of interrupt handling code. This code must understand the interrupt
topology of the system. If, for example, the floppy controller interrupts on pin 6 of the
interrupt controller then it must recognize the interrupt as from the floppy and route it to
the floppy device driver's interrupt handling code.
Linux uses a set of pointers to data structures containing the addresses of the routines that
handle the system's interrupts. These routines belong to the device drivers for the devices
in the system and it is the responsibility of each device driver to request the interrupt that it
wants when the driver is initialized. Figure 7.2 shows that irq_action is a vector of
pointers to the irqaction data structure. Each irqaction data structure contains information
about the handler for this interrupt, including the address of the interrupt handling routine.
As the number of interrupts and how they are handled varies between architectures and,
sometimes, between systems, the Linux interrupt handling code is architecture specific.
This means that the size of the irq_action vector varies depending on the number of
interrupt sources that there are.
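The sketch below captures the idea of the irq_action vector in a simplified form: an array
indexed by interrupt number, where each entry records the handler a device driver registered
for that line, plus a dispatch routine that routes an incoming interrupt to it. The structure layout
and function names are illustrative stand-ins, not the Linux kernel's actual declarations.

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for the irq_action idea described above; the type
 * and function names are illustrative, not the kernel's real ones. */
#define NR_IRQS 16

struct irqaction {
    void (*handler)(int irq, void *dev);  /* the driver's interrupt routine */
    void *dev;                            /* driver-private data */
    const char *name;                     /* e.g. "floppy", "eth0" */
};

static struct irqaction *irq_action[NR_IRQS];  /* one slot per interrupt line */

/* Called by a device driver at initialisation time to claim an interrupt. */
static int register_irq(int irq, struct irqaction *action)
{
    if (irq < 0 || irq >= NR_IRQS || irq_action[irq] != NULL)
        return -1;                        /* invalid line, or already claimed */
    irq_action[irq] = action;
    return 0;
}

/* Called from the low-level interrupt entry code once the interrupt
 * number is known: route the interrupt to the owning driver's handler. */
static void do_irq(int irq)
{
    struct irqaction *a = irq_action[irq];
    if (a != NULL && a->handler != NULL)
        a->handler(irq, a->dev);
    else
        printf("spurious interrupt %d\n", irq);
}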