COMPUTER ARCHITECTURE
Lecture 1
Engr. Hafiz Ali Hamza Gondal
BOOK WE FOLLOW

COMPUTER ORGANIZATION AND ARCHITECTURE: DESIGNING FOR PERFORMANCE, EIGHTH EDITION, by William Stallings
MARKS DIVISION
Quiz: 10
Assignments: 10
PPT: 10
Mid: 25
Final: 45
CHAPTER 1
(OUTLINE)
1. Computer Organization vs Architecture
2. Functions of Computer
3. Structure of Computer
COMPUTER ORGANIZATION VS ARCHITECTURE

Computer Organization:
It deals with the physical aspects of a computer, such as circuit design, memory and its types, and microprocessor design.
Computer Architecture:
It deals with the design aspects of a computer that an assembly programmer needs to know, such as the instruction set (i.e., which instructions are supported), the instruction format (i.e., how an instruction is specified), and the data types supported. For example, whether a machine has a multiply instruction is an architectural question; whether that instruction is implemented by a dedicated multiply unit or by repeated use of the add unit is an organizational one.
STRUCTURE AND FUNCTION
A computer is a complex system; contemporary computers contain
millions of elementary electronic components.
At each level, the designer is concerned with structure and function:
• Structure: The way in which the components are interrelated
• Function: The operation of each individual component as part of the
structure

FUNCTION
In general terms, there are only four:
• Data processing
• Data storage
• Data movement
• Control
PROCESSING AND STORAGE
The computer, of course, must be able to process data. The data
may take a wide variety of forms, and the range of processing
requirements is broad.
• It is also essential that a computer store data. Even if the computer is processing data on the fly (i.e., data come in and get processed, and the results go out immediately), the computer must temporarily store at least those pieces of data.

CONTROL
When data are received from or delivered to a device that is directly connected to the computer, the process is known as input–output (I/O), and the device is referred to as a peripheral.
• When data are moved over longer distances, to or from a remote device, the process is known as data communications.
• Finally, there must be control of these three functions. Ultimately, this control is exercised by the individual(s) who provide the computer with instructions.

FOUR OPERATIONS OF THE CONTROLLER (1)
The computer can function as a data movement device, simply transferring data from one peripheral or communications line to another.
FOUR OPERATIONS OF THE CONTROLLER (2)
It can also function as a data storage device, with data transferred from the external environment to computer storage (read) and vice versa (write).
FOUR OPERATIONS OF THE CONTROLLER (3, 4)
The final two operations involve data processing, on data either in storage or en route between storage and the external environment.
STRUCTURE
The computer interacts in some fashion with its external
environment. In general, all of its linkages to the external
environment can be classified as peripheral devices or
communication lines.
COMPUTER TOP LEVEL STRUCTURE
There are four main structural components:
• Central processing unit (CPU): Controls the operation of the computer and performs its data processing functions; often simply referred to as the processor.
• Main memory: Stores data.
• I/O: Moves data between the computer and its external environment.
• System interconnection: Some mechanism that provides for communication among the CPU, main memory, and I/O. A common interconnection is by means of a system bus, consisting of a number of conducting wires to which all the other components attach.
STRUCTURAL COMPONENTS OF CPU
However, for our purposes, the most interesting and in some ways the most complex component is the CPU. Its major structural components are as follows (see the toy sketch after this list):
• Control unit: Controls the operation of the CPU and hence the computer.
• Arithmetic and logic unit (ALU): Performs the computer's data processing functions.
• Registers: Provide storage internal to the CPU.
• CPU interconnection: Some mechanism that provides for communication among the control unit, ALU, and registers.
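
A toy sketch of how these components cooperate in a fetch–decode–execute loop, written in C. The 2-byte instruction encoding and the opcode numbers are invented purely for illustration; they are not any real machine's ISA.

    #include <stdint.h>
    #include <stdio.h>

    /* A toy machine illustrating the structural components above.
     * The (opcode, operand) encoding is invented for illustration. */
    uint8_t memory[256];   /* main memory: holds data and instructions */
    uint8_t acc, pc;       /* registers: storage internal to the CPU   */

    void run(void) {
        for (;;) {
            /* Control unit: fetch and decode the next instruction. */
            uint8_t opcode  = memory[pc++];
            uint8_t operand = memory[pc++];
            switch (opcode) {
            case 1: acc = memory[operand];  break;  /* LOAD              */
            case 2: acc += memory[operand]; break;  /* ADD: the ALU step */
            case 3: memory[operand] = acc;  break;  /* STORE             */
            default: return;                        /* HALT              */
            }
        }
    }

    int main(void) {
        memory[100] = 7; memory[101] = 5;                /* data         */
        uint8_t prog[] = {1, 100, 2, 101, 3, 102, 0, 0}; /* LOAD, ADD,
                                                            STORE, HALT  */
        for (int i = 0; i < 8; i++) memory[i] = prog[i];
        run();
        printf("%d\n", memory[102]);                     /* prints 12    */
        return 0;
    }

Running it adds two numbers held in memory and stores the sum back, exercising each component named above: the fetch/decode loop plays the control unit, the += plays the ALU, acc and pc are the registers, and the array stands in for main memory reached over the system interconnection.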
CHAPTER 2

Computer Evolution and Performance
HISTORY OF COMPUTERS
• (1945–55) Vacuum Tubes
• (1955–65) Transistors and Batch Systems
• (1965–1980) ICs and Multiprogramming
• (1980–Present) Personal Computers
1ST GENERATION: VACUUM TUBES
• First-generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations.
• They could only solve one problem at a time, and it could take days or weeks to set up a new problem.
• Input was based on punched cards and paper tape; output was displayed on printouts.
• The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercially available computer, delivered to its first client, the U.S. Census Bureau, in 1951.
2ND GENERATION: TRANSISTORS AND BATCH SYSTEMS
• Transistors replaced vacuum tubes and ushered in the second generation of computers.
• The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s.
• The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient, and more reliable than their first-generation predecessors.
• Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
3RD GENERATION: ICS AND MULTIPROGRAMMING
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
• Instead of punched cards and printouts, users interacted with third-generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
4TH GENERATION
Fourth-generation computers had performance similar to the third generation, but at drastically lower prices.
• CP/M: an early disk-based operating system.
• 1981: the IBM PC, with a BASIC interpreter and DOS (Disk Operating System); Microsoft's version was MS-DOS.
• GUI: Apple's Lisa introduced a user-friendly graphical user interface.
• MS-DOS with a GUI: Windows 95/98/Me, then Windows NT/XP…
DESIGNING FOR PERFORMANCE
• Year by year, the cost of computer systems continues to drop dramatically, while the performance and capacity of those systems continue to rise equally dramatically.
• At a local warehouse club, you can pick up a personal computer for less than $1000.
• Today's systems have:
• Speech recognition
• Videoconferencing
• Multimedia authoring
• Voice and video annotation of files
MOORE’S LAW
• Gordon Moore, co-founder of Intel, predicted that the number of transistors on a chip would double every year.
• Since the 1970s the pace has slowed a little: the number of transistors now doubles roughly every 18 months (a worked sketch follows below).
• The cost of a chip has remained almost unchanged.
• Higher packing density means shorter electrical paths, giving higher performance.
• Smaller size gives increased flexibility.
• Reduced power and cooling requirements.
• Fewer interconnections increase reliability.
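A minimal sketch of the doubling rule as arithmetic, assuming the 18-month period above and using the Intel 4004's roughly 2,300 transistors (1971) as an illustrative starting count; actual industry doubling has been slower and more uneven than this idealized curve.

    #include <math.h>
    #include <stdio.h>

    /* Moore's law as a growth rule: N(t) = N0 * 2^(t / T), where N0 is
     * the starting transistor count, t is elapsed years, and T is the
     * doubling period (18 months = 1.5 years). N0 = 2300 (Intel 4004,
     * 1971) is only an illustrative starting point. */
    int main(void) {
        const double n0 = 2300.0;
        const double T  = 1.5;
        for (int years = 0; years <= 30; years += 10) {
            double n = n0 * pow(2.0, years / T);
            printf("after %2d years: ~%.3g transistors\n", years, n);
        }
        return 0;
    }

Thirty years of 18-month doublings multiply the count by 2^20, roughly a million-fold, which is why the rule is usually plotted on a logarithmic axis.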
IMPROVEMENTS IN CHIP ORGANIZATION AND ARCHITECTURE
• Increase the hardware speed of the processor. This increase is fundamentally due to shrinking the size of the logic gates on the processor chip, so that more gates can be packed together more tightly, and to increasing the clock rate. With gates closer together, the propagation time for signals is significantly reduced, enabling a speeding up of the processor. An increase in clock rate means that individual operations are executed more rapidly (see the sketch after this list).
• Increase the size and speed of caches that are interposed between the processor and main memory. In particular, by dedicating a portion of the processor chip itself to the cache, cache access times drop significantly.
• Make changes to the processor organization and architecture that increase the effective speed of instruction execution. Typically, this involves using parallelism in one form or another.
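To make the clock-rate point concrete, a small sketch converting clock frequency to cycle time; the frequencies are illustrative values, not figures from the lecture.

    #include <stdio.h>

    /* Cycle time is the reciprocal of the clock rate: t = 1 / f.
     * Raising the clock rate shrinks the time slot for each operation. */
    int main(void) {
        const double rates_ghz[] = {1.0, 2.0, 3.0};  /* illustrative */
        for (int i = 0; i < 3; i++) {
            /* At f GHz the period is 1/f nanoseconds (1 GHz -> 1 ns). */
            printf("%.1f GHz -> %.2f ns per clock cycle\n",
                   rates_ghz[i], 1.0 / rates_ghz[i]);
        }
        return 0;
    }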
RISC VS CISC


Instruction set architecture (ISA) is the set of processor design techniques used to implement the instruction workflow on hardware. In more practical terms, the ISA defines how your processor is going to process your program's instructions.
CISC


A complex instruction set computer (CISC, pronounced "sisk") is a computer in which single instructions can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) or are capable of multi-step operations within a single instruction, as its name, "complex instruction set," suggests.
RISC

A reduced instruction set computer (RISC, pronounced "risk") is a computer that uses only simple instructions, each performing a low-level operation within a single clock cycle; more complex work is divided into multiple such instructions, as its name, "reduced instruction set," suggests. (A contrast is sketched below.)
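
A hedged illustration of the contrast: the same one-line C statement handled in the two styles. The instruction sequences in the comments are generic pseudo-assembly invented for illustration, not drawn from any specific real ISA.

    #include <stdio.h>

    /* One C statement, two ISA styles (pseudo-assembly in comments). */
    int add_example(int b, int c) {
        int a = b + c;
        /* CISC style: one complex instruction may do all the work:
         *     ADD [a], [b], [c]   ; load both operands from memory,
         *                         ; add them, store the result
         *
         * RISC style: the same work split into simple, single-cycle steps:
         *     LOAD  r1, [b]       ; load b into register r1
         *     LOAD  r2, [c]       ; load c into register r2
         *     ADD   r3, r1, r2    ; register-to-register add
         *     STORE [a], r3       ; store the result back to memory
         */
        return a;
    }

    int main(void) {
        printf("%d\n", add_example(2, 3));  /* prints 5 */
        return 0;
    }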
THE EVOLUTION OF THE INTEL X86 ARCHITECTURE
The Intel x86 line is a family of CISC (complex instruction set computer) processors. Highlighted evolution of Intel products:
• 8080: The world's first general-purpose microprocessor. This was an 8-bit machine, with an 8-bit data path to memory. The 8080 was used in the first personal computer, the Altair.
• 8086: A far more powerful, 16-bit machine. In addition to a wider data path and larger registers, the 8086 sported an instruction cache, or queue, that prefetches a few instructions before they are executed. A variant of this processor, the 8088, was used in IBM's first personal computer, securing the success of Intel. The 8086 is the first appearance of the x86 architecture.
• 80286: This extension of the 8086 enabled addressing a 16-MByte memory instead of just 1 MByte.
THE EVOLUTION OF THE INTEL X86 ARCHITECTURE (CONT…)
• 80386: Intel's first 32-bit machine, and a major overhaul of the product. With a 32-bit architecture, the 80386 rivaled the complexity and power of minicomputers and mainframes introduced just a few years earlier. This was the first Intel processor to support multitasking, meaning it could run multiple programs at the same time.
• 80486: The 80486 introduced the use of much more sophisticated and powerful cache technology and sophisticated instruction pipelining. The 80486 also offered a built-in math coprocessor, offloading complex math operations from the main CPU.
• Pentium: With the Pentium, Intel introduced the use of superscalar techniques, which allow multiple instructions to execute in parallel.
• Pentium Pro: The Pentium Pro continued the move into superscalar organization begun with the Pentium, with aggressive use of register renaming, branch prediction, and data flow analysis.
• Pentium II: The Pentium II incorporated Intel MMX technology, which is designed specifically to process video, audio, and graphics data efficiently.
THE EVOLUTION OF THE INTEL X86 ARCHITECTURE (CONT…)
• Pentium III: The Pentium III incorporates additional floating-point instructions to support 3D graphics software.
• Pentium 4: The Pentium 4 includes additional floating-point and other enhancements for multimedia.
• Core: This is the first Intel x86 microprocessor with a dual core, referring to the implementation of two processors on a single chip.
• Core 2: The Core 2 extends the architecture to 64 bits. The Core 2 Quad provides four processors on a single chip.
EMBEDDED SYSTEMS AND THE ARM
• The ARM architecture refers to a processor architecture that has evolved from RISC design principles and is used in embedded systems.
• ARM chips are high-speed processors that are known for their small die size and low power requirements. They are widely used in PDAs and other handheld devices, including games and phones.
• ARM chips are the processors in Apple's popular iPod and iPhone devices. ARM is probably the most widely used embedded processor architecture and indeed the most widely used processor architecture of any kind in the world.
• Embedded system: a combination of computer hardware and software, and perhaps additional mechanical or other parts, designed to perform a dedicated function. In many cases, embedded systems are part of a larger system or product, as in the case of an antilock braking system in a car.
EXAMPLES OF EMBEDDED SYSTEMS
AMDAHL’S LAW
• Used to find the expected performance of a system when one part of the system is improved.
• It is often used in parallel computing to predict the theoretical maximum speedup from using multiple processors (see the sketch below).
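The formula itself, which the slide leaves implicit, is standard: if a fraction f of execution time benefits from an enhancement that speeds that part up by a factor N, the overall speedup is S = 1 / ((1 - f) + f / N). A minimal sketch with an invented worked example:

    #include <stdio.h>

    /* Amdahl's law: overall speedup when a fraction f of the work
     * is accelerated by a factor n (e.g., n processors in parallel). */
    double amdahl_speedup(double f, double n) {
        return 1.0 / ((1.0 - f) + f / n);
    }

    int main(void) {
        /* Illustrative example: 90% of the program parallelizes
         * perfectly across 10 processors. */
        printf("speedup = %.2f\n", amdahl_speedup(0.9, 10.0));
        /* prints about 5.26, not 10 */

        /* Even with unlimited processors, the serial 10% caps the
         * speedup at 1 / (1 - f) = 10. */
        printf("limit   = %.2f\n", amdahl_speedup(0.9, 1e9));
        return 0;
    }

The second line shows the law's key lesson: the unimproved (serial) fraction bounds the maximum speedup no matter how many processors are added.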