Chapter 5 Slides


Invitation to Computer Science, 6th Edition
Chapter 5: Computer Systems Organization
Objectives
In this chapter, you will learn about:
• The components of a computer system
• The Von Neumann architecture
• Non-Von Neumann architectures
Introduction
• Computer organization
– Branch of computer science that studies computers
in terms of their major functional units and how they
work
• Concept of abstraction
– Used throughout computer science
– Without it, it would be virtually impossible to study
computer design or any other large, complex system
Figure 5.1 The Concept of Abstraction
The Components of a Computer System
• Von Neumann architecture is based on the following
three characteristics
– Four major subsystems called memory, input/output,
the arithmetic/logic unit (ALU), and the control unit
– The stored program concept
– The sequential execution of instructions
Figure 5.2 Components of the Von Neumann Architecture
Memory and Cache
• Memory
– Functional unit of a computer that stores and
retrieves the instructions and the data being
executed
• Random access memory (RAM)
– Memory whose cells can be accessed directly by address,
each in the same amount of time
• Read-only memory (ROM)
– Information is prerecorded during manufacture
Memory and Cache (continued)
• With a cell size of 8 bits:
– The largest unsigned integer value that can be
stored in a single cell is 11111111 (255)
• [0 .. (2^N – 1)]
– Range of addresses available on a computer whose
addresses are N bits wide (see the sketch after this list)
• When dealing with memory:
– Distinguish between an address and the contents
of that address
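
As a quick illustration (not from the slides), the relationship between the number of address bits N and the range of addresses can be computed directly; the widths below are chosen only as examples.

# Address ranges for an N-bit memory address register (illustrative widths).
for n in (8, 16, 32):
    cells = 2 ** n                        # number of addressable cells
    print(f"{n}-bit addresses: 0 .. {cells - 1}")

# An 8-bit cell holds unsigned values 0 .. 2**8 - 1 = 255 (binary 11111111).
print(2 ** 8 - 1)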
Figure 5.3 Structure of Random Access Memory
Figure 5.4 Maximum Memory Sizes
Memory and Cache (continued)
• Basic memory operations
– Fetching and storing
• Memory access time
– Typically about 5 to 10 nsec
• Memory registers
– Used to implement the fetch and store operations
• Memory Address Register (MAR)
– Holds the address of the cell to be fetched or stored
Memory and Cache (continued)
• Memory Data Register (MDR)
– Contains the data value being fetched or stored
• Two-dimensional structure
– Memory organization
• Memory locations
– Stored in row major order
• Selection lines
– Row selection line
– Column selection line
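
To make the fetch and store operations described above concrete, here is a minimal software sketch of a memory unit with its MAR and MDR. The class and method names are illustrative assumptions, not the book's notation.

class RAM:
    """Toy model of random access memory with MAR and MDR registers."""

    def __init__(self, size):
        self.cells = [0] * size   # memory cells, addresses 0 .. size - 1
        self.mar = 0              # Memory Address Register
        self.mdr = 0              # Memory Data Register

    def fetch(self, address):
        """Fetch: MAR <- address, then MDR <- contents of cell[MAR]."""
        self.mar = address
        self.mdr = self.cells[self.mar]
        return self.mdr

    def store(self, address, value):
        """Store: MAR <- address, MDR <- value, then cell[MAR] <- MDR."""
        self.mar = address
        self.mdr = value
        self.cells[self.mar] = self.mdr

memory = RAM(256)          # 256 cells, like an 8-bit address space
memory.store(42, 255)      # put the value 255 into cell 42
print(memory.fetch(42))    # 255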
Figure 5.5 Organization of Memory and the Decoding Logic
Figure 5.6 Two-Dimensional Memory Address Organization
Memory and Cache (continued)
• Fetch/store controller
– Determines whether the contents of a memory cell are
placed into the MDR (a fetch) or the contents of the MDR
are placed into a memory cell (a store)
• Cache memory
– Principle of Locality: when the computer uses
something, it will probably use it again very soon,
and it will probably use the “neighbors” of this item
very soon
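
The sketch below is a software analogy (only an analogy; the hardware in Figure 5.7 differs) of how a cache exploits locality: a small, fast table keeps the most recently used items so that repeated accesses avoid the slower memory.

from collections import OrderedDict

class Cache:
    """Tiny least-recently-used cache placed in front of slower memory."""

    def __init__(self, memory, capacity=4):
        self.memory = memory             # backing store, e.g. a list of cells
        self.capacity = capacity
        self.lines = OrderedDict()       # address -> value, oldest first

    def read(self, address):
        if address in self.lines:        # cache hit: fast path
            self.lines.move_to_end(address)
            return self.lines[address]
        value = self.memory[address]     # cache miss: go to slow memory
        self.lines[address] = value
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)   # evict the least recently used line
        return value

ram = list(range(100))
cache = Cache(ram)
cache.read(7)   # miss: fetched from RAM and remembered
cache.read(7)   # hit: served from the cache (temporal locality)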
Figure 5.7 Overall RAM Organization
Input/Output and Mass Storage
• Input/output (I/O) units
– Devices that allow a computer system to
communicate and interact with the outside world as
well as store information
• Volatile memory
– Information disappears when power is turned off
• Nonvolatile storage
– Role of mass storage devices such as disks and
tapes
Input/Output and Mass Storage (continued)
• Input/output devices come in two basic types
– Those that represent information in human-readable
form for human consumption
– Those that store information in machine-readable
form for access by a computer system
• Disk
– Stores information in units called sectors
• A fixed number of sectors is placed in a concentric
circle on the surface of the disk, called a track
Figure 5.8 Overall Organization of a Typical Disk
Input/Output and Mass Storage (continued)
• Seek time
– Time needed to position the read/write head over the
correct track
• Latency
– Time for the beginning of the desired sector to rotate
under the read/write head
• Transfer time
– Time for the entire sector to pass under the
read/write head and have its contents read into or
written from memory
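
A worked example may help; the disk parameters below are made-up illustrative values, not figures from the text. On average, latency is half a revolution, and the transfer time for one sector is one revolution divided by the number of sectors per track.

# Hypothetical disk parameters, chosen only for illustration.
rpm = 7200                       # rotational speed
sectors_per_track = 64
avg_seek_ms = 9.0                # average seek time in milliseconds

rotation_ms = 60_000 / rpm                       # one full revolution: about 8.33 ms
avg_latency_ms = rotation_ms / 2                 # wait half a revolution on average
transfer_ms = rotation_ms / sectors_per_track    # one sector passing under the head

total_ms = avg_seek_ms + avg_latency_ms + transfer_ms
print(f"average access time: about {total_ms:.2f} ms")   # roughly 13.3 ms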
Input/Output and Mass Storage (continued)
• Sequential access storage device (SASD)
– Does not require that all units of data be identifiable
via unique addresses
• Direct access storage devices
– Much faster at accessing individual pieces of
information
• I/O controller
– Has small amount of memory (I/O buffer)
– I/O control and logic: ability to handle mechanical
functions of the I/O device
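
The following toy model (an assumption-laden sketch, not the actual controller design) shows the role of the I/O buffer: the processor deposits data and moves on, while the controller's own control/logic loop drains the buffer to the much slower device.

import queue
import threading
import time

class IOController:
    """Toy I/O controller: the CPU drops values into the buffer and continues,
    while the controller's control/logic loop feeds them to the slow device."""

    def __init__(self):
        self.buffer = queue.Queue()      # the I/O buffer
        threading.Thread(target=self._run, daemon=True).start()

    def write(self, value):
        """Called by the 'CPU'; returns immediately instead of waiting for the device."""
        self.buffer.put(value)

    def _run(self):
        while True:
            value = self.buffer.get()
            time.sleep(0.01)             # pretend the mechanical device is slow
            print("device wrote:", value)

ctrl = IOController()
for v in ("a", "b", "c"):
    ctrl.write(v)                        # processor is not held up by the device
time.sleep(0.1)                          # give the controller time to drain the buffer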
Figure 5.9 Organization of an I/O Controller
The Arithmetic/Logic Unit
• Subsystem that performs arithmetic and logic operations
such as addition, subtraction, and comparison for equality
• Components
– Registers, interconnections between components,
and the ALU circuitry
• Register
– Storage cell that holds the operands of an arithmetic
operation and its result
• Bus
– Path for electrical signals
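
In the spirit of Figures 5.12 and 5.13, the sketch below computes every result the ALU knows how to produce and then uses a selector, playing the role of the multiplexor, to pick the one that goes to the destination register. The register names and the operation list are assumptions for illustration.

def alu(a, b, select):
    """Compute all candidate results, then let `select` choose one,
    the way a multiplexor selects the proper ALU output."""
    results = {
        "ADD": a + b,
        "SUBTRACT": a - b,
        "EQUAL": 1 if a == b else 0,     # comparison for equality
    }
    return results[select]

# R1 and R2 play the role of ALU registers holding the operands.
r1, r2 = 12, 30
r3 = alu(r1, r2, "ADD")                  # destination register receives 42
print(r3)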
Figure 5.10 Three-Register ALU Organization
Figure 5.11 Multiregister ALU Organization
Figure 5.12 Using a Multiplexor Circuit to Select the Proper ALU Result
Figure 5.13 Overall ALU Organization
The Control Unit
• Stored program
– Sequence of machine language instructions stored
as binary values in memory
• Control unit
– Tasks: fetch, decode, and execute
Machine Language Instructions
• Instructions that can be decoded and executed by
the control unit of a computer
• Operation code field
– Unique unsigned integer code assigned to each
machine language operation recognized by the
hardware
• Address field(s)
– Memory addresses of values on which the operation
will work
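
To make the op code and address fields concrete, here is a small sketch that packs an instruction into a single 16-bit word. The 4-bit op code, the 12-bit address field, and the numeric codes themselves are assumptions for illustration, not the book's exact format.

OPCODES = {"LOAD": 0, "STORE": 1, "ADD": 2, "JUMP": 13, "HALT": 15}   # illustrative codes

def encode(op, address):
    """Pack a 4-bit op code and a 12-bit address into one 16-bit instruction."""
    return (OPCODES[op] << 12) | (address & 0xFFF)

def decode(word):
    """Split a 16-bit instruction back into its op code and address fields."""
    return word >> 12, word & 0xFFF

word = encode("LOAD", 100)
print(f"{word:016b}")     # 0000000001100100
print(decode(word))       # (0, 100)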
Figure 5.14 Typical Machine Language Instruction Format
Machine Language Instructions (continued)
• Instruction set
– Set of all operations that can be executed by a
processor
• Reduced instruction set computers or RISC
machines
– Include as few as 30–50 instructions
• Complex instruction set computers (CISC
machines)
– Include 300–500 very powerful instructions
Machine Language Instructions (continued)
• Classes of machine language instructions
– Data transfer
– Arithmetic
– Compare
– Branch
Machine Language Instructions (continued)
• Data transfer operations
– Move values to and from memory and registers
– Instruction examples:
LOAD X: Load register R with the contents of memory cell X
STORE X: Store the contents of register R into memory cell X
MOVE X, Y: Copy the contents of memory cell X into memory cell Y
Machine Language Instructions (continued)
• Arithmetic/logic operations
– Perform arithmetic and logic operations on data values
– Instruction examples:
ADD X, Y, Z (three-address instruction)
CON(Z) = CON(X) + CON(Y)
ADD X, Y (two-address instruction)
CON(Y) = CON(X) + CON(Y)
ADD X (one-address instruction)
R = CON(X) + R
Machine Language Instructions (continued)
• Compare operations
– Compare two values and set an indicator on the basis of
the result; the outcome is recorded as bits in the status
register (also called the condition register); a small
sketch follows the examples below
– Instruction Examples:
COMPARE X, Y
CON(X) > CON(Y) set GT = 1, EQ = 0, LT = 0
CON(X) = CON(Y) set GT = 0, EQ = 1, LT = 0
CON(X) < CON(Y) set GT = 0, EQ = 0, LT = 1
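
Expressed as code (a sketch only; the real status register is hardware), a COMPARE might update the three condition bits like this:

def compare(x, y):
    """Set the GT/EQ/LT condition bits; exactly one of them ends up 1."""
    return {"GT": int(x > y), "EQ": int(x == y), "LT": int(x < y)}

print(compare(7, 3))    # {'GT': 1, 'EQ': 0, 'LT': 0}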
Machine Language Instructions (continued)
• Branch operations
– Jump to a new memory address to continue processing
– Instruction Examples:
JUMP X (unconditionally)
JUMPGT X / JUMPEQ X / JUMPLT X
JUMPGE X / JUMPLE X
HALT
Control Unit Registers and Circuits
• Program counter (PC)
– Holds the address of the next instruction to be
executed
• Instruction register (IR)
– Holds a copy of the instruction fetched from memory
• Instruction decoder
– Determines what instruction is in the IR
Figure 5.15 Examples of Simple Machine Language Instruction Sequences
Figure 5.16 Organization of the Control Unit Registers and Circuits
Figure 5.17 The Instruction Decoder
Putting All the Pieces Together: The Von Neumann Architecture
• Program execution phases
– Fetch, decode, and execute
• Von Neumann cycle
– The repetition of the fetch/decode/execute phases (sketched in the example below)
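
Putting the program counter, instruction register, decoder, and the four instruction classes together, here is a minimal fetch/decode/execute loop for a hypothetical one-register machine in the spirit of Figure 5.19. The mnemonics, memory layout, and register set are illustrative assumptions, not the book's exact instruction set.

def run(memory):
    """Repeat the Von Neumann cycle until HALT. Each cell holds either a
    (mnemonic, address) instruction or a plain data value."""
    pc = 0                      # program counter: address of the next instruction
    r = 0                       # single arithmetic register
    gt = 0                      # condition bit set by COMPARE
    while True:
        ir = memory[pc]         # fetch: copy the instruction into the IR
        pc += 1                 #        and advance the program counter
        op, addr = ir           # decode: split into op code and address field
        if op == "LOAD":        # execute: carry out the decoded operation
            r = memory[addr]
        elif op == "STORE":
            memory[addr] = r
        elif op == "ADD":
            r = r + memory[addr]
        elif op == "COMPARE":
            gt = int(r > memory[addr])
        elif op == "JUMP":
            pc = addr
        elif op == "JUMPGT":
            if gt:
                pc = addr
        elif op == "HALT":
            return memory

# Tiny program: memory[12] = memory[10] + memory[11], then stop.
program = [("LOAD", 10), ("ADD", 11), ("STORE", 12), ("HALT", 0),
           0, 0, 0, 0, 0, 0,        # unused cells 4-9
           20, 22, 0]               # data cells 10, 11, 12
print(run(program)[12])             # 42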
Figure 5.18 The Organization of a Von Neumann Computer
Figure 5.19 Instruction Set for Our Von Neumann Machine
Non-Von Neumann Architectures
• Problems that computers are being asked to solve
– Have grown significantly in size and complexity
• Important limit on increased processor speed
– Inability to place gates any closer together on a chip
• Slowing down
– Rate of increase in performance of newer machines
• Von Neumann bottleneck
– Inability of the sequential one-instruction-at-a-time
Von Neumann model to handle today’s large-scale
problems
Figure 5.20 Graph of Processor Speeds, 1945 to the Present
Non-Von Neumann Architectures (continued)
• Parallel processing
– Building computers not with one processor, but with
tens, hundreds, or even thousands
• SIMD parallel processing
– ALU is replicated many times
– Each ALU has its own local memory where it may
keep private data
• MIMD parallel processing
– All processors are replicated
– Every processor is capable of executing its own
separate program
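
A rough software analogy of the two approaches just described (illustrative only; SIMD and MIMD are hardware organizations): in SIMD style a single instruction is applied to many data items at once, while in MIMD style independent workers each execute their own program on their own data.

from concurrent.futures import ThreadPoolExecutor

data = list(range(8))

# SIMD flavour: one instruction stream applied to every data element.
simd_result = [x * 2 for x in data]          # same operation on all elements

# MIMD flavour: independent processors, each running its own program.
def worker_a(x): return x + 100              # "program" for processor A
def worker_b(x): return x * x                # "program" for processor B

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(worker_a, 3), pool.submit(worker_b, 5)]
    mimd_result = [f.result() for f in futures]

print(simd_result)    # [0, 2, 4, 6, 8, 10, 12, 14]
print(mimd_result)    # [103, 25]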
Figure 5.21 A SIMD Parallel Processing System
Non-Von Neumann Architectures (continued)
• Scalability
– It is possible to match the number of processors to
the size of the problem
• Massively parallel MIMD machines
– Have achieved solutions to large problems
thousands of times faster than is possible using a
single processor
• Grid computing
– Enables researchers to easily and transparently
access computer facilities without regard for their
location
Summary of Level 2
• Chapter 4
– Looked at the basic building blocks of computers:
binary codes, transistors, gates, and circuits
• Chapter 5
– Examined the standard model for computer design,
called the Von Neumann architecture
• System software
– Intermediary between the user and the hardware
components of the Von Neumann machine
Summary
• Computer organization
– Examines different subsystems of a computer:
memory, input/output, arithmetic/logic unit, and
control unit
• Machine language
– Gives codes for each primitive instruction the
computer can perform and its arguments
• Von Neumann machine
– Sequential execution of a stored program
• Parallel computers
– Improve speed by doing multiple tasks at one time