CS152: Computer Architecture and Engineering
CENG 450
Computer Systems & Architecture
Lecture 2
Amirali Baniasadi
[email protected]
Outline
Power & Cost
Performance
Performance measurement
Amdahl's Law
Benchmarks
History
1. “Big Iron” Computers:
Used vacuum tubes, electric relays, and bulk magnetic storage devices. No microprocessors. No stored-program memory.
Examples: ENIAC (1945), Harvard/IBM Mark I (1944)
History
Von Neumann:
Proposed the stored-program concept; EDSAC (1949) was the first practical stored-program computer.
Programs and data share the same memory.
Importance: We are still using the same basic design.
Computer Components
[Figure: the five classic components: processor (control + datapath, the CPU), memory, input devices (keyboard, mouse, disk, ...), and output devices (printer, screen, disk, ...)]
Computer Components
Datapath of a von Neumann machine
[Figure: general-purpose registers supply operands Op1 and Op2 over a bus to the ALU input registers; the ALU computes OP1 + OP2 into the ALU output register]
Computer Components
Processor (CPU):
Active part of the motherboard
Performs calculations & activates devices
Gets instructions & data from memory
Components are connected via buses
Bus:
Collection of parallel wires
Transmits data, instructions, or control signals
Motherboard:
Holds the physical chips for I/O connections, memory, & the CPU
Computer Components
CPU consists of
Datapath (ALU + registers):
Performs arithmetic & logical operations
Control (CU):
Controls the datapath, memory, & I/O devices
Sends signals that determine the operations of the datapath, memory, input & output
Technology Change
Technology changes rapidly
HW
Vacuum tubes: Electron emitting devices
Transistors: On-off switches controlled by electricity
Integrated Circuits (ICs/chips): Combine thousands of transistors
Very Large-Scale Integration (VLSI): Combines millions of transistors
What next?
SW
Machine language: Zeros and ones
Assembly language: Mnemonics
High-Level Languages: English-like
Artificial Intelligence languages: Functions & logic predicates
Object-Oriented Programming: Objects & operations on objects
Moore’s Prediction
Moore’s Law:
A new generation of memory chips is introduced every 3 years
Each new generation has 4 times as much memory as its predecessor
Computer technology doubles every 1.5 years:
Example: DRAM capacity
[Figure: DRAM capacity (Kbit, log scale from 10 to 100,000) vs. year of introduction, 1976 to 1996: 16K, 64K, 256K, 1M, 4M, 16M, 64M, roughly 4x every 3 years]
Technology => dramatic change
Processor
logic capacity: about 30% per year
clock rate:
about 20% per year
Memory
DRAM capacity: about 60% per year (4x every 3 years)
Memory speed: about 10% per year
Cost per bit: improves about 25% per year
Disk
capacity: about 60% per year
Question: Does everything look OK? (Hint: compare processor speed growth with memory speed growth; a sketch follows.)
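A rough C sketch of why the answer is "no": compounding the annual rates above over ten years (an arbitrary horizon chosen for illustration) shows processor speed pulling away from memory speed.

#include <math.h>
#include <stdio.h>

int main(void) {
    double years = 10;
    double cpu  = pow(1.20, years); /* clock rate: ~20% per year    */
    double mem  = pow(1.10, years); /* memory speed: ~10% per year  */
    double dram = pow(1.60, years); /* DRAM capacity: ~60% per year */
    printf("after %.0f years: CPU speed x%.1f, memory speed x%.1f\n",
           years, cpu, mem);               /* x6.2 vs. x2.6 */
    printf("processor-memory speed gap: x%.1f\n", cpu / mem);
    printf("DRAM capacity: x%.0f (4x per 3 years = %.0f%% per year)\n",
           dram, (pow(4.0, 1.0 / 3.0) - 1) * 100); /* x110, ~59% */
    return 0;
}

The widening processor-memory speed gap is exactly what caches (covered later in the course) exist to hide.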
Software Evolution
Machine language
Assembly language
High-level languages
Subroutine libraries
There is a large gap between what is convenient for computers & what is convenient for humans
Translation/interpretation is needed between the two
Language Evolution
High-level language program (in C):

swap (int v[], int k)
{
  int temp;
  temp = v[k];
  v[k] = v[k+1];
  v[k+1] = temp;
}

Assembly language program (for MIPS):

swap:
  muli $2, $5, 4
  add  $2, $4, $2
  lw   $15, 0($2)
  lw   $18, 4($2)
  sw   $18, 0($2)
  sw   $15, 4($2)
  jr   $31

Binary machine language program (for MIPS): the same seven instructions, each encoded as a 32-bit word of 0s and 1s.
HW - SW Components
Hardware
Memory components
Registers
Register file
Memory
Disks
Functional components
Adders, multipliers, dividers, ...
Comparators
Control signals
Software
Data
Simple
• Characters
• Integers
• Floating-point
• Pointers
Structured
• Arrays
• Structures (records)
Instructions
Data transfer
Arithmetic
Shift
Control flow
Comparison
...
Things You Will Learn
Assembly language introduction/Review
How to analyze program performance
How to design processor components
How to enhance processor performance (caches, pipelines, parallel processors, multiprocessors)
The Processor Chip
Processor Chip Major Blocks
• Example: Intel Pentium
• Area: 91 mm²
• ~3.3 million transistors (~1 million for cache memory)
[Die photo: major blocks are control, data cache, instruction cache, bus interface, branch unit, integer datapath, and floating-point datapath]
Memory
Categories
Volatile memory
Loses information when power is switched off
Non-volatile memory
Keeps information when power is switched off
Types
Cache:
Volatile
Fast but expensive
Smaller capacity
Placed closer to the processor
Main memory
Volatile
Less expensive
More capacity
Secondary memory
Non-volatile
Low cost
Very slow
Effectively unlimited capacity
Input-Output (I/O)
I/O devices are the hardest part of the system to organize
Wide range of speeds
Graphics vs. keyboard
Wide range of requirements
Speed
Standard
Cost . . .
Least amount of research done in this area
Our Primary Focus
The processor (datapath and control)
Implemented using millions of transistors
Impossible to understand by looking at each transistor
We need abstraction
Hides lower-level details to offer a simple model at a higher level
Advantages
• Going deeper into the details reveals more information when needed
• Omits unneeded details
• Helps us cope with complexity
Examples of abstraction:
• Language hierarchy
• Instruction set architecture (ISA)
Instruction Set Architecture (ISA)
Instruction set:
Complete set of instructions used by a machine
ISA:
Abstract interface between the HW and lowest-level SW. It encompasses
information needed to write machine-language programs including
Instructions
Memory size
Registers used
. . .
Instruction Set Architecture (ISA)
ISA is considered part of the SW
Several implementations for the same ISA can exist
Modern ISAs:
80x86/Pentium/K6, PowerPC, DEC Alpha, MIPS, SPARC, HP
We are going to study MIPS
Advantages:
Different implementations of the same architecture
Easier to change than HW
Standardizes instructions, machine language bit patterns, etc.
Disadvantage:
Sometimes prevents using new innovations
Instruction Set Architecture (ISA)
Instruction Execution Cycle
Fetch instruction from memory
Decode instruction to determine its size & action
Fetch operand data
Execute instruction & compute results or status
Store result in memory
Determine next instruction's address
(A toy C sketch of this loop follows.)
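To make the cycle concrete, here is a minimal sketch in C of a toy stored-program machine. The three-opcode encoding (OP_LOAD, OP_ADD, OP_HALT) is invented for illustration; it is not MIPS or any real ISA.

#include <stdio.h>
#include <stdint.h>

/* Toy machine: each instruction names an opcode, two registers, and
   an immediate. Real ISAs pack these fields into the bits of a word. */
enum { OP_HALT, OP_LOAD, OP_ADD };
typedef struct { uint8_t op, rd, rs; int32_t imm; } Instr;

int main(void) {
    /* Program memory: r0 = 5; r1 = 7; r0 = r0 + r1; halt */
    Instr mem[] = {
        { OP_LOAD, 0, 0, 5 },
        { OP_LOAD, 1, 0, 7 },
        { OP_ADD,  0, 1, 0 },
        { OP_HALT, 0, 0, 0 },
    };
    int32_t reg[4] = { 0 };
    int pc = 0;
    for (;;) {
        Instr ir = mem[pc];   /* 1. fetch instruction from memory      */
        pc = pc + 1;          /* 6. determine next instruction's addr  */
        switch (ir.op) {      /* 2. decode: determine size & action    */
        case OP_LOAD: reg[ir.rd] = ir.imm; break;  /* 3-5. operands,   */
        case OP_ADD:  reg[ir.rd] = reg[ir.rd] + reg[ir.rs]; break;
        case OP_HALT: printf("r0 = %d\n", reg[0]); return 0; /* r0 = 12 */
        }
    }
}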
What Should we Learn?
A specific ISA (MIPS)
Performance issues - vocabulary and motivation
Instruction-Level Parallelism
How to Use Pipelining to improve performance
Exploiting Instruction-Level Parallelism w/ Software Approach
Memory: caches and virtual memory
I/O
What is Expected From You?
• Read textbook & readings!
• Be up-to-date!
• Come back with your input & questions for discussion!
• Appreciate and participate in teamwork!
Power?
Everything is done by tiny switches
Their charge represents logic values
Changing charge takes energy
Power = energy over time
Devices are non-ideal: power => heat
Excess heat => circuit breakdown
Need to keep power within acceptable limits
POWER in the real world
[Figure: power density (W/cm², log scale from 1 to 1000) for i386, i486, Pentium, Pentium Pro, Pentium II, Pentium III, and Pentium IV, climbing toward that of a nuclear reactor]
Integrated Circuits Costs
Die cost = Wafer cost ÷ (Dies per wafer X Die yield)

Dies per wafer = (Wafer area ÷ Die area) − (π X Wafer diameter ÷ (2 X Die area)^1/2)

Die yield = Wafer yield X (1 + (Defects per unit area X Die area) ÷ α)^−α, with α = 4.0
Die yield = percentage of good dies on a wafer
Integrated Circuits Costs - example
Find the die yield for a defect density of 0.6 defects per unit area, for dies with areas 1.0 and 0.49 (wafer yield assumed 1).
For the larger die: Die yield = (1 + (0.6 X 1.0) ÷ 4)^−4 = 0.57
For the smaller die: Die yield = (1 + (0.6 X 0.49) ÷ 4)^−4 = 0.75
Why? A defect is more likely to land on a larger die, so smaller dies yield better. (A small C sketch of these formulas follows.)
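A C sketch of the cost formulas above, checked against this example. The wafer yield is assumed to be 1.0; any wafer cost or diameter passed to die_cost would be an illustration value, not data from the slides.

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Die yield = Wafer yield * (1 + D*A/alpha)^-alpha, alpha = 4.0 */
double die_yield(double wafer_yield, double defects_per_area,
                 double die_area, double alpha) {
    return wafer_yield * pow(1.0 + defects_per_area * die_area / alpha, -alpha);
}

/* Dies per wafer = wafer area / die area, minus partial dies at the edge */
double dies_per_wafer(double wafer_diameter, double die_area) {
    double wafer_area = M_PI * wafer_diameter * wafer_diameter / 4.0;
    return wafer_area / die_area - M_PI * wafer_diameter / sqrt(2.0 * die_area);
}

/* Die cost = wafer cost / (dies per wafer * die yield) */
double die_cost(double wafer_cost, double wafer_diameter, double die_area,
                double defects_per_area, double alpha) {
    return wafer_cost / (dies_per_wafer(wafer_diameter, die_area)
                         * die_yield(1.0, defects_per_area, die_area, alpha));
}

int main(void) {
    printf("yield(A=1.00) = %.2f\n", die_yield(1.0, 0.6, 1.00, 4.0)); /* 0.57 */
    printf("yield(A=0.49) = %.2f\n", die_yield(1.0, 0.6, 0.49, 4.0)); /* 0.75 */
    return 0;
}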
Other Costs
IC cost = (Die cost + Testing cost + Packaging cost) ÷ Final test yield
Packaging cost: depends on pins, heat dissipation, ...

Chip          Die    Packaging  Testing  Total
386DX         $4     $1         $4       $9
486DX2        $12    $11        $12      $35
PowerPC 601   $53    $3         $21      $77
HP PA 7100    $73    $35        $16      $124
DEC Alpha     $149   $30        $23      $202
SuperSPARC    $272   $20        $34      $326
Pentium       $417   $19        $37      $473
System Cost: Workstation
System        Subsystem                % of total cost
Cabinet       Sheet metal, plastic     1%
              Power supply, fans       2%
              Cables, nuts, bolts      1%
              (Subtotal)               (4%)
Motherboard   Processor                6%
              DRAM (64MB)              36%
              Video system             14%
              I/O system               3%
              Printed circuit board    1%
              (Subtotal)               (60%)
I/O Devices   Keyboard, mouse          1%
              Monitor                  22%
              Hard disk (1 GB)         7%
              Tape drive (DAT)         6%
              (Subtotal)               (36%)
COST v. PRICE
Q: What % of company income goes to Research and Development (R&D)?

[Figure: price build-up from component cost to list price; parenthesized figures are each item's share of list price, workstation vs. PC (WS–PC)]
Component cost (25–31%): input: chips, displays, ...
+33% => Direct costs (8–10%): making it: labor, scrap, returns, ...
+25–100% => Gross margin (33–14%): overhead: R&D, rent, marketing, profits, ...
   = avg. selling price
+50–80% => Average discount (33–45%): commission: channel profit, volume discounts
   = list price
Performance
Purchasing perspective
given a collection of machines, which has the
best performance ?
least cost ?
best performance / cost ?
Design perspective
faced with design options, which has the
best performance improvement ?
least cost ?
best performance / cost ?
Both require
basis for comparison
metric for evaluation
Our goal is to understand cost & performance implications of architectural
choices
Two notions of “performance”
Plane              DC to Paris  Speed     Passengers  Throughput (passenger-mph)
Boeing 747         6.5 hours    610 mph   470         286,700
BAD/Sud Concorde   3 hours      1350 mph  132         178,200

Which has higher performance?
° Time to do the task (Execution Time)
– execution time, response time, latency
° Tasks per day, hour, week, sec, ns, ...
– throughput, bandwidth
Response time and throughput often are in opposition
Example
• Time of Concorde vs. Boeing 747?
• Concorde is 1350 mph / 610 mph = 6.5 hours / 3 hours = 2.2 times faster
• Throughput of Concorde vs. Boeing 747?
• Concorde is 178,200 pmph / 286,700 pmph = 0.62 “times faster”
• Boeing is 286,700 pmph / 178,200 pmph = 1.6 “times faster”
• Boeing is 1.6 times (“60%”) faster in terms of throughput
• Concorde is 2.2 times (“120%”) faster in terms of flying time
We will focus primarily on execution time for a single job
Definitions
Performance is in units of things-per-second
bigger is better
If we are primarily concerned with response time:

performance(x) = 1 ÷ execution_time(x)

"X is n times faster than Y" means

n = Performance(X) ÷ Performance(Y)
Performance measurement
How about a collection of programs?
Example:
Three machines: A, B and C. Two programs: P1 and P2. Execution times (seconds):

        A     B    C
P1      1     10   20
P2      1000  100  20

Weighted arithmetic mean = SUM( Weight(i) X Time(i) ). Three weightings of (P1, P2) give:

                      A      B   C
W(1) = (.50, .50)     500.5  55  20
W(2) = (.909, .091)   91.9   18  20
W(3) = (.999, .001)   2      10  20

(A short C check of these means follows.)
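A quick C check of the weighted means above, with weights and times taken from the table (the slide rounds 18.2 to 18 and 10.1 to 10):

#include <stdio.h>

int main(void) {
    const char *machine[] = { "A", "B", "C" };
    double p1[] = { 1, 10, 20 };     /* P1 times on A, B, C */
    double p2[] = { 1000, 100, 20 }; /* P2 times on A, B, C */
    double w[3][2] = { { .50, .50 }, { .909, .091 }, { .999, .001 } };

    for (int i = 0; i < 3; i++) {    /* weightings W(1)..W(3) */
        printf("W(%d):", i + 1);
        for (int m = 0; m < 3; m++)  /* machines A..C */
            printf("  %s=%.1f", machine[m],
                   w[i][0] * p1[m] + w[i][1] * p2[m]);
        printf("\n");
    }
    return 0;
}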
Performance measurement
Other option: geometric mean (self-study: pages 37-39 of the textbook)
Metrics of performance
Each level of the system has its own natural metric:

Application                       Answers per month; operations per second
Programming Language / Compiler
ISA                               (millions of) instructions per second: MIPS
                                  (millions of) (F.P.) operations per second: MFLOP/s
Datapath / Control                Megabytes per second
Function Units                    Cycles per second (clock rate)
Transistors, Wires, Pins
Relating Processor Metrics
CPU execution time = CPU clock cycles X clock cycle time
or CPU execution time = CPU clock cycles ÷ clock rate
CPU clock cycles = Instruction count X avg. clock cycles per instruction (CPI)
or CPI = CPU clock cycles ÷ Instruction count
CPI tells us something about the Instruction Set Architecture, the implementation of that architecture, and the program measured
(A minimal C sketch of these relations follows.)
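A minimal C sketch tying these relations together; the instruction count, CPI, and clock rate in main are made-up illustration values.

#include <stdio.h>

/* CPU time = instruction count * CPI / clock rate */
double cpu_time(double instructions, double cpi, double clock_rate_hz) {
    return instructions * cpi / clock_rate_hz;
}

int main(void) {
    double instructions = 2e9;   /* 2 billion instructions (assumed)  */
    double cpi          = 2.2;   /* avg. clock cycles per instruction */
    double clock_hz     = 500e6; /* 500 MHz clock (assumed)           */
    printf("CPU time = %.2f s\n", cpu_time(instructions, cpi, clock_hz));
    /* 2e9 * 2.2 / 500e6 = 8.80 seconds */
    return 0;
}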
Aspects of CPU Performance
CPU time = Seconds / Program
         = (Instructions / Program) X (Cycles / Instruction) X (Seconds / Cycle)
           [instruction count]        [CPI]                    [clock rate]

Influencing factors: program, compiler, instruction set architecture, organization, technology
Aspects of CPU Performance
CPU time = Seconds / Program
         = (Instructions / Program) X (Cycles / Instruction) X (Seconds / Cycle)

               instr count   CPI   clock rate
Program        X
Compiler       X             (X)
Instr. Set     X             X
Organization                 X     X
Technology                         X
Organizational Trade-offs
[Figure: Application, Programming Language, Compiler, and ISA determine the Instruction Mix; the Instruction Mix plus Datapath & Control determine CPI; Function Units, Transistors, Wires, and Pins determine the Cycle Time]
CPI
“Average cycles per instruction”
CPI = (CPU time X clock rate) ÷ Instruction count
    = Clock cycles ÷ Instruction count

CPU time = Clock cycle time X SUM(i = 1 to n) [ CPI(i) X I(i) ]

CPI = SUM(i = 1 to n) [ CPI(i) X F(i) ]   where F(i) = I(i) ÷ Instruction count
                                          ("instruction frequency")

Invest resources where time is spent!
Example (RISC processor)
Base Machine (Reg / Reg)

Op       Freq   Cycles   CPI(i)   % Time
ALU      50%    1        .5       23%
Load     20%    5        1.0      45%
Store    10%    3        .3       14%
Branch   20%    2        .4       18%
         Total CPI:      2.2

Typical mix.

How much faster would the machine be if a better data cache reduced the average load time to 2 cycles?
How does this compare with using branch prediction to shave a cycle off the branch time?
What if two ALU instructions could be executed at once?
(A C sketch working these out follows.)
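One way to work these scenarios in C (it also answers the zero-cycle what-ifs on the next slide). Since instruction count and clock rate are unchanged, speedup is simply old CPI over new CPI. A sketch, not the textbook's official solution:

#include <stdio.h>

/* Base machine, from the table: ALU, Load, Store, Branch */
static const char  *name[]   = { "ALU", "Load", "Store", "Branch" };
static const double freq[]   = { 0.50, 0.20, 0.10, 0.20 };
static const double cycles[] = { 1, 5, 3, 2 };

/* CPI = SUM CPI(i) * F(i); `which` selects one op class whose cycle
   count is overridden with `newc` (pass which = -1 for the base CPI). */
double cpi(int which, double newc) {
    double sum = 0;
    for (int i = 0; i < 4; i++)
        sum += freq[i] * (i == which ? newc : cycles[i]);
    return sum;
}

int main(void) {
    double base = cpi(-1, 0);  /* 2.2 */
    printf("loads in 2 cycles:     speedup = %.2f\n", base / cpi(1, 2));   /* 1.38 */
    printf("branches 1 cycle less: speedup = %.2f\n", base / cpi(3, 1));   /* 1.10 */
    printf("two ALU ops at once:   speedup = %.2f\n", base / cpi(0, 0.5)); /* 1.13 */
    for (int i = 0; i < 4; i++)  /* the zero-cycle cases (next slide) */
        printf("%s in 0 cycles: speedup = %.2f\n", name[i], base / cpi(i, 0));
    return 0;
}

The data-cache improvement wins: loads account for the largest share of time (45%), so shrinking their contribution to CPI helps most, which is exactly the "make the common case fast" point.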
Example (RISC processor)
Base Machine (Reg / Reg)

Op       Freq   Cycles   CPI(i)   % Time
ALU      50%    1        .5       23%
Load     20%    5        1.0      45%
Store    10%    3        .3       14%
Branch   20%    2        .4       18%
         Total CPI:      2.2

Typical mix.

How much faster would the machine be if:
A) Loads took “0” cycles?
B) Stores took “0” cycles?
C) ALU ops took “0” cycles?
D) Branches took “0” cycles?

MAKE THE COMMON CASE FAST
Amdahl's Law
Speedup due to enhancement E:

Speedup(E) = ExTime(without E) ÷ ExTime(with E)
           = Performance(with E) ÷ Performance(without E)

Suppose that enhancement E accelerates a fraction F of the task by a factor S, and the remainder of the task is unaffected. Then:

ExTime(with E) = ((1 − F) + F/S) X ExTime(without E)

Speedup(with E) = ExTime(without E) ÷ [ ((1 − F) + F/S) X ExTime(without E) ]
                = 1 ÷ ((1 − F) + F/S)
Amdahl's Law - example
A new CPU makes Web serving 10 times faster. The old CPU spent 40% of the time on computation and 60% waiting for I/O. What is the overall enhancement?

Fraction enhanced = 0.4
Speedup enhanced = 10
Speedup overall = 1 ÷ (0.6 + 0.4/10) = 1.56
(A two-line C version follows.)
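A two-line C version of the law, checked against this example:

#include <stdio.h>

/* Amdahl's Law: overall speedup when a fraction f of the task
   is accelerated by a factor s. */
double amdahl(double f, double s) {
    return 1.0 / ((1.0 - f) + f / s);
}

int main(void) {
    printf("overall speedup = %.2f\n", amdahl(0.4, 10.0)); /* 1.56 */
    return 0;
}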
Example from Quiz 1-2004
a) A program consists of 80% initialization code and 20% code forming the main iteration loop, which is run 1000 times. The total runtime of the program is 100 seconds. Calculate the fraction of the total run time needed for the initialization and for the iteration. Which part would you optimize?
b) The program should have a total run time of 60 seconds. How can this be achieved? (15 points)
(A worked sketch, under a stated assumption, follows.)
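One reading of the question, under the assumption that run time is proportional to code size times execution count (so time splits 80 : 20 X 1000). A hedged sketch in C, not the official solution:

#include <stdio.h>

int main(void) {
    /* Assumption: initialization (80 units, run once) vs.
       loop body (20 units, run 1000 times). */
    double init_units = 80.0;
    double loop_units = 20.0 * 1000.0;
    double total_s    = 100.0;

    double init_s = total_s * init_units / (init_units + loop_units); /* ~0.4 s  */
    double loop_s = total_s - init_s;                                 /* ~99.6 s */
    printf("a) init: %.1f s, loop: %.1f s -> optimize the loop\n",
           init_s, loop_s);

    /* b) By Amdahl: init_s + loop_s / S = 60  =>  S = loop_s / (60 - init_s) */
    printf("b) needed loop speedup: %.2fx\n", loop_s / (60.0 - init_s));
    return 0;
}

Under this assumption the loop consumes about 99.6% of the run time, so it is the part to optimize; reaching 60 seconds requires speeding the loop up by roughly 1.67x.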
Marketing Metrics
MIPS = Instruction count ÷ (Time X 10^6)
     = Clock rate ÷ (CPI X 10^6)
• machines with different instruction sets?
• programs with different instruction mixes?
• dynamic frequency of instructions
• uncorrelated with performance

GFLOPS = FP operations ÷ (Time X 10^9)
• machine dependent
• often not where time is spent
PlayStation: 6.4 GFLOPS
Why Do Benchmarks?
How we evaluate differences
Different systems
Changes to a single system
Provide a target
Benchmarks should represent large class of important
programs
Improving benchmark performance should help many
programs
For better or worse, benchmarks shape a field
Good ones accelerate progress
good target for development
Bad benchmarks hurt progress
(do they help real programs, or just sell machines/papers?)
Inventions that help real programs may not help the benchmark
Basis of Evaluation
Actual Target Workload
Pros: representative
Cons: very specific, non-portable, difficult to run or measure, hard to identify cause

Full Application Benchmarks
Pros: portable, widely used, improvements useful in reality
Cons: less representative

Small “Kernel” Benchmarks
Pros: easy to run, early in design cycle
Cons: easy to “fool”

Microbenchmarks
Pros: identify peak capability and potential bottlenecks
Cons: “peak” may be a long way from application performance
Successful Benchmark: SPEC
1987: RISC industry mired in “bench marketing”:
(“That is an 8 MIPS machine, but they claim 10 MIPS!”)
EE Times + 5 companies banded together to form the System Performance Evaluation Cooperative (SPEC) in 1988:
Sun, MIPS, HP, Apollo, DEC
Created a standard list of programs, inputs, and reporting rules: some real programs, includes OS calls, some I/O
SPEC first round
First round 1989; 10 programs, single number to summarize performance
One program: 99% of time in single line of code
New front-end compiler could improve dramatically
[Figure: SPEC89 performance per benchmark, y-axis 0 to 800: gcc, espresso, spice, doduc, nasa7, li, eqntott, matrix300, fpppp, tomcatv]
SPEC95
Eighteen application benchmarks (with inputs) reflecting a technical computing workload
Eight integer
go, m88ksim, gcc, compress, li, ijpeg, perl, vortex
Ten floating-point intensive
tomcatv, swim, su2cor, hydro2d, mgrid, applu, turb3d, apsi, fpppp, wave5
Must run with standard compiler flags
eliminates special undocumented incantations that may not even generate working code for real programs
Summary
CPU time = Seconds / Program
         = (Instructions / Program) X (Cycles / Instruction) X (Seconds / Cycle)

Time is the measure of computer performance!
Good products are created when you have:
Good benchmarks
Good ways to summarize performance
Without good benchmarks and a good summary, the choice is between improving the product for real programs and improving it to get more sales => sales almost always wins
Remember Amdahl's Law: speedup is limited by the unimproved part of the program
Readings & More…
Reminder:
READ:
TEXTBOOK:
Chapter 1 pages 1 to 47
Moore paper (posted on course web site).