Lecture 3 - Electrical and Computer Engineering


CENG 450
Computer Systems & Architecture
Lecture 3
Amirali Baniasadi
[email protected]
Performance
 Purchasing perspective
 given a collection of machines, which has the
best performance?
least cost?
best performance / cost?
 Design perspective
 faced with design options, which has the
best performance improvement?
least cost?
best performance / cost?
 Both require
 basis for comparison
 metric for evaluation
 Our goal is to understand cost & performance implications of architectural
choices
Two notions of “performance”
Plane               DC to Paris   Speed      Passengers   Throughput (passenger-mph)
Boeing 747          6.5 hours     610 mph    470          286,700
BAC/Sud Concorde    3 hours       1350 mph   132          178,200
Which has higher performance?
° Time to do the task (Execution Time)
– execution time, response time, latency
° Tasks per day, hour, week, sec, ns, ...
– throughput, bandwidth
Response time and throughput often are in opposition
Example
• Time of Concorde vs. Boeing 747?
• Concorde is 6.5 hours / 3 hours = 1350 mph / 610 mph = 2.2 times faster
• Throughput of Concorde vs. Boeing 747?
• Concorde is 178,200 pmph / 286,700 pmph = 0.62 “times faster”
• Boeing is 286,700 pmph / 178,200 pmph = 1.6 “times faster”
• Boeing is 1.6 times (“60%”) faster in terms of throughput
• Concorde is 2.2 times (“120%”) faster in terms of flying time
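A minimal Python sketch (not part of the slides) that reproduces both ratios from the flight data above:

# Flight data from the table above.
planes = {
    "Boeing 747": {"hours": 6.5, "mph": 610, "passengers": 470},
    "Concorde":   {"hours": 3.0, "mph": 1350, "passengers": 132},
}

# Throughput = passengers * speed, in passenger-miles per hour (pmph).
for p in planes.values():
    p["pmph"] = p["passengers"] * p["mph"]

speed_ratio = planes["Concorde"]["mph"] / planes["Boeing 747"]["mph"]
throughput_ratio = planes["Boeing 747"]["pmph"] / planes["Concorde"]["pmph"]
print(f"Concorde is {speed_ratio:.1f}x faster in flying time")        # ~2.2
print(f"Boeing 747 is {throughput_ratio:.1f}x faster in throughput")  # ~1.6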
We will focus primarily on execution time for a single job
Definitions
 Performance is in units of things-per-second
 bigger is better
 If we are primarily concerned with response time:
performance(X) = 1 / execution_time(X)
 "X is n times faster than Y" means
n = Performance(X) / Performance(Y)
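A short sketch (illustrative, not from the slides) of these two definitions in Python:

def performance(execution_time):
    # Performance is the reciprocal of execution time: bigger is better.
    return 1.0 / execution_time

def times_faster(time_x, time_y):
    # n such that "X is n times faster than Y".
    return performance(time_x) / performance(time_y)   # equivalently time_y / time_x

# Using the flight times from the earlier example:
print(times_faster(time_x=3.0, time_y=6.5))   # ~2.17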
Performance measurement
 How about a collection of programs?
 Example:
Three machines: A, B and C. Two programs: P1 and P2.

Execution times (seconds):

        A      B     C
P1      1     10    20
P2   1000    100    20

Arithmetic mean: Σ Weight(i) × Time(i)

Weighted means:

         A      B     C
W(1)  500.5    55    20
W(2)   91.9    18    20
W(3)      2    10    20

Weightings (P1, P2):
W(1) = (0.50, 0.50)
W(2) = (0.909, 0.091)
W(3) = (0.999, 0.001)
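A Python sketch of the weighted arithmetic mean above; the times and weightings are the ones in the tables, and the printed means match them up to rounding:

# Execution times (seconds) of programs P1 and P2 on machines A, B, C.
times = {"A": [1, 1000], "B": [10, 100], "C": [20, 20]}
# Weightings over (P1, P2).
weightings = {"W(1)": [0.50, 0.50], "W(2)": [0.909, 0.091], "W(3)": [0.999, 0.001]}

for wname, w in weightings.items():
    for machine, t in times.items():
        mean = sum(wi * ti for wi, ti in zip(w, t))   # Σ Weight(i) × Time(i)
        print(f"{wname} {machine}: {mean:.1f}")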
Performance measurement
 Other option: geometric mean (self-study: pages 37-39 of the textbook)
Metrics of performance
[Diagram: a metric at each level of the system stack]
Application – answers per month, operations per second
Programming Language / Compiler
ISA – millions of instructions per second (MIPS), millions of floating-point operations per second (MFLOP/s)
Datapath / Control – megabytes per second
Function Units – cycles per second (clock rate)
Transistors, Wires, Pins
Relating Processor Metrics
 CPU execution time = CPU clock cycles × clock cycle time
 or CPU execution time = CPU clock cycles ÷ clock rate
 CPU clock cycles = instruction count × average clock cycles per instruction (CPI)
 or CPI = CPU clock cycles ÷ instruction count
 CPI tells us something about the instruction set architecture, the implementation of that architecture, and the program measured
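A small Python sketch of these relations (the numbers are illustrative, not from the slides):

def cpu_time(instruction_count, cpi, clock_rate_hz):
    # CPU clock cycles = instruction count × CPI; time = cycles ÷ clock rate.
    clock_cycles = instruction_count * cpi
    return clock_cycles / clock_rate_hz   # seconds

# e.g. 2 million instructions at CPI = 2.2 on a 500 MHz clock:
print(cpu_time(instruction_count=2_000_000, cpi=2.2, clock_rate_hz=500_000_000))  # 0.0088 s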
Aspects of CPU Performance

CPU time = Seconds / Program
         = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle)
         = instruction count × CPI × clock cycle time   (clock cycle time = 1 / clock rate)

Which factors affect which term?

                   instr. count   CPI    clock rate
Program                 X
Compiler                X         (X)
Instr. Set Arch.        X          X
Organization                       X         X
Technology                                   X
Organizational Trade-offs
[Diagram: the system stack (Application, Programming Language, Compiler, ISA, Datapath, Control, Function Units, Transistors/Wires/Pins) set against the terms being traded off: instruction mix, CPI, cycle time]
CPI
“Average cycles per instruction”

CPI = (CPU time × clock rate) / instruction count
    = clock cycles / instruction count

CPU time = clock cycle time × Σ (i = 1..n) CPI(i) × I(i)

CPI = Σ (i = 1..n) CPI(i) × F(i),   where F(i) = I(i) / instruction count ("instruction frequency")

Invest resources where time is spent!
Example (RISC processor)
Base Machine (Reg / Reg): Typical Mix

Op       Freq   Cycles   CPI(i)   % Time
ALU      50%      1       0.5      23%
Load     20%      5       1.0      45%
Store    10%      3       0.3      14%
Branch   20%      2       0.4      18%
Total CPI: 2.2

How much faster would the machine be if a better data cache reduced the average load time to 2 cycles?
How does this compare with using branch prediction to shave a cycle off the branch time?
What if two ALU instructions could be executed at once?
(A sketch for the first two questions follows below.)
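A minimal sketch (not in the slides) that recomputes the CPI and the resulting speedup for the first two questions, assuming the instruction count and clock rate stay the same:

# Instruction mix from the table: class -> (frequency, cycles).
mix = {"ALU": (0.50, 1), "Load": (0.20, 5), "Store": (0.10, 3), "Branch": (0.20, 2)}

def cpi(mix):
    return sum(freq * cycles for freq, cycles in mix.values())

base = cpi(mix)                              # 2.2
better_cache = dict(mix, Load=(0.20, 2))     # loads now take 2 cycles
branch_pred  = dict(mix, Branch=(0.20, 1))   # one cycle shaved off branches
print(base / cpi(better_cache))              # ~1.38x faster
print(base / cpi(branch_pred))               # ~1.10x faster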
Example (RISC processor)
Base Machine (Reg / Reg): Typical Mix

Op       Freq   Cycles   CPI(i)   % Time
ALU      50%      1       0.5      23%
Load     20%      5       1.0      45%
Store    10%      3       0.3      14%
Branch   20%      2       0.4      18%
Total CPI: 2.2

How much faster would the machine be if:
A) Loads took “0” cycles?
B) Stores took “0” cycles?
C) ALU ops took “0” cycles?
D) Branches took “0” cycles?
(A sketch follows below.)

MAKE THE COMMON CASE FAST
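A sketch (not in the slides) for questions A-D: taking one class to “0” cycles removes its CPI(i) contribution, so the speedup is the base CPI divided by what remains. Loads, which account for the largest share of the time, give the largest speedup, which is the point of making the common case fast:

mix = {"ALU": (0.50, 1), "Load": (0.20, 5), "Store": (0.10, 3), "Branch": (0.20, 2)}
base_cpi = sum(f * c for f, c in mix.values())   # 2.2

for op, (f, c) in mix.items():
    speedup = base_cpi / (base_cpi - f * c)      # drop this class's CPI contribution
    print(f"{op} at 0 cycles: {speedup:.2f}x")
# ALU 1.29x, Load 1.83x, Store 1.16x, Branch 1.22x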
Amdahl's Law
Speedup due to enhancement E:

Speedup(E) = ExTime without E / ExTime with E = Performance with E / Performance without E

Suppose that enhancement E accelerates a fraction F of the task by a factor S, and the remainder of the task is unaffected. Then:

ExTime(with E) = ((1 - F) + F/S) × ExTime(without E)

Speedup(with E) = ExTime(without E) / (((1 - F) + F/S) × ExTime(without E))
                = 1 / ((1 - F) + F/S)
Amdahl's Law: example
A new CPU makes the computation in a Web-serving application 10 times faster. The old CPU spent 40% of its time on computation and 60% waiting for I/O. What is the overall speedup?

Fraction enhanced = 0.4
Speedup enhanced = 10
Speedup overall = 1 / (0.6 + 0.4/10) = 1.56
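A one-function Python sketch of Amdahl's Law, checked against this example:

def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    # Speedup(with E) = 1 / ((1 - F) + F/S)
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

print(amdahl_speedup(fraction_enhanced=0.4, speedup_enhanced=10))   # 1.5625, i.e. ~1.56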
Example from Quiz 1-2004
a) A program consists of 80% initialization code and 20% code in the main iteration loop, which is run 1000 times. The total runtime of the program is 100 seconds. Calculate the fraction of the total run time needed for the initialization and for the iteration. Which part would you optimize?
b) The program should have a total run time of 60 seconds. How can this be achieved? (15 points)
Marketing Metrics
MIPS = instruction count / (execution time × 10^6)
     = clock rate / (CPI × 10^6)
• machines with different instruction sets?
• programs with different instruction mixes?
• dynamic frequency of instructions
• uncorrelated with performance

GFLOPS = FP operations / (execution time × 10^9)
• machine dependent
• often not where time is spent
PlayStation: 6.4 GFLOPS
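A Python sketch (with illustrative numbers, not from the slides) of both marketing metrics as defined above:

def mips(instruction_count, time_seconds):
    return instruction_count / (time_seconds * 1e6)

def mips_from_cpi(clock_rate_hz, cpi):
    return clock_rate_hz / (cpi * 1e6)

def gflops(fp_operations, time_seconds):
    return fp_operations / (time_seconds * 1e9)

# e.g. a 500 MHz machine with CPI = 2.2 rates about 227 MIPS,
# which says nothing about how many instructions the program needs.
print(mips_from_cpi(clock_rate_hz=500_000_000, cpi=2.2))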
Why Do Benchmarks?
 How we evaluate differences
 Different systems
 Changes to a single system
 Provide a target
 Benchmarks should represent large class of important
programs
 Improving benchmark performance should help many
programs
 For better or worse, benchmarks shape a field
 Good ones accelerate progress
 good target for development
 Bad benchmarks hurt progress
 do they help real programs, or just sell machines/papers?
 Inventions that help real programs don't help the benchmarks
Basis of Evaluation
Actual Target Workload
  Pros: representative
  Cons: very specific; non-portable; difficult to run or measure; hard to identify cause

Full Application Benchmarks
  Pros: portable; widely used; improvements useful in reality
  Cons: less representative

Small “Kernel” Benchmarks
  Pros: easy to run, early in the design cycle
  Cons: easy to “fool”

Microbenchmarks
  Pros: identify peak capability and potential bottlenecks
  Cons: “peak” may be a long way from application performance
Successful Benchmark: SPEC
 In 1987 the RISC industry was mired in “bench marketing”:
(“That is an 8 MIPS machine, but they claim 10 MIPS!”)
 EE Times + 5 companies band together to form the Systems Performance Evaluation Committee (SPEC) in 1988:
Sun, MIPS, HP, Apollo, DEC
 Create a standard list of programs, inputs, and reporting: some real programs, includes OS calls, some I/O
SPEC first round
 First round 1989; 10 programs, single number to summarize performance
 One program spent 99% of its time in a single line of code
 A new front-end compiler could improve its result dramatically

[Bar chart: SPEC Perf for each benchmark (gcc, espresso, spice, doduc, nasa7, li, eqntott, matrix300, fpppp, tomcatv)]
SPEC95
 Eighteen application benchmarks (with inputs) reflecting a technical computing workload
 Eight integer:
go, m88ksim, gcc, compress, li, ijpeg, perl, vortex
 Ten floating-point intensive:
tomcatv, swim, su2cor, hydro2d, mgrid, applu, turb3d, apsi, fpppp, wave5
 Must run with standard compiler flags
eliminate special undocumented incantations that may not even generate working code for real programs
Summary
CPU time = Seconds / Program
         = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle)
 Time is the measure of computer performance!
 Good products are created when you have:
 Good benchmarks
 Good ways to summarize performance
 Without good benchmarks and a good summary, the choice is between improving the product for real programs and improving it to get more sales => sales almost always wins
 Remember Amdahl’s Law: speedup is limited by the unimproved part of the program
Readings & More…
Reminder:
READ:
Textbook: Chapter 1, pages 1 to 47
Moore paper (posted on the course web site).