Chapter 1: Fundamentals of Quantitative Design and Analysis

Computer Architecture: A Quantitative Approach, Fifth Edition
Copyright © 2012, Elsevier Inc. All rights reserved.

Introduction: Computer Technology

- Performance improvements come from:
  - Improvements in semiconductor technology
    - Feature size, clock speed
  - Improvements in computer architectures
    - Enabled by High-Level Language (HLL) compilers and UNIX
    - Led to RISC architectures
- Together these have enabled:
  - Lightweight computers
  - Productivity-based managed/interpreted programming languages
Introduction: Single Processor Performance

(Figure: growth in single-processor performance over time, annotated with the RISC era and the later move to multi-processor.)
Moore's Law

- Exponential growth: a doubling of transistors every couple of years
Do you want to be a millionaire?

- You double your investment every day
- Starting investment: one cent
- How long does it take to become a millionaire?
  a) 20 days
  b) 27 days
  c) 37 days
  d) 365 days
  e) Lifetime++
Do you want to be a millionaire?

- You double your investment every day, starting from one cent
- How long does it take?
  a) 20 days: one million cents
  b) 27 days: millionaire
  c) 37 days: billionaire
- Transistors double every 18 months
- This growth rate is hard to imagine
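The arithmetic behind the quiz can be checked with a short Python sketch (assuming day 0 holds the one-cent stake and each later day doubles it):

```python
def days_to_reach(target_cents, start_cents=1):
    """Count days of daily doubling until the stake reaches target_cents."""
    value, days = start_cents, 0
    while value < target_cents:
        value *= 2
        days += 1
    return days

print(days_to_reach(10**6))   # 20 days -> one million cents
print(days_to_reach(10**8))   # 27 days -> millionaire ($1,000,000)
print(days_to_reach(10**11))  # 37 days -> billionaire
```

The same loop is the point of the slide: at a fixed doubling period, growth is exponential in the number of periods, which is why the transistor curve is hard to imagine.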

Introduction: Current Trends in Architecture

- Cannot continue to leverage Instruction-Level Parallelism (ILP)
  - Single-processor performance improvement ended in 2003
- New models for performance:
  - Data-level parallelism (DLP)
  - Thread-level parallelism (TLP)
  - Request-level parallelism (RLP)
- These require explicit restructuring of the application

Classes of Computers

- Personal Mobile Device (PMD)
  - e.g. smart phones, tablet computers
  - Emphasis on energy efficiency and real-time performance
- Desktop computing
  - Emphasis on price-performance
- Servers
  - Emphasis on availability, scalability, throughput
- Clusters / Warehouse-Scale Computers
  - Used for "Software as a Service (SaaS)"
  - Emphasis on availability and price-performance
  - Sub-class: supercomputers; emphasis on floating-point performance and fast internal networks
- Embedded computers
  - Emphasis: price

Classes of Computers: Parallelism

- Classes of parallelism in applications:
  - Data-Level Parallelism (DLP)
  - Task-Level Parallelism (TLP)
- Classes of architectural parallelism:
  - Instruction-Level Parallelism (ILP)
  - Vector architectures / Graphics Processor Units (GPUs)
  - Thread-Level Parallelism
  - Request-Level Parallelism

Classes of Computers: Flynn's Taxonomy

- Single instruction stream, single data stream (SISD)
- Single instruction stream, multiple data streams (SIMD)
  - Vector architectures
  - Multimedia extensions
  - Graphics processor units
- Multiple instruction streams, single data stream (MISD)
  - No commercial implementation
- Multiple instruction streams, multiple data streams (MIMD)
  - Tightly-coupled MIMD
  - Loosely-coupled MIMD

Defining Computer Architecture

- "Old" view of computer architecture:
  - Instruction Set Architecture (ISA) design
  - i.e. decisions regarding registers, memory addressing, addressing modes, instruction operands, available operations, control-flow instructions, and instruction encoding
- "Real" computer architecture:
  - Specific requirements of the target machine
  - Design to maximize performance within constraints: cost, power, and availability
  - Includes ISA, microarchitecture, hardware
Correlations To Other Fields

- Applications: HTML/XML, audio, video, data compression, and many others
- Compilers: Gcc, Intel C++, Visual C++, C#, Java, etc.
- Operating Systems: Linux, Windows, Unix, etc.
- Computer Architecture: instruction set, memory hierarchy, parallelism, power-efficient design, etc.
- Circuits and physical devices
- Advances or demands in one field drive the other fields.

Trends in Technology

- Integrated circuit technology
  - Transistor density: 35%/year
  - Die size: 10-20%/year
  - Integration overall: 40-55%/year
- DRAM capacity: 25-40%/year (slowing)
- Flash capacity: 50-60%/year
  - 15-20X cheaper/bit than DRAM
- Magnetic disk technology: 40%/year
  - 15-25X cheaper/bit than Flash
  - 300-500X cheaper/bit than DRAM

Trends in Technology: Bandwidth and Latency

- Bandwidth or throughput
  - Total work done in a given time
  - 10,000-25,000X improvement for processors
  - 300-1200X improvement for memory and disks
- Latency or response time
  - Time between the start and completion of an event
  - 30-80X improvement for processors
  - 6-8X improvement for memory and disks

Trends in Technology: Bandwidth and Latency

(Figure: log-log plot of bandwidth and latency milestones.)

Trends in Technology: Transistors and Wires

- Feature size
  - Minimum size of a transistor or wire in the x or y dimension
  - 10 microns in 1971 to 0.032 microns in 2011
- Transistor performance scales linearly
  - Wire delay does not improve with feature size!
- Integration density scales quadratically

Trends in Power and Energy: Power and Energy

- Problem: get power in, get power out
- Thermal Design Power (TDP)
  - Characterizes sustained power consumption
  - Used as the target for the power supply and cooling system
  - Lower than peak power, higher than average power consumption
- The clock rate can be reduced dynamically to limit power consumption
- Energy per task is often a better measurement

Trends in Power and Energy: Dynamic Energy and Power

- Dynamic energy: consumed when a transistor switches from 0 -> 1 or 1 -> 0
  - Energy = 1/2 x Capacitive load x Voltage^2
- Dynamic power
  - Power = 1/2 x Capacitive load x Voltage^2 x Frequency switched
- Reducing the clock rate reduces power, not energy
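The two formulas are easy to experiment with. A minimal sketch with normalized, assumed values, showing why dropping voltage and frequency together pays off cubically in power:

```python
def dynamic_energy(c_load, voltage):
    # Energy per 0->1 or 1->0 transition: 1/2 x C x V^2
    return 0.5 * c_load * voltage ** 2

def dynamic_power(c_load, voltage, freq):
    # Power: 1/2 x C x V^2 x f
    return 0.5 * c_load * voltage ** 2 * freq

# Normalized baseline vs. an assumed 15% reduction in both voltage and frequency
baseline = dynamic_power(1.0, 1.0, 1.0)
scaled = dynamic_power(1.0, 0.85, 0.85)
print(round(scaled / baseline, 3))  # 0.614: power drops by almost 40%
```

Note that energy per task follows only the V^2 term: lowering the clock rate alone stretches the task out without saving energy, exactly as the last bullet says.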




Trends in Power and Energy: Power

- The Intel 80386 consumed ~2 W
- A 3.3 GHz Intel Core i7 consumes 130 W
- Heat must be dissipated from a 1.5 x 1.5 cm chip
- This is the limit of what can be cooled by air

Trends in Power and Energy: Reducing Power

- Techniques for reducing power:
  - Do nothing well
  - Dynamic Voltage-Frequency Scaling (DVFS)
  - Low-power states for DRAM, disks
  - Overclocking, turning off cores

Trends in Power and Energy: Static Power

- Static power consumption
  - Power_static = Current_static x Voltage
  - Scales with the number of transistors
  - To reduce it: power gating, i.e. turning off the power supply to idle circuits to reduce leakage

Trends in Cost

- Cost driven down by the learning curve
  - Yield
- DRAM: price closely tracks cost
- Microprocessors: price depends on volume
  - 10% less for each doubling of volume
Manufacturing ICs

- Yield: proportion of working dies per wafer

Intel Core i7 Wafer

- 300 mm wafer, 280 chips, 32 nm technology
- Each chip is 20.7 x 10.5 mm

Trends in Cost: Integrated Circuit Cost

- Integrated circuit cost:
  Cost of die = Cost of wafer / (Dies per wafer x Die yield)
- Bose-Einstein formula:
  Die yield = Wafer yield x 1 / (1 + Defects per unit area x Die area)^N
  - Defects per unit area = 0.016-0.057 defects per square cm (2010)
  - N = process-complexity factor = 11.5-15.5 (40 nm, 2010)
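A sketch of the Bose-Einstein yield model. The specific parameter values below are assumptions chosen inside the slide's 2010 ranges; the 2.17 cm^2 area is roughly the Core i7 die (20.7 mm x 10.5 mm) from the wafer slide:

```python
def die_yield(wafer_yield, defects_per_cm2, die_area_cm2, n):
    # Bose-Einstein model: Wafer yield x 1 / (1 + defects x area)^N
    return wafer_yield / (1.0 + defects_per_cm2 * die_area_cm2) ** n

# Assumed illustrative values within the slide's 2010 ranges
y = die_yield(wafer_yield=1.0, defects_per_cm2=0.03, die_area_cm2=2.17, n=13.5)
print(round(y, 2))  # roughly 0.43: under half of such large dies work
```

The exponent N makes yield fall off steeply with die area, which is why large dies are so much more expensive than small ones.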
Dependability

- Module reliability
  - Mean time to failure (MTTF)
  - Mean time to repair (MTTR)
  - Mean time between failures (MTBF) = MTTF + MTTR
  - Availability = MTTF / MTBF

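The two definitions combine into a one-line availability calculation; the MTTF and MTTR numbers below are assumed for illustration:

```python
def availability(mttf, mttr):
    # Availability = MTTF / MTBF, where MTBF = MTTF + MTTR
    return mttf / (mttf + mttr)

# Assumed example: MTTF of 1,000,000 hours, MTTR of 24 hours
print(availability(1_000_000, 24))  # ~0.999976, i.e. about "four nines"
```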

Measuring Performance

- Typical performance metrics:
  - Response time
  - Throughput
- Speedup of X relative to Y:
  Speedup = Execution time_Y / Execution time_X
- Execution time
  - Wall clock time: includes all system overheads
  - CPU time: only computation time
- Benchmarks
  - Kernels (e.g. matrix multiply)
  - Toy programs (e.g. sorting)
  - Synthetic benchmarks (e.g. Dhrystone)
  - Benchmark suites (e.g. SPEC06fp, TPC-C)

Benchmark Suites

- Desktop
  - SPEC CPU2006: 12 integer, 17 floating-point programs
  - SPECviewperf, SPECapc: graphics benchmarks
- Server
  - SPEC CPU2006: running multiple copies, SPECrate
  - SPECSFS: for NFS performance
  - SPECWeb: Web server benchmark
  - TPC-x: measures transaction-processing, query, and decision-support database applications
- Embedded processor
  - New area
  - EEMBC: EDN Embedded Microprocessor Benchmark Consortium

SPEC2006 Programs and the Evolution of the SPEC Benchmarks

Comparing Performance

- Arithmetic Mean: (1/n) x Σ_{i=1..n} Time_i
- Weighted Arithmetic Mean: Σ_{i=1..n} Weight_i x Time_i
- Geometric Mean: (Π_{i=1..n} Execution Time Ratio_i)^(1/n)
  - The execution time ratio is normalized to a base machine
  - Used to figure out SPECrate

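The three means can be sketched directly from the formulas above; the sample times and weights are assumed values for illustration:

```python
from math import prod

def arithmetic_mean(times):
    return sum(times) / len(times)

def weighted_arithmetic_mean(weights, times):
    # weights are fractions that sum to 1
    return sum(w * t for w, t in zip(weights, times))

def geometric_mean(ratios):
    # ratios: execution times normalized to a base machine
    return prod(ratios) ** (1.0 / len(ratios))

times = [10.0, 40.0]
print(arithmetic_mean(times))                       # 25.0
print(weighted_arithmetic_mean([0.8, 0.2], times))  # 16.0
print(geometric_mean([2.0, 8.0]))                   # 4.0
```

The geometric mean of ratios has the property that the ranking of machines does not depend on which machine is chosen as the base, which is why SPEC uses it.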
Comparing Performance

- Arithmetic Mean: (1/n) x Σ_{i=1..n} Time_i
- For programs P1 and P2 (comparing total execution time on machines A, B, and C):
  - B is 9.1 times faster than A
  - C is 25 times faster than A
  - C is 2.75 times faster than B

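The per-program times behind these ratios did not survive extraction. The textbook's classic figure uses A = (1 s, 1000 s), B = (10 s, 100 s), C = (20 s, 20 s) for (P1, P2); assuming those values reproduces all three stated ratios:

```python
# Total execution time (P1 + P2) per machine, in seconds
totals = {"A": 1 + 1000, "B": 10 + 100, "C": 20 + 20}

print(round(totals["A"] / totals["B"], 1))   # 9.1  (B vs. A)
print(round(totals["A"] / totals["C"], 1))   # 25.0 (C vs. A)
print(round(totals["B"] / totals["C"], 2))   # 2.75 (C vs. B)
```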
Comparing Performance

- Weighted Arithmetic Mean: Σ_{i=1..n} Weight_i x Time_i
- Different conclusions can be obtained from different weights

Amdahl's Law

- The performance improvement gained by using some faster mode of execution is limited by the fraction of the time the faster mode can be used.

  Speedup = 1 / ((1 - f) + f/n)

  where:
  - f is the fraction of the execution time that can be enhanced
  - n is the enhancement factor
- Example: f = 0.4, n = 10 => Speedup = 1 / (0.6 + 0.04) = 1.56
- Total speedup is limited if only a portion can be enhanced.

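The formula is one line of Python, which makes it easy to check the slide's example:

```python
def amdahl_speedup(f, n):
    # Speedup = 1 / ((1 - f) + f/n)
    return 1.0 / ((1.0 - f) + f / n)

print(round(amdahl_speedup(0.4, 10), 2))  # 1.56, matching the slide
```

Note the limit: even as n grows without bound, the speedup can never exceed 1 / (1 - f).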
Amdahl's Law: Example

- Amdahl's law is useful for comparing the overall performance of two design alternatives.
- Example:
  - Floating-point (FP) operations consume 50% of the execution time of a graphics application. FP square root (FPSQR) is used 20% of the time.
  - Improve FPSQR execution by 10 times:
    Speedup = 1 / ((1 - 0.2) + 0.2/10) = 1.22
  - Improve all FP operations by 1.6 times:
    Speedup = 1 / ((1 - 0.5) + 0.5/1.6) = 1.23
  - Improving all FP operations is slightly better because of their higher frequency.

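Both alternatives from the example drop straight into the same function:

```python
def amdahl_speedup(f, n):
    return 1.0 / ((1.0 - f) + f / n)

# FPSQR is 20% of execution time, made 10x faster
print(round(amdahl_speedup(0.2, 10), 2))   # 1.22
# All FP is 50% of execution time, made 1.6x faster
print(round(amdahl_speedup(0.5, 1.6), 2))  # 1.23
```

A large speedup on a small fraction loses to a modest speedup on a large fraction, which is the "focus on the common case" principle in miniature.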
CPU Performance Equation

  CPU Time = Instruction Count x Cycles Per Instruction x Cycle Time
           = Instruction Count x CPI x (1 / Clock Rate)

- Clock cycle time: hardware technology and organization
- CPI: organization and Instruction Set Architecture (ISA)
- Instruction count: ISA and compiler technology
- We will focus more on organization issues
- Many performance-enhancing techniques improve one factor with small or predictable impacts on the other two.

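A minimal sketch of the equation; the instruction count, CPI, and clock rate below are assumed values for illustration:

```python
def cpu_time(inst_count, cpi, clock_rate_hz):
    # CPU time = IC x CPI x cycle time = IC x CPI / clock rate
    return inst_count * cpi / clock_rate_hz

# Assumed values: 10^9 instructions at CPI 2.0 on a 1 GHz clock
print(cpu_time(10**9, 2.0, 10**9))  # 2.0 seconds
```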
CPU Performance Equation: Example

- Parameters:
  - FP operations (including FPSQR) = 25% of instructions
  - CPI for FP operations = 4; CPI for others = 1.33
  - Frequency of FPSQR = 2%; CPI of FPSQR = 20
- Compare two designs: decrease the CPI of FPSQR to 2, or decrease the CPI of all FP to 2.5

  CPI_orig = Σ_{i=1..n} CPI_i x (IC_i / Total IC) = (4 x 25%) + (1.33 x 75%) = 2.0
  CPI_newFPSQR = CPI_orig - 2% x (CPI_oldFPSQR - CPI_newFPSQR) = 2.0 - 2% x (20 - 2) = 1.64
  CPI_newFP = (75% x 1.33) + (25% x 2.5) = 1.625

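The comparison can be reproduced in a few lines. One assumption: the slide's "1.33" is treated as exactly 4/3, which makes the totals come out to the slide's 2.0 and 1.625:

```python
def weighted_cpi(mix):
    # mix: (fraction of instruction count, CPI) pairs
    return sum(frac * cpi for frac, cpi in mix)

CPI_OTHER = 4 / 3  # the slide's "1.33", kept exact so totals match

cpi_orig = weighted_cpi([(0.25, 4.0), (0.75, CPI_OTHER)])
cpi_new_fpsqr = cpi_orig - 0.02 * (20 - 2)  # FPSQR: 2% of instructions, CPI 20 -> 2
cpi_new_fp = weighted_cpi([(0.75, CPI_OTHER), (0.25, 2.5)])

print(round(cpi_orig, 2))       # 2.0
print(round(cpi_new_fpsqr, 2))  # 1.64
print(round(cpi_new_fp, 3))     # 1.625
```

Since instruction count and clock rate are unchanged, the lower CPI wins: making all FP operations faster (1.625) beats the FPSQR-only change (1.64).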
Principle of Locality

- The most important program property.
- Programs tend to reuse data and instructions they have used recently.
- A rule of thumb: a program spends 90% of its execution time in only 10% of the code.
- Predict a program's behavior in the near future based on its accesses in the recent past.
- Temporal locality: recently accessed items are likely to be accessed in the near future.
- Spatial locality: items whose addresses are near one another tend to be referenced close together in time.

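Spatial locality is easiest to see in array traversal order. A Python sketch of the two access patterns (Python hides the actual memory layout, so this only illustrates the patterns; on a contiguous array in a language like C, the row-major walk runs much faster because it touches consecutive addresses):

```python
N = 200
matrix = [[1] * N for _ in range(N)]

def row_major_sum(m):
    # Walks each row in order: consecutive addresses, good spatial locality
    return sum(x for row in m for x in row)

def column_major_sum(m):
    # Jumps to a different row on every access: poor spatial locality
    return sum(m[i][j] for j in range(N) for i in range(N))

print(row_major_sum(matrix) == column_major_sum(matrix))  # True: same answer,
# the two loops differ only in how well their access order matches memory layout
```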

Principles of Computer Design

- Take advantage of parallelism
  - e.g. multiple processors, disks, memory banks, pipelining, multiple functional units
- Principle of Locality
  - Reuse of data and instructions
- Focus on the common case
  - Amdahl's Law

Principles of Computer Design

- The Processor Performance Equation

Principles of Computer Design

- Different instruction types have different CPIs
Figure 1.13 Photograph of an Intel Core i7 microprocessor die, which is evaluated in Chapters 2 through 5. The dimensions are 18.9 mm by 13.6 mm (257 mm²) in a 45 nm process. (Courtesy Intel.)
Copyright © 2011, Elsevier Inc. All rights Reserved.

Figure 1.14 Floorplan of the Core i7 die in Figure 1.13 on the left, with a close-up of the floorplan of the second core on the right.

Performance, Price-Performance (SPEC)

Performance, Price-Performance (TPC-C)

Misc. Items

- Check the SPEC web site for more information: http://www.spec.org
- Read Fallacies and Pitfalls. For example, it is a fallacy that MIPS is an accurate measure for comparing performance among computers:

  MIPS = InstCount / (ExecTime x 10^6) = ClockRate / (CPI x 10^6)

- MIPS depends on the instruction set, so it is difficult to compare the MIPS of computers with different instruction sets.
- MIPS varies between programs on the same computer.
- MIPS can vary inversely to performance! (Consider a machine with floating-point hardware vs. software floating-point routines.)

Example Using MIPS

- Instruction distribution:
  - ALU: 43%, 1 cycle/inst
  - Load: 21%, 2 cycles/inst
  - Store: 12%, 2 cycles/inst
  - Branch: 24%, 2 cycles/inst
- An optimizing compiler removes 50% of the ALU instructions.

  CPI_unoptimized = 1 x 0.43 + 2 x 0.21 + 2 x 0.12 + 2 x 0.24 = 1.57
  MIPS_unoptimized = ClockRate / (1.57 x 10^6) = 6.37 x 10^-7 x ClockRate

  CPI_optimized = (1 x (0.43/2) + 2 x 0.21 + 2 x 0.12 + 2 x 0.24) / (1 - 0.43/2) ≈ 1.73
  MIPS_optimized = ClockRate / (1.73 x 10^6) = 5.78 x 10^-7 x ClockRate

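The whole example fits in a short script, which makes the punch line concrete: the optimized code has a lower MIPS rating yet a shorter execution time. The 100 MHz clock is an assumed value; any clock rate gives the same conclusion:

```python
def mips(clock_rate_hz, cpi):
    # MIPS = ClockRate / (CPI x 10^6)
    return clock_rate_hz / (cpi * 1e6)

CLOCK = 100e6  # assumed 100 MHz clock, for illustration only

cpi_unopt = 1 * 0.43 + 2 * 0.21 + 2 * 0.12 + 2 * 0.24
# Half the ALU instructions removed: renormalize CPI over the smaller count
cpi_opt = (1 * 0.43 / 2 + 2 * 0.21 + 2 * 0.12 + 2 * 0.24) / (1 - 0.43 / 2)

time_unopt = 1.0 * cpi_unopt / CLOCK         # instruction count normalized to 1
time_opt = (1 - 0.43 / 2) * cpi_opt / CLOCK  # fewer instructions executed

print(round(cpi_unopt, 2), round(cpi_opt, 2))         # 1.57 1.73
print(mips(CLOCK, cpi_opt) < mips(CLOCK, cpi_unopt))  # True: MIPS got worse
print(time_opt < time_unopt)                          # True: but it runs faster
```

The average CPI rises because the removed instructions were the cheap 1-cycle ones, so MIPS falls even though total cycles, and hence execution time, went down.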