Computer Architecture and Engineering Lecture 8: Divide, Floating


Lecture 7: Midterm Review and Floating Point
Professor Mike Schulte
Computer Architecture
ECE 201
Midterm Format
° Open book, open note
• Know the material well enough so that you do not need your book
or notes
° Bring calculator and scratch paper.
° About five problems (with multiple parts)
• Short answers
• Explanations
• Problem solving
° Based on lecture notes, book, and homeworks (Lectures 1 - 6,
Chapters 1 - 4, Homeworks 1 & 2). No division or floating point until the
midterm.
° 75 minutes for midterm, in class on February 21st.
° Try sample test and 1998 midterm on course homepage
Chapter 1 : Computer Abstractions and Technology
° Instruction Set Architecture and Machine Organization
° Levels of abstraction
• Interface (outside view)
• implementation (inside view)
° Current trends in capacity and performance
• processor, memory, and I/O
° Predicting improvements in performance
• Processor performance improves by 50% per year
• How much faster will the processor be in 5 years?
° Types of computer components
• datapath, control, memory, input, and output
Chapter 2: The Role of Performance
° Execution time (seconds) vs. Performance (tasks per second)
° "X is n times faster than Y" means
n = performance(X) / performance(Y) = execution_time(Y) / execution_time(X)
° Calculating CPU execution time
CPU time = Instruction count x CPI x clock cycle time
CPU time = Instruction count x CPI / clock rate
What affects each of the above factors?
° Computer Benchmarks (SPEC Benchmarks)
° Summarizing performance: arithmetic mean vs. geometric mean
° Poor performance metrics: MIPS and MFLOPS
° Amdahl's law:
Speedup = ExTime_old / ExTime_new
        = 1 / ((1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced)
Chapter 3: Instruction Set Architecture
° The MIPS architecture
• registers and memory addressing
• instruction formats and fields
• instructions supported
° Differences between x86 and MIPS
° Using MIPS instructions to accomplish tasks
° Evaluating instruction set alternatives
° Going from C to MIPS assembly and from MIPS assembly to machine
code
° Pseudo-instructions
Chapter 4: Arithmetic For Computers
° Unsigned and two’s complement number systems
° Negating two’s complement numbers
° Binary addition and subtraction
° MIPS logical operations
° ALU design
• Full adders, multiplexors, 1-bit ALUs
• Addition/subtraction, logic operations, set less than
• Overflow and zero detection.
° Carry Lookahead Addition
° Binary multiplication and Booth's algorithm
Floating-Point
° What can be represented in N bits?
° Unsigned: 0 to 2^N - 1
° 2's Complement: -2^(N-1) to 2^(N-1) - 1
° But, what about?
• very large numbers?
9,349,398,989,787,762,244,859,087,678
1.23 x 10^67
• very small numbers?
0.0000000000000000000000045691
2.98 x 10^-32
• fractional values?
0.35
• mixed numbers?
10.28
• irrationals?
π
Recall Scientific Notation

6.02 x 10^23        1.673 x 10^-24

• mantissa: sign + magnitude, with a decimal point (6.02, 1.673)
• exponent: sign + magnitude (23, -24)
• radix (base): 10

IEEE F.P. uses the same idea with radix 2:
± 1.M x 2^(E - 127)
° Issues:
• Representation
• Arithmetic operations (+, -, *, /)
• Range and precision
• Rounding
• Exceptions (e.g., divide by zero, overflow, underflow)
• Errors
° On most general purpose computers, these issues are addressed by the
IEEE 754 floating point standard.
IEEE-754 Single Precision Floating-Point Numbers
single precision (32 bits, float in C):
1 bit: sign S
8 bits: exponent E (excess 127 binary integer; actual exponent is
e = E - 127, with 0 < E < 255)
23 bits: mantissa or significand M (sign + magnitude, normalized binary
significand with hidden one bit: 1.M)

X = (-1)^S x 2^(E - 127) x (1.M)

Magnitude of numbers that can be represented is in the range:
2^-126 x (1.0)  to  2^127 x (2 - 2^-23)
which is approximately:
1.2 x 10^-38  to  3.40 x 10^38
Why use a biased exponent?
Why are floating point numbers normalized?
IEEE-754 Double Precision Floating-Point Numbers
double precision (64 bits, double in C):
1 bit: sign S
11 bits: exponent E (excess 1023 binary integer; actual exponent is
e = E - 1023, with 0 < E < 2047)
52 bits: mantissa or significand M (sign + magnitude, normalized binary
significand with hidden one bit: 1.M)

X = (-1)^S x 2^(E - 1023) x (1.M)

Magnitude of numbers that can be represented is in the range:
2^-1022 x (1.0)  to  2^1023 x (2 - 2^-52)
which is approximately:
2.2 x 10^-308  to  1.8 x 10^308
The IEEE 754 standard also supports extended single-precision (more
than 32 bits) and extended double-precision (more than 64 bits).
Special values for the exponent and mantissa are used to indicate
other values, like zero and infinity.
Converting from Binary to Decimal Floating Point
° What is the decimal single-precision floating point number that
corresponds to the bit pattern 01000100010010010000000000000000?
° Use the equation
X = (-1)^S x 2^(E - 127) x (1.M)
where
S = 0
E = 10001000 (base 2) = 136 (base 10)
1.M = 1.10010010000000000000000 = 1 + 2^-1 + 2^-4 + 2^-7 = 1.5703125
so
X = (-1)^0 x 2^(136 - 127) x 1.5703125 = 804 = 8.04 x 10^2
Converting from Decimal to Binary Floating Point
° What is the binary representation for the single-precision floating point
number that corresponds to X = -12.25 (base 10)?
° What is the normalized binary representation for the number?
-12.25 (base 10) = -1100.01 (base 2) = -1.10001 (base 2) x 2^3
° What are the sign, stored exponent, and normalized mantissa?
S = 1 (since the number is negative)
E = 3 + 127 = 130 = 128 + 2 = 10000010 (base 2)
M = 10001000000000000000000 (base 2)
X = 11000001010001000000000000000000 (base 2)
° What is the binary representation for the double-precision floating point
number that corresponds to X = -12.25 (base 10)?
X = 1100000000101000100000000000000000000000000000000000000000000000
(base 2)
Denormalized Numbers and Zero

[Figure: two number lines near 0, drawn for B = 2, p = 4. The first shows
normal numbers with the hidden bit: the gap between 0 and the smallest
normal number, 2^(1-bias), is much larger than the gaps between nearby
representable numbers. The second shows denormalized numbers, with p - 1
bits of precision, filling that gap; normal numbers, with p bits of
precision, begin at 2^(1-bias).]

The gap between 0 and the next representable number is much larger
than the gaps between nearby representable numbers. The IEEE standard
uses denormalized numbers to fill in the gap, making the distances
between numbers near 0 more alike.

Denormalized numbers have an exponent field of zero and a value of
X = (-1)^S x 2^(-bias + 1) x (0.M)

NOTE: Zero is represented using 0 for the exponent and 0 for the mantissa.
Either +0 or -0 can be represented, based on the sign bit.
Infinity and NaNs
° If the result of an operation overflows, i.e., is larger than the largest
number that can be represented, the result is +/- infinity. Overflow is not
the same as divide by zero (which raises a different exception).

+/- infinity:  S 1...1 0...0   (exponent all ones, mantissa all zeros)

It may make sense to do further computations with infinity;
e.g., X/0 > Y may be a valid comparison.

° NaN: not a number, but not infinity (e.g., sqrt(-4)). Produces an invalid
operation exception (unless the operation is = or ≠).

NaN:  S 1...1 non-zero   (exponent all ones, mantissa non-zero;
hardware decides what goes in the mantissa)

NaNs propagate: f(NaN) = NaN
Basic Addition Algorithm
For addition (or subtraction) this translates into the following steps:
(1) compute Ye - Xe (getting ready to align binary point)
(2) right shift Xm that many positions to form Xm x 2^(Xe - Ye)
(3) compute Xm x 2^(Xe - Ye) + Ym
if representation demands normalization, then normalization step follows:
(4) left shift result and decrement result exponent (e.g., 0.001xx...), or
right shift result and increment result exponent (e.g., 11.1xx...);
continue until MSB of data is 1 (NOTE: hidden bit in IEEE Standard)
(5) if result is 0 mantissa, may need to zero exponent by special step
Note: The book also gives an algorithm for floating point multiplication -
look it over, and see http://www.ecs.umass.edu/ece/koren/arith/simulator/.
Rounding Digits
If the result is normalized but has some non-zero digits to the right of the
significand, the number should be rounded.
E.g., Base = 10, p = 3:

    [0 | 2 | 1.69]    1.6900 x 10^(2 - bias)
  - [0 | 0 | 7.85]   - .0785 x 10^(2 - bias)
  = [0 | 2 | 1.61]    1.6115 x 10^(2 - bias)

IEEE Standard 754 has four rounding modes:
• round to nearest (default)
• round towards plus infinity
• round towards minus infinity
• round towards 0
See http://www.ecs.umass.edu/ece/koren/arith/simulator/.