Group II - Lectures for All


MICROPROCESSOR: AN OVERVIEW
 A microprocessor is a multipurpose, programmable,
clock-driven, register-based device that takes input and provides
output. A microprocessor incorporates most or all of the
functions of a computer's central processing unit (CPU) on
a single integrated circuit (IC, or microchip). The first
microprocessors emerged in the early 1970s and were used
for electronic calculators, using binary-coded
decimal (BCD) arithmetic in 4-bit words.
Other embedded uses of 4-bit and 8-bit microprocessors,
such as terminals, printers, various kinds
of automation etc., followed soon after. Affordable 8-bit
microprocessors with 16-bit addressing also led to the first
general-purpose microcomputers from the mid-1970s on.
CPU ARCHITECTURE
 The processor (really a short form for microprocessor
and also often called the CPU or central processing
unit) is the central component of the PC. This vital
component is in some way responsible for every single
thing the PC does. It determines, at least in part,
which operating systems can be used, which software
packages the PC can run, how much energy the PC
uses, and how stable the system will be, among other
things. The processor is also a major determinant of
overall system cost: the newer and more powerful the
processor, the more expensive the machine will be.
 For some years two families of microprocessor
dominated the PC industry - Intel's Pentium and the
Apple/IBM/Motorola alliance's PowerPC - each CPU
being a prime example of the competing CPU
architectures of the time, CISC and RISC.
CISC - Complex Instruction Set
Computer
 CISC is the traditional architecture of a computer, in which
the CPU uses microcode to execute a very comprehensive
instruction set. Instructions may be variable in length and use
all addressing modes, requiring complex circuitry to decode
them.
 For a number of years, the tendency among computer
manufacturers was to build increasingly complex CPUs
that had ever-larger sets of instructions. In 1974, John
Cocke of IBM Research decided to try an approach that
dramatically reduced the number of instructions a chip
performed. By the mid-1980s this had led to a number of
computer manufacturers reversing the trend by building
CPUs capable of executing only a very limited set of
instructions.
RISC - Reduced Instruction Set
Computer
 RISC CPUs keep instruction size constant, ban the
indirect addressing mode and retain only those
instructions that can be overlapped and made to
execute in one machine cycle or less. One advantage of
RISC CPUs is that they can execute their instructions
very fast because the instructions are so simple.
Another, perhaps more important advantage, is that
RISC chips require fewer transistors, which makes
them cheaper to design and produce.
CPU Basic Structure
 Core: The heart of a modern CPU is the execution
unit. The Pentium microprocessor has two parallel
integer pipelines, enabling it to read, interpret,
execute and dispatch two instructions simultaneously.
 Branch Predictor: The branch predictor unit tries to
guess which sequence will be executed each time the
program contains a conditional jump, so that the
prefetch and decode unit can get instructions ready in
advance.
 Floating Point: The third execution unit in a Pentium
microprocessor, where non-integer calculations are
performed.
 Level 1 Cache: The Pentium has two on-chip caches of
8 KB each, one for code and one for data, which are far
quicker than the larger external secondary cache (L2
Cache).
 Bus Interface: This brings a mixture of code and data
into the CPU, separates the two ready for use, and then
recombines them and sends them back out.
Moore's Law
 Moore's law describes a long-term trend in the history of
computing hardware: the number of transistors that
can be placed inexpensively on an integrated circuit
has doubled approximately every two years. The trend
has continued for more than half a century and is not
expected to stop until 2015 or later.
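The doubling trend is easy to turn into arithmetic. A minimal Python sketch; the baseline figures (the Intel 4004's roughly 2,300 transistors in 1971) and the function name are illustrative assumptions, not taken from this text:

```python
# Sketch of Moore's law: transistor count doubles roughly every two years.
# Baseline (Intel 4004, ~2,300 transistors, 1971) is an assumed figure
# used only for illustration.

def projected_transistors(base_count, base_year, target_year, doubling_period=2.0):
    """Project a transistor count forward using a fixed doubling period."""
    return base_count * 2 ** ((target_year - base_year) / doubling_period)

# Two doublings from 1971 to 1975: 2,300 -> 9,200
print(projected_transistors(2300, 1971, 1975))  # 9200.0
```

Because the growth is exponential, a fixed doubling period compounds quickly: eleven doublings (about 22 years) multiply the count by 2,048.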
Microprocessor Progression: Intel
The Intel 8080 was the first
microprocessor in a home
computer.
 The first microprocessor to make it into a home
computer was the Intel 8080, a complete 8-bit
computer on one chip, introduced in 1974. The first
microprocessor to make a real splash in the market
was the Intel 8088, introduced in 1979 and
incorporated into the IBM PC (which first appeared
in 1981).
 Clock speed is the maximum rate that the chip can be
clocked at. Clock speed will make more sense in the
next section.
 Data Width is the width of the ALU. An 8-bit ALU can
add/subtract/multiply/etc. two 8-bit numbers, while a
32-bit ALU can manipulate 32-bit numbers. An 8-bit
ALU would have to execute four instructions to add
two 32-bit numbers, while a 32-bit ALU can do it in
one instruction. In many cases, the external data bus is
the same width as the ALU, but not always. The 8088
had a 16-bit ALU and an 8-bit bus, while the modern
Pentiums fetch data 64 bits at a time for their 32-bit
ALUs.
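The four-instruction claim can be illustrated in software. A minimal Python sketch (the function name is made up) that adds two 32-bit numbers the way an 8-bit ALU must: one byte-wide addition at a time, with the carry propagated between them:

```python
# Sketch: adding two 32-bit numbers using only 8-bit additions plus a carry,
# mimicking what an 8-bit ALU does over four instructions.

def add_32bit_on_8bit_alu(a: int, b: int) -> int:
    """Add two 32-bit values one byte at a time, least-significant byte first."""
    result = 0
    carry = 0
    for byte in range(4):
        a_byte = (a >> (8 * byte)) & 0xFF
        b_byte = (b >> (8 * byte)) & 0xFF
        total = a_byte + b_byte + carry
        carry = total >> 8                    # carry out of the 8-bit add
        result |= (total & 0xFF) << (8 * byte)
    return result & 0xFFFFFFFF                # wrap like 32-bit hardware

print(hex(add_32bit_on_8bit_alu(0x12345678, 0x0FEDCBA8)))  # 0x22222220
```

Real 8-bit CPUs do exactly this with an add instruction followed by add-with-carry instructions; a 32-bit ALU performs the same sum in a single operation.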
 MIPS stands for "millions of instructions per second"
and is a rough measure of the performance of a CPU.
Modern CPUs can do so many different things that
MIPS ratings lose a lot of their meaning, but you can
get a general sense of the relative power of the CPUs
from this column.
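As a rough formula, MIPS is just clock rate divided by the average number of cycles an instruction takes. A small sketch; the numbers plugged in are illustrative assumptions, not figures from this text:

```python
# Rough MIPS estimate: instructions completed per second, in millions.
# The example clock rate and cycles-per-instruction are assumed values.

def mips(clock_hz, avg_cycles_per_instruction):
    """Millions of instructions per second for a given clock and CPI."""
    return clock_hz / avg_cycles_per_instruction / 1e6

# A 66 MHz CPU averaging 2 cycles per instruction:
print(mips(66_000_000, 2))  # 33.0
```

The formula also shows why the metric is crude: two CPUs at the same clock rate can have very different cycles-per-instruction, and different instructions do different amounts of work.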
CPU Evolution
 The 4004 CPU was the forerunner of all of today's Intel
offerings and, to date, all PC processors have been
based on the original Intel designs. The first chip used
in an IBM PC was Intel's 8088. This was not, at the
time it was chosen, the best available CPU; in fact,
Intel's own 8086 was more powerful and had been
released earlier. The 8088 was chosen for reasons of
economics: its 8-bit data bus required less costly
motherboards than the 16-bit 8086.
 More transistors also allow for a technology called
pipelining. In a pipelined architecture, instruction
execution overlaps. So even though it might take five
clock cycles to execute each instruction, there can be
five instructions in various stages of execution
simultaneously. That way it looks like one instruction
completes every clock cycle.
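The five-cycle example above reduces to simple arithmetic: a pipeline with S stages finishes N instructions in roughly S + (N − 1) cycles instead of S × N. A minimal sketch of both cases:

```python
# Idealized pipeline timing, ignoring stalls from branches and data hazards.

def unpipelined_cycles(n_instructions, n_stages):
    """Each instruction runs through every stage before the next one starts."""
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages):
    """Fill the pipeline once, then one instruction completes per cycle."""
    return n_stages + (n_instructions - 1)

print(unpipelined_cycles(100, 5))  # 500
print(pipelined_cycles(100, 5))    # 104
```

This idealized count ignores stalls from mispredicted branches and data dependencies, which is one reason real CPUs need the branch predictor described earlier.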
 Many modern processors have multiple instruction
decoders, each with its own pipeline. This allows for
multiple instruction streams, which means that more
than one instruction can complete during each clock
cycle. This technique can be quite complex to
implement, so it takes lots of transistors.
Multi-tasking
 Multitasking, in an operating system, is allowing a user
to perform more than one computer task (such as the
operation of an application program) at a time. The
operating system is able to keep track of where you are
in these tasks and go from one to the other without
losing information. Microsoft Windows 2000, IBM's
OS/390, and Linux are examples of operating systems
that can do multitasking (almost all of today's
operating systems can). When you open your Web
browser and then open Word at the same time, you are
causing the operating system to do multitasking.
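The switching the operating system performs can be mimicked in miniature with cooperative tasks. A toy Python sketch using the standard asyncio module; it models the idea of alternating between tasks without losing each one's place, not how a real OS scheduler works, and the task names are arbitrary:

```python
# Toy model of task switching: two tasks take turns, each resuming
# exactly where it left off. This is cooperative scheduling, not a
# real OS scheduler, but the bookkeeping idea is the same.
import asyncio

order = []

async def task(name, steps):
    for i in range(steps):
        order.append(f"{name}:{i}")
        await asyncio.sleep(0)  # yield control so the other task can run

async def main():
    # Run two "programs" concurrently; the event loop switches between them.
    await asyncio.gather(task("browser", 2), task("word", 2))

asyncio.run(main())
print(order)  # the two tasks' steps interleave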
 A multi-core processor is a processing system
composed of two or more independent cores. One can
describe it as an integrated circuit to which two or
more individual processors (called cores in this sense)
have been attached. Manufacturers typically integrate
the cores onto a single integrated circuit die (known as
a chip multiprocessor or CMP), or onto multiple dies
in a single chip package.
 A many-core processor is one in which the number of
cores is large enough that traditional multi-processor
techniques are no longer efficient — this threshold is
somewhere in the range of several tens of cores — and
probably requires a network on chip.
 A multi-core processor is an integrated circuit (IC) to
which two or more processors have been attached for
enhanced performance, reduced power consumption,
and more efficient simultaneous processing of
multiple tasks (see parallel processing).
 A dual-core processor contains two cores, a quad-
core processor contains four cores, and a hexa-core
processor contains six cores. A multi-core processor
implements multiprocessing in a single physical
package. Designers may couple cores in a multi-core
device together tightly or loosely. For example, cores
may or may not share caches, and they may implement
message passing or shared memory inter-core
communication methods.
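From a program's point of view, the simplest way to use multiple cores is to hand independent tasks to a pool of worker processes and let the OS place them on whatever cores exist. A minimal Python sketch using the standard multiprocessing module; the squaring task is an arbitrary stand-in for real CPU-bound work:

```python
from multiprocessing import Pool

def square(n):
    """Stand-in for a CPU-bound task; each call may run on a different core."""
    return n * n

if __name__ == "__main__":
    # The pool distributes the eight tasks across up to four worker processes.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Separate processes like these communicate by message passing; threads within one process instead share memory, mirroring the tightly versus loosely coupled design choice described above.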
Hyper-threading
 Hyper-threading (officially Hyper-Threading
Technology, and abbreviated HT Technology, HTT
or HT) is Intel's term for its simultaneous
multithreading implementation in its Atom, Core i3,
Core i5, Core i7, Itanium, Pentium 4 and Xeon CPUs.
 Hyper-threading is an Intel-proprietary technology
used to improve parallelization of computations
(doing multiple tasks at once) performed on PC
microprocessors. For each processor core that is
physically present, the operating system addresses two
virtual processors, and shares the workload between
them when possible. Hyper-threading requires not
only that the operating system support multiple
processors, but also that it be specifically optimized for
HTT, and Intel recommends disabling HTT when
using operating systems that have not been so
optimized.
 Intel's Pentium 4 is an example of a processor that
incorporates Hyper-Threading Technology.
 Hyper-threading works by duplicating certain sections
of the processor—those that store the architectural
state—but not duplicating the main execution
resources. This allows a hyper-threading processor to
appear as two "logical" processors to the host
operating system, allowing the operating system to
schedule two threads or processes simultaneously.
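The operating system's view of those logical processors is easy to inspect. Python's standard library reports the logical count; on a hyper-threaded machine this is typically twice the number of physical cores, and the standard library alone does not distinguish the two:

```python
import os

# os.cpu_count() reports logical processors, i.e. what the OS schedules on.
# With hyper-threading enabled this is usually 2x the physical core count.
logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```

Running this on a dual-core machine with hyper-threading enabled would typically report four logical processors.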
THE END