The “Laws” of Computers
Robert M. Hayes
2002
Overview
 Grosch's Law
  General Nature
  The Rationale
  The Impact Today
 Moore's Law
  General Nature
  The Origins of Solid-State Electronics
  The Integrated Circuit
  The Commentary by Moore in 1965
  The Reprise in 1975
  The Reprise in 1995
 The Implications of Moore's Law
  As a Basis for Growth of Information Technology
  As a Guide to Industry
  As a Context for Software
Grosch's Law
 General Nature
 The Rationale
 The Impact Today
General Nature - 1
 For many years, a reigning theory about the economics
of computers was Grosch's Law. It asserted that huge
economies of scale were available because a computer's
capabilities grew far more rapidly than its cost.
Accordingly, the profitability of computerization would
show up when firms bought large-scale equipment and
centralized the workload in data centers for more
efficient processing.
 Herbert R. J. Grosch, an IBM employee at the time he
made that assertion and subsequently the head of the
U.S. Department of Commerce's National Bureau of
Standards, said that "a computer's power increases
with the square of its costs".
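 Stated as a formula, the law says that power is
proportional to the square of cost. A minimal sketch of
that arithmetic, assuming an arbitrary proportionality
constant k (not a figure Grosch gave):

# Grosch's Law as stated: power grows as the square of cost.
k = 1.0                          # hypothetical constant of proportionality

def power(cost):
    return k * cost ** 2         # doubling the cost quadruples the power

print(power(2.0) / power(1.0))   # 4.0
print(power(3.0) / power(1.0))   # 9.0: triple the spend, nine times the power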
General Nature - 2
 Though Grosch never published his work, his theory
became the accepted truth for computer capacity planning
for more than 20 years.
 It was never clear whether Grosch's Law was a reflection of
how IBM priced its computers or whether it related to
actual costs, but it provided the rationale for the view that,
for computers, "bigger is better".
 IBM used Grosch's Law to persuade organizations to
acquire more computing capacity, and it became the
justification for offering time-sharing services from big
data centers to replace distributed computing.
The Rationale - 1
 There is in fact an underlying rationale for Grosch's
Law. At the time he formulated it, the internal memory
and its associated circuitry were the major component of
the variable cost of a computer. All other internal
elements, such as the central processing unit and the
means for dealing with input and output, were essentially
fixed costs. Peripheral units, such as magnetic tapes as
storage devices and printers as means for output, were
linear costs, but relatively small ones.
 At that time, the means for internal storage were
magnetic cores, arranged in three-dimensional arrays.
The fixed size of a "word" as a unit of memory
represented one dimension of the array, and the other
two dimensions represented the number of words in the
memory.
The Rationale - 2
 Given that the words formed a rectangular array, the
cost of the circuitry for the memory was proportional to
the sum of the two dimensions of the array, since drive
circuitry was needed along each dimension. Thus, if the
size of the memory was N*N words, the cost of the
circuitry was 2*C*N, where C is the cost per line of
circuitry. So the total cost was 2*C*N plus the
essentially fixed costs for the other internal elements
and the linear costs for peripherals.
 The crucial point is that the power of a computer is
primarily dependent upon the size of the internal
memory, since that determines the size of the program
and the amount of data that can be processed internally.
Hence, by doubling the circuitry cost from 2*C*N to
2*C*(2*N), one could increase the memory from N*N to
(2*N)*(2*N) words, quadrupling its size and thus the
power of the computer.
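 The core-memory arithmetic above can be checked
directly. A minimal sketch, assuming a hypothetical
per-line circuitry cost C:

# Rationale for Grosch's Law with core memory: an N x N array of words
# needs drive circuitry along both dimensions, so circuitry cost grows
# as 2*C*N while capacity (and hence power) grows as N*N.
C = 1.0                          # hypothetical cost per line of circuitry

def circuitry_cost(n):
    return 2 * C * n             # linear in N

def capacity(n):
    return n * n                 # quadratic in N

n = 1000
print(circuitry_cost(2 * n) / circuitry_cost(n))  # 2.0: the cost doubles
print(capacity(2 * n) / capacity(n))              # 4.0: the capacity quadruples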
The Impact Today
 Today the situation is totally different as we will see in a
moment when we consider Moore's Law. The size of the
internal memory has become almost insignificant as
part of the cost of the computer, and internal memories
of hundreds of megabytes are common, even in
personal computers.
 As yet, there is no evidence that computer investments
are exhibiting economies of scale. The now virtually
irrelevant Grosch's Law should serve as a reminder
that the history of the economics of computing has had
an abundance of unsupported misperceptions or, at
most, temporarily realistic perceptions. Ideas of how to
invest in new equipment acquire temporary popularity
until they are conveniently abandoned for yet another
rationalization of how to spend money.
Moore's Law
 General Nature
 The Origins of Solid-State Electronics
 The Integrated Circuit
 The Commentary by Gordon E. Moore in 1965
 The Reprise in 1975
 The Reprise in 1995
General Nature
 "Moore's Law" is the observation, made in 1965 by
Gordon E. Moore, that semiconductor circuit densities
had doubled on a regular basis for the prior several
years. He projected that they would continue to do so
for the foreseeable future. That prediction has been
validated and now carries with it enormous influence.
 The manufacturers regard Moore's Law as a challenge,
which they have continued to meet. As a result, it has
led users to expect a continuous stream of faster, better,
and cheaper information technology products.
 The most recent behavior is shown in the following
chart:
[Chart: semiconductor circuit density over time; not reproduced in this transcript]
The Origins of Solid-State Electronics
 It is worth briefly reviewing the context for Moore's
Law in the invention of the "transistor" in 1947 by Bell
Laboratories researchers. It created a new era of
solid-state electronics, and the 1950s saw significant
progress in the creation of an entire new industry that
would design and manufacture semiconductor devices.
The Integrated Circuit - 1
 The development of the integrated circuit in 1958
represents a major product milestone, made possible by
overcoming technological barriers. One innovation was
the introduction of a masking process which allowed
the laying of intricate patterns on the semiconductor. A
second innovation was to flatten the structure into a
plane, which enabled electrical connections to be made,
not laboriously by hand, but by depositing an evaporated
metal film on appropriate portions of the
semiconductor wafer. The "microchip" was born out of
the planar transistor. Most significantly, the planar
process enabled the integration of circuits on a single
substrate since electrical connections between circuits
could be accomplished internal to the chip.
The Integrated Circuit - 2
 Fairchild introduced the first planar transistor in 1959
and the first planar integrated circuit in 1961. Moore
views the 1959 innovation of the planar transistor as
the origin of "Moore's Law." Perhaps more than any
other single process innovation, planarization set the
industry on its historical exponential pace of progress.
 Amazingly, the industry has not veered from this
course since then. With time, chip manufacturers
improved the masking process with more precise
photographic methods, and "photolithography" thus
became the standardized production method for the
industry. More pertinent to "Moore's Law,"
photolithography enabled manufacturers to continue to
reduce feature sizes of devices.
The Commentary by Moore in 1965 - 1
 In the April 19, 1965 issue of Electronics there was an
article entitled, "Cramming more components onto
integrated circuits," by Gordon E. Moore, then
Director, Research and Development Laboratories,
Fairchild Semiconductor. Moore had been asked by
Electronics to predict what would happen in the
semiconductor components industry over the next 10
years, that is, through 1975. He speculated that by 1975
it would be possible to squeeze as many as 65,000
components onto a single silicon chip occupying an area
of only about one-fourth of a square inch.
The Commentary by Moore in 1965 - 2
 His reasoning was, "The complexity for minimum
component costs has increased at a rate of roughly a
factor of two per year. Certainly over the short term
this rate can be expected to continue, if not to increase.
Over the longer term, the rate of increase is a bit more
uncertain, although there is no reason to believe it will
not remain nearly constant for at least 10 years."
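 The arithmetic behind that projection can be sketched
as follows; the starting count of about 64 components in
1965 is an assumption made here to keep the numbers
concrete, not a figure quoted in the slide:

# Moore's 1965 projection: component counts doubling once per year.
components_1965 = 64             # assumed (hypothetical) starting point

def project(components, years):
    return components * 2 ** years   # one doubling per year

print(project(components_1965, 10))  # 65536, i.e. roughly 65,000 by 1975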
The Reprise in 1975 - 1
 Ten years later, Moore delivered a paper at the 1975
IEEE International Electron Devices Meeting in which
he reexamined the annual rate of density-doubling.
Amazingly, the plot had held through a scatter of
different complex types of devices introduced over the
ten-year period. And a new device to be introduced in
1975 indeed contained almost 65,000 components.
 In this paper, Moore also offered his analysis of the
major contributors to, or causes of, the exponential
behavior. He cited three reasons:
  First, micro-chip sizes were getting bigger and manufacturers
could work with larger areas without sacrificing quality.
  Second, there was a simultaneous evolution to finer feature
sizes or line widths.
  Third was what Moore called "circuit and device cleverness."
The Reprise in 1975 - 2
 But Moore concluded, "There is no room left to squeeze
anything out by being clever. Going forward from here
we have to depend on the two size factors—bigger
(chips) and finer dimensions." So Moore revised his
annual rate of circuit density-doubling, concluding that
every eighteen months was a reasonable rate. He
redrew the plot from 1975 forward with a less steep
slope reflecting a slowdown in the rate, but still
behaving in a log-linear fashion. So, Moore's Law now
states that circuit density or capacity of semiconductors
doubles every eighteen months.
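 As illustrative arithmetic only, the effect of the
revision is easy to see: the growth factor over a fixed
span depends sharply on the doubling period.

# Growth factor when density doubles every `period` years.
def growth_factor(years, period):
    return 2 ** (years / period)

print(growth_factor(10, 1.0))    # ~1024x under the original annual doubling
print(growth_factor(10, 1.5))    # ~102x under the revised 18-month doubling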
The Reprise in 1995
 In 1995 Moore compared the actual performance
against his revised projection of 1975. Amazingly, it
tracked the slope of the exponential curve fairly closely.
Chip sizes had continued to increase while line widths
had continued to decrease at exponential rates
consistent with his 1975 analysis.
 In early 1996, IBM claimed that a gigabit (billion-bit)
memory chip was actively under development and
would be commercially available within a few years.
And papers presented at the 1995 IEEE International
Solid-State Circuits Conference contended that terachips
(capable of handling a trillion bits or instructions) would
arrive by the end of the next decade.
The Implications of Moore's Law
 As a Basis for Growth of Information Technology
 As a Guide to Industry
 As a Context for Software
Growth of Information Technology
 The implications of Moore's Law are quite obvious and
profound. As more computing power can be placed on
a single micro-chip, greater power can be obtained at
virtually the same cost. As long as Moore's law
continues to apply, therefore, the capabilities of
information technology will also grow exponentially. As
Moore says, "By making things smaller, everything gets
better simultaneously. There is little need for tradeoffs.
The speed of our products goes up, the power
consumption goes down, system reliability, as we put
more of the system on a chip, improves by leaps and
bounds, but especially the cost of doing things
electronically drops as a result of the technology."
As a Guide to Industry
 Perhaps the broadest implication of Moore's Law is
that it has become an almost universal guide for the
entire industry. In a sense, Moore's Law represents not
the physics and chemistry of semiconductors but the
effect of organizational efforts. In that respect, Moore's
Law has become almost a self-fulfilling prophecy, since
it has guided industrial developments.
As a Context for Software - 1
 There is another side to this picture, though. While the
theoretical capabilities have grown exponentially, that
by no means guarantees that those capabilities will be
effectively used. Recall that the underlying rationale for
Grosch's Law was that the size of the memory determined
the power of the computer, because it limited the size of
the program and the amount of data that could be stored
internally.
As a Context for Software - 2
 In the early days of computing, internal memory was
costly and scarce. As a result, software had to fit into
limited memory. That meant efficient use of memory or
what was called "tight" code. But now the situation is
dramatically different. Under Moore's Law, average PC
memory sizes have grown at an exponential rate, and
software is no longer constrained to "tight spaces". The
result has been the proliferation of thousands, then
many thousands, and now millions of "lines of code" as
the norm for complex system software.
As a Context for Software - 3
 In 1975, the programming language Basic had 4,000
lines of code; by 1995, it had roughly half a million. In
1982, the first version of Microsoft Word consisted of
27,000 lines of code; by 2002, it had grown to about 2
million. So the size and complexity of software have
increased even faster than Moore's Law.
 As a result, software has become a much larger part of
the cost of a computer system. More complex software
requires more memory and more processing capacity,
and software designers have come to expect that they
will be available. Indeed, "software expands to fill the
available memory".
THE END