CHAPTER 1 INTRODUCTION
unit for the instruction set architecture. Chapter 7 covers the organization of memory units, and memory management techniques. Chapter 8 covers input,
output, and communication. Chapter 9 covers advanced aspects of single-CPU systems which might have more than one processing unit. Chapter 10 covers
advanced aspects of multiple-CPU systems, such as parallel and distributed architectures, and network architectures. Finally, in Appendices A and B, we look
into the design of digital logic circuits, which are the building blocks for the basic components of a computer.
1.8 Case Study: What Happened to Supercomputers?
[Note from the authors: The following contribution comes from the Web page http://www.paralogos.com/DeadSuper created by Kevin D. Kissell at kevink@acm.org. Kissell's Web site lists dozens of supercomputing projects that have gone by the wayside. One of the primary reasons for the near-extinction of supercomputers is that ordinary, everyday computers achieve a significant fraction of supercomputing power at a price that the common person can afford. The price-to-performance ratio for desktop computers is very favorable due to low costs achieved through mass market sales. Supercomputers enjoy no such mass markets, and continue to suffer very high price-to-performance ratios.

Figure 1-6 A Pentium II based motherboard, with labels for the Pentium II processor slot, memory, battery, plug-in expansion card slots, and power supply connector. [Source: TYAN Computer, http://www.tyan.com.]

Following Kissell's contribution is an excerpt from an Electronic Engineering Times article that highlights the enormous investment in everyday microprocessor development, which helps maintain the favorable price-to-performance ratio for low-cost desktop computers.]
The Passing of a Golden Age?
From the construction of the first programmed computers until the mid-1990s, there was always room in the computer industry for someone with a clever, if sometimes challenging, idea on how to make a more powerful machine. Computing became strategic during the Second World War, and remained so during the Cold War that followed. High-performance computing is essential to any modern nuclear weapons program, and a computer technology "race" was a logical corollary to the arms race. While powerful computers are of great value to a number of other industrial sectors, such as petroleum, chemistry, medicine, aeronautical, automotive, and civil engineering, the role of governments, and particularly the national laboratories of the US government, as catalysts and incubators for innovative computing technologies can hardly be overstated. Private industry may buy more machines, but rarely do they risk buying those with single-digit serial numbers. The passing of Soviet communism and the end of the Cold War brought us a generally safer and more prosperous world, but it removed the raison d'être for many merchants of performance-at-any-price.

Figure 1-7 The Manchester University Mark I, made operational on 21 June 1948. Not to be confused with the Harvard Mark I, donated to Harvard University by International Business Machines in August, 1944.
Accompanying these geopolitical changes were some technological and economic trends that spelled trouble for specialized producers of high-end computers. Microprocessors began in the 1970s as devices whose main claim to fame was that it was possible to put a stored-program computer on a single piece of silicon. Competitive pressures, and the desire to generate sales by obsoleting last year's product, made for the doubling of microprocessor computing power every 18 months, Moore's celebrated "law." Along the way, microprocessor designers borrowed almost all the tricks that designers of mainframe and numerical supercomputers had used in the past: storage hierarchies, pipelining, multiple functional units, multiprocessing, out-of-order execution, branch prediction, SIMD processing, speculative and predicated execution. By the mid-1990s, research ideas were going directly from simulation to implementation in microprocessors destined for the desktops of the masses. Nevertheless, it must be noted that most of the gains in raw performance achieved by microprocessors in the preceding decade came, not from these advanced techniques of computer architecture, but from the simple speedup of processor clocks and quantitative increase in processor resources made possible by advances in semiconductor technology. By 1998, the CPU of a high-end Windows-based personal computer was running at a higher clock rate than the top-of-the-line Cray Research supercomputer of 1994.

It is thus hardly surprising that the policy of the US national laboratories has shifted from the acquisition of systems architected from the ground up to be supercomputers to the deployment of large ensembles of mass-produced microprocessor-based systems, with the ASCI project as the flagship of this activity. As of this writing, it remains to be seen if these agglomerations will prove to be sufficiently stable and usable for production work, but the preliminary results have been at least satisfactory. The halcyon days of supercomputers based on exotic technology and innovative architecture may well be over.
[...]

Kevin D. Kissell
kevink@acm.org
February, 1998
[Note from the authors: The following excerpt is taken from the Electronic Engineering Times, source: http://techweb.cmp.com/eet/news/98/994news/invest.html.]
Invest or die: Intel’s life on the edge
By Ron Wilson and Brian Fuller
SANTA CLARA, Calif. -- With about $600 million to pump into venture companies this year, Intel Corp. has joined the major leagues of venture-capital firms. But the unique imperative that drives the microprocessor giant to invest gives it influence disproportionate to even this large sum. For Intel, venture investments are not just a source of income; they are a vital tool in the fight to survive.

Survival might seem an odd preoccupation for the world's largest semiconductor company. But Intel, in a way all its own, lives hanging in the balance. For every new generation of CPUs, Intel must make huge investments in process development, in buildings and in fabs -- an investment too huge to lose.
Gordon Moore, Intel chairman emeritus, gave scale to the wager. An R&D fab today costs $400 million just for the building. Then you put about $1 billion of equipment in it. That gets you a quarter-micron fab for maybe 5,000 wafers per week, about the smallest practical fab. For the next generation, Moore said, the minimum investment will be $2 billion, with maybe $3 billion to $4 billion for any sort of volume production. No other industry has such a short life on such huge investments.
Much of this money will be spent before there is a proven need for the microprocessors the fab will produce. In essence, the entire $4 billion per fab is bet on the proposition that the industry will absorb a huge number of premium-priced CPUs that are only somewhat faster than the currently available parts. If for just one generation that didn't happen -- if everyone judged, say, that the Pentium II was fast enough, thank you -- the results would be unthinkable.

"My nightmare is to wake up some day and not need any more computing power," Moore said.
■ SUMMARY
Computer architecture deals with those aspects of a computer that are visible to a programmer, while computer organization deals with those aspects that are at a more physical level and are not made visible to a programmer. Historically, programmers had to deal with every aspect of a computer – Babbage with mechanical gears, and ENIAC programmers with plugboard cables. As computers grew in sophistication, the concept of levels of machines became more pronounced, allowing computers to have very different internal and external behaviors while managing complexity in stratified levels. The single most significant development that makes this possible is the stored program computer, which is embodied in the von Neumann model. It is the von Neumann model that we see in most conventional computers today.
■ Further Reading
The history of computing is riddled with interesting personalities and milestones. Anderson, 1991 gives a short, readable account of both during the last century. Bashe et al., 1986 give an interesting account of the IBM machines. Bromley, 1987 chronicles Babbage's machines. Ralston and Reilly, 1993 give short biographies of the more celebrated personalities. Randell, 1982 covers the history of digital computers. A very readable Web-based history of computers by Michelle A. Hoyle can be found at http://www.interpac.net/~eingang/Lecture/toc.html. SciAm, 1993 covers a readable version of the method of finite differences as it appears in Babbage's machines, and the version of the difference engine created by the Science Museum in London.

Tanenbaum, 1999 is one of a number of texts that popularizes the notion of levels of machines.
Anderson, Harlan, Dedication address for the Digital Computer Laboratory at the University of Illinois, April 17, 1991, as reprinted in IEEE Circuits and Systems Society Newsletter, vol. 2, no. 1, pp. 3–6, March 1991.

Bashe, Charles J., Lyle R. Johnson, John H. Palmer, and Emerson W. Pugh, IBM's Early Computers, The MIT Press, 1986.
Bromley, A. G., "The Evolution of Babbage's Calculating Engines," Annals of the History of Computing, vol. 9, pp. 113–138, 1987.

Randell, B., The Origins of Digital Computers, 3e, Springer-Verlag, 1982.

Ralston, A. and E. D. Reilly, eds., Encyclopedia of Computer Science, 3e, van Nostrand Reinhold, 1993.
Tanenbaum, A., Structured Computer Organization, 4e, Prentice Hall, Englewood Cliffs, New Jersey, 1999.
■ PROBLEMS
1.1 Moore's law, which is attributed to Intel founder Gordon Moore, states that computing power doubles every 18 months for the same price. An unrelated observation is that floating point instructions are executed 100 times faster in hardware than via emulation. Using Moore's law as a guide, how long will it take for computing power to improve to the point that floating point instructions are emulated as quickly as their earlier hardware counterparts?
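Problems of this kind reduce to solving 2^(t/18) = k for t, which gives t = 18 · log2(k) months. As a rough sketch of the arithmetic (the function name is ours, not the text's, and the 100× factor comes from the problem statement):

```python
import math

def moores_law_time(speedup_factor, doubling_period_months=18):
    """Months needed for computing power to grow by speedup_factor,
    assuming it doubles every doubling_period_months (Moore's law)."""
    return doubling_period_months * math.log2(speedup_factor)

# Closing a 100x gap between hardware and emulated floating point:
months = moores_law_time(100)
print(f"{months:.1f} months (~{months / 12:.1f} years)")  # → 119.6 months (~10.0 years)
```

This treats Moore's law as a pure exponential; the doubling period is the 18 months stated in the problem, and other commonly quoted periods would change the result proportionally.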