Computer chip power will continue to soar
October 7, 1998
by Gary Anthes
(IDG) -- In 1965, an engineer at Fairchild Semiconductor named Gordon Moore noted that the number of transistors on a chip doubled every 18 to 24 months. A corollary to "Moore's Law," as that observation came to be known, is that the speed of microprocessors, at a constant cost, also doubles every 18 to 24 months.
Moore's Law has held up for more than 30 years. It worked in 1971 when Moore's start-up, Intel Corp., put its first processor chip - the 4-bit, 108-KHz 4004 - into a Japanese calculator. And it still works today for Intel's 32-bit, 450-MHz Pentium II processor, which has 7.5 million transistors and is 233,000 times faster than the 2,300-transistor 4004.
Intel says it will have 100-million-transistor chips on the market in 2001 and a 1-billion-transistor powerhouse performing at 100,000 MIPS in 2011.
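The article's speed claim lines up with the 18-to-24-month corollary, as a few lines of arithmetic show (a back-of-the-envelope check using the figures above, with 1971, the year the 4004 shipped, as the baseline):

```python
import math

# Figures from the article; the 4004 shipped in 1971.
speedup = 233_000                 # Pentium II vs. 4004, per Intel
months = (1998 - 1971) * 12       # elapsed time in months

doublings = math.log2(speedup)    # how many 2x steps that speedup implies
print(f"{doublings:.1f} doublings, one every {months / doublings:.1f} months")
# -> 17.8 doublings, one every 18.2 months
```

That works out to one doubling roughly every 18 months, right at the fast end of the range Moore observed.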
For users, it's been a fast, fun and mostly free ride. But can it last?
Although observers have been saying for decades that exponential gains in chip performance would slow in a few years, experts today generally agree that Moore's Law will continue to govern the industry for another 10 years, at least. Nevertheless, it does face two other formidable sets of laws: those of physics and economics.
A mind-numbing variety of things get exponentially harder as the density of circuits on a silicon wafer increases. The Semiconductor Industry Association's (SIA) 1997 Technology Roadmap identified a number of "grand challenges" as the width of individual circuits on a semiconductor chip shrinks from today's 250 nanometers (or billionths of a meter) to 100 nanometers in 2006, four product cycles later. One hundred nanometers is seen as a particularly challenging hurdle because conventional manufacturing techniques begin to fail as chip features approach that size.
And it isn't just making the chips that's getting more difficult - as Intel discovered in 1994 when an obscure flaw in its then-new Pentium processor triggered a firestorm of bad publicity that cost the company $475 million. Modern chips are so complex that it's impossible, as a practical matter, to test them exhaustively. Increasingly, chip makers rely on incomplete testing combined with statistical analysis. The same methods are used to test very complex software, such as operating systems - but for whatever reason, users who are willing to put up with software bugs are intolerant of flaws in hardware.
At the present rate of improvement in test equipment, the factory yield of good chips will plummet from 90% today to an unacceptable 52% in 2012. At that point, it will cost more to test chips than to make them, the SIA says.
Chip makers are hustling to improve testing equipment - and are extremely reluctant to discuss the matter, which they see as vital to their future competitiveness.
Although the cost of a chip on a per-transistor or per-unit-of-performance basis continues to fall smartly, that trend masks a grim reality for chip makers: A fabrication plant costs about $2 billion today, and the price is expected to zoom to $10 billion - more than a nuclear power plant - as circuit widths shrink below 100 nanometers. Significantly, "scaling" isn't one of the SIA's grand challenges. "Affordable scaling" is.
Indeed, the industry's progress may eventually be slowed by a lack of capital, says James T. Clemens, head of very large-scale integration research at Bell Laboratories, the Murray Hill, N.J., research and development arm of Lucent Technologies, Inc. "Social and financial issues, not technical issues, may ultimately limit the widespread application of advanced [sub-100 nanometers] integrated circuit technology," he says.
As an analogy, Clemens points to the airline industry, which knows how to routinely fly passengers faster than sound but, due to the cost and technical complexity, doesn't do it.
"A lot of people are worried about cost," says John Shen, a professor of electrical and computer engineering at Carnegie Mellon University in Pittsburgh. "You see more and more companies bailing out."
Transistors are etched onto silicon by optical lithography, a process in which ultraviolet light is beamed through a mask to print a pattern of interconnecting lines on a chemically sensitive surface. The conventional approaches that work at 250 nanometers probably can be refined to etch features as small as 130 nanometers - about 400 atoms wide, or roughly a thousand times thinner than a human hair. But at 100 nanometers and below, where the wavelength of light exceeds the size of the smallest features, entirely new methods will be needed.
An Intel-led consortium is working on "extreme ultraviolet" lithography, which uses xenon gas to produce wavelengths down to 10 nanometers. An approach favored by IBM uses X rays with a wavelength of 5 nanometers. Meanwhile, Lucent is developing lithography that uses a beam of electrons. These and other alternatives are complex, costly and still unproven.
Continued progress in processor speeds will require better ways of designing and making chips, but the biggest obstacles to higher performance may currently lie just off the chip: in the motherboard and in the logic that connects the chip to cache memory, graphics ports and other things.
"We do not have the design or manufacturing capabilities in those off-chip structures to keep up with the rapid growth in processor clock speeds," says Bruce Shriver, a consultant in Ossining, N.Y., and a computer science professor at the University of Tromso in Norway. "Unless the design and implementation capabilities in those areas catch up, then they will be a critical limiting point."
But Albert Yu, general manager of Intel's Microprocessor Products Group, says Shriver is worried about a "very temporary problem." Increasingly, off-chip units such as cache will become integrated onto the processor chip, allowing them to work at the same high frequencies as the processor and eliminating the bus between them, he says.
In just the past few months, a number of promising announcements have come out of U.S. research labs.
Says Carnegie Mellon's Shen, "We've always said there's this wall out there, but when you get closer to it, it sort of fades away or gets pushed back."
Many hands make light work
Ultimately, users don't care about transistor counts, clock speeds or even MIPS. They care how much real work their computers get done. One way to make the processor do more work is to move some of the work from hardware to software.
Today's microprocessors are able to achieve "superscalar" performance by executing several instructions simultaneously. Intel's Pentium II - which can execute up to five instructions at a time - predicts the flow of a program through several branches by looking ahead in the program. It analyzes program flow and schedules execution in the most efficient sequence. It also executes instructions "speculatively" - before they are needed - and holds the results in suspense until the predicted branches are confirmed.
But there's a law of diminishing returns for this technique because the chip must devote more and more of its circuitry to management of the complex processes.
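The branch prediction described above can be illustrated with the classic two-bit saturating counter - a textbook scheme, not Intel's actual (undisclosed) predictor logic. The counter tolerates one surprise before flipping its prediction, which suits loop branches well:

```python
class TwoBitPredictor:
    """Textbook 2-bit saturating-counter branch predictor.
    Illustrative only; real Pentium II prediction is more elaborate."""
    def __init__(self):
        self.state = 2  # 0-1: predict not taken, 2-3: predict taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        # Nudge the counter toward the observed outcome, saturating at 0/3.
        self.state = min(self.state + 1, 3) if taken else max(self.state - 1, 0)

# A loop branch taken nine times, then not taken on exit:
p = TwoBitPredictor()
hits = 0
for taken in [True] * 9 + [False]:
    hits += (p.predict() == taken)
    p.update(taken)
print(f"{hits}/10 correct")  # 9/10 -- only the loop exit is mispredicted
```

The speculative work done down a mispredicted path is what gets thrown away, which is why better predictors buy real performance.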
Now an old concept - the very long instruction word (VLIW) processor - is making a comeback, notably in the new 64-bit Merced chip, part of the Explicitly Parallel Instruction Computing (EPIC) family of processors being developed by Intel and Hewlett-Packard Co. VLIW counts on the compiler, and to some extent the programmer, to specify where parallel execution of code is possible, relieving the processor of that burden.
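The division of labor VLIW proposes can be sketched as a toy scheduler (a hypothetical illustration, not how the Intel/HP compiler actually works): the "compiler" packs instructions into fixed-width words, starting a new word whenever an instruction depends on a result produced earlier in the current one.

```python
# Toy sketch of compiler-side instruction bundling, the core VLIW idea:
# group instructions with no data dependences into one wide "word" so
# the hardware can issue them together without analyzing them itself.
def bundle(instrs, width=3):
    """instrs: list of (dest_register, set_of_source_registers)."""
    bundles, current, written = [], [], set()
    for dest, srcs in instrs:
        # Start a new word if this one is full, or if the instruction
        # reads or writes a register written earlier in the current word.
        if len(current) == width or (srcs | {dest}) & written:
            bundles.append(current)
            current, written = [], set()
        current.append(dest)
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

prog = [("a", {"x"}), ("b", {"y"}), ("c", {"a"}), ("d", {"b"})]
print(bundle(prog))  # [['a', 'b'], ['c', 'd']]
```

Here "a" and "b" are independent and share a word, while "c" and "d" must wait because they consume those results - exactly the analysis VLIW shifts from silicon to software.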
VLIW has some pitfalls, says Carnegie Mellon University's chip expert John Shen. "Merced is hoping that, by moving the work to the compiler, you can make your hardware very clean and fast," he says. But complexity in software traditionally has been harder to manage than complexity in hardware, he says, and it takes longer to develop new compilers than new microprocessors.
Intel senior vice president Albert Yu won't reveal how EPIC works, but he says labeling it a VLIW architecture is a "misinterpretation." But, he says, "We rely on the compiler to do a lot of stuff."

Bruce Shriver, co-author of a new electronic book, The Anatomy of a High-Performance Microprocessor, says improvements in hardware-based branch prediction algorithms will allow superscalar processors to execute a dozen or more instructions simultaneously, twice what is possible today. And he says compilers will be created that do a better job of optimizing code for more efficient execution.
© 2000 Cable News Network. All Rights Reserved.