Year in, year out, Intel executive Mike Mayberry hears the same doomsday prediction: Moore's Law is going to run out of steam. Sometimes he even hears it from his own co-workers.
But Moore's Law, named after Intel co-founder Gordon Moore, who 47 years ago predicted a steady, two-year cadence of chip improvements, keeps defying the pessimists because a brigade of materials scientists like Mayberry continue to find ways of stretching today's silicon transistor technology even as they dig into alternatives, such as super-thin sheets of carbon called graphene.
Oh, and don't forget the money that's driving that hunt for improvement. IDC predicts chip sales will rise from $315 billion this year to $380 billion in 2016. For decades, that revenue has successfully drawn semiconductor research out of academia, through factories, and into chips that have powered everything from a 1960s mainframe to a 2012 iPhone 5.
The result: Moore's Law has long passed being mere prognostication. It's the marching order for a vast, well-funded industry with a record of overcoming naysayers' doubts. Researchers keep finding ways to maintain a tradition that two generations ago would have been science fiction: That computers will continue to get smaller even as they get more powerful.
"If you're only using the same technology, then in principle you run into limits. The truth is we've been modifying the technology every five or seven years for 40 years, and there's no end in sight for being able to do that," said Mayberry, vice president of Intel's Technology and Manufacturing Group.
Plenty of other industries aren't as fortunate. You don't see commercial supersonic airplane travel, home fusion reactors, or 1,000-mile-per-gallon cars. But the computing industry has a fundamental flexibility that others lack: it's about bits, not atoms.
"Automobiles and planes are dealing with the physical world," such as the speed of sound and the size and mass of the humans they carry, said Sam Fuller, chief technology officer of Analog Devices, a chipmaker that's been in the electronics business even longer than Intel. "Computing and information processing doesn't have that limitation. There's no fundamental size or weight to bits. You don't necessarily have the same constraints you have in these other industries. There potentially is a way forward."
That means that even if chip components stop shrinking, there are other ways to boost computer performance.
Before we get too carried away with lauding Moore's Law, be forewarned: Even industry optimists, Moore included, think that about a decade from now there could be trouble. Yes, all good things come to an end, and at some point those physical limits people have been predicting will turn out to be real.
To understand those limits and how they may be overcome, I talked to researchers at the big chip companies, academics, and industry gurus. I wanted to go beyond what most of us think we know about semiconductors and hear it from the experts. Do they have doubts? What are they doing about those doubts? The overwhelming consensus among the chip cognoscenti, I found, was, yes, there's a stumbling block a decade or so from now. But don't be surprised if we look back at that prediction 20 years from now and laugh.
Moore's Law is named after Gordon Moore, who in a 1965 paper in Electronics Magazine observed an annual doubling in the number of chip elements called transistors. He refined his view in 1975 with a two-year cycle in an updated paper. "I didn't think it would be especially accurate," Moore said in 2005, but it has in fact proved to be. And now, Intel times its tick-tock clock to Moore's Law, updating its chip architecture and its manufacturing technology on alternating years.
Here's a very specific illustration of what Moore's Law has meant. The first transistor, made in 1947 at Bell Labs, was assembled by hand. In 1964, there were about 30 transistors on a chip measuring about 4 square millimeters. Intel's "Ivy Bridge" quad-core chips, the third-generation Core i7 found in the newest Mac and Windows PCs, have 1.4 billion transistors on a surface area of 160 square millimeters -- and there are chips with even more.
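As a back-of-the-envelope check, treating Moore's two-year doubling as exact (which it never quite was) and compounding the 1964 figure forward lands within the right order of magnitude:

```python
transistors_1964 = 30
doublings = (2012 - 1964) // 2            # 24 two-year doubling periods
projected = transistors_1964 * 2 ** doublings
print(f"{projected:,}")                    # 503,316,480
```

Roughly half a billion -- the same ballpark as Ivy Bridge's 1.4 billion, which is remarkable agreement for a 48-year extrapolation of a rule of thumb.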
A transistor is the electrical switch at the heart of a microprocessor, similar to a wall switch that governs whether electric current will flow to light a lamp. A transistor element called a gate controls whether electrons can flow across the transistor from its "source" side to its "drain" side. Flowing electrons can be taken logically as a "1," but if they don't flow the transistor reads "0." Millions of transistors connected together on a modern chip process information by influencing each other's electrical state.
In today's chips, a stretch of silicon connects the source to the drain. Silicon is a type of material known as a "semiconductor" because, depending on conditions, it'll either act as a conductor that transmits electrons or as an insulator that blocks them. Applying a little electrical voltage to the transistor's gate controls whether that electron current flows.
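The switch analogy can be sketched in a few lines of Python. This is a logic-level cartoon, not an electrical model, and the `transistor` and `nand` names are purely illustrative:

```python
def transistor(gate, source=1):
    # Idealized n-type switch: current flows from source to drain
    # only when voltage is applied to the gate.
    return source if gate else 0

def nand(a, b):
    # Two such switches in series form the pull-down path of a NAND gate:
    # the output is pulled low (0) only when both gates are on.
    pulled_down = transistor(a) and transistor(b)
    return 0 if pulled_down else 1

print(nand(1, 1), nand(1, 0), nand(0, 0))  # 0 1 1
```

Since NAND is functionally complete, wiring enough of these switches together -- billions, on a modern chip -- is all it takes, in principle, to build any logic a processor needs.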
To keep up with Moore's Law, engineers must keep shrinking the size of transistors. Intel, the leader in the race, currently uses a manufacturing process with 22-nanometer features. That's 22 billionths of a meter, or roughly a 4,000th the width of a human hair. For contrast, Intel's first chip, the 4004 from 1971, was built with a 10-micron (10,000-nanometer) process. That's about a tenth the width of a human hair.
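Those scale comparisons are easy to verify with a little arithmetic, using the article's rough 100-micron figure for a human hair:

```python
hair_nm = 100_000        # ~100 microns, the article's figure for a human hair
modern_nm = 22           # Intel's current 22nm process
intel_4004_nm = 10_000   # the 1971 4004's 10-micron process

print(hair_nm // modern_nm)      # 4545 -- roughly "a 4,000th" of a hair
print(hair_nm // intel_4004_nm)  # 10   -- about a tenth of a hair
```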
Intel's Ivy Bridge generation of processors is an example of how hard it can be to sustain that process.
To make the leap from the earlier 32nm process to today's 22nm process, Intel had to rework the basic "planar" transistor structure. Previously, the electrons traveled through a silicon channel laid flat in the plane of the silicon wafer, with the gate perched on top. To work around the limits of that approach, Intel flipped the planar transistor's silicon on its side into a fin that juts up out of the plane of the chip. The gate straddles this fin the way a person might straddle a low fence with both legs. To improve performance, Intel can put as many as three of these fins in a single transistor.
The result is a "tri-gate" chip design that shrinks without suffering debilitating new levels of "leakage," which takes place when current flows even when a transistor is switched off. And it means Intel has one more "shrink" of the chip manufacturing process under its belt.
Developing the tri-gate transistors wasn't easy: Intel researchers built the company's first finned transistor in 2002, nine years before it was ready for mass-market production. And it wasn't the only challenge; other fixes include making gates out of metal, connecting transistors with copper rather than aluminum wires, and using "strained" rather than ordinary silicon for the channel between source and drain.
In 2013, Intel plans another shrink to a 14nm process. Then comes 10nm, 7nm, and, in 2019, 5nm.
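Those node names on the roadmap aren't arbitrary: each step is roughly a 0.7x linear shrink, which halves the area per transistor and is what keeps the two-year doubling on track. A quick sketch of the arithmetic, using the roadmap numbers above:

```python
nodes_nm = [22, 14, 10, 7, 5]
for prev, nxt in zip(nodes_nm, nodes_nm[1:]):
    linear = nxt / prev
    # Area scales as the square of the linear dimension.
    print(f"{prev}nm -> {nxt}nm: {linear:.2f}x linear, {linear**2:.2f}x area")
```

Every generation comes out near 0.7x linear and 0.5x area -- i.e., about twice as many transistors in the same space.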
And it's not just Intel making up these numbers. In the chip business, a fleet of companies depend on coordinated effort to make sure Moore's Law stays intact. Combining academic research results with internal development and cross-industry cooperation, they grapple with quantum-mechanics problems such as electron tunneling and current leakage -- a bugaboo of incredibly tiny components in which a transistor sucks power even when it's switched off.
Doom and gloom
Given the engineering challenges, a little pessimism hardly seems out of place.
A 2005 Slate article bore the title, "The End of Moore's Law." In 1997, the New York Times declared, "Incredible Shrinking Transistor Nears Its Ultimate Limit: The Laws of Physics," and in another piece quoted SanDisk's CEO forecasting a "brick wall" in 2014. In 2009, IBM Fellow Carl Anderson predicted continuing exponential growth only for a generation or two of new manufacturing techniques, and then only for high-end chips.
Even Intel has fretted about the end, predicting trouble getting past 16nm processes.
In decades past, Moore himself was worried about how to manufacture chips with features measuring 1 micron, then later chips with features measuring 0.25 microns, or 250 nanometers. A human hair is about 100 microns wide.
Yes, there are fundamental limits -- for example, quantum mechanics describes a phenomenon called tunneling where the position of an electron can't be pinned down too precisely. From a chip design point of view, that turns out to mean that an electron can essentially hop from source to drain, degrading a chip with leakage current.
So is there an end to Moore's Law? In a 2007 interview, Moore himself said, "There is." He continued:
Any physical quantity that's growing exponentially predicts a disaster. It comes to some kind of an end. You can't go beyond certain major limits... But it's been amazing to me how technologists have been able to keep pushing those out ahead of us. For about as long as I remember, the fundamental limits were about two or three generations out. So far we've been able to get around them. But I think another decade, a decade and a half, or something, we'll hit something that is fairly fundamental.
That was five years ago, and few seem to want to venture too much farther beyond Moore's prediction.
"I think we have at least a decade before we start getting into issues," said Patrick Moorhead, analyst at Moor Insights & Strategy. "I still give it another decade," added Robert Mears, founder and president of Mears Technologies, which has developed a technology called MST CMOS designed to improve the performance of the conventional silicon channel.
Although Moore's Law might not continue if transistors can't be shrunk, the post-silicon future shouldn't be overlooked. When traditional silicon transistors eventually run out of gas, there are plenty of alternatives waiting in the wings.
"The most probable outcome is that silicon technology will find a way to keep scaling, some way continue to deliver more value with succeeding generations," said Nvidia Chief Scientist Bill Dally.
One likely candidate keeps the same basic structure as today's transistors but speeds them up by breaking out of today's constraints in the periodic table of the elements. In transistors now, the source, drain, and channel are made from silicon, which inhabits a column of the periodic table called group IV.
But it's possible to use indium arsenide, gallium arsenide, gallium nitride or other so-called III-V materials from group III and group V. Being from different groups on the periodic table means transistor materials would have different properties, and the big one here is better electron mobility. That means electrons move faster and transistors therefore can work faster.
"You can imagine staying with fairly traditional transistors, moving to silicon-germanium, then III-V structures," Fuller said. But that's mostly a stopgap. "There is some potential future in that, but it pretty quickly runs into similar limits that hit silicon. There may be [performance improvement] factors of two, four, maybe eight to be gained."
Another tweak could replace the silicon channel with "nanowires," super-thin wires made of various semiconductor materials (including, it so happens, lowly silicon itself). More exotic and more challenging is the possibility of using carbon nanotubes instead. These are made of a cylindrical mesh of interlinked carbon atoms that can carry current, but there are lots of difficulties: connecting them to the rest of the transistor, improving their not-so-hot semiconductor properties, and ensuring the nanotubes are sized and aligned correctly.
Which brings us to one of the most promising post-silicon candidates: graphene, a flat honeycomb lattice of carbon that resembles atomic chicken wire. If you roll up a sheet of graphene, you get a nanotube, but it turns out the flat form also can be used as a semiconductor.
One advantage graphene holds over carbon nanotubes is the possibility that it can be manufactured directly as a step in the wafer processing that goes on in chip factories, instead of being fabricated separately and added later. (This is a very big deal in the intricate and minutely choreographed business of chip manufacture.) Another is that it's got fantastically high electron mobility, which could make for very fast switching speeds if graphene is used to connect source and drain in a transistor.
"I think graphene is very promising," Fuller said.
But graphene has plenty of challenges. First on the list: it lacks a good "band gap," a separation in energy levels that determines whether a semiconductor conducts electrons or insulates. Graphene by itself has a band gap of zero, meaning that it simply conducts electricity and fails as a semiconductor.
"Graphene has some very nice properties, but as it stands at the moment, it doesn't have a proper band gap," Mears said. "It's not really a replacement for silicon or other semiconductor materials. It's a good connection medium, a conductor, but not necessarily a good switch at the moment."
Here's how Fuller describes an ideal transistor: "When you turn on, it comes on strong, and when you turn it off, it consumes almost no power. That's what you want for a great logic gate." The problem so far, though, is that "the graphene transistors today have been hard to turn off."
But there are ways to give the material a band gap, including using two separated strips of graphene fabricated as "nanoribbons." Varying the placement of the transistor gate or gates also can help. If scientists work out the challenges, the result could be a transistor that's not necessarily smaller, but that is a lot faster.
"We're in the early days of exploring the use of graphene, like we were with silicon a long time ago -- in the 1950s, maybe," Fuller said.
But wait, there's more
Another radical approach is called spintronics, which relies on information being transmitted within a chip using a property of electrons called spin.
"If you could use spin to store a 1 or a 0, rather than charge or absence of charge, it doesn't have the same thermodynamic limits that moving charge around does," Fuller said. "You probably wouldn't run into the same power limits."
Silicon photonics, in which light rather than electrons carries information, could also play a role in future chips.
"That can be a great partial solution between chips, or even on chips," Fuller said. Today, a large fraction of a chip's power is used to keep its components marching in lockstep by broadcasting the ticks of the chip's clock, but there are promising research projects to do that with optical links instead.
There are limits to how small optical links can get, said Mears, who, by the way, invented the erbium-doped fiber amplifier (EDFA) technology that vastly improved fiber-optic network capacity. The problem: the wavelength of light is inconveniently large compared to chip components, he said.
"In spite of it having been one of my main research subjects, I'm not a great fan of optics on a chip," Mears said. "Any kind of optical waveguide on a chip will look huge compared to the kinds of devices you can put on a chip."
Fuller concurred. "What makes it great for communicating over long distances makes it difficult to make a logic gate out of them: photons don't interact with each other. If you want to build a NOR gate or NAND gate [two forms of basic logic gates out of which chips are assembled], you need to switch from photons to electrons for the gate, then back to photons to transmit the data," he said.
Mayberry is keeping an eye on spintronics, but as with many technologies he's cautious. "A spin wave travels at a slower rate than an electron wave," he notes. There are also numerous manufacturing challenges.
Beyond that, there's a wide range of even more exotic research under way -- quantum computing, DNA computing, spin wave devices, excitonic field-effect transistors, spin torque majority gates, bilayer pseudospin field-effect transistors, and more. An industry consortium called the "Nanoelectronics Research Initiative" is monitoring the ideas.
"There are something like 18 different candidates they're keeping track of. There's no clear winner, but there are emerging distinctive trends that will help guide future research," Mayberry said.
It's certainly possible that computing progress could slow or fizzle. But before getting panicky about it, look at the size of the chip business, its importance to the global economy, the depth of the research pipeline, and the industry's continued ability to deliver the goods.
"There's an enormous amount of capital that's highly motivated to make sure this continues," said Nvidia's Dally. "The good news is we're pretty clever, so we'll come through for them."