
What would happen if Moore's Law did fizzle?

An end to the guiding principle of chip development would come with a whimper, not a bang. That would give us time to prepare -- and to make improvements in other areas.

Stephen Shankland
With Intel's "tick-tock" approach, the company moves to a new chip manufacturing technology every two years -- the cadence Moore's Law dictates. But what if the pendulum stopped swinging? Intel

First of all, don't panic.

If Moore's Law came to an end and computers stopped getting steadily faster, plenty of companies would suffer. But an end likely would come with lots of warning, lots of measures to cushion the blow, and lots of continued development even if transistors stopped shrinking.

The hardest hit would be companies that depend on consumers replacing their electronics every few years, and tech companies such as Google, whose long-term plans hinge on faster computers, cheaper storage, and better bandwidth. And the continuing miniaturization of computers -- mainframes to minicomputers to PCs to smartphones -- might not make the leap to even smaller devices such as tiny networked sensors.

For the rest of us, there would be ripple effects. Corporate productivity gains might slow as the spread of computerization into new domains stops. You might not get your Dick Tracy watch, your mom might not get her cancer-attacking nanobots, and impoverished children might never get that supercheap mobile phone.

But even if chip progress stopped, that wouldn't mean computing progress would screech to a halt. Instead, attention would focus on new ways of getting more work out of existing computing technology.

There are several prominent examples of what happens when explosions of innovation settle down. Perhaps the best is the auto industry.

There, an early flurry of activity and experimentation eventually stabilized. Occasional ideas such as rotary engines, automatic transmissions, and fuel injection cropped up, but many of the basics remain unchanged. Even today's dramatic technology departures -- electric and self-driving vehicles -- reuse many of the same mechanical workings.

"I drive a 1964 car. I also have a 2010. There's not that much difference -- gross performance indicators like top speed and miles per gallon aren't that different. It's safer, and there are a lot of creature comforts in the interior," said Nvidia Chief Scientist Bill Dally. If Moore's Law fizzles, "We'll start to look like the auto industry."

That's not to say nothing would change -- the aforementioned electric vehicles and robot cars are now becoming reality, for example. But it would mean a more sedate pace of innovation. Technophiles could lose that sense of perpetual excitement, even as everybody got a chance to figure out how to use their electronic gizmos before they became obsolete.

"Moore's Law means that every two years you're throwing away your laptop to get a better laptop and throwing away your smartphone to get a better smartphone," said William Tunstall-Pedoe, an artificial-intelligence researcher and founder of semantic search company Evi. "If your smartphone ended up being good for another 10 years, or your laptop wouldn't be replaced for another 10 years, the amount spent on hardware and on new phones would be dramatically less."

That of course would be disastrous for electronics makers and their suppliers, who'd have to get accustomed to lower revenues and therefore lower investments in future technology. But hardware isn't the only factor in computing technology.

For related coverage, see why Moore's Law is the rule that really matters in tech and a Q&A with Intel's Mike Mayberry.

Software picks up the slack
Programmers would be first in line to pick up where hardware improvements left off.

"If Moore's Law were to come to an end tomorrow, you'd still see performance improvements, but that would come from improvements to the software," Tunstall-Pedoe said. "There would be less resources spent on features and more on squeezing extra performance and capabilities."

Jon Bennett, chief technology officer of flash-storage company Violin Memory, agrees that a lot of performance is squandered today. His company helps customers open up bottlenecks in their software that become evident with today's storage speeds, he said.

"Even if [chipmakers] start to slow down, we have plenty of time catching up to what we can consume today," he said.

"You could see a 10-year software wave when that becomes the best way to get the economics moving," said Kevin Brown, chief executive of Coraid, another storage system maker. "Right now, doing just the performance work just in software is somewhat wasteful because it's pretty easy to ride that curve," where hardware improvements deliver the new computing speed.

Here's one example of how the software industry has worked: the addition of new layers of abstraction that make life easier for programmers.

The earliest computers were programmed at a very low level -- for example, instructions telling the chip to put a particular number in a storage register, to add the value of another register to it, and to compare the result with the contents of a third. Higher-level languages like C came along that were much easier for humans to understand but that had to be compiled into native instructions for the chip.

Then even higher-level programming technologies arrived that run programs not directly on the hardware but inside software simulations of it called virtual machines. That means people writing Java, C#, or JavaScript programs don't have to worry about what chip is underneath. Each new level of abstraction gave programmers new power and made software easier to create, but it also meant the computer spent more of its energy accommodating humans rather than getting work done.
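To make that trade-off concrete, here is a minimal sketch -- my own illustration, not an example from the article -- of the same task written at two levels of abstraction in Python:

```python
# The same task -- summing a list of numbers -- at two levels of abstraction.

numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# "Lower level": spell out each step, the way early programmers wrote
# explicit load/add/compare instructions for the chip.
total = 0
for n in numbers:
    total = total + n   # accumulate one value at a time

# "Higher level": one call that hides the loop entirely. The interpreter
# and virtual machine do the bookkeeping -- convenient for the human,
# but the computer spends extra cycles on the abstraction.
assert total == sum(numbers)
print(total)
```

The one-liner is easier to write and read; the machinery underneath absorbs the bookkeeping, which is exactly the convenience-for-cycles trade described above.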

Software wouldn't be the only vein to mine for speed boosts. Chips can be designed more cleverly, too -- for example, by sacrificing backward compatibility with existing software in exchange for a fresh-start design.

"There's plenty of room left in architectural innovation," said Bob Doud, director of marketing at chip designer Tilera.

Another refinement: multidie packaging, in which several chips are sandwiched atop one another, perhaps linking a processor on one layer with memory on another. High-speed links called through-silicon vias, or TSVs, connect the layers.

The processor power panic
We've already tangled with the end of Moore's Law in one sense. Last decade, the processor industry ran into a wall: excess power consumption.

A National Academy of Sciences report shows how processor frequencies, measured here in megahertz, aren't increasing at the pace they had for years earlier. National Academy of Sciences

Intel's NetBurst chip architecture was supposed to carry its Pentium processors to 4GHz, but instead it carried them to inordinately high electrical power usage. That's crippling in a computer: it leads directly to overheating that can crash or even damage the machine. And nowadays, with laptops reigning supreme, it means batteries don't last long.

The result has been an industry focus not just on transistor counts, but on performance per watt of power used. In the good old days, processors ran faster with each manufacturing shrink, but that's not the case anymore.

"Since six years ago or so, the clock rates of microprocessors have not increased much above several gigahertz, and the power has not gone much above 100 watts," said Sam Fuller, CTO of Analog Devices.

The clock in a 2.5GHz Intel Core processor ticks 2.5 billion times each second, fetching new instructions and executing them step by step with each tick. A hundred watts is enough to power a bright incandescent lightbulb -- and that, until a few years ago, was plenty to power a chip.
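Putting those two numbers together -- a back-of-the-envelope calculation, not one from the article -- shows how little time and how much energy each tick gets:

```latex
% Rough arithmetic, assuming a 2.5 GHz clock and a 100 W power budget:
\[
\text{time per tick} = \frac{1}{2.5\times 10^{9}\ \text{Hz}} = 0.4\ \text{ns},
\qquad
\text{energy per tick} \approx \frac{100\ \text{J/s}}{2.5\times 10^{9}\ \text{ticks/s}} = 40\ \text{nJ}
\]
```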

The party ended with the end of a phenomenon called Dennard scaling. It's named after IBM researcher Robert Dennard, who in 1974 observed that the increase in the number of transistors enabled by each new manufacturing generation was counterbalanced exactly by reductions in each transistor's power usage.

"It went on for more than three decades. It was really great. You shrank the size of the circuits, scaled down the voltage, and adjusted the doping," which means adding carefully chosen chemical extras to the chip's silicon substrate, Fuller said. "What you got with each generation was twice the transistors and an increase in speed and performance, with no increase in power consumed and no increase in cost."

With the end of Dennard scaling, processors have been getting more transistors, but typically not faster ones. Instead, chips have pushed in the multicore direction. Where there once was a single processing engine, dual-core chips share the work between two engines on a single slice of silicon. Mainstream personal computer chips now are quad-core models, and server chips have eight cores.

The plight of parallelism
Multicore systems can juggle multiple tasks better, and many computing chores such as displaying high-resolution graphics or encoding video get faster on multicore machines. Unfortunately, though, many tasks don't.
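A standard way to quantify why some tasks stop speeding up is Amdahl's law -- a classic result the article doesn't cite by name: if only a fraction p of a job can be parallelized across N cores, extra cores quickly run out of useful work.

```latex
% Amdahl's law: with fraction p of the work parallelizable across N cores,
\[
S(N) = \frac{1}{(1 - p) + \dfrac{p}{N}},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{1 - p}
\]
% Example: p = 0.9 and N = 8 gives S \approx 4.7 -- well short of 8x --
% and no number of cores can push the speedup past 10x.
```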

One persistent computer industry challenge is parallel programming -- the creation of software split into multiple pieces that execute simultaneously. It's a thorny problem. People naturally think of algorithms as a single thread of instructions. And parallel programming gets profoundly complicated when it's time to manage how different threads try to change the same data at the same time. Or when one thread stalls because it has to wait for another to finish. Or, even worse, when two threads deadlock because each is waiting for the other.
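Here is a minimal sketch of that last hazard, written in Python purely for illustration (it is not code from any company in the article): two threads each grab one lock and then wait for the other's, producing a circular wait. Timeouts keep the demo from hanging forever.

```python
# Illustrative deadlock sketch: each thread holds one lock and waits for
# the other's, so neither can proceed until the timeout gives up.
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)  # give the other thread time to grab its first lock
        if second.acquire(timeout=1.0):
            print(f"{name}: got both locks")
            second.release()
        else:
            print(f"{name}: deadlocked -- gave up waiting for the second lock")

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "thread-1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "thread-2"))
t1.start(); t2.start()
t1.join(); t2.join()
# Acquiring the locks in the same global order in both threads removes the
# circular wait -- one of the disciplines parallel programmers must learn.
```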

Tilera has aggressively embraced the multicore philosophy by designing chips now used for network gear, media processing, and cloud computing. Doud thinks software developers have to wake up and smell the multicore coffee.

"It's virtually impossible to buy anything with fewer than two cores these days. Multicore is here to stay," Doud said. Programmers who can't handle multicore have stale skills. "That might have played in 2005, but now anybody who's not on board is going to be a dinosaur," he said.

Tilera's chip architecture is geared for what the company believes is the future: many cores operating in parallel. It's a design that requires a new approach to software when it comes to mainstream computing. Tilera

Moving to parallel programming is tough, despite the apparent mutability of software.

"Anything that requires a software change is always harder than a hardware change," said Patrick Moorhead, analyst at Moor Insights & Strategy.

Chipmakers are working to make parallel programming less painful, he added.

"I actually think both Intel and Nvidia are preparing for that already," with huge numbers of employees focused on software, Moorhead said. Some of that work involves programming tools that hide the complexities of parallel programming. "You see a lot more resources going in to keep the utility curve of what you can do with the silicon moving up to the right."

Lean on the cloud
But a 1,000-core processor in a smartphone? It's not going to happen, even in Doud's view. Instead, the sensible approach is to offload work to servers in the cloud, the way Apple uses servers to handle Siri voice commands, he said.

The utility of the cloud will improve as networks get faster and more ubiquitous. And higher-end Internet companies have already figured out how to build massive data centers: They've partly cracked the nut of parallel programming.

The upshot is people won't focus on chip transistor density, because the cloud will offer a more relevant measurement: "It's compute power per dollar," said Coraid's Brown.

A lot of companies haven't matched the state of the art, though, he added. If the steady hardware progress embodied by Moore's Law slowed down, ordinary companies would rush toward the computing efficiencies that only elite companies have achieved so far, Brown said.

"Many IT companies have no idea. They have nothing that looks like Amazon and Google. There's a lot of change left to happen there," Brown said. "One way or another evolution will drive the cost down."

Moore's Law won't be easy to maintain, but a persistent optimism pervades the industry that computing hardware will steadily improve, even after today's silicon transistor technology meets its limits.

"I'm going to bet," Brown said, "on human ingenuity."

Processor frequency increases may have stalled, but the number of transistors continues to increase, a National Academy of Sciences report showed. The transistors are used now to build multicore chips with parallel processing engines. Though relative performance isn't increasing as fast, power consumption is holding level. National Academy of Sciences