Intel co-founder Gordon Moore chuckled at those who, in decades past, predicted the imminent demise of Moore's Law. This is the dictum that resulted from his observation in 1965 that transistor density doubles every 18 months, a pattern that has held true to this day.
But the traditional semiconductor chip is finally approaching some fundamental physical limits. Dr. Moore recently admitted that his law, as we know it, will run out of gas in 2017. Intel's 0.045-micron (45-nanometer) process is expected to arrive in 2007, with a gate oxide only three atoms thick. It is hard to imagine many more doublings from there, even with further innovation in insulating materials.
Another factor is the escalating cost of a semiconductor fab plant, which doubles every three years--a phenomenon dubbed Moore's Second Law. Human ingenuity keeps shrinking the CMOS transistor, but the manufacturing facilities grow ever more expensive, currently about $3 billion per fab.
Any one technology, like the CMOS transistor, follows an elongated S-shaped curve of progress over time. But a more generalized capability--like computation, storage or bandwidth--tends to follow a pure exponential curve, bridging a succession of technologies and their cascade of S-curves.
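The relationship between individual S-curves and the smooth exponential can be sketched numerically. This is a toy model with hypothetical parameters, not data from the article: each logistic curve below saturates, but the envelope across successive "paradigms" keeps growing tenfold per generation.

```python
import math

def logistic(t, t0, cap, rate=1.0):
    """One technology's S-curve: capability that saturates at cap."""
    return cap / (1 + math.exp(-rate * (t - t0)))

def envelope(t, generations=5):
    """Best available capability across a cascade of paradigms.
    Each hypothetical successor arrives 10 time units later and
    saturates at 10x the previous ceiling."""
    return max(logistic(t, t0=10 * g, cap=10 ** (g + 1))
               for g in range(generations))

# Each individual curve flattens out, yet the envelope grows about
# 10x per generation -- a pure exponential bridging the S-curves.
samples = [envelope(t) for t in range(0, 50, 10)]
```

The point of the sketch is structural: no single curve is exponential, but the hand-off from one saturating technology to the next produces one.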
If history is any guide, Moore's Law will transcend CMOS silicon and jump to a different substrate. It has done so five times in the past. In his forthcoming book, "The Singularity Is Near," Ray Kurzweil traces the historical exponential capability curves for a variety of technologies.
The exponential curve of computational power extends smoothly back in time to 1890, long before the invention of the semiconductor. Through five paradigm shifts--electromechanical calculators, relays, vacuum tubes, discrete transistors and integrated circuits--the processing power that $1,000 buys has doubled, on average, every two years. For the past 30 years, it has been doubling every year.
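The arithmetic behind that last figure is worth making concrete: at one doubling per year, 30 years of Moore's Law compounds to roughly a billionfold gain in what a fixed budget buys.

```python
# Compounding sketch: one doubling per year, sustained for 30 years.
doublings = 30
factor = 2 ** doublings
print(f"{factor:,}x")  # 1,073,741,824x -- about a billionfold
```

A dollar figure held constant while capability doubles annually is the "cost side" framing the next paragraph picks up.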
Draper Fisher Jurvetson has been investing in a variety of companies like BinOptics, Coatue, Cognigine, FlexICs and Nantero, which are working on the next paradigm shift to extend Moore's Law beyond 2017. One near-term extension to Moore's Law focuses on the cost side of the equation. Imagine rolls of wallpaper embedded with inexpensive transistors.
FlexICs deposits traditional transistors at room temperature on plastic, a much cheaper process than growing and cutting crystalline silicon ingots.
Another strong contender for the post-silicon computation paradigm is molecular electronics, a nanoscale alternative to the CMOS transistor.
Initially, these molecular switches will substitute for the transistor bottleneck in an otherwise standard silicon chip. Eventually, they will revolutionize computation by scaling into the third dimension--overcoming the planar-deposition limitations of CMOS.
For example, Nantero is growing carbon nanotubes on silicon to create high-density nonvolatile memory chips. Carbon nanotubes are small (10 atoms wide), stronger than diamond, and perform the functions of wires and transistors with better speed, power, density and cost. Cheap nonvolatile memory enables important advances, like "instant-on" PCs.
Other companies, like ZettaCore and Hewlett-Packard, are combining organic chemistry with a silicon substrate to create memory elements that self-assemble via chemical bonds along pre-patterned regions of exposed metal. Dr. Angela Belcher at UT Austin has used directed evolution to select virus strains with a binding affinity for specific inorganic surfaces, employing them to print gold interconnects on silicon.
To transcend Moore's Law, we will need more than a faster, cheaper transistor. Unlike memory chips, which have a regular array of elements, processors and logic chips are limited by the rat's nest of wires that spans the chip on multiple layers. The bottleneck in logic chip design is not the raw number of transistors, but a design approach that can use all of that capability in a timely fashion. For a solution, Cognigine has redesigned "systems on silicon" with a distributed-computing bent: wiring bottlenecks are localized, and chip designers become more productive by using a high-level programming language instead of wiring diagrams and logic gates. Chip design benefits from the abstraction hierarchy of computer science.
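The productivity gain from that abstraction hierarchy can be illustrated in ordinary software terms. This is a hypothetical sketch, not Cognigine's actual tools: the same 4-bit addition written as explicitly wired gates versus a single high-level expression.

```python
def full_adder(a, b, carry_in):
    """Gate-level view: one bit of addition built from XOR/AND/OR gates."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_4bit(x, y):
    """Ripple-carry adder: wiring four full adders together by hand."""
    carry, bits = 0, []
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        bits.append(bit)
    return sum(b << i for i, b in enumerate(bits)) + (carry << 4)

# High-level view: the designer writes the intent, not the wiring.
assert add_4bit(9, 7) == 9 + 7
```

Every level of the hierarchy hides a layer of wiring like this; raising the level at which designers work is how the transistor budget gets spent productively.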
Compared with the relentless march of Moore's Law, the cognitive capability of humans is relatively fixed. We have relied on the compounding power of our tools to achieve exponential progress. To take advantage of accelerating hardware power, we must further develop layers of abstraction in software to manage the underlying complexity.
For the next 1,000-fold improvement in computing, the imperative will shift to the growth of distributed complex systems. Our inspiration will likely come from biology.