
End of Moore's Law? Wrong question

Sun microprocessor guru David Yen says the processor industry has reached a fork in the road. Now, it must choose which direction to follow.

    Every once in a while, the Cassandras need to be taken seriously.

    Their job, of course, is to look for dark linings in every silver cloud. One of our industry's "doom and gloom" scenarios has Moore's Law--the central driver of IT progress for the past 30-plus years--running out of steam in the near future.

    One of Intel's co-founders, Gordon Moore, famously predicted that the density of transistors on a microprocessor would double roughly every 18 months. Higher density means more and more tiny circuits, crammed ever more tightly, which increases the speed and performance of the chip. We all know that because of Moore's Law, we experience greater levels of computing capacity at lower costs: The processing power of a common game console today, priced at $299, dwarfs the capacity of a $2,000 PC of five years ago.
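    That compounding is easy to underestimate. A minimal sketch (in Java, our choice of language; the 18-month doubling period and the five-year window come from the paragraph above) shows the arithmetic:

        // Moore's Law as compounding: density doubles every 18 months.
        public class MooresLaw {
            public static void main(String[] args) {
                double doublingPeriodYears = 1.5; // 18 months
                double years = 5.0;               // the five-year window cited above
                double growth = Math.pow(2, years / doublingPeriodYears);
                System.out.printf("Over %.0f years, density grows ~%.1fx%n", years, growth);
                // Prints ~10.1x -- roughly why a $299 console can dwarf a $2,000 PC.
            }
        }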

    The doomsayers are right to question the march of Moore's Law: Can it keep going, indefinitely?

    There are enough technical and physical obstacles to make the coming train wreck of Moore's Law appear to be an inevitable crash in slow motion. Circuit dimensions, now approaching 90 nanometers, cannot get much smaller without fundamental alterations in how we make semiconductors. We have already stretched the limits of photolithography, using "phase shifting" masks and more advanced photoresists (e.g., those designed for deep-ultraviolet light) to squeeze every bit out of the light spectrum.

    This basic technique of chipmaking is what permits the printing of circuits in silicon the way books or magazines are produced in bulk.

    Looking to the future, as circuit dimensions approach the physical limitation of the wavelength of light, the finer circuit dimensions Moore's Law assumes will challenge traditional photolithography. New methods of mass production will need to be pioneered, or else the circuits of tomorrow will be etched the way monks used to write Bibles--relatively slowly and one at a time.

    As interesting as it is to speculate about the end of Moore's Law, what if this is not even the right question to ask? What if, instead of Moore's Law marching down a dead end, chipmaking took a different path entirely? And what if, rather than manically focusing our energies on faster chips, we took a step back and looked at mechanisms to increase capacity and performance throughout the entire system?

    We are at this fork in the road today, and the path we choose will set off another wave of tech innovation.

    The approach to chipmaking today, most clearly in evidence at Moore's Intel, is to maximize the clock speed of the processor. But the traditional approach to driving increased speed from the processor reminds us of playing football in the days of leather helmets. "Three yards and a cloud of dust," as one coach put it, capturing the essence of running the ball over and over, hard, smack into the line. For a long time, this was the only way people knew how to play football: single wing formation, running the ball, pounding it up the middle. That reminds us of the "brute force" approach to computer processing: Keep running that chip smack into the beef of the line.

    While many coaches concentrated on ways to improve the running game, making incremental gains on the field, the innovators asked a key question: How can we score more points than the other team with the same 11 players on our side?

    They came up with new offensive formations and, with the introduction of the forward pass, football evolved from the leather helmet days to the modern game we know today. Teams now score with the run and the pass.

    In fact, a pass receiver on one play might block for the running back during the next play. The options available to score, still with 11 players a side, multiplied tenfold. Just when it seemed as if there was very little room to improve on the run-only offense, along came a better way to play football.

    The chip industry is in the middle of a similar massive shift in its playing field. Instead of focusing on squeezing more speed out of a single processor, which could be a dead end, the innovators are looking at a new class of processor, in which four to eight "cores" divide and conquer the load. This way, no one core has to operate at hyperspeed. All eight cores can run much slower. But by working together, the total "throughput" of the processor is increased.

    As with football, it's just a more efficient way to score points: eight cores on one chip working in parallel, together, rather than a single core running it hard up the middle.
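    To make the divide-and-conquer idea concrete, here is a minimal sketch (in Java; the eight-worker pool mirrors the eight cores described above, and the summing job is just a stand-in for real work):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        // Eight workers share one large job, so no single worker has to run
        // at "hyperspeed" -- total throughput comes from working in parallel.
        public class DivideAndConquer {
            public static void main(String[] args) throws Exception {
                int cores = 8;
                ExecutorService pool = Executors.newFixedThreadPool(cores);
                long n = 80_000_000L;
                long chunk = n / cores;

                List<Future<Long>> parts = new ArrayList<>();
                for (int i = 0; i < cores; i++) {
                    long start = i * chunk;
                    long end = (i == cores - 1) ? n : start + chunk;
                    parts.add(pool.submit(() -> {
                        long sum = 0;
                        for (long k = start; k < end; k++) sum += k; // each core takes a slice
                        return sum;
                    }));
                }

                long total = 0;
                for (Future<Long> f : parts) total += f.get(); // combine partial results
                pool.shutdown();
                System.out.println("total = " + total);
            }
        }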

    Multicore processors, already well understood in the lab, will deliver symmetric multiprocessing on a chip. While one core is processing a calculation, for example, another might be fetching data from memory or running operating system tasks, and so on. With multiple "threads" of information processing, the result will be a huge improvement in overall system performance (to say nothing of bypassing the end of Moore's Law). Moreover, with the future of computing dominated by distributed processing on heterogeneous networks, the applications users need will be highly suited to the multithreaded architecture of these new processors.
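    A sketch of that overlap (hypothetical names throughout; in a real system the "fetch" would be memory or I/O latency rather than a sleep):

        import java.util.concurrent.*;

        // One thread computes while another "fetches" the next batch of data,
        // so the two stages overlap instead of running back to back.
        public class OverlapSketch {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(2);

                Future<int[]> nextBatch = pool.submit(() -> fetchBatch()); // core 2: fetch
                long first = compute(new int[]{1, 2, 3});                  // core 1: compute

                long second = compute(nextBatch.get()); // data is ready when we need it
                pool.shutdown();
                System.out.println(first + " " + second);
            }

            static int[] fetchBatch() throws InterruptedException {
                Thread.sleep(50); // stand-in for memory or network latency
                return new int[]{4, 5, 6};
            }

            static long compute(int[] data) {
                long sum = 0;
                for (int v : data) sum += v;
                return sum;
            }
        }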

    The emerging world of network-delivered services is causing a very important alignment of hardware and software innovation. The future will be one in which all the computing capacity of the data center (servers, storage, access devices) is virtualized, with resources dynamically assigned and reassigned as necessary for various service delivery levels. Systems management will move up to a higher level of abstraction, directing traffic at the level of network services.

    What this means, in a practical sense, is that the days of a server dedicated uniquely to identity management or transactions will be over. That same server will not sit idle between peak loads, because its role in the network will be provisioned on the fly.
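    As a toy illustration of that on-the-fly provisioning (entirely hypothetical names, standing in for a real management layer, not any particular Sun product):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.Map;

        // Toy sketch: a shared pool of servers is assigned to whichever network
        // service is under load, instead of each server owning one fixed role.
        public class Provisioner {
            private final Deque<String> idle = new ArrayDeque<>();
            private final Map<String, String> assigned = new HashMap<>(); // server -> service

            public Provisioner(String... servers) {
                for (String s : servers) idle.push(s);
            }

            public String assign(String service) {
                String server = idle.poll();                       // take any idle server...
                if (server != null) assigned.put(server, service); // ...and give it this role
                return server;
            }

            public void release(String server) {
                assigned.remove(server); // role ends; the server returns to the pool
                idle.push(server);
            }

            public static void main(String[] args) {
                Provisioner pool = new Provisioner("server1", "server2");
                String s = pool.assign("identity");              // peak load on identity management
                pool.release(s);                                 // load subsides...
                System.out.println(pool.assign("transactions")); // ...same server now runs transactions
            }
        }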

    Parallel advances in hardware will optimize the resulting systems to take advantage of the highly threaded software requirements of network-delivered services. Costly, high-overhead systems will give way to smarter networks, in which the hardware and software deliver very specific network services with much greater efficiency and a dramatic decrease in idle capacity.

    As with football, it is not how hard you run that matters. It's about which team scores the most points.