
The factor factor, part 1

Ever wonder why new chip designs fail in the market, even though they offer real advantages? Or why others succeed in spite of serious disadvantages? It's apparently a secret. Part one of three.

Peter Glaskowsky
Peter N. Glaskowsky is a computer architect in Silicon Valley and a technology analyst for the Envisioneering Group. He has designed chip- and board-level products in the defense and computer industries, managed design teams, and served as editor in chief of the industry newsletter "Microprocessor Report." He is a member of the CNET Blog Network and is not an employee of CNET.

Listen carefully. I am about to reveal one of the great apparent secrets of the microprocessor industry. This secret largely determines whether new products succeed or fail.

I don't know why it seems to be a secret. It's simple enough. I figured it out early, in my first job in the industry, and I've seen it demonstrated over and over since then. I'm hardly the only one who knows this secret; I've seen dozens of talks that alluded to it, and a few that mentioned it specifically. I've talked about it myself in articles I wrote for Microprocessor Report and other publications.

Unfortunately, I've also seen hundreds of products brought to market in apparent ignorance of this simple rule, and they've all failed, wasting the billions of dollars invested in their development. Assuming the developers weren't throwing away their money on purpose, I conclude they must not have known the one basic fact that doomed their projects, which means it must be a secret.

The secret is...well, let's see if you can figure it out.

From RISC to fail
When I first got involved in the microprocessor business, it was as an engineer at Integrated Device Technology. IDT was developing new CPUs based on the MIPS architecture for use in Silicon Graphics workstations. SGI's Indy and O2 workstations used chips designed by Quantum Effect Devices and manufactured by IDT.

The QED/IDT R4600 Orion processor was part of an early-1990s effort to get MIPS-architecture microprocessors into the Windows PC market. (Photo: IDT)

This was all happening in the early 1990s, when Microsoft was developing Windows NT. Much of the development work for the initial release, Windows NT 3.1, was done on MIPS-based computer systems; Microsoft itself intended to support NT on x86 and MIPS architectures, and DEC was funding the development of NT for its Alpha line.

MIPS and Alpha were part of the RISC (reduced instruction-set computing) movement, which was winning mindshare in those days against CISC (complex instruction-set computing) designs such as x86 and DEC's VAX architecture. Decoding the x86 instruction set and managing its more complicated addressing modes could consume as much die area and power as executing the instructions.

A RISC processor could be designed more easily than an x86 chip, or with comparable design effort could be more sophisticated internally. The QED/IDT R4600 was roughly twice as fast as Intel's contemporary Pentium (133MHz vs. 66MHz) and cheaper to make.

To IDT and other RISC vendors, RISC-based Windows NT sounded like a license to print money. How could it fail? The hardware was intrinsically superior, and Microsoft itself was committed to solving all the software-related issues.

But it did fail. Some of the benefits of RISC technology were eventually incorporated into CISC designs, but only at the expense of even greater complexity. The RISC-based ARM architecture has achieved great success in small systems like cell phones, but RISC never caught on in PCs. Why not? That's the secret.

Mediocre media processors
While the RISC NT story was playing itself out, a new kind of processor burst onto the scene, backed by almost a billion dollars of investment money. The media processor was a variant of the popular digital signal processor (DSP), but optimized for the needs of audio and video processing.

The CPUs in personal computers of the day, whether CISC or RISC, weren't very good at this kind of work. There were two reasons for this disadvantage: the preferred data formats weren't a good fit for CPU registers and arithmetic units, and PC operating systems weren't designed for real-time processing.

Decoding MPEG video, for example, involves processing 8-bit data values with an additional sign bit--in effect, a 9-bit value. CPUs had to allocate 16 bits for each of these values, wasting almost half the capacity of the chip.

Media processors, such as the Mpact from Chromatic Research, could be designed with 9-bit registers and, even more importantly, could be designed to perform many 9-bit operations in parallel. The Mpact design featured 72-bit registers and execution units that could be configured to perform eight 9-bit operations from a single instruction--a technique called SIMD (Single Instruction, Multiple Data), also known as vector processing--making it much faster than the original Pentium at MPEG decoding.
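As a rough illustration of the SIMD idea (using today's x86 SSE2 intrinsics, not anything from the Mpact itself), the sketch below adds a signed correction term to eight video samples with a single instruction. Note that because general-purpose CPUs have no 9-bit lanes, each sample has to occupy a 16-bit lane--exactly the wasted capacity described above.

/* Minimal sketch: eight saturating signed additions per instruction.
 * Build with: cc -O2 -msse2 simd_sketch.c
 * This only illustrates the SIMD/vector concept; the lanes here are
 * 16 bits wide, not the 9-bit lanes a media processor could use. */
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Eight 9-bit-range samples, each stored in a 16-bit lane. */
    int16_t samples[8] = { 12, -7, 255, -255, 100, 0, -128, 64 };
    int16_t deltas[8]  = {  3,  3,   3,    3,   3, 3,    3,  3 };
    int16_t result[8];

    __m128i a = _mm_loadu_si128((const __m128i *)samples);
    __m128i b = _mm_loadu_si128((const __m128i *)deltas);

    /* One instruction performs eight saturating 16-bit additions. */
    __m128i sum = _mm_adds_epi16(a, b);

    _mm_storeu_si128((__m128i *)result, sum);

    for (int i = 0; i < 8; i++)
        printf("%d ", result[i]);
    printf("\n");
    return 0;
}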

It was obvious enough that vector extensions could be added to CPUs; Cray had done the same thing in the 1980s, Intel had its own SIMD processor (the i860, which was the original platform for Windows NT), and Sun Microsystems brought SIMD to its RISC-based UltraSPARC chip in 1995. But the benefits of native 9-bit processing were never realized in CPUs, which were too dependent on byte-oriented datapaths and memory.

On the software side, enhancing operating systems to support hard real-time deadlines for media processing was a big challenge. It was clearly easier to offload that work to a separate chip, the media processor, which didn't have to worry about running applications or responding to interrupts from I/O devices.

So, once again, it looked like an intrinsic factor-of-two advantage in the hardware plus significant software benefits would ensure the success of these new chips.

But they failed too. Chromatic, IBM, MicroUnity, Philips, Samsung, and other companies lost huge amounts of money because they didn't know the secret--or, at least, they didn't act accordingly.

Intelligent RAM--not smart enough
The same thing happened later in the 1990s when David Patterson, a professor at UC Berkeley, proposed integrating processors with memory to create IRAM (intelligent RAM) devices.

Patterson was one of the pioneers of RISC and a widely recognized expert in processor architecture. The IRAM project proved that this kind of integration offered substantial advantages--a factor of two to three reduction in memory-access delays and potentially even larger improvements in memory bandwidth. But ultimately, the IRAM concept went nowhere; today, processors still use small amounts of on-chip SRAM (usually as caches) and separate DRAMs.

In part 2 of this series, I'll illustrate the other side of this apparently secret rule: why some innovations succeed in spite of significant disadvantages. That should give it away. If not, I promise I'll reveal it myself in part 3.