In the first part of this series, I claimed that a great secret in the microprocessor industry largely determines whether new products succeed or fail.
I noted that this secret shouldn't be a secret at all, because many people (including me) have talked about it over the years. Yet clearly a lot of people remain in the dark: they continually disregard it and develop products that are doomed.
I gave several examples of products that failed because their creators didn't know the great secret. Those products included RISC processors, media processors, and intelligent RAM chips, which integrated processor cores with memory to eliminate one of the great bottlenecks in computer performance.
During my eight years at Microprocessor Report, I covered the markets for media processors, 3D-graphics chips, network processors, and what I dubbed extreme processors--chips with large numbers of simple cores running in parallel. Many of these chips were cheaper, easier to design, and twice as fast as competing products--and still failed.
However, some did succeed. The critical factor that made the difference in most of these cases is the essence of the so-called secret.
One of those successes is the graphics processing unit, or GPU.
I was reminded again of the secret at Nvidia's recent GPU Technology Conference, where many of the talks dealt with GPU computing.
(Disclosure: I recently wrote a technical white paper for Nvidia.)
Although the field of GPU computing dates back only five or six years, GPUs have already earned a place alongside CPUs. Each is clearly superior for certain kinds of applications.
This is true even though GPUs aren't nearly as easy to program as CPUs. Like other forms of parallel programming, GPU programming requires new hardware (the GPU itself), significant new extensions to programming languages, and a different mindset for programmers--one that simply wasn't part of the standard computer-science curriculum for most of the last 50 years.
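To give a flavor of those language extensions and that mindset, here is a minimal sketch of GPU code in CUDA C--my own illustration, not an example from the article or the conference. The kernel name vecAdd and the launch parameters are hypothetical; the point is that the loop a CPU programmer would write is replaced by a grid of thousands of threads, each handling one array element.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// GPU kernel: each thread computes one element of c = a + b.
// There is no loop over i; the parallel hardware supplies the iteration.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's index
    if (i < n)                                      // guard against overrun
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Host (CPU) arrays.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device (GPU) arrays: the "new hardware" has its own memory.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    // The <<<blocks, threads>>> syntax is CUDA's extension to C.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);          // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

That triple-angle-bracket launch syntax is exactly the kind of language extension mentioned above: ordinary C, plus additions that describe how the work maps onto thousands of parallel threads--a way of thinking that traditional sequential programming never demanded.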