These days, most new PCs have dual-core central processing units (CPUs). That's one chip with two complete microprocessors on it, both sharing a single path to memory and peripherals.
If you have a high-end gaming PC or a workstation, you might have one or two processor chips with four cores each. An eight-core PC is a very powerful machine: in real terms, up to eight times faster than the best desktop PCs you could buy in 2004. For many years, PC performance doubled roughly every 18 months; multicore technology has produced annual doubling for three years now.
But that's not really so impressive when you look at the 15-year history of 3D graphics on PCs. The companies making graphics processors (GPUs) have delivered a doubling of performance every six months or so over that entire period. That means today's graphics chips are faster than 1992 products by a ratio of 2 to the 30th power, or about a billion to one.
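The arithmetic behind that billion-to-one claim is easy to verify; here's a quick sanity check using only the figures given above:

```python
# 15 years of 3D graphics, with performance doubling every 6 months.
years = 15
doublings = years * 12 // 6   # 30 doublings
ratio = 2 ** doublings

print(doublings)  # 30
print(ratio)      # 1073741824 -- about a billion to one
```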
CPU progress is slow by comparison. Over the microprocessor's 30-year history, performance on integer operations has improved by about a million to one. Floating-point performance looks much better, since early CPUs had to emulate floating-point operations in software. With hardware floating-point units (FPUs), today's processors run FP-intensive code about 100 million times faster than those of 1977.
Floating-point performance is the key to the rapid progress of graphics chips, too. Most of the math required to display the special effects in a modern 3D game is floating-point math, and nearly all of those calculations are independent of one another.
That's the loophole, the trick that graphics chips exploit to boost performance so much with each new generation: it's all done in parallel.
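As a toy illustration of that kind of data parallelism (a hypothetical per-pixel brightness adjustment, not any real graphics API): each pixel's result depends only on that pixel's own input, so the work can be done in any order, or all at once across hundreds of stream processors.

```python
# Hypothetical per-pixel operation: scale each pixel's brightness.
# No iteration depends on any other, so nothing stops a GPU from
# computing all of them simultaneously.
def shade(pixel, gain=1.5):
    return min(255, int(pixel * gain))

scanline = [10, 100, 200, 255]

# A CPU walks through the pixels one at a time...
result = [shade(p) for p in scanline]
# ...but the same map could be split across thousands of GPU threads.
print(result)  # [15, 150, 255, 255]
```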
So if you think eight processor cores is great, think about an Nvidia GeForce 8800 GPU with 128 thread processors running at 1.35GHz...or an AMD Radeon HD 2900 with 320 stream processors running at 743MHz. These processors are very simple compared with the cores in a CPU, but there sure are a lot of them. (There's no easy way to make direct comparisons between these numbers, so don't worry about it...just let the numbers flow over you.)
Then think about this: either one of those chips could, in principle, run a word-processing program all by itself. But such a program would probably run on only a few of those thread or stream processors, and inefficiently at that, so the program would probably run more slowly overall than it would on some old Windows 98 machine. For some things, CPUs are still much better than GPUs.
And this brings me back to the subject of yesterday's blog post. Nobody's really sure how to evolve a CPU to the point where it could replace a GPU without losing what makes it a good CPU. Or vice versa. CPUs and GPUs are likely to have distinct designs for a long time to come.
But they won't necessarily stay on separate chips. I'll explain why later this week.