This is the eighth in a series of posts from the Hot Chips conference at Stanford; the previous installments covered a variety of chip designs and research efforts, and other CNET coverage may be found here. This is sort of an experiment for me; I usually prefer to have time to review my work before I publish it. If you see anything wrong, please leave a comment!
The second keynote here comes from Phil Hester, the chief technical officer at AMD. It's titled "Multicore and Beyond: Evolving the x86 Architecture."
He began by describing the history of the PC, starting with the 1981 introduction of the original IBM PC. Over time, new applications showed up, creating a wider variety of software. For example, early PCs didn't have floating-point math hardware, but over time, FPUs were added to the platform as software emerged that needed it. More recently, multimedia and 3D graphics drove similar transitions.
Where he was going with this argument was summed up by a line from his fifth slide: "by the end of the decade, homogeneous multi-core becomes increasingly inadequate." His point is that the traditional PC model of one CPU (whether single- or multi-core) surrounded by peripherals is no longer good enough. There needs to be increasing integration of CPUs with graphics chips and other peripherals to support new workloads.
Hester described this trend as "the parallel software/hardware evolution" and called for new hardware extensions to support it: software transactional memory, fast context switching, accelerated cross-core communication, and lightweight profiling.
Lacking the time to explain all of these items in detail (although I covered AMD's lightweight profiling proposal), I'll just say they're all aimed at reducing the overhead of coordinating software across multiple cores. Hester proposed a new acronym for this class of functionality: xSP, extensions for software parallelism.
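To give a flavor of what one of these extensions is after, here's a toy sketch of the transactional-memory idea in ordinary Python. This is my own illustration, not AMD's proposal: real software transactional memory (and any hardware support for it) is far more involved, and the `TVar`/`atomic_update` names here are invented for the example. The point is the optimistic pattern, where each thread does its work without holding a lock and only briefly checks, at commit time, whether anyone else got there first.

```python
import threading

class TVar:
    """Toy transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()  # stands in for hardware atomicity at commit

def atomic_update(tvar, fn):
    """Optimistic update: read a snapshot, compute outside any lock,
    then commit only if nobody else committed in the meantime."""
    while True:
        snapshot_value, snapshot_version = tvar.value, tvar.version
        new_value = fn(snapshot_value)      # the real work, done lock-free
        with tvar._lock:                    # brief commit window
            if tvar.version == snapshot_version:
                tvar.value = new_value
                tvar.version += 1
                return new_value
            # Conflict: another thread committed first, so retry.

counter = TVar(0)
threads = [
    threading.Thread(
        target=lambda: [atomic_update(counter, lambda v: v + 1) for _ in range(1000)]
    )
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 4000
```

The retry loop is the overhead Hester's hardware extensions would attack: with architectural support, the snapshot-and-commit bookkeeping doesn't have to be emulated in software.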
Hester went on to talk about stream processing, a term currently in vogue in the microprocessor industry. Stream processing is basically about handling data that streams in from some source and streams out again to some destination (video data, for example). A processor optimized for this kind of work can be much more efficient, in terms of energy per unit of work, than a traditional CPU. In fact, this is exactly how GPUs are designed, as I've previously described.
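The shape of a stream program can be sketched in ordinary code as a pipeline of small kernels, each consuming data as it arrives and passing results downstream. This is just my own minimal illustration of the programming model, using made-up "frames" of pixel values; a real stream processor would run a kernel like `brighten` over many elements in parallel rather than one at a time.

```python
def source(frames):
    """Feed frames into the pipeline one at a time."""
    for frame in frames:
        yield frame

def brighten(stream, amount):
    """A stream 'kernel': the same simple operation applied
    to every element, with no global state to track."""
    for frame in stream:
        yield [min(255, px + amount) for px in frame]

def sink(stream):
    """Collect the output as it streams out."""
    return list(stream)

frames = [[10, 20, 30], [200, 250, 100]]
out = sink(brighten(source(frames), 20))
print(out)  # [[30, 40, 50], [220, 255, 120]]
```

Because each kernel is small, regular, and independent of everything but its input, hardware can run many copies of it at once with little control logic, which is where the energy-per-unit-of-work advantage comes from.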
Hester was leading up to the AMD Fusion project, in which CPU and GPU cores will be combined on a single chip. This integration means that software developers can start to take for granted that there is a GPU in the system, which means they'll start to take advantage of the GPU's capabilities in mainstream software.
Hester demonstrated a series of applications, from the Folding@Home project to oil exploration, that can benefit from GPU-style acceleration. He went on to explain that this approach is valuable not only because it delivers higher peak performance but also because it is more energy-efficient.
Hester doesn't believe that GPUs will kill CPUs; both are needed to support the full range of software found on future PCs. Nor will integrated GPUs kill discrete GPUs. High-end systems will need more GPU performance than can be economically integrated into a CPU chip.
The next step after CPU-GPU integration is microarchitectural integration: building CPU instructions into GPUs and vice versa. Eventually it may be possible to unify both types of devices, but personally, I'm not so sure. General-purpose software and processors are inherently different, radically different, from streaming software and processors. But it'll be interesting to see this research move forward.
Next up here at Hot Chips: lunch. No, I won't be blogging about the lunch. After lunch, there's a special presentation on wireless broadband by Reed Hundt, former chairman of the FCC. The next regular session is on networking, then mobile PC processors, then "Big Iron." And that'll be that! I can give my carpal tunnels a rest for a while.