
Graphic content at GH2007

Glaskowsky describes the goings-on at the 2007 Graphics Hardware conference.

Peter Glaskowsky
Peter N. Glaskowsky is a computer architect in Silicon Valley and a technology analyst for the Envisioneering Group. He has designed chip- and board-level products in the defense and computer industries, managed design teams, and served as editor in chief of the industry newsletter "Microprocessor Report." He is a member of the CNET Blog Network and is not an employee of CNET.

During my time at Microprocessor Report, I watched the market for 3D graphics chips grow from just a handful of seed companies (notably 3dfx, 3Dlabs, PowerVR, and Rendition) into a virtual forest. At one point, I was tracking over 50 companies, most of which never launched a product.

So in 1999, when the organizers of the Siggraph/Eurographics Workshop on Graphics Hardware (a name wisely since shortened to simply Graphics Hardware) were looking for someone to help find presentations from graphics-chip vendors, they called me.

I chaired that first Hot3D session, and I've done it again every two years since then, whenever Graphics Hardware is co-located with the main Siggraph conference in Southern California (sometimes Los Angeles, sometimes San Diego). I almost missed out this year, but when my schedule opened up for Siggraph at the last minute, the GH2007 folks offered to let me handle Hot3D again, preserving my streak.

It's a great little conference (though not so little anymore, with over 200 attendees this year). Many of the new ideas you'll find in today's graphics chips were presented at Graphics Hardware two or three years earlier. Graphics is a highly cooperative industry, however competitive it may appear on your local retail shelves. Each new 3D chip builds on the previous work of the whole industry, so the people who lead technology development for the industry-- the people who attend Graphics Hardware-- tend to get along well.

Most of the presentations at Graphics Hardware are deeply technical. This isn't the place to explain them, and I'm not the person to do it either. I can tell you that the paper titled "Stochastic Rasterization using Time-Continuous Triangles" is really just about a new, more efficient way to simulate the apparent blurring of a moving object-- but that's as far as I'll go.
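
If you're curious about the general idea, though, here's a toy sketch of stochastic time sampling (my illustration with an invented moving-disk scene, not the paper's time-continuous-triangle algorithm): give every sample a random time within the frame, and a moving object's coverage gets averaged over its path, which is exactly what motion blur looks like.

```python
import random

def coverage(px, py, samples=256):
    """Fraction of time-jittered samples at pixel (px, py) covered by a
    disk of radius 0.5 sliding from x=0 to x=4 during one frame."""
    hits = 0
    for _ in range(samples):
        t = random.random()                 # random shutter time in [0, 1)
        cx = 4.0 * t                        # disk center at time t
        if (px - cx) ** 2 + py ** 2 <= 0.25:
            hits += 1
    return hits / samples                   # 0 = background, 1 = solid

# Pixels along the motion path come out partially covered -- a smear:
print([round(coverage(x * 0.5, 0.0), 2) for x in range(10)])
```

The hard part, and the subject of the paper, is getting this kind of effect efficiently in hardware rather than by brute-force sampling.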

Anyway, there was a lot of very accessible content at GH2007, too. Among the big trends these days:

  • The blurring of the formerly bright lines between CPUs (central processing units) and GPUs (graphics processing units)
  • The rapid advances in GPUs for handheld devices
  • The possibility that real-time graphics will eventually give up today's strategy of drawing one triangle at a time in favor of the technique called ray tracing, which is the method used to render 3D movies such as Disney-Pixar's recent Ratatouille (a toy sketch of the difference follows this list)
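
To make that last distinction concrete, here's a minimal sketch (my illustration with an invented scene, not any shipping renderer): rasterization loops over triangles and asks which pixels each one covers, while a ray tracer loops over pixels and asks what the ray through each one hits.

```python
def hit_sphere(origin, direction, center=(0.0, 0.0, -3.0), radius=1.0):
    """True if the ray origin + t*direction hits the sphere for some real t."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    return b * b - 4.0 * a * c >= 0.0       # quadratic has a real root

# One ray per pixel, eye at the origin looking down -z:
for y in range(8):
    row = ""
    for x in range(16):
        d = ((x - 8) / 8.0, (4 - y) / 8.0, -1.0)
        row += "#" if hit_sphere((0.0, 0.0, 0.0), d) else "."
    print(row)
```

Real renderers trace many rays per pixel and recurse for reflections and shadows, which is why the approach has so far been too expensive for real-time use.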

The first of these is attracting the most attention these days because several other trends are forcing the issue:

  • Improvements in process technology will soon make more transistors available than CPU designers and PC software developers really know how to use
  • This means it'll soon become mandatory to integrate other functions onto the same chip as the CPU, and graphics is the prime candidate
  • At the same time, GPUs are adding more and more CPU-like features, making a collision inevitable even if the integration argument weren't present

So-- will future PC processors combine conventional CPU and GPU cores? Will CPUs evolve to be more like GPUs? Will GPUs become more like CPUs?

I think the answer to all three questions is "yes," but you'll have to stay tuned for a more useful answer in future blog entries about Graphics Hardware 2007 over the next week or so.