
Tectonic shifts in PC graphics

Nvidia CEO Jen-Hsun Huang says recent changes in graphics processing reflect bigger changes in the way people are using their computers.

David Becker, Staff Writer, CNET News.com
Nvidia, one of the leading semiconductor companies on Earth, is now rapidly moving beyond the PC graphics processors it dominates. Where to? Try Mars.

The Santa Clara, Calif.-based chipmaker developed one of the key technological advances used in the Mars Exploration Rover Mission. An Nvidia-designed graphics system assembled at NASA's control center collects the thousands of points of terrain data the Rover gathers every minute and assembles them into a 3D representation of the Martian landscape.
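The pipeline Huang describes is, at bottom, a scattered point cloud being resampled into a surface a GPU can render. As a rough illustration only--none of this is NASA's or Nvidia's actual code, and the grid resolution and data layout are invented for the example--a minimal sketch of the resampling step might look like this:

    import numpy as np

    def points_to_heightmap(points, resolution=256):
        """Bin scattered (x, y, z) terrain samples into a regular height grid.

        A hypothetical stand-in for the real telemetry pipeline: each grid
        cell keeps the mean elevation of the samples that land inside it.
        """
        xs, ys, zs = points[:, 0], points[:, 1], points[:, 2]
        # Map x/y coordinates onto integer grid indices.
        ix = ((xs - xs.min()) / (np.ptp(xs) + 1e-9) * (resolution - 1)).astype(int)
        iy = ((ys - ys.min()) / (np.ptp(ys) + 1e-9) * (resolution - 1)).astype(int)
        height = np.zeros((resolution, resolution))
        count = np.zeros((resolution, resolution))
        np.add.at(height, (iy, ix), zs)  # accumulate elevation per cell
        np.add.at(count, (iy, ix), 1)    # count samples per cell
        return np.divide(height, count, out=np.zeros_like(height), where=count > 0)

    # 10,000 synthetic samples -> a 256x256 height grid, ready to be
    # triangulated into a mesh and handed to the graphics hardware.
    grid = points_to_heightmap(np.random.rand(10_000, 3))

The grid is the easy half; the work Huang is alluding to is rebuilding that surface fast enough, from thousands of new points a minute, to feel like a live view of the terrain.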

The NASA project isn't just a matter of prestige, either. Nvidia CEO Jen-Hsun Huang sees practical applications in the near future. "Taking telemetry of thousands of points, synthesizing a 3D environment and presenting it in front of you--that's no different than the 3D display in your car 15 years from now," he said. "Imagine driving your car down the street in total darkness, total fog, and you can still see absolutely everything in that environment."

In the here and now, Huang runs a company that, in the course of a decade, has gone from a scrappy start-up to the dominant force in PC graphics. It is now rapidly moving into other markets. CNET News.com caught up with Huang at the International Consumer Electronics Show earlier this month.

Q: How has the role of the PC graphics chip changed since Nvidia entered the market? It seems like it's come much closer to being an equal partner with the microprocessor.
A: The graphics processor has become more and more relevant over the years, because the applications that are being used in computing are more rich in graphics. It's less about processing your spreadsheet or word processor. It's much more about video and rich graphics and 3D graphics and games and movies--all these heavyweight digital-media applications. Processing demand has increased dramatically, and it has to do with where applications are moving and how people are using computers.

3D graphics obviously have a long way to go before we reach a level consumers would say is absolutely good enough. From that angle, it's true that GPUs (graphics processing units) will continue to advance at a very rapid pace.

Another perspective focuses on the GPU's accelerating programmability, which makes it a very good processing element for a particular type of very mathematically intensive function. Longhorn, the next-generation operating system from Microsoft, will elevate the GPU and expose much more of its fundamental processing capabilities.

How has the market changed with the rise of the integrated chipset for low-end PCs?
The integrated chipset has captured about 50 percent of the market, and I think that it makes a lot of sense. If you think about corporate desktops alone, there is no fundamental reason why there needs to be a great deal of capability beyond basic 2D and very entry-level 3D graphics to support applications like surfing the Web, running Excel or PowerPoint--whatever it happens to be. Most corporate applications don't require a lot of graphics-processing capability.

I think that two things are going to change that. One is Longhorn. Our thinking is that computing for the sake of information processing is pretty much over. Unless you have some capability in the GPU, you simply aren't going to get the benefit of the 3D interface there. The entire Mac OS X interface is based on a 3D application programming interface. That's why every single Macintosh ships with quite a formidable GPU. My sense is that the PC will move in that direction as well--using the GPU to enhance the user interface--to make it more delightful for any user.

The second thing is about bringing rich 3D applications to the desktop. It won't happen in every single company, but companies like Boeing or General Motors that are focused on industrial design--they have hundreds of parts in their products and want to be able to share that information at the corporate level. That's going to require some level of 3D graphics that's related to computer-aided design applications. But for the time being, there's no fundamental reason for having a lot of graphics processing power on most corporate desktops.

With integrated chipsets accounting for more of the market, how has that changed the nature of Nvidia's relationship with Intel? In some respects, Intel's a competitor as much as a partner.
You could argue that Intel is our biggest competitor. But that's really a symptom of something else, and that something else is the real competitor.

Wherever a market doesn't require the added value, the GPU horsepower or the vibrancy and fidelity of graphics we deliver--if that market has no application, and consumers have no need for that capability, those markets will very quickly gravitate toward lesser technologies. So the microprocessor becomes just good enough, and very low-end integrated graphics technology becomes good enough. So is Intel the competitor there, or is it the fact that the markets simply don't need our technology?

Intel has gotten into TV sets, for instance. Is that an area of interest for Nvidia?
We'll try to add value in areas where we see our core competency. And our competency is in GPUs and in media and communications processors. That's where we'll keep our focus.

How would you characterize Nvidia's experience with Microsoft as one of the main component suppliers for the Xbox?
I would characterize it as challenging but rewarding. First of all, we were a very small company--I think 400 or so employees--at the time we engaged with Microsoft on Xbox.

From that perspective, it was a gargantuan task. And the schedule was so tight: We literally had one year to work on Xbox, and we were designing so much of it by ourselves. It was a very large program and a very tight schedule, and the technology was very challenging. From that perspective, it was a lot of fun, and it turned out to be a great product.

But how was it as a business venture?
Obviously, we had some disagreements with Microsoft on the contract. Those issues were resolved, and I think that we had the good wisdom to engage a third party to resolve some contract issues that were rather public. Those issues are behind us, and the business part of it is fine now. It's a very large business. It represents about $300 million a year in revenue. The Xbox business alone is larger than most fabless semiconductor companies in the world.

Any regrets about rival chipmaker ATI Technologies getting the contract for Xbox 2?
The Xbox 2 contract is not something that would make sense for us to do. We have 20 or more projects I wish we could do. There are so many markets that we know are going to be very large and that I know we have the ability to pursue, but we simply don't have the resources to do them all. So I have to be very selective about our return on investment. If it met those criteria, we'd be delighted to work on Xbox 2, but it just didn't work out that way.

To date, you've only made chipsets for AMD processors. Is that voluntary, or would you like to be in the Intel market, too?
It's voluntary. We would like to consider building Pentium chipsets in the future. But there are several reasons that we decided to focus on Advanced Micro Devices first. First of all, building chipsets is not easy. You're competing against real companies that have real expertise and have been in that market for a really long time.

And the profit margins are razor thin.
It's possible, if you don't do it well. But I think that if you build good products, the margins will ultimately reflect that. If your manufacturing cost base can be addressed, the margins will follow. I do believe that it's possible to add value there.

I looked at this a few years ago, when we went into the nForce business. I really believed that the chipset business was going to become very focused on networking, connectivity and security. There will be opportunities to add a great deal of value to the computing platform and to enhance the user experience by building better technology.

The question became: How do we enter the market? If it's a new market, you need to focus. You should focus, because you're not the incumbent; you're learning, and by focusing, it will be much more clear to the marketplace, as far as what value you bring. I think that that strategy--focusing on the AMD platform first--has been a good strategy. Hopefully, someday we'll go into the Intel platform. But this year, we're very busy with advancing the 64-bit platform. nForce 3 is a great product, and this is going to be the year we move 64-bit computing into the mass market.

You had a difficult product launch with the GeForce 5800. What lessons did you take from that?
There are many fundamental changes we made in how we build products and how we design products. Some of the changes we made have to do with really understanding the maturity of a manufacturing process before we go into it.

I think that in the future, we'll just have to send in probes before we launch a product into a new process. Before we chart the unexplored, it might be a good thing to send in a probe and run test chips or do a cost reduction--or whatever it takes to get a grip on the new process.

On the product side, one of the biggest mistakes we made was deciding that DDR2 (double data rate 2) and faster memory was going to be more readily available and more mature. We decided to go narrower and faster with memory bandwidth, kind of like the Rambus strategy, instead of wider and slower. That, in retrospect, was a mistake. We should have gone with wider and slower rather than narrower and faster.
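The arithmetic behind that tradeoff is straightforward: peak memory bandwidth is bus width times effective transfer rate, so a narrow, fast bus and a wide, slow one can look identical on paper. With round, illustrative numbers (not the actual GeForce FX or competing specifications):

    # Peak bandwidth = bus width (in bytes) x effective transfer rate.
    # Round, illustrative figures only--not real product specs.
    def bandwidth_gb_per_s(bus_bits, mega_transfers_per_s):
        return bus_bits / 8 * mega_transfers_per_s * 1e6 / 1e9

    narrow_fast = bandwidth_gb_per_s(128, 1000)  # 16.0 GB/s
    wide_slow = bandwidth_gb_per_s(256, 500)     # 16.0 GB/s

Same headline number either way; the gamble Huang is describing is whether the faster memory parts are actually mature and available in volume when the chip ships.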

We certainly also learned a bunch of other lessons. But when you're a pioneer, pushing technology and developing it as quickly as we are, you're going to take a lot of risks, whether it's process technology risk or memory risk or architectural risk or new standards. All these risks compound themselves on new generations of GPUs. We can't be intimidated by that.

When you entered this market, there were a lot of competitors. Now the market is mostly split between Nvidia and ATI. Is that the natural order of things for the PC market?
There's lots of competition. It's more than just the two of us.

There are lots of smaller competitors as well. The number of companies serving the graphics market will continue to be more than two. But my sense is that Nvidia and ATI will continue to be the most successful companies for some time. That's because the research and development investment necessary to build your next-generation product is so high.

I think that naturally, you just have to be larger. The technology is so complex. It's not unlike microprocessors. It takes hundreds of engineers and many millions of dollars to build a next-generation microprocessor. The GPU marketplace is becoming like that. It has become increasingly complex, and that just makes the R&D investment higher.