
Jumping the next chip hurdle

Nvidia's chief scientist Dave Kirk talks about the latest challenge facing graphics processors and why Moore's Law does not apply to these chips.

Graphics chipmaker Nvidia has remained one of the relatively few bright spots in an otherwise dreary tech market.

The company, which recently said it expects to beat Wall Street earnings estimates, was the best performer in the S&P 500 index last year. The wind is still in its sails in 2002 as Nvidia's GeForce4 is generally considered to be the most powerful graphics chip on the market.

But Nvidia has encountered some turbulence of late. The company, which was recently required by the Securities and Exchange Commission to restate its financial results for the past three years, indicated that net income for fiscal 2002 will increase by about $2.1 million.

However, the restatement will also show that net income for fiscal 2001 will decrease by about $3.7 million. Separately, the company has been haggling with Microsoft. The two companies are now in arbitration over the prices of chips Nvidia supplies for use in Microsoft's Xbox game console.

At the end of the day, Nvidia's prospects will ultimately be defined by the usefulness of its technology. And that's where Dave Kirk, Nvidia's chief scientist, looms large.

While in London as part of a European tour, Kirk talked about the future of computer graphics and whether Nvidia will be able to hold on to its hard-earned success.

Q: There seems to have been a lot of confusion over the GeForce branding lately. Have you learned any lessons from this?
A: It is interesting to compare what we saw in the market after launch versus what we thought about when we decided to do it. The picture we had was that GeForce4 was the 2002 model--the name denoted the model year, not the architecture, but a lot of customers perhaps did not realize this.

It is something for us to think about next time. The same naming dilemma will come up again when we have another product in autumn 2002 and spring 2003. Say we call our next processor the GeForce5; this does not necessarily mean it has a new feature set, just that it is a new product, so we may call it something else.

Did Nvidia's philosophy change with the purchase of 3dfx Interactive?
Not too much. We still want to be profitable and we still want to stay in business, so they haven't influenced us in that. What we did, though, was to mix the development teams up completely. I didn't want 3dfx people versus Nvidia people. I wanted to have us all learn from each other and make different products.

Both companies had products in development at the time, and we could have just picked up 3dfx's products and developed those. But instead I took the two teams and shuffled them around. I got the Nvidia people to argue for 3dfx products and the 3dfx people to argue for Nvidia, so they all had to learn the advantages of the competing products. We ended up changing the projects so much that they really weren't recognizable from before, and that was the goal; we wanted the best from both sides. Plus, they are all Nvidia people now.

What is the limiting factor for the quality of 3D games?
One of the disappointing things for us is that we bring out a new piece of hardware like the GeForce3, time passes, and there still aren't many games coming along to take advantage of it. No developer can develop for a new technology before they have it, and the gap between when we finish developing a new technology and when it ships is only a couple of months.

(Microsoft's) Xbox is a unique (console) platform because it is not painful to program, so it did not take long for developers to get competent with it. In contrast, when (Sony's PlayStation 2) launched, it was so difficult to program that all the games looked like (PlayStation) games; they had the same shading...nobody looked at the screen and thought, "Wow."

The way I see consoles work is that the first-generation games are "learning the hardware" games. With the Xbox, the graphics hardware is a GeForce3, so I would expect the first wave of games to be impressive but not amazing. But the second wave--that is when developers will really take advantage of the platform.

There will always be a difference between games on a PC and games on a console, because console developers write for a fixed platform while, on the PC, developers are always on the "just learning" part of the curve. This Christmas is when we will start to see games that take advantage of the effects available in the Xbox and are really amazing.

So where do you go from here?
Once you get to running at resolutions of 1,600 pixels by 1,200 pixels with a refresh rate of 75Hz, speed is not an issue, and you need to think less about more pixels and more about better pixels. We passed this point about a year ago, with the GeForce3.
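(As a rough illustration of the throughput Kirk is describing, the back-of-envelope sketch below multiplies that resolution by that refresh rate; the figures are editorial arithmetic, not Nvidia data.)

```python
# Back-of-envelope only: pixel throughput at the resolution and refresh
# rate Kirk cites. Illustrative arithmetic, not Nvidia specifications.
width, height = 1600, 1200      # display resolution in pixels
refresh_hz = 75                 # screen refreshes per second

pixels_per_frame = width * height
pixels_per_second = pixels_per_frame * refresh_hz

print(f"{pixels_per_frame:,} pixels per frame")                     # 1,920,000
print(f"{pixels_per_second / 1e6:.0f} million pixels per second")   # 144
```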

Now we are looking at improving features such as anti-aliasing (smoothing jagged edges). We have already added dedicated hardware for this, and it is something that is most noticeable when it is missing. People don't jump up and say, "Hey, look at this," when anti-aliasing is on, but you really do notice the jagged edges when it is switched off.

It has to do with the quality of experience you are creating. It is about story and game play, and less about computer graphics. In the movies they call it suspension of disbelief. So anti-aliasing is about peeling away stuff you don't want to see. It turns out to be more important for laptops and flat panels than for CRT (cathode-ray tube) monitors, because pixels on LCD (liquid crystal display) panels are really clear and square. We now want to get to the point where you don't choose whether to use anti-aliasing--it just happens.
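(The sketch below is an editorial illustration of supersampling, one common anti-aliasing approach, to show why smoothed edges look softer; it is not a description of Nvidia's hardware.)

```python
# A minimal sketch of supersampling anti-aliasing: take several coverage
# samples per pixel and average them, which softens the hard on/off
# transitions that show up as jagged edges.

def coverage(x, y):
    """Toy scene: 1.0 inside a diagonal half-plane, 0.0 outside."""
    return 1.0 if y > x else 0.0

def shade_aliased(px, py):
    # One sample at the pixel centre: edges come out as hard steps.
    return coverage(px + 0.5, py + 0.5)

def shade_supersampled(px, py, n=4):
    # n*n samples spread across the pixel, averaged together.
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += coverage(px + (i + 0.5) / n, py + (j + 0.5) / n)
    return total / (n * n)

# Pixels straddling the edge get intermediate grey values instead of a
# hard black/white step.
for x in range(4):
    print(x, shade_aliased(x, 2), round(shade_supersampled(x, 2), 2))
```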

How far off is this?
It is never going to be true that there is no (performance) penalty for anti-aliasing. Now that we have added dedicated hardware, the penalty for switching on anti-aliasing is less than 50 percent. Soon we will stop optimizing non-anti-aliased graphics, so that when you switch it off there will be no difference in speed. We are no longer thinking about how to make aliased rendering go faster. Instead we want to concentrate on things like making smoother edges, better shadows and better reflections.

So we have "better pixels." Where does this get us?
I want to have PCs and game machines making images that look as good as what you see at the movies, and we can't do that just by making a faster GeForce3. Last year we took a scene from "Final Fantasy," dumbed it down a bit and (we) were able to render it in real time at 12 to 15 frames per second. That same scene runs at well over 30 frames per second on a GeForce4; so now, if we did it again, we should be able to render the original scene from the movie at 12 to 15 frames per second. We are less than one year away from rendering it with full details and at full speed of 30 frames per second on PC hardware.

It is a chase though. For rendering frames for the movies you can always afford to wait around for a couple of hours, but in games, you need them instantly. So we will be able to render movies like "Final Fantasy," "Shrek" and "Toy Story" in real time on a PC next year, but of course the movie studios will raise the barrier. But once we can get movie quality in games and can start getting movie studios to use games hardware to create their movies, it will really bring movies and games much closer together.
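(The short calculation below puts rough numbers on that gap, using the 30-frames-per-second target and the "couple of hours" per frame mentioned above; the figures are illustrative.)

```python
# Rough frame-time arithmetic behind the film-versus-games comparison
# above. The "couple of hours" figure comes from the interview; the rest
# is illustrative.
realtime_fps = 30                          # target frame rate in a game
frame_budget_ms = 1000 / realtime_fps      # about 33 ms per frame

offline_hours_per_frame = 2                # offline film rendering
offline_ms_per_frame = offline_hours_per_frame * 3600 * 1000

print(f"Game frame budget: {frame_budget_ms:.1f} ms")
print(f"Offline rendering takes roughly "
      f"{offline_ms_per_frame / frame_budget_ms:,.0f} times longer per frame")
```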

The biggest opportunity we have is that graphics is an infinitely parallelizable problem (parallelism is the science of processing two or more tasks at once), much more so than general PC computing. With GPUs (graphics processing units) we are able to take advantage of more transistors because we can keep them busy better than CPUs (central processing units) can, and this helps us double the speed much faster than every 18 months (the rate dictated by Moore's Law for CPUs).

But what happens to yields as you increase the number of transistors?
Already the GeForce4 has more transistors on it than the Intel Pentium III and Pentium 4 combined. You have to remember that Moore's Law is not the rate at which semiconductors get faster and denser, but the rate at which CPU manufacturers can make CPUs more productive.

The number of transistors in a given area for a given cost rises faster than Moore's Law. The reason that CPUs are unable to keep pace is that everything in the sequential architecture of the CPU has to go through one pipe. With graphics, where we can have a more scalable architecture, the curve is much steeper; we can double performance every six months.

And since our computational growth rate for the same number of transistors is faster, we don't have to be ahead of the CPU manufacturers such as Intel in terms of process technology. 
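(For readers who want to see what those doubling rates mean in practice, the sketch below works through the arithmetic; the 18-month and six-month periods come from the interview, and the rest is illustrative.)

```python
# Comparing the two doubling rates discussed above: roughly every 18
# months for CPUs (the popular reading of Moore's Law) versus the six
# months Kirk claims for GPUs. Purely illustrative arithmetic.
def relative_speed(months, doubling_period_months):
    """Performance relative to today, given a fixed doubling period."""
    return 2 ** (months / doubling_period_months)

for years in (1, 2, 3):
    months = years * 12
    cpu = relative_speed(months, 18)
    gpu = relative_speed(months, 6)
    print(f"After {years} year(s): CPU ~{cpu:.1f}x, GPU ~{gpu:.1f}x")
```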

ZDNet U.K.'s Matt Loney reported from London.