With a new graphics driver and a series of free "Power Pack" downloads, Nvidia has finally switched on the GPU computing capabilities of its 8000, 9000, and 200 series GeForce cards. Among the things to try are three games (one full, one demo, one Unreal Tournament 3 map), a demo of a fashion-oriented social-networking program called Nurien, a video-encoding application, and a GPU-accelerated Folding@Home client.
All of these programs rely on Nvidia's CUDA software to target your GeForce card, and as such, they required special coding from their developers. Because that code is Nvidia-specific, these programs won't work if you have an integrated Intel graphics chip or an ATI graphics card (officially, at least).
According to Jon Peddie Research, Nvidia currently owns 31.4 percent of the graphics market for desktops and laptops. Even if we assume, incorrectly, that all of those chips are CUDA-capable, that still leaves roughly two-thirds of the computer market unable to use this special software. Nvidia might be able to offer developers financial incentives to offset the limited user base, but it certainly can't afford to subsidize the majority.
But perhaps there's a killer application in one of these Nvidia downloads. We'll set aside the facts that ATI's Radeon cards can also handle GPU processing and that the next version of Adobe's Creative Suite will support platform-agnostic GPU acceleration. Maybe the Nurien demo will ignite a tween-girl fan following of Hannah Montana-sized proportions. Even if it does, we would still be surprised to see an industry-wide embrace of CUDA-based software for consumers. The reason is Microsoft.
We've heard that Microsoft will include a GPU computing element in DirectX 11 when it ships in a year or two with the next version of Windows. Once that happens, the need for a specialized software path for GPU processing goes away. Just as Direct3D ended the inanity of game developers having to program for each different graphics chip, so too should DirectX 11 eliminate the need for any proprietary software interface, such as CUDA, to access your graphics card for general-purpose computing.
This is not to say that in the meantime CUDA won't provide performance gains in certain programs, or extra physics effects in certain games. But what it will not do is allow the developer of a AAA game to apply substantive physics acceleration throughout the core gameplay. Nor is it likely to corner the market on GPU acceleration for major productivity applications like Adobe's Creative Suite.
With most proprietary standards, consumers typically need to see a major, game-changing benefit to spur a significant switch. But in the case of CUDA, an effectively universal standard is just over the horizon in the form of DirectX 11 (not to mention OpenCL, the OpenGL-style open GPU computing specification spurred on by Apple). That leaves CUDA an exceedingly small window, in development time, for that killer application to sway the masses its way.