Editor's note: After retesting their initial findings, our colleagues at GameSpot found that the GeForce 9800GX2 posted better scores on "Call of Duty 4" and "Crysis" than first reported. With that new information, we have found that the GTX280 holds a smaller performance advantage over Nvidia's previous product. Our overall rating and recommendation remain the same.
Nvidia's new graphics chip, the GeForce GTX280, is the latest in what's felt like a steady stream of new high-end GPUs this year, most from Nvidia. And true to Nvidia's recent marketing push, the GTX280 not only includes powerful 3D graphics capabilities, but it's also one of the first consumer graphics cards that can take over certain application processing tasks. Like most high-end 3D cards, the GTX280 (reviewed here on the Asus ENGTX280) is expensive at $650, which is about $50 more than we're used to for the fastest single-chip 3D card on the market. For that price, you get a measurable if not revolutionary performance edge. And while we're intrigued by the idea of GPU-based application processing, that capability still needs the software to catch up before it's truly useful. At the very least, wait until ATI's next-generation cards come to light next week before making a purchase. If you're interested in the GTX280 for application processing, we'd also suggest holding off until the software emerges. Just keep in mind that by the time we do see such applications, Nvidia's next expensive 3D card will be that much closer.
There's actually quite a bit to talk about with the GTX280. In addition to the 3D and application processing, Nvidia has built-in support for its newly acquired PhysX physics processing software framework. It's also a part of Nvidia's HybridPower ecosystem, which has implications for system power consumption.
The biggest competitor to the GTX280, at least for now, is Nvidia's GeForce 9800GX2. That card, released only three months ago, uses two 9800 chips on a single graphics card, and sells for about $600. The GeForce GTX280 is designed to replace it. Because the GTX280 is a single-chip card, one of its advantages is that you don't need to worry about how well a game can take advantage of a two-chip card, like the 9800GX2, or a traditional multicard setup.
| | GeForce GTX280 | GeForce 9800GX2 |
|---|---|---|
| Manufacturing process | 65nm | 65nm |
| Transistors | 1.4 billion | 1.5 billion (total, two chips) |
| Stream processors | 240 | 256 (128 per chip) |
| Memory | 1GB GDDR3 | 512MB GDDR3 (per chip) |
| Memory speed (data rate) | 1.1GHz (2.2GHz) | 1GHz (2GHz) |
Even though the stream processor and transistor counts went down a bit for the GTX280 compared with the 9800GX2, the key differentiator in terms of performance is the memory. The new card not only has twice the usable memory (a two-chip card such as the 9800GX2 mirrors the same data in each chip's 512MB), it also has a wider 512-bit memory path. This is not the GDDR5 RAM we expect to see from AMD in its forthcoming Radeon HD 4800 series cards; Nvidia has stuck with GDDR3 for several generations now, and it seems satisfied with the results.
What's most important is that the GTX280 comes in faster than the 9800GX2 on actual game tests. The 3DMark 2006 and 3DMark Vantage scores are interesting, but as synthetic benchmarks they don't exactly represent real-world gameplay. We're most impressed by the Crysis results. The GTX280 still can't hit the hallowed ground of 60 frames per second we like to see in our shooters, but it makes a significant leap over the 9800GX2. You will still experience some chop if you dial Crysis all the way up, but at 1,280x1,024-pixel resolution at high quality, you should get fairly smooth gameplay.
[Benchmark charts: tests run at 1,920x1,280 (4x anti-aliasing, 16x anisotropic filtering, extreme settings) and 1,280x1,024 (default settings); 2,048x1,536 (4x anti-aliasing, 8x anisotropic filtering) and 1,280x1,024 (default settings); 1,600x1,200 (4x anti-aliasing, high quality) and 1,280x1,024 (high quality)]
The Call of Duty 4 and Team Fortress 2 scores are less exciting. The 9800GX2 outperforms the GTX280 by a slim margin on Call of Duty 4. The GTX280 also comes in slower than a pair of 9800 GTX cards in SLI mode, which together cost less than this new card. The Team Fortress 2 scores are better, especially the sky-high 1,920x1,280-pixel resolution results, but Team Fortress 2 isn't exactly a challenge anymore, and the other cards do a fine job overall at both resolutions.
[Benchmark charts: tests run at 1,920x1,280 (4x anti-aliasing, maximum quality) and 1,600x1,200 (4x anti-aliasing, maximum quality); 2,048x1,536 (8x anti-aliasing, 16x anisotropic filtering) and 1,920x1,280 (4x anti-aliasing, 8x anisotropic filtering, high quality)]
That brings us to our main issue with the GTX280 card for 3D gaming. Yes, it's the fastest single-chip 3D card for PC gaming. But it can't totally dominate Crysis, and the other cards can more than handle Call of Duty 4 and Team Fortress 2, making $600 price tags in general hard to swallow. A few of the most recent PC games appear to be more challenging (we encourage you to check out HardOCP's excellent performance write-ups), so if you have a mind to play those games and you want a high-end graphics card to do so, the GTX280 may be worth it. But our hunch is that it won't be until late August or even later this year, when a new wave of demanding titles and franchise updates arrives, that the PC gaming library really starts to demand advanced graphics horsepower. By then, Nvidia's traditional fall/winter product update won't be that far away.
Your buying decision really depends on how eager you are to spend $600-plus on a new 3D card, and how soon you want to be playing the latest titles. If you know you'll want both sooner, the GTX280 will give you a nice boost now and will likely provide a solid gaming experience on the forthcoming titles out later this year. Just know that as the new games hit in the third and fourth quarters, there's a good chance Nvidia's next batch of 3D cards will be out around that time as well.
On top of 3D gaming, the GTX280 (and its lower-end counterpart, the GTX260) is also receiving a heavy push from Nvidia for certain kinds of nongaming applications. The backbone of this capability is Cuda, Nvidia's programming interface (a set of extensions to the C language) that allows software developers to direct parts of their applications to the graphics card for processing, rather than the CPU.
The benefits of this strategy, as explained to us by Nvidia, are that certain kinds of software run more quickly on a graphics card than on a CPU, in particular applications that need to crunch large amounts of predetermined data. Examples include video transcoding, photo editing, and graphics processing, among others. The GPU won't be replacing your normal processor any time soon, because a traditional CPU is still better at tasks that involve interpreting real-time data, such as loading a Web page, processing artificial intelligence in a game, or running typical productivity software like word processing or spreadsheet programs.
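To make the distinction concrete, here's a toy sketch in plain Python (not real GPU code, and not from Nvidia's materials) of the kind of workload that maps well to a graphics chip: the same simple operation applied independently to many data elements.

```python
# Toy illustration: brightening every pixel of a tiny stand-in "frame".
# Each element is processed independently of every other, which is
# exactly the shape of work a many-core GPU can spread across its cores.
frame = [[10, 20], [30, 40]]  # stand-in for a video frame's pixel values

def brighten(pixel, amount=5):
    """Add brightness, clamping to 255 (the max of an 8-bit channel)."""
    return min(pixel + amount, 255)

# A GPU could run thousands of these independent operations at once;
# a CPU must work through them a few at a time.
brightened = [[brighten(p) for p in row] for row in frame]
print(brightened)  # [[15, 25], [35, 45]]
```

Loading a Web page, by contrast, is a chain of dependent steps (fetch, parse, lay out, render), which is why that kind of task stays on the CPU.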
We find the idea of GPU-based processing very intriguing, and it helps explain Nvidia's recent marketing push against Intel. Which kind of processor would be better for dividing up a large workload among multiple cores: a quad-core CPU with four processing threads, each running at 3.0GHz, or a 240-core GPU with each core running at 1.3GHz? Seems like simple math to us.
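The simple math looks like this; note that this is a naive peak-cycles comparison that ignores how much work each core accomplishes per cycle, memory bandwidth, and how well a given task actually parallelizes.

```python
# Back-of-the-envelope comparison using the figures from the paragraph
# above. This counts raw core-cycles only; a GPU core and a CPU core
# are not equivalent, so treat the ratio as illustrative.
cpu_cores, cpu_clock_ghz = 4, 3.0     # quad-core CPU at 3.0GHz
gpu_cores, gpu_clock_ghz = 240, 1.3   # GTX280's 240 stream processors

cpu_gcycles = cpu_cores * cpu_clock_ghz   # 12.0 billion core-cycles/sec
gpu_gcycles = gpu_cores * gpu_clock_ghz   # 312.0 billion core-cycles/sec
print(gpu_gcycles / cpu_gcycles)          # 26.0x advantage in raw cycles
```

The catch, as the rest of the section argues, is that only workloads that split into many independent pieces can cash in that advantage.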
The question, as usual, comes down to the software. Nvidia provided us with a prerelease beta of a video transcoding application called BadaBoom, from a developer called Elemental. BadaBoom uses the GTX280 to transform a video file from one format to another, and the results are very fast, much faster than if you ran the same operation on a CPU-based program. Adobe has said that the next version of its Creative Suite will support GPU-based processing (and it's going to support graphics cards from both Nvidia and ATI). Apple has also recently announced that it will offer a similar GPU computer-programming language called OpenCL in the next version of its OS X operating system.
In other words, GPU computing is coming regardless of whether Intel likes it, and we expect programs that take advantage of it will be out before the end of the year. And, as we said earlier, we also expect that Nvidia will have another graphics card out before the end of the year. Thus, we fully endorse the idea of moving appropriate tasks to the 3D card, but if you can't buy software that will do that today, it doesn't make sense to spend $650 for that purpose now either, especially if it's possible that a better 3D card than the GTX280 is available for the same price around the same time the software is ready. It's also worth pointing out that all GeForce cards from the 8000 series on up support Cuda-based GPU processing (pending driver support), so you may already be prepared to take advantage of the new software when it comes.
Parallel to general computing on the GPU, Nvidia is also touting the GTX280's support for its recently acquired PhysX game physics software. As with Cuda, all GeForce graphics cards from the 8000 series on up will support physics processing on the GPU for games that use the PhysX standard. The GTX200 series gets hardware physics with a new software driver out this week; Nvidia says older cards won't get that support until the third quarter.
Game physics is a poorly understood concept right now, and we suspect it's not something most gamers really want to have to think about. There are two major physics standards at the moment: PhysX and the more prevalent Havok, whose namesake owner announced an alliance with AMD last week. Both can run on your CPU, but because physics processing is a parallel data task, the GPU looks like a more promising alternative.
Thus far, hardware-based physics in games has mostly involved either relatively unimpressive, superficial effects (like more debris per explosion) or specialized, one-off levels. The reason is that few gamers have sprung for standalone PhysX cards (whose developer, Ageia, Nvidia acquired earlier this year), and game developers don't want to exclude customers whose hardware can't render the added effects. With the capability to move physics processing to the 3D card, we may eventually see more games in which materials and objects behave more realistically. We hope that we do. But going full-bore on PhysX means that game developers would again lose gamers whose hardware can't support it. As the installed base of supporting 3D cards grows, hardware-based physics in games will grow. And with Nvidia's backing, PhysX may eventually become the de facto standard. But until both of those issues are sorted out, we don't expect any game to take advantage of either standard in a meaningful way throughout the core gameplay.
From a buying perspective, Nvidia has listed several games coming out this year and next that will offer some kind of PhysX support. But again, they're not out yet; we don't know to what extent they'll take advantage of PhysX, nor can we say what that support will do to your overall frame rate. We think the concept of hardware physics is great, but we're wary of buying expensive hardware today for the above-stated reasons.
In addition to its processing power, the GTX280 also brings some familiar capabilities. It can, of course, transmit protected HD video content, including Blu-ray movies, at full 1080p resolution. As before, you'll need a separate audio cable (included in the box from Asus) to drive audio out through the video ports if you have a DVI-to-HDMI adapter. If you have an Nvidia nForce 700a series motherboard for AMD CPUs, the GTX200 series will also support HybridPower, Nvidia's power management technology that throttles down your graphics card's power consumption when it's not doing a lot of work. And of course, the GTX280 and the GTX260 are SLI capable, which means you can use two or even three cards in the same system, provided you can supply them with enough power and have at least one other PCI Express graphics card slot.
Getting it to work
About the power supply: like the GeForce 9800GX2 before it, the GTX280 requires one six-pin and one eight-pin connection to your PC's power supply unit. For a single card, Nvidia recommends at least a 550-watt PSU; for two or three cards you'll need a 1,000- to 1,500-watt PSU, depending on the rest of the hardware in your system. A single card won't require a more powerful PSU than we've come to expect in midrange gaming PCs, but for two- or three-way SLI that's a lot of power, and it's not cheap, with prices ranging from $250 to $400 for PSUs in that wattage range. We were also happy to find that the GTX280 isn't overly loud under load.
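To see why the recommended wattage climbs so steeply with card count, here's a hypothetical power-budget sketch. The component wattages below are illustrative assumptions on our part, not figures measured in this review, and real-world draw varies widely by system.

```python
# Hypothetical power-budget sketch. All wattages are assumptions for
# illustration only; check your actual components before sizing a PSU.
WATTS = {
    "gtx280_card": 236,       # assumed board power per GTX280
    "quad_core_cpu": 130,     # assumed high-end quad-core CPU draw
    "board_ram_drives": 120,  # assumed motherboard, RAM, drives, fans
}

def recommended_psu_watts(num_cards, headroom=1.25):
    """Sum component draw, then add 25% headroom for efficiency and spikes."""
    draw = (WATTS["gtx280_card"] * num_cards
            + WATTS["quad_core_cpu"]
            + WATTS["board_ram_drives"])
    return round(draw * headroom)

print(recommended_psu_watts(1))  # 608 -- near Nvidia's 550-watt single-card minimum
print(recommended_psu_watts(3))  # 1198 -- within the 1,000- to 1,500-watt SLI range
```

The graphics cards dominate the budget, which is why adding a second or third card pushes you into kilowatt-class PSU territory.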
As you may have surmised, we find it hard to get overly excited about the new features available on the GTX280, at least today. It's fast, there's no question about that, and for those wary of multichip solutions, that might be enough to justify a purchase. We expect you'll enjoy this card if you do. But similar to early GeForce 8000 series cards that offered DirectX 10 support before Windows Vista was out, the physics and GPU computing capabilities of the GTX280 have few applications to make use of them today. You can argue that buying now is a wise future-proofing measure, and we suspect that when the software is ready it will indeed put the GTX280 to work. For our $650, we'd rather not have to wait for the software to catch up.
Test bed: Windows Vista SP1; 3.0GHz Intel Core 2 Extreme QX9650; EVGA NForce 780i motherboard; 2GB 1,142MHz DDR2 SDRAM; 750GB 7,200rpm Seagate hard drive.
Thanks to our GameSpot colleague Sarju Shah for providing us with benchmark results.