Asus ENGTX280 review

MSRP: $474.00

The Good Faster Crysis performance than Nvidia's previous flagship 3D card; supports GPU-based computing and physics processing.

The Bad We'd like to see a larger performance gap between generations; few applications that can take advantage of the card's new features; newer 3D cards will likely hit the market shortly after the software catches up.

The Bottom Line Nvidia's new GTX280 graphics chip brings fast 3D performance and exciting new possibilities for speeding up certain kinds of multimedia applications. We'd be more enthusiastic about this card if the software were available to take advantage of its new features.


7.3 Overall
  • Design 7
  • Features 8
  • Performance 7

Editor's note: After retesting, our colleagues at GameSpot found that the GeForce 9800 GX2 scored better on Call of Duty 4 and Crysis than they first reported. With that new information, we find that the GTX280 has a smaller performance advantage over Nvidia's previous product than we originally stated. Our overall rating and recommendation remain the same.

Nvidia's new graphics chip, the GeForce GTX280, is the latest in what's felt like a steady stream of new high-end GPUs this year, most from Nvidia. And true to Nvidia's recent marketing push, the GTX280 not only includes powerful 3D graphics capabilities, but it's also one of the first consumer graphics cards that can take over certain application processing tasks. Like most high-end 3D cards, the GTX280 (reviewed here on the Asus ENGTX280) is expensive at $650, which is about $50 more than we're used to for the fastest single-chip 3D card on the market. For that price, you get a measurable if not revolutionary performance edge. And while we're intrigued by the idea of GPU-based application processing, that capability still needs the software to catch up before it's truly useful. At the very least, wait until ATI's next-generation cards come to light next week before making a purchase. If you're interested in the GTX280 for application processing, we'd also suggest holding off until the software emerges. Just keep in mind that by the time we do see such applications, Nvidia's next expensive 3D card will be that much closer.

There's actually quite a bit to talk about with the GTX280. In addition to 3D and application processing, Nvidia has built in support for its newly acquired PhysX physics-processing software framework. The card is also part of Nvidia's HybridPower ecosystem, which has implications for system power consumption.

The biggest competitor to the GTX280, at least for now, is Nvidia's GeForce 9800GX2. That card, released only three months ago, uses two 9800 chips on a single graphics card and sells for about $600. The GeForce GTX280 is designed to replace it. Because the GTX280 is a single-chip card, one of its advantages is that you don't need to worry about how well a game can take advantage of a two-chip card like the 9800GX2, or of a traditional multicard setup.

                          GeForce GTX280     GeForce 9800GX2
Price                     $650               $600
Manufacturing process     65nm               65nm
Transistors               1.4 billion        1.5 billion
Core clock                602MHz             600MHz
Stream processors         240                256 (128 per chip)
Memory                    1GB GDDR3          512MB GDDR3 (per chip)
Memory speed (data rate)  1.1GHz (2.2GHz)    1GHz (2GHz)
Memory interface          512-bit            256-bit

Even though the stream processor and transistor counts dropped slightly for the GTX280 compared with the 9800GX2, the key differentiator in terms of performance is the memory. The new card not only has twice the memory, it also has a wider, 512-bit memory path. This is not the GDDR5 RAM we expect to see from AMD in its forthcoming Radeon HD 4800 series cards; Nvidia has stuck with GDDR3 for a while now, and it seems satisfied with the results.
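Using the rounded figures from the spec table, the memory advantage is easy to quantify: peak bandwidth is the effective data rate times the bus width in bytes. A quick back-of-the-envelope sketch (our own arithmetic from the table's numbers, not Nvidia's published figures):

```python
# Peak theoretical memory bandwidth from the spec table (real-world
# throughput is lower):
#   bandwidth (GB/s) = effective data rate (GHz) * bus width (bits) / 8

def peak_bandwidth_gbps(data_rate_ghz: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return data_rate_ghz * bus_width_bits / 8

gtx280 = peak_bandwidth_gbps(2.2, 512)        # ~140.8 GB/s
gx2_per_chip = peak_bandwidth_gbps(2.0, 256)  # ~64.0 GB/s per chip

print(f"GTX280:  {gtx280:.1f} GB/s")
print(f"9800GX2: {gx2_per_chip:.1f} GB/s per chip")
```

By this rough math, the GTX280 can move more than twice the data per second of each of the 9800GX2's chips, which goes a long way toward explaining the gains at high resolutions.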

What's most important is that the GTX280 comes in faster than the 9800GX2 on actual game tests. The 3DMark 2006 and 3DMark Vantage scores are interesting, but as synthetic benchmarks they don't exactly represent real-world gameplay. We're most impressed by the Crysis results. The GTX280 still can't hit the hallowed ground of 60 frames per second we like to see in our shooters, but it makes a significant leap over the 9800GX2. You will still experience some chop if you dial Crysis all the way up, but at 1,280x1,024-pixel resolution at high quality, you should get fairly smooth gameplay.

[Chart: 3DMark Vantage, 1,920x1,280 (4x anti-aliasing, 16x anisotropic filtering, extreme settings) and 1,280x1,024 (default settings); longer bars indicate better performance. Cards as charted, top to bottom: Nvidia GeForce GTX280; (2) Nvidia GeForce 9800 GTX; Nvidia GeForce 9800 GX2; Nvidia GeForce 9800 GTX.]

[Chart: 3DMark 2006, 2,048x1,536 (4x anti-aliasing, 8x anisotropic filtering) and 1,280x1,024 (default settings); longer bars indicate better performance. Cards as charted, top to bottom: (2) Nvidia GeForce 9800 GTX; Nvidia GeForce 9800 GX2; Nvidia GeForce GTX280; Nvidia GeForce 9800 GTX.]

[Chart: Crysis, 1,600x1,200 (4x anti-aliasing, high quality) and 1,280x1,024 (high quality); longer bars indicate better performance. Cards as charted, top to bottom: Nvidia GeForce GTX280; (2) Nvidia GeForce 9800 GTX.]

The Call of Duty 4 and Team Fortress 2 scores are less exciting. The 9800GX2 outperforms the GTX280 by a slim margin on Call of Duty 4. The GTX280 also comes in slower than a pair of 9800 GTX cards in SLI mode, which together cost less than this new card. The Team Fortress 2 scores are better, especially the sky-high results at 1,920x1,280-pixel resolution, but Team Fortress 2 isn't exactly a challenge anymore, and the other cards do a fine job overall at both resolutions.

[Chart: Call of Duty 4, 1,920x1,280 and 1,600x1,200 (both 4x anti-aliasing, maximum quality); longer bars indicate better performance. Cards as charted, top to bottom: (2) Nvidia GeForce 9800 GTX; Nvidia GeForce GTX280.]

[Chart: Team Fortress 2, 2,048x1,536 (8x anti-aliasing, 16x anisotropic filtering) and 1,920x1,280 (4x anti-aliasing, 8x anisotropic filtering, high quality); longer bars indicate better performance. Cards as charted, top to bottom: Nvidia GeForce GTX280; (2) Nvidia GeForce 9800 GTX.]

That brings us to our main issue with the GTX280 for 3D gaming. Yes, it's the fastest single-chip 3D card for PC gaming. But it can't totally dominate Crysis, and the other cards can more than handle Call of Duty 4 and Team Fortress 2, making $600 price tags in general hard to swallow. A few more recent PC games appear to be more challenging (we encourage you to check out HardOCP's excellent performance write-ups on those titles), so if you have a mind to play them and you want a high-end graphics card to do so, the GTX280 may be worth it. But our hunch is that it won't be until late August or even later this year that new releases and franchise updates will really start to expand the PC gaming library with titles that demand advanced graphics horsepower. By then, Nvidia's traditional fall/winter product update won't be that far away.

Your buying decision really depends on how eager you are to spend $600-plus on a new 3D card, and how soon you want to be playing the latest titles. If you know you'll want both sooner, the GTX280 will give you a nice boost now and will likely provide a solid gaming experience on the forthcoming titles out later this year. Just know that as the new games hit in the third and fourth quarters, there's a good chance Nvidia's next batch of 3D cards will be out around that time as well.

GPU-based computing
On top of 3D gaming, the GTX280 (and its lower-end counterpart, the GTX260) is also receiving a heavy push from Nvidia for certain kinds of nongaming applications. The backbone of this capability is Cuda, Nvidia's C-based programming environment that lets software developers direct parts of their applications to the graphics card for processing, rather than to the CPU.
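The core idea behind Cuda is writing a small "kernel" function that the hardware runs once per data element, across hundreds of threads at the same time. A rough Python sketch of that model (illustrative only; real Cuda kernels are C code launched across thousands of GPU threads, and the function names here are our own):

```python
# Illustrative sketch of the data-parallel "kernel" idea behind Cuda.
# On a GPU, every element would be processed concurrently by its own
# thread; here we simulate that with a plain loop.

def brighten_kernel(pixels, idx, amount):
    """The work one GPU thread would do: adjust a single pixel."""
    pixels[idx] = min(255, pixels[idx] + amount)

def launch(kernel, pixels, amount):
    """Stand-in for a Cuda kernel launch: one 'thread' per element."""
    for idx in range(len(pixels)):  # these iterations run in parallel on a GPU
        kernel(pixels, idx, amount)

frame = [10, 200, 250, 90]
launch(brighten_kernel, frame, 20)
print(frame)  # [30, 220, 255, 110]
```

Because each pixel is independent of the others, the GPU can work on all of them simultaneously, which is exactly why image and video tasks are the early targets for this approach.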

The benefit of this strategy, as explained to us by Nvidia, is that certain kinds of software run more quickly on a graphics card than on a CPU, in particular applications that need to crunch large amounts of predetermined data. Examples include video transcoding, photo editing, and graphics processing, among others. The GPU won't be replacing your normal processor any time soon, because a traditional CPU is still better at tasks that involve interpreting real-time data, such as loading a Web page, processing artificial intelligence in a game, or running typical productivity software such as word processors or spreadsheets.

We find the idea of GPU-based processing very intriguing, and it helps explain Nvidia's recent marketing push against Intel. What kind of processor would be better for dividing up a large workload among multiple cores: a quad-core CPU with four processing threads, each running at 3.0GHz, or a 240-core GPU with each core running at 1.3GHz? Seems like simple math to us.
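Nvidia's "simple math" works out like this. Note that this naive aggregate-clock comparison ignores how much more work a CPU core does per cycle, so treat it as marketing arithmetic rather than a real performance predictor:

```python
# Naive aggregate throughput comparison (cores x clock). A CPU core does
# far more per cycle than a GPU stream processor, so this only shows why
# the raw numbers look so lopsided in the GPU's favor.

cpu_cores, cpu_ghz = 4, 3.0     # quad-core CPU at 3.0GHz
gpu_cores, gpu_ghz = 240, 1.3   # GTX280's stream processors at 1.3GHz

cpu_aggregate = cpu_cores * cpu_ghz  # 12 GHz-cores
gpu_aggregate = gpu_cores * gpu_ghz  # 312 GHz-cores

print(f"CPU aggregate: {cpu_aggregate:.1f}")
print(f"GPU aggregate: {gpu_aggregate:.1f}")
print(f"Ratio: {gpu_aggregate / cpu_aggregate:.0f}x")
```

The 26x gap only materializes for workloads that split cleanly into hundreds of independent pieces, which is precisely the "predetermined data" caveat above.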