The Gizmo Report: NVIDIA's GeForce GTX 280 GPU -- introduction
NVIDIA launches the GTX 280 graphics processor, and Peter Glaskowsky gets some hands-on time with a graphics card based on the new chip. Here's an introduction to the chip itself.
Today, NVIDIA officially announces its new GeForce GTX 200 family of graphics processing units (GPUs) and the first two products in the family, the GeForce GTX 280 and the GeForce GTX 260.
The GeForce GTX 280 is the new flagship of NVIDIA's GPU product line, taking over from last year's GeForce 9800 GTX. (The change in the product-name format from "9800 GTX" to "GTX 280" is potentially confusing and doesn't seem that useful to me, but I'm sure we'll get used to it over time. I suppose NVIDIA's other choice was to go with numbers above 10,000, which might have been even worse.)
NVIDIA disclosed the details of these products at an Editor's Day conference in May, and most of the attendees, including myself, received GTX 280 graphics cards for editorial review. These cards are NVIDIA reference boards, not retail products.
I'll be doing this review in multiple parts, each addressing a different aspect of these products and the effects they'll have on the PC graphics market.
First, an overview of the GTX 280 chip itself.
This is a huge chip. NVIDIA won't say exactly how large, and I'm not going to bust open the chip package on my reference board just to find out, but NVIDIA VP of technical marketing Tony Tamasi says it's the biggest chip ever made by TSMC, NVIDIA's manufacturing partner.
The raw numbers are very impressive.
The chip has 1.4 billion transistors, about 80% of which are used to perform the mathematical calculations required for 3D rendering. (By comparison, only a small fraction of the 820 million transistors in a quad-core Intel processor are directly used to execute software; the rest comprise memory blocks, instruction decoders, data transfer channels, and other support functions.)
That's almost twice as many transistors as found on NVIDIA's 9800 series chips. The extra transistors boost the number of cores per chip from 128 to 240. Each core runs at almost 1.3 GHz.
Three floating-point operations per clock period per core at 1.296 GHz works out to 933 GFLOPS (billions of floating-point operations per second) for single-precision computations, a record for a production chip. (Intel made an experimental 80-core floating-point processor in 2007 that exceeded 1 TFLOPS, but never brought it to market.) The GTX 200 family can also handle double-precision math, which will help in professional applications; in this mode, the GTX 280 delivers over 90 GFLOPS. The chip has 142 GB/s (gigabytes per second) of memory bandwidth over a 512-bit memory interface. It can manage a gigabyte of 1.1-GHz GDDR3 frame-buffer memory.
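For readers who want to check the arithmetic, here it is worked out in a few lines of Python, using only the figures quoted above. (The bandwidth calculation lands slightly below the quoted 142 GB/s because the actual memory clock is a bit above a round 1.1 GHz.)

```python
# Peak single-precision throughput of the GTX 280, from the figures in the text.
cores = 240                # stream processors
shader_clock_ghz = 1.296   # per-core clock
flops_per_clock = 3        # per core, per clock, by NVIDIA's accounting

peak_gflops = cores * shader_clock_ghz * flops_per_clock
print(f"{peak_gflops:.2f} GFLOPS")   # 933.12 GFLOPS

# Memory bandwidth: 512-bit bus, 1.1-GHz GDDR3, double data rate.
bus_bytes = 512 // 8       # bytes transferred per clock edge
mem_clock_ghz = 1.1
bandwidth_gbps = bus_bytes * mem_clock_ghz * 2
print(f"{bandwidth_gbps:.1f} GB/s")  # 140.8 GB/s
```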
These are truly astounding numbers for a single-chip processor, suggesting that the GTX 280 is an order of magnitude faster than the theoretical capability of current quad-core PC CPUs.
But a direct comparison is unfair to both.
A GTX 280 achieves its high throughput only for software that is able to take full advantage of 240 cores with a very specific combination of operations. NVIDIA designs its GPUs to be effective on 3D rendering and other workloads with similar characteristics. Although one could write a word processor for a GPU, it would likely use very little of the chip.
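To put a rough number on that point, here's an Amdahl's-law sketch (my illustration, not an NVIDIA analysis): the speedup from 240 cores depends entirely on what fraction of a program's work can actually be spread across them. The 0.99 and 0.10 fractions below are hypothetical, chosen to contrast a rendering-like workload with a word-processor-like one.

```python
# Amdahl's law: speedup over one core when a fraction p of the work
# parallelizes perfectly across n cores and the rest stays serial.
def speedup(p: float, n: int) -> float:
    return 1 / ((1 - p) + p / n)

# 3D rendering is almost entirely parallel; a word processor is almost entirely serial.
print(round(speedup(0.99, 240), 1))  # ~70x: most of the 240 cores pay off
print(round(speedup(0.10, 240), 2))  # ~1.11x: nearly all of the chip sits idle
```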
A CPU, on the other hand, lacks the special-purpose hardware found in a GPU that accelerates specific portions of the 3D-rendering process. Software-based 3D rendering on a CPU isn't merely one-tenth the performance of a GPU; it's much slower than that.
So both kinds of chips have a role to play in our computers, and in spite of ongoing efforts by Intel, AMD, and others to blur the line between CPUs and GPUs, I think the distinction will continue to exist indefinitely.
And when we aren't watching Intel and NVIDIA fight over the ultimate destiny of the PC, we can play video games.
That's the primary market for the GTX 280, so that's how I tested it.