Updated on April 27 at 8:20 a.m. PDT with additional information about DirectX 11 and a correction to the Intel comments at bottom.
Graphics chips will be tapped to accelerate more tasks in upcoming versions of Apple's and Microsoft's operating systems, according to Nvidia.
In an interview Friday, Sumit Gupta, product manager for Nvidia's Tesla products, described how new programming environments will tap the latent compute horsepower of graphics processors to accelerate software in Apple's upcoming OS X Snow Leopard and Microsoft's Windows 7 operating systems.
Graphics chips aren't just for games anymore. This shift is defined by an acronym that doesn't exactly roll off the tongue: GPGPU. But the essence of General Purpose computing on Graphics Processing Units is pretty simple: use the scores--or even hundreds in higher-end chips--of processing cores inside GPUs to speed tasks that, in some cases, would be done much less efficiently by the central processing unit (CPU).
This is where OpenCL (Open Computing Language) comes in. OpenCL is a programming environment for "heterogeneous" computing--that is, for computers that use a mix of multicore CPUs and GPUs. Microsoft's analogous programming environment is DirectX, specifically the compute capability added in DirectX 11.
Apple says this about OpenCL on its Web site: "Another powerful Snow Leopard technology, OpenCL...makes it possible for developers to efficiently tap the vast gigaflops of computing power currently locked up in the graphics processing unit."
Today, on a PC or a Mac, the CPUs made by Intel and Advanced Micro Devices are adept at handling general operating system tasks--for instance, the sequence of things that must happen after a user clicks an icon to start an application on the desktop.
But some tasks traditionally handled by the CPU will be shifted over to the GPU--or divvied up, so certain operations are done on the CPU, while others are done on the GPU. "The really interesting thing about OpenCL and DirectX is that OpenCL is going to form part of the Apple operating system (Snow Leopard) and DirectX (version 11) will form part of Windows 7," said Gupta. "And what that essentially means to consumers is, if your laptop has an Nvidia GPU or ATI (AMD) GPU, it will run the operating system faster because the operating system will essentially see two processors in the system. For the first time, the operating system is going to see the GPU both as a graphics chip and as a compute engine," he said.
(Note: Gupta's comments implied future versions of DirectX, not DirectX generically.)
"For example, when you launch (Google) Picasa, that is completely run on the CPU. (But) the minute you choose an image and apply a filter, that filter should run on the GPU. (This happens) when you have Apple and Microsoft pushing the application developers to do that," Gupta said.
Gupta continued: "If you look at the Apple OS today, it's a beautiful interface where there actually is more visual content than there is sequential (CPU) content...stuff that's more available to the GPU. The CPU is one aspect but not necessarily the most important aspect anymore," he said.
That said, CPUs from Intel and AMD are still indispensable. "If you're running an unpredictable task, the CPU is the jack of all trades. It is really good at these unpredictable tasks. The GPU is a master of one task. And that is a highly parallel task," he said.
One of the limiting factors of unlocking the potential of the GPU has been the programming environment. "The hardest part about using the GPU was that you had to use a graphics language to program it," Gupta said. This is changing, however, with OpenCL and Nvidia's CUDA development environment based on the C programming language.
Intel sees it this way: "Since the graphics pipeline is becoming more and more programmable, the graphics workload is making its way to be more and more suited to general purpose computing--something the Intel Architecture excels at and Larrabee will feature," an Intel spokesman said Friday, referring to Intel's upcoming graphics chip.
"Coming out with this C compiler and the CUDA architecture, that's the big change we made. We came up with an architecture that was more friendly and familiar to your average C programmer," Gupta added.
Note: The paragraph quoting an Intel spokesman originally said "Intel agrees--generally." This was changed to: "Intel sees it this way..."