
Amazon adds graphics-chip computing service

Tapping into the GPU's power for more than just graphics work is a major trend in computing. Now Amazon Web Services is jumping into the market with a GPU-based service.

Reflecting a growing trend in the tech industry, Amazon Web Services has added a new computing style that uses computers' graphics chips.

AWS' Elastic Compute Cloud (EC2) lets people rent different varieties of online computing resources, paying as they go. EC2 began with a conventional business server setup, but Amazon has been adding different instance types tuned to particular computing needs.

The new Cluster GPU Instance is a server with two quad-core Intel Nehalem-series Xeon X5570 processors, two Nvidia Fermi-series Tesla M2050 graphics chips, 22GB of memory, 1.7TB of storage, and a 10 gigabit per second Ethernet connection, Amazon said today in a blog post by Jeff Barr, Amazon's lead web services evangelist.

Graphics processors began their existence as chips dedicated to speeding up graphics operations on computers, chiefly 3D games and design software. But graphics chips, which have been dramatically increasing in performance in recent years, are good for more than that. That's why they've begun showing up in supercomputers--including the new fastest supercomputer, the Tianhe-1A in China--and why graphics chips are now called upon for seemingly mundane chores such as rendering text and Web pages.

Amazon Web Services' new service combines conventional central processing units (CPUs) with graphics processing units (GPUs). Amazon

More specifically, GPUs can be used for processing media data--resizing videos, for example, or compressing audio--and for some kinds of calculations that run in parallel. That's because graphics chips are good at running the same sort of operation on lots of data. Indeed, each Nvidia M2050 has 448 processing cores for that sort of parallel work.

Programming these hybrid systems is tricky, though--for example, the graphics chip and the conventional processor each have their own memory. To tap the GPU, programmers can write directly to Nvidia's CUDA technology for GPU processing, use libraries of code adapted for it, or use the cross-vendor OpenCL standard from the Khronos Group.
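To see why that separate-memory detail matters, here is a conceptual sketch in plain Python (not a real GPU API--the function name and steps are illustrative) of the workflow CUDA and OpenCL programs follow: copy data to the device, run the same operation on every element in parallel, then copy results back.

```python
# Conceptual sketch of GPU-style data-parallel processing.
# This is ordinary Python standing in for a real CUDA/OpenCL program;
# the list copy mimics the host-to-device memory transfer, and the
# comprehension stands in for a kernel applying one operation to all data.

def gpu_style_map(data, op):
    device_buffer = list(data)                 # 1. copy input to "device" memory
    results = [op(x) for x in device_buffer]   # 2. "kernel": same op on each element
    return results                             # 3. copy results back to the host

# On the Tesla M2050, each of the 448 cores would process a slice of
# the elements at once; here a sequential loop plays that role.
print(gpu_style_map([1, 2, 3, 4], lambda x: x * x))  # [1, 4, 9, 16]
```

The explicit copy steps are the tricky part the article alludes to: a real program must decide when data crosses between CPU and GPU memory, because those transfers can easily cost more time than the computation saves.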

For now, the new GPU instance runs only Linux and is available only in Amazon's northern Virginia region. It's also the most expensive EC2 option so far, costing $2.10 per hour to use. That compares to 34 cents per hour for a "large" but conventional server running Linux, or 48 cents for the same machine with Windows.
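Using the hourly rates quoted above, a quick back-of-the-envelope comparison (the 10-hour job length is just an example) shows how the prices stack up over a workday:

```python
# Hourly EC2 rates from the article, in US dollars.
GPU_RATE = 2.10       # Cluster GPU Instance
LARGE_LINUX = 0.34    # "large" conventional instance, Linux
LARGE_WINDOWS = 0.48  # same instance, Windows

hours = 10  # hypothetical job length
print(f"GPU instance:   ${GPU_RATE * hours:.2f}")       # $21.00
print(f"Large (Linux):  ${LARGE_LINUX * hours:.2f}")    # $3.40
print(f"Large (Windows): ${LARGE_WINDOWS * hours:.2f}") # $4.80
```

The GPU instance costs roughly six times the large Linux instance per hour, so it only pays off for workloads that the GPU accelerates by more than that factor.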