
AMD's SSE5 ends the old RISC vs. CISC debate

Prompted by the chipmaker's announcement of the SSE5 instruction-set extensions, Glaskowsky analyzes the ultimate outcome of this old controversy.

Peter Glaskowsky
Peter N. Glaskowsky is a computer architect in Silicon Valley and a technology analyst for the Envisioneering Group. He has designed chip- and board-level products in the defense and computer industries, managed design teams, and served as editor in chief of the industry newsletter "Microprocessor Report." He is a member of the CNET Blog Network and is not an employee of CNET.

Remember how I said that Moore's Law is "the full-employment act for computer pundits"?

In the smaller niche of microprocessor journalism, there used to be another topic that was always good for a column: RISC vs. CISC.

In the early days of computing, a CPU (central processing unit) was a series of refrigerator-size cabinets in the computer room. Memory capacity was very limited. Computer scientists would analyze how programs executed on these machines and look for ways to shorten and speed up their programs by defining new instructions.

For example, if a database application was seen to fetch data from an address, add one to the address, and repeat, the next machine in the series might get some extra logic to implement a single instruction that does the fetch and the increment operation. That way the program would be one instruction shorter and run a little faster.
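To make that concrete, here's a minimal C sketch of the access pattern such an instruction targets (my own illustration; the function name is hypothetical). On a machine with a load-with-postincrement instruction, the loop body can compile to a single instruction; on a machine without one, it typically becomes a separate load and add.

    #include <stdio.h>

    /* Sum a table of values by fetching each one and then advancing
     * the pointer -- the fetch-and-increment pattern described above. */
    long sum_table(const long *p, int count)
    {
        long total = 0;
        while (count-- > 0)
            total += *p++;  /* CISC: one load-with-postincrement; RISC: load, then add */
        return total;
    }

    int main(void)
    {
        long table[] = { 3, 1, 4, 1, 5 };
        printf("%ld\n", sum_table(table, 5));  /* prints 14 */
        return 0;
    }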

So that explains where CISC (complex instruction-set computing) came from. The x86 processors from Advanced Micro Devices and Intel are examples of CISC processors.

The granddaddy of all computer architectures today, IBM's zSeries, has some 894 different instructions (as I mentioned in my blog post about IBM's z6 presentation at Hot Chips).

But even back in the 1960s and 1970s, IBM and CDC (Control Data Corp.) had people researching a different approach, which became known as RISC, for reduced instruction-set computing. The basic idea behind RISC is to have relatively few instructions and a very regular logic design, omitting all the special-purpose instructions in favor of being able to run faster and more efficiently.

In the 1990s, there was a lot of debate over whether RISC or CISC was the better approach, especially in the pages of the Microprocessor Report newsletter where I used to work.

Ultimately, however, they both won--CISC in software and RISC in hardware. The x86 architecture dominates the PC and server markets, but the guts of modern x86 chips are very RISC-like. The combination is made possible by translating complex individual instructions into short sequences of simple ones. It sounds a little awkward but works well in practice; this approach has been standard for 10 years now.
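As a rough illustration of that translation step, here's a toy model in C (emphatically not how any shipping decoder is built; the structures and the three-micro-op expansion are my own simplification). A single CISC-style "add memory, register" instruction is broken into three simple micro-ops--load, add, store--and those are what the core actually executes.

    #include <stdio.h>

    /* Toy model of CISC-to-RISC translation: one complex instruction
     * becomes a short sequence of simple micro-ops run on a tiny machine. */

    enum uop_kind { UOP_LOAD, UOP_ADD, UOP_STORE };

    struct uop { enum uop_kind kind; int addr, src, dst; };

    static int mem[16];  /* toy memory */
    static int reg[4];   /* toy registers; reg[3] serves as a scratch temp */

    static void run(const struct uop *u, int n)
    {
        for (int i = 0; i < n; i++) {
            switch (u[i].kind) {
            case UOP_LOAD:  reg[u[i].dst] = mem[u[i].addr];                 break;
            case UOP_ADD:   reg[u[i].dst] = reg[u[i].dst] + reg[u[i].src]; break;
            case UOP_STORE: mem[u[i].addr] = reg[u[i].src];                 break;
            }
        }
    }

    int main(void)
    {
        mem[5] = 40;
        reg[1] = 2;

        /* The CISC instruction "add [5], r1" expands to three micro-ops. */
        struct uop seq[] = {
            { UOP_LOAD,  5, 0, 3 },  /* load  t, [5]    */
            { UOP_ADD,   0, 1, 3 },  /* add   t, t, r1  */
            { UOP_STORE, 5, 3, 0 },  /* store [5], t    */
        };
        run(seq, 3);
        printf("mem[5] = %d\n", mem[5]);  /* prints 42 */
        return 0;
    }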

What prompts this history lesson is Thursday's announcement by AMD that it will be adding a new set of multimedia instructions to its forthcoming "Bulldozer" processor...and that it's calling them SSE5 (for Streaming SIMD Extensions).
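For readers who haven't worked with these extensions: SIMD (single instruction, multiple data) instructions operate on several data elements at once. SSE5 details weren't public tooling when this was written, so here's a short sketch using the original SSE intrinsics from xmmintrin.h, which any x86 compiler exposes; one ADDPS instruction adds four floats in parallel.

    #include <stdio.h>
    #include <xmmintrin.h>  /* original SSE intrinsics */

    int main(void)
    {
        float a[4]   = { 1.0f, 2.0f, 3.0f, 4.0f };
        float b[4]   = { 10.0f, 20.0f, 30.0f, 40.0f };
        float out[4];

        __m128 va = _mm_loadu_ps(a);     /* load four floats into one register */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vc = _mm_add_ps(va, vb);  /* a single ADDPS adds all four lanes */
        _mm_storeu_ps(out, vc);

        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
        return 0;
    }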

AMD's use of this name is rather unusual because the previous four sets of SSE instructions were defined by Intel. That company hasn't yet indicated how it feels about AMD's use of the acronym or whether it will adopt SSE5 in its own processors. AMD has adopted Intel's SSE, SSE2 and SSE3 instructions, as well as a few of the SSE4 instructions Intel announced earlier this year.

I could go on for another page describing the details of all these sets of SSE instructions and how they've been fragmented across both AMD and Intel microprocessors, but suffice it to say that the SSE5 announcement removes any remaining doubt that CISC and RISC are both alive and well. Instruction sets will keep getting more complicated, and they'll continue to be executed by processors built on the principles of RISC.

Updated to add that last sentence to make the conclusion implied by the headline a little clearer. Sometimes things make more sense in the writer's head than on the written page. Sorry... --png