

Charting Sun's new chip strategy

Sun's David Yen explains why cozying up to x86 processors is a good idea for a company that has traditionally preferred to design its own chips.

After years of boasting about the advantages of a single processor architecture, Sun Microsystems is changing with the times. Under pressure from rivals, the company has begun to incorporate lower-end "x86" chips into its product lines.

"We see Sun offering certain x86-based products," said David Yen, a 15-year veteran who runs the company's processor group. "In other words, for some of the (chip areas), we will not waste any of our resources to pursue that. Maybe an Intel solution or an AMD solution is good enough."

It's probably a good idea. Unix systems built around proprietary chips have traditionally powered most servers in high-end data centers. But in the second quarter of 2002, IDC reported, more money was spent on Intel-based servers than on Unix servers for the first time.

The decision to let x86, or Intel-compatible, processors into the fold wasn't as difficult as it might seem at first blush. Sun has been trying for months to redefine itself as a company that sells integrated collections of hardware and software--systems, as company executives are fond of saying.

The job of making all this fall into place falls to Yen, who discussed his views with CNET News.com about changes in chip technology as well as Sun's overhaul of the UltraSparc family following its acquisition of Afara Websystems.

Q: So do you think Itanium is doomed?
A: It is an architecture that's kind of inspired by late-'70s high-performance computing--in other words, a pretty narrow focus--and yet it's trying to get applied in the '90s to a general-purpose computing environment.

Certainly there have been rumors from the beginning that Intel has sunk multiple billions of dollars into this program. On top of that, their image and self-respect--everything is at stake. So I think they will continue carrying Itanium. But...they knew the consequences of their decision. They knew they broke binary compatibility (preventing programs written for x86 chips from running easily on Itanium chips). They knew they had to re-create an ecosystem. Probably they have such high confidence in their resourcefulness and their "macho-ness" that they think they can overcome it.

Sun actually tried it (breaking binary compatibility) several years back...but after we provided the compilers and ported the operating system, we looked at it on the applications side and eventually threw in the towel. We know how formidable it is to try to create a new software environment.

They created such an opportunity for AMD, which came in and did an incremental 32-bit to 64-bit extension on the foundation of the Athlon design in a relatively short time with Opteron. A huge advantage is they allow a customer to continue running their 32-bit applications.

How do you think Intel will respond?
Intel certainly recognized (the Opteron threat) a year or two ago...If you look at Prescott, if you look at some of the new (chips)--in particular some of those coming up in 2004--Intel's very quietly putting something called a 64-bit extension in. I think Intel will have to address this vulnerability created by Itanium versus Opteron. They need a better defense.

So Intel put a 64-bit extension into Prescott?
Yeah.

With 64-bit memory addressing?
With their 64-bit architecture, they will allow programs to have 64-bit addressability. I suspect there also will be 64-bit registers and 64-bit functional units. But basically it's an Opteron-compatible design.

You're getting pushed on one end by IBM and by Dell on another. Are you in danger of finding yourself in a revenue death spiral? Texas Instruments helps with chip manufacturing, but is Sun going to be big enough to invest in processor technology? Isn't the fact that you've let Intel in at the low end an admission of the new economic reality?

Frankly speaking, there's not much difference designing a data processor chip versus designing a full system. Nobody criticizes a system company, saying they invest too much money building their own hardware system. And yet people think that investing in R&D to build your own processor is too much to bear. My group accounts for only a small portion of Sun's $2 billion (spent annually) in R&D.

In fact, more and more system parts are actually being absorbed into the processor chip. You always need a processor. But then with more capacity and more capability, what do you do? You absorb. Either you try to have SMP (symmetrical multiprocessing) on a chip--

Multicore processors?
Yep. Or if the processor is good enough, and yet only costs a fraction of the possible die size, then you start absorbing the memory controller, the networking interface, the crypto logic--everything you can think of onto that chip. Probably the only thing you should do without is the DRAM (the computer's main memory).

If that's the case, some of the system companies only have two choices. One is to retain their microelectronics capabilities so that they will be forever able to control their own destiny and do their own hardware, whether on the chip or on the board. Or you have to rely on the chip industry to supply you with all the parts and you become a system integrator.

Your Niagara processor (scheduled to debut in 2005 or 2006) will be the first of a new family of multicore processors. What's happening with the higher-end S series successors to Niagara?
We actually have multiple projects.

Three projects? Four?
More than two.

Will UltraSparc VI be on the UltraSparc V lineage, or something that's going to pick up from the Afara side (where the Niagara technology originated)?
The issue is the balance. On the one hand, we strongly believe in the supercomputing vision, and that inspired a lot of good ideas. Still, all today's businesses are based on these more traditional processors...So how do you allocate resources to pursue some of these more incremental developments, versus a new product, a new vision, a new architecture? That is the challenge.

I think you will see in the next six to 12 months Sun make significant steps--particularly in the processor area--to allow the processor group to transition from a more incremental development scenario to a more aggressive, next generation of processor scenario.

Have you considered using IBM to build your chips? For example, they build the PA-RISC line for HP.
We actually did have an interaction with IBM, especially last year, because of their strong desire to acquire more customers to help amortize their new 300-millimeter (a larger silicon wafer size) fab investment...(but) there are so many additional considerations. From a product side, they are certainly competing with us. We're not religious. We want to keep our options open. In the last two years, we did (a review), not just with IBM but with several potential fab candidates. Every time, we came to the very objective conclusion that we have one of the best partners.

How about the Power architecture? It seems that's closer to Sun's philosophy. How do you see IBM as a competitive threat?
I think the Power4 project was a good one. Power5 is an enhancement in the right direction...But IBM is IBM. It is probably going to (be) more conservative. IBM probably can make up some of that conservatism with the fact that they also own the system and maybe the operating system, assuming they aren't dumping AIX--or they can extend Linux to the point where it really is efficient and scalable.

IBM is moving "hypervisor" technology onto the chip (giving the chip control over hardware resources so the server is more flexible and can be partitioned better into smaller independent servers).
All the new Sparc processors, including Niagara and follow-ons, will include a "hypervisor" layer. That will provide us extra flexibility to somewhat decouple the operating system and the underlying hardware platform, which may translate into a faster release cycle.

Would that make it easier to move a task from one processor to another? Or is it an abstraction layer where you can change from one chip design to the next chip design without the software needing to know?
It's an extra layer of abstraction. Once it's done, the hardware has bigger flexibility to make certain changes without disturbing the Solaris operating system. The challenge there is you don't want to degrade the performance. You want to make that layer very lean and mean, and yet provide the abstraction and, therefore, flexibility.