
AMD's man of the future

newsmaker CTO Phil Hester's chip design strategy spans supporting co-processors and helping the developing world.

Tom Krazit Former Staff writer, CNET News
Advanced Micro Devices might not be planning a huge change in the way it makes chips this year, but that doesn't mean Phil Hester doesn't have a lot on his mind.

The new chief technology officer went to AMD last year from Newisys, an early designer of servers built with AMD's Opteron processor. He is responsible for the direction of AMD's overall chip design strategy, which encompasses everything from servers to PCs to newer devices that haven't really gone beyond the drawing board.

On Thursday, the chipmaker is holding a half-day meeting for analysts at its headquarters in Sunnyvale, Calif., where new details are expected to be revealed about its road map and future direction. Any announcements will receive intense scrutiny, as many people are wondering what AMD has in store for its next trick. It has announced plans for only minor changes this year, such as DDR2 memory support, as it prepares for a new architecture in 2007.

In just a few short years, AMD has moved from a niche player known mostly to gamers to a supplier to Fortune 500 companies across the world, even convincing longtime holdout Dell to build a server based on its Opteron chip earlier this month. Its market share among retail PCs has also increased over the past year, actually exceeding Intel's share at certain points in the U.S. market. But Intel is declaring that it is back, with the pending launch of new processors based on a more power-efficient architecture.

At the Future in Review conference in Coronado, Calif., recently, Hester sat down with CNET News.com to discuss how AMD is helping the software industry deal with multiple processing cores and gave his take on the hot conference topic, computers for the developing world.

Q: When it comes to client software, what is your role in getting the industry ready for the multicore era?
We spend a lot of time really trying to understand the end-user scenarios of "what is the application going to be like that these people would use? And what software characteristics do those applications have?"

If you look at it in the server space, I would argue that more or less all the contemporary applications have been developed for a multiprocessing environment. Multicore is just a different way to package that up, and so the server software works well in a multicore environment.

You can't say the same thing about the client software, because if you go back 20-plus years of the PC and client applications--with the possible exception of high-end workstation, dual-processor stuff--those applications have always benefited from single-threaded performance improvements in the processor.

Now that game has got to change. If you look at what's happening on the server side of things, you've seen companies build specialized accelerators for things like Java and XML, and the ability to easily add these customized processing blocks around a general-purpose core processor is something that we think is very important. You'll see us do more and talk quite a bit about that at the upcoming analyst meeting.

It seems that quad-core designs are going to be the way to go for a while. But then some people have started to wonder if you will get into a core race, the way you got into a gigahertz race.
There are only two ways to go faster: better single-threaded performance or more multithreaded performance. I mean, that's it--those are the two vectors. So the question then becomes, what's the balance between those two? Where we tend to lean right now is that across real workloads, you could in fact create more cores than you would see a benefit from.

That gets back to the discussion about the evolution of the software that's in the client space. For sure, there has got to be thought given to the performance of things beyond quad-core.
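(Hester's point about adding cores past the point of benefit can be read as an instance of Amdahl's law. The short sketch below is not part of the interview and the workload figures in it are purely illustrative assumptions; it simply shows how speedup flattens out as core counts grow when part of a program remains single-threaded.)

```python
# Minimal sketch of Amdahl's law: diminishing returns from extra cores.
# The parallel fractions below are hypothetical examples, not AMD data.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup when only `parallel_fraction` of the work scales across
    `cores`; the remaining fraction stays single-threaded."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

if __name__ == "__main__":
    core_counts = (1, 2, 4, 8, 16)
    for parallel_fraction in (0.50, 0.75, 0.95):  # hypothetical client workloads
        row = ", ".join(
            f"{n} cores = {amdahl_speedup(parallel_fraction, n):.2f}x"
            for n in core_counts
        )
        print(f"{parallel_fraction:.0%} parallel: {row}")
```

For a workload that is only 50 percent parallel, the sketch shows speedup stalling below 2x no matter how many cores are added, which is the kind of ceiling Hester is describing for today's client software.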

What do you guys think about things like (IBM's) Cell processor--that notion of the one larger processor with many smaller cores that do the heavy lifting?
It has its place. Again, it gets back to the question (of) how well does it run across a diverse set of workloads?

For graphics processing, it's a great model, but not for all the applications you are running. Software development is also hard. I mean, look at what's happening with the PlayStation 3, for example. I would argue that a lot of the issues around that are the complexity of the software.

It's a hard programming model. If you do it right, it's got some big benefits. (One model we might look at is) potentially a lighter-weight co-processor for specific workloads (around) an architecture that runs well across the diverse set of workloads. If you look at the client or server workloads that we see, you can't optimize for one thing without (hurting) the performance of something else, and that would be a big issue to enough customers.

The end user has to have a good experience across the whole range of workloads. For some of those workloads and applications, if you were able to accelerate them without penalizing the others, you'd like that. So we kind of start off with a core that performs well across that whole range of uses. Then you could add in things--externally or, at some point, internally--to deal with specialized workloads, where enough of the time you are going to get the benefit from that extra cost.

What do you make of this debate over the proper way to bring computing to the rest of the world?
Somewhere between 15 percent and 20 percent of the world has a PC. Well, some of those (other) people will need a traditional PC, and there will also be something that merges a cell phone and today's notebook. There will be a lot of focus on power efficiency and cost in those devices, so part of that is also to enable different ways to sell these platforms.

One of the things that we've seen is that...inside emerging countries, the purchase price of the cell phone was an issue that you could fix with a kind of subscription model. There's a similar sort of thing we think would happen on the PC. If you tell somebody this is going to cost $250, and it's a one-time payment before you can get access to it, that's a harder discussion. But if you can make 25 payments of $10 apiece, spread over a period of time, that's a much easier discussion--and by the way, you're not going to be forced to pay that amount on a regular basis; you'll pay it when you can afford it, and the device will keep working when you can't.

You also have to deal with the protection of components. When you first get a lot of these devices, someone other than you is on the hook for the liability of the cost. You are in there, you pay for it over time, right, but there is a phenomenon that can happen, like stripping cars. If somebody steals it and tries to take the components out and sell them as individual piece parts, for some period of time there is more value in doing that than in the device as a whole. So you need to think about how you protect the components of these systems to do as much as possible to prevent that sort of thing from happening.

Some participants at the conference have promoted a thin-client architecture for developing nations, which wouldn't require much processing power in each device. What's your take on that?
You just look at, for example, the rural areas, what goes on at the neighborhood school. In a number of cases, it would be restricting to say that you have to have a networking infrastructure in place to be able to take the first step. We think there will be devices which would certainly have the ability to run as a thin client with the network attached. But on their own, they are still going to be standalone-capable computing devices.

I think here in the U.S. and in a lot of developed countries, we now take broadband and, in general, even wireless broadband pretty much for granted. That's not the case in a number of these countries. In a lot of these places, power, for example, is unreliable. They don't even have AC power other than a couple of hours a day.