
Intel server revamp to follow AMD

The chipmaker plans to launch an internal computer communications system that mirrors a technology central to AMD's gains.

Intel is getting ready to introduce a chip communications technology that mirrors an approach central to recent successes of rival Advanced Micro Devices.

If Intel's newly competitive processors act as the brains of a server, then the Common System Interface (CSI) is its nervous system. The technology, set for release in 2008, provides a new way for processors to communicate with each other and with the rest of a computer.

And alongside CSI, Intel plans to release an integrated memory controller, which is housed on the main processor rather than on a separate supporting chip. This will speed memory performance and so dovetail with the new communications system, the company expects.

Together, they could help Intel provide a much-needed counterpunch to AMD, which in 2003 introduced an integrated memory controller and a high-speed interconnect called HyperTransport in its Opteron and Athlon 64 processors. The two communication technologies, marketed together as "Direct Connect Architecture," deliver lower processor costs and chip performance advantages, which AMD has used to win a place in the designs of all of the big four server makers.

"Intel is hoping CSI will do for them in servers what 'CSI' did for CBS in ratings," said Insight 64 analyst Nathan Brookwood, referring to the hit TV series "CSI: Crime Scene Investigation."

Intel has been tight-lipped about CSI. However, Tom Kilroy, general manager of the company's Digital Enterprise Group, did confirm some details in a recent CNET News.com interview. Further glimpses have come from server makers, who are eager for CSI's debut in the "Tukwila" Itanium chip, due in 2008.

Tracking CSI
CSI brings two major changes. First, it will boost processor performance compared with Intel's current chip communication technology, the front-side bus.

"From a pure performance perspective, when we get to Tukwila and CSI, and we actually get some of the benefits of that protocol introduced into our systems, I think it's going to be really a big deal," said Rich Marcello, general manager of HP's Business Critical Server group.

CSI will be instrumental in helping double the performance of the Tukwila generation of servers, he noted.

Second, CSI will help Itanium server designers take advantage of mainstream Xeon server technology. Both chip families will use the interface, Kilroy said. That's particularly useful for companies such as Unisys, whose servers can use both processor types. It will make it possible for elements of a design to be used in both kinds of machine, reducing development costs and speeding development times.

"CSI allows us to continue to consolidate and standardize on fewer technologies," said Mark Feverston, Unisys' director of enterprise servers. "We can now go to a more common platform that allows us to build the same solutions in a more economical fashion."

CSI hasn't been easy to bring to market, though. In 2005, Intel dramatically altered the schedule for its introduction. Initially, the plan was for it to debut in 2007 with the Tukwila Itanium processor and the high-end "Whitefield" Xeon. But in October, Intel delayed Tukwila to 2008 and canceled Whitefield.


Whitefield's replacement, "Tigerton," and a sequel called "Dunnington" both use the front-side bus for communications. That means CSI won't arrive in high-end Xeons until 2009.

In the meantime, Intel has used other methods to compete with AMD--speeding up the front-side bus and building in large amounts of cache memory, for example.

"We've taken a different road, but down the road we'll end up getting an integrated memory controller and CSI in our platform," Kilroy said. "It's just a matter of priority for us."

Why add CSI?
Memory communication speed is a major factor in computer design today. In particular, memory performance has not kept pace with processor performance, and the widening gap causes problems. To compensate, computer designers have put special high-speed memory, called "cache," directly on the processor.

But in multiprocessor systems, cache poses a problem. If one processor changes a cache memory entry, but that change isn't reflected in the main memory, there's a risk that another processor might retrieve out-of-date information from that main memory. To keep caches synchronized--a requirement called "cache coherency"--processors must keep abreast of changes other processors make.
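
The stale-read hazard, and the invalidation that prevents it, can be sketched in a few lines of Python. This is purely an illustrative model, not how silicon implements coherency; the write-through policy and the explicit `peers` list are simplifications invented for the sketch:

```python
# Toy model: two CPU caches over one main memory, showing why
# cache coherency is needed in a multiprocessor system.

main_memory = {"x": 0}

class Cache:
    def __init__(self):
        self.lines = {}

    def read(self, addr):
        # On a miss, fetch from main memory and keep a local copy.
        if addr not in self.lines:
            self.lines[addr] = main_memory[addr]
        return self.lines[addr]

    def write(self, addr, value, peers=()):
        self.lines[addr] = value
        main_memory[addr] = value    # write-through, for simplicity
        for peer in peers:           # coherency: invalidate stale copies
            peer.lines.pop(addr, None)

cpu0, cpu1 = Cache(), Cache()
cpu1.read("x")                  # cpu1 now caches x == 0

# Without invalidation, cpu1 keeps serving its out-of-date copy:
cpu0.write("x", 42, peers=[])   # no invalidation message sent
stale = cpu1.read("x")          # still sees 0

# With invalidation, cpu1's copy is discarded and re-fetched:
cpu0.write("x", 7, peers=[cpu1])
fresh = cpu1.read("x")          # sees 7
```

In real processors this bookkeeping is done in hardware by a coherency protocol; the point of the sketch is only that every write must somehow reach caches holding older copies.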

With Intel's current designs, an extra chip called the chipset coordinates such communications between processors via the front-side bus. In contrast, with HyperTransport and CSI, the processors communicate directly with each other.

Intel also relies on the chipset to mediate communication between processors and main memory. Technology such as CSI reduces the chipset's role: each processor reaches its own memory directly through an integrated controller, and over the interconnect it can quickly retrieve data stored in memory connected to another chip.

"The biggest advantage CSI offers is performance and the fact that you basically get a direct connection between the processors. That results in reduced latency between the processors," said Craig Church, Unisys's director of server development. The integrated memory controllers, too, will reduce latency, or communication delays, when a chip is fetching data from its own memory, he added.
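
The topology difference Church describes can be illustrated with a toy hop count. These functions are an assumption-laden sketch, not Intel's or AMD's actual routing: they just contrast a shared front-side bus, where a chipset owns all memory, with direct links, where each processor's integrated controller owns a slice of it:

```python
# Toy hop-count model (illustrative only): how many links does a
# memory request cross in each design?

def fsb_hops(cpu, mem_owner):
    # Front-side bus: every request goes through the chipset,
    # regardless of which memory bank holds the data.
    return 2                     # cpu -> chipset -> memory

def direct_hops(cpu, mem_owner):
    # Direct interconnect: each processor owns local memory.
    if cpu == mem_owner:
        return 1                 # cpu -> local memory
    return 2                     # cpu -> peer cpu -> peer's memory
```

Local accesses lose the middleman entirely (`direct_hops(0, 0)` is 1 versus 2 over the bus), while remote accesses are no worse, which is the latency reduction Church points to.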

AMD has adopted the integrated memory controller in all its x86 chips, but it's not alone in endorsing the approach. IBM's Power and Sun Microsystems' UltraSparc, which compete with Intel's server line, have had integrated memory controllers for years.

With a chipset controlling memory instead of the main processor, "You basically have this middleman, and that introduces a significant amount of latency in the memory transaction," said Mercury Research analyst Dean McCarron.

An integrated memory controller not only lets main memory respond faster, it also allows cache sizes, and therefore chip-manufacturing expenses, to be reduced. Indeed, smaller cache sizes have helped AMD remain competitive with Intel, even though it's about a year behind in its transition to more advanced manufacturing with smaller circuitry elements.
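
The cache-size tradeoff can be made concrete with back-of-the-envelope arithmetic. The cycle counts and hit rates below are made up purely to illustrate the averaging, not measured figures for any chip: average memory latency is roughly the cache hit rate times the cache latency, plus the miss rate times the main-memory latency, so a faster path to main memory tolerates a lower hit rate, i.e. a smaller cache:

```python
# Toy latency model (invented numbers): average access time as a
# weighted mix of cache hits and main-memory misses.

def avg_latency(hit_rate, cache_lat=3, mem_lat=200):
    return hit_rate * cache_lat + (1 - hit_rate) * mem_lat

# Chipset-mediated memory (~200 cycles) with a big cache (97% hits)...
via_chipset = avg_latency(0.97, mem_lat=200)
# ...versus an integrated controller (~120 cycles) with a smaller
# cache (95% hits): the faster memory path makes up for the misses.
integrated = avg_latency(0.95, mem_lat=120)
```

Under these assumed numbers the smaller-cache design averages out slightly ahead, which is the economics behind AMD's smaller, cheaper dies.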

Intel defends its decision to stick with the front-side bus as long as it has, arguing that the choice has given it flexibility in memory standards and that it's been able to compensate elsewhere to keep up with performance.

"Our competition had to go to an integrated memory controller because they can't get the same...amount of cache on a die as we can," Kilroy said. "And we've been able to scale the front-side bus far greater than ever thought. We're now at 1333MHz. The speculation was that we wouldn't be able to scale to that."

Lowering design barriers
CSI is designed to lower hardware barriers, making it less expensive for server makers to design servers using both chips. Indeed, the word "common" refers to the fact that Itanium and Xeon use the interface.

With CSI, a server could be designed to be totally "plug-compatible," meaning the chips would be interchangeable, Church said. "From a Unisys perspective, if a customer wants an Itanium system, we take an Itanium processor and plug it into our common platform. If they want Xeon, we plug a Xeon into our common platform," he said. "That essentially is the nirvana, and it is the goal."

Nevertheless, server makers are faced with some differences in CSI for Xeon and Itanium, Marcello said. "The CSI implementations are 95 percent the same, but there's a little bit of difference there. For that reason, we'll be close but not exactly the same," he said. However, they will be similar enough that some joint design work can be shared, he added.

Keeping up with the Joneses
Once Intel matches AMD's chip communication technologies, it will become a better competitor, Brookwood said.

"The big issue for Intel is moving from the front-side bus architecture to more of a distributed architecture," Brookwood said. "Once they get that in place and have workable schemes for managing cache coherency and memory access across processors, then they will be well-positioned to compete on almost any basis with what AMD has been doing. The Direct Connect architecture has been AMD's not-so-secret sauce for the last four years."

But AMD has plans of its own. In 2007, it will move to HyperTransport 3.0. The update increases communication speeds and enables construction of 16-processor servers instead of the eight-processor machines that HyperTransport currently permits, said Marty Seyer, a vice president in AMD's commercial business unit.

In addition, the company believes the openness of HyperTransport is an advantage. The technology is governed and licensed by the HyperTransport Consortium.

One company very interested in HyperTransport's openness is Cray. "It's a huge benefit," said Jan Silverman, senior vice president of corporate strategy at the supercomputer maker. "It's not free, but the terms are much more palatable than anything that I have seen from Intel in the past."

The openness also means Cray can use HyperTransport to connect Opteron chips to its own networking chips. And when it wants to use HyperTransport to plug calculation acceleration engines into a computer, it can buy them from a company called DRC Computer that specializes in the engines, instead of having to make its own.

AMD's Opteron years have left an impression on Silverman that Intel will have to work hard to reverse. "There was a point in time when Intel used to lead the industry. Now they're following AMD on 64 bits, following on dual-core, following on low-power consumption chips, and now they're going to follow AMD in exposing their Intel architecture," Silverman said.
