As CTO of IBM's xSeries Intel server group, Tom Bradicich sometimes knocks heads with colleagues to promote a product line his own company still views as second-rate. But his labors are bearing fruit.
It's one thing to stand up to criticism from competitors. But as chief technology officer of IBM's xSeries Intel server group, Bradicich also has his own company to reckon with. His colleagues who sell higher-end Unix servers and mainframes routinely make the "server consolidation" sales pitch, urging customers to replace legions of unmanageable Intel servers with centralized powerful machines.
But with support from IBM's top managers, Bradicich has registered steady progress. Most recently, he helped shepherd the x440 "Vigil" server, which uses IBM's Enterprise X Architecture "Summit" chipset to create a system with as many as 16 Intel Xeon processors.
Along with the z900 "Freeway" mainframe and p690 "Regatta" Unix server, the Vigil line is part of IBM's server resurgence after a period of drift. For several years, Big Blue was caught between Sun Microsystems at the high end of the server market and Dell Computer and Compaq Computer on the lower end.
Bradicich has been the brains behind part of IBM's response: the X Architecture plan launched in 1998 to endow its Intel servers with features borrowed from designs for higher-end servers.
Not coincidentally, the X Architecture plan has demoted Intel to the lesser role of component supplier rather than system designer. That arrangement has worked to put the initiative--if not the profit--back into IBM's hands.
After receiving his Ph.D. in computer engineering from the University of Florida, Bradicich joined IBM's PC division right after the debut of the company's original personal computer in 1981. He moved to work on servers in 1998 to lead the X Architecture program. Bradicich spoke about the ups and downs of IBM's Intel server business in an interview with CNET News.com.
Q: How different is it working on servers compared with PCs and laptops?
A: Much different.
How so?
Because of the business-critical nature of it and the reliability (requirements). If you lose the report you're writing on the desktop, the corporation says, "Stay another half-hour late and redo it." When a server goes down, you're losing $10,000 a minute as a small business or $1 million as a large business. The impact or profundity of the quality to me is intriguingly different.
How hard has it been convincing IBM that Intel servers are worth investing in?
Prior to 1998 (or) 1997, we had a very poor approach to (Intel servers)...But at the same time, it wasn't an abandonment. We really felt Intel could not cover the entire marketplace.
We are very big on consolidation of Intel servers onto Intel servers. It frankly is very hard to justify because of the software. A hardened Unix application requires a level of security, for example, (that) just doesn't exist (on xSeries) today. There is a need for that.
What are the most significant X Architecture technologies in the market?
I think the systems management tools and the software rejuvenation jump out--the ability to predict a hang and take evasive action...it remains exclusive. Under that same banner is concurrent diagnostics--the ability to take diagnostics without bringing the server down.
The second (most significant) would be the way we've developed the input-output architecture. You recall PCI to PCI-X to PCI-X 2.0 now. And then of course InfiniBand.
The third one is Enterprise X Architecture. EXA is really the quintessential manifestation of companywide cooperation. We had contributions from five groups: our mainframe division, our research laboratories, our acquisition of Sequent, our microelectronics division and our software division.
Hyper-threading (which lets a single processor act in some ways like two) is showing up in the Xeon chips from Intel. What type of performance improvements have you seen?
Like most performance improvements, it runs the gamut based on application. We are seeing gains as high as 20 percent with hyper-threading on some applications--those that are very internally CPU-intensive (as opposed to those that rely on fast communications with memory, the network or storage systems).
Keep in mind hyper-threading is more like a bicycle built for two than it is two separate chips. A bike built for two has separate seats, handlebars and pedals, but they share tires and a body. It's a sort of a hybrid--it's not two processors.
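Bradicich's "bicycle built for two" point--that hyper-threading presents extra logical processors without adding a second physical chip--can be observed directly from software. As a present-day illustration (not part of the original interview), this Python sketch reads the logical processor count the operating system reports; on a hyper-threaded machine that count includes each hardware thread, so it can be twice the number of physical cores.

```python
import os

# Number of logical processors visible to the OS. With hyper-threading
# enabled, each physical core exposes two hardware threads, so this
# figure counts the "seats on the bicycle," not separate bicycles.
logical_cpus = os.cpu_count()

print(f"Logical processors visible to the OS: {logical_cpus}")
```

Comparing this figure against the physical core count (on Linux, for example, by inspecting /proc/cpuinfo) shows the gap the analogy describes: more schedulable threads than independent execution engines.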
How has Linux changed the dynamics of the Intel server market? It appears to me that it has given IBM a bit more control over the system. IBM doesn't control Linux, but it can have more influence than with Windows.
It has given IBM an opportunity we didn't have before to play to our strengths, which are availability and reliability on an Intel platform.
When Linux came along as a viable OS that runs on Intel, it made one less component of an Intel server unilaterally controlled.
Therefore, that opens it up for any company--including IBM--(as a way) to play to its strengths. Ours is reliability and scalability. We can influence by contributing to open source, by getting hold of the code and understanding it. We get source code from Microsoft, too, because we have that type of relationship with them. But we're not free to customize it.
How much influence do you have over Windows? You have the Microsoft center in Washington, the Cornhusker cluster technology that came and went.
We have among the most--if not the most--in the industry. That is because of our expertise with operating-system technologies. Remember Active PCI (which allows PCI cards to be added to a system without shutting it down)? Microsoft liked that so much they began shipping it. If you count up all the things, we probably win against my Intel server competitors. We still have our IBM Center for Microsoft Technology located very close to Redmond, Wash., up there in Kirkland. We still have that fully staffed.
PCI-X now is standard in ServerWorks chipsets, and PCI-X 2.0 is solidly established. How do you see PCI-X and 3GIO (now called PCI Express) competing? 3GIO comes from the desktop but PCI-X came from the server companies.
The application set and the segmentation of the server market are allowing for more technologies than I, and others, would otherwise have thought would be able to coexist. It's not unlike (the) automobile industry, where there's the convertible, the pickup truck, the sedan.
If you take an existing standard and incrementally make it better, it's always going to win. (Computer product designers) say, "I can use my same design tools and engineers; I don't have to learn a new technology." The customer side says, "You're giving me something that's backward-compatible." That's what we saw with PCI-X and that's what we're seeing with PCI-X 2.0.
3GIO does not exist yet. Since it's not a ratified specification, you're learning about proposals on the table. The first implementers will be in mid-to-late 2004. Some would say they're more aggressive. It is born from--and optimized for--the desktop, so it will show up there. I don't have a doubt about it.
And what about InfiniBand?
InfiniBand is not low-cost. But it's really the only (input/output) technology that has built low latency and a high quality of retry. It ensures the packet of data gets to its intended destination. Ethernet is very scalable, but doesn't have those two features. It's very high latency now and doesn't have the security of acknowledging, "I got the packet." It's the fastest thing on copper right now. (InfiniBand chipmaker) Mellanox has 10 gigabits per second on copper (wires). Ten gigabits on Ethernet is coming, but it's on optical (fiber, not copper wires).