
High-end servers to stave off low-end attack

Massive multiprocessor machines will only lose some ground to groups of small systems, experts at CeBit America predict.

Stephen Shankland
NEW YORK--Low-end servers linked tightly together will encroach on the turf of massive multiprocessor machines but won't replace them, server experts predicted on Wednesday.

More powerful networks and better database software have made groups of low-end servers better able to share a single database, but the technology is still immature, panelists at the CeBit America trade show here said.

"The challenge there is really management. It's easier to manage a single operating system image than to manage a cluster of smaller nodes," said Jay Bretzmann, the manager of product marketing for IBM's Intel-based xSeries servers.

John Miller, the director of server marketing for Hewlett-Packard's Business-Critical Systems Organization, agreed, saying most customers with clustered databases are just dipping their toes in the technology with two-node groups.

"I think that the technology needs to evolve more...I don't think, if you look at the broader context, you're going to see a mass exodus from scale-up computing," Miller said, referring to the use of massive single systems.

There are some areas where the "scale out" approach--with numerous small systems--beats the "scale up" approach hands down, for tasks such as streaming video and hosting online games, Bretzmann said.

But in terms of money spent, larger multiprocessor systems still dominate, said Mark Melenovsky, an analyst with research firm IDC. "While scale-out has gotten a lot of attention, more than half of spending is still on four-way or greater platforms," he said.

Oracle and Dell are loud advocates of clustered databases, with HP and IBM giving more qualified support.

The 64-bit question
Panelists also ventured other predictions about the future of servers at the show. One debate likely to fade quickly is about the need for 64-bit processors, which can gracefully accommodate more than 4GB of memory, Bretzmann predicted.
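(The 4GB figure follows from simple arithmetic: a 32-bit processor can address at most 2^32 bytes of memory, or 4GB, while a 64-bit processor raises that ceiling to 2^64 bytes--far more memory than any server can actually hold.)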

"I don't think we'll discuss this after 2004, because 64-bit computing will be ubiquitous," he said.

Chipmaker Advanced Micro Devices led the charge to add 64-bit enhancements to the previously 32-bit "x86" chips, and Intel plans to follow suit in coming weeks with the first Xeon model to include the technology. Intel also backs a separate, higher-end family of 64-bit chips--Itanium--which was co-developed with HP but so far has not been widely used.

Intel has said that Itanium and Xeon systems will cost the same by 2007, through the use of common hardware designs. However, HP's Miller was more cautious.

"Quite frankly, I think that's a little soon. But it does show what Intel's belief is," Miller said.

IDC in January lowered forecasts for sales of Itanium-based servers as a result of the arrival of 64-bit x86 chips.

Exercising utility
The CeBit America panelists also tackled the idea of utility computing--a concept also touted under the adaptive, on-demand, organic and dynamic computing labels. It aims to cut management costs by enabling businesses to pool servers and other technology into a fluid computing resource, so that the cost of running the system matches the computing load it handles.

IBM, HP and Sun Microsystems are moving as fast as they can to build utility computing technology, which tends to be enormously complicated. But other providers may not be keeping up.

"I do think Microsoft is behind," Miller said, adding that the software maker "can evolve and shouldn't be taken for granted."

And Microsoft will begin to make more headway with the release of its Virtual Server 2005 product, which lets a single server run several operating systems through a technology called virtualization, Bretzmann said. That technology, which already exists on Unix servers, on mainframes, and on Intel-based servers using EMC's VMware software, is a key step toward building a fluid computing environment.

"The problem they had is a support issue. If somebody was running virtualization and had a problem, the first thing Microsoft would tell you is, 'Take that virtualization away,'" Bretzmann said.

Microsoft's response to the competition will come from expanding its software lines beyond Windows, Melenovsky said.

"A lot of the profit pool associated with the core operating system is draining and bleeding away into more advanced feature such as workload management and virtualization. It's certainly a strategic direction for Microsoft," Melenovsky said.