Intel feeds virtualization's need for speed

Chipmaker's first-generation virtualization technology focused on features, but performance is the priority for the sequel.

Stephen Shankland, principal writer
SAN FRANCISCO--With the first generation of Intel Virtualization Technology now being built into most of the chipmaker's products, Intel is turning its attention to improving its performance.

Virtualization makes it easier to run several operating systems simultaneously on the same computer, an idea of high interest to those trying to run multiple tasks more efficiently. But the virtualization control software, called a hypervisor or virtual machine manager, imposes a performance penalty as it manages resources such as memory or input-output.

It's that performance penalty Intel is trying to fix.

"As we build future implementations, we're making things perform better within the constraints of the architectural foundation, but without requiring software changes. Then we're also extending the architecture," Richard Uhlig, senior principal engineer at Intel, said during an interview at the Intel Developer Forum.

IDF Spring 2006

Intel has begun building VT into its chips, and its top rival, Advanced Micro Devices, will begin selling its equivalent, AMD Virtualization, later this year with its Rev F processors.

The goal of Intel's first version of VT was to improve features of virtual machine software. For example, the leading virtualization product for x86 computers, EMC's VMware, can run 64-bit operating systems in its virtual machines through use of VT. And an open-source rival, Xen, can run Microsoft Windows on VT-enabled systems.

One planned improvement is a feature called extended page tables, an idea similar to an AMD virtualization technology called nested page tables. Both technologies speed up the way virtual machines handle memory.

In a computer without virtual machines, the operating system expects memory addresses to start at zero and work their way upward. But with many virtual machines sharing a computer's memory, zero isn't the starting place, and memory addresses jump from one region to another, Uhlig said.

Consequently, one important job of a hypervisor is "page table shadowing," which translates a virtual machine's memory addresses to the real ones used by the actual computer. The more translation is required, the slower the virtual machine runs, and with programs such as databases that constantly switch among different regions of memory, the performance penalty can be anywhere from 10 percent to 25 percent, Uhlig said.
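The idea behind page table shadowing can be sketched in a few lines. The toy model below is illustrative only, not Intel's implementation: real page tables are multi-level hardware structures, and the page numbers here stand in for full addresses.

```python
PAGE = 4096  # 4 KB pages

# The guest OS's own page table: guest-virtual page -> guest-physical page.
# The guest believes its memory starts at zero.
guest_pt = {0: 0, 1: 1, 2: 5}

# The hypervisor's map: guest-physical page -> host-physical page.
# The VM's memory actually lives in scattered regions of real RAM.
host_map = {0: 812, 1: 813, 5: 4100}

# The shadow page table composes the two, so the hardware can translate
# guest-virtual -> host-physical in a single lookup.
shadow_pt = {gv: host_map[gp] for gv, gp in guest_pt.items()}

def translate(gv_addr):
    """Translate a guest-virtual address to a host-physical address."""
    page, offset = divmod(gv_addr, PAGE)
    return shadow_pt[page] * PAGE + offset

# The cost: whenever the guest edits its page table (as a database does
# when it hops among memory regions), the hypervisor must trap the change
# and rebuild the shadow entry -- the source of the overhead Uhlig cites.
guest_pt[3] = 7             # guest maps a new page...
host_map[7] = 4101          # ...hypervisor assigns real memory for it...
shadow_pt[3] = host_map[7]  # ...and must update the shadow table.
```

Every one of those trapped updates is work the processor could otherwise spend running the guest, which is why update-heavy workloads pay the steepest penalty.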

New versions of VT will get a feature called the page table walker, in which the processor rather than the hypervisor handles that address translation, he said. The overhead imposed "doesn't drop to zero," but the feature will be much faster than the software-based function, Uhlig said.
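In the same toy terms, a hardware page-table walker removes the shadow table entirely: the processor walks both the guest's table and the hypervisor's table itself. Again, this is an illustrative sketch, not the actual hardware design, and the table names are hypothetical.

```python
PAGE = 4096  # 4 KB pages

guest_pt = {0: 0, 1: 1, 2: 5}            # guest-virtual -> guest-physical
extended_pt = {0: 812, 1: 813, 5: 4100}  # guest-physical -> host-physical

def hw_translate(gv_addr):
    """Roughly what a hardware walker does: two lookups per translation."""
    page, offset = divmod(gv_addr, PAGE)
    gp = guest_pt[page]   # first stage: the guest's own table
    hp = extended_pt[gp]  # second stage: the hypervisor's table
    return hp * PAGE + offset

# The guest can now edit its page table freely with no hypervisor trap.
guest_pt[3] = 5  # no shadow table to rebuild
```

This is why the overhead "doesn't drop to zero": each translation is now a two-stage walk, but the expensive traps into the hypervisor disappear.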

Intel's "Paxville" versions of its Xeon chip brought a VT implementation called VT-x to the server market in 2005, and more widely used "Dempsey" processors are due to arrive in servers in May or June. The VT-i version for Itanium processors is scheduled to arrive in servers starting in the third quarter with systems using Intel's "Montecito" chip.

Another improvement coming in hardware support for virtualization is the expansion of the technology into the domain of networking and other input-output tasks. Intel announced its VT-d specification Tuesday for some I/O virtualization, a month after AMD made a similar move.

But more sophisticated changes to networking are farther off because they require changes to the PCI standard that network cards and many other add-on devices use. For example, one idea that Intel plans to support is the splitting of a network card's capacity among different virtual machines.

Work is under way at the PCI Special Interest Group to add features that will permit such splitting, said Rajesh Sankaran, an Intel senior staff researcher. The new specification is due later this year, and the first products supporting it are expected in 2007, he said.