Server virtualization is getting a boost from the newest Intel microprocessor architecture, known as Nehalem.
With an integrated memory controller, an additional memory channel and a slew of performance improvements, Nehalem will enable IT managers to consolidate more virtual machines onto fewer physical systems. The new Intel platform should also be a trigger for IT managers to take a second look at virtualizing applications that did not perform well when run on VMs hosted on the previous generation of Intel server chips.
Data center managers anticipating server and virtualization implementations or upgrades should start testing Nehalem-based systems to gauge performance and compatibility.
With VMware ESX 4.0 expected to ship sometime this year, now is the time to begin the evaluation process to see how Intel’s newest processor platform might fit in your data center. eWEEK Labs will be testing servers based on Nehalem as soon as they become available, so watch for our hands-on test results.
Performance capabilities in the Nehalem microarchitecture, formally named the Intel Xeon 5500 series for servers, will shift the limiting factors that currently govern server virtualization away from purely physical considerations of CPU, memory and network bandwidth. The new limit will very likely be the amount of risk an organization is willing to take by putting so many virtual eggs in a single physical basket.
The considerable advances in the Intel Nehalem chip design include Turbo mode, which raises processor frequency when the workload and thermal headroom allow, and hyperthreading, a feature resurrected from the Intel Pentium 4 processor and now called simultaneous multithreading. These changes set the stage for new advances in server virtualization, but not a new benchmark: Intel rival Advanced Micro Devices has had integrated memory controllers in its Opteron server chips for some time.
Virtualization heavyweight VMware has worked closely with both Intel and AMD to ensure that CPU advances translate into better VM performance, with the goal of steadily narrowing the gap between physical and virtual machines.
IT managers also should note that the newly minted Cisco Unified Computing System server blades, which were announced on March 15, launched using only Intel Nehalem-based processors. Cisco is hoping that data center managers will be drawn to the platform, which packages compute, storage access, networking, memory and server virtualization in a unified chassis.
Networking and Power Priorities
IT managers should bear in mind that when designing the data center, networking and power trump server chip selection. That said, the recent advances in server CPU designs will affect IT managers’ decisions with regard to server virtualization platforms.
IT managers will need to be mindful of new hardware components, including NICs and memory configurations, that will improve VM performance but will also drive up the initial hardware purchase price. Premium 8GB ECC (error-correcting code) DDR3 (double data rate 3) DIMMs (dual in-line memory modules) are among the most expensive components needed to equip a physical server to take full advantage of Nehalem-based processors.
Another concern is how the current hypervisor platforms will use the new CPU features while maintaining backward compatibility.
When running on an Intel-based server, VMware ESX 3.5 uses a baseline of features that correspond to the Intel Merom processor core to enable enhanced VMotion capabilities.
VMware, working with Intel, intercepts the CPUID instruction and tells the guest operating system that it is running on a CPU with the features of a Merom processor, which was released in 2006 and uses the previous-generation Intel Core microarchitecture. This is the trade-off that enables enhanced VMotion across older Intel processors and Nehalem-based processors in the same VMotion cluster.
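Conceptually, the masking works like the minimal sketch below, in which a host reports to its guests only the features that also appear in a fixed Merom-era baseline. This is illustrative Python only, not VMware code, and the flag sets are simplified stand-ins for real CPUID bits.

# Illustrative sketch: mask a host's CPU feature flags down to a fixed
# Merom-era baseline so every host in the cluster looks identical to guests.
MEROM_BASELINE = {"sse", "sse2", "sse3", "ssse3", "cx16", "vmx"}

def masked_cpuid_features(host_features):
    # Report only the host features that are also in the baseline.
    return host_features & MEROM_BASELINE

# A Nehalem host physically supports more than the baseline...
nehalem_host = {"sse", "sse2", "sse3", "ssse3", "sse4_1", "sse4_2", "popcnt", "cx16", "vmx"}
# ...but the guest is told only about the Merom-level subset.
print(sorted(masked_cpuid_features(nehalem_host)))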
Rich Brunner, chief platform architect in the CTO Office of VMware, indicated that while ESX 3.5 is limited to one mode of enhanced VMotion (all processors are presented as having the Merom feature set), ESX 4.0, which is currently in beta, will have more flexibility.
ESX 4.0 will present the CPUID of the lowest supported processor in the VMotion cluster. In a mix of Penryn- and Nehalem-based processors, for example, the Penryn feature set would be used. Looking ahead, Brunner said it is likely that this form of backward compatibility will continue when Intel processors based on Nehalem's successor, code-named Westmere, become available.
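One way to picture that more flexible approach is as an intersection of per-host feature sets, with the common subset becoming the cluster-wide baseline. The sketch below is illustrative only, with simplified feature lists, and is not a description of VMware's internals.

# Illustrative sketch: the cluster baseline is the set of CPU features
# common to every host, which effectively selects the "lowest" processor
# generation in the VMotion cluster (Penryn, in this mixed example).
from functools import reduce

HOST_FEATURES = {
    "host-a (Penryn)":  {"sse3", "ssse3", "sse4_1", "cx16", "vmx"},
    "host-b (Nehalem)": {"sse3", "ssse3", "sse4_1", "sse4_2", "popcnt", "cx16", "vmx"},
}

cluster_baseline = reduce(set.intersection, HOST_FEATURES.values())
print(sorted(cluster_baseline))   # the Penryn-level feature set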
Memory Management
To take full advantage of the processor's new memory management features, IT managers will need to build VMotion clusters composed entirely of Nehalem-based processors and wait for the release of ESX 4.0.
And these features are prodigious. Nehalem-based systems can now handle up to eighteen 8GB DIMMs, for a total of 144GB of RAM per physical host.
“A general guideline is to provide 4GB of RAM per hardware thread, or logical processor,” said VMware’s Brunner. A four-socket system with quad-core Nehalem processors would be easily accommodated by the new memory scheme.
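Applying that guideline is simple arithmetic. The quick sketch below assumes hyperthreading is enabled, so each core presents two hardware threads; the figures are a back-of-the-envelope estimate, not a VMware sizing rule.

# Back-of-the-envelope sizing from the 4GB-per-logical-processor guideline.
# Assumes two hardware threads per core (hyperthreading enabled).
sockets = 4
cores_per_socket = 4
threads_per_core = 2
gb_per_logical_cpu = 4

logical_cpus = sockets * cores_per_socket * threads_per_core   # 32
recommended_ram_gb = logical_cpus * gb_per_logical_cpu         # 128
print(logical_cpus, recommended_ram_gb)

Under those assumptions, such a host would call for roughly 128GB of RAM.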
Aside from supporting dense VM deployments with large amounts of RAM, Nehalem processors add another significant update, dubbed VT-d (Virtualization Technology for Directed I/O), which lets virtual machines be given dedicated, DMA (direct memory access)-capable I/O resources.
Indeed, one of the big challenges of virtualization has been handling I/O. VT-d allows the direct assignment of a physical device to a VM. As implemented, VT-d seeks to reduce the performance overhead incurred as the hypervisor mediates I/O among the guest VMs.
For example, each time a network interface on the physical server processes a packet, it triggers an interrupt in the operating system. Under virtualization, the problem is that each of those interrupts forces a VM exit to the hypervisor and back again. With VT-d, the network device is assigned directly to the VM, bypassing much of that exit-and-re-entry process and significantly cutting network I/O overhead.
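To see why that matters, consider a deliberately simplified model of per-packet overhead on the two paths. The exit counts and the per-exit cost below are illustrative assumptions, not measurements of any hypervisor.

# Toy model of per-packet overhead: emulated I/O path vs. VT-d direct
# assignment. All numbers are illustrative placeholders, not benchmarks.
PACKETS = 1_000_000
EXIT_COST_US = 2.0              # assumed cost of one VM exit + resume, in microseconds
EXITS_PER_PACKET_EMULATED = 1   # hypervisor intercepts each device interrupt
EXITS_PER_PACKET_DIRECT = 0     # device is assigned directly to the guest

emulated_overhead_ms = PACKETS * EXITS_PER_PACKET_EMULATED * EXIT_COST_US / 1000
direct_overhead_ms = PACKETS * EXITS_PER_PACKET_DIRECT * EXIT_COST_US / 1000
print(f"emulated: {emulated_overhead_ms:.0f} ms of exit overhead")
print(f"direct:   {direct_overhead_ms:.0f} ms of exit overhead")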