Blade servers are a common sight in most data centers these days, and many enterprise companies have scaled out their IT infrastructures using blade hardware running a broad spectrum of applications.
eWEEK Labs recently reviewed the newest server and management offerings from three of the major blade server vendors.
As these vendors—IBM, Hewlett-Packard Co. and RLX Technologies Inc.—and others continue to refine their blade platforms, the future of blades might rest not on hardware advances but on the development of intelligent management solutions that integrate business processes and applications more tightly with the hardware.
In fact, eWEEK Labs tests show that getting the green light for blade deployment or expansion will depend on the quality of the management platform that supports blade hardware.
In our tests of blade server systems from IBM, HP and RLX, RLX had the best management platform, while IBM and HP were neck and neck in hardware flexibility. IBM came out slightly ahead of the others in terms of raw density, and the HP system had the most up-to-date components for its blades.
The IBM BladeCenter and the HP ProLiant BL p-Class systems each offer similar Intel Corp.-based hardware blade models: a low-end blade for high-density environments, a midrange blade with storage options suitable for midtier applications and a powerful four-way blade.
All three vendors offer a modular and scalable chassis for their blades, as well as redundant switch components to connect their respective blade infrastructures to external storage.
When evaluating blade systems, one thing is conspicuous by its absence: standards.
However, vendors such as HP are building blades from hardware components similar to the ones they use in standard servers, which will give IT managers the flexibility to interchange components if necessary.
In addition, IBM—in a joint effort with Intel—has opened the BladeCenter design specifications so that third-party vendors can build compatible hardware components for its blade chassis.
Technical Analyst Francis Chu can be reached at [email protected].