Mellanox's InfiniBand Design Is Boon for Blades

Reference design should spark new interest in server blades.

Mellanox Technologies Inc. earlier this month announced the release of its Nitro II InfiniBand server blade reference design, and it may be just the thing to kick-start sluggish demand for blades.

InfiniBand is a next-generation I/O technology designed to replace the aging PCI bus found in today's servers. InfiniBand offers much higher bandwidth and lower latency than PCI, along with better reliability through quality-of-service implementations.
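To put the bandwidth gap in rough perspective, a quick back-of-the-envelope comparison can be sketched as below. The figures are illustrative industry numbers, not from the article: a 64-bit/66MHz PCI bus peaks at roughly 528MB per second shared across all devices on the bus, while a single InfiniBand 4x link signals at 10G bps, which after 8b/10b encoding yields about 1GB per second of data per direction, per link, on a switched fabric.

```python
# Rough peak-bandwidth comparison (illustrative figures, not from the article).

# PCI 64-bit/66MHz: a shared parallel bus; peak = bus width * clock rate.
pci_64_66_mbps = (64 / 8) * 66          # 8 bytes * 66 MHz = 528 MB/s, shared

# InfiniBand 4x: 10 Gb/s signaling; 8b/10b encoding leaves 8 Gb/s of data,
# i.e. 1 GB/s per direction -- and each link is switched, not shared.
ib_4x_gbps = 10 * (8 / 10) / 8          # 1.0 GB/s per direction

print(f"PCI 64/66: ~{pci_64_66_mbps:.0f} MB/s shared")
print(f"InfiniBand 4x: ~{ib_4x_gbps:.1f} GB/s per direction, per link")
```

Because InfiniBand links are point-to-point through a switch, aggregate capacity also scales with the number of links, whereas every device on a PCI bus contends for the same shared peak.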

Like its predecessor, released in January, the second-generation Nitro II is designed to help foster OEM interest in InfiniBand-based server blade systems. Vendors including Dell Computer Corp., IBM and Sun Microsystems Inc. are considering the release of server blades with the InfiniBand technology, and Microsoft Corp. has been a strong InfiniBand supporter.

The economic downturn has slowed interest in server blade systems, but the introduction of next-generation InfiniBand to the blade market could increase demand, especially with the upcoming release of higher-performance server blades that can tackle sophisticated enterprise applications such as databases.

Server Anatomy

The Nitro II hosts Mellanox's second-generation InfiniBand chips: the InfiniHost HCA (Host Channel Adapter) and InfiniScale. The Nitro II server blades use Intel Corp.'s 2.2GHz Pentium 4 processor and the ServerWorks Grand Champion chip set. Most impressive, though, is the 10G-bps InfiniBand backplane, capable of supporting 480G bps of switching capacity.

The compact chassis can hold as many as 14 blades. The reference design has dual, 16-port, 10G-bps switch blades that can be used to link multiple chassis to form server clusters for high-performance computing.

The Nitro II blades can also support as much as 4GB of memory, a much higher capacity than current server blades and on par with high-end servers.

In addition, because InfiniBand's hardware transport has much lower latency and much higher bandwidth than traditional LAN-based remote storage, the blades have no need for local storage.

The superfast InfiniBand backplane allows the Nitro II server blades to run completely diskless and headless. And the InfiniBand HCA lets Nitro II blades boot remotely, accessing all operating system, application and data images stored in network-attached storage or storage area network systems.

The Nitro II server blade costs $6,500; the chassis costs $8,500. The 16-port Nitrex II switch is priced at $15,000, and Mellanox officials said the Nitrex II InfiniBand reference chassis platform will be available to OEM customers in August.

eWeek Labs plans a complete review of a production system using the Mellanox reference design when one becomes available.

Technical Analyst Francis Chu can be reached at