The year 2003 is shaping up to be an important one for InfiniBand, the much-touted but little-seen high-speed interconnect technology.
Despite setbacks earlier this year, when Intel Corp. and Microsoft Corp., two founding members of the InfiniBand Trade Association, backed away from InfiniBand development efforts, smaller InfiniBand vendors have recently launched enterprise-ready products, led by Mellanox Technologies Inc., JNI Corp., Topspin Communications Inc. and Paceline Systems Corp.
Such rollouts are expected to continue in 2003. Even better, last week, top-tier server vendors Dell Computer Corp., IBM and Sun Microsystems Inc. said they will deploy InfiniBand-enabled products over the next couple of years, answering the call of industry observers who say InfiniBand won't take off until larger server vendors embrace the technology and drive it deep into the enterprise.
InfiniBand, a channel-based, switched-fabric architecture, initially was expected to replace other interconnect technologies, such as PCI. But while InfiniBand has reached the 10G-bps mark faster than other technologies, one of its problems is that it's a brand-new architecture that companies with tight IT budgets must somehow fit into their existing infrastructures. In addition, connectivity technologies such as PCI-X and 10 Gigabit Ethernet slot into current systems, and storage connectivity technologies such as Fibre Channel and iSCSI have also seen performance and speed improvements.
Current innovation in InfiniBand is being driven by smaller vendors that are rolling out products, many of which are aimed at making larger vendors' products InfiniBand-ready.
InfiniSwitch Corp., of Westboro, Mass., which builds high-speed switches for InfiniBand-enabled devices, in the first half of 2003 will announce general availability of its Leaf and Director products. Leaf, aimed at the low end, will offer two 12-port blades, or 24 ports in all, enabling users to bring InfiniBand connectivity into their data centers. Director, which currently runs at 1x, or 2.5G-bps, data transfer speed, will move to 4x, or 10G bps.
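Those link designations translate directly into bandwidth: InfiniBand builds links out of 2.5G-bps lanes, so the 1x, 4x and 12x widths that crop up throughout this story are simply multiples of that lane rate. The short Python sketch below is a back-of-the-envelope illustration, not anything from the vendors, and the function names are ours; it also accounts for the 8b/10b line encoding InfiniBand uses, which leaves roughly 80 percent of the signaling rate for actual data.

```python
# Illustrative arithmetic only: link widths and rates as cited in this story.
# An InfiniBand link aggregates 2.5G-bps lanes; 8b/10b encoding leaves ~80%
# of the signaling rate available for data.

LANE_RATE_GBPS = 2.5          # signaling rate of a single InfiniBand lane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line-encoding overhead

def signaling_rate(width: int) -> float:
    """Raw signaling bandwidth of an InfiniBand link, in G bps."""
    return width * LANE_RATE_GBPS

def data_rate(width: int) -> float:
    """Usable data bandwidth after 8b/10b encoding, in G bps."""
    return signaling_rate(width) * ENCODING_EFFICIENCY

for width in (1, 4, 12):      # the 1x, 4x and 12x links discussed here
    print(f"{width}x: {signaling_rate(width):.1f}G bps signaling, "
          f"{data_rate(width):.1f}G bps data")
```

Run as written, the sketch confirms the figures in the story: 1x is 2.5G bps, 4x is 10G bps and 12x is 30G bps of signaling bandwidth, with 2G, 8G and 24G bps, respectively, left over for data.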
Voltaire Inc., of Bedford, Mass., in the first quarter will release the next generation of its nVigor InfiniBand switch router, also at 4x speed, officials said. The product will be aimed primarily at OEMs, although it will also be marketed to enterprises, they said.
Further down the road, InfiniCon Systems Inc., of King of Prussia, Pa., will upgrade its InfinIO 7000 Shared I/O System, which was released in September, to enable data centers to integrate InfiniBand with Fibre Channel and Ethernet networks.
In the first quarter of 2004, InfiniCon will increase the number of Fibre Channel ports in the chassis from 16 to 32 and Ethernet ports from three to eight, said CEO Chuck Foley. The number of InfiniBand ports will remain at 60, Foley said, although he expects by 2005 those ports will be handling 12x (30G-bps) InfiniBand.
But until major server makers fully and publicly embrace InfiniBand, the technology will not resonate with corporate IT.
Dell, of Round Rock, Texas, said that its next generation of PowerEdge blade servers will be InfiniBand-ready and that the company is testing InfiniBand clusters in its laboratories. IBM in 2003 will begin deploying InfiniBand across its entire eServer line. Starting next year, the Armonk, N.Y., company will enable an InfiniBand switched network that includes a host channel adapter, switch and fabric management on its eServer xSeries line of Intel-based servers. It also is developing a common clustering interconnect using InfiniBand.
Sun, of Santa Clara, Calif., said it will incorporate InfiniBand in future switches, storage and server platforms, including its next generation of blade servers starting in 2004.
Hewlett-Packard Co. in September said it was leaning toward Ethernet-based solutions, including remote direct memory access, over InfiniBand, although Karl Walker, CTO of the Palo Alto, Calif., company's Industry Standard Servers unit, said HP has not ruled out InfiniBand. Like other OEMs, Walker said, HP will base part of its decision on what kind of ecosystem crops up around InfiniBand.
Some users are still hanging back. Joe Gottron, CIO of Huntington Bancshares Inc., in Columbus, Ohio, was taken aback this summer when Intel and Microsoft stepped back from InfiniBand, although both remain supporters of the interconnect. Gottron said a high-speed interconnect will be needed to eliminate data transfer slowdowns among servers in data centers, but he is now “in a wait-and-see kind of mode” and will hold back on InfiniBand until he sees “what direction the market goes.”
However, one area where InfiniBand is making some inroads is high-performance computing, where low latency and high bandwidth are important. Los Alamos National Laboratory, in Los Alamos, N.M., is planning to connect 128 servers via InfiniBand to create a supercomputer environment.
Mike Boorman, team leader at the lab, said InfiniBand is attractive because of its relatively low cost and low latency, about 7 microseconds compared with Ethernet's 10 to 20 microseconds. All 128 nodes will be running by January, Boorman said, with the test running until about March. InfiniBand is “a good candidate for a high-speed, low-latency interconnect,” he said. He added that Los Alamos is also looking at proprietary technology from such companies as Quadrics Ltd. and will evaluate 10 Gigabit Ethernet in the next couple of years.
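Why a few microseconds matter so much to a cluster builder is easy to see with a first-order model: transfer time is roughly startup latency plus payload size divided by bandwidth. The sketch below is our own simplification, not a Los Alamos benchmark; the latency figures are the ones Boorman cites, while the bandwidth figures (8G bps of data for a 4x InfiniBand link, 1G bps for Gigabit Ethernet) are assumptions chosen purely for illustration.

```python
# First-order model of one-way message transfer time: latency + size/bandwidth.
# Latencies are the figures cited above; bandwidths are illustrative
# assumptions (4x InfiniBand data rate vs. Gigabit Ethernet).

def transfer_time_us(payload_bytes: int, latency_us: float,
                     bandwidth_gbps: float) -> float:
    """Approximate one-way transfer time in microseconds."""
    wire_seconds = payload_bytes * 8 / (bandwidth_gbps * 1e9)
    return latency_us + wire_seconds * 1e6

for size in (64, 4096, 1_048_576):  # small, medium and large messages
    ib = transfer_time_us(size, latency_us=7.0, bandwidth_gbps=8.0)
    eth = transfer_time_us(size, latency_us=15.0, bandwidth_gbps=1.0)
    print(f"{size:>9} bytes: InfiniBand ~{ib:.1f} us, Ethernet ~{eth:.1f} us")
```

The model shows why supercomputing shops care: for the small messages that dominate tightly coupled parallel jobs, transfer time is almost entirely startup latency, so cutting 15 microseconds to 7 roughly doubles effective message rates regardless of raw bandwidth.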