While hopes for the rapid adoption of InfiniBand, an emerging high-speed I/O technology designed for data centers, appear to have faded in recent months due to performance boosts in existing technologies, Intel Corp. remains confident the new serial architecture, due to ship in volume next year, will take root.
At its developers forum in San Francisco this week, Intel, one of seven major high-tech companies backing the development of InfiniBand, is touting some of the early data center test results for the new I/O, which is designed to speed information transfers from server to server and from servers to devices such as storage systems.
“What we're starting to see is some very strong performance numbers,” said Jim Pappas, director of initiative marketing, in an interview Tuesday at the Moscone Convention Center.
For example, he cited an IBM demonstration involving an InfiniBand-connected server cluster running the computer maker's DB2 software. The test showed that InfiniBand connectivity enabled the cluster to fully utilize the power of each new server attached to the cluster, eliminating the performance degradation that currently results from less efficient connections.
“That's the first time ever that they've seen linear scalability with any kind of interconnect in clusters,” Pappas said. “Basically, you double the amount of processors, you double the amount of performance you have in that cluster. Usually you'd double the processors and you'd get something like a 70 percent improvement in performance.”
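In rough terms, the arithmetic Pappas describes works out as follows. This is a sketch with illustrative figures, not IBM's actual benchmark data: perfect linear scaling doubles throughput with each doubling of processors, while a 70-percent-efficient interconnect yields only 1.7 times the previous throughput per doubling.

```python
# Sketch of cluster scaling under two interconnect efficiencies
# (illustrative numbers only, not IBM's DB2 benchmark results).

def cluster_throughput(base: float, doublings: int, factor: float) -> float:
    """Throughput after repeatedly doubling the processor count.

    factor=2.0 models perfect linear scaling (2x per doubling);
    factor=1.7 models the ~70 percent improvement Pappas attributes
    to less efficient interconnects.
    """
    return base * (factor ** doublings)

base = 100.0  # arbitrary units for a single-node baseline
for doublings in range(4):
    nodes = 2 ** doublings
    linear = cluster_throughput(base, doublings, 2.0)
    lossy = cluster_throughput(base, doublings, 1.7)
    print(f"{nodes:2d} nodes: linear={linear:7.1f}  ~70%-scaling={lossy:7.1f}")
```

The gap compounds quickly: at eight nodes, the linear case delivers 800 units of throughput against roughly 491 for the 70-percent case, which is why linear scalability is the headline claim.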
Currently, InfiniBand hardware and software developed by some of the hundreds of companies involved in the technology's trade association are being piloted, with the first products coming to market late this year and beginning volume sales next year.
InfiniBand is backed by more than 200 companies, including Compaq Computer Corp., Dell Computer Corp., Hewlett-Packard Co., IBM, Microsoft Corp. and Sun Microsystems Inc., as well as Intel.
The technology's channel-based, switched-fabric architecture provides a scalable performance range of 500MB per second to 6GB per second per link.
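That range tracks InfiniBand's published link widths: a 1x link signals at 2.5Gb per second, 8b/10b encoding leaves 2Gb per second of data (500MB per second full duplex), and links gang together in 4x and 12x widths. A back-of-the-envelope sketch:

```python
# How the 500MB/s-to-6GB/s range falls out of InfiniBand's 1x/4x/12x
# link widths (spec-level arithmetic, not measured throughput).

SIGNAL_RATE_GBPS = 2.5       # per-lane signaling rate
ENCODING_EFFICIENCY = 0.8    # 8b/10b: 8 data bits per 10 line bits

for width in (1, 4, 12):
    data_gbps = width * SIGNAL_RATE_GBPS * ENCODING_EFFICIENCY
    mb_per_s_one_way = data_gbps * 1000 / 8   # decimal megabytes
    duplex = 2 * mb_per_s_one_way             # both directions at once
    print(f"{width:2d}x link: {duplex:,.0f}MB/s full duplex")
```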
Two years ago, those impressive performance numbers spurred speculation that InfiniBand could become the dominant I/O technology, replacing PCI as the industry's mainstay.
But the development of a faster PCI technology, dubbed PCI-X 2.0, which offers 4.3GB-per-second performance, as well as improvements in storage connectivity technologies such as Fibre Channel and iSCSI, has dampened enthusiasm for InfiniBand.
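The 4.3GB-per-second figure is straightforward bus arithmetic: PCI-X 2.0 keeps PCI's 64-bit parallel bus but pushes it to 533 mega-transfers per second.

```python
# Where PCI-X 2.0's 4.3GB/s headline number comes from: a 64-bit
# (8-byte) shared bus transferring data 533 million times per second.

BUS_WIDTH_BYTES = 8          # 64-bit parallel bus
TRANSFERS_PER_SEC = 533e6    # PCI-X 533 signaling mode

bandwidth_gb_s = BUS_WIDTH_BYTES * TRANSFERS_PER_SEC / 1e9
print(f"PCI-X 2.0 peak: {bandwidth_gb_s:.2f}GB/s")  # ~4.26, rounded to 4.3
```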
Pappas said concerns that InfiniBand has been overtaken by competitive technologies are the result of misperceptions about InfiniBand's target market.
“It's not about, ‘Do you use InfiniBand or do you use iSCSI?' That's not an accurate comparison to make,” he said. “Our strategy is an interconnect that would go off to individual drives. You could certainly make the argument, ‘Is it iSCSI or Fibre Channel?' But that's a SAN environment. The question is, ‘How do you connect your SANs to your servers?'”
Basically, InfiniBand is designed to co-exist with, rather than replace, existing I/O technologies, said Pappas, who was involved in the joint effort to develop PCI in the early '90s.
“Our primary focus has always been to connect all your servers together with InfiniBand cables, and they go into an InfiniBand switch to connect to other I/Os,” he said. “There's been some talk from companies we work with about putting InfiniBand inside the box, but that's a second use of the technology. The primary use of the technology is, ‘How do I connect servers together?'”
Eventually, Pappas said, shared-bus technologies like PCI, in which all devices send data back and forth across a single shared channel, will give way to serial interconnects, which move data over multiple dedicated channels.
“Shared buses are going to go away, and serial buses will take their place,” Pappas said. “It's only a matter of time.”
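The architectural contrast Pappas sketches can be put in simple terms. On a shared bus, every attached device splits one channel's bandwidth; in a switched serial fabric, each device gets a dedicated point-to-point link and the switch carries flows concurrently. A toy model, with normalized bandwidth units rather than any product's real numbers:

```python
# Illustrative model of shared-bus contention vs. switched serial links
# (normalized units; not a claim about any specific product's figures).

def per_device_bw_shared(bus_bw: float, devices: int) -> float:
    # All devices arbitrate for the same wires, so bandwidth divides.
    return bus_bw / devices

def per_device_bw_switched(link_bw: float) -> float:
    # Each device owns its link; adding devices doesn't dilute it.
    return link_bw

for devices in (1, 2, 4, 8):
    shared = per_device_bw_shared(1.0, devices)
    switched = per_device_bw_switched(1.0)
    print(f"{devices} devices: shared bus {shared:.2f}/device, "
          f"switched fabric {switched:.2f}/device")
```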