If you don't know what InfiniBand is, don't worry. Within three years, you probably won't be able to avoid it. By then, every server sold will likely be InfiniBand-capable and the new data exchange standard will be part of the fabric of faster, better networks.
InfiniBand, short for "infinite bandwidth," promises to radically change how data centers are configured and operated. The new standard is designed to dramatically increase the velocity of information by overhauling a key bottleneck — the Peripheral Component Interconnect (PCI) bus — with a "switched fabric" network. That means it will be able to manage point-to-point communications through switches, like the telephone system, instead of relying on today's general-purpose, shared bus inside the computer. A shared bus carries one message at a time past many points; a switched fabric can juggle hundreds or thousands of messages at a time, both inside and outside the computer, moving them precisely from origin to destination.
In addition to increasing data throughput by a factor of 10 or more, InfiniBand may allow the redesign of the computer itself, perhaps even delivering on the old idea that the network is the computer.
Sure, other computing standards have been proposed and hyped as the Next Big Thing. Remember IBM's Micro Channel? But unlike other standards that have been adopted slowly, or not at all, InfiniBand appears likely to succeed for three reasons:
InfiniBand is backed by the biggest players in the server market. Compaq Computer, Dell Computer, Hewlett-Packard, IBM, Intel, Microsoft and Sun Microsystems are all on the steering committee of the InfiniBand Trade Association, the 222-member group that oversees the specifications and development of the new standard. Those seven companies — whose combined 2000 revenue exceeded $283 billion — have the marketing and financial muscle to make InfiniBand the law of the land. And so far, it appears that's exactly what they intend to do.
InfiniBand has momentum. A spate of InfiniBand-focused start-ups, representing about $100 million of venture capital investment, combined with millions more being invested by companies like Compaq, Dell and Intel in their own InfiniBand-capable devices, gives the new standard more momentum than any other input/output (I/O) standard.
InfiniBand addresses the growing demand for both storage and speed. Last year, Michael Ruettgers, CEO of storage equipment maker EMC, estimated that many large companies will need to increase their data-handling capacity twelvefold to fifteenfold over the next five years. To meet that demand, the server market will boom. The technology intelligence firm IDC estimates that by 2004, the appliance server market will be worth $11 billion per year. Those numbers reinforce the growing need for faster storage that is helping to drive the InfiniBand standard. Demand for bandwidth is increasing everywhere, from the home computer user to the data center operator. Processors are getting faster; Ethernet is getting faster; but servers are still constrained by the speed of the PCI bus.
By supplying an architecture that can answer both needs, InfiniBand promises to increase both throughput and scalability throughout an enterprise while eventually reducing cyclical costs for many hardware upgrades.
InfiniBand servers, which should start appearing on the market early next year, offer speeds two to 20 times faster than those possible through the PCI bus. "Every major computer system vendor knows the PCI bus is so old, it has mold growing on it," said Michael Hathaway, a partner at Austin Ventures, which has invested in three InfiniBand start-ups.
PCI: Get Off the Bus!
In the last decade, the Internet's pipes have expanded massively to absorb new traffic. At the same time, microprocessor speeds have soared from 66 megahertz to more than 1,000 MHz. But during this period of rapid acceleration, the PCI bus — the slender ribbon by which the pipes and processors are connected — has been stuck in granny gear. In the server, it often falls short of the demands placed on it. It can handle communications with only one device at a time. That means that a faulty peripheral card in a server's PCI slot, such as a network interface card, can shut down the whole server. As every network administrator knows, single points of failure like that are a nightmare.
That's where InfiniBand rides to the rescue. It will supplant the PCI bus and overhaul the I/O architecture of servers. That will change the rules of the game, making commodity Intel servers more competitive with their Unix counterparts and concentrating more processing power into a smaller space, InfiniBand experts said.
"Anybody who's not InfiniBand-savvy is going to get left behind," warned Jonathan Eunice, president of Illuminata, the Nashua, N.H., information technology consulting firm.
Eunice and others point out that InfiniBand, which relies on a serial bus, will be much faster and more scalable than PCI, which uses a parallel bus. In its slowest configuration, InfiniBand will move data at a speed of 2.5 gigabits per second in each direction, or 5 Gbps total. That compares with the typical PCI's 1.064 Gbps or 2.128 Gbps in one direction. In some applications, InfiniBand throughputs will reach 30 Gbps, far faster than even the fastest PCI design. And InfiniBand boosters say there is no reason that the standard can't scale beyond even those lofty speeds, which represent merely what was designed into Version 1.0 of the InfiniBand specification published last October by the InfiniBand Trade Association.
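The throughput figures above can be sanity-checked with a little arithmetic. The sketch below is illustrative only: the PCI and InfiniBand rates are the ones quoted in this article, and treating the 30 Gbps figure as twelve 2.5 Gbps lanes is an assumption on our part.

```python
# Back-of-the-envelope comparison of the bandwidth figures quoted above.
# All rates in gigabits per second (Gbps), one direction unless noted.
# The numbers come from the article, not from independent measurement.

pci_32bit = 1.064          # typical 32-bit/33 MHz PCI
pci_64bit = 2.128          # 64-bit/33 MHz PCI

ib_per_direction = 2.5                    # slowest InfiniBand link, each way
ib_1x_total = ib_per_direction * 2        # 5 Gbps bidirectional
ib_wide = ib_per_direction * 12           # assumed 12-lane link -> 30 Gbps

print(f"InfiniBand 1x vs 64-bit PCI: {ib_1x_total / pci_64bit:.1f}x")
print(f"Wide InfiniBand vs 64-bit PCI: {ib_wide / pci_64bit:.1f}x")
```

The resulting ratios, roughly 2x on the low end and around 14x on the high end, line up with the article's "two to 20 times faster" range.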
“[Central processor unit] capacity has been outstripping input/output capacity. InfiniBand will provide a better match” to the processor, said Kris Meier, senior manager for InfiniBand at Crossroads Systems, a manufacturer of storage routers. As a result, data will move from the heart of the server out to the network as fast as it can be processed, not when a clogged PCI bus can get around to it, he said. “This could have a pronounced effect on Internet data centers. Performance per square foot will go up,” he predicted.
InfiniBand is a new network protocol that creates a subnetwork in the data center, connecting servers to their disk array storage and other peripherals. "PCI acted as a local input/output bus for a single server. InfiniBand acts as a distributed I/O architecture. . . . It creates a fabric of compute nodes that can talk to each other," said John Gromala, manager of InfiniBand at Compaq. The nodes on an InfiniBand subnetwork may be servers, routers, switches or other InfiniBand-enabled devices.
In the long run, InfiniBand does a lot more than just speed a server's data throughput to the outside world. By basing itself on a switching architecture rather than a shared bus, InfiniBand shifts the burden of managing input and output from the server's CPU onto the intelligence of the InfiniBand subnetwork, freeing up the CPU for heavier business logic processing. In this respect, InfiniBand follows the example of the IBM mainframe's channel architecture, said Tom Bradicich, director of architecture and technology at IBM and co-chairman of the InfiniBand Trade Association.
The intelligence in the InfiniBand subnetwork will be built with host channel adapters and target channel adapters, which tell devices how to route data and identify the device receiving it. InfiniBand routers and crossbar switches will do the rest.
InfiniBand also potentially changes the design inside the server itself. By moving some components — such as power supplies, disk drives and even some fans — off the server engine, it will likely lead to server "blades" that are a fraction of the size of today's thin servers, saving space in cramped co-location and Internet service provider facilities.
Compaq officials have talked about “quick blades” as a possible future server based on InfiniBand architecture, but Gromala said Compaq has not given a time frame for when such a server might be available. Officials at Dell also declined to say when their InfiniBand-ready devices would be ready for the market.
When the smaller blade servers are produced, Illuminata's Eunice said, it should be possible to get 10 times to 20 times as many servers into a rack. One or more slots in a rack would be occupied by disk drives connected to the blades via InfiniBand, relieving the blades of the need for their own disk systems, he pointed out.
Server manufacturers call this the "disaggregation" of the server, or breaking it up so that individual components may be packaged together to serve multiple servers at a time. InfiniBand encourages disaggregation by increasing the distance at which components may reside from the server's memory and processor.
PCI distances were measured in centimeters or inches. InfiniBand can reach across a distance of 55.8 feet using copper wire or 328.1 feet using fiber-optic links. That capability means entire offices could be connected via an InfiniBand fabric — making the “network is the computer” idea a reality. Indeed, with InfiniBand, every computer on the network would have super-fast access to stored data.
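The oddly precise distances quoted above are consistent with round metric limits: 55.8 feet and 328.1 feet convert from 17 meters and 100 meters, respectively. The snippet below simply checks that arithmetic; the metric figures are our inference from the conversion, not stated in the article.

```python
# Verify that the article's distance figures match round metric limits.
# 1 meter = 3.28084 feet (standard conversion factor).
METERS_TO_FEET = 3.28084

copper_m = 17    # inferred copper-link limit in meters
fiber_m = 100    # inferred fiber-optic limit in meters

print(f"copper: {copper_m * METERS_TO_FEET:.1f} ft")
print(f"fiber:  {fiber_m * METERS_TO_FEET:.1f} ft")
```

Both results round to the article's 55.8 and 328.1 feet, which is why the reach figures look so exact in imperial units.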
“InfiniBand has a much bigger role than PCI,” Eunice said. Much as the microprocessor is one device on a compressed network inside a sheet metal box, InfiniBand will make the server itself a device that is part of a larger whole, interacting at high speed with dozens, hundreds or even thousands of other devices over a network.
How It Works
InfiniBand addressing works in an environment that is compatible with Transmission Control Protocol/Internet Protocol addressing and is based on Version 6 of IP, which allows more addresses for network-linked devices "than atoms in the universe," Eunice said.
The large number of addresses ensures InfiniBand's future scalability. One InfiniBand switch is expected to be able to communicate with a maximum 64,000 devices, but there's no reason that a switch can't use one of those connections to attach to another InfiniBand switch, multiplying the number of possible connections by another 64,000, according to Meier.
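The fan-out Meier describes compounds quickly. A minimal sketch of the multiplication, using the article's 64,000-device figure (the tiering arithmetic is an idealized illustration, ignoring the ports consumed by switch-to-switch links):

```python
# Idealized scaling of the switch fan-out described above.
# The 64,000-devices-per-switch figure comes from the article.
devices_per_switch = 64_000

one_switch = devices_per_switch
# A switch of switches: each port on the first switch leads to
# another switch with its own 64,000 connections.
two_tiers = devices_per_switch * devices_per_switch

print(f"one switch: {one_switch:,} devices")
print(f"two tiers:  {two_tiers:,} devices")
```

Even this simplified two-tier figure runs into the billions, which is why the IPv6-sized address space matters for the fabric's long-term growth.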
Such interconnections constitute what advocates call the “switched fabric” of an InfiniBand subnetwork. The PCI bus was a highway that carried messages from one point down a general-purpose path, and the more messages, the slower the traffic could move, IBMs Bradicich said. InfiniBand is more like a high-speed train, moving from one point to another through clearly defined switching stations. No matter how much traffic there is, it can move at the same speed.
Unlike a train, however, InfiniBand doesn't require a message to follow one route the way a train follows its tracks. With InfiniBand, alternative routes can be found around a failed device. In addition, InfiniBand makes it much easier to cluster Internet servers together, as in a Web server farm or group of database servers, Bradicich said.
While clusters of just two machines prevail today, InfiniBand is likely to make clusters of 10, 20 or 50 machines commonplace, Eunice said, because it standardizes a shared server interconnect. And given the distances InfiniBand links may cover, it's conceivable that all the servers in an office building or on a small campus could be linked together in an InfiniBand cluster.
In addition, InfiniBand promises another capability attractive to Internet server operators: connecting large pools of storage to multiple servers over an InfiniBand link. With InfiniBand, companies like Sun may be better able to compete with market leader EMC's Symmetrix disk arrays.
“Sun views storage as a feature of the server,” Eunice said. The new standard, he said, will level the playing field among storage vendors, speeding up connections between various storage arrays and servers.
Obstacles for InfiniBand
Creating a new standard, as well as writing specifications and producing the hardware and software it needs, is no small task.
For InfiniBand to become a reality, a whole industry must grow up to produce chips, switches, routers and software. And though many companies are working on the problem, finished products won't be ready for several more months. Ironically, the first phase of implementation will likely be an InfiniBand card — produced by either the server supplier or a third party — that plugs into a PCI slot. The card will connect the server to an InfiniBand switch or other network device and give it a channel out to an InfiniBand subnetwork.
The InfiniBand movement has been gaining steam since the fall of 1999, when Intel's Next Generation I/O specification merged with the Future I/O specification supported by Compaq, HP and IBM to form the InfiniBand Trade Association.
Microsoft, Sun and Dell quickly joined, and the organization has evolved into a sort of United Nations of the server I/O industry.
Since détente came to the server industry, competitors have seen fit to invest in some of the same companies and cooperate on standards development. For instance, Crossroads Systems has received investments from Dell and HP. Lane15 Software, which is writing InfiniBand network management software, got venture funding from both Compaq and Dell.
Those investments suggest the industry is determined to ensure that InfiniBand devices from different manufacturers will interoperate.
There's plenty to worry about when it comes to interoperability. The 1.0 specification covers 800 pages in two volumes. Although the association is trying to precisely define all aspects of InfiniBand operations, many challenges remain. The trade association's Interoperability Working Group — co-chaired by IBM and Sun — is holding compliance tests, called plugfests, in which InfiniBand products can demonstrate their compatibility. The cooperation of IBM and Sun — bitter rivals in the server market — "ensures that our compliance tests will be a democratic and fair process. That way we get better industry buy-in," Bradicich said.
The InfiniBand proponents claim an advantage over their predecessors in the field of server I/O. They can learn from the mistakes of the suppliers of the EISA bus, IBM's Micro Channel and PCI. Rather than repeat some of those mistakes, the new standard is relying heavily on software provided by independent companies such as Lane15 and Vieo.
“The advantage is in interoperability,” said Eyal Waldman, chairman and CEO of Mellanox Technologies, a start-up that is producing InfiniBand chips. “And, it shortens the time to market because one provider can provide software to all the elements.” The approach suits hardware makers and software makers alike because both are assured their products will be compatible across a variety of vendor devices.
Mellanox and another silicon start-up, Banderacom, plan to start shipping InfiniBand chips by the fourth quarter. When that happens, server and device manufacturers will start designing systems around those chips. Instead of an InfiniBand card for a PCI slot, servers will be produced with their own InfiniBand ports, governed by the chips added to the motherboard, said Gary Erickson, product marketing manager for HP's Intel-based Netserver line. But that's unlikely to be before the end of 2002 or sometime in 2003, he said.
At some point after that development, InfiniBand is likely to get designed into the server's governing chip set, the supporting chips closest to the CPU, predicted IBM's Bradicich. That's when InfiniBand will begin showing that it is the standard to beat in terms of cost and efficiency.
Given that so many companies, like IBM and Intel, are supporting the migration to InfiniBand, its backers have no doubt that it will prevail. The world is moving from shared buses to switched fabrics, said Philip Brace, director of product marketing at Intel. That's good news for InfiniBand, he said. "It's no longer a question of if this will happen. It's a question of when."
And given the current push, it probably won't be long.