By eweek  |  Posted 2001-04-23

PCI: Get Off the Bus!

In the last decade, the Internet's pipes have expanded massively to absorb new traffic. At the same time, microprocessor speeds have soared from 66 megahertz to more than 1,000 MHz. But during this period of rapid acceleration, the PCI bus — the slender ribbon by which the pipes and processors are connected — has been stuck in granny gear. In the server, it often falls short of the demands placed on it. It can handle communications with only one device at a time. That means a faulty peripheral card in a server's PCI slot, such as a network interface card, can shut down the whole server. As every network administrator knows, single points of failure like that are a nightmare.

That's where InfiniBand rides to the rescue. It will supplant the PCI bus and overhaul the I/O architecture of servers. That will change the rules of the game, making commodity Intel servers more competitive with their Unix counterparts and concentrating more processing power into a smaller space, InfiniBand experts said.

"Anybody whos not InfiniBand-savvy is going to get left behind," warned Jonathan Eunice, president of Illuminata, the Nashua, N.H., information technology consulting firm.

Eunice and others point out that InfiniBand, which relies on a serial bus, will be much faster and more scalable than PCI, which uses a parallel bus. In its slowest configuration, InfiniBand will move data at 2.5 gigabits per second in each direction, or 5 Gbps total. That compares with typical PCI's 1.064 Gbps or 2.128 Gbps in one direction. In some applications, InfiniBand throughputs will reach 30 Gbps, far faster than even the fastest PCI design. And InfiniBand boosters say there is no reason the standard can't scale beyond even those lofty speeds, which represent merely what was designed into Version 1.0 of the InfiniBand specification, published last October by the InfiniBand Trade Association.
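
For readers who want to check the math, the figures above are easy to reproduce. Here is a minimal sketch in Python; the mapping of 1.064 Gbps and 2.128 Gbps to 32-bit and 64-bit, 33-MHz PCI, and the 1x/12x InfiniBand link-width labels, are common readings of those numbers rather than claims made by the article's sources.

    # Back-of-the-envelope bandwidth comparison using the speeds quoted above.
    pci_32bit = 1.064   # Gbps; 32-bit, 33-MHz PCI, shared, one transfer at a time
    pci_64bit = 2.128   # Gbps; 64-bit, 33-MHz PCI
    ib_lane   = 2.5     # Gbps per direction on a 1x InfiniBand link

    print(f"InfiniBand 1x, both directions: {2 * ib_lane:.1f} Gbps")           # 5.0
    print(f"InfiniBand 12x, one direction:  {12 * ib_lane:.1f} Gbps")          # 30.0
    print(f"12x link vs. 64-bit PCI:        {12 * ib_lane / pci_64bit:.1f}x")  # ~14.1x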

"[Central processor unit] capacity has been outstripping input/output capacity. InfiniBand will provide a better match" to the processor, said Kris Meier, senior manager for InfiniBand at Crossroads Systems, a manufacturer of storage routers. As a result, data will move from the heart of the server out to the network as fast as it can be processed, not when a clogged PCI bus can get around to it, he said. "This could have a pronounced effect on Internet data centers. Performance per square foot will go up," he predicted.

InfiniBand is a new network protocol that creates a subnetwork in the data center, connecting servers to their disk array storage and other peripherals. "PCI acted as a local input/output bus for a single server. InfiniBand acts as a distributed I/O architecture. . . . It creates a fabric of compute nodes that can talk to each other," said John Gromala, manager of InfiniBand at Compaq. The nodes on an InfiniBand subnetwork may be servers, routers, switches or other InfiniBand-enabled devices.

In the long run, InfiniBand does a lot more than just speed a server's data throughput to the outside world. By basing itself on a switching architecture rather than a shared bus, InfiniBand shifts the burden of managing input and output from the server's CPU onto the intelligence of the InfiniBand subnetwork, freeing up the CPU for heavier business logic processing. In this respect, InfiniBand follows the example of the IBM mainframe's channel architecture, said Tom Bradicich, director of architecture and technology at IBM and co-chairman of the InfiniBand Trade Association.
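
The practical difference between a shared bus and a switched fabric shows up even in miniature. The toy simulation below illustrates the general principle only, not InfiniBand's actual scheduling: it counts how many bus "cycles" a batch of transfers needs when every transfer must take turns on one bus, versus a crossbar that lets transfers with non-overlapping endpoints proceed in parallel.

    # Toy comparison: shared bus vs. crossbar switch (illustrative only).
    def shared_bus_cycles(transfers):
        # A shared bus serializes everything: one transfer per cycle.
        return len(transfers)

    def crossbar_cycles(transfers):
        # A crossbar schedules any set of transfers whose endpoints
        # don't conflict into the same cycle.
        cycles, pending = 0, list(transfers)
        while pending:
            busy, remaining = set(), []
            for src, dst in pending:
                if src in busy or dst in busy:
                    remaining.append((src, dst))  # endpoint busy; wait a cycle
                else:
                    busy.update((src, dst))       # schedule this cycle
            pending = remaining
            cycles += 1
        return cycles

    transfers = [("cpu", "nic"), ("disk", "memory"), ("cpu2", "nic2")]
    print(shared_bus_cycles(transfers))  # 3 cycles, one at a time
    print(crossbar_cycles(transfers))    # 1 cycle, all in parallel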

The intelligence in the InfiniBand subnetwork will be built with host channel adapters and target channel adapters, which tell devices how to route data and identify the nature of the device receiving it. InfiniBand routers and crossbar switches will do the rest.
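
In code terms, the arrangement might be sketched as follows. All class and method names here are hypothetical, chosen only to illustrate the roles: a host channel adapter (HCA) sits in the server, a target channel adapter (TCA) sits in the peripheral, and a switch forwards traffic between any two endpoints on the fabric.

    # Hypothetical sketch of an InfiniBand-style fabric. These names are
    # illustrative; they are not part of any real InfiniBand API.
    class Switch:
        def __init__(self):
            self.ports = {}          # adapter name -> adapter

        def attach(self, adapter):
            self.ports[adapter.name] = adapter

        def forward(self, src, dst, payload):
            self.ports[dst].receive(src, payload)

    class ChannelAdapter:
        def __init__(self, name, switch):
            self.name, self.switch = name, switch
            switch.attach(self)

        def send(self, dst, payload):
            self.switch.forward(self.name, dst, payload)

        def receive(self, src, payload):
            print(f"{self.name} <- {src}: {payload}")

    fabric = Switch()
    hca = ChannelAdapter("server-hca", fabric)  # host channel adapter
    tca = ChannelAdapter("disk-tca", fabric)    # target channel adapter
    hca.send("disk-tca", "write block 42")      # server pushes I/O to storage
    tca.send("server-hca", "block 42 written")  # completion flows back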

InfiniBand also potentially changes the design inside the server itself. By moving some components — such as power supplies, disk drives and even some fans — off the server engine, it will likely lead to server "blades" that are a fraction of the size of today's thin servers, saving space in cramped co-location and Internet service provider facilities.

Compaq officials have talked about "quick blades" as a possible future server based on InfiniBand architecture, but Gromala said Compaq has not given a time frame for when such a server might be available. Officials at Dell also declined to say when their InfiniBand-ready devices would reach the market.

When the smaller blade servers are produced, Illuminata's Eunice said, it should be possible to get 10 times to 20 times as many servers into a rack. One or more slots in a rack would be occupied by disk drives connected to the blades via InfiniBand, relieving the blades of the need for their own disk systems, he pointed out.

Server manufacturers call this the "disaggregation" of the server, or breaking it up so that individual components can be packaged together to serve multiple servers at a time. InfiniBand encourages disaggregation by increasing the distance at which components may reside from the server's memory and processor.

PCI distances were measured in centimeters or inches. InfiniBand can reach across a distance of 55.8 feet using copper wire or 328.1 feet using fiber-optic links. That capability means entire offices could be connected via an InfiniBand fabric — making the "network is the computer" idea a reality. Indeed, with InfiniBand, every computer on the network would have super-fast access to stored data.
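
Those oddly precise distances look like round metric limits converted to feet. A quick check bears that out; reading them as roughly 17-meter copper and 100-meter fiber ceilings is this author's inference, not something stated by the article's sources.

    # The quoted distances convert back to round metric numbers.
    FEET_PER_METER = 3.28084
    print(f"{55.8 / FEET_PER_METER:.1f} m")    # ~17.0 m over copper
    print(f"{328.1 / FEET_PER_METER:.1f} m")   # ~100.0 m over fiber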

"InfiniBand has a much bigger role than PCI," Eunice said. Much as the microprocessor is one device on a compressed network inside a sheet metal box, InfiniBand will make the server itself a device that is part of a larger whole, interacting at high speed with dozens, hundreds or even thousands of other devices over a network.

