Startups Aim to Free Ethernet Packet Jams

By Jeffrey Burt  |  Posted 2004-04-05
Ammasso, Precision I/O look to boost Ethernet performance with interconnect technologies.

The rapid growth of network computing—from blade servers and grids to compute clusters and utility computing—is fueling a push to find better interconnect technologies.

While InfiniBand has received much of the attention over the past few years as a fast connection between servers, two startups are readying products that take advantage of the increasing performance of Ethernet to ameliorate server-to-server bottlenecks.

Ammasso Inc. and Precision I/O Inc. will introduce technology designed to increase the speed and lower the latency of 1G-bps Ethernet in server-to-server environments. Both are planning upgrades over the next two years as 10G-bps Ethernet adoption grows.

Late this month, Boston-based Ammasso will introduce the Ammasso Model 1100, an Ethernet NIC (network interface card) that includes a TOE (TCP/IP Offload Engine) and implements the RDMA (Remote Direct Memory Access) specification.

The TOE increases CPU efficiency and offers higher throughput, while RDMA enables one server to place data directly into the memory of another server, bypassing the operating system. By putting those technologies onto Ethernet, Ammasso expects to improve latency by up to 10 times over existing Gigabit Ethernet, to about 10 microseconds, which is comparable to what InfiniBand currently offers, according to CEO Larry Genovesi.
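The kernel-mediated path that RDMA sidesteps can be illustrated with ordinary sockets. The sketch below is plain Python over the loopback interface, not Ammasso's product; the comments mark the user-space-to-kernel copies and system calls that an RDMA-capable NIC avoids by writing directly into pre-registered memory on the remote host.

```python
import socket
import threading

# Conventional TCP path: every send() copies data from user space into
# kernel socket buffers, and every recv() copies it back out on the far
# side -- each one a system call into the operating system. RDMA removes
# both copies and the kernel round trip: the NIC places the payload
# directly into the receiver's application memory.

def serve(listener, received):
    conn, _ = listener.accept()
    with conn:
        # recv() = kernel-buffer -> user-space copy on the receiving host
        received.append(conn.recv(1024))

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

received = []
t = threading.Thread(target=serve, args=(listener, received))
t.start()

with socket.create_connection(listener.getsockname()) as client:
    # sendall() = user-space -> kernel-buffer copy on the sending host
    client.sendall(b"payload")
t.join()
listener.close()

print(received[0])
```

On a 2004-era Gigabit Ethernet NIC, each of these kernel transitions contributes to the roughly 100-microsecond latencies that Ammasso says its RDMA hardware cuts to about 10.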

By enabling faster throughput, the technology will allow users to put more nodes on a cluster, Genovesi said. "It dramatically decreases the cost of hardware," he said.

For its part, Precision I/O is staying away from TOEs and RDMA, said Judy Estrin, chairman and acting CEO. TOEs don't address the latency issue and benefit only applications with large packets moving over long connections, she said. RDMA's limitations are that the protocol must be present on both the sending and receiving ends of a connection, and that it has security problems.

Precision I/O, of Palo Alto, Calif., at midyear will release yet-to-be-named software designed to do what RDMA does: bypass the operating system to move transactions directly from the NIC to the applications. However, a key difference is that Precision I/O's software will be required on only one end of the communication line, and no hardware upgrade will be needed.

"In Precision I/O, the operating system still does some processing ... but you take it out of the networking business," Estrin said. "By not having to go in and out of the operating system, we can process packets more efficiently."

Estrin and Ammasso's Genovesi said they expect Ethernet to remain the primary commercial interconnect, even though InfiniBand already supports 10G-bps connectivity and is moving toward 30G bps. InfiniBand still requires users to install a new network fabric, and only one manufacturer makes the chips used in InfiniBand NICs. Ethernet, on the other hand, is already pervasive, garnering almost 99 percent of the LAN market in the fourth quarter of 2003, Ammasso officials said.

Kelly Carpenter, IT manager at the Genome Sequencing Center at Washington University School of Medicine, currently runs several clusters using Sun Microsystems Inc. hardware connected by Gigabit Ethernet. The center is testing InfiniBand, but Carpenter said he is excited about the idea of running RDMA over Ethernet.

"InfiniBand looks really good. But if they can get something equivalent [on Ethernet], that is something to think about," said Carpenter in St. Louis. "RDMA is the coolest feature for InfiniBand. You can go memory to memory pushing data around and bypass the operating system. For us, in terms of what we see in the numbers, that seems like a good way to go."

That will be particularly important as 10G-bps Ethernet ramps up, Carpenter said. Though still a year or two away from wide use, 10G-bps Ethernet is showing life. Switch makers have offerings available, and OEMs plan to add the capability to future systems. Hewlett-Packard Co., for instance, is shipping 10G-bps modules in its ProCurve 9300 routing switches and expects to have 10G-bps NICs in its Itanium-based Integrity and Superdome servers late this year and in its ProLiant servers late next year or in 2006, said HP Fellow Dwight Barron in Palo Alto.

IBM, for its part, will introduce 10G-bps Ethernet capabilities in its xSeries systems next year, officials said.

Check out eWEEK.com's Server and Networking Center at http://servers.eweek.com for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.