HyperTransport Adds Slot Spec for Clusters

Motherboard makers will soon have the option of adding a dedicated HyperTransport card slot, a feature that will increase the options available to clustered supercomputers.

The so-called HTX slot will allow direct access to the HyperTransport bus, and motherboard maker Iwill will add it to its DK8-HTX board, members of the HyperTransport consortium said. The slot could be used with Infiniband, a fairly common backbone protocol in servers that has longer latencies than HyperTransport, they said.

Currently, the HTX slot is only being discussed as part of an enterprise-class cluster, although diagrams shown by HyperTransport vendors also proposed extending the technology to ATA storage connections and other I/O, such as more general-purpose add-in cards.

Support for the slot is being added to the HyperTransport specification, which has primarily been used to connect microprocessors to other components within a system. Originally designed by Advanced Micro Devices, HyperTransport is now used in the Apple Macintosh G5 tower as well as gaming consoles such as the next-generation Microsoft Xbox. The technology is managed by the HyperTransport Technology Consortium, an independent organization dedicated to promoting it.

"The way we see [the card specification] is as another connectivity factor that we're bringing to market to commoditize HyperTransport," said Mario Cavalli, general manager of the HyperTransport Technology Consortium. The addition could be seen as a second step, for instance, for Cray to take off-the-shelf CPUs and apply the benefits of commoditization to high-performance computing, he said.

At least initially, the slot is designed to allow clustered-supercomputer designers to cut back on memory latency, often the gating factor in systems in which processor nodes use the Message Passing Interface or a similar protocol to access data held in the memory associated with other nodes. The MPI protocol runs on top of Infiniband, which generates latencies of 4.5 microseconds, according to Len Rosenthal, vice president of marketing for PathScale Inc., of Sunnyvale, Calif. HyperTransport latencies are on the order of 1.5 microseconds, he said.

The new HyperTransport EATX motherboard/daughtercard specification defines an interface and form factor for an EATX motherboard connector and HyperTransport add-in cards. The EATX motherboard is a popular architecture used in high-performance workstations, servers, embedded systems and storage systems. The HyperTransport HTX specification defines an 8- or 16-bit HyperTransport interface with data rates of up to 1.6 gigatransfers per second.
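As a back-of-the-envelope check, those figures imply a peak link bandwidth that can be computed directly. The sketch below assumes the quoted 1.6 gigatransfers per second applies per link direction and that each transfer moves the full link width (8 or 16 bits); actual throughput would be lower after protocol overhead.

```python
def htx_peak_bandwidth_gbytes(link_width_bits: int, gigatransfers_per_sec: float) -> float:
    """Theoretical peak unidirectional bandwidth, in gigabytes per second,
    for a HyperTransport link of the given width and transfer rate.
    (Illustrative helper, not part of any official specification.)"""
    gigabits_per_sec = link_width_bits * gigatransfers_per_sec
    return gigabits_per_sec / 8  # 8 bits per byte

# The two link widths named in the HTX specification:
print(htx_peak_bandwidth_gbytes(16, 1.6))  # 16-bit link -> 3.2 GB/s
print(htx_peak_bandwidth_gbytes(8, 1.6))   # 8-bit link  -> 1.6 GB/s
```

By this arithmetic, a 16-bit HTX link would peak at roughly 3.2GB per second in each direction, which helps explain the consortium's interest in the slot for latency- and bandwidth-sensitive cluster interconnects.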

Iwill currently makes 4- and 8-way motherboards and could add the HTX technology to those boards as well, depending upon market demand, said David Montgomery, director of technology development for the Taiwanese board maker. The company is also considering a multislot design, Montgomery said in an interview.



Editor's Note: This story has been corrected. Through an editing error, motherboard maker Iwill was omitted from mention in the second paragraph. The author misidentified the latencies tied to MPI/Infiniband and the HyperTransport connection.