Last week, two technology demonstrations provided a glimpse of the storage networking power that will soon be available.
IT managers who made the trip to Baltimore for Supercomputing 2002 instead of journeying to the bright lights of Comdex Las Vegas were rewarded with demos by SGI and StorageTek that pushed the limits of scalability and speed for storage networking.
StorageTek and the National Science Foundation ran a prototype of the TeraGrid Project, which aims to build the largest, fastest distributed computing infrastructure ever deployed for open scientific research. Once complete, the TeraGrid will connect five major research facilities.
By developing the TeraGrid, scientists hope to evolve supercomputing away from the highly centralized infrastructures we have today into a peer-to-peer infrastructure, where computing resources can be summoned on demand to assist researchers.
With TeraGrid in place, researchers will be able to do real-time problem-solving to help with highly involved tasks such as molecular modeling and black hole simulations.
When complete, the TeraGrid Project will have more than 20 teraflops of computing power (a teraflop is a trillion calculations per second) and more than 1 petabyte (1,024 terabytes) of storage capacity.
The TeraGrid prototype consisted of 32 IBM Linux nodes running on Itanium2 processors, and StorageTek provided D280 and D178 disk storage systems for close to 44 terabytes of capacity. The nodes of the cluster were distributed throughout the show floor and had a combined throughput of 6,400MB per second.
10G-bps Ethernet links carried IP traffic for the cluster, while the storage side used 2G-bps Fibre Channel to link the servers to the StorageTek storage systems. A 128-port Brocade Silkworm 12000 switch provided the SAN networking for the demo.
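The demo's numbers hang together neatly: a 2G-bps Fibre Channel link delivers roughly 200MB per second of usable bandwidth, so 32 nodes each driving one link adds up to the 6,400MB-per-second aggregate quoted above. The short Python sketch below works through that back-of-the-envelope arithmetic; the per-link figure and the time-to-read estimate are rule-of-thumb assumptions for illustration, not figures supplied by StorageTek.

# Back-of-the-envelope check of the demo's throughput figures (illustrative assumptions only)
NODES = 32                      # IBM Linux/Itanium2 nodes in the prototype
MB_PER_SEC_PER_FC_LINK = 200    # assumed usable bandwidth of one 2G-bps Fibre Channel link
CAPACITY_TB = 44                # approximate StorageTek disk capacity in the demo

aggregate_mb_per_sec = NODES * MB_PER_SEC_PER_FC_LINK      # 6,400MB per second
capacity_mb = CAPACITY_TB * 1024 * 1024                    # terabytes to megabytes
seconds_to_sweep = capacity_mb / aggregate_mb_per_sec      # roughly 7,200 seconds

print(f"Aggregate throughput: {aggregate_mb_per_sec}MB per second")
print(f"Time to read all {CAPACITY_TB}TB once: {seconds_to_sweep / 3600:.1f} hours")

At that rate, the cluster could read through its entire 44 terabytes of disk in about two hours, which gives a sense of the scale involved.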
Although this demonstration was modest compared to the ultimate goals of the TeraGrid Project, the display was still extremely impressive for its size and computing possibilities.
Closer to home, in a way
SGI, together with LightSand Communications, showed shared data access over a simulated transcontinental link. That demo is likely closer to home for IT managers because it uses technology that is readily available and solves a problem that plagues companies with geographically dispersed locations.
Using LightSand's S-600 SONET gateways, SGI transferred data between two server clusters over a simulated WAN link.
The major distinction between this demo and others from LightSand, which usually show how the technology can extend a SAN over a WAN link, is that the Baltimore display used SGI's CXFS shared file system as the networked storage medium instead of just transferring simple block data.
Using this combination of technology, servers running different OSes can write to the same file system over large distances, eliminating the wait time usually associated with sharing remote data and the need to use file-sharing tools such as FTP.
Currently, most organizations have to shuttle large files back and forth for revisions, but using the SGI/LightSand solution, IT managers should be able to improve workflow and data access for clients.
An Adtech AX/4000 WAN simulator was set to create a delay comparable to 8,000 kilometers of WAN link. During the demo, SGI officials achieved sustained disk I/O of more than 60MB per second from one side of the WAN to the other.
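To put that simulated distance in perspective, light moves through optical fiber at roughly 200,000 kilometers per second, so an 8,000-kilometer link imposes about 40 milliseconds of one-way delay before any protocol overhead. The Python sketch below works through that arithmetic; the fiber-speed figure and the in-flight-data estimate are common rules of thumb, not measurements from the demo or settings of the Adtech simulator.

# Rough estimate of the latency an 8,000km link imposes (illustrative assumptions only)
LINK_KM = 8000                  # simulated WAN distance in the demo
KM_PER_SEC_IN_FIBER = 200_000   # about two-thirds the speed of light in a vacuum
SUSTAINED_MB_PER_SEC = 60       # sustained disk I/O reported in the demo

one_way_ms = LINK_KM / KM_PER_SEC_IN_FIBER * 1000           # about 40 milliseconds
round_trip_ms = 2 * one_way_ms                              # about 80 milliseconds
mb_in_flight = SUSTAINED_MB_PER_SEC * round_trip_ms / 1000  # data on the wire at any instant

print(f"One-way delay: {one_way_ms:.0f}ms, round trip: {round_trip_ms:.0f}ms")
print(f"Data in flight at {SUSTAINED_MB_PER_SEC}MB per second: {mb_in_flight:.1f}MB")

Sustaining 60MB per second across an 80-millisecond round trip means several megabytes of data are in transit at any moment, which is what makes shared file access at that distance noteworthy.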
Senior Analyst Henry Baltazar can be reached at henry_baltazar@ziffdavis.com.