NSF's Research Network

The last thing the U.S. needs is another underused high-speed test bed network designed to look for the next great Internet application, right?

But link computers at four major research institutions via an ultra-high-speed broadband network, and you might get a glimpse of the real future of the Internet.

The National Science Foundation earlier this month announced the Distributed Terascale Facility (DTF), a $53 million, three-year project linking computing power at four major research institutions via a 40-gigabit-per-second pipe provided by Qwest Communications International. IBM will contribute geographically distributed Linux servers, and Intel will contribute its powerful Itanium family of processors.

The idea is to prove the commercial and scientific viability of a virtual machine room, or computing facility, that lets researchers tap processing power in many locations for work on data-intensive problems, such as climate, biology, genome, protein or combustion modeling. "This is the supercharger," says Wesley Kaplow, chief technology officer of Qwest's government systems division.
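In outline, the virtual machine room splits one data-intensive job into pieces, farms them out to clusters at different sites and gathers the partial results. Here is a minimal sketch of that pattern in Python; the site names mirror the DTF partners, but the dispatch function is a stand-in for illustration, not the facility's actual middleware:

```python
from concurrent.futures import ThreadPoolExecutor

# The four DTF sites; in the real facility each would be a remote
# cluster reached over a dedicated 10-Gbps channel.
SITES = ["ncsa", "argonne", "caltech", "sdsc"]

def run_chunk(site, chunk):
    # Stand-in for shipping one slice of a model run (climate,
    # genome, combustion) to a remote cluster and awaiting the result.
    return sum(x * x for x in chunk)  # placeholder computation

def run_distributed(n):
    # Split the problem into per-site chunks, run them concurrently
    # and combine the partial answers: the virtual-machine-room idea.
    step = n // len(SITES)
    chunks = [range(i * step, (i + 1) * step) for i in range(len(SITES))]
    with ThreadPoolExecutor(max_workers=len(SITES)) as pool:
        return sum(pool.map(run_chunk, SITES, chunks))

print(run_distributed(1_000_000))
```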

"Theres nothing particularly experimental in lighting up four 10-Gbps channels," Kaplow says. "And this issue really isnt whether you can make the facility. The pieces are there, the computer clusters are there, the networking technology and the bandwidth are no longer just theory. The time has come to put all the pieces together."

Big experiments generate tremendous amounts of data that, as numbers on a page, aren't meaningful. The DTF, however, should allow very high-resolution visualizations of those experiments. A researcher could create a model of a storm, walk into it, take slices out of it and request that the computation proceed in a different direction.
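Those "slices" can be read almost literally: a cross-section of a simulated volume is just an array slice. A toy sketch follows, with an invented random field standing in for terascale storm data:

```python
import numpy as np

# Invented stand-in for a storm model: wind speed on a 3-D grid.
# Real DTF datasets would be terascale, streamed across the network.
rng = np.random.default_rng(0)
wind_speed = rng.random((64, 64, 32))  # x, y, altitude

# "Take slices out of it": a horizontal cross-section at one altitude
# is a cheap 2-D view of the volume, ready to render.
cross_section = wind_speed[:, :, 10]
print(cross_section.shape)  # (64, 64)
```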

At the enterprise level, the DTF may help prove that, as the costs of processing and bandwidth drop, it is more efficient to harness corporate computing power in two locations to work on a single manufacturing, design or rendering problem than it is to fly tapes of the data back and forth.
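A back-of-the-envelope comparison makes the point; the 10-terabyte dataset and the overnight-shipping figure below are assumptions for illustration, not numbers from the project:

```python
# Moving a hypothetical 10-terabyte dataset over the DTF's 40-Gbps
# backbone versus overnight-shipping tapes between two sites.
dataset_bits = 10 * 10**12 * 8          # 10 TB expressed in bits
link_bps = 40 * 10**9                   # 40 Gbps
hours = dataset_bits / link_bps / 3600
print(f"network transfer: about {hours:.1f} hours")  # ~0.6 hours
print("shipped tapes: roughly a day door to door")   # assumed
```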

"We effectively are improving on the Internets elimination of distance and time barriers by making shared access to massive data — whether its output from a radio telescope or scientific computer simulations — a routine endeavor," says Dan Reed, director of the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. The network will link Reeds lab, Argonne National Laboratory in Argonne, Ill.; the California Institute of Technology in Los Angeles; and the San Diego Supercomputer Center at the University of California at San Diego.

Not long ago, observers lamented the fact that the NSF's very-high-performance Backbone Network Service, the University Corporation for Advanced Internet Development's Abilene project and the Defense Advanced Research Projects Agency's SuperNet failed to spawn many new, advanced Net apps or a meaningful transfer of technology to the private sector. But Kaplow says the DTF is different.

"The reason this was appealing to Qwest was that it wasnt a network in search of a mission, or computers searching for a network," he says. "Its where a network, computer, middleware, software and applications all come together."