Using the Internet2 network, a high-speed backbone connecting various universities and think tanks, a small collection of technology companies and research institutions in February transferred 6.25G bps across 6,569 miles—68,431 terabit-meters per second—a high-water mark for TCP/IP data transmissions measured against a unit of distance.
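The record metric is simply throughput multiplied by path length. A quick back-of-the-envelope check (the 10,949 km path length used here is the distance implied by dividing the quoted 68,431 terabit-meters-per-second figure by 6.25G bps; it is an inference from the article's own numbers, not a figure from the record announcement):

```python
# Back-of-the-envelope check of the record metric: throughput times
# distance, expressed in terabit-meters per second. The 10,949 km
# figure is inferred from the quoted metric, not taken from the
# record announcement.
throughput_bps = 6.25e9          # 6.25 gigabits per second
distance_m = 10_949_000          # ~10,949 km implied by the metric

product_tbm_per_s = throughput_bps * distance_m / 1e12
print(round(product_tbm_per_s))  # ~68,431 terabit-meters per second
```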
More importantly, researchers involved with the project say that the technology can be applied to data centers. Accomplishing the new speed record required re-engineering the S2io physical interface to overcome limitations of the Intel Corp. chip set architecture, Harvey Newman, professor of physics at the California Institute of Technology and one of the participants in the test, said in an interview.
Although Moore's Law postulates that transistor densities in semiconductors will double every 12 to 18 months, network bandwidth has improved at an even faster pace, Newman said. That has had a "transformative impact" on networking, far greater than on other segments of information technology, he said.
The team used four Itanium-based servers from Intel running the Microsoft Windows Server 2003 operating system, a 10G-bit Ethernet card from S2io Inc. and a Cisco Systems Inc. 7600 router. Caltech and CERN in Switzerland provided the facilities and the engineering for the project. The team transmitted about 499GB of data using nine simultaneous streams. An IPv4 addressing scheme was used, not the IPv6 protocol that is expected to form the foundation of next-generation networks.
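Splitting one transfer across several parallel TCP streams lets the aggregate throughput exceed what a single connection's congestion window would allow. A toy sketch of the idea over loopback (this is an illustration, not the record team's code; the stream count mirrors the test's nine streams, but the payload size is an arbitrary stand-in):

```python
# Illustrative sketch, not the record team's software: aggregating one
# transfer across several parallel TCP streams, here over loopback.
# CHUNK is an arbitrary stand-in for the ~499GB moved in the real test.
import socket
import threading

STREAMS = 9
CHUNK = 64 * 1024          # 64 KB per stream in this toy example

def serve(listener, totals, idx):
    # Each receiver thread handles one of the parallel connections.
    conn, _ = listener.accept()
    received = 0
    while True:
        data = conn.recv(65536)
        if not data:
            break
        received += len(data)
    conn.close()
    totals[idx] = received

def send(port):
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"\x00" * CHUNK)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(STREAMS)
port = listener.getsockname()[1]

totals = [0] * STREAMS
receivers = [threading.Thread(target=serve, args=(listener, totals, i))
             for i in range(STREAMS)]
senders = [threading.Thread(target=send, args=(port,)) for _ in range(STREAMS)]
for t in receivers + senders:
    t.start()
for t in receivers + senders:
    t.join()
listener.close()

print(sum(totals))  # total bytes received across all nine streams
```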
Although the Internet2 team is honoring the accomplishment this week, the test was performed on Feb. 22 and verified within the past few weeks, Newman said.
The team used the Abilene network, a proving ground for next-generation networking technology. On Feb. 4, Abilene completed its upgrade from a 2.5G-bps backbone to a 10G-bps infrastructure.
The team agreed beforehand to allow packet loss, to better simulate real-world conditions. Achieving the record required adjusting the TCP/IP stack's buffer sizes as well as tuning the physical interface, Newman said.
"I think this is more difficult than using individual PCs, as it shows that you can get a lot of streams across a production network," Newman said.