The Texas Advanced Computing Center (TACC), founded in 2001, has been a key computational research facility over the past 10-plus years, starting off with a midlevel Cray system and now housing several powerful supercomputers.
TACC, located in Austin, is supported by the University of Texas at Austin, the National Science Foundation, and grants from other federal agencies. It is part of the foundation's XSEDE (Extreme Science and Engineering Discovery Environment) project, aimed at giving researchers greater access to supercomputing resources.
Stampede, the newest supercomputer at the center, will offer up to 10 petaflops of performance when it becomes fully operational in January 2013. It will incorporate more than 5,000 PowerEdge C8220X servers from Dell, each powered by two eight-core Xeon E5-2680 chips from Intel. The cluster also will include new Xeon Phi coprocessors from Intel to boost performance.
The blue lights in the systems indicate Xeon Phi coprocessors from Intel; the green lights show that the systems are turned on.
Along with the Xeon Phi coprocessors, Stampede also will offer 128 next-generation Nvidia GPUs for remote visualization, and all the components—from compute nodes to visualization nodes—will be integrated with a 56-Gbps InfiniBand network from Mellanox Technologies.
Dell has leveraged the systems it created for Stampede to offer organizations with hyperscale data centers highly scalable and flexible servers. The PowerEdge C8000 series uses the same design principles found in the systems in the TACC supercomputer.
Intel Xeon Phi
Stampede also is leveraging Intel's upcoming Xeon Phi coprocessors, which can be used with traditional processors to help scale the performance and flexibility of the supercomputer. TACC is using a special prerelease shipment of Xeon Phi chips, which had been known as "Knights Corner."
Engineers have carefully mapped out where everything in the Stampede supercomputer will go.
Stampede will not be fully operational for another three months, and there is still a lot of work to be done on the supercomputer, which will eventually encompass about 8,000 square feet of space.
There are still stacks of unopened boxes holding equipment and technology needed to complete Stampede. Pictured here are boxes of networking gear from Mellanox.
Scaffolding can be found in some aisles, used by engineers as they continue building out Stampede.
A red arrow is used on the side of one aisle to point people in the right direction.
In the back of the room that is housing Stampede are two large containers filled with mineral oil. Organizations with hyperscale data centers are always looking for ways to run their systems cooler, and one approach is immersing systems in mineral oil, which draws heat away more efficiently than air.
Housed in a room right next to Stampede—and not separated by a wall—is the Sun Constellation Linux Cluster, better known as Ranger. In production since 2008, Ranger—incorporating 3,936 sixteen-way compute nodes powered by 15,744 Opteron chips from Advanced Micro Devices—will be replaced by Stampede once the larger supercomputer is fully operational.
Ranger includes Sun Blade 6048 blade servers from Sun Microsystems, which Oracle acquired in 2010.
Ranger also includes systems from Dell.
Feeling the Heat
Pictured here is a hot aisle in the Ranger supercomputer. According to TACC, researchers from Stanford studying clean coal were able to run optimization calculations on Ranger in 20 days that would have taken 20 years on a desktop system.