Apple Powers Grassroots Supercomputer
The Turing Cluster, named after the famed mathematician and computing pioneer Alan Turing and operated by the university's interdepartmental CSE (Computational Science and Engineering) group, is made up of 640 Xserve G5 rack-mounted servers connected by Myrinet, a high-speed, low-latency interconnect from Myricom Inc.
Professor Michael Heath, director of CSE, said he expects the cluster to reach about 5 teraflops when he runs the Linpack benchmarking software next week. This, he said, should place the Turing Cluster somewhere around 30th on the Top500 list of supercomputers worldwide.
This, Heath said, represents more than a tenfold increase over CSE's previous system, which was used for large-scale simulations in both engineering and pure scientific research, including modeling of solid rocket engines, virtual prototyping, molecular dynamics and protein folding. "It's the kind of computing that's the face of modern biology," said Heath.
Heath not only directed the Turing Cluster project but was also instrumental in raising the required funds. "It was creatively financed," he said, "and by that I mean it was like I had a bake sale." Heath canvassed university research groups that typically spent $20,000 to $50,000 apiece on their own smaller computing clusters, and asked them to pitch in on a large project they could all use. As word of his success spread, various university chancellors matched the funds Heath had already raised.
"It was a grass-roots assembly of a very impressive jump in computing power, all on a shoestring," Heath said. He estimated that the cost of the Turing Cluster project was under $3 million. This included the cost of hardware, infrastructure and some labor, which was largely supplied by CSEs regular system administrators and students.
The project took some "chutzpah" to start up, Heath said. He initially approached Apple's representative from its education sales division at the university and outlined his proposal to her. "They came up with the specs for the system based on CSE's needs," Heath said. He added that he and his staff held weekly calls with Apple while designing the system, and that Apple sent technicians to the site.
"We went with Apple because the price/performance ratio was better than anything else we saw," Heath said. In addition, he said, CSE had experience working with the G5 processor, also called the PowerPC 970, which CSE had used in products from IBM. In addition, he said, "power and cooling was also a critical factor." That the Xserve G5 produces less heat than a dual-processor server based on chips from Intel Corp. or Advanced Micro Devices Inc. helped the overall cost of the project. "We didnt want to spend all our money on air conditioning," Heath said.
The increase in computing power will not just allow more users, though Heath said that CSE expects the Turing Cluster to host hundreds of users, where the previous cluster could handle just over 100. "You don't just do the same thing with a tenfold increase in capacity," Heath said. "You do different things.
"This allows us increased resolution, so we can model data in detail, rather than glossing over these details," he explained.
Heath said the Turing Cluster was fully assembled and operating at 76 percent of capacity less than two months from the start of its installation. The grand opening of the Turing Cluster will be on Feb. 8.