Technology from little-known Panasas shatters an old I/O bottleneck.
Supercomputing's newest baby on the way is a $35 million system named after a cartoon character and a muscle car: Roadrunner. Currently being built by IBM for the Department of Energy's Los Alamos National Laboratory, in Los Alamos, N.M., Roadrunner could develop into a machine capable of achieving a never-before-sustained speed of 1,000 trillion calculations, or one petaflop, per second.
If that sounds like a lot of computational intelligence, trust us, it is.
Roadrunner could become the next-generation supercomputer rock star for the DOE's stockpile stewardship program, which helps ensure that the U.S. nuclear weapons stockpile is safe and reliable so nobody has to reinstitute underground nuclear testing.
Roadrunner is being built entirely from commercially available hardware and is based on Red Hat's RHEL (Red Hat Enterprise Linux) 4.3 operating system. IBM System x3755 systems based on Advanced Micro Devices' Opteron processors are being deployed in conjunction with IBM BladeCenter H systems with powerful new Cell chips, the latter originally intended for high-end video games. Roadrunner is expected to help usher in a new computing paradigm, in which hybrid architectures are used for extreme-scale computation.
Suffice it to say Roadrunner is intended to become the fastest computer in the world.
While the processors get faster, the architecture and I/O more efficient, and the software better tuned, there remains a major issue: With that firehose stream of calculation data going through the system, how can a storage system be built big enough to get its digital arms around the entire load? Storage I/O has historically been the biggest, nastiest bottleneck for supercomputing.
No more. Eight-year-old, 125-employee Panasas, of Fremont, Calif., was retained by the Roadrunner team to deploy Panasas ActiveScale 3.0 Storage Cluster as the storage package for the new petascale supercomputer, and the problem is being solved.
Roadrunner will run extremely complex scientific calculations using the Linux operating system and the Panasas Storage Cluster with DirectFlow. The DirectFlow capability offers a fully parallel data path called PNFS (Parallel Network File System) to allow high-speed, direct communications between the Roadrunner team's Linux cluster and Panasas storage cluster nodes.
Conventional storage systems use a single two-way head controller to direct data traffic. Panasas alone features PNFS, which Panasas founder Garth Gibson, an internationally known inventor of RAID storage, has championed from the beginning. PNFS adds a second two-way head controller; imagine building a second two-way roadway over an existing two-way highway.
"PNFS separates the metadata access from the data path, allowing clients to get direct and parallel access to NAS [network-attached storage]," Henry Baltazar, an analyst with The 451 Group, told eWEEK. "With a standard SAN [storage area network] or NAS storage system, a single controller head can wind up being a bottleneck, especially in performance-critical environments such as HPC [high-performance computing]. The main advantage of clustered storage systems is that they spread the load across multiple systems to ensure high-speed data access."
The parallel file system is an "absolutely crucial part of the new Roadrunner ecosystem," Mike Karp, an analyst at Enterprise Management Associates, told eWEEK.
"The type of calculations that Los Alamos runs are at a level of complexity that demands parallelized computing processes," Karp said, "which in turn means that the data must be delivered to the various CPUs simultaneously, with very low latency, and at a very high I/O rate, to ensure that calculations can be executed at the same timethat is, in parallel."
Panasas' Gibson told eWEEK that "reliability and integrity" are the two main hallmarks of the Panasas storage system.
"When something really bad happensdisk read errors during disk failure rebuilds and maybe a network error thrown in for sportPanasas does not toss away terabytes of data just because a tiny amount of data is unreachable," Gibson said. "Instead, Panasas automatically fences off the file containing problematic data and makes the rest of the terabytes of data available to applications and users without interruption."
Will this parallel file system structure eventually work its way into enterprise computing?
"Panasas is continuing to drive for faster parallel I/O handling for the very-high-end supercomputing environment," Tom Trainer, an analyst with Evaluator Group, told eWEEK. "This is certainly a niche that most other storage vendors do not see as a large and profitable endeavor. But this is where the companies such as EMC and IBM are missing an opportunity."
Panasas and BlueArc know that supercomputing is starting to have a trickle-down effect into the business computing environment, Trainer said.
"More and more data is being created at alarming rates," Trainer said. "Credit card companies, for example, must move client account information at lightning speed and analyze for fraud detection at increasingly faster rates. Supercomputers are starting their walk into the data center, and as they step in, there will only be a small number of vendors positioned to provide the requisite storage products required by these data- munching monsters."
Baltazar of The 451 Group had a different take. "The cluster technology that is around today is [all] proprietary," he said. "The forthcoming PNFS standard [which Gibson has been promoting in standards bodies for years] will help this technology move closer to the enterprise, but at this point, this technology will be confined to niche markets, such as HPC."
Chris Preimesberger was named Editor-in-Chief of Features & Analysis at eWEEK in November 2011. Previously he served eWEEK as Senior Writer, covering a range of IT sectors that include data center systems, cloud computing, storage, virtualization, green IT, e-discovery and IT governance. His blog, Storage Station, is considered a go-to information source. Chris won a national Folio Award for magazine writing in November 2011 for a cover story on Salesforce.com and CEO-founder Marc Benioff, and he has served as a judge for the SIIA Codie Awards since 2005. In previous IT journalism, Chris was a founding editor of both IT Manager's Journal and DevX.com and was managing editor of Software Development magazine. His diverse resume also includes: sportswriter for the Los Angeles Daily News, covering NCAA and NBA basketball, television critic for the Palo Alto Times Tribune, and Sports Information Director at Stanford University. He has served as a correspondent for The Associated Press, covering Stanford and NCAA tournament basketball, since 1983. He has covered a number of major events, including the 1984 Democratic National Convention, a Presidential press conference at the White House in 1993, the Emmy Awards (three times), two Rose Bowls, the Fiesta Bowl, several NCAA men's and women's basketball tournaments, a Formula One Grand Prix auto race, a heavyweight boxing championship bout (Ali vs. Spinks, 1978), and the 1985 Super Bowl. A 1975 graduate of Pepperdine University in Malibu, Calif., Chris has won more than a dozen regional and national awards for his work. He and his wife, Rebecca, have four children and reside in Redwood City, Calif. Follow on Twitter: editingwhiz