One year after the product's initial release, Hewlett-Packard this week is launching a new version of its revamped HP StorageWorks Scalable File Share appliance, featuring faster performance and greater scaling capabilities geared toward Linux clusters.
Based on Hewlett-Packard Co.'s StorageWorks Grid architecture and due for release next month, HP SFS 2.0 is three times faster, tripling bandwidth to more than 35GB per second, and doubles the capacity of the year-old product to 512TB, said Kent Koeninger, product marketing manager of High-Performance Computing and Technology Marketing for Palo Alto, Calif.-based HP.
The Scalable File Share 2.0 upgrade includes a new entry-level HP SFS system at half the price of the original, made possible by significantly reducing the disk technology's hardware components. Available next month, the product has a starting list price of $44,000.
New features of the storage grid appliance include an InfiniBand high-speed interconnect; support for Red Hat Enterprise Linux v.3 and v.4, Fedora, and SuSE Linux Enterprise Server v.9; and NFS (Network File System) access for non-Lustre systems, HP officials said.
Koeninger said he expects future versions of HP SFS to adopt business continuity capabilities such as snapshots. In addition, the product's architecture will eventually allow customers to perform online storage upgrades without restarting the system.
HP SFS 2.0 scales bandwidth by distributing files in parallel across as many as thousands of clustered Linux servers and storage devices. Data storage then becomes a shared network resource managed as a single image.
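The approach is essentially Lustre-style file striping: a file is cut into fixed-size stripes written round-robin across many storage servers, so one client's reads and writes are spread over the whole pool. The minimal Python sketch below illustrates only the placement idea; the 1MB stripe size, function names, and in-memory "targets" are hypothetical and do not reflect HP's or Lustre's actual interfaces, where striping is handled by the file system client, not application code.

```python
# Toy illustration of round-robin striping of a file across storage targets.
# All names and parameters here are hypothetical, for illustration only.

STRIPE_SIZE = 1024 * 1024  # 1MB stripes (illustrative value)

def stripe_file(data: bytes, num_targets: int) -> dict:
    """Split a byte string into stripes assigned round-robin to targets."""
    placement = {t: [] for t in range(num_targets)}
    for i in range(0, len(data), STRIPE_SIZE):
        stripe = data[i:i + STRIPE_SIZE]
        target = (i // STRIPE_SIZE) % num_targets  # round-robin placement
        placement[target].append(stripe)
    return placement

def reassemble(placement: dict, num_targets: int) -> bytes:
    """Read stripes back from all targets in order to rebuild the file."""
    stripes = []
    rounds = max(len(v) for v in placement.values())
    for n in range(rounds):
        for t in range(num_targets):
            if n < len(placement[t]):
                stripes.append(placement[t][n])
    return b"".join(stripes)

if __name__ == "__main__":
    data = bytes(range(256)) * 20000            # ~5MB of sample data
    placement = stripe_file(data, num_targets=4)
    assert reassemble(placement, 4) == data      # round-trip check
    print({t: len(s) for t, s in placement.items()})  # stripes per target
```

Because each target holds only a fraction of the file, all of them can service a large sequential transfer at once, which is where the aggregate-bandwidth scaling comes from.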
Hugh Couchman, scientific director for the London, Ontario-based SHARCNET (Shared Hierarchical Academic Research Computing Network) academic research network, said HP SFS 2.0 will play a pivotal role in providing sufficient throughput for the file system on each of four large Linux clusters set to be installed this summer.
“[Bandwidth] is absolutely critical. The amount of data is so huge now you just have to push it through the file system effectively, and that's frequently been a weak link, especially in commodity clusters being built—getting that scalability has been a real challenge,” Couchman said.
“With some previous solutions you built commodity clusters and you attached very expensive high-end storage to it. With [HP SFS 2.0], you're building a commodity Linux cluster and you can use commodity, that is, nicely priced storage hardware, and run SFS on that to run an effective system,” he said.
According to Couchman, one of the clusters planned for use by the 11 academic institutions that make up SHARCNET is called a “Capability Cluster.” It will be used to run large single parallel applications across hundreds or perhaps thousands of processors.
An application of that size may produce a time slice of data hundreds of gigabytes in size that must be pushed out to the file system in a reasonable amount of time, or the system will sit idle performing I/O.
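In rough, illustrative terms, a hypothetical 350GB time slice would drain in about 10 seconds at the 35GB-per-second aggregate bandwidth HP is quoting, but would take nearly six minutes over a single 1GB-per-second link, time the compute nodes would spend waiting on I/O rather than computing.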