Hitachi Data Systems May 14 introduced a new, unified enterprise storage system with performance and capacity claims that appear to border on science fiction—or at least unabashed marketing hype.
The new Universal Storage Platform V, touted as “the most advanced, intelligent storage services platform on the planet,” features a mega-controller that the company claims can deliver 3.5 million IOPS (input/output operations per second) and a 500 percent increase over its top competitors in virtualized storage port performance for external storage, Hitachi Chief Technology Officer Hu Yoshida said in a conference call.
The 3.5 million IOPS figure compares with a top speed of 700,000 IOPS by Hitachi's biggest competitor, EMC, with its Symmetrix DMX-3 storage system, Yoshida said.
The Platform V (the letter, not the Roman numeral) is designed to do many things, but it really has two main duties: support up to a whopping 247 petabytes of raw storage capacity, and serve as a central controller/coordinator for all storage systems currently working in an enterprise's data center.
“We really don't care what the media is [hard disks, tape, optical disks, etc.]; it's immaterial,” Yoshida said from company headquarters in Santa Clara, Calif.
“As long as it has an interface that we can connect with, we can access the storage. The Platform V will be able to utilize it within the system, whatever brand it is.”
Key feature: Next-generation virtualization layer
Platform V also features a next-generation, large-scale heterogeneous virtualization layer that works hand-in-hand with Hitachi's own thin-provisioning features.
“One of the most compelling aspects of the new Hitachi USP V is that it provides three dimensions of storage virtualization,” said Tony Asaro, analyst with Enterprise Strategy Group in Milford, Mass.
“The first is its internal virtualization capability that includes thin provisioning, large logical storage pools, wide striping, virtual partitions and quality of service. The second dimension of the USP V is its external storage virtualization software managing heterogeneous storage systems and its capabilities as a high-end platform that provides best-in-class performance, scalability and reliability,” Asaro said.
“The third dimension is that you can take all of the considerable intelligence and functionality within the USP V and extend it to any class of storage you have in the data center,” he added.
Hitachi's Universal Star Network V architecture remains the only one of its kind in the storage industry, featuring a crossbar-switch ASIC design and separate, dedicated internal networks for data and control (metadata), Yoshida said.
He added that this enables the platform to deliver advanced storage services, such as storage-agnostic universal replication, large-scale virtualization, logical partitioning and the newly announced Hitachi Dynamic Provisioning software, large logical storage pools and wide striping.
New virtualization software, also announced May 14 by Hitachi, can manage a virtualized pool of up to 247 petabytes of capacity, Yoshida said, effectively removing capacity constraints for global companies centralizing storage and processing within a single geographic area.
Hitachi also announced new software for the creation of large logical storage pools, enabling hundreds of disk drives to operate on a single input/output (I/O) request simultaneously.
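Hitachi has not published the internals of this pooling software, but the general technique it describes is wide striping: a single logical I/O request is carved into stripe-sized chunks that land on many drives, so all of them can service the request in parallel. The sketch below illustrates the idea with hypothetical parameters (the 256KB stripe size and drive count are illustrative, not Hitachi's actual values):

```python
# Illustrative sketch of wide striping, not Hitachi's implementation.
# A logical address range is divided into fixed-size stripes that
# rotate across the drives in the pool.

STRIPE_KB = 256  # hypothetical stripe (chunk) size


def stripe_map(offset_kb, length_kb, num_drives):
    """Return (drive, logical_offset_kb, chunk_len_kb) for each chunk
    of a logical I/O request, rotating stripes across the pool."""
    chunks = []
    pos = offset_kb
    end = offset_kb + length_kb
    while pos < end:
        stripe_index = pos // STRIPE_KB
        drive = stripe_index % num_drives          # round-robin placement
        chunk_len = min(STRIPE_KB - pos % STRIPE_KB, end - pos)
        chunks.append((drive, pos, chunk_len))
        pos += chunk_len
    return chunks


# A single 1MB read is split into four chunks, each on a different drive,
# so four spindles work on the one request simultaneously.
chunks = stripe_map(0, 1024, num_drives=100)
print(chunks)
```

With a pool of hundreds of drives and a large enough request, every spindle in the pool can contribute to a single I/O, which is the effect the announcement describes.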
John Webster, principal analyst at Illuminata in Nashua, N.H., told eWEEK that “USP proves that Hitachi customers could use a virtualized controller to consolidate and manage arrays from other vendors.
“USP-V makes an even bigger statement about consolidating and managing from a single point of control for both mainframe and open-systems disk. That's a threat to EMC and IBM. It's not a threat to HP, because HP sells the same device, the USP, as its XP storage array,” Webster said.
Thin provisioning: a hot commodity
Hitachi made a major point in the Platform V announcement that this is the first enterprise-ready storage system to include both virtualization and thin provisioning as standard features, which allows for better overall control, a lower power draw and a smaller carbon footprint.
Thin provisioning, currently a hot commodity in the storage world, is a feature that lets administrators provision logical volumes of almost any size while the array consumes only the fraction of physical capacity that has actually been written.
As a user begins writing to the disk, the array allocates only the physical space the job needs and adds it to the volume. Untouched capacity isn't allocated, and thus doesn't require any energy to maintain.
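The allocate-on-first-write behavior described above can be sketched in a few lines. This is a minimal model of thin provisioning in general, not Hitachi's Dynamic Provisioning implementation; the 64MB extent size is a hypothetical allocation granularity:

```python
# Minimal sketch of thin provisioning: a volume advertises a large
# logical size, but physical extents are drawn from the pool only
# when a region of the volume is first written.

EXTENT_MB = 64  # hypothetical allocation granularity


class ThinVolume:
    def __init__(self, logical_size_mb):
        self.logical_size_mb = logical_size_mb
        self.allocated = set()  # extent indices backed by physical storage

    def write(self, offset_mb):
        """Writing a region allocates its backing extent on first touch."""
        if offset_mb >= self.logical_size_mb:
            raise ValueError("write beyond logical size")
        self.allocated.add(offset_mb // EXTENT_MB)

    def physical_usage_mb(self):
        return len(self.allocated) * EXTENT_MB


vol = ThinVolume(logical_size_mb=1_000_000)  # ~1TB advertised to the host
for off in (0, 10, 70, 128):                 # touch a few scattered regions
    vol.write(off)
print(vol.physical_usage_mb())               # only 3 extents allocated: 192
```

The host sees a terabyte-class volume, but the pool has given up only the handful of extents actually touched, which is where the power and capacity savings come from.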
Hitachi also unveiled a 4Gb-per-second Fibre Channel switched backplane in an enterprise-class storage platform, and claims to be the only company with a switched internal backplane architecture. In a backplane connection, a disk drive plugs directly onto the back of the storage unit, which cuts back on cabling, simplifies the architecture and adds throughput speed.
Asaro of ESG said that 247 petabytes is based on external virtualization support and on the number of logical devices the controller supports for additional performance scalability.
“That [247PB] is a big number … the point is that they [Hitachi] theoretically have no ceiling,” Asaro said. “As for the performance, they have done a number of architectural improvements that have increased performance. We analyzed these and, yes, the performance enhancements are realistic corner case improvements. These will result in real application performance improvements.”
Tradeoff: High capacity for speedier performance?
High scalability in a storage system is a major selling point, but the tradeoff is often a hit on I/O performance. This apparently isn't the case with the new Hitachi offering, analysts told eWEEK.
“If you want to support 247 petabytes behind one of these systems, then obviously you're making some tradeoffs on the performance of each individual pull. So it has to do with the application set that you're going after,” Rick Villars, storage analyst with IDC in Framingham, Mass., told eWEEK.
“This is the scale and size of the domain that they're willing to support. They've boosted the processor power, they moved to a 4Gb architecture, and they actually doubled the number of systems in the processor base. So some of those data points are actually reachable when you do that kind of boost.”
Tom Trainer, an analyst with Evaluator Group in Greenwood Village, Colo., said he believes that 3.5 million IOPS is a believable number based on the technical materials he reviewed prior to this announcement.
Trainer said that the appropriate question to ask is: Who can actually push the box that hard so that it can put the pedal to the metal and perform at that high rate of speed?
“The answer is high-end enterprise customers with fast Unix servers and mainframe customers who need, for example, fast speed for credit card authorization, rapid copy and movement of data to fraud detection servers in order to catch electronic theft faster; and of course reservation systems that need real-time high-speed performance,” Trainer said.
Webster of Illuminata told eWEEK that “we keep saying that storage requirements grow at an average of 60 percent plus per year. Why then should this (3.5 million IOPS) seem out of line? This is where the growth trajectory is headed for those high-end customers who want to consolidate and manage larger environments with fewer vendors.”
Upping the ante against the competition
Trainer of the Evaluator Group said Hitachi is indeed upping the ante against some formidable competition in EMC, IBM and others.
“They are leveraging a controller design with faster hardware, and they are clearly stating that they will provide both the commitment on the software side and professional services side to leverage the brain separated from the body [storage],” Trainer said.
Trainer said that if Hitachi executes well over the next year or so, it could set the next trend in storage controller design.
“Think about the competitive hardware and software reply to this kind of ante … in the end, this could be much more effective and efficient for the user. But, as I say, it's all in the execution. The industry, at large, may expect Hitachi to stumble … effective execution is measured over time,” Trainer said.
Pricing for the Hitachi Universal Storage Platform V base model is approximately $250,000. It is available now.