Hitachi Data Systems Joins the Software-Defined Crowd

By Chris Preimesberger  |  Posted 2015-04-28

HDS says it now can address customer data workload requirements "from entry to the mainframe" with a single software stack.

Hitachi Data Systems, an old-school IT company that often comes late to a trend but knows how to find customers who like its style, has joined the software-defined, hyper-converged, data analytics-based data center equipment crowd that already includes Cisco Systems, Hewlett-Packard, Oracle, EMC, Dell and others.

On Day 1 of its Connect conference in Las Vegas, HDS on April 28 pitched itself as all things to all enterprises, promising to address customer data workload requirements "from entry to the mainframe" with a single software stack.

Hitachi has expanded its Unified Compute Platform lineup with new hyper-converged and converged servers and storage that it says can now support IT workloads from small to very large. The latest additions to the UCP line include the hyper-converged Hitachi UCP 1000 for VMware EVO:RAIL and the mildly converged Hitachi UCP 2000, both new rack servers that work best in small-to-midsize or remote/branch office environments.

The Hitachi UCP 6000 converged model combines the company's recently launched CB 2500 blade servers with the new storage arrays noted later in this story. The UCP package, with the addition of Hitachi Unified Compute Platform Director infrastructure automation middleware, ostensibly allows for rapid provisioning of infrastructure to manage fast-changing workload requirements.

HDS also has made available its own so-called "data lake" archive—another trendy term—for big data analytics consisting of its new Hyper Scale-Out Platform, which it contends provides cost-effective compute performance and on-demand capacity.

HDS said the Hyper Scale-Out Platform is capable of ingesting massive amounts of mixed data types across a distributed, clustered architecture by running Hitachi's own file system with open-source management and virtualization software. The Scale-Out Platform also connects directly with Hadoop environments, enabling users to analyze data in place and eliminating the need to move large data sets to perform big data analytics. HDS does not provide its own Apache Hadoop analytics engine, however.

HDS is upgrading its storage virtualization capabilities with the VSP product line—the current G200, G400 and G600 models and its upcoming G800 arrays. The servers, networking and all the storage are controlled by the company's single Scale-Out Platform stack, HDS said.

VSP arrays feature native heterogeneous storage virtualization and multi-site active-active storage, in addition to fully compatible data migration, replication and management.

In summary, HDS said its users now have the ability to choose systems based on the capacity, performance and price required to meet their business goals, not on functional differences.

All of these new products will be coming out later this year.

 
 
 
 
Chris Preimesberger

Chris Preimesberger is Editor of Features & Analysis at eWEEK. Twitter: @editingwhiz
Join us for our next eWEEKChat May 13: "Software-Defined Data Centers: The Changing Role of IT Hardware."
