Virtualization Will Keep Driving Storage Market in 2013: 10 Reasons Why

 
 
By Chris Preimesberger  |  Posted 2013-01-18

Gap Widens Between Legacy Storage and Virtualization

Conventional storage systems installed by legacy vendors were designed more than 20 years ago to meet the demands of physical infrastructure. As data centers become more virtualized, the gap widens between how those storage systems were designed and what virtual environments demand. While some industry players are attempting to adapt virtualization products to legacy storage through APIs, or to retrofit legacy storage to work in virtualized environments, neither approach will go far enough to bridge the "grand canyon" between these two mismatched technologies. What is needed is storage that has been completely redefined to operate in the virtual environment, not in the physical constructs of legacy storage platforms.


Over-provisioning of Storage Will Hit the Wall

More than 60 percent of typical VMware deployment costs are attributed to storage. Why? General-purpose, disk-based storage is poorly suited to the random I/O streams of virtual environments, so companies tend to overprovision storage to meet demand. Adding more spindles not only fails to solve the fundamental problem; it also adds unneeded excess capacity, consumes more data center space and energy, and increases management overhead. In 2013, more companies will realize that storage costs erode the predicted return on investment (ROI) of their virtualization projects and hold them back from virtualizing more efficiently.
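A back-of-the-envelope sizing sketch shows why random I/O, not capacity, drives spindle counts in disk-based arrays. All figures below (per-disk IOPS, disk capacity, workload size) are illustrative assumptions, not numbers from the article:

```python
# Why disk-based arrays get over-provisioned for virtual workloads:
# the disk count needed to serve random IOPS far exceeds the count
# needed for raw capacity. Figures are rough, hypothetical estimates.

def disks_needed(required_iops, required_tb, iops_per_disk=175, tb_per_disk=0.6):
    """Return (disks for capacity alone, disks for random IOPS alone)."""
    for_capacity = -(-required_tb // tb_per_disk)   # ceiling division
    for_iops = -(-required_iops // iops_per_disk)
    return int(for_capacity), int(for_iops)

# A modest virtualized estate: 20 TB of data generating 20,000 random IOPS.
by_capacity, by_iops = disks_needed(20_000, 20)
print(by_capacity, by_iops)   # prints: 34 115
```

Under these assumptions the array needs more than three times as many disks for performance as for capacity, and the surplus capacity those extra spindles carry is exactly the over-provisioning the slide describes.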


New Metrics Will Be Needed for New-Gen Storage

There is a lot of comparison in the storage world around how many input/output operations per second (IOPS) storage systems can produce. As the next generation of storage products gains popularity, expect more companies to use different metrics to measure the efficiency of their storage systems: for example, $/IOPS for nonvirtualized workloads with the highest performance requirements, $/workload for virtualized, application-specific workloads, and $/GB or $/TB for unstructured file data.
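The three metrics are straightforward ratios, but they rank systems very differently. A small sketch with invented prices, IOPS figures, and workload counts (none of these numbers come from the article) makes the point:

```python
# Hypothetical comparison of the cost metrics named above. All prices,
# IOPS figures, workload counts, and capacities are invented for
# illustration; real quotes vary widely.

systems = {
    "all-flash array":  {"price": 250_000, "iops": 500_000, "workloads": 400, "tb": 50},
    "disk-based array": {"price": 150_000, "iops":  20_000, "workloads":  60, "tb": 200},
}

for name, s in systems.items():
    per_iops = s["price"] / s["iops"]          # $/IOPS: raw performance cost
    per_workload = s["price"] / s["workloads"] # $/workload: virtualization cost
    per_tb = s["price"] / s["tb"]              # $/TB: bulk capacity cost
    print(f"{name}: ${per_iops:.2f}/IOPS, "
          f"${per_workload:.0f}/workload, ${per_tb:.0f}/TB")
```

With these made-up numbers, the flash array wins on $/IOPS and $/workload while the disk array wins on $/TB, which is why the right metric depends on whether the workload is performance-bound, virtualized, or unstructured bulk data.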


Simplified Virtualization Management Key to Storage Success

In 2013, simplicity of managing storage in virtual environments will be recognized as a critical metric for success. This, in turn, will inspire more products to integrate VM-aware storage management features.


Flash Becomes Mainstream but Is Insufficient on Its Own

As the price of flash comes down, more companies will incorporate it into their storage systems. Flash on its own, however, will fall short of most enterprises' expectations once performance efficiency, simplicity and data management capabilities are taken into consideration. Furthermore, while it is feasible and necessary for a flash-based array to deliver a lot of IOPS, it is even more important that the IOPS delivered match the appropriate performance and latency profiles. Flash players will need to distinguish themselves beyond simply providing a commoditized flash product; storage management features for virtualized environments will take center stage.


VDI Moves Beyond All the Misconceptions

While some of the earlier misconceptions about virtual desktop infrastructure (VDI) still linger (e.g., VDI is about saving money; VDI can't be done economically because storage is too expensive and it's too complicated to manage), the industry has come a long way in proving these myths wrong. Along the way, more enterprises have figured out what VDI is really about. They have found new uses for VDI and deployed VDI successfully by not following the conventional wisdom. Storage efficiency derived from flash-based storage with virtualization-aware management capability has finally made VDI economically feasible. VDI has a renewed purpose. Expect to see more businesses adopt VDI with this new approach in 2013.


Quality of Service Comes to Forefront

Virtualization demands a different kind of storage—the kind that understands the I/O patterns of a virtual environment and automatically manages quality of service (QoS) for each VM, rather than only for logical unit numbers (LUNs) or volumes. Operating at the VM level also enables data management operations to reach all the way down to a specific application. Flash enables dense storage systems that can host thousands of VMs in only a few rack units of space. Given such high densities, QoS features that administrators can understand and use easily will be critical.
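One common way to enforce a per-VM IOPS budget is a token bucket in the I/O admission path. The sketch below is a minimal, hypothetical illustration of that idea (the class, VM names, and rates are invented; real arrays implement this in the data path, not in Python):

```python
# Minimal sketch of per-VM QoS via a token bucket: each VM gets its own
# IOPS budget, so one noisy VM cannot starve its neighbors. All names
# and limits here are hypothetical.

import time

class VmIopsLimiter:
    def __init__(self, iops_limit):
        self.rate = iops_limit          # tokens (I/Os) replenished per second
        self.tokens = float(iops_limit) # start with a full one-second burst
        self.last = time.monotonic()

    def try_io(self):
        """Admit one I/O if this VM is under its budget, else throttle it."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at one second's worth.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # caller queues or delays the I/O

# Separate budgets per VM, not per LUN: a burst from "test-vm" is capped
# without touching the budget of "db-vm".
limits = {"db-vm": VmIopsLimiter(5000), "test-vm": VmIopsLimiter(500)}
admitted = sum(limits["test-vm"].try_io() for _ in range(1000))
print(admitted)   # roughly 500: the burst beyond the budget is throttled
```

The design point is the keying: because the bucket is per VM rather than per LUN, the throttle lands on the offending application, which is what VM-level QoS means in practice.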


2013: Year of Intelligent Software for Storage

The acquisitions of XtremIO by EMC and Texas Memory Systems by IBM drew a lot of attention to the flash market in 2012. The challenge is not just to create products built around flash, but to use flash wisely and cost-effectively while intelligently managing data access. Beyond basic fast, cheap flash hardware, the focus will move to intelligent software that integrates seamlessly with the application layer and lets administrators manage application data in VMs rather than storage.


Software-Defined Data Center Will Gain Traction

There is much talk about the software-defined data center (SDDC), as the desire for infrastructure that is fundamentally more flexible, scalable and cost-effective begins to dominate data center planning. Architects will be looking for infrastructure that understands application workloads and can automatically allocate resources to match application demands. Rather than construct data centers full of over-provisioned resources, the SDDC concept seeks to use and share all aspects of the infrastructure more efficiently, from servers to networking to storage.


Storage Will Follow Server and Network

As the software-defined data center concept gains traction, it will become increasingly apparent how far storage needs to evolve to embrace the tenets of a software-defined model. To date, storage lags behind the server and networking layers in this transformation—and as a result it is the No. 1 pain point today. The industry cannot expect to achieve software-defined storage by simply adding new features or points of integration to existing legacy storage architectures. Virtualized environments require storage designed for virtualization. Enterprises expecting to get the full benefit of the software-defined data center will turn to VM-aware storage that can deliver the simplicity of management and agility that virtualization promises.

