eWEEK Identifies Key Storage Trends of 2010

The most significant up-and-to-the-right trend continues unabated: the mounting enterprise and consumer deluge of newly created or duplicated data files, growing at 40 to 60 percent per year.

At six-month intervals here at eWEEK, we offer a snapshot of important new and continuing trends in data storage. These are based upon daily conversations with storage vendors, analysts, data center managers, CIOs and CTOs.
As we take a break at the end of 2010 to brace for the onslaught of new products and services to come for 2011, here are eWEEK's observations of key trends in storage. Some are new, and some are continuations of trends we saw previously.
The most significant up-and-to-the-right trend continues unabated: the mounting enterprise and consumer deluge of newly created or duplicated data files. One prognostication is consistent from a number of analysts: The universe of data continues to grow at an estimated 40 to 60 percent clip per year, and it is not slowing down or even leveling off.
Gartner Research reported Nov. 1 that about half (47 percent) of 1,000 data center managers surveyed ranked data growth as the biggest daily challenge they face. The other problems are the perennials: system performance and scalability (37 percent) and network congestion and connectivity architecture (36 percent).
Big Trend No. 2: Storage media, including spinning disk hard drives, solid-state NAND and NOR Flash, and digital tape, continues to become more efficient to use as engineers and manufacturers improve these products at regular intervals. Vastly improved networking and processing speeds are also instrumental in these upgrades. Yet the industry is not catching up with the amount of data being created -- by humans and by computers.
Big Trend No. 3: Pricing. Storage, whether as a product or a service, continues to fall in total cost, thanks to increased marketplace competition and improvements in manufacturing techniques.
Underneath these three are six important sub-trends:
Data retirement: As most of us know, data that is supposedly wiped from a hard drive isn't always deleted permanently. In fact, when a file is dragged into a "trash bin," the system has simply changed the folder in which it resides.
When the trash bin is subsequently "emptied," most of the time nothing has actually been deleted; the file's data blocks have simply been designated for reassignment. Those blocks may not be reassigned to other files for days or weeks, so the previously "deleted" file can still live in the system for an undetermined length of time.
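A minimal sketch can make the distinction concrete. A plain delete only unlinks the directory entry; truly retiring the data means overwriting the bytes first. The function below is a simplified, hypothetical illustration, not a production sanitization tool (real tools must also contend with filesystem journaling, SSD wear-leveling, and backup copies):

```python
import os

def shred(path, passes=1):
    """Overwrite a file's contents in place, then unlink it.

    A bare os.remove() only drops the directory entry; the data
    blocks stay on disk until the filesystem happens to reuse them.
    Zeroing the file first replaces the old bytes (simplified sketch;
    journaling filesystems and SSDs complicate this in practice).
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push the zeros to the device
    os.remove(path)  # now unlink the zeroed file
```

The point of the sketch is the ordering: overwrite, sync, then unlink, rather than unlink alone.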
In the data recovery field, knowing the details of this is what keeps companies such as Kroll Ontrack, DriveSavers, i365 EVault, and numerous others in business.
A data retirement project is one that examines all archived and backed-up data in a system to make sure it gets completely deleted from all storage nodes when its expiration date comes up. Projects of this nature can take weeks or months, depending upon the size and scope of the storage system. An increasing number of enterprises, concerned about long-term storage usage and power consumption, are looking into these initiatives.
In large IT systems, substantial amounts of data often do not get deleted on a regular schedule, despite company policies. This is most often due to multiple backup copies and mirrored systems that get passed over at cleanup time.
All those extra copies take up valuable space on digital disks or tape that can cost the enterprise dearly over time. The term "storage system constipation" is becoming a painful reality for many storage administrators.
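The core of a retirement sweep is simple to state: walk a catalog of every copy on every node, including the backup and mirror copies that most often get passed over, and flag anything past its expiration date. A hypothetical sketch (the catalog format and function name are illustrative assumptions, not any vendor's API):

```python
import datetime as dt

def retire_expired(catalog, today=None):
    """Split a storage catalog into expired and still-live entries.

    catalog: a list of dicts like
        {"path": "/bak/a", "node": "node-1", "expires": dt.date(...)}
    covering primary, backup, and mirrored copies alike.
    """
    today = today or dt.date.today()
    retired = [e for e in catalog if e["expires"] <= today]
    kept = [e for e in catalog if e["expires"] > today]
    # a real project would now issue secure deletes on each node
    # listed in `retired`, then verify the deletes completed
    return retired, kept
```

The hard part in practice is not this loop but building a catalog that really covers every mirror and backup copy, which is why such projects run for weeks or months.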
Hypervisor-based storage optimization: Data gets scrambled as if in a blender when it travels from servers through pipelines to a hypervisor and then into storage containers. Reassembling increasing amounts of data wears heavily on a conventional system because it takes extra time, and ultimately more cost, for unoptimized systems to straighten all the bits out and put them back together so they can be used.
Virsto (the name comes from "virtual storage") is a newcomer that wants to do for Microsoft Hyper-V-based storage systems what the hypervisor did for servers: Make them more efficient, so they handle workloads faster and in a less-costly fashion.
Quest Software's Vizioncore vOptimizer Pro also does this very well.
Storage pooling: Pooling is an approach to storage virtualization that delineates specific areas of the storage system to be dedicated to specific data flows to enable more efficient multitenant service deployments, for example.
Virtualized storage systems break files into chunks of data that are dispersed into numerous data center or storage locations, and reassemble them on demand. Keeping data file chunks closer together in pools is said to provide faster reassembly of file chunks.
Storage pooling can be set up by disk type, with 1TB and 2TB drives kept in separate capacity pools, for example. Users can also implement a pool based on WORM (write once, read many) storage technology, or one dedicated to encrypted data.
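The routing logic behind pooling can be sketched in a few lines. The pool names and data classes below are hypothetical, but the idea is the one described above: each class of data is steered to its own dedicated pool, so related chunks land close together:

```python
# Hypothetical pool map: each data class gets its own dedicated
# pool(s), rather than spreading chunks across the whole system.
POOLS = {
    "worm": ["worm-pool-1"],           # write once, read many
    "encrypted": ["secure-pool-1"],    # encrypted-data pool
    "bulk": ["sata-1tb-pool", "sata-2tb-pool"],
}

def choose_pool(data_class, size_tb=0):
    """Pick the destination pool for an incoming write."""
    pools = POOLS.get(data_class, POOLS["bulk"])
    # within the bulk tier, route by capacity: larger data sets
    # go to the 2TB-drive pool, smaller ones to the 1TB pool
    if data_class == "bulk" and size_tb > 1:
        return pools[1]
    return pools[0]
```

A multitenant service would extend the same map with one pool per tenant, which is the "dedicated data flows" benefit the vendors advertise.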
Isilon -- which specializes in large, clustered systems and was recently bought by EMC -- and Sepaton are two of the early vendors shipping smart-pooling storage systems.
Tier 1 deduplication: Deduplication, clearly the most in-demand storage feature for the past three to four years, has been used primarily in second-tier SATA-type arrays and not for primary (Tier 1) business-critical data storage. Tier 1 "dedupe" at the gateway, however, is beginning to get attention as a useful system addition.
The trend now is for "smart," or content-aware, storage systems to refuse redundant data they already recognize before it ever enters the storage system. Tier 1 dedupe makes a fast determination about whether a document or block of data stays or is left at the door.
Data deduplication, one of the most important breakthroughs in IT in the last two decades, eliminates redundant data from a disk storage device in order to lower storage space requirements, which in turn lowers data center power and cooling costs and lessens the amount of carbon dioxide produced to generate power to run the hardware.
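The mechanics behind inline dedupe can be illustrated with a minimal sketch: fingerprint each block, and store a block only if its fingerprint has not been seen before; a duplicate costs one reference, not another copy. This toy class is an assumption-laden illustration of the general technique, not any shipping product's design:

```python
import hashlib

class DedupStore:
    """Minimal sketch of inline (Tier 1 style) deduplication.

    Each fixed-size block is fingerprinted with SHA-256; a block
    whose fingerprint is already known is never written again --
    only a reference to the existing copy is recorded.
    """

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # fingerprint -> stored block bytes
        self.files = {}    # file name -> list of fingerprints

    def write(self, name, data):
        fps = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            # redundant blocks are "left at the door":
            # setdefault stores the block only if it is new
            self.blocks.setdefault(fp, block)
            fps.append(fp)
        self.files[name] = fps

    def read(self, name):
        return b"".join(self.blocks[fp] for fp in self.files[name])
```

Writing the same data twice adds no new blocks, which is exactly the storage-space (and therefore power and cooling) saving described above.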
OEM providers Permabit and Nexenta are new-generation storage vendors that are shipping Tier 1 deduplication.
Virtualization of isolated and formerly siloed storage systems: This trend started with testing and quality assurance work back in the mid-2000s and continues to trend up. Many of these siloed systems -- especially in larger enterprises -- are still in transition, but industry analysts estimate that some form of virtualization is now being used in production in nearly 90 percent of all enterprise IT systems. In 2008 that percentage was in the low 20s.
Cloud backup and archival storage: Although some analysts believe cloud backup has to hit a ceiling at some point, this one still shows no signs of leveling off. Small and medium-size businesses, departments of large enterprises, and single consumers alike continue to sign on in increasing numbers to services such as Amazon S3, CommVault, Asigra, iDrive, EMC Mozy, Carbonite, Box.net, Iron Mountain Digital, Seagate i365, Dropbox, SugarSync and others.
Microsoft is offering its SkyDrive service, and Google has its own Gdrive service in the works. Amazon S3 is the current market leader in business online storage.
An increasing number of new laptops and netbooks are featuring a pre-install that includes online backup and virus protection. EMC is providing this with its Atmos service for Iomega desktop storage drives.

Chris J. Preimesberger

Chris J. Preimesberger is Editor-in-Chief of eWEEK and responsible for all the publication's coverage. In his 13 years and more than 4,000 articles at eWEEK, he has distinguished himself in reporting...