Virtualization Technology: 10 Essential Questions to Ask Before You Upgrade a Storage System
by Chris Preimesberger
Does the system you're considering reduce the costs of storage?
Research firm IDC and others have reported that storage costs can represent anywhere from 30 percent to 50 percent of a company's total IT hardware expenditure. That's huge. Many products, from the low end to the high end, use either simple arrays or large filers that can't scale beyond their fabricated capacity; these are "scale-up" systems, as opposed to "scale-out" systems, which can grow beyond the original form factor. Make sure yours will scale out.
Is the system considered Generation 2.0 or 3.0 storage?
Both first-generation and second-generation (SAN and/or NAS) storage platforms are built primarily from the vendor's point of view. Second-generation systems are controller-based systems that emulate the scale-out architecture once found only in the world's largest supercomputing environments (such as IBM's Blue Gene). Third-generation storage is unified and scale-out, runs on commodity hardware, and operates as an extremely flexible clustered storage environment. It decreases cost, increases control and makes management easier.
Does the system reduce the costs of expanding your storage pool?
Your system should let you mix and match protocols in the same cluster without having to scrap your investment in storage every time a new protocol is introduced. It's essential that you future-proof your storage investment. Look for systems that are protocol-agnostic. With these systems, you'll be able to add a new protocol to your existing storage pool simply by adding one node that supports that protocol. All other nodes will then work with it. Think of the future cost savings.
Does the system reduce your management time and costs?
Simplified storage management is a necessity for relieving heavy storage management overhead. Administrators should be able to operate the management console from any node for control and flexibility. Look for a storage system that includes a streamlined GUI while also supporting a command-line interface. This combination should simplify management tasks.
Does the system increase the control you have over growth?
Both first-generation and second-generation (SAN/NAS) storage platforms are built primarily from the vendor's point of view. Storage companies need to sell more storage to survive. To do that, they've built systems that set arbitrary limits on scalability. Some can scale to 178TB under one control unit, others to over 400TB and some to less than 16TB. Find a system with virtually unlimited, as-you-need-it scalability. This puts storage purchase decisions under your control.
Does the system allow you to scale to increments as small as 1TB?
With storage costs decreasing by 25 percent per year, the notion of being forced to purchase storage ahead of the need is archaic. For example, if a filer is out of drive capacity and you need an additional terabyte, the only option is to purchase the next largest system available. In other words, companies are forced into a continuous cycle of overbuying, forklift upgrades and data migration. Imagine purchasing a car today but not using it for a year or two. The automotive manufacturer benefits from your advance purchase, but your wallet suffers. Find the finest-grain scalability available.
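To see what buying ahead of need actually costs, here is a rough back-of-the-envelope sketch using the article's 25 percent annual price decline. The $100-per-terabyte starting price is purely illustrative, not a quoted market figure.

```python
# Rough sketch of the premium paid for buying storage ahead of need,
# assuming a constant 25 percent annual price decline (figure from the text).
# The $100/TB starting price is illustrative only.

PRICE_PER_TB = 100.0   # hypothetical price per TB today, in dollars
ANNUAL_DECLINE = 0.25  # 25 percent per year

def price_in_years(years: float) -> float:
    """Projected price per TB after the given number of years."""
    return PRICE_PER_TB * (1 - ANNUAL_DECLINE) ** years

# Buying 10 TB today that you will not actually need for two years:
cost_now = 10 * price_in_years(0)
cost_deferred = 10 * price_in_years(2)
premium = cost_now - cost_deferred

print(f"buy now: ${cost_now:.2f}")          # $1000.00
print(f"buy in 2 years: ${cost_deferred:.2f}")  # $562.50
print(f"premium for buying early: ${premium:.2f}")  # $437.50
```

Under these assumptions, capacity purchased two years early costs nearly twice what it would at the time of actual need, which is the arithmetic behind favoring fine-grain, as-needed scaling.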
Does the system allow you to run CIFS, NFS and iSCSI at the same time?
Generation 3.0 storage is the next evolution of unified storage. It includes file and block-level protocols, allowing SAN/NAS environments to be run from each storage node. CIFS, NFS and iSCSI all run simultaneously. This allows IT managers to eliminate file servers, consolidating onto a single, easily scalable platform.
Does the system have a single point of failure that must be rearchitected for redundancy?
Think Legos. Third-generation storage will stripe and mirror data across multiple nodes and even more drives, making everything essentially "data aware." There's no need to architect redundant systems because the entire storage pool already has multiple copies of itself parsed among the nodes and drives. When a drive or node fails, the services continue to run uninterrupted. Rack and stack storage nodes like you would Lego bricks.
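The stripe-and-mirror idea above can be illustrated with a toy placement scheme. This is a minimal sketch of the general technique, not any vendor's actual data layout: each chunk gets a primary copy and a mirror copy on two different nodes, so any single node failure leaves every chunk readable.

```python
# Toy sketch of striping with mirroring across storage nodes.
# Not a real product's layout; node names and the round-robin
# placement rule are illustrative assumptions.

NODES = ["node0", "node1", "node2", "node3"]

def placement(chunk_id: int, nodes=NODES):
    """Place a chunk's primary copy round-robin across nodes,
    with its mirror copy on the next node in the ring."""
    primary = nodes[chunk_id % len(nodes)]
    mirror = nodes[(chunk_id + 1) % len(nodes)]
    return primary, mirror

def readable_after_failure(chunk_id: int, failed_node: str) -> bool:
    """A chunk stays readable if at least one copy is on a live node."""
    return any(node != failed_node for node in placement(chunk_id))

# Every chunk remains readable after any single node failure:
for failed in NODES:
    assert all(readable_after_failure(c, failed) for c in range(100))
```

Because the two copies of a chunk never land on the same node, services keep running through a node loss, which is exactly the "no single point of failure" property the question is probing for.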
Does the system allow you to mix and match multiple node densities?
Controller-based systems typically are programmed to read only nodes of a particular density. Non-controller-based systems can accept nodes of multiple densities in the cluster. Nodes purchased today will work with nodes purchased in the future. This makes rigid capacity planning a thing of the past and growth much more convenient.
Does the system allow you to purchase storage as needed?
An extremely flexible architecture with fine-grain scalability allows IT managers to scale by as little as 1TB per node at a time. And, unlike systems that require control units to recognize specific node densities, it requires no control unit and allows IT managers to mix and match node densities, providing the ability to buy only the storage required when it is needed.