Expert Tips for Reducing Storage Costs
In the never-ending quest for storage, three experts in the field tell you how to get more for less.

Like death and taxes, two things are certain in the world of data storage: You'll always want more of it, and it will never be cheap enough. So we had an e-mail exchange with three storage experts to get tips on how to do both: get more storage, and pay less for it. Here's what they told us:
Tip 1: Create a dedicated storage negotiation team.

When you negotiate with a storage vendor, remember that the vendor's job is to sell stuff. It's what they think about when they wake up in the morning, and it's what they think about as they drift off to dreamland. So create a dedicated storage team to handle the negotiations. If you view storage as a one-off, an annoying but necessary part of the job, you're going to be an amateur going up against a pro. But if you put together a dedicated team whose bonuses are determined by how well they negotiate the deals, you're no longer at a disadvantage.

Tip 2: Manage a dual-vendor environment.
If you have two vendors competing for your business at all times, and you are willing to change vendors, you will get more effective service and support. It's a matter of creating a credible threat. That doesn't mean you have to divvy up your storage business half and half, but you must remain economically important to both vendors. The important thing is to maintain the element of doubt.

Tip 3: Negotiate ongoing costs up front.
Everything is negotiable, and that includes the long-term maintenance and upgrade services the storage vendor provides. But those discounts must be negotiated before you sign the initial contract for the hardware and software; you'll never be in the same position of power over the vendor again. And be sure to factor in the time value of money: a 10% discount that begins in two years is a losing proposition.

Tip 4: Include soft costs in the contract.
It's always going to be easier to get a vendor to throw so-called "soft costs" into a deal than to get cash back, so be sure to get all of the training you can included in the contract. It may seem like a small number, but staff changes or new features and functions over time will require additional training, and you want to be sure you don't have to pay for it.

Tip 5: Look into thin provisioning.
Once the vendor negotiations are over and the hardware and software are in place, the next big thing in storage management will be thin provisioning. By allocating disk space at the time it is needed (rather than setting aside space in case it is needed), thin provisioning can dramatically improve utilization rates and simplify capacity management. But it is not a feature currently offered by mainstream vendors.

2. Dave Reinsel, program director, storage research, IDC

Dave Reinsel is program director of IDC's storage research group. Reinsel and his team of analysts provide insight and analysis for I.T. professionals, investors, resellers, distributors and manufacturers. His research team is responsible for delivering forecasts and analyses on disk storage systems, hard-disk drives and component technologies, as well as for providing quarterly tracking and analysis on numerous metrics related to these markets. Reinsel has more than 12 years of experience within the I.T. industry. In 2002, he published IDC's first-ever detailed report on HDD component technology, and in 2003 he co-authored the industry's first extensive report on external drives, as well as an in-depth report that probed the impact of serial ATA drives and tiered storage on data centers. In addition to his research responsibilities, Reinsel provides custom research and consulting for IDC clients on industry trends, product requirements and marketing strategies, and speaks at numerous conferences worldwide. Here's what he told us:

Tip 1: Consolidate the large amounts of storage on stranded low-end servers through server virtualization and low-cost storage networks.
Although many companies have made the move to storage area networks, many have not, especially midsize companies. The cost of maintaining numerous servers throughout a company can be overwhelming and needless. Today's virtualization technologies and low-cost storage networks enable companies to consolidate many server and storage platforms into significantly fewer ones. Virtualization can also help streamline the management interfaces, improving efficiency and reducing the number of support personnel required.

Tip 2: Migrate rarely accessed terabytes from expensive SCSI/FC storage to lower-cost ATA storage.
ATA-based storage carries a 3:1 to 4:1 advantage on a dollars-per-gigabyte basis, and most of the time the access requirements of fixed data are handled easily by ATA-based systems that lack the high-performance attributes of SCSI/FC-based storage arrays. Consider virtualization strategies to optimize storage asset management and administrator efficiency: using ATA-based storage introduces a tiered storage environment that can pose challenges from a management perspective, and conflicts in management tools, communication protocols and additional resources can diminish the value of a tiered storage infrastructure. Virtualization helps remove many of these potential conflicts.

Tip 3: Consider virtual tape solutions to improve application and data recovery speed and reliability, as well as tape library asset utilization.
Tape has been with us and will be for the foreseeable future, but there is much room for improvement. Virtual tape libraries can help optimize the use of physical tape libraries by ensuring more complete use of cartridges, reducing the redundancy of data written to tape (via data de-duplication at the virtual layer), and refining what is written to tape (with less-important data remaining at the virtual layer).

Tip 4: Upgrade to larger, more power-efficient systems.
Power and cooling are becoming a top priority for end users. The technologies and strategies already discussed (e.g., virtualization and consolidation) can reduce the number of pieces of hardware powered on. In addition, adopting more efficient hardware (e.g., cooler-running disk drives) or reducing the data center's footprint translates into less cooling and lower real estate costs.

3. Joe Martins, research director and managing member, Data Mobility Group

Joe Martins is the research director and managing member of Data Mobility Group. Joe began his career as a systems software engineer at Lockheed Martin, the world's second-largest aerospace and defense firm, and later worked on General Dynamics' e-learning system for military and commercial training. He was also a key contributor to the early-stage growth of three successful enterprise content/information management companies in the defense (TEC), health-care (InLight) and publishing (OpenPages) industries. Prior to Data Mobility Group, Joe was business development director for the research firm Illuminata. Here are his recommendations:

Tip 1: Aggregate access to your information assets to improve accessibility and minimize redundancy.

Billions of dollars are wasted each year because employees and system resources are not aware of the existence of relevant information assets, which greatly reduces productivity and increases risk. Storage professionals tend to recommend consolidation in the physical sense as a means to reduce management and maintenance costs, but that's just a cost-control Band-Aid. While physical consolidation may cut costs, it is not necessarily the right solution for every organization: some don't need it, others simply cannot afford it, and physical consolidation doesn't necessarily improve access and awareness. Fortunately, aggregating access doesn't require physical consolidation.
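The aggregated-access idea can be sketched in miniature as a metadata catalog built over existing storage locations, with nothing physically moved. This is an illustrative toy, not any vendor's product; the directory paths and file names are hypothetical:

```python
import os

def build_catalog(roots):
    """Index file metadata from several existing locations into one catalog,
    leaving the data itself where it already lives."""
    catalog = {}
    for root in roots:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                # Entries sharing a name are grouped, so redundant copies
                # scattered across locations become visible.
                catalog.setdefault(name.lower(), []).append(path)
    return catalog

def search(catalog, keyword):
    """Return every known location whose file name contains the keyword."""
    keyword = keyword.lower()
    return [p for name, paths in catalog.items() if keyword in name
            for p in paths]
```

A search for "budget" across two departmental shares would surface every copy in one query, exposing both the asset and its redundancy. Real products in this space also index content and classifications, not just names.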
Several classes of information and storage management applications exist that can help discover, index, classify, search and manage assets within a customer's existing environment. For those that can afford it, the combination of physical consolidation and aggregated access is a win-win.

Tip 2: Foster closer collaboration between I.T. and its internal clients, and let them share responsibilities.
Much of the cost impact on I.T. originates from external forces, namely those who consume I.T. resources, yet I.T. is left holding the bag. I.T. personnel already have full-time jobs, yet some managers pile on additional responsibilities (e.g., information discovery and culling, and retention-policy management) that overtax I.T. and set it up for failure. It's time companies leveraged the insight and unused capacity of the rest of their workforce. In 2005, Salary.com estimated that the average employee spends about two hours of every eight-hour workday on non-work-related activities, at an estimated cost to companies of $759 billion. If employees have time to surf eBay, they have time to lend I.T. a helping hand. How about putting your employees to work finding and removing their non-work-related files? Or implementing an information management program, with the necessary tools, so that users can properly tag and classify their work and others can find it and act upon it more efficiently (see the first suggestion)? I can give you 759 billion reasons why it makes sense.

Tip 3: Strive for interoperability and transparency up and down the application stack.
Storage administrators tend to focus on interoperability across the storage fabric. Don't get me wrong, that's an important goal, but it is one that won't be resolved anytime soon, and there is another, more pressing concern: today's storage applications do not communicate well with business applications, and vice versa, leaving the door wide open to inadvertent conflicts and unanticipated risk. Application awareness and storage awareness are ripe for improvement, but, like storage fabric interoperability, they are more vision than reality today. Implementing a global file system/namespace is one way to begin integrating and aligning storage and business environments, and it's something that can be accomplished today. Yes, it is another flavor of my first suggestion, in that a global namespace also improves accessibility, but it differs in that a global namespace also provides a common framework upon which to gradually implement application interoperability.
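The indirection a global namespace provides can be sketched with a toy mapping from stable logical paths to current physical locations. Everything below (class, backend names, paths) is hypothetical and only illustrates the principle: clients keep using the same logical name while data moves between tiers underneath.

```python
class GlobalNamespace:
    """Toy global namespace: clients address data by stable logical paths,
    and the namespace maps each one to its current physical location."""

    def __init__(self):
        self._map = {}  # logical path -> (backend, physical path)

    def publish(self, logical, backend, physical):
        """Register where a logical path currently lives."""
        self._map[logical] = (backend, physical)

    def resolve(self, logical):
        """Look up the current physical location for a logical path."""
        return self._map[logical]

    def migrate(self, logical, new_backend, new_physical):
        """Actual data movement would happen here; clients are untouched
        because only the mapping changes, not the logical path."""
        self._map[logical] = (new_backend, new_physical)

ns = GlobalNamespace()
ns.publish("/finance/q2.xls", "fc_array", "/vol1/q2.xls")
# Move cold data to a cheaper tier; the logical path stays the same.
ns.migrate("/finance/q2.xls", "ata_array", "/tier2/q2.xls")
print(ns.resolve("/finance/q2.xls"))
```

Because business applications bind to the logical layer, storage-side actions such as tiered migration stop being conflicts and become invisible, which is the common framework the tip describes.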