The shift to cloud brings new challenges to data storage, which is already complicated by virtualized systems, tape storage, network-attached storage (NAS) and other data storage formats. Because all data in a cloud lives in the same shared system, management of the data becomes paramount in maintaining service levels and securing critical business information.
Organizations should evaluate how their storage resources can most effectively be used in the cloud. Before they can do that, it’s best to categorize the model of cloud computing in the organization. Three types of cloud computing dominate the landscape: private (in which a company hosts, owns and manages its own cloud infrastructure), public (in which a third party owns and manages the infrastructure) and hybrid (in which the public and private models are combined).
In hybrid models, the public cloud often acts as an overflow facility for the private cloud or is used to satisfy other application needs such as off-site information protection. The underlying characteristic of each is that cloud services need to be available and reliable to users, while effectively optimizing resources and providing a pay-as-you-go delivery model.
Keys to effective cloud storage management
Despite the advantages of the cloud, not all organizations gain the maximum benefits. When outsourcing business processes to the cloud, organizations can select service options, such as performance and capacity levels, that best suit their particular needs. Crucial components for storing critical data in the cloud are storage management, data protection and disaster recovery. For example, a retail company could opt to store and manage data (such as in-store transactions, online purchases and supplier details) on a private cloud because it allows for better control of and access to sensitive data. The retailer, however, might decide that keeping copies of data for disaster recovery on a public cloud service is a lower-risk option.
Whether it chooses to leverage a public, private or hybrid cloud model, a company needs to ensure that its cloud has automated data lifecycle management (DLM), built-in data reduction and advanced application protection, to name a few capabilities.
Data lifecycle management (DLM)
When assessing their cloud model, organizations should take the following two items into consideration:
Item No. 1: DLM
To better plan and manage storage in cloud environments, organizations must efficiently use their resources by placing data on the most appropriate tier of storage that meets service delivery requirements and then eliminating data that's no longer needed. For example, a healthcare provider that has just admitted an emergency patient will need to access the patient's recent records quickly. This type of Tier 1 data should be stored on faster, higher-performance storage media, while the patient's older records may be archived on tape (which is slower to access). Either way, the cloud service should provide a range of service-level options that balance performance and costs based on the expected use of the stored data.
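The tiering decision described above can be sketched as a simple policy function. This is only an illustration of the idea, not any product's behavior: the tier names and the one-year archiving threshold are assumptions chosen for the example.

```python
from datetime import datetime, timedelta

# Hypothetical tier names and age threshold -- real DLM policies come from
# service-level agreements, not hard-coded constants like these.
HOT_TIER = "tier1-disk"        # fast media for recently accessed records
ARCHIVE_TIER = "tape-archive"  # slower, cheaper media for old records

def choose_tier(last_accessed: datetime, now: datetime,
                archive_after_days: int = 365) -> str:
    """Place a record on fast storage if touched recently, else archive it."""
    age = now - last_accessed
    return ARCHIVE_TIER if age > timedelta(days=archive_after_days) else HOT_TIER
```

Under this sketch, the emergency patient's records from last month land on Tier 1 disk, while records untouched for years fall to tape.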
Organizations also need to segregate their data so that confidential information doesn't fall into the wrong hands, even in a disaster recovery scenario. They should also have applications that provide reporting tools to identify where data is located, sortable by access or save dates, owners and numerous other filters; policy-based automation that migrates unneeded data off primary storage systems across multiple tiers; and transparent operations that minimize the impact on other key operational processes.
With these tools, organizations can set policies that take appropriate action on, or move, unnecessary data that clogs storage systems and runs up usage charges. This automated migration creates a more efficient operating environment and reduces both administrative costs and the need to acquire extra hardware.
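The policy-driven migration just described can be sketched as a pass over a storage inventory that flags stale items still sitting on primary storage. The field names (`tier`, `last_access`) and the single primary-to-archive rule are invented for this example; a real DLM product exposes far richer policies.

```python
from datetime import datetime

def plan_migrations(inventory, max_age_days, now):
    """Flag items on primary storage whose last access exceeds the policy age.

    Returns a list of (item_id, from_tier, to_tier) moves -- in a real system
    these would feed an automated, transparent migration job.
    """
    moves = []
    for item in inventory:
        age_days = (now - item["last_access"]).days
        if item["tier"] == "primary" and age_days > max_age_days:
            moves.append((item["id"], "primary", "archive"))
    return moves
```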
Item No. 2: Storage resource management (SRM), utilization and optimization
An easy way to visualize storage in the cloud is to think of it as a huge warehouse. However, this design can obscure visibility into individual storage elements. For example, over time, a cloud service provider may add new storage systems from different vendors, choosing the best products available at the time of purchase. How can you tell which devices are performing as expected and which are creating service delivery bottlenecks?
Although storage resources are shared in a cloud, they still require management based on accurate, timely information. Cloud administrators need tools capable of aggregating and displaying that information, then acting on it in a centralized, optimized way that fulfills business goals. SRM tools that give administrators consolidated control over storage systems, storage networks, replication services and capacity management restore that visibility, helping storage managers establish available capacity, evaluate security, correlate backup/restore performance with recovery time objectives (RTOs) and perform many other necessary functions.
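The kind of consolidated rollup an SRM tool provides can be sketched as a report that aggregates per-device capacity figures from arrays of several vendors into one view and flags likely bottlenecks. The 85 percent alert threshold and the field names are assumptions for the example, not defaults from any product.

```python
def utilization_report(devices, alert_pct=85.0):
    """Return per-device utilization plus an alert flag for overloaded devices.

    Each device dict is assumed to carry "name", "used_tb" and "total_tb";
    an SRM tool would collect these figures automatically across vendors.
    """
    report = []
    for d in devices:
        pct = d["used_tb"] / d["total_tb"] * 100
        report.append({"name": d["name"],
                       "pct_used": round(pct, 1),
                       "alert": pct > alert_pct})
    return report
```

A heavily loaded array surfaces immediately in such a report, answering the bottleneck question posed above without inspecting each vendor's console separately.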
Data protection in the cloud
Cloud services rely heavily on keeping data and applications continuously available. Failure to provide access because of a data disaster (such as database corruption, a virus attack, hardware failure or a local or regional disaster) could be catastrophic to any organization. Data protection processes such as backup and recovery need to be designed into cloud environments from the start, not added later. Before establishing your cloud infrastructure, it's important to be familiar with the technologies and products used for storage management, protection and disaster recovery.
It's possible to obtain storage and protection services from companies that specialize in storage, assuming they provide management, data protection and disaster recovery among their services. However, outsourcing storage and applications can put your company at risk.
For example, what would happen if your critical applications and cloud data are hosted on a system that experiences a major failure? You should ensure that your service provider is performing backups as often as necessary to meet contracted recovery point objectives (RPOs), the amount of data loss an organization determines it can accept in a disaster. You must also test restore processes to verify they meet contracted RTOs, the length of time within which business processes must be restored after a disaster or disruption to maintain business continuity.
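The two contract checks just described reduce to simple comparisons, sketched below. The specific thresholds in the usage notes are examples, not values from any real service-level agreement.

```python
from datetime import datetime, timedelta

def rpo_met(last_backup: datetime, now: datetime, rpo_hours: float) -> bool:
    """True if the newest backup is recent enough to satisfy the contracted RPO."""
    return (now - last_backup) <= timedelta(hours=rpo_hours)

def rto_met(restore_duration_hours: float, rto_hours: float) -> bool:
    """True if a tested restore completed within the contracted RTO."""
    return restore_duration_hours <= rto_hours
```

For instance, under a 6-hour RPO, a backup taken 4 hours ago passes while one from yesterday fails; a restore drill that took 3.5 hours satisfies a 4-hour RTO.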
Planning for the future
Businesses feel pressure to implement cloud models quickly, which makes developing the environment challenging: companies need something that suits their needs now as well as for years to come. The exponential growth of data, combined with the proliferation of data-intensive services, is a major reason data storage is expanding at an even greater pace in the cloud. However, users need to be mindful that storage clouds still present challenges, including managing cost, intense computing demands, security and data mobility across cloud providers, all factors that affect quality of service (QoS).
As solutions come to market to tackle these challenges, companies must prepare themselves for new innovations as the next wave of storage cloud computing evolves. Ultimately, organizations will need to integrate the functions and data in the cloud with various aspects of their business and collaborate with their business partners.
Stephen “Woj” Wojtowecz is Vice President of Storage Software Development for a suite of solutions offered by IBM Tivoli. Stephen has enjoyed a 20-year career with IBM in various management roles that has included all areas of software design, development, strategy, marketing, sales, support and services. Stephen has a Bachelor’s degree in Management Information Systems from Rensselaer Polytechnic Institute in New York. He can be reached at woj@us.ibm.com.