Rethinking Data Protection, Recovery in a Multi-Cloud World
By Chris Preimesberger  |  Posted 2016-09-30

    With the rise of modern cloud computing, and corresponding apps with multi-faceted, distributed data, it's time to rethink data protection across the multi-cloud landscape.

    Early in its evolution, cloud portability (the transfer of large amounts of data among cloud service providers such as Amazon Web Services, Google Cloud and Microsoft Azure) was of little relevance to IT managers. Instead, they focused on the scale, reliability and cost of a particular cloud service, and only sometimes on promised data durability. Few users thought far enough ahead to consider that exporting data would someday be just as critical a function, and fewer still imagined the need for "fail-safe" data-accessibility mechanisms (both on-site and in the cloud) flexible enough to adapt to real-time, multi-cloud recovery demands. But with the rise of modern cloud computing, and corresponding cloud applications with multi-faceted, distributed data, the time has come to rethink data protection across the multi-cloud landscape. This eWEEK slide show, based on industry information from database recovery provider Datos IO, shares data points on how IT, DBAs and DevOps teams can collaborate to ensure smooth operations and fail-safe access to their databases, regardless of cloud provider selection.

    It's a Myth That Public Clouds Are Infallible

    While many organizations move to a second cloud service provider to combat vendor lock-in, others are spurred by the perception of increased security and 24/7/365 data availability. But the notion that public clouds and multi-cloud environments are infallible is a myth, and that reality propels another necessity: the rethinking and redesign of backup and recovery systems. Even when distributed across multiple cloud providers, data remains vulnerable to internal user error that can corrupt or delete important data sets. Organizations making the move to multi-cloud need more efficient and effective means of protecting their data while keeping that protection scalable for future growth.

    Multi-Cloud Deployments Have Their Own Management Issues

    As is the case with most new technologies (and technology mash-ups), multi-cloud presents its own set of management challenges and implementation risks. The most difficult to overcome is meeting the next-generation backup and recovery requirements of non-relational databases: an always-on data protection infrastructure, short recovery point objectives (RPOs) and recovery time objectives (RTOs), and API-based architectures.

    No Guaranteed Uptime in Public or Multi-Cloud Environments

    Deploying your application in a public cloud or multi-cloud environment does not automatically guarantee application uptime. Public cloud environments do provide some high-availability features, but the native functionality can be limited, so plan to augment those capabilities with your own data protection systems.

    Replication Does Not Mean Recoverability

    Some IT professionals believe that data loss is impossible because of infrastructure redundancy, including database replication and availability zones. But replication copies mistakes just as faithfully as it copies good data, and even with triple replication across the infrastructure, the distributed nature of multi-cloud architecture makes data difficult to back up and harder still to recover to a specific point in time when needed. Point-in-time recovery should be part of any next-generation data protection and recoverability strategy.
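
    As a minimal illustration of the idea, the Python sketch below (with a hypothetical backup catalog) picks the newest backup version taken at or before a requested recovery point, which is the core operation behind rolling back past a bad write:

```python
from datetime import datetime, timezone

# Hypothetical catalog of versioned backups; a real tool would read this
# from its backup metadata store.
BACKUP_VERSIONS = [
    datetime(2016, 9, 28, tzinfo=timezone.utc),
    datetime(2016, 9, 29, tzinfo=timezone.utc),
    datetime(2016, 9, 30, tzinfo=timezone.utc),
]

def pick_recovery_point(target: datetime) -> datetime:
    """Return the newest backup taken at or before the target time."""
    candidates = [v for v in BACKUP_VERSIONS if v <= target]
    if not candidates:
        raise ValueError("no backup precedes the requested recovery point")
    return max(candidates)

# Example: an accidental deletion landed at 10:15 UTC on Sept. 30, so
# restore the last version taken before the mistake.
print(pick_recovery_point(datetime(2016, 9, 30, 10, 15, tzinfo=timezone.utc)))
```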

    Consider Carefully Your Database Selection

    For many cloud-native applications, stateful information is stored in a database rather than locally in a virtual machine. This provides flexibility, scalability and high availability for web and application servers, but it also makes the initial database selection critically important, because options for migrating the application may be limited down the road. Databases have inertia, so choose wisely, and review next-generation backup and recovery solutions that can migrate databases across clouds, such as moving Apache Cassandra deployed on one public cloud to Apache Cassandra on a different public cloud.
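
    To make the cross-cloud scenario concrete, here is a minimal sketch using the open-source DataStax Python driver for Cassandra. The hostnames, the appdata keyspace and the events table are illustrative assumptions, and a production migration would add parallelism, checkpointing and consistency checks, which is what purpose-built tools provide:

```python
from cassandra.cluster import Cluster

# Hypothetical endpoints: a source cluster on one public cloud and a
# destination cluster on another, with identical schemas on both sides.
src = Cluster(["cassandra.cloud-a.example.com"]).connect("appdata")
dst = Cluster(["cassandra.cloud-b.example.com"]).connect("appdata")

insert = dst.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")

# Stream rows from the source cluster and replay them into the
# destination cluster.
for row in src.execute("SELECT id, payload FROM events"):
    dst.execute(insert, (row.id, row.payload))
```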

    Don't Assume Your Cloud Providers Have Your Back(up)

    Public cloud providers are not responsible for the health of your cloud-based systems; you are ultimately responsible for the correctness and integrity of your data. Most cloud vendors provide tooling for high availability and for recovering from hardware failures that may occur in their environments. However, soft errors, such as logical data corruption, schema corruption and human error, are hard to identify and even harder to fix. More important, cloud providers offer no guarantees against them.
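
    Because that verification burden falls on you, it helps to have an independent integrity check. The sketch below is one hypothetical approach: an order-independent fingerprint of a table's rows that can be compared between a live table and a restored backup copy:

```python
import hashlib

def table_fingerprint(rows) -> str:
    """Order-independent fingerprint of a table's contents, built by
    XOR-combining per-row digests so row order doesn't matter."""
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return f"{acc:064x}"

# Stubbed usage: in practice, live_rows and backup_rows would be fetched
# through your database driver from the live table and the restored copy.
live_rows = [("order-1", 100), ("order-2", 250)]
backup_rows = [("order-2", 250), ("order-1", 100)]
assert table_fingerprint(live_rows) == table_fingerprint(backup_rows)
```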

    Non-Relational Databases Are a Different Ballgame

    Non-relational databases are fundamentally different from their relational counterparts because of their eventual-consistency model, triple replication and distributed storage architecture. They support always-on applications and cannot be paused for backups. When selecting a data protection strategy for non-relational databases (NoSQL, cloud databases, graph, key-value), look for vendors with purpose-built solutions for distributed databases that work in cloud-first models and across multiple cloud providers.
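
    Cassandra, for example, can take snapshots while online; nodetool snapshot hard-links the current SSTables without pausing reads or writes. Below is a minimal sketch of wrapping that operation (the appdata keyspace name is an assumption, and the snapshot still has to be copied off-node to count as a real backup):

```python
import subprocess
from datetime import datetime, timezone

def snapshot_keyspace(keyspace: str) -> str:
    """Take a tagged, online Cassandra snapshot; application traffic
    continues while nodetool hard-links the current SSTables."""
    tag = datetime.now(timezone.utc).strftime("bk-%Y%m%dT%H%M%SZ")
    subprocess.run(["nodetool", "snapshot", "-t", tag, keyspace], check=True)
    return tag

print(snapshot_keyspace("appdata"))
```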

    Version Your Databases

    Administrators must maintain durable, point-in-time backup copies of databases that service mission-critical cloud-native applications. Failures will happen (it's a matter of when, not if), so the ability to roll a database back to a known healthy point in time is crucial for reducing the risk of data loss and application downtime. Set service-level objectives for how quickly data must be recovered (RTO) and how much data loss the system can tolerate (RPO), and ensure that databases can be recovered within those objectives and are geo-replicated for maximum protection.
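
    Those objectives are easy to state and easy to quietly violate, so it is worth checking them mechanically. A minimal sketch, assuming a one-hour RPO and some way to look up the newest backup's timestamp:

```python
from datetime import datetime, timedelta, timezone

RPO = timedelta(hours=1)  # maximum tolerable data loss, per the SLO

def rpo_satisfied(newest_backup: datetime, now: datetime) -> bool:
    """If the newest backup is older than the RPO, a failure right now
    would lose more data than the service-level objective allows."""
    return now - newest_backup <= RPO

now = datetime.now(timezone.utc)
newest_backup = now - timedelta(minutes=45)  # stubbed; query your catalog
if not rpo_satisfied(newest_backup, now):
    raise RuntimeError("RPO violated: take a backup and alert the on-call")
```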

    Regularly Test Your Multi-Cloud Data Recovery Strategy

    Test your end-to-end recovery strategy quarterly or semi-annually. Recovery testing should happen at the local database level (recovery within a single cloud) and at the multi-cloud level (recovery across clouds).
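
    A drill that is scripted is a drill that actually gets run. The skeleton below uses hypothetical stubs (restore_backup, table_row_count) standing in for your backup tool's API; pointing the scratch cluster at a different provider turns the same drill into the cross-cloud test:

```python
def restore_backup(backup_id: str, target: str) -> None:
    """Stub: in practice, call your backup tool's restore API."""
    print(f"restoring {backup_id} into {target}")

def table_row_count(cluster: str, table: str) -> int:
    """Stub: in practice, query the restored database."""
    return 42

def recovery_drill(backup_id: str, scratch: str, expected_rows: int) -> bool:
    """Restore into a throwaway environment, then verify the restored
    data before tearing the environment down."""
    restore_backup(backup_id, target=scratch)
    return table_row_count(scratch, "events") == expected_rows

assert recovery_drill("bk-20160930", "scratch-cluster-cloud-b", 42)
```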

    Demand a Federated Management View

    A federated view comes from using a cross-platform, single-point-of-control tool. Managing cloud services can absorb a lot of IT resources; you can stem that drain by turning to tools that provide a single-pane-of-glass view of the entire multi-cloud deployment. If you are implementing new tools, such as backup and recovery, ensure they work across different cloud environments, both public and private: using different tools for different clouds is not only operationally inefficient but also cost-prohibitive.
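
    The pattern behind such tools is a thin adapter per cloud, each exposing the same interface, so one control plane can drive every environment. A minimal sketch with stubbed, hypothetical backends:

```python
class CloudBackupBackend:
    """Adapter interface: one subclass per cloud provider, all exposing
    the same operations so a single tool can manage every environment."""
    name = "base"
    def list_backups(self):
        raise NotImplementedError

class CloudABackend(CloudBackupBackend):
    name = "cloud-a"
    def list_backups(self):
        return ["bk-20160929", "bk-20160930"]  # stubbed for illustration

class CloudBBackend(CloudBackupBackend):
    name = "cloud-b"
    def list_backups(self):
        return ["bk-20160930"]  # stubbed for illustration

# The "single pane of glass": one loop over every environment.
for backend in (CloudABackend(), CloudBBackend()):
    print(backend.name, backend.list_backups())
```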