Rethinking Data Protection, Recovery in a Multi-Cloud World

1 - Rethinking Data Protection, Recovery in a Multi-Cloud World
2 - It's a Myth That Public Clouds Are Infallible
3 - Multi-Cloud Deployments Have Their Own Management Issues
4 - No Guaranteed Uptime in Public or Multi-Cloud Environments
5 - Replication Does Not Mean Recoverability
6 - Consider Carefully Your Database Selection
7 - Don't Assume Your Cloud Providers Have Your Back(up)
8 - Non-Relational Databases Are a Different Ballgame
9 - Version Your Databases
10 - Regularly Test Your Multi-Cloud Data Recovery Strategy
11 - Demand a Federated Management View
1 of 11

Rethinking Data Protection, Recovery in a Multi-Cloud World

With the rise of modern cloud computing, and corresponding apps with multi-faceted, distributed data, it's time to rethink data protection across the multi-cloud landscape.

2 of 11

It's a Myth That Public Clouds Are Infallible

While many organizations add another cloud service provider to combat vendor lock-in, others are spurred by the perception of increased security and 24/7/365 data availability. But it is a myth that public clouds and multi-cloud environments are infallible, and that myth drives another necessity: rethinking and redesigning backup and recovery systems. Even when distributed across multiple cloud providers, data remains vulnerable to internal user error, which can result in accidental corruption and deletion of important data sets. Organizations making the move to multi-cloud need more efficient and effective means of protecting their data while keeping that protection scalable for future growth.

3 of 11

Multi-Cloud Deployments Have Their Own Management Issues

As is the case with most new technologies (and technology mash-ups), multi-cloud presents its own set of management challenges and implementation risks. Among the most difficult to overcome are the next-generation backup and recovery requirements of non-relational databases, which must also accommodate always-on data protection infrastructure, short recovery point objectives (RPOs) and recovery time objectives (RTOs), and API-based architectures.

4 of 11

No Guaranteed Uptime in Public or Multi-Cloud Environments

Deploying your application in a public cloud or multi-cloud environment does not automatically guarantee application uptime. Public cloud environments do provide some high-availability features, but native functionality can be limited, so plan to augment those native capabilities with your own data protection systems.

5 of 11

Replication Does Not Mean Recoverability

Some IT professionals believe that data loss is not possible because of infrastructure redundancy, including database replication and availability zones. But even with the requisite triple replication across the infrastructure, the distributed nature of multi-cloud architecture makes it difficult to back up data, as well as to recover it to any point in time when needed. Point-in-time recovery should be part of a next-generation data protection and recoverability strategy.
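
To make point-in-time recovery concrete: a restore typically starts from the newest full backup taken at or before the target time and then applies any incremental backups up to that moment. The sketch below shows only that selection logic against a hypothetical catalog of backup records; the Backup fields and function name are illustrative, not any particular product's API.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Backup:
        taken_at: datetime
        kind: str         # "full" or "incremental"
        location: str     # e.g. an object-store URI

    def plan_point_in_time_restore(backups, target_time):
        """Pick the newest full backup at or before target_time, plus the
        incrementals taken between that backup and the target time."""
        fulls = [b for b in backups if b.kind == "full" and b.taken_at <= target_time]
        if not fulls:
            raise ValueError("no full backup exists before the requested point in time")
        base = max(fulls, key=lambda b: b.taken_at)
        increments = sorted(
            (b for b in backups
             if b.kind == "incremental" and base.taken_at < b.taken_at <= target_time),
            key=lambda b: b.taken_at,
        )
        return [base] + increments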

6 of 11

Consider Carefully Your Database Selection

For many cloud-native applications, stateful information is stored in a database rather than locally in a virtual machine. This provides flexibility, scalability and high availability for web and application servers, but it also makes the initial database selection critically important, because application migration options may be limited down the road. Databases have inertia, so choose wisely and review next-generation backup and recovery solutions that can migrate databases across clouds, such as Apache Cassandra deployed on one public cloud to Apache Cassandra on a different public cloud.
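
As a rough illustration of what cross-cloud database migration involves at its simplest, the sketch below copies rows from a Cassandra table in one cloud to a cluster in another using the DataStax Python driver. The contact points, keyspace, table and column names are placeholders, and a real migration would also handle schema creation, paging, throttling and validation.

    from cassandra.cluster import Cluster

    # Placeholder contact points for clusters running in two different clouds.
    source = Cluster(["10.0.0.10"]).connect("app_keyspace")
    target = Cluster(["10.1.0.10"]).connect("app_keyspace")

    insert = target.prepare(
        "INSERT INTO user_events (user_id, event_time, payload) VALUES (?, ?, ?)"
    )

    # Stream rows out of the source table and write them to the target cluster.
    for row in source.execute("SELECT user_id, event_time, payload FROM user_events"):
        target.execute(insert, (row.user_id, row.event_time, row.payload))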

7 of 11

Don't Assume Your Cloud Providers Have Your Back(up)

Public cloud providers are not responsible for the health of your cloud-based systems; you are ultimately responsible for the correctness and integrity of your data. Most cloud vendors provide tooling for high availability and for recovering from hardware failures that may happen in their environments. However, soft errors, such as logical data corruption, schema corruption and human error, are hard to identify and even harder to fix. More important, cloud providers offer no guarantees against them.
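
Because detecting logical corruption falls to you, it helps to verify backups independently of the provider. A minimal sketch, assuming the backup file is available locally, is to record a checksum when the backup is taken and compare it before relying on the copy:

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Compute a SHA-256 digest of a backup file in streaming fashion."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_backup(path, expected_digest):
        """Return True only if the file still matches the checksum recorded
        when the backup was taken."""
        return sha256_of(path) == expected_digest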

8 of 11

Non-Relational Databases Are a Different Ballgame

Non-relational databases are fundamentally different because of their eventual consistency model, triple replication and distributed storage architecture. They support always-on applications and cannot be paused for backups. When selecting a data protection strategy for such non-relational databases (NoSQL, cloud databases, graph, key-value), ensure that your vendors offer purpose-built solutions for distributed databases that work for cloud-first models and across multiple cloud providers.

9 of 11

Version Your Databases

Administrators must maintain durable point-in-time backup copies of databases that service mission-critical cloud-native applications. Failures will happen (it's a matter of when, not if), so the ability to roll a database back to a known healthy point in time is crucial to reducing the risk of data loss and cloud-native application downtime. Service-level objectives must be set for how fast data should be recovered and how much data loss a system can tolerate. Ensure the databases can be recovered within those objectives and are geo-replicated for maximum protection.
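
One way to make those objectives actionable is to check every backup cycle and recovery drill against them. The sketch below uses assumed objective values of 15 minutes (RPO) and 60 minutes (RTO):

    from datetime import datetime, timedelta, timezone

    RPO = timedelta(minutes=15)   # maximum tolerable data loss (assumed value)
    RTO = timedelta(minutes=60)   # maximum tolerable restore time (assumed value)

    def check_rpo(last_backup_completed_at, now=None):
        """Flag when the newest durable backup is older than the RPO allows."""
        now = now or datetime.now(timezone.utc)
        exposure = now - last_backup_completed_at
        return exposure <= RPO, exposure

    def check_rto(restore_started_at, restore_finished_at):
        """Measure a restore drill against the RTO."""
        duration = restore_finished_at - restore_started_at
        return duration <= RTO, duration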

10 of 11

Regularly Test Your Multi-Cloud Data Recovery Strategy

Test your end-to-end recovery strategy quarterly or semi-annually. Recovery testing should happen at the local database level (recovery within a cloud) and at the multi-cloud level (recovery across clouds).
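
A drill is easier to repeat on schedule if it is scripted. The outline below leaves the restore and validation steps as hypothetical hooks (restore_latest_backup, row_count) into whatever backup tooling and database driver you actually use:

    def recovery_drill(restore_latest_backup, row_count, expected_minimum_rows):
        """Restore the most recent backup into a scratch environment and run a
        basic sanity check; raise if the restored data looks incomplete."""
        scratch_db = restore_latest_backup()      # hypothetical hook into backup tooling
        restored_rows = row_count(scratch_db)     # hypothetical hook into the database driver
        if restored_rows < expected_minimum_rows:
            raise RuntimeError(
                f"recovery drill failed: only {restored_rows} rows restored, "
                f"expected at least {expected_minimum_rows}"
            )
        return restored_rows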

11 of 11

Demand a Federated Management View

A federated view requires a cross-platform, single point-of-control tool. Managing cloud services can absorb a lot of IT resources; users can curb this resource drain by turning to tools that provide a single-pane-of-glass view of a multi-cloud deployment. If you are implementing new tools, such as backup and recovery, ensure they work across different cloud environments, both public and private, because using different tools for different cloud environments is not only operationally inefficient but also cost-prohibitive.
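
One common pattern behind such single-pane-of-glass tools is a thin, provider-neutral interface with an adapter per cloud; a hypothetical sketch:

    from abc import ABC, abstractmethod

    class BackupTarget(ABC):
        """Hypothetical provider-neutral interface; each cloud gets its own adapter."""

        @abstractmethod
        def snapshot(self, database: str) -> str:
            """Take a backup and return its identifier."""

        @abstractmethod
        def restore(self, snapshot_id: str, database: str) -> None:
            """Restore the identified backup into the given database."""

    def nightly_backup(targets, database):
        """Single point of control: run the same backup job across every cloud."""
        return {type(t).__name__: t.snapshot(database) for t in targets}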
