Rethinking Data Protection, Recovery in a Multi-Cloud World
It’s a Myth That Public Clouds Are Infallible
While many organizations move to an additional cloud service provider to combat vendor lock-in, others are spurred by the perception of increased security and 24/7/365 data availability. But it is a myth that public clouds and multi-cloud environments are infallible, and that myth propels another necessity: rethinking and redesigning backup and recovery systems. Even when distributed across multiple cloud providers, data remains vulnerable to internal user error that can accidentally corrupt or delete important data sets. Organizations making the move to multi-cloud need more efficient and effective means of protecting their data while keeping that protection scalable for future growth.
Multi-Cloud Deployments Have Their Own Management Issues
As is the case with most new technologies (and technology mash-ups), multi-cloud presents its own set of management challenges and implementation risks. The most difficult to overcome are the next-generation backup and recovery requirements of non-relational databases: supporting always-on data protection infrastructure, meeting short recovery point objectives (RPOs) and recovery time objectives (RTOs), and accommodating API-based architectures.
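To make those objectives concrete, here is a minimal Python sketch of how a team might sanity-check a backup schedule against RPO and RTO targets. The target values and measured figures are illustrative assumptions, not numbers from any particular deployment.

```python
from datetime import timedelta

# Illustrative service-level targets (assumed values).
RPO_TARGET = timedelta(minutes=15)   # max tolerable data loss
RTO_TARGET = timedelta(minutes=60)   # max tolerable downtime

# Worst-case data loss equals the gap between successive backups,
# so the backup interval must not exceed the RPO target.
backup_interval = timedelta(minutes=10)

# Measured duration of the most recent full restore drill (assumed).
last_restore_duration = timedelta(minutes=42)

assert backup_interval <= RPO_TARGET, "Backup interval violates RPO"
assert last_restore_duration <= RTO_TARGET, "Restore drill violates RTO"
print("Backup schedule satisfies RPO/RTO targets")
```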
No Guaranteed Uptime in Public or Multi-Cloud Environments
Deploying your application in a public cloud or multi-cloud environment does not automatically guarantee application uptime. Public cloud environments do provide some high availability, but native functionality can be limited, so plan to augment the providers' native capabilities with your own data protection systems.
Replication Does Not Mean Recoverability
Some IT professionals believe that data loss is impossible because of infrastructure redundancy, including database replication and availability zones. Yet even with triple replication across the infrastructure, the distributed nature of multi-cloud architecture makes it difficult to back up data, and harder still to recover it to an arbitrary point in time when needed. Point-in-time recovery should be part of any next-generation data protection and recoverability strategy.
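As a toy illustration of why point-in-time recovery needs more than replicas, here is a minimal sketch of the usual mechanism: restore a base snapshot, then replay an ordered change log only up to the chosen recovery timestamp. The data structures and names below are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Change:
    timestamp: int        # logical clock or epoch seconds
    key: str
    value: Optional[str]  # None represents a deletion

def restore_to_point_in_time(snapshot, change_log, recovery_time):
    """Rebuild state as of recovery_time: base snapshot + replayed changes."""
    state = dict(snapshot)
    for change in sorted(change_log, key=lambda c: c.timestamp):
        if change.timestamp > recovery_time:
            break  # stop before the corrupting write
        if change.value is None:
            state.pop(change.key, None)
        else:
            state[change.key] = change.value
    return state

snapshot = {"user:1": "alice@example.com"}
log = [
    Change(100, "user:2", "bob@example.com"),
    Change(200, "user:1", None),  # accidental deletion at t=200
]
# Recover to just before the bad delete; replicas alone cannot do this,
# because they would have faithfully replicated the deletion.
print(restore_to_point_in_time(snapshot, log, recovery_time=150))
```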
Consider Your Database Selection Carefully
For many cloud-native applications, stateful information is stored in a database rather than locally in a virtual machine. This provides flexibility, scalability and high availability for web and application servers, but it also makes the initial database selection critically important, because it may limit application migration down the road. Databases have inertia, so choose wisely and review next-generation backup and recovery solutions that can migrate databases across clouds, such as moving Apache Cassandra deployed on one public cloud to Apache Cassandra on a different public cloud.
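As a rough illustration of what cross-cloud portability can look like at the application level, here is a naive sketch using the open-source Python cassandra-driver to stream rows from a cluster in one cloud to a cluster in another. The hosts, keyspace and table are hypothetical, and a real migration would more likely use Cassandra tooling such as sstableloader or a purpose-built backup product rather than a full-table scan like this.

```python
# pip install cassandra-driver
from cassandra.cluster import Cluster

# Hypothetical contact points for clusters in two different clouds.
src_session = Cluster(["cassandra.cloud-a.example.com"]).connect("app_keyspace")
dst_session = Cluster(["cassandra.cloud-b.example.com"]).connect("app_keyspace")

# Prepared statement for efficient repeated inserts on the target cluster.
insert = dst_session.prepare("INSERT INTO users (id, email) VALUES (?, ?)")

# Naive full-table copy; assumes the schema already exists on the target.
for row in src_session.execute("SELECT id, email FROM users"):
    dst_session.execute(insert, (row.id, row.email))

print("Copy complete")
```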
Don’t Assume Your Cloud Providers Have Your Back(up)
Public cloud providers are not responsible for the health of your cloud-based systems; you are ultimately responsible for the correctness and integrity of your data. Most cloud vendors provide tooling for high availability and for recovering from hardware failures in their environments. However, soft errors, such as logical data corruption, schema corruption and human error, are hard to identify and even harder to fix, and cloud providers offer no guarantees against them.
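One sketch of how a team might catch such soft errors on its own: periodically fingerprint critical tables and compare against the previous fingerprint, alerting on unexpected drift. The row format and comparison logic here are illustrative assumptions, not a vendor feature.

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent digest of a table's contents.

    XOR-combining per-row hashes makes the result independent of row
    order, so two scans of the same data always agree.
    """
    combined = 0
    for row in rows:
        row_hash = hashlib.sha256(repr(row).encode()).digest()
        combined ^= int.from_bytes(row_hash, "big")
    return f"{combined:064x}"

# Hypothetical snapshots of the same table taken a day apart.
yesterday = [("1", "alice@example.com"), ("2", "bob@example.com")]
today = [("1", "alice@example.com")]  # a row silently disappeared

if table_fingerprint(yesterday) != table_fingerprint(today):
    print("ALERT: table contents changed; verify against backups")
```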
Non-Relational Databases Are a Different Ballgame
Non-relational databases are fundamentally different because of their eventual-consistency model, triple replication and distributed storage architecture. They support always-on applications and cannot be paused for backups. When selecting a data protection strategy for non-relational databases (NoSQL, cloud databases, graph, key-value), ensure that your vendors offer purpose-built solutions for distributed databases that work in cloud-first models and across multiple cloud providers.
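Eventual consistency is also why a backup tool cannot simply read any one replica and treat the result as authoritative. As a small sketch of one mitigation, the Python cassandra-driver lets a client request quorum-level consistency so a read reflects a majority of replicas; the host, keyspace and table below are hypothetical.

```python
# pip install cassandra-driver
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(["cassandra.example.com"]).connect("app_keyspace")

# QUORUM requires a majority of replicas to answer, so the read cannot
# silently return a single stale replica's view of the data.
stmt = SimpleStatement(
    "SELECT id, email FROM users",
    consistency_level=ConsistencyLevel.QUORUM,
)
for row in session.execute(stmt):
    print(row.id, row.email)
```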
Version Your Databases
Administrators must maintain durable point-in-time backup copies of the databases that serve mission-critical cloud-native applications. Failures will happen; it's a matter of when, not if. The capability to roll a database back to a known healthy point in time is therefore crucial to reducing the risk of data loss and cloud-native application downtime. Set service-level objectives for how fast data must be recovered and how much data loss a system can tolerate, then ensure the databases can be recovered within those objectives and are geo-replicated for maximum protection.
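As a minimal sketch of the versioning idea, here is one way to keep a rolling set of timestamped backup copies so there is always a known healthy point to roll back to. The retention count and naming scheme are illustrative assumptions.

```python
from datetime import datetime, timezone

RETAIN_VERSIONS = 7  # assumed retention policy: keep the last 7 copies

def take_versioned_backup(backup_store, data):
    """Append a timestamped copy and prune the oldest beyond retention."""
    version_id = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S.%fZ")
    backup_store.append((version_id, dict(data)))
    # Drop the oldest versions once the retention window is exceeded.
    while len(backup_store) > RETAIN_VERSIONS:
        backup_store.pop(0)
    return version_id

def roll_back(backup_store, version_id):
    """Return the copy recorded at version_id, the known healthy point."""
    for vid, copy in backup_store:
        if vid == version_id:
            return dict(copy)
    raise KeyError(f"no backup version {version_id}")

store = []
good = take_versioned_backup(store, {"user:1": "alice@example.com"})
take_versioned_backup(store, {"user:1": "CORRUPTED"})
print(roll_back(store, good))  # restores the pre-corruption state
```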
Regularly Test Your Multi-Cloud Data Recovery Strategy
A backup strategy that has never been exercised is unproven. Regularly restoring backups into a test environment and validating the results is the only way to confirm that recovery actually works across each cloud and within your recovery-time objectives.
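Here is a minimal sketch of an automated restore drill, with hypothetical helper functions standing in for whatever backup tooling is in use: restore the latest copy into a staging environment, validate it, and time the whole run against the RTO target.

```python
import time
from datetime import timedelta

RTO_TARGET = timedelta(minutes=60)  # assumed recovery-time objective

def restore_latest_backup():
    """Stand-in for restoring the newest backup into a staging database."""
    return {"user:1": "alice@example.com"}  # pretend restored data

def validate(restored):
    """Stand-in for sanity checks, e.g., row counts and key spot-checks."""
    return len(restored) > 0

def run_recovery_drill():
    started = time.monotonic()
    restored = restore_latest_backup()
    ok = validate(restored)
    elapsed = timedelta(seconds=time.monotonic() - started)
    assert ok, "Restored data failed validation"
    assert elapsed <= RTO_TARGET, f"Drill took {elapsed}, exceeding RTO"
    print(f"Drill passed in {elapsed}")

run_recovery_drill()
```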
Demand a Federated Management View
A federated view comes from using a cross-platform, single point-of-control tool. Managing cloud services can absorb a lot of IT resources; users can avoid this resource drain by turning to tools that provide a single-pane-of-glass view of a multi-cloud deployment. If you are implementing new tools, such as backup and recovery, ensure they work across different cloud environments, both public and private: using different tools for different clouds is not only operationally inefficient but also cost-prohibitive.
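At the code level, a single point of control usually means one interface that hides each provider's specifics. The sketch below is a hypothetical illustration of the pattern, not any vendor's actual API: each cloud gets an adapter behind a common BackupProvider protocol, and operations loop over them uniformly.

```python
from typing import Protocol

class BackupProvider(Protocol):
    name: str
    def trigger_backup(self, database: str) -> str: ...

class CloudABackups:
    name = "cloud-a"
    def trigger_backup(self, database: str) -> str:
        # Would call cloud A's backup API here (omitted in this sketch).
        return f"{self.name}:{database}:backup-001"

class CloudBBackups:
    name = "cloud-b"
    def trigger_backup(self, database: str) -> str:
        # Would call cloud B's backup API here (omitted in this sketch).
        return f"{self.name}:{database}:backup-001"

providers: list[BackupProvider] = [CloudABackups(), CloudBBackups()]

# One control loop covers every cloud; no per-provider tooling needed.
for provider in providers:
    backup_id = provider.trigger_backup("app_keyspace")
    print(f"[{provider.name}] started {backup_id}")
```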