How Enterprises Can Future-Proof Kubernetes Management Platforms

eWEEK DATA POINTS: While the Kubernetes development community has done a commendable job of providing clear and concise interfaces into Kubernetes, the integration of additional tools still requires deep knowledge. And that can sometimes be hard to get.


Enterprises are experiencing a confluence of new challenges here in the tumultuous year 2020. The COVID-19 pandemic suddenly forced employees around the globe to work from home, target markets are in flux and the evolution of public cloud pricing is adding complexities to IT budgets. 

To adapt to these conditions and become more resilient for future challenges, many enterprises are turning to hybrid-cloud strategies. At the same time, Kubernetes has entered the mainstream for enterprise users, resulting in more organizations using the technology as the default platform for managing cloud-native applications and microservices in a hybrid-cloud environment. 

The challenge for enterprise users is that using Kubernetes to achieve true hybrid-cloud flexibility in a secure and reliable way is difficult. Most Kubernetes solutions are disjointed and limited in scope, focusing on a small set of simple applications. This is partly because many early Kubernetes projects pieced together one-off solutions from multiple vendors.

Even today, many Kubernetes management platforms still rely heavily on third-party networking and storage solutions, all operating independently of one another. While the Kubernetes development community has done a commendable job of providing clear and concise interfaces into Kubernetes, the integration of additional tools still requires deep knowledge. Without that knowledge, Kubernetes becomes much more difficult to scale, leaving enterprise users with incomplete solutions that they may not have the right staff to run. 

This eWEEK Data Points article uses industry information from Brian Waldon, Vice-President of Product for Diamanti, who has been a part of the Kubernetes community since the beginning. He brings a deep perspective on customer needs and the technological expertise to help enterprise users avoid pitfalls when deploying Kubernetes. Waldon posits five questions IT organizations should consider to deliver a complete future-proof platform that supports diverse applications, infrastructure, people and processes in an enterprise. 

Data Point No. 1: Will it scale? 

While enterprise adoption of Kubernetes is on the rise, many organizations get stuck at just a few clusters or just a few teams without a solid strategy for expanding adoption broadly across a company. While Kubernetes simplifies managing containerized applications, there are still a few challenges when it comes to managing multiple teams and clusters. These challenges exist because each on-premises implementation or cloud provider has a different approach to creating and managing Kubernetes clusters, different methods of user authentication, and different application-specific configuration options. 

Another significant challenge is the variety of applications that need to be supported, including stateful applications and their data. Therefore, it is important to look for a comprehensive Kubernetes solution with a common control plane that can manage multiple Kubernetes clusters, applications and data, regardless of where the clusters reside.
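At the API level, managing multiple clusters builds on the same primitives any Kubernetes client uses. As a minimal illustration (the cluster names, server URLs and credentials below are hypothetical), a single kubeconfig file can define contexts for clusters running on-premises and in the cloud:

```yaml
# Hypothetical kubeconfig spanning an on-premises cluster and a cloud cluster.
apiVersion: v1
kind: Config
clusters:
- name: on-prem
  cluster:
    server: https://onprem.example.com:6443   # placeholder API endpoint
- name: cloud
  cluster:
    server: https://cloud.example.com:6443    # placeholder API endpoint
users:
- name: admin
  user: {}   # credentials (certs, tokens) omitted for brevity
contexts:
- name: on-prem
  context:
    cluster: on-prem
    user: admin
- name: cloud
  context:
    cluster: cloud
    user: admin
current-context: on-prem
```

Switching targets is then a matter of `kubectl config use-context cloud`. A common control plane automates what this sketch does by hand: credential handling, placement and policy across many clusters at once.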

Data Point No. 2: Is it flexible?  

As enterprises embrace the hybrid cloud model, they need the flexibility to deploy applications to the most appropriate infrastructure. That includes the ability to migrate applications, including stateful applications, between data centers or between the data center and the cloud. This flexibility is key to moving workloads from staging to production environments and to migrating workloads from the cloud to on-premises infrastructure or to another cloud vendor to avoid vendor lock-in and control costs. This maneuverability can also open up capacity on critical clusters by moving lower-priority applications to other clusters. Look for a Kubernetes solution that provides tools to deploy or migrate apps where you want them as business or IT needs change.

A solution should also provide the flexibility to work with your chosen continuous integration and continuous delivery (CI/CD) tools via standard APIs. However, some Kubernetes solutions add their own proprietary or opinionated frameworks that obscure those APIs, making standard tools and applications incompatible. The chosen solution should conform to the Kubernetes APIs and provide the flexibility to work with a variety of third-party tools. 
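Conformance means that standard resources behave the same everywhere. For example, a plain Deployment manifest like the hypothetical one below should be accepted unchanged by any conformant cluster, which is what allows standard CI/CD tools to apply it via `kubectl apply` or the Kubernetes REST API regardless of vendor:

```yaml
# Hypothetical Deployment using only standard Kubernetes APIs (apps/v1),
# so it is portable across any conformant cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

If a platform requires rewriting this into a proprietary format, that is a sign the standard APIs are being obscured.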

Data Point No. 3: Is it easy to use?  

The Kubernetes skills gap becomes increasingly obvious as enterprises move to Day 2 operations and need to address issues such as security, performance, maintenance and scalability. Not only do many IT organizations lack the skills to implement complex, large-scale Kubernetes environments, but there is a limited pool of qualified candidates, even if an organization wants to bring this expertise in-house. This problem is amplified when dealing with disjointed solutions that require more resources and training, or that force the IT organization to manage multiple vendors just to cobble together a DIY approach. As such, it is important to look for a turnkey solution that does not require specialized staff or additional resources to manage as your environment expands, with as few external vendors involved as possible.  

Data Point No. 4: Is it resilient?  

No IT organization is impervious to the risk of losing access to its applications and data. In the event of a cluster or site failure, it is critical for an organization to quickly and easily recover applications and data to ensure business continuity. 

Look for cloud-native solutions designed to work with containerized applications that address a wide variety of failure and recovery modes. Focus on solutions built for improved uptime and resilience, including the ability to be deployed across availability zones or integrated asynchronous replication to send data offsite. You’ll also want to be able to easily set up backup and disaster recovery (DR) policies that protect applications and their data for both stateless and stateful workloads.
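One standard building block for such policies is the Kubernetes VolumeSnapshot API, which captures point-in-time snapshots of a stateful application's persistent volumes. A minimal sketch (the names below are hypothetical, and it assumes a CSI storage driver with snapshot support is installed in the cluster):

```yaml
# Hypothetical point-in-time snapshot of a database's persistent volume claim.
# Requires a CSI driver and snapshot controller that support this API.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass   # placeholder snapshot class
  source:
    persistentVolumeClaimName: db-data     # placeholder PVC to snapshot
```

A backup or DR product typically creates objects like this on a schedule and pairs them with offsite replication, rather than leaving each snapshot to be managed by hand.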

Data Point No. 5: Does it provide performance and price benefits?

Early on, container and microservice development primarily focused on lightweight, stateless applications. As the ecosystem matured, IT organizations began to look for ways to containerize stateful applications such as databases, along with artificial intelligence and machine learning workloads, extending Kubernetes to their high-value applications. However, these applications demand much greater I/O than stateless applications and are more sensitive to latency variations. As a result, it is important to seek out a solution that delivers optimal I/O performance for distributed applications. Additionally, performance improvements translate into total cost of ownership (TCO) savings by reducing the overall infrastructure footprint and operational costs.
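Stateful, I/O-intensive workloads of this kind are typically deployed as StatefulSets with dedicated persistent volumes, where the chosen storage class largely determines I/O performance. A simplified sketch (the `fast-nvme` storage class is a hypothetical high-performance tier; a real deployment would also set credentials and resource limits):

```yaml
# Hypothetical single-replica database StatefulSet with a dedicated
# persistent volume provisioned from a high-performance storage class.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-nvme   # hypothetical low-latency NVMe tier
      resources:
        requests:
          storage: 100Gi
```

The point of the sketch is the `storageClassName`: the same manifest can run on commodity or high-performance storage, which is where platform-level I/O optimization shows up in both latency and footprint.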

Kubernetes on its own is hard. It’s not turnkey, but we are starting to see vendors come in to fill the gaps around operations for Kubernetes. The key for enterprises as they scale from initial Kubernetes projects to broad adoption is to find the right solution, one that enables them to grow not only in the number of applications but also in security, availability and performance. 

If you have a suggestion for an eWEEK Data Points article, email [email protected].