Three of the most important current trends in IT development are DevOps and agile development, no-code and low-code development (also known as citizen development), and container deployments of microservices using Kubernetes scheduling. It's apparent that cloud and container adoption will continue at a fast pace in 2019, but you may need help analyzing some of the bigger-picture implications of that migration.
So eWEEK is happy to publish some predictions involving the container-deployment trend here today. The first 10 predictions are from Sysdig CEO Suresh Vasudevan, former CEO of Nimble Storage; the others are attributed accordingly.
Vasudevan:
Public cloud goes on-prem with Kubernetes. Microsoft Azure was the first large cloud company to push on-premises with its Stack offering, which now supports Kubernetes, and now all the big players have to follow suit. In fact, movement is already afoot. Cisco announced in November it is teaming with AWS to support hybrid cloud capabilities spanning AWS and Cisco's hyperconverged on-premises infrastructure, all orchestrated by Kubernetes. Google offers its Google Kubernetes Engine (GKE) on-premises, which provides a single view of cloud and on-premises clusters. And IBM, which has a public cloud offering, just acquired Red Hat, which is all about private cloud, providing still more evidence of the hybrid land grab.
Cloud adoption forces still more industry consolidation. Speaking of the IBM/Red Hat deal, expect to see more consolidation as suppliers with weaker cloud stories seek strong cloud dance partners. The specific prediction: One of the mega system/operating system vendors (sales of more than $10 billion) will disappear.
Despite the hybrid cloud push, true multi-cloud capabilities are still some time off. Yes, everyone already uses multiple clouds, so in that sense multi-cloud is already a reality. But when it comes to the nirvana vision where Kubernetes makes it possible for apps to move to wherever is cheapest or fastest for jobs to execute, the reality is the tech is nowhere close to that. Kubernetes orchestration has certainly made it easier to manage cloud environments, but basic realities stand in the way of realizing the grand vision.
For one, apps are not yet fully decoupled from the infrastructure. While that's the promise of Kubernetes, it has still not delivered on its mission to fully break that bond. Then there is the problem of data. Even in containerized environments, data sits where it sits. If it is on-prem, shifting the job out to the cloud is problematic, and vice versa. So multi-cloud for the near term means clouds that are separate but equal. The best you can do is use AWS for App A, Azure for App B, etc.
Organizations get serious about shifting stateful applications to Kubernetes. It is well known by this point that containers are ephemeral in nature. In fact, studies have shown that close to two-thirds of containers live less than an hour (and 85% live less than a day). That means containers are only good for the stateless applications commonly found in microservice environments, right? Well, no. It turns out containers are perfectly good for stateful applications as well. You have to get security, monitoring and management all ironed out, but once you do, you benefit from the basic perks that come with containers, all of which can be summed up as speed and agility. That's hard to deliver in legacy software environments, so you can expect to see organizations put the pedal to the metal as they migrate more mission-critical, stateful applications to Kubernetes in 2019.
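To make "stateful" concrete, here is a minimal sketch assuming the official kubernetes Python client and a cluster reachable through your local kubeconfig. It requests a PersistentVolumeClaim, the building block that lets a pod keep its data across container restarts; the names and sizes are illustrative, not drawn from any particular deployment.

```python
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
core = client.CoreV1Api()

# Ask the cluster for durable storage that outlives any single container.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],          # one node mounts it read-write
        resources=client.V1ResourceRequirements(
            requests={"storage": "1Gi"}          # illustrative size
        ),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC 'demo-data' created")
```

A StatefulSet can then mount the claim, giving each replica stable storage and a stable identity even as its containers churn every few hours.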
The first container-based data breach with significant privacy implications will take place. As organizations have grown more comfortable with containers, they have migrated increasingly important applications to these fungible environments. BUT (yes, the capital-letter version of this conjunction) … they haven't necessarily thought through the security implications of containers or mastered the additional security controls needed to lock down these environments. The result will be a breach that goes a long way toward showing the industry that cloud-based security tools are a requirement, not a nice-to-have.
2019 will be the year of the "serverless" startup. Just as Docker and containers took the world by storm, the latest craze is serverless cloud services such as AWS' Lambda offering (now also offered by Google, IBM, Microsoft Azure and others). And just as the arrival of containers spurred the growth of a surrounding cottage industry, the serverless movement will see a rash of upstarts looking to supply the tooling necessary for everything from security to performance management and even multi-cloud tools.
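For readers who haven't touched serverless yet, here is a minimal sketch of the model using AWS Lambda's Python handler convention: you upload just the function below, and the platform handles provisioning, scaling and per-invocation billing. The event fields are illustrative.

```python
import json

def handler(event, context):
    # Lambda passes the trigger payload as `event`; there is no server to manage.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```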
Further adoption of microservice architectures pushes Istio mainstream. Microservice environments make it easier and faster to develop applications, but the mesh of network services that emerges to support the apps and their interactions with other resources grows complex quickly. That has given rise to Istio, an open-source service mesh management tool that lets you "connect, secure, control, and observe services." Now suppliers of load-balancing tools and firewalls are supporting Istio or building Istio-compliant products, and startups are emerging in what is becoming an Istio ecosystem. This year will see a major upswing in Istio adoption.
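One concrete Istio mechanic worth knowing: sidecar injection is switched on per namespace with a label. A hedged sketch using the official kubernetes Python client, assuming an Istio control plane is already installed in the cluster:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Istio's admission webhook watches for this label; once set, new pods in
# the namespace get an Envoy sidecar injected automatically.
core.patch_namespace(
    "default",
    {"metadata": {"labels": {"istio-injection": "enabled"}}},
)
print("Sidecar injection enabled for the 'default' namespace")
```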
Containers eat VMs — as in, literally. A Kubernetes project introduced the idea of speeding migration from virtual-machine environments to container environments by stuffing VMs inside containers, and in 2019 we'll see more organizations attempt to do just that. Red Hat even has a name for it: "container-native virtualization." The main advantage is that it gives you the benefits of orchestration while freeing you of the need to rely on VMware's vCenter infrastructure. That's the theory, but it isn't enough to sustain the whole Docker lift-and-shift effort.
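A hedged sketch of the idea, using the Kubernetes Python client's generic CustomObjectsApi to create a KubeVirt-style VirtualMachine resource. The group/version and spec fields follow KubeVirt's v1alpha3 CRD as of late 2018 and may differ across versions, so treat the manifest as illustrative rather than authoritative:

```python
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# A VM declared as a Kubernetes object, scheduled like any other workload.
vm = {
    "apiVersion": "kubevirt.io/v1alpha3",   # version varies by KubeVirt release
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},
    "spec": {
        "running": True,  # start the VM immediately
        "template": {"spec": {
            "domain": {
                "devices": {"disks": [
                    {"name": "rootdisk", "disk": {"bus": "virtio"}},
                ]},
                "resources": {"requests": {"memory": "64Mi"}},
            },
            # containerDisk: the VM image ships inside a container image,
            # which is the "VM stuffed in a container" trick in practice.
            "volumes": [{
                "name": "rootdisk",
                "containerDisk": {"image": "kubevirt/cirros-container-disk-demo"},
            }],
        }},
    },
}

custom.create_namespaced_custom_object(
    group="kubevirt.io", version="v1alpha3",
    namespace="default", plural="virtualmachines", body=vm,
)
```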
DevSecOps earns its place in the sun. Take DevOps, stir in security, and you get DevSecOps: a way of speeding software development that results in code that is easier to maintain and has security built in from the get-go rather than bolted on at the end as an afterthought. The arrival of containers has changed so many of the traditional security rules – there are more moving parts and more layers, including your runtime, your orchestrator and your build environment – that DevSecOps is becoming a necessary way of running container shops. So much so that the majority of Fortune 1000 companies will formally staff the DevSecOps function in 2019.
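In practice, DevSecOps often boils down to gates like the following sketch: scan the freshly built image in CI and fail the pipeline if serious vulnerabilities turn up. Here image-scan is a hypothetical CLI standing in for whatever scanner a shop actually runs, and the JSON shape it emits is assumed for illustration:

```python
import json
import subprocess
import sys

IMAGE = "registry.example.com/app:candidate"  # hypothetical image tag

# Run the (hypothetical) scanner; assume it exits 0 and reports findings as JSON.
result = subprocess.run(
    ["image-scan", "--format", "json", IMAGE],
    capture_output=True, text=True, check=True,
)

report = json.loads(result.stdout)
critical = [v for v in report.get("vulnerabilities", [])
            if v.get("severity") == "CRITICAL"]

if critical:
    print(f"Blocking release: {len(critical)} critical CVE(s) found")
    sys.exit(1)  # a non-zero exit fails the CI stage, keeping security shifted left
print("Image passed the security gate")
```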
Year of enterprise Prometheus. Container adoption will scale to the point where instrumentation becomes mandatory, and Prometheus becomes the go-to tool. Prometheus is a standalone, open-source systems monitoring and alerting toolkit that joined the Cloud Native Computing Foundation (CNCF) in 2016 and graduated within the CNCF in August of this year. While Prometheus has solid community support, the only way to deploy it at enterprise scale is to find a company such as Sysdig that is commercializing the tool.
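Instrumentation with Prometheus is straightforward to try. A minimal sketch using the official prometheus_client Python library, which exposes a /metrics endpoint for a Prometheus server to scrape (metric names here are illustrative):

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

@LATENCY.time()          # observe how long each "request" takes
def handle_request():
    time.sleep(random.random() / 10)   # stand-in for real work
    REQUESTS.inc()                     # count it

if __name__ == "__main__":
    start_http_server(8000)  # serves metrics at http://localhost:8000/metrics
    while True:
        handle_request()
```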
From Docker:
2019 is the year in which container platforms go mainstream for the enterprise. Independent research firms are recognizing the importance of the emerging market for enterprise container platform (ECP) software suites; Forrester, for example, has published its first-ever New Wave report on the market. Our prediction is that 2019 will be the year enterprises widely adopt container platforms as they become a key component in digital transformation initiatives.
Jack Norris, VP of Data & Applications at MapR:
2019 is the year containers and AI meet in the mainstream: NVIDIA announced the open-source RAPIDS at the end of this year, a harbinger of how the focus on operationalizing AI, better sharing across data scientists and distributing processing across locations will drive containerization. Another rising technology behind this prediction is Kubeflow, which will complement containers and distributed analytics.
From HiveIO:
Containers as a service will be a hot item: Looking ahead to 2019 and beyond, enterprises will use containers to move data between private and public clouds. The benefits include reducing complexity through automation, better distributing computing capabilities, and offering policy-based optimization. Hyperconverged technology has already incorporated some aspects of Containers-as-a-Service (CaaS) as well as serverless or Function-as-a-Service (FaaS). In the future, these two services will make up basic cloud computing features offered as complete storyboards.
If you look historically, we have moved from physical to virtual servers over the past 10 years, and compute continues to break down into smaller chunks. During the last 24 months, containers have come to the forefront as application architectures change, the requirement for huge scale grows and DevOps comes to the mainstream. The natural progression from this is serverless computing, where you upload a function, send your question to that function and get an answer (see the sketch below). Ultimately, this is a process of optimizing resource usage. Advantages include better resource utilization, flexibility in application architecture, and scalability at a global level.
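The sketch below shows that question-and-answer flow, using boto3 to invoke an AWS Lambda function synchronously. The function name and payload are hypothetical; any deployed function with a matching handler would do:

```python
import json

import boto3

lam = boto3.client("lambda")

# Send the "question" and wait for the answer in one call.
response = lam.invoke(
    FunctionName="hello-fn",            # hypothetical function name
    InvocationType="RequestResponse",   # synchronous: block until it returns
    Payload=json.dumps({"name": "container"}).encode(),
)

print(json.load(response["Payload"]))   # the function's return value
```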
Rick Kilcoyne of hybrid cloud management platform CloudBolt:
Containers will make serious inroads in 2019. Additionally, people will care less about where the cloud is running and more about how containers are running.
There will be a commoditization of the cloud. Service providers must grapple with the notion they are no longer selling solutions but a commodity. Because of this, the industry faces an interesting cost structure predicament. Enterprises want commoditization of the cloud, while vendors want to provide services that lock in customers.
“Infrastructure as code” will hit its trough of disillusionment. IT teams will start wrestling with the view that infrastructure is code, and they’ll have to manage it carefully. It will be critical for IT to develop protocols ensuring code is wrapped and can be executed the same way every time.
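One way to picture infrastructure as code executing the same way every time is a declarative tool such as Pulumi, sketched below in Python. This assumes the pulumi and pulumi_aws packages and configured AWS credentials; the resource names are illustrative:

```python
import pulumi
import pulumi_aws as aws

# Declare the desired state; Pulumi diffs it against what actually exists,
# so re-running this program converges on the same infrastructure rather
# than piling up duplicates.
logs = aws.s3.Bucket("app-logs", acl="private")

pulumi.export("bucket_name", logs.id)
```

The repeatability comes from the declarative model: the code describes what should exist, not the sequence of commands that created it.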
Serverless computing is a trap door. It seems simple and enticing, but what manages the code? How do teams ensure the same code runs across cloud providers and that it all works?