Five Best Practices for Scaling DevOps Programs

eWEEK DATA POINTS: Here are five best practices DevOps teams are following to stay ahead of their competition.


While some of the practices and technologies associated with DevOps are still immature, small teams have nevertheless grown into a more comprehensive role in their company's overall IT capabilities. Teams that have enjoyed this kind of success are seeing happier customers and earning recognition from upper management. But what comes next?

Organizational leaders always want more, so how should DevOps teams operate in a large organization? Scaling DevOps is a journey, and there's no better time to take the first step.

In this article, Corey Scobie, CTO of Chef, introduces five steps DevOps teams can use to scale their initiative and optimize success. 

Data Point No. 1: Adopt a coded approach

The practice of DevOps requires development, operations and security teams to work together. To do this, they must share a common set of processes and goals. Code provides the path forward to trust and velocity. A leading industry analyst explained it well: “Security and infrastructure are inseparable. As Zero Trust security becomes infused into more infrastructure, the forced segregation of the two is no longer possible. Automation must be secure, and security itself demands more automation. It’s a virtuous cycle.”

Using code to describe the desired outcome and associated policies eliminates miscommunications and makes deliverables unambiguous. Automation ensures repeatability across multiple teams at scale.
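The "desired outcome as code" idea can be sketched in a few lines. This is an illustrative Python sketch, not Chef's actual syntax (Chef expresses resources in a Ruby-based DSL), and the resource names and fields are hypothetical: a desired state is declared once, and a converge step computes the actions needed to close the gap between actual and desired, which is what makes the result repeatable across teams and machines.

```python
# Hypothetical desired state for a service, declared as data.
DESIRED = {
    "nginx": {"installed": True, "running": True, "port": 443},
}

def converge(actual: dict, desired: dict) -> list[str]:
    """Compare actual state to desired state and list the actions
    needed to close the gap -- the core loop of any config tool."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, {})
        for key, value in want.items():
            if have.get(key) != value:
                actions.append(f"set {name}.{key} -> {value}")
    return actions

# The same declaration yields the same actions on every node, every run:
print(converge({"nginx": {"installed": True, "running": False}}, DESIRED))
```

Because the declaration is unambiguous data rather than prose, two teams reading it cannot interpret the deliverable differently.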

Data Point No. 2: Make it easy to work with code

For those organizations and individuals not born in the digital age, the concept of doing “everything through code” can seem overwhelming. In addition, today the world faces a developer shortage. But a coded approach does not mean that everyone has to be a coder. There are tools available that use human-readable languages and templates that enable easy editing.

Infrastructure should be effortless, and users should only have to configure the parameters of the infrastructure, not write custom scripts for each and every system. Not only does this make code accessible to teams across the organization with varying skill sets, but it also eliminates much of the time Ops, Security, and QA teams need to spend manually updating process and policy documentation.
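The "configure parameters, don't write scripts" point can be illustrated with a plain-text template. This is a minimal sketch using Python's standard-library `string.Template` (the template content and parameter names are hypothetical, not any vendor's format): a non-coder edits readable placeholders, and rendering fails loudly if a parameter is missing rather than shipping a half-finished config.

```python
from string import Template

# A human-readable template anyone can edit: plain text with
# $placeholders instead of custom scripting logic.
NGINX_TEMPLATE = Template(
    "server {\n"
    "  listen $port;\n"
    "  server_name $hostname;\n"
    "}\n"
)

def render(params: dict) -> str:
    """Fill in the parameters; substitute() raises KeyError on any
    missing value, so an incomplete config fails at render time."""
    return NGINX_TEMPLATE.substitute(params)

print(render({"port": 443, "hostname": "app.example.com"}))
```

Editing `$port` or `$hostname` requires no programming skill, which is exactly what makes a coded approach accessible to teams with varying skill sets.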

Data Point No. 3: Use the right tool for the right job

In the hands of a savvy developer or application superuser, almost any software product can be made to do things well beyond what the vendor intended. This is a core reason organizations end up with technical debt and solutions that are hard to maintain and scale. Simply put, the tool was never meant to be used that way.

Historically, DevOps teams took an infrastructure-centric view of the world. Teams started with the infrastructure and built systems from the bottom up using layers and layers of automation. This worked well for a single application, but as more applications were added, dependency maps became more complex and the automation required increasing amounts of maintenance. To scale DevOps to the enterprise, teams need to take an application-centric view that can handle the complex web of dependencies present in modern applications. An application-centric view of the world abstracts the application from the underlying infrastructure and packages only the required dependencies with the application.
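"Packages only the required dependencies" can be made concrete with a small sketch. The package names and graph below are hypothetical, and real packaging tools do far more, but the core idea is the same: the application manifest declares direct dependencies, and packaging walks the graph to bundle only the transitive closure rather than a whole machine image.

```python
# Hypothetical dependency graph: each package lists its direct deps.
DEPS = {
    "webapp":  ["libssl", "runtime"],
    "runtime": ["libc"],
    "libssl":  ["libc"],
    "libc":    [],
}

def required(app: str, graph: dict) -> set[str]:
    """Walk the graph from the app and collect everything it
    actually needs -- and nothing more."""
    needed, stack = set(), [app]
    while stack:
        pkg = stack.pop()
        for dep in graph[pkg]:
            if dep not in needed:
                needed.add(dep)
                stack.append(dep)
    return needed

print(sorted(required("webapp", DEPS)))
```

Because the closure is computed from the application outward, nothing installed on the build machine leaks into the package unless some declared dependency actually requires it.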

Data Point No. 4: Enable one way to production

In order to scale, DevOps teams need to work more efficiently, which requires standardization across tools and processes. Exception-based delivery is not a viable strategy. Base images need to be consistent across an organization, and "snowflake configurations" need to be eliminated. In addition, standard, compliant baseline images need to be used and managed systematically. With the right technology and the right hierarchy, infrastructure and application lifecycle concerns can be separated and consistently automated as part of CI/CD pipelines.

Without a normalized process that eliminates the disparity between the build and deploy stages, application delivery velocity cannot be achieved. Application packages/artifacts need to be built consistently across all of the stages of the deployment pipeline. Using the same artifact in both development and operations drives velocity and eliminates much of the complexity associated with application delivery.
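One common way to enforce "the same artifact in both development and operations" is content addressing. This is a minimal sketch using Python's standard-library `hashlib` (the artifact contents and promotion step are hypothetical): the artifact's digest is recorded at build time, and every later stage refuses to promote anything that is not byte-for-byte the artifact that was built and tested.

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Content hash that immutably identifies a build artifact."""
    return hashlib.sha256(data).hexdigest()

def promote(artifact: bytes, recorded_digest: str) -> bool:
    """Allow promotion to the next stage only if the artifact is
    byte-for-byte the one recorded at build time."""
    return artifact_digest(artifact) == recorded_digest

build = b"app-1.4.2 package contents"     # hypothetical artifact
digest = artifact_digest(build)            # recorded once, at build time
assert promote(build, digest)              # same bytes: promote
assert not promote(b"rebuilt later", digest)  # a rebuild is rejected
```

Rebuilding per stage reintroduces drift between what was tested and what ships; pinning promotion to the digest makes that drift impossible.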

Application delivery teams need to define an application consistently, regardless of the technology, and take a non-opinionated view of the deployment topology. Regardless of the age of an application or its underlying code, it can be packaged once and then deployed on a bare-metal device, in a VM, or in a container, running in the cloud or on-premises, without having to rebuild the application.

Data Point No. 5: Shift risk mitigation left

Thanks to agile, cloud and microservices, development is getting faster. The sooner new code makes it into production, the quicker the company recognizes the value. System testing, compliance audits, application replatforming, runtime errors and reporting are all velocity blockers.

Coded approaches include policy along with the release, and tests are run and errors are addressed at build time rather than run time. Each policy and dependency is defined as code, versioned, and stored in source control along with the application code. These assets travel the pipeline with the application code, are updated and versioned alongside the application, and are monitored in production. Attaching codified assets to an application release at the source control level is the easiest, cheapest and fastest way to ensure compliance and accelerate delivery.
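Policy-as-code checked at build time can be sketched in a few lines. The rules and configuration keys below are hypothetical, and real policy tools (Chef InSpec, for example) are far richer, but the shift-left mechanic is the same: policies live next to the application code, and any violation fails the build instead of surfacing in production.

```python
# Hypothetical policies, versioned alongside the application and
# evaluated at build time by the pipeline.
POLICIES = [
    ("no root login", lambda cfg: not cfg.get("permit_root_login", False)),
    ("TLS required",  lambda cfg: cfg.get("tls_min_version", 0.0) >= 1.2),
]

def check(config: dict) -> list[str]:
    """Return the name of every policy this build violates; an
    empty list means the build may proceed."""
    return [name for name, rule in POLICIES if not rule(config)]

good = {"permit_root_login": False, "tls_min_version": 1.2}
bad  = {"permit_root_login": True,  "tls_min_version": 1.0}
print(check(good))  # empty: build proceeds
print(check(bad))   # violations: pipeline fails fast
```

Because the rules are code in source control, a policy change is reviewed, versioned and tested exactly like an application change.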

If you have a suggestion for an eWEEK Data Points article, email [email protected].