Throughout my career as an IT consultant, working across various platforms and technologies, I’ve encountered myriad terms pitched to explain how a company is approaching modern software development: Agile, DevOps, and SAFe, among others.
All of them come with a passionate group of advocates ready to go to bat for the term at the first hint of skepticism. Personally, I am neither for nor against any of these terms. If something works for an organization, great. But in trying to replicate that success, many companies fall into the trap of treating an overhyped solution as a template or mold that everyone must fit into in order to succeed.
The important thing companies must realize is that DevOps is not a one-size-fits-all solution. The impression I get is that if a trendsetter like Google, Facebook, or Microsoft decides to do something, then everyone else feels obligated to do it too.
Add to this the additional pressure of taking oneself too seriously (becoming a bit draconian) and you are like a new lure thrown into the fishing pond for people like me, who tend toward cynicism after so many IT projects.
When I ask companies where they are in their DevOps maturity, the response I usually get is along the lines of “we’re on a journey towards that.” The same can be said of continuous performance testing, which generally means the company is doing something, but hasn’t quite figured it out yet.
This brings me to why I have now deemed most current DevOps initiatives “DevOops” – that is to say, misguided attempts at DevOps. Most business leaders and companies are constantly looking out for the latest development, trend, or buzzword to accelerate innovation, save time and money, and take on competitors. In recent years, DevOps has been one of the biggest buzzwords for companies everywhere.
Ultimately, one of the biggest factors in companies implementing an Agile or DevOps model is F.O.M.O. (fear of missing out). As such, companies try to adopt DevOps practices for the sake of being relevant.
This often ends up causing more issues in the software development process than their older, traditional approach did. That is not to say the traditional model is the way forward for your company; rather, you need to make sure you are actually ready for this type of model before adopting it.
Imitation is the sincerest form of flattery…or is it?
When companies imitate their competitors, they are attempting to keep up with their competition and stay relevant. Many times they do this without sitting down to determine the best way a DevOps model can be implemented for them – or if they even need DevOps practices in the first place. If your primary goal is to add speed just for the sake of speed, that’s a recipe for disaster (or in this case, product development issues).
When it comes to performance testing, choosing speed for its own sake does not solve the problem. Companies often ask me about putting every API test or end-to-end performance test into a CI pipeline to gather timing metrics, and they want to do this as quickly as possible.
However, the consequences of doing this are generally that the metrics aren’t valid, they don’t help solve performance issues, and the testing becomes a “checkbox activity.” Why aren’t these metrics valid, you ask? Because the metrics gathered do not provide real value to their audience (i.e., developers), which leads to them being overlooked or dismissed as irrelevant.
Meanwhile, the performance issues companies were already experiencing continue to surface when it’s too late. Generally, there is a production outage or a customer-experience complaint in a support ticket to deal with. The automation didn’t solve the problem. The speed at which the tests ran through the CI pipeline didn’t solve the problem. The headaches still exist.
But wait, there’s a light!
In my experience, three things indicate to me that a company has successfully implemented a DevOps model or strategy:
- Service virtualization (or the equivalent) has been implemented: This enables teams to test against unfinished features and services rather than waiting for development to finish. Without a way to automate tests for the parts that already work while others are still being built, automation falls behind and testing will always lag development.
- They have decoupled the production release from users having access to the new features: They release code silently and open it to end users at the pace they choose, such as through pilot groups. If a rollout breaks something, they can quickly revert to the previous behavior through feature flags.
- Repetitive task automation: They are constantly figuring out how to reduce toil by automating repetitive tasks, while ensuring that manual reviews remain in place for exceptions to the standard pass/fail criteria. Everything isn’t always so cut and dried – pass or fail. A good DevOps organization knows the balance between reducing toil and mindless pipeline speed.
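To make the second point concrete, here is a minimal sketch, in Python, of how a feature flag can decouple deployment from release. The flag names, pilot group, and in-memory flag store are all hypothetical; real systems typically use a dedicated flag service rather than a dictionary.

```python
# Minimal feature-flag sketch: the new code path ships to production
# "dark", and a flag decides who actually sees it. All names here are
# illustrative, not from any particular tool.

PILOT_USERS = {"alice", "bob"}  # hypothetical pilot group

flags = {
    # Flag starts off: the feature is deployed but invisible.
    "new_checkout": {"enabled": False, "pilot_only": True},
}

def is_enabled(flag_name: str, user: str) -> bool:
    """Return True if the flag is on for this user."""
    flag = flags.get(flag_name)
    if flag is None or not flag["enabled"]:
        return False
    if flag["pilot_only"]:
        return user in PILOT_USERS
    return True

def checkout(user: str) -> str:
    # Both code paths live in production; the flag picks one.
    if is_enabled("new_checkout", user):
        return "new checkout flow"
    return "old checkout flow"

# Silent release: everyone still sees the old flow.
assert checkout("alice") == "old checkout flow"

# Enable for the pilot group only.
flags["new_checkout"]["enabled"] = True
assert checkout("alice") == "new checkout flow"
assert checkout("carol") == "old checkout flow"

# Something broke? Revert instantly by flipping the flag off,
# with no redeploy required.
flags["new_checkout"]["enabled"] = False
assert checkout("alice") == "old checkout flow"
```

The key property is that turning the feature on or off is a configuration change, not a code change, which is what makes the fast rollback described above possible.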
Ultimately, the solution to what ails a company’s performance testing initiatives comes down to the outcome they hope to glean from their testing results. If you want performance test results solid enough to support confident business decisions, look no further than the end-user experience.
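As one illustration of gating on end-user experience rather than merely recording timings, here is a hypothetical sketch of a performance check that fails when user-facing latency exceeds an agreed service-level objective. The sample data, threshold, and nearest-rank percentile method are all invented for illustration, not taken from any specific tool.

```python
# Hypothetical gate: instead of a "checkbox activity" that just logs
# timings, fail the check when user-facing latency breaches an SLO.
# All numbers below are invented for illustration.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Pretend these are page-load times (ms) measured from the user's view.
page_load_ms = [320, 350, 290, 310, 900, 340, 330, 360, 300, 310]

SLO_P95_MS = 800  # example objective: 95% of loads under 800 ms

p95 = percentile(page_load_ms, 95)
if p95 > SLO_P95_MS:
    print(f"FAIL: p95 page load {p95} ms exceeds {SLO_P95_MS} ms SLO")
else:
    print(f"PASS: p95 page load {p95} ms within {SLO_P95_MS} ms SLO")
```

A check like this gives developers a metric tied to something they care about (what users actually experience), which is precisely what makes the result actionable instead of ignorable.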
Future forward DevOps
Right now, DevOps adoption is still in the “hype cycle” stage, the area Gartner calls the “Peak of Inflated Expectations.” As a result, many efforts aren’t completely serious about their DevOps pursuit and reject the core practices that make DevOps work because those practices are hard to do.
Over the next five years, we will see the hype around DevOps subside as companies “skill up” their employees and set expectations for new hires around the skill sets required of Site Reliability Engineers. At that point, companies will figure out how to get the right value from DevOps, and some will learn they can get better value without it.
About the Author:
Scott Moore, Director, Customer Engineering, Tricentis