The tremendous amount of manual time and energy required to configure, run, and monitor automated software tests can come as a startling revelation to companies that have invested substantial engineering effort in automated test frameworks. These frameworks are designed specifically to reduce the human cost of continually running large regression suites, yet the process is not always truly automatic. What is at the root of this disconnect?
When smart managers start asking the right questions, they can pinpoint the problem. Even stipulating that the automated test framework is a good thing that will always have plenty of work to churn through, you can shed a great deal of light by challenging the status quo along three key axes.
First, ask: why exactly does your automated test framework take so long to run, and what can you do to make it faster? Second: why is it so resource-intensive, and what can you do to make it more efficient? Third, and most important: why are people still managing it, and how can you make it truly automatic?
We recently had the opportunity to interview some of the world's largest development organizations as they answered those questions. We were astonished at how similar the problems were across wildly different shops. Test suites take time to configure and launch, there is a perpetual arm-wrestling match with IT for resources, and staff still spend hours combing through log files, painstakingly trying to tease out the real defects from the spurious failures and bad tests.
Effective automation should obviously reduce manual effort. Yet the most successful test systems are backed by large infrastructure teams tasked solely with the care and feeding of the "automatic" system. How did this happen?