Despite the clear benefits of test automation, many organizations fail to build effective test automation programs. Test automation becomes a costly effort that finds few bugs and is of questionable value to the organization.
There are a number of reasons why test automation efforts are unproductive. Some of the most common include:
Poor quality of tests being automated
Experts in the field agree that before approaching problems in test automation, we must be certain those problems are not rooted in fundamental flaws in test planning and design.
“It doesn’t matter how clever you are at automating a test or how well you do it, if the test itself achieves nothing then all you end up with is a test that achieves nothing faster.” Mark Fewster, Software Test Automation, I.1, (Addison Wesley, 1999).
Many organizations simply focus on taking existing test cases and converting them into automated tests. There is a strong belief that if 100% of the manual test cases can be automated, then the test automation effort will be a success.
In trying to achieve this goal, organizations find that they may have automated many of their manual tests, but at a huge investment of time and money, and with few bugs found to show for it. This can be due to the fact that a poor test is a poor test, whether it is executed manually or automatically.
Lack of good test automation framework and process
Many teams acquire a test automation tool and begin automating as many test cases as possible, with little consideration of how they can structure their automation in such a way that it is scalable and maintainable. Little consideration is given to managing the test scripts and test results, creating reusable functions, separating data from tests, and other key issues which allow a test automation effort to grow successfully.
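As a rough illustration of what "separating data from tests" and "creating reusable functions" can look like in practice, here is a minimal Python sketch. The function names and data are hypothetical, not taken from any particular tool: the point is that test cases live in a data table, and one reusable routine exercises the system under test, so adding a case means adding a row rather than writing a new script.

```python
def apply_discount(price, percent):
    """Hypothetical stand-in for the system under test."""
    return round(price * (1 - percent / 100), 2)

# Test data kept apart from test logic; in a real framework this
# might be loaded from a CSV, spreadsheet, or JSON file.
DISCOUNT_CASES = [
    # (price, percent, expected)
    (100.00, 10, 90.00),
    (59.99,   0, 59.99),
    (20.00,  50, 10.00),
]

def run_discount_cases(cases):
    """Reusable check: drives every data row through the SUT and
    collects failures, instead of one hard-coded script per case."""
    failures = []
    for price, percent, expected in cases:
        actual = apply_discount(price, percent)
        if actual != expected:
            failures.append((price, percent, expected, actual))
    return failures

if __name__ == "__main__":
    failed = run_discount_cases(DISCOUNT_CASES)
    print("failures:", failed)  # an empty list means every data row passed
```

With this shape, maintaining the suite mostly means maintaining the data table, which scales far better than maintaining hundreds of near-duplicate scripts.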
After some time, the team realizes that they have hundreds or thousands of test scripts, thousands of separate test result files, and the combined work of maintaining the existing scripts while continuing to automate new ones requires a larger and larger test automation team with higher and higher costs and no additional benefit.
“Anything you automate, you’ll have to maintain or abandon. Uncontrolled maintenance costs are probably the most common problem that automated regression test efforts face.” Kaner, Bach and Pettichord, ibid
Inability to adapt to changes in the system under test
As teams drive towards their goal of automating as many existing test cases as possible, they often don’t consider what will happen to the automated tests when the system under test (SUT) undergoes a significant change.
Lacking a well-conceived test automation framework that considers how to handle changes to the system under test, these teams often find that the majority of their test scripts need maintenance. The outdated scripts will usually produce skyrocketing numbers of false alarms: failures that reflect stale expectations rather than real defects, since the scripts are no longer finding the behavior they were programmed to expect.
As the team hurriedly works to update the test scripts to account for the changes, project stakeholders begin to lose faith in the results of the test automation. Often this perceived lack of value results in a decision to scrap the existing test automation effort and start over, in hopes that a more intelligent approach will produce better results.
“Test products often receive poor treatment from project stakeholders. In everyday practice, many organizations must set up complete tests from scratch, even for minor adaptations, since existing tests have been lost or can no longer be used. To achieve a short time-to-market, tests need to be both easy to maintain and reusable.” Buwalda, Janssen and Pinkster, Integrated Test Design and Automation: Using the TestFrame Method.