Tuesday, May 1, 2007

Five Obstacles on the Way to Successful Test Automation

1. Manual Testers don’t trust the results produced by automated testing

The most obvious reason for starting a test automation project is to cut down on tedious and time-consuming manual testing. However, if testers don’t understand what automated scripts are testing, they won’t trust their results, and the same testing is done twice – by running automated scripts and then testing the same area manually.

This was my biggest surprise when the company I worked for started implementing test automation. There are several causes of this distrust:

* Quite often, there is no direct correlation between test cases used in manual testing and test cases automated by scripts
* Results of automated test runs are often recorded in cryptic logs. It is difficult to understand which tests have failed and which have completed successfully
* Automation is often done by a separate team that does not work closely with the QA team
* When tests fail, it is very difficult to see the cause. Is it a bug in the application, a problem with the test environment, or an error in the test code?
* And the most obvious reason is that test automation is software. And all good testers have it in their blood - you cannot trust software

To break this distrust you need to:

* Make the QA team a customer of the automation team. Testers should create requirements and set priorities for automation to ensure their buy-in.
* Store manual and automated test cases in the same repository
* Report results of test automation execution in the same way that results of manual testing are reported. You may need to do some extra scripting to achieve this, but it will be worth the effort. When a QA lead runs a test coverage report, it should include results from both automated and manual test runs.
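
The "extra scripting" for unified reporting can be very small. Here is a minimal sketch of merging automated and manual results into one report keyed by a shared test-case ID; all IDs, statuses, and names below are invented for illustration:

```python
# Minimal sketch: combine automated and manual results into one coverage
# report, keyed by a shared test-case ID. All names and data are invented.

def merge_results(manual, automated):
    """Merge two {test_id: status} dicts into a single report.

    A manual run of a test case overrides the automated result, so the
    QA lead sees exactly one status per case.
    """
    report = dict(automated)
    report.update(manual)
    return report

manual = {"TC-101": "pass", "TC-102": "fail"}
automated = {"TC-102": "pass", "TC-203": "pass", "TC-204": "fail"}

for test_id, status in sorted(merge_results(manual, automated).items()):
    print(test_id, status)
```

In practice the two inputs would be parsed from your test-management tool and the automation logs; the point is that the merge itself is cheap once both sides agree on test-case IDs.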

2. High Cost of Maintenance

It is easy to start a test automation project. Just buy one of the GUI tools like Silk Test or Mercury WinRunner, put it in recording mode, click around, and you have something to show for your efforts.

But then your product evolves, and even a small change in the UI can easily break a large number of existing automation scripts. As the UI team rarely talks to the QA automation team, such changes always come as a surprise, and automation developers are caught in a constant fight to keep existing scripts running. I once saw a small change in one of the style sheets break all the automation scripts.

It is possible to get the maintenance cost under control by:

* Making the UI team coordinate application facelifts with test automation maintenance
* Using a proper design for test automation code that avoids code duplication and keeps test business logic independent of UI clicks
* Making it easier to investigate errors
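
Keeping business logic independent of UI clicks usually means something like the page object pattern: tests talk to a small page class, so a UI change is fixed in one place instead of in every script. A minimal sketch (the driver and locators below are stand-ins, not any real tool's API):

```python
# Sketch of the page-object idea. FakeDriver stands in for a real GUI
# driver (Silk Test, WinRunner, etc.); all locators are hypothetical.

class FakeDriver:
    """Records UI actions instead of driving a real application."""
    def __init__(self):
        self.actions = []
    def click(self, locator):
        self.actions.append(("click", locator))
    def type_text(self, locator, text):
        self.actions.append(("type", locator, text))

class LoginPage:
    # If the login button's locator changes, only this line changes --
    # not every test script that logs in.
    USER_FIELD = "id=user"
    LOGIN_BUTTON = "id=login"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user):
        self.driver.type_text(self.USER_FIELD, user)
        self.driver.click(self.LOGIN_BUTTON)

# The test script reads as business logic, with no locators in sight:
driver = FakeDriver()
LoginPage(driver).log_in("alice")
```

When the style sheet incident described above happens, the fix lands in a handful of page classes rather than in every script.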

The main takeaway: never automate more than you can comfortably maintain.

3. Automation is treated as a QA project, not a development project

Most automation projects are initiated by QA teams. But the automation team writes software, and to do it well it needs development skills, the ability to do design, coding standards, and a proper development environment.

The majority of QA teams don’t have such skills. They are staffed, at best, with junior developers, and end up producing a bunch of scripts that are difficult to maintain.

This problem can be mitigated by:

* Augmenting the automation team with developers
* Assigning at least a part-time architect to it
* Defining coding standards
* Using a source control system and keeping the automation code in the same branch as the code that it tests
* Using a data-driven approach for designing test scripts
* Educating developers on the importance of automated testing and on how they can either make it easier or harder. This knowledge can greatly help because developers can often avoid problems early on that would be maintenance nightmares later.
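
The data-driven approach mentioned above simply moves test inputs and expected results out of the script and into a table, so adding a test case means adding a row, not writing code. A minimal sketch, with an invented discount rule standing in for the application under test:

```python
# Data-driven sketch: one generic script iterates over a table of
# (input, expected) pairs. The discount rule is invented for the example.

def discount(order_total):
    """Hypothetical application logic under test."""
    return 0.10 if order_total >= 100 else 0.0

# Each row: (input, expected). This could just as well come from a CSV
# file maintained by manual testers.
test_data = [
    (50, 0.0),
    (99, 0.0),
    (100, 0.10),
    (250, 0.10),
]

failures = [(total, expected, discount(total))
            for total, expected in test_data
            if discount(total) != expected]
print(len(test_data) - len(failures), "passed,", len(failures), "failed")
# -> 4 passed, 0 failed
```

Because the script never changes, testers without development skills can still extend coverage by editing the data table.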

4. Automating the wrong tests

Quite often, automation projects are started ad hoc, without proper planning. As it is impossible to automate everything, the QA team needs to set priorities.

Here are the types of tests that should be at the top of your list:
a) Test cases that are difficult or impossible to test manually. For example, emulation of concurrent activities by multiple users.
b) Tests that are very time-consuming. If you can replace 3 days of manual testing with 4 hours of automation, manual testers will appreciate your efforts.
c) Boring, mindless, repetitive tests that drive testers nuts. Because of their repetitive nature they are usually easy to automate.
d) Tests that exercise complex business logic of the application and don’t rely much on the user interface
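
Type (a) is worth a sketch, since it is the clearest win: no tester can click as two users at once. Below, several threads hammer a toy inventory counter the way concurrent users would hammer an application; the names and the system under test are invented:

```python
# Sketch of emulating concurrent users (test type "a"). A toy inventory
# counter stands in for the real application; all names are hypothetical.

import threading

class Inventory:
    """Toy system under test: a stock counter guarded by a lock."""
    def __init__(self, stock):
        self.stock = stock
        self._lock = threading.Lock()
    def reserve(self):
        with self._lock:
            if self.stock > 0:
                self.stock -= 1
                return True
            return False

inventory = Inventory(stock=100)
results = []

def user_session():
    # Each emulated user tries to reserve 50 items.
    for _ in range(50):
        results.append(inventory.reserve())

threads = [threading.Thread(target=user_session) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 4 users x 50 attempts = 200 tries against 100 items:
print(sum(results), "successful reservations")
# -> 100 successful reservations
```

If the lock were missing, this kind of test would occasionally catch the counter going negative or overselling, a failure mode that manual testing has essentially no chance of reproducing.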

5. Where are Automated Tests when you need them?

Implementation of automated tests usually lags behind development of new functionality. The automation team is reluctant to start test development until new features are fully implemented and stable.

As a result, test automation is delayed until new functionality is finished and tested! You are not getting the benefits of automation when you need them the most – during the rush to get a product out the door – and can only use it after the release, for regression testing.

Several things can be done to minimize this time lag:

* Automated testing below the UI layer can start earlier in the game, as these interfaces usually mature earlier than the UI
* Applying a two-step approach to automating tests of new functionality: implement the first version of the automated test on the first stable build that contains the new functionality. As soon as it is implemented, put the test aside and switch to automating a test for another feature. The main savings come from avoiding script maintenance during the most unstable period – initial bug fixing and UI tune-up. When that is all done, return to the script and finish it up.


Anonymous said...

What about the problem of people writing tests with bugs in them that cause the tests to pass when in fact they should fail?

I suppose that's not an obstacle to starting to do test automation, but it's certainly an obstacle to successful test automation. How do you make sure your tests are valid? Write tests for them? What about those tests?

Stephen Michaud said...

One must treat automated test code as a "first class citizen", so reviews and standards are one checkpoint to good test code. In important or complicated areas, one could even automate tests for the test code, but eventually you have to stop watching the watcher. Automated tests need to be thoroughly validated before being explicitly trusted – another necessary cost of automation.