Thursday, May 24, 2007

Linda Rising on Collaboration, Bonobos and The Brain

Linda Rising made a great impression at the Agile Vancouver conference. She is an excellent presenter.

In her interview with InfoQ she draws some interesting parallels between Agile teams and groups of our closest relatives - chimpanzees and bonobos.
It turns out that these two ape species have very different social lives.
Linda believes that humans are much closer to bonobos than to chimpanzees, and that this is one reason Agile methodologies succeed.

She says,
"...suppose you have 2 groups, one group of chimpanzees and one group of bonobos, and you throw a bunch of bananas into the middle of the group. What would happen in the chimpanzees group is that everyone will get very excited and there will be a huge battle, and the alpha male and his supporters will physically beat up on everybody else, and they will get the bananas.

...Suppose we throw a bunch of bananas into the middle of the bonobos. They also get excited about bananas, and they begin jumping up and down, but their immediate reaction would be to have sex. And everyone would have sex with everyone. Males with males, males with females, females with males, young with old. And there will be a lot of sex and then everyone would share the bananas."

Here is the link:
http://www.infoq.com/interviews/linda-rising-agile-bonobos


Tuesday, May 1, 2007

Five Obstacles on the Way to Successful Test Automation

1. Manual Testers don’t trust the results produced by automated testing

The most obvious reason for starting a test automation project is to cut down on tedious and time-consuming manual testing. However, if testers don't understand what the automated scripts are testing, they won't trust their results, and the same testing ends up being done twice: once by running automated scripts and again by testing the same area manually.

This was my biggest surprise when the company I worked for started implementing test automation. There are several causes of this distrust:

* Quite often, there is no direct correlation between the test cases used in manual testing and the test cases automated by scripts
* Results of automated test runs are often recorded in cryptic logs, so it is difficult to tell which tests have failed and which have completed successfully
* Automation is often done by a separate team that does not work closely with the QA team
* When tests fail, it is very difficult to see the cause. Is it a bug in the application, a problem with the test environment, or an error in the test code?
* And the most obvious reason is that test automation is software, and all good testers have it in their blood: you cannot trust software

To break this distrust you need to:

* Make the QA team a customer of the automation team. Testers should create requirements and set priorities for automation to ensure their buy-in.
* Store manual and automated test cases in the same repository
* Report the results of automated test execution in the same way that the results of manual testing are reported. You may need to do some extra scripting to achieve this, but it will be worth the effort. When a QA lead runs a test coverage report, it should include results from both automated and manual test runs (a sketch of such merged reporting follows this list).
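
As a rough illustration of that last point, here is a minimal Python sketch of merged reporting. Everything in it - the `TestResult` shape, the shared case IDs, the CSV layout - is a hypothetical stand-in for whatever repository and report format your QA team already uses.

```python
# Minimal sketch of unified reporting: automated results are keyed by the
# same test case IDs the QA team uses for manual tests, so one coverage
# report covers both. All names here (TestResult, the CSV layout) are
# hypothetical; adapt them to your actual test case repository.
import csv
from dataclasses import dataclass

@dataclass
class TestResult:
    case_id: str      # ID shared with the manual test case repository
    executed_by: str  # "manual" or "automated"
    status: str       # "pass" or "fail"
    notes: str = ""

def merge_results(manual: list, automated: list) -> list:
    """Combine both result sets; automated runs are reported exactly
    like manual ones, so a QA lead sees a single coverage picture."""
    return sorted(manual + automated, key=lambda r: r.case_id)

def write_coverage_report(results: list, path: str) -> None:
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Case ID", "Executed By", "Status", "Notes"])
        for r in results:
            writer.writerow([r.case_id, r.executed_by, r.status, r.notes])

if __name__ == "__main__":
    manual = [TestResult("TC-101", "manual", "pass")]
    automated = [TestResult("TC-102", "automated", "fail", "login button not found")]
    write_coverage_report(merge_results(manual, automated), "coverage_report.csv")
```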

2. High Cost of Maintenance

It is easy to start a test automation project. Just buy one of the GUI tools like Silk Test or Mercury WinRunner, put it in recording mode, click around, and you have something to show for your efforts.

But then your product evolves, and even a small change in the UI can break a large number of existing automation scripts. Since the UI team rarely talks to the QA automation team, such changes always come as a surprise, and automation developers are caught in a constant fight to keep existing scripts running. I once saw a small change in one of the style sheets break all of our automation scripts.

It is possible to get the maintenance cost under control by:

* Making the UI team coordinate application facelifts with test automation maintenance
* Using a proper design for test automation code that avoids code duplication and keeps test business logic independent of UI clicks (see the sketch after this list)
* Making it easier to investigate errors
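
To make the second point concrete, here is a small Python sketch in the spirit of the Page Object pattern: all widget knowledge lives in one class, and the test reads as business logic. The `browser` driver API and the selectors are hypothetical stand-ins for whatever GUI tool you use.

```python
# Sketch of keeping test business logic independent of UI clicks.
# When the UI (or a style sheet) changes, only the page object is touched;
# the tests themselves keep running unmodified.

class LoginPage:
    """All knowledge of the login screen's widgets lives here."""
    USERNAME_FIELD = "id=username"   # if the UI changes, fix it in one place
    PASSWORD_FIELD = "id=password"
    SUBMIT_BUTTON = "id=submit"

    def __init__(self, browser):
        self.browser = browser  # hypothetical GUI-tool driver

    def log_in(self, user: str, password: str) -> None:
        self.browser.type(self.USERNAME_FIELD, user)
        self.browser.type(self.PASSWORD_FIELD, password)
        self.browser.click(self.SUBMIT_BUTTON)

# The test reads as business logic and never mentions widgets,
# so a UI facelift cannot break it directly.
def test_valid_user_reaches_dashboard(browser):
    LoginPage(browser).log_in("qa_user", "secret")
    assert browser.current_page_title() == "Dashboard"
```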

The main takeaway: never automate more than you can comfortably maintain.

3. Automation is treated as QA, not as a development project

Most automation projects are initiated by QA teams. But the automation team writes software, and to do it well it needs development skills, the ability to do design, coding standards, and a proper development environment.

The majority of QA teams don’t have such skills. They are staffed, at best, with junior developers, and end up producing a bunch of scripts that are difficult to maintain.

This problem can be mitigated by:

* Augmenting the automation team with developers
* Assigning at least a part-time architect to it
* Defining coding standards
* Using a source control system and keeping the automation code in the same branch as the code that it tests
* Using a data-driven approach for designing test scripts (see the sketch after this list)
* Educating developers on the importance of automated testing and on how they can make it either easier or harder. This knowledge can greatly help because developers can often avoid, early on, problems that would become maintenance nightmares later.
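
As an illustration of the data-driven point, here is a minimal Python sketch using the standard unittest module: the test logic is written once, and the cases live in a data table that testers can extend without touching code. The `calculate_discount` function is a hypothetical stand-in for the logic under test.

```python
# Minimal sketch of a data-driven test: one test method, many data rows.
import unittest

# Each row: (order_total, customer_tier, expected_discount)
DISCOUNT_CASES = [
    (100.0, "regular", 0.0),
    (100.0, "gold",    10.0),
    (500.0, "gold",    75.0),
]

def calculate_discount(total: float, tier: str) -> float:
    """Hypothetical stand-in for the real business logic being tested."""
    rate = {"regular": 0.0, "gold": 0.10}[tier]
    return total * rate * (1.5 if total >= 500 else 1.0)

class TestDiscounts(unittest.TestCase):
    def test_discount_table(self):
        # Adding coverage means adding a row to DISCOUNT_CASES, not code.
        for total, tier, expected in DISCOUNT_CASES:
            with self.subTest(total=total, tier=tier):
                self.assertAlmostEqual(calculate_discount(total, tier), expected)

if __name__ == "__main__":
    unittest.main()
```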

4. Automating the wrong tests

Quite often, automation projects are started ad hoc, without proper planning. As it is impossible to automate everything, the QA team needs to set priorities.

Here are the types of tests that should be at the top of your list:
a) Test cases that are difficult or impossible to test manually. For example, emulating concurrent activity by multiple users (see the sketch after this list).
b) Tests that are very time-consuming. If you can replace 3 days of manual testing with 4 hours of automation, manual testers will appreciate your efforts.
c) Boring, mindless, repetitive tests that drive testers nuts. Because of their repetitive nature, they are usually easy to automate.
d) Tests that exercise the complex business logic of the application and don't rely much on the user interface.
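
To make item a) concrete, here is a rough Python sketch that emulates concurrent users with a thread pool. The endpoint and the success criterion are hypothetical; the shape of the test is the point - one script does what no room full of manual testers can.

```python
# Sketch of a test that is impractical to run manually: many simulated
# users hitting the same endpoint at once. URL and expectations are
# hypothetical placeholders.
import concurrent.futures
import urllib.request

URL = "http://localhost:8080/api/checkout"  # hypothetical endpoint
CONCURRENT_USERS = 50

def one_user_session(user_id: int) -> int:
    """Simulate a single user's request; return the HTTP status code."""
    with urllib.request.urlopen(URL, timeout=10) as resp:
        return resp.status

def test_concurrent_checkout():
    # Fire all simulated users at once and collect their results.
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        statuses = list(pool.map(one_user_session, range(CONCURRENT_USERS)))
    # Every simulated user should succeed even under contention.
    ok = statuses.count(200)
    assert ok == CONCURRENT_USERS, f"only {ok}/{CONCURRENT_USERS} requests succeeded"
```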

5. Where are Automated Tests when you need them?

Implementation of automated tests usually lags behind the development of new functionality. The automation team is reluctant to start test development until new features are fully implemented and stable.

As a result, test automation is delayed until the new functionality is finished and tested! In that case, you don't get the benefits of automation when you need them the most: during the rush to get a product out the door. You can only use it after the release, for regression testing.

Several things can be done to minimize this time lag:

* Automated testing below the UI layer can be done earlier in the game, as these interfaces usually mature earlier than the UI (a sketch follows after this list)
* Applying a two-step approach to automating tests for new functionality: implement the first version of the automated test on the first stable build that contains the new functionality. As soon as it is implemented, put the test aside and switch to automating a test for another feature. The main savings come from avoiding script maintenance during the most unstable period: the initial bug-fixing and UI tune-up phase. When it is all done, return to the script and finish it up.
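
As a sketch of the first point, here is what a below-the-UI test might look like in Python: it talks straight to a service API, so it can be written as soon as that interface stabilizes, long before the screens settle down. The endpoint, payload, and response fields are all hypothetical.

```python
# Sketch of testing below the UI layer: exercise the service API directly.
# No screens involved, so this test survives every UI facelift.
import json
import urllib.request

API = "http://localhost:8080/api/orders"  # hypothetical service endpoint

def create_order(item: str, quantity: int) -> dict:
    """POST an order to the service and return the parsed JSON response."""
    payload = json.dumps({"item": item, "quantity": quantity}).encode()
    req = urllib.request.Request(API, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def test_order_is_created_server_side():
    # Can be automated as soon as the API is stable, long before the
    # order screens are finished.
    order = create_order("widget", 3)
    assert order["quantity"] == 3
    assert order["status"] == "created"
```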