Thursday, June 21, 2007

Fit = Acceptance Testing for Anyone

One thing I’ve learned in software: business people and developers speak different languages. My Stanford friends like to lump people into fuzzies and techies (a fuzzy being someone who doesn’t know much about computers), and the generalization has some merit – at times it can feel like you’re speaking to another species (and buzzword-spouting executives are from another planet!).

But I’ve been using a solution called Fit, or the Framework for Integrated Test, that bridges the gap between fuzzies and techies. In Fit, product managers supplement prose requirements with HTML or Excel Fit tables that express concrete examples. These tables can easily be written by fuzzies in an HTML WYSIWYG editor or in Microsoft Word or Excel. There is something almost mathematical about writing out examples in this way, forming what domain-driven design philosophers might dub a Ubiquitous Language. These examples are both requirements and tests, saving you a step (and an opportunity for business needs to be lost in translation).

How easy is it? Well, let’s go through an example (from the Fit bible, “Fit for Developing Software”, by Fit pioneers Rick Mugridge and Ward Cunningham).

Business rule: A 5% discount is provided whenever the total purchase is greater than $1,000.
Fit test table (shown here as plain text; this is the book’s CalculateDiscount example, with illustrative values):
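    CalculateDiscount
    amount      discount()
    0.00        0.00
    100.00      0.00
    999.00      0.00
    1000.00     0.00
    1010.00     50.50
    1100.00     55.00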

Kinda making sense?
Well, row one names the fixture, or the code that hooks the test into the system under test.
Row two has the headers, which label the two columns. The amount column acts as a given column, specifying the test inputs (amount = purchase price in the business rule). The discount column acts as a calculated value column, specifying the expected output of the calculation. Calculated value columns are designated by the parentheses after the column name. The rest of the rows are effectively our test cases, feeding a variety of inputs into the system.
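
To make row one concrete: with the Java version of Fit, a minimal fixture for this table could look something like the sketch below. The business rule is inlined to keep the sketch self-contained; in real life the fixture would call into the system under test.

    import fit.ColumnFixture;

    // Sketch of a Fit column fixture for the table above. The public
    // field receives the "amount" given column; the method supplies the
    // "discount()" calculated value column.
    public class CalculateDiscount extends ColumnFixture {
        public double amount;

        public double discount() {
            // Inlined stand-in for the real application call.
            return amount > 1000 ? amount * 0.05 : 0;
        }
    }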

Once a developer has written a fixture to hook this test into the system, the Fit test can be run, producing a report (see my report below). The calculated value column is colour-coded in the report: green = test success, red = test failure. Failed tests list both the expected value and the actual value calculated by the system, so you can investigate the discrepancy. In this case, the developers seem to have misinterpreted the requirement to provide the discount when the purchase is greater than $1,000.
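
To produce such a report with the Java distribution of Fit, a run looks something like this (file names illustrative; the compiled fixture class must also be on the classpath):

    java -cp fit.jar:. fit.FileRunner discount-test.html discount-report.html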


The value of Fit lies not only in more precise communication of requirements, but also in automated acceptance testing. With Fit, your requirements can be tested regularly to tell developers whether new development meets expectations, and whether refactoring or new development has broken existing functionality.

Well, that’s all for now. To learn more about Fit, check out http://fit.c2.com/

Thursday, May 24, 2007

Linda Rising on Collaboration, Bonobos and The Brain

Linda Rising made a great impression at the Agile Vancouver conference. She is an excellent presenter.

In her interview with InfoQ she draws some interesting parallels between Agile teams and groups of our closest relatives - chimpanzees and bonobos.
It turns out that these ape species have very different social lives.
Linda believes that humans are much closer to bonobos than to chimpanzees, and that this is one reason Agile methodologies succeed.

She says,
"...suppose you have 2 groups, one group of chimpanzees and one group of bonobos, and you throw a bunch of bananas into the middle of the group. What would happen in the chimpanzees group is that everyone will get very excited and there will be a huge battle, and the alpha male and his supporters will physically beat up on everybody else, and they will get the bananas.

...Suppose we throw a bunch of bananas into the middle of the bonobos. They also get excited about bananas, and they begin jumping up and down, but their immediate reaction would be to have sex. And everyone would have sex with everyone. Males with males, males with females, females with males, young with old. And there will be a lot of sex and then everyone would share the bananas."

Here is the link:
http://www.infoq.com/interviews/linda-rising-agile-bonobos


Tuesday, May 1, 2007

Five Obstacles on the Way to Successful Test Automation

1. Manual Testers don’t trust the results produced by automated testing

The most obvious reason for starting a test automation project is to cut down on tedious and time-consuming manual testing. However, if testers don’t understand what automated scripts are testing, they won’t trust their results, and the same testing is done twice – by running automated scripts and then testing the same area manually.

This was my biggest surprise when the company I worked for started implementing test automation. There are several causes of this distrust:

* Quite often, there is no direct correlation between test cases used in manual testing and test cases automated by scripts
* Results of automated test runs are often recorded in cryptic logs. It is difficult to understand which tests have failed and which have completed successfully
* Automation is often done by a separate team that does not work closely with the QA team
* When tests fail, it is very difficult to see the cause. Is it a bug in the application, a problem with the test environment, or an error in the test code?
* And the most obvious reason: test automation is itself software, and all good testers have it in their blood - you cannot trust software

To break this distrust you need to:

* Make the QA team a customer of the automation team. Testers should create requirements and set priorities for automation to ensure their buy-in.
* Store manual and automated test cases in the same repository
* Report results of test automation execution in the same way that results of manual testing are reported. You may need to do some extra scripting to achieve this (a minimal sketch follows this list), but it will be worth the effort. When a QA lead runs a test coverage report, it should include results from both automated and manual test runs.
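
That extra scripting can be modest. Here is a toy sketch, assuming your automation emits a simple "TEST-ID PASS|FAIL" log and your test repository can import flat comma-separated rows (all names and formats here are illustrative):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Toy glue script: rewrites an automation run log in the same flat
    // format manual results are recorded in, so one coverage report can
    // include both automated and manual runs.
    public class ResultMerger {
        public static void main(String[] args) throws IOException {
            for (String line : Files.readAllLines(Paths.get("automation-run.log"))) {
                String[] parts = line.trim().split("\\s+");
                if (parts.length < 2) continue; // skip noise lines
                String status = parts[1].equals("PASS") ? "Passed" : "Failed";
                // Same columns a manual tester would fill in by hand.
                System.out.printf("%s,%s,automated%n", parts[0], status);
            }
        }
    }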

2. High Cost of Maintenance

It is easy to start a test automation project. Just buy one of the GUI tools like SilkTest or Mercury WinRunner, put it in recording mode, click around, and you have something to show for your efforts.

But then your product evolves, and even a small change in the UI can easily break a large number of existing automation scripts. As the UI team rarely talks to the QA automation team, such changes always come as a surprise, and automation developers are caught in a constant fight to keep existing scripts running. I once saw a small change in one of the style sheets break all of our automation scripts.

It is possible to get the maintenance cost under control by:

* Making the UI team coordinate application face lifts with test automation maintenance
* Using proper design for test automation code that avoids code duplication and keeps test business logic independent of UI clicks (see the sketch after this list)
* Making it easier to investigate errors
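
On the design point, one common shape for that separation is sketched below (all names are made up): test business logic calls a page-level API, and the raw UI clicks live in one place, so a UI change is fixed once instead of in every script.

    // Stand-in for whatever GUI tool drives your application.
    interface UiDriver {
        void type(String field, String text);
        void click(String control);
    }

    // Page-level API: scripts say loginAs(...), never raw clicks.
    class LoginPage {
        private final UiDriver ui;

        LoginPage(UiDriver ui) { this.ui = ui; }

        void loginAs(String user, String password) {
            ui.type("username", user);
            ui.type("password", password);
            ui.click("login");
        }
    }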

The main takeaway: never automate more than you can comfortably maintain.

3. Automation is treated as QA, not as a development project

Most automation projects are initiated by QA teams. But the automation team writes software, and, to do it well, it needs to have development skills, be able to do design, have coding standards and a proper development environment.

The majority of QA teams don’t have such skills. They are staffed, at best, with junior developers, and end up producing a bunch of scripts that are difficult to maintain.

This problem can be mitigated by:

* Augmenting the automation team with developers
* Assigning at least a part-time architect to it
* Defining coding standards
* Using a source control system and keeping the automation code in the same branch as the code that it tests
* Using a data-driven approach for designing test scripts (see the sketch after this list)
* Educating developers on the importance of automated testing and on how they can either make it easier or harder. This knowledge can greatly help because developers can often avoid problems early on that would be maintenance nightmares later.
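
To illustrate the data-driven point with a toy example (the names and the inlined business rule are mine, not from any particular tool): adding a test case means adding a data row, not writing a new script.

    // Toy data-driven script: one generic loop runs every data row.
    public class DiscountDataDrivenTest {
        public static void main(String[] args) {
            double[][] cases = {
                // amount, expected discount
                {  999.00,  0.00 },
                { 1000.00,  0.00 },
                { 1100.00, 55.00 },
            };
            for (double[] c : cases) {
                double actual = discountFor(c[0]);
                boolean pass = Math.abs(actual - c[1]) < 0.001;
                System.out.printf("amount=%.2f expected=%.2f actual=%.2f %s%n",
                        c[0], c[1], actual, pass ? "PASS" : "FAIL");
            }
        }

        // Stand-in for a call into the real application.
        static double discountFor(double amount) {
            return amount > 1000 ? amount * 0.05 : 0;
        }
    }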

4. Automating the wrong tests

Quite often, automation projects are started ad hoc, without proper planning. As it is impossible to automate everything, the QA team needs to set priorities.

Here are the types of tests that should be at the top of your list:
a) Test cases that are difficult or impossible to test manually. For example, emulation of concurrent activities by multiple users.
b) Tests that are very time-consuming. If you can replace 3 days of manual testing with 4 hours of automated runs, manual testers will appreciate your efforts.
c) Boring, mindless, repetitive tests that drive testers nuts. Because of their repetitive nature, they are usually easy to automate.
d) Tests that exercise complex business logic of the application and don’t rely much on the user interface.

5. Where are Automated Tests when you need them?


Implementation of automated tests usually lags behind development of new functionality: the automation team is reluctant to start test development until new features are fully implemented and stable.

As a result, test automation is delayed until the new functionality is finished and tested! You are not getting the benefits of automation when you need them the most – during the rush to get a product out the door – and can only use it after the release, for regression testing.

Several things can be done to minimize this time lag:

* Automated testing below the UI layer can be done earlier in the game, as these interfaces usually mature earlier than the UI (see the sketch after this list)
* Applying a two-step approach to automating testing of new functionality. Implement the first version of the automated test on the first stable build that contains the new functionality. As soon as it is implemented, put the test aside and switch to automating a test for another feature. The main savings come from avoiding script maintenance during the most unstable period – initial bug fixing and the UI tune-up phase. When it is all done, return to the script and finish it up.
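
As a sketch of the below-the-UI idea (the service class here is hypothetical, embedded to keep the example self-contained): the test drives business logic through the layer beneath the UI, so it can be written while the screens are still in flux.

    import org.junit.Assert;
    import org.junit.Test;

    // Hypothetical service-layer test: exercises the discount rule through
    // the API beneath the UI, which usually stabilizes before the screens.
    public class DiscountServiceTest {

        // Stand-in for the application's real service class.
        static class DiscountService {
            double discountFor(double amount) {
                return amount > 1000 ? amount * 0.05 : 0;
            }
        }

        @Test
        public void noDiscountAtExactlyOneThousand() {
            Assert.assertEquals(0.0, new DiscountService().discountFor(1000.00), 0.001);
        }

        @Test
        public void fivePercentDiscountAboveOneThousand() {
            Assert.assertEquals(55.0, new DiscountService().discountFor(1100.00), 0.001);
        }
    }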


Tuesday, March 27, 2007

Agile Quality: Control vs. Assurance vs. Analysis

Questions about how testing fits into agile development practices are usually answered in two ways that are very unhelpful for QA professionals:

1. Agile development eliminates the need for QA, developers test it all themselves.
2. QA has to work harder to keep up with development while maintaining their traditional methodologies and test approaches.

There is truth and falsehood in both statements. When I am asked how to "fit QA in", I like to frame my answer by defining three types of testing – Quality Control, Quality Assurance, and Quality Analysis – and then describe how I think each fits into most agile processes.

Quality Control

What a lot of people think of as testing is what I call Quality Control. Think of the guy sitting in the beer plant (or girl, if you are a fan of Laverne and Shirley) watching the bottles go by, making sure there is nothing wrong with them before they get capped: this is quality control. In other words, you are inspecting the final product to ensure it meets the criteria for an acceptable product. Within any software project, unit testing, peer review, and regression testing are all forms of quality control.

In an agile project, these tasks need to be performed on a continuous basis. Unit tests need to be automated and made part of a continuous integration strategy, and peer reviews can be literally continuous, in the case of pair programming, or mandated on a regular basis in the form of diff reviews before check-ins and code reviews as part of doneness criteria. There is also no controversy in stating that regression testing needs to be automated and should be run as often as possible.

Ideally, regression tests should be written in such a way that they are maintained along with the code. Using FIT (the Framework for Integrated Test) is one good way to keep the tests in sync with the code. If the suite of FIT regression tests is run with every build, any test that now fails needs to be investigated to see whether it was missed in refactoring or an actual bug was introduced. Though the cost of maintenance is not zero, it is lower, and there is next to no chance that the automated tests will be abandoned.

As you can see, the ownership of quality control within the software product moves more onto the shoulders of the developers. This is as it should be in an agile project where the developer has the responsibility of meeting the customers' requirements, which usually implicitly include no regressions.

Quality Assurance

"Quality Assurance is a part and consistent pair of quality management proving fact-based external confidence to customers and other stakeholders that a product meets needs, expectations, and other requirements. QA assures the existence and effectiveness of procedures that attempt to make sure - in advance - that the expected levels of quality will be reached" Wikipedia

Within an agile project, the customer is constantly involved and informed. As such, there is no real need for "fact-based external confidence" building. Another best practice in agile development is to ensure that the acceptance criteria for all requirements are documented and well understood during the requirements gathering and iteration planning stages. Ideally, the validation that the implementation meets the acceptance criteria is also automated (again, FIT is a great tool for this automated validation).

So, again, it is the responsibility of the product owner, in creating requirements, and of the customer, working with the developers, to assure that "expected levels of quality" are reached.

I know I have a number of nervous testers and QA people at this point in reading, but you had to know that the main point was coming last.

Quality Analysis

So far I have mentioned these remarkably well written requirements and acceptance criteria in a way that might suggest they magically appear. Well, they do not, and they are much too critical to the success of an agile project to neglect. Here is where an experienced tester can contribute greatly to an agile team. A product owner or customer will provide vision in the form of high-level requirements and basic acceptance criteria. An experienced tester can look at these criteria and, with an understanding of the existing system and/or the technologies involved, expand and elaborate on them. An especially experienced tester will also be able to suggest missing requirements and non-functional requirements that the customer/product owner has not had the time or experience to consider. A good example of how the acceptance criteria could be augmented is adding boundary conditions to the acceptance tests (e.g. adding a check for maximum field lengths to FIT tables). For instance, the discount table from the Fit post above could gain boundary rows like these (values illustrative):
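    amount      discount()
    999.99      0.00
    1000.00     0.00
    1001.00     50.05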

The tester, having been freed from a lot of tedious manual control and assurance testing, can then provide value by performing exploratory testing – using a tester's natural ability to ferret out instabilities in the system, to look at the system from a high level, and to turn things on their side as only a true tester/user can.

Conclusion

In conclusion, I feel the role of a tester or QA person in agile projects is more of an Analyst role. Call it what you will – quality analyst, requirements analyst, system analyst, etc. An experienced tester can fill in those technical requirements that are missed by the customer, with their high-level perspective, and also missed by the developer, with their focused perspective. The blend of technical skills, customer perspective, and user experience makes the experienced tester/QA person ideal for requirements expansion and elaboration, and provides a good career path into product ownership/management.