Monday, November 5, 2007

Fit and evolving domain model

In his recent blog post, Scott Bellware expresses skepticism about using Domain Specific Language tools like FitNesse for applications that are at the very beginning of developing their model and domain-specific language. He argues that for applications with an unstable domain model, the maintenance overhead is too high.

http://codebetter.com/blogs/scott.bellware/archive/2007/10/28/170346.aspx

Saturday, September 15, 2007

Test First, Test Driven Development and Distributed Agile

The more I deal with severely distributed teams (it's a long way from Vancouver to Siberia!), the more I have come to view Test First and Test Driven Design/Development as a must for success. It is an uncontested fact that the lack of co-location is the largest downside to a distributed agile team. You can view the communication link within an agile team as a water system.

For a co-located team, the communication is a vast pool of knowledge that everyone can dip into whenever they please. As soon as you divide the team up, even if it is from one room to another, you need to put a pipe in place between these pools. The further apart you place the teams, the narrower the pipe becomes.

Now one needs to ask: how best can we fill this pipe? Are there any ways we can shore up the pipe with other means of communication? This is where I see Test First, TDD, executable requirements, Fit - whatever you want to call them - coming in: activities that help the team communicate the requirements and validate that the design and development underway meet those requirements, without the constant need to fill up the pipe with clarifying questions.

Now pardon my belaboured use of the plumbing analogy (can't help it, I am an engineer!), but after an initial flood of communication clarifying tests and requirements at the beginning of the sprint/iteration/whatever (we need to start to have a unified language for these things ... a blog post for another time), these agreed-upon, well-understood, and continuously executed tests leave the pipe free to be used to communicate about creative problem solving and truly profound questions about requirements, not questions like "should the discount be applied at $50 or after $50?"

The Break Between Sprints

One element of agile development that is often forgotten in the heady drive to increase velocity is to truly take a break and recharge those creative juices. This struck me at a concert during the aptly titled song "White and Nerdy" by none other than Weird Al.

We all have to remember that you must play hard to work hard. Who better than Weird Al to let you laugh at him, the world, and most importantly yourself.

Wednesday, September 12, 2007

Weinberg on Agile

In the article 5Qs on Agile with Gerald M. Weinberg, Weinberg tries to foresee the agile movement's fate:

Q4: What is the future of Agile?
First we will drop the capital A. Then we will drop the term "agile" altogether. Agile methods will be successful if and when we stop seeing them as anything other than normal, sensible, professional methods of developing software.

Sounds like a plan, doesn't it?

Thursday, August 23, 2007

Agile 2007 Conference Notes from a Newbie

What a week. I’ve never done booth duty before, and for an introverted product development type, it’s exhausting. I tried to flee to attend as many sessions as possible, mostly product management and Fit-related sessions. So here goes with my notes from the conference:

Monday

Did some booth setup and attended a session by Naresh Jain of ThoughtWorks and Micah Martin (yes, Uncle Bob’s son) on Acceptance Test Driven Development, where they described a lot of patterns and best practices in ATDD and using FitNesse. Some of the main ideas: write independent tests, don’t put everything in one big test, clean up and tear down after your tests, avoid repetition by using includes, and don’t have too long a feedback loop when using a build tool like CruiseControl – try out open source products like ProTest to get feedback on your check-in within 5 minutes.

In the evening we checked out the ice-breaker in the ballroom and had a good chat with Rick Mugridge, the creator of FitLibrary. He was a walking booth with his ZiBreve shirt and hat – the shirt even had screenshots.

Tuesday

From my booth I could see a bit of the keynote by Susan Ershler. I spoke to her afterwards and got her book for my husband – she’s pretty inspiring. She and her husband were the first couple to climb the Seven Summits, including Everest. On her quest up Everest she had to try a couple of times, going up and down between camps, so the basic idea was: sometimes, to reach new heights, you have to do a bit of backtracking and struggle long and hard.

Then I worked the booth. A fellow Canadian I met in the café suggested I attend the session on Google’s Agile adoption, and I wish I had – I heard it was excellent.


Wednesday

I was up bright and early to check out Rick Mugridge and David Hussman speaking about Executable Project Documents (basically about Fit). We spent some time writing up user stories and Fit tests. David was funny and it was well presented, but it was basically the same content as in the Fit book, “Fit for Developing Software.”

Then in the afternoon I checked out the leadership symposium in the South Ballroom. I saw two good talks: one on Agile Memes (successful Agile patterns, basically) by Steve Baker and Joseph Thomas from DTE Energy, and one on Enterprise Agile at Yahoo by Gabrielle Benefield. Steve described how they got their big energy company using Agile by slowly phasing it in, trying it on pilot teams, and getting teams to volunteer to try out Scrum. They did their Scrum meetings in the cafeteria, were open to people listening in, and made it fun by dubbing it “Big Pop Time” (someone started a trend of getting a huge pop and making Scrum a fun break in the day). Gabrielle described how she similarly piloted Scrum at Yahoo and got mass appeal by giving out cute Agile t-shirts, hiring very outgoing Agile coaches, and making it a de facto, almost subversive process rather than a mandated one. She basically made Scrum and Agile cool, and it spread around Yahoo.

Thursday

I went to Gabrielle Benefield and Michael Holzer’s session on User Centred Design for Agile projects. They described their unusual research methods for finding out what users, in their case teenagers, wanted out of products, and for creating user profiles to help understand their target market.

In “Agile Ghettos or Thriving Communities”, Chris Avery and Michael England presented the idea that Agile methodologies should move beyond product development departments up to the executive level. There is a divide between these levels, and having a scrum of scrums and more collaborative work with executives could bring productivity gains. They described many case studies of this working at unique companies, including W.L. Gore (they make Gore-Tex).

And Friday we flew home. It was an exhausting week. I did a lot of demos of FITpro and got some feedback, but that will be the topic of another blog post. See you at Agile 2008 in Toronto!

Thursday, August 16, 2007

Affirmations

Every so often I come across something in the blogosphere that makes me happy:

I don't know about you, but I use [the refactoring step] to add a bit of Beauty to my code. Thanks to TDD, my code already has Truth. It works. Now I make sure it isn't just good, it's beautiful. Gorgeous. The best damned code I know how to write... It doesn't sound like much, but for me, it's the final touch. I write good code. I take pride in my work. When you look at my code, you can tell.

- adapted from James Shore's post Truth and Beauty

Emerging design

If development is the process of introducing dependencies, then design is the art of managing those dependencies.

So, design will be effective when applied after development has begun. That is, design emerges from the swamp of code.

This type of emergent design is ideal in an agile environment. With techniques such as refactoring, unit testing, and pair programming at our disposal, all we need to do is periodically look at our work from a different perspective and have the courage to acknowledge its shortcomings, and we will be in a great position from which to emerge a better design.
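To make that concrete, here is a hypothetical sketch (all names invented for illustration) of the kind of small, test-protected refactoring through which a design emerges: a report class that began with a hard-wired database dependency has that dependency inverted behind an interface, a need first noticed while writing a unit test.

// Before: Report constructs its own data source, so every caller
// (and every test) drags the database along with it.
class SalesDatabase {
    double totalSales() { return 42.0; } // stand-in for a real query
}

class Report {
    String render() {
        SalesDatabase db = new SalesDatabase();
        return "Total: " + db.totalSales();
    }
}

// After: the dependency is named and managed behind a small interface,
// so a test can supply fake data and callers no longer know about the database.
interface SalesSource {
    double totalSales();
}

class RefactoredReport {
    private final SalesSource source;

    RefactoredReport(SalesSource source) {
        this.source = source;
    }

    String render() {
        return "Total: " + source.totalSales();
    }
}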

Here are some tools that I've found helpful for emerging design:

Tuesday, August 7, 2007

How pragmatic are we?

In his Let's be pragmatic blog post, Jason Yip asks a single simple question:
When you say "let's be pragmatic", do you mean short-term pragmatic? Or do you mean long-term pragmatic, which means paying attention to sticking to values and principles?

The answer is obvious but unfortunately different for different people.
All members of a cohesive team should give the same answer to the question. Or should they?

Thursday, July 26, 2007

Distributed Agile

The question was asked on LinkedIn: how do companies handle Agile development in a distributed environment with outsourcing vendors? While team co-location is the ideal (I would say the easiest) environment for Agile development, it is not an option for many companies. Even if you do not outsource, chances are that your development team is spread out across multiple locations. Acquisitions are way too common in high-tech.

So if we have no say in a team’s location, the question really is: which is better - distributed Agile or distributed waterfall?

Personally, I got the answer to this question after implementing Agile a couple of years ago with development teams in Vancouver and Boston. It worked much better than the plan-driven approach we had used in the past.

You do need to modify some Agile practices, of course, by moving Agile process artifacts from their physical environment (whiteboards, sticky notes, …) to an online one. I used Microsoft SharePoint at the time. You can also look at commercial tools from Rally or VersionOne. The way you communicate will change as well, as you supplement face-to-face meetings with IM, VOIP, videoconferencing, and wikis.

Agile techniques actually help you to address and mitigate the usual challenges of a distributed / off-shored project: lack of visibility on project status, delay in feedback cycle, loss of business and technical contexts, decrease in communication bandwidth, higher documentation overhead, and mistrust.

Short iterations, with a product demo and retrospective at the end, increase visibility of the project status and provide instant feedback as well as an opportunity for process adjustment. Customer involvement facilitates both a shared understanding of business context and communication between business people and the development teams.

Frequent team meetings help to build trust and improve communication on all levels and across different groups. Continuous test and integration cycles tell you where you are in the project.

Functional Test Driven Development helps in removing ambiguity from requirements and clearly communicating them to distributed teams. Fit (Framework for Integrated Testing) is a great open source tool that we are using (http://fit.c2.com/).

In my current company, Luxoft Canada, we successfully use Distributed Agile in a number of projects with teams located in the US, Canada, and Russia.

If you want to learn more about Distributed Agile, read Dean Leffingwell’s book “Scaling Software Agility: Best Practices for Large Enterprises”. Dean is a well-known name in the software industry and was in charge of developing RUP at Rational Software.

He writes that “at scale, all agile development is distributed development. … even the largest or most distributed teams can achieve the faster time to market, higher productivity, and higher team morale that the agile methods provide.”

In the book, Dean describes a case study of a Denver company called Ping Identity that is using Scrum with Luxoft’s team in Moscow.

Agile is gaining further acceptance in the world of outsourcing, and this trend is going to continue. Watch for the new tools and techniques that are emerging to make it easier and more efficient.


Friday, July 20, 2007

On engineering practices in Scrum

The following couple of articles helped me to better understand the nature of Scrum.

When is Scrum not Scrum? by Tobias Mayer and Differences Between Scrum and Extreme Programming by Mike Cohn

Both articles pay special attention to the use of engineering practices in Scrum. Both agree on the importance of the practices and their value for the development process. The more radical Tobias (he was expelled from the Scrum Alliance) thinks they should be mandatory. Mike takes a less categorical approach.

This is the bullet point list from "When is Scrum not Scrum?":

1. Product Owners are part of the team
2. Two-week Sprints
3. Tasks are not measured in hours
4. Use of Taskboards rather than spreadsheets
5. Backlogs on the wall
6. Estimation Meetings
7. Insistence on Agile Engineering practices
8. The Scrum Master role is not always necessary

Here is an excerpt from Mike's article:

"I find true XP to be a small target off in the distance. If a team can aim at that and hit the bull’s eye, wonderful. If not, however, they are likely hacking (e.g., refactoring without any automated testing or TDD). Scrum is a big bull’s eye that on its own brings big improvements simply through the additional focus and the timeboxed iterations. That’s a good starting point for then adding the XP practices."


-Alex

Thursday, July 19, 2007

Experiencing pair programming

At Luxoft Canada, working in a team of four developers, we were able to pair and work on specific tasks taken from the Scrum task board.

The physical aspect of pairing involved having two developers (navigator and driver) sitting at one computer connected to two sets of keyboards and mice. We switched to rectangular desks to easily accommodate two programmers working at the same desk.

When we began pair programming, some participants had little or no desire to participate. The only way to overcome this was simply to try pair programming. I tried it, gradually grew into it, and started seeing its tremendous benefits to the organization and to us.

While working in a pair, there were cases in which we could not accept the other person’s point of view about solving a problem. For example, in one case I insisted that we needed to add more unit tests for a problem, and my partner insisted that what we had written was enough. We spent a few minutes explaining our positions, and finally I compromised by not introducing the new unit test and leaving the decision to be made at a later time.

I paired with two people multiple times. On a few occasions the navigator felt more skilled than the driver in performing some actions (as simple as using shortcut keys), or felt that s/he knew the area of the application better than the driver. In these cases, even though the navigator explained the techniques to the driver, it took a while for the driver to pick them up and get used to using them. The navigator who had explained the tricks and still saw the driver using the old techniques felt frustrated. We learned that the navigator should explain the techniques and allow the driver to pick them up gradually over time.

In other cases, we had some bad programming habits that pair programming surfaced, giving us a chance to change them.

There were a couple of cases in which the driver would not give the navigator a chance to code. After a short period of time, we all felt that this made the navigator feel marginalized and lose focus on the task. As soon as this problem was recognized, we made sure to switch roles often.

In some cases in the driver seat, I found it difficult to explain what I meant when proposing a specific solution. I found it easier to take a minute to write the piece of code and show it to my pair. Once, this involved modifying code my pair had written a few minutes earlier. It was good that he did not mind me touching his code and was patient enough to let me take my time to explain what I meant in written code. I think the navigator should trust the driver and let him or her take the time to explain a point either verbally or in written code.

I believe some of us occasionally create negative images of ourselves when we feel we are not functioning at our best and start expressing the feeling verbally. I have done this and heard others do the same. It usually does not have good outcomes. I have seen that expressing the frustration and creating negative images makes things worse and affects the outcome of pair programming.

We made sure to have a few minutes of informal retrospective about the pair programming experience among ourselves, at the end of the day or when a task was completed. Talking about our achievements, what we did really well, and what could be improved educated us to do better each time.

For us, pair programming:

1. Is an extremely efficient way to transfer general technical and application knowledge.
2. Promotes team communication.
3. Results in better design.
4. Helps eliminate buggy code early, or avoid introducing it in the first place.
5. Increases team productivity, since team members give coding their undivided attention.
6. Improves the communication and collaboration skills of the team members.
7. Makes work more fun.

Wednesday, July 18, 2007

Fit from a Developer's Perspective

In an earlier post, Mandy wrote about the benefits of using Fit from a business owner’s perspective. I’d like to add to that and describe some of the benefits that I’ve seen as a developer working with Fit from deep in the trenches.

In our development process, we attempt to create Fit tests for new user stories up front, in collaboration with business owners (let’s call this “Fit-driven development” - because we love the “X-driven” paradigms). This has several advantages for developers. First, we learn the language and rules of the business. This is crucial to any attempt at domain-driven design (see?), in which we carry the language and concepts of the business domain deep into our applications. Second, it gets us collaborating with the business owners on a day-to-day basis, beyond the bounds of the Scrum meeting. This builds knowledge of each other’s worlds and, more importantly, empathy, which is key to building a positive working relationship. Third, we add to the test coverage in our system – tests that are usually at a level higher than unit tests, like integration or acceptance tests. All of this contributes to being able to get it right the first time.

Now, to be honest, we don’t always develop Fit-first, but when we don’t I often find myself wishing that we had – for example, the time I discovered that the language I used in my code was woefully inconsistent with the language the business uses, or the time we implemented a core piece of logic and had to rewrite it when we learned that we had misinterpreted the partially-erased scribble on the whiteboard - shock! As a developer who must admit to rewriting code too frequently, I whole-heartedly welcome tools, like Fit, that help to get it right the first time.

Wednesday, July 11, 2007

Extending Fit with a New Fixture

Fit is a very flexible tool for testing, due in large part to its pre-defined fixture types (ActionFixture, ColumnFixture, RowFixture, FitLibrary’s DoFixture, etc.). This flexibility means it’s usually possible to write test tables in a way that is intuitive and easy to read. However, we’ve come across a situation where the test tables we want to write aren’t executable by any of Fit’s built-in fixture types.

The feature we’re working with is a Fit HTML table builder. Our test provides a fixture classname as input and expects a Fit HTML table as output. Here’s an example of what we wanted to test:



Unfortunately, when we ran this test using a ColumnFixture, Fit interpreted the nested table as more parsable cells rather than as expected HTML output. And while there are other ways to structure the test, such as using raw HTML, none of them seemed very easy to work with.

What we really need is a new type of fixture, which interprets the table data in the last column as HTML content to be compared with the result of the fixture’s verification method (in the case above, table()).

import fit.Binding;
import fit.ColumnFixture;
import fit.Fixture;
import fit.Parse;
import fit.TypeAdapter;

public class HtmlTableFixture extends ColumnFixture {

    @Override
    public void check(Parse cell, TypeAdapter adapter) {
        // By the time check() runs, HtmlTableBinding (below) has replaced
        // the cell body with the unparsed HTML of the nested table
        String expected = cell.body;
        try {
            String result = (String) adapter.get();
            if (expected.equals(result)) {
                right(cell);
            } else {
                wrong(cell, result);
            }
        } catch (Exception e) {
            exception(cell, e);
        }
    }

    @Override
    protected Binding createBinding(int column, Parse heads) throws Throwable {
        Binding binding = super.createBinding(column, heads);
        // Wrap the binding for the last column so its nested table is
        // treated as expected HTML output rather than parsed further
        if (column == columnBindings.length - 1) {
            binding = new HtmlTableBinding(binding);
        }
        return binding;
    }

    private static final class HtmlTableBinding extends Binding {

        private final Binding internalBinding;

        private HtmlTableBinding(Binding binding) {
            this.internalBinding = binding;
        }

        @Override
        public void doCell(Fixture fixture, Parse cell) throws Throwable {
            // Flatten the cell's nested table back into HTML source and
            // store it in the cell body for check() to read
            Unparse unparse = new Unparse(cell.parts);
            String html = unparse.text;
            cell.body = html;
            internalBinding.doCell(fixture, cell);
        }
    }
}


Our fixture creates a custom binding, called HtmlTableBinding, which gets bound to the last cell in each row. This binding unparses the cell’s contents and stores them in the body of the cell’s Parse object. When the cell is later evaluated by check(), our fixture gets the expected value from the body of the cell's Parse object and compares it with the result from executing the method (via the TypeAdapter).

Running the test now gives us:



So, Fit is not only a flexible testing tool, it's also easily extensible for those corner cases where you might find that it doesn't give you exactly what you want.


* To understand why we need to unparse, it helps to know how Fit reads HTML tables. When Fit is run on a file, the first thing it does is create a model of all of the table information, which it stores in a composite object called Parse. Each part of a table (i.e., <table>, <tr>, or <td>) is a nested Parse object within the composite. Unparsing is achieved by traversing the composite Parse and appending the HTML parts to a buffer.

import fit.Parse;

public class Unparse {

    public String text;

    public Unparse(Parse parse) {
        text = unparse(parse);
    }

    // Walk the composite Parse, re-assembling the original HTML from each
    // node's tag, body, nested parts, and trailing siblings
    private String unparse(Parse parse) {
        StringBuffer sb = new StringBuffer();
        sb.append(parse.tag);
        if (parse.body != null) {
            sb.append(parse.body);
        }
        if (parse.parts != null) {
            sb.append(unparse(parse.parts));
        }
        sb.append(parse.end);
        if (parse.more != null) {
            sb.append(unparse(parse.more));
        }
        return sb.toString();
    }
}

Tuesday, July 10, 2007

Agile 2007 Conference

I recently found out that Stephen, Michael, Askhat and I are attending the Agile 2007 conference in Washington, D.C. August 13-16th. We hope to bring back some fresh ideas about Agile software development and lots of blog post material. Come say hi at the Luxoft booth!

Friday, July 6, 2007

Why We Scrum

I read an article today complaining that scrum was just another useless meeting and that the time would be put to better use speaking to the other developers about design issues one on one. I tend to disagree.
At a previous company, during our waterfall days, product managers tended to throw requirements over the wall because they were busy. As a tester, I often found large discrepancies between my interpretation of the requirements and what the developer had coded into the product. On occasion I had to strong-arm developers into the product manager's office so we could come to a common understanding of a feature.
We had product development offices in Boston and Vancouver, so often the product manager, tester and developer were not co-located and had never met.
Scrum helped a lot. When daily scrum meetings started up, we met our cross-department, distributed team (face-to-face or over phone/video conference) and had a daily opportunity to discuss all the little questions and blockages that come up as you code and test. We came to a common understanding daily, and if we strayed from the feature vision, the next day it was still recoverable.
In conclusion, scrum breaks down the barriers of departments and lack of co-location, and gets a team communicating and collaborating. Because many brains really are better than one.

Thursday, June 21, 2007

Fit = Acceptance Testing for Anyone

One thing I’ve learned in software: business people and developers speak different languages. My Stanford friends like to lump people into fuzzies or techies (a fuzzy being someone who doesn't know a lot about computers), and this generalization has some merit – at times it can feel like you’re speaking to another species (and buzzword-spouting executives are from another planet!)

But I’ve been using a solution called Fit, or the Framework for Integrated Testing, that bridges the gap between fuzzies and techies. With Fit, product managers supplement written-word requirements with HTML or Excel Fit tables that express concrete examples. These Fit tables can easily be written by fuzzies in an HTML WYSIWYG editor or in Microsoft Word or Excel. There is something almost mathematical about writing out examples in this way, forming what domain-driven design philosophers might dub a Ubiquitous Language. These examples are both requirements and tests, saving you a step (and an opportunity for business needs to be lost in translation).

How easy is it? Well, let’s go through an example (from the Fit bible, “Fit for Developing Software”, by Fit pioneers Rick Mugridge and Ward Cunningham).

Business rule: A 5% discount is provided whenever the total purchase is greater than $1,000.
Fit test table:
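(The original post showed the table as an image; here is a plain-text sketch of the sort of table the book uses - the fixture name and values are illustrative.)

CalculateDiscount
amount      discount()
0.00        0.00
1000.00     0.00
1100.00     55.00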

Kinda making sense?
Well, row one names the fixture, or the code that hooks the test into the system under test.
Row two has the headers, which label the two columns. The amount column acts as a given column, which specifies the test inputs (amount = purchase price in the business rule). The discount column acts as a calculated value column, which specifies the expected output of the calculation. Calculated value columns are designated by the () brackets after the name. The rest of the rows are effectively our test cases, trying a variety of inputs against the system.
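Under the hood, the fixture named in row one is a small class written by a developer. A minimal sketch of what it might look like (my own illustration, not the book's exact code):

import fit.ColumnFixture;

public class CalculateDiscount extends ColumnFixture {

    // Bound to the "amount" given column
    public double amount;

    // Bound to the "discount()" calculated column; in real life this would
    // delegate to the system under test rather than inline the rule
    public double discount() {
        return amount > 1000 ? amount * 0.05 : 0;
    }
}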

Once a developer has written a fixture to hook this test into the system, the Fit test can be run, producing a report (see my report below). The calculated value column is colour-coded in the report: green=test success, red=test failure. Failed tests list the expected value and actual value calculated by the system, so you can investigate the discrepancy. In this case, the developers seem to have misinterpreted the requirement to provide the discount when the purchase is greater than $1000.


The value of Fit is not only in more precise communication of requirements, but as an automated acceptance testing framework. With Fit, your requirements can be tested regularly to indicate to developers if new development efforts meet expectations, and if refactoring or new development has broken existing functionality.

Well, that’s all for now. To learn more about Fit, check out http://fit.c2.com/

Thursday, May 24, 2007

Linda Rising on Collaboration, Bonobos and The Brain

Linda Rising made a great impression at the Agile Vancouver conference. She is an excellent presenter.

In her interview with InfoQ, she draws some interesting parallels between Agile teams and groups of our closest relatives - chimpanzees and bonobos.
It turns out that these ape species have very different social lives.
Linda believes that humans are much closer to bonobos than to chimpanzees, and that this is a reason for Agile methodologies to succeed.

She says,
"...suppose you have 2 groups, one group of chimpanzees and one group of bonobos, and you throw a bunch of bananas into the middle of the group. What would happen in the chimpanzees group is that everyone will get very excited and there will be a huge battle, and the alpha male and his supporters will physically beat up on everybody else, and they will get the bananas.

...Suppose we throw a bunch of bananas into the middle of the bonobos. They also get excited about bananas, and they begin jumping up and down, but their immediate reaction would be to have sex. And everyone would have sex with everyone. Males with males, males with females, females with males, young with old. And there will be a lot of sex and then everyone would share the bananas."

Here is the link:
http://www.infoq.com/interviews/linda-rising-agile-bonobos


Tuesday, May 1, 2007

Five Obstacles on the Way to Successful Test Automation

1. Manual Testers don’t trust the results produced by automated testing

The most obvious reason for starting a test automation project is to cut down on tedious and time-consuming manual testing. However, if testers don’t understand what automated scripts are testing, they won’t trust their results, and the same testing is done twice – by running automated scripts and then testing the same area manually.

This was my biggest surprise when the company I worked for started implementing test automation. There are several causes of this distrust:

* Quite often, there is no direct correlation between the test cases used in manual testing and the test cases automated by scripts
* Results of automated test runs are often recorded in cryptic logs, making it difficult to understand which tests have failed and which have completed successfully
* Automation is often done by a separate team that does not work closely with the QA team
* When tests fail, it is very difficult to see the cause. Is it a bug in the application, a problem with the test environment, or an error in the test code?
* And the most obvious reason: test automation is software, and all good testers have it in their blood - you cannot trust software

To break this distrust you need to:

* Make the QA team a customer of the automation team. Testers should create requirements and set priorities for automation to ensure their buy-in.
* Store manual and automated test cases in the same repository
* Report results of test automation execution in the same way that results of manual testing are reported. You may need to do some extra scripting to achieve this, but it will be worth the effort. When a QA lead runs a test coverage report, it should include results from both automated and manual test runs.

2. High Cost of Maintenance

It is easy to start a test automation project. Just buy one of the GUI tools like SilkTest or Mercury WinRunner, put it in recording mode, click around, and you have something to show for your efforts.

But then your product evolves, and even a small change in the UI can easily break a large number of existing automation scripts. As the UI team rarely talks to the QA automation team, such changes always come as a surprise, and automation developers are caught in a constant fight to keep existing scripts running. I once had a case where a small change in one of the style sheets broke all the automation scripts.

It is possible to get the maintenance cost under control by:

* Making the UI team coordinate application facelifts with test automation maintenance
* Using proper design for test automation code that avoids code duplication and keeps test business logic independent of UI clicks (see the sketch below)
* Making it easier to investigate errors
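To illustrate the second point, one common way to keep test business logic independent of UI clicks is a page-object-style wrapper. A hypothetical sketch (all names invented; Gui stands in for whatever API your GUI tool exposes):

// The test speaks in domain terms...
public class LoginFlowTest {
    public void validUserReachesDashboard() {
        LoginPage login = new LoginPage();
        login.signInAs("demo.user", "secret");
        if (!login.dashboardIsVisible()) {
            throw new AssertionError("expected dashboard after valid sign-in");
        }
    }
}

// ...while the UI clicks live in one place, so a cosmetic change
// to the screen touches one class instead of every script.
class LoginPage {
    void signInAs(String user, String password) {
        Gui.type("username", user);
        Gui.type("password", password);
        Gui.click("sign-in");
    }

    boolean dashboardIsVisible() {
        return Gui.isDisplayed("dashboard");
    }
}

// Stand-in for the GUI tool's API (SilkTest, WinRunner, etc.)
class Gui {
    static void type(String field, String text) { /* drive the real tool */ }
    static void click(String control) { /* drive the real tool */ }
    static boolean isDisplayed(String control) { return true; /* query the real tool */ }
}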

The main takeaway - never automate more than you can comfortably maintain.

3. Automation is treated as a QA project, not as a development project

Most automation projects are initiated by QA teams. But the automation team writes software and, to do it well, it needs to have development skills, be able to do design, and have coding standards and a proper development environment.

The majority of QA teams don’t have such skills. They are staffed, at best, with junior developers, and end up producing a bunch of scripts that are difficult to maintain.

This problem can be mitigated by:

* Augmenting the automation team with developers
* Assigning at least a part-time architect to it
* Defining coding standards
* Using a source control system and keeping the automation code in the same branch as the code that it tests
* Using a data-driven approach for designing test scripts (sketched below)
* Educating developers on the importance of automated testing and on how they can either make it easier or harder. This knowledge can greatly help because developers can often avoid problems early on that would be maintenance nightmares later.
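As a hypothetical sketch of the data-driven point (all names and the discount rule are invented for illustration), the script logic is written once and each new test case is just another row of data:

public class DiscountScript {

    // Each row: purchase amount, expected discount
    private static final double[][] CASES = {
        { 500.00,  0.00 },
        { 1000.00, 0.00 },
        { 1100.00, 55.00 },
    };

    public static void main(String[] args) {
        for (double[] c : CASES) {
            double actual = applyDiscount(c[0]);
            if (actual == c[1]) {
                System.out.println("PASS: amount=" + c[0]);
            } else {
                System.out.println("FAIL: amount=" + c[0]
                        + " expected=" + c[1] + " actual=" + actual);
            }
        }
    }

    // Stand-in for a call into the system under test
    private static double applyDiscount(double amount) {
        return amount > 1000 ? amount * 0.05 : 0;
    }
}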

4. Automating the wrong tests

Quite often, automation projects are started ad hoc, without proper planning. As it is impossible to automate everything, the QA team needs to set priorities.

Here are the types of tests that should be at the top of your list:
a) Test cases that are difficult or impossible to test manually. For example, emulation of concurrent activities by multiple users.
b) Tests that are very time consuming. If you can replace 3 days of manual testing with 4 hours of automation, manual testers will appreciate your efforts.
c) Boring, mindless, repetitive tests that drive testers nuts. Because of their repetitive nature, they are usually easy to automate.
d) Tests that exercise the complex business logic of the application and don't rely much on the user interface.

5. Where are Automated Tests when you need them?


Implementation of automated tests usually lags behind the development of new functionality. The automation team is reluctant to start test development until new features are fully implemented and stable.

As a result, the automation team delays test automation until the new functionality is finished and tested! In such a case, you are not getting the benefits of automation when you need them the most – during the rush to get a product out the door. You can only use it after the release, for regression testing.

Several things can be done to minimize this time lag:

* Automated testing below the UI layer can be done earlier in the game, as these interfaces usually mature earlier than the UI
* Applying a two-step approach to automating tests of new functionality: implement the first version of the automated test on the first stable build that contains the new functionality. As soon as it is implemented, put the test aside and switch to automating a test for another feature. The main savings come from avoiding script maintenance during the most unstable period – initial bug fixing and the UI tune-up phase. When that is all done, return to the script and finish it up.


Tuesday, March 27, 2007

Agile Quality: Control vs. Assurance vs. Analysis

Questions about how testing fits into agile development practices are usually answered in two ways that are very unhelpful for QA professionals:

1. Agile development eliminates the need for QA, developers test it all themselves.
2. QA has to work harder to keep up with development while maintaining their traditional methodologies and test approaches.

There is truth and falsehood in both statements. When I am asked how to "fit QA in", I like to frame my answer by defining three types of testing - Quality Control, Quality Assurance, and Quality Analysis - then go on to describe how I think each fits into most agile processes.

Quality Control

What a lot of people think of as testing is what I call Quality Control. Think of the guy sitting in the beer plant (or girl, if you are a fan of Laverne and Shirley) watching the bottles go by, making sure that there is nothing wrong with them before they get capped; this is quality control. In other words, you are inspecting the final product to ensure it meets the criteria for an acceptable product. Within any software project, unit testing, peer review, and regression testing are all forms of quality control. Inside an agile project, these tasks need to be performed on a continuous basis. Unit tests need to be automated and made part of a continuous integration strategy (as sketched below), and peer reviews can be literally continuous, in the case of pair programming, or mandated on a regular basis in the form of diff reviews before check-ins and code reviews as part of doneness criteria. There is also no controversy in stating that regression testing needs to be automated and should be run as often as possible.
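As a hypothetical illustration (the class names and the discount rule are invented; JUnit 4 style), a unit-level quality control check runs with every build, so a regression is caught before the bottle gets capped:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountCalculatorTest {

    @Test
    public void noDiscountAtExactlyOneThousand() {
        assertEquals(0.00, new DiscountCalculator().discountFor(1000.00), 0.001);
    }

    @Test
    public void fivePercentJustOverOneThousand() {
        assertEquals(50.05, new DiscountCalculator().discountFor(1001.00), 0.001);
    }
}

// Stand-in for the production class under test
class DiscountCalculator {
    double discountFor(double amount) {
        return amount > 1000 ? amount * 0.05 : 0;
    }
}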

Ideally, regression tests should be written in such a way that they are maintained along with the code. Using Fit (Framework for Integrated Testing) is one good way to keep the tests in sync with the code. If the suite of Fit regression tests is run with every build, any tests that fail because they were not refactored along with the rest of the changed code need to be investigated, to see whether the test was missed in refactoring or an actual bug was introduced. Though the cost of maintenance is not zero, it is lower, and there is next to no chance that the automated tests will be abandoned.

As you can see, the ownership of quality control within the software product moves more onto the shoulders of the developers. This is as it should be in an agile project where the developer has the responsibility of meeting the customers' requirements, which usually implicitly include no regressions.

Quality Assurance

"Quality Assurance is a part and consistent pair of quality management proving fact-based external confidence to customers and other stakeholders that a product meets needs, expectations, and other requirements. QA assures the existence and effectiveness of procedures that attempt to make sure - in advance - that the expected levels of quality will be reached" Wikipedia

Within an agile project, the customer is constantly involved and informed. As such, there is no real need for "fact-based external confidence" building. Another best practice in agile development is to ensure that the acceptance criteria for all requirements are documented and well understood during the requirements gathering and iteration planning stages. Ideally, the validation that the developed functionality meets the acceptance criteria is also automated (again, Fit is a great tool for this automated validation).

So, again, it is the responsibility of the product owner creating the requirements, and of the customer working with the developers, to assure that the "expected levels of quality" are reached.

I know I have a number of nervous testers and QA people reading at this point, but you had to know that the main point was coming last.

Quality Analysis

So far I have mentioned these remarkably well-written requirements and acceptance criteria in such a way that some may believe they magically appear. Well, they do not, and they are much too critical to the success of an agile project to neglect. Here is where an experienced tester can contribute greatly to an agile team. A product owner or customer will provide vision in the form of high-level requirements and basic acceptance criteria. An experienced tester can look at these criteria and, with an understanding of the existing system and/or the technologies involved, expand and elaborate on them. An especially experienced tester will also be able to suggest missing requirements and non-functional requirements that the customer/product owner has not had the time or experience to consider. A good example of how acceptance criteria could be augmented is adding boundary conditions to the acceptance tests (e.g., adding a check for maximum field lengths to Fit tables), as sketched below.
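For instance, given a plain happy-path table from the product owner, the tester might add boundary rows like these (a made-up sketch, assuming a 64-character limit on the name field):

validate customer name
name                        accepted()
(empty)                     false
a                           true
a name of 64 characters     true
a name of 65 characters     false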

The tester, having been freed from a lot of manual, tedious control and assurance testing, can then provide value by performing exploratory testing: using a tester's natural ability to ferret out instabilities in the system, looking at the system from a high level, and turning things on their side as only a true tester/user can do.

Conclusion

In conclusion, what I feel is the role of a tester or QA person in agile projects is more of an analyst role. Call it what you will - quality analyst, requirements analyst, system analyst, etc. An experienced tester can fill in those technical requirements that are missed by the customer, with their high-level perspective, and also missed by the developer, with their focused perspective. The blend of technical skills, customer perspective, and user experience makes the experienced tester/QA person ideal for requirements expansion and elaboration, and provides a good career path into product ownership/management.


Tuesday, March 20, 2007

Types of customers

From time to time, I come across this complaint: developers tell me sad stories about customer managers who "just don't get it". They do not understand Agile principles, and the process they impose is somewhat sick. Are they morons?

No, they are not. First of all, it is counterproductive to consider them inadequate people. There is always something that we just don't understand about them.

The most typical thing that happens is that customer managers do not try to help the team adopt good Agile engineering practices. Almost always, the customer loves everything about team and customer collaboration and Agile project management, but there seems to be something magically annoying about unit testing, test driven development, refactoring, code reviews, and pairing.

After all, we're all reasonable people and we all have the same goal, don't we? So there must be some consensus about the way we do things. Or else we have some deep misunderstanding of our goals.

So why do we need Agile engineering practices? Well, they allow us to shorten the test cycle, which is important for frequent delivery. They raise the quality of the code, and the system becomes easier to maintain. As far as money goes, engineering practices just make the system cheaper to develop.

But note that they make development cheaper in the future. At the beginning of the project, it's just a pure investment. There's no need to spend hours on automating tests if it takes a few minutes to do manual regression testing for the whole system.

So it looks like there are 3 types of customer managers:

  1. Product-driven managers: people who value their product. They know that their welfare depends on how successful their product is going to be. Typically they are the owners of the company. Their goals are long-term ones: several years or more.
  2. Project-driven managers: people who value their project. They will be rewarded if the project is successful. They are mostly hired managers from bureaucratic organizations. Their goals are based on their reward system and are mid-term ones.
  3. Demo-driven managers: managers (thank God I've seen only one) who value the next demonstration to their stakeholders.

Obviously, product-driven managers invest enough effort in technical excellence. Otherwise, they are going to fail in a year or so, or at least might lose some money trying to develop a system that is not as flexible as it is supposed to be. These are the most comfortable customers for a team that values Agile principles.

Project-driven managers are the most typical ones. They always have a battle in their heads. The angel tells them how important it is to maintain high quality in the system and how the XP engineering practices can help in doing it. And the devil makes them realize that their bonuses depend on how great the system is going to look over the next few months, not on how easy it will be to maintain in several years - by then it will be another project with some other manager.

Demo-driven managers don't have a struggle in their heads. They just don't want to spend a minute on ensuring quality.

So if you have product-driven managers, you are lucky. Your development is the kind that prevents problems rather than struggling with them.

If you have a project-driven manager, just help the angel win ;-). Otherwise, you will spend most of your time heroically fighting problems that could have been easily avoided.

If you have demo-driven managers, God help you.

Tuesday, March 6, 2007

When is Scrum not Scrum?

Via Jason Yip, an interesting summary of potential process flaws of Scrum:

When is Scrum not scrum?




Tool Usage on Scrum Teams

Another great article from Michael Vizdos.
http://www.implementingscrum.com/cartoons/cartoons_files/implementingscrum-20070305.html
Couldn't agree more.


Wednesday, February 28, 2007

What on Earth is a ScrumMaster?

This past week I completed the Certified ScrumMaster Course (CSM). Over drinks on Friday night, I mentioned this in passing to some friends. “What the heck is a ScrumMaster?” was the overwhelming reply. It’s a weird name, I’ll admit.

Well, before I took the ScrumMaster course, I figured that the ScrumMaster was an Agile project manager who made sure the team followed Agile practices, facilitated the daily scrum, and scheduled sprints.

However, the course revealed another side to the ScrumMaster: that of a coach and shepherd. The ScrumMaster acts as a shepherd, shielding the team from external influences and over-commitment, and empowering the team to make decisions instead of making decisions for them. The ScrumMaster is the coach who cheers on the team, ensures they follow the rules (and enforces them through peer pressure), and teaches them how to self-organize and work with the Product Owner.

The ScrumMaster is really a facilitator rather than a manager. In fact, our ScrumMaster course instructor suggested it would be best if the ScrumMaster were not a project manager or a technical person, as they would then rely more on the team to make decisions. It is also best if the ScrumMaster is not a team member, as there would be a tension between oiling the machine and producing the product. Any employee could be a ScrumMaster if they have the right personality for the job: the kind of person who is approachable, trusted, people-oriented, and detail-oriented. I have read several articles implying that QA people make good ScrumMasters, as they have the right mindset (perhaps we are used to taking one for the team, having been jammed at the end of the waterfall for so many years). ;)





Integrating Fit with Cruise Control


Why use Fit?
Test Driven Development is an integral part of Agile processes. As part of Test Driven Development, Fit is a good choice as an automated test framework for acceptance testing.


Why use CruiseControl?
Agile processes usually suggest that continuous build and testing must be part of the development process. CruiseControl accommodates, in a very flexible and configurable manner, the automation and scheduling of builds and tests for code under development.



As the above suggests, CC and Fit are very likely to be used together in an agile project, and I thought sharing some information on how CC can run Fit tests and display Fit reports would be beneficial. If you know of a better solution, please let me know.


Please note that CC provides a plug-in for displaying FitNesse test results, not Fit test results. If you are working with FitNesse and need to configure CC to show FitNesse test results, see the FitNesse Reporting posting on the gmane.comp.java.cruise-control.user newsgroup for an example.


I assume the reader of this posting already knows how to configure a project in CruiseControl. In case you do not, you can find more info on the following sites:

-Getting Started with CC document: CruiseControl Getting Started

-11 easy steps to get CC going: CruiseControl - Getting Started in 11 easy steps

Let’s assume the target project is already using Fit. Below I show how to create an Ant target for running your Fit tests and how to configure CC to use it:

1. Create two properties that are available to your build.xml for the Fit input and output folders:

For example:

fit.storytests.dir=src/test/com/sentinel/test/fit/storytests

fit.results.dir=${build.dir}/fitReports

2. Create an Ant target in your build.xml for running the Fit tests in your project:


* Add fitlibrary classpath:

<path id="fitlibrary.classpath">
    <fileset dir="${libs}" includes="fitlibraryRunner.jar"/>
</path>


* Add fit test classpath:

<path id="fit.test.classpath">
    <pathelement location="${build.test.classes.dir}"/>
    <pathelement location="${build.classes.dir}"/>
    <path refid="fitlibrary.classpath"/>
</path>



* Add build targets:

<target name="fit.test.prepare" depends="compile.test,fit.test.clean"
        description="Prepare to do fit test"/>

<target name="fit.test.clean">
    <delete dir="${fit.results.dir}"/>
</target>

<target name="test.fit" depends="fit.test.prepare" description="Runs the fit tests">
    <echo>Please see the Fit Test Report page for more details on the following results:</echo>
    <java classname="fitlibrary.runner.FolderRunner" fork="yes" failonerror="yes">
        <classpath refid="fit.test.classpath"/>
        <arg value="${fit.storytests.dir}"/>
        <arg value="${fit.results.dir}"/>
    </java>
</target>

3. To test your changes and run the Fit tests, run: ant test.fit

4. Create a snapshot of your project under CC and create a project element in the CC config.xml.
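For reference, a minimal sketch of such a project element (the source control block, paths, and intervals are placeholders for your own setup; the artifactspublisher is what copies the Fit reports under CC's artifacts directory for step 5):

<project name="myProject">
    <modificationset quietperiod="60">
        <cvs localworkingcopy="checkout/myProject"/>
    </modificationset>
    <schedule interval="300">
        <ant anthome="apache-ant-1.6.5" buildfile="checkout/myProject/cc-build.xml"/>
    </schedule>
    <log dir="logs/myProject"/>
    <publishers>
        <artifactspublisher dir="checkout/myProject/build/fitReports"
                            dest="artifacts/myProject"/>
    </publishers>
</project>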

5. Create a file called fitLink.jsp and place it under the [CruiseControl Home]\webapps\cruisecontrol directory. fitLink.jsp provides a link to your fitReports folder under the artifacts directory. Leave the output folder pointing to the fitReports folder under the build directory in order to have the Fit test reports published under the CC artifacts. This is the content of fitLink.jsp:


-------------------------------
<%@ taglib uri="/WEB-INF/cruisecontrol-jsp11.tld" prefix="cruisecontrol"%>
</p><p>
<table width="98%" border="0" cellspacing="0" cellpadding="2" align="center">
    <tr>
        <td colspan="4" class="unittests-sectionheader">
            Fit Tests:
        </td>
    </tr>
    <tr>
        <td class="unittests-data" colspan="2">
            <cruisecontrol:artifactsLink>
                <table width="98%" border="0" cellspacing="0" cellpadding="2" align="center">
                    <tr>
                        <td class="unittests-data">
                            <a href="<%= artifacts_url %>/fitReports/reportIndex.html" target="_blank">View Fit Results</a>
                        </td>
                    </tr>
                </table>
            </cruisecontrol:artifactsLink>
        </td>
    </tr>
    <tr>
        <td colspan="2"> </td>
    </tr>
</table>
-------------------------------

6. Modify the buildResults.jsp file in the above directory to include fitLink.jsp. To do so, include the line below at the bottom of buildResults.jsp:

<jsp:include page="./fitLink.jsp"/>

7. Include the test.fit target in the build.xml file used by this project on CC.

<project name="myProject" default="build" basedir=".">
    <target name="build">
        <!-- Call the targets that do everything -->
        <ant antfile="build.xml" target="clean"/>
        <ant antfile="build.xml" target="deploy-all"/>
        <ant antfile="build.xml" target="test"/>
        <ant antfile="build.xml" target="test.fit"/>
    </target>
</project>

8. Restart CC and open the CC home page in your browser.

9. Press the build button for your new project.

10. After the build is completed, you should see a summary report of the Fit test results on your project's build results page, in the Errors/Warnings section. At the bottom of the page you should also see a Fit Tests section that provides a link to the Fit Reports page.

Tuesday, February 27, 2007

Risk Assessment or thoughts in an airport

Recently I was traveling from Vancouver to New York. I cannot say it was a smooth trip. At first, my taxi was late; I was about to call the dispatcher when the driver eventually arrived. When I got to the airport, I had an hour and twenty minutes until the flight and was glad to see that the check-in line for Delta Connection was not that long.

Oh boy, it was probably the slowest line I have ever seen. There was only one Delta guy at the counter, and he was not in a hurry. Also, something was not working properly at his stand, and for each passenger he was leaving his seat and going to another counter to print baggage tags.

Thirty minutes later, he started to deal with two old ladies who, judging by the number of boarding passes he printed for them, were definitely planning a trip around the world.

I was next in line, and was starting to feel nervous that I might be really late for my flight. When I expressed my concerns to the check-in clerk, he smiled back and said: “Don’t worry, the flight has been delayed”.

Sure, this put me at ease! I had just forty-three minutes to make a connection in Salt Lake City. Without skipping a beat, instead of worrying about missing the flight from Vancouver, I switched to worrying about missing the flight from Salt Lake City.

We landed in Salt Lake City about 30 minutes late. Those of us who had a connection to New York started to run to another terminal with a slight hope that our plane was delayed. When we arrived at the gate, there was no sign of the plane.

But the plane had not left without us; it had been canceled, due to a snow storm in New York.

I did not make it to New York that day. The airline rescheduled my flight to arrive in NY the next evening. My meetings would have been over by that time, so I decided to go back to Vancouver.

While I was sitting in the airport restaurant waiting for my flight back home, it occurred to me that I had spent the entire day worrying about the wrong thing. Whatever I considered the biggest risk to completing my journey was not worth worrying about.

At every step I was getting more information about my environment, which changed my assessment of what could prevent me from finishing the trip. In the end, the bad weather in New York happened to be a higher risk than the late taxi, the slow check-in line, or the short connection time.

After coming to terms with the idea that I wouldn't get to New York that week, my thoughts switched, by a strange analogy, to the way we assess risks in software development.

While nobody really likes spending too much time on assessing risks, most people, including myself, agree that it is a good idea to keep track of a project's risks.

A much more interesting and controversial issue is when you need to start addressing risks. If you follow a waterfall approach, the answer is clear – address them at the beginning of the project, as all important design decisions are made at that time.

Many of us have been on projects where people worried about the wrong things. And not just worried, but spent time and money addressing problems that would not be problems at the end of the project, while failing to anticipate some real future headaches - all those complex solutions for weird use cases, high-performance scalable frameworks for never-implemented features, risks addressed in the alpha release for release 5.0 features.

This is just one of the reasons why the Agile approach saves money. It views software development as a learning experience. In Agile, you delay making a decision and acting on it until the last possible moment, because it is guaranteed that you are going to know more about the problem in the future, and be smarter then than you are now.

I booked another trip to New York - direct flight this time :-)

SCRUM illustrated

I like the Is a Waterfall silent? post from Michael Vizdos's blog, illustrated by Tony Clark.
Both the content and the cartoon.

We're launching a blog!

Hi! Welcome to Think Agile – a corporate blog by Luxoft’s development employees.

Here you’ll find us at Luxoft sharing our everyday experiences with Agile software development practices and technologies, and reflections on working within a distributed Agile development team.