Archive for June, 2009

Controlled Experiments To Test For Bugs In Our Mental Models

June 24, 2009

Here’s a 22-minute video lecture by Ron Kohavi on using controlled experiments to discover what your customers think, and if you work on web software I suspect you’ll find it 22 minutes well spent. Kohavi points out several examples where the results of tests were quite surprising, and offers some interesting suggestions for how to organize such experiments.

Note that this is related to testing software in the broad sense. The bugs that split tests find are the (plentiful) bugs in our mental models, not in our implementation.
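To make the split-test idea concrete, here’s a minimal, hypothetical sketch (all counts invented) of how one might check whether the difference a controlled experiment shows is statistically meaningful, using a standard two-proportion z-test:

```python
# Hypothetical example: is a split test's lift statistically meaningful?
# The conversion counts below are invented for illustration.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B looks better (5.5% vs 5.0% conversion), but is the lift real?
z = two_proportion_z(conv_a=500, n_a=10_000, conv_b=550, n_b=10_000)
print(z)  # |z| > 1.96 would be significant at the 5% level
```

With these made-up numbers the z-statistic comes out below 1.96, a reminder that an apparent winner in a split test can easily be noise — which is exactly the kind of bug in our mental models these experiments are meant to catch.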


Question about Model-Based Testing

June 3, 2009

 


If you haven’t been to Stack Overflow yet, it’s an interesting forum, created by Joel Spolsky and Jeff Atwood, for asking technical questions and sorting through the answers.

I noticed a question on Model-Based Testing over there that I had something to say about. I wanted to link to articles by Harry Robinson, Ben Simo and James Bach…but as a new user, I’m allowed to add only one link. What to do? How about using my one link to go to my blog. Here’s the original question.

And here’s my answer, complete with links:

First, a quick note on terms. I tend to use James Bach’s definition of testing as “questioning a product in order to evaluate it”. All tests rely on *mental* models of the application under test. The term Model-Based Testing, though, is typically used to describe programming a model which can be explored via automation. For example, one might specify a number of states that an application can be in, various paths between those states, and certain assertions about what should occur on the transitions between those states. Then one can have scripts execute semi-random permutations of transitions within the state model, logging potentially interesting results.
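As a concrete (and entirely hypothetical) sketch of that idea, here is a toy login flow modeled as states and transitions, with a seeded random walk that checks the application against the model at every step. The states, actions, and the in-memory fake application are invented for illustration; a real model-based test would drive the actual application, e.g. through Watir or Selenium:

```python
# A minimal sketch of a model-based test over a hypothetical login flow.
import random

# State model: from each state, the actions available and where they lead.
MODEL = {
    "logged_out": {"login": "logged_in", "reset_password": "logged_out"},
    "logged_in":  {"view_account": "logged_in", "logout": "logged_out"},
}

class FakeApp:
    """Stand-in for the application under test."""
    def __init__(self):
        self.state = "logged_out"
    def do(self, action):
        # A real application could disagree with the model here -- that
        # disagreement is exactly what the test is looking for.
        self.state = MODEL[self.state][action]

def random_walk(steps, seed=0):
    """Execute a semi-random path through the model, checking each move."""
    rng = random.Random(seed)   # seeded, so failures are reproducible
    app, model_state, log = FakeApp(), "logged_out", []
    for _ in range(steps):
        action = rng.choice(sorted(MODEL[model_state]))
        app.do(action)
        model_state = MODEL[model_state][action]
        # Assertion: the app's observed state must match the model's.
        assert app.state == model_state, f"divergence after {action!r}"
        log.append((action, model_state))
    return log

log = random_walk(20)
print(len(log), "transitions checked")
```

Even in this tiny sketch the characteristic costs show up: the model, the exploration algorithm, and the log of transitions one would weed through after a failure.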

There are real costs here: building a useful model, creating algorithms for exploring it, building logging systems that let one weed through the results for interesting failures, etc. Whether or not those costs are reasonable has a lot to do with *the questions you want to answer*. In general, start with “What do I want to know? And how can I best learn about it?” rather than looking for a use for an interesting technique.

All that said, some excellent testers have gotten a lot of mileage out of automated model-based tests. Sometimes the important questions we have about the application under test are best explored by automated, high-volume, semi-randomized tests. Here’s one very colorful example from Harry Robinson (one of the leading theorists and proponents of model-based testing) in which he discovered many interesting bugs in Google driving directions using a model-based test written with Ruby’s Watir library: http://model.based.testing.googlepages.com/exploratory-automation.pdf

Robinson has used MBT successfully at companies including Bell Labs, Microsoft, and Google, and has a number of helpful essays here: http://www.harryrobinson.net/

Ben Simo (another great testing thinker and writer) has also written quite a bit worth reading on model-based testing: http://www.questioningsoftware.com/search/label/Model-Based%20Testing

Finally, a few cautions: to make good use of a strategy, one needs to explore both its strengths and its weaknesses. Toward that end, James Bach has an excellent talk on the limits and challenges of Model-Based Testing. This blog post of his links to the hour-long talk and its associated slides: http://www.satisfice.com/blog/archives/87

I’ll end with a note about what Boris Beizer calls the Pesticide Paradox: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective.” Scripted tests (whether executed by a computer or a person) are particularly vulnerable to the Pesticide Paradox, tending to find less and less useful information each time the same script is executed. Folks sometimes turn to model-based testing thinking that it gets around the pesticide problem. In some contexts model-based testing may well find a much larger set of bugs than a given set of scripted tests…but one should remember that it is still fundamentally limited by the Pesticide Paradox. If you remember its limits, and start with the questions MBT addresses well, it has the potential to be a very powerful testing strategy.
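Both halves of that claim can be sketched numerically. In this hypothetical toy (the state graph and path lengths are invented), a fixed script covers the same transitions on every rerun, while seeded random walks keep reaching new (state, action) pairs — yet even they can never exceed what the model itself contains:

```python
# Hypothetical illustration of the Pesticide Paradox on a toy state graph:
# a fixed script re-covers the same edges every run; random walks keep
# finding new edges, but only up to the limit the model imposes.
import random

# Toy graph: 10 states, three outgoing edges each (30 edges total).
GRAPH = {s: [(s + d) % 10 for d in (1, 3, 7)] for s in range(10)}

def walk(path_len, rng=None):
    """One pass: always take the first edge if rng is None, else choose randomly."""
    state, edges = 0, set()
    for _ in range(path_len):
        nxt = GRAPH[state][0] if rng is None else rng.choice(GRAPH[state])
        edges.add((state, nxt))
        state = nxt
    return edges

scripted = set()
for _ in range(50):                  # rerunning the same script adds nothing new
    scripted |= walk(8)

rng = random.Random(1)
randomized = set()
for _ in range(50):                  # random walks keep accumulating new edges
    randomized |= walk(8, rng)

print(len(scripted), "edges from the script,", len(randomized), "from random walks")
```

The scripted runs plateau immediately at the same handful of edges, the randomized runs cover far more of the graph, and neither can ever find a bug on an edge the model doesn’t contain — the residue Beizer warns about.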