Question about Model-Based Testing

If you haven’t been to Stack Overflow yet, it’s an interesting forum, created by Joel Spolsky and Jeff Atwood, for asking technical questions and sorting through the answers.

I noticed a question on Model-Based Testing over there that I had something to say about. I wanted to link to articles by Harry Robinson, Ben Simo, and James Bach…but as a new user, I’m allowed to add only one link. What to do? How about using my one link to point back to my blog? Here’s the original question.

And here’s my answer, complete with links:

First, a quick note on terms. I tend to use James Bach’s definition of Testing as “Questioning a product in order to evaluate it”. All tests rely on *mental* models of the application under test. The term Model-Based Testing, though, is typically used to describe programming a model which can be explored via automation. For example, one might specify a number of states that an application can be in, various paths between those states, and certain assertions about what should occur on the transitions between those states. Then one can have scripts execute semi-random permutations of transitions within the state model, logging potentially interesting results.
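
For concreteness, here is a minimal sketch, in Python rather than the Ruby/Watir of Robinson’s example, of what such a model and a semi-random exploration of it might look like. The states, the actions, and the FakeApp driver are all invented for illustration; a real harness would swap in a driver for the actual application under test.

```python
import random

# A hypothetical state model: each state maps the actions available
# there to the state the application should end up in.
MODEL = {
    "logged_out": {"log_in": "home"},
    "home":       {"open_settings": "settings", "log_out": "logged_out"},
    "settings":   {"save": "home", "cancel": "home"},
}

class FakeApp:
    """Stand-in for a real application driver (e.g. one built on Watir)."""
    def __init__(self):
        self.screen = "logged_out"
    def perform(self, action):
        # A real driver would click buttons here; the fake just follows MODEL.
        self.screen = MODEL[self.screen][action]
    def current_screen(self):
        return self.screen

def random_walk(app, start="logged_out", steps=100, seed=0):
    """Execute a semi-random permutation of transitions, logging each step."""
    rng = random.Random(seed)  # a fixed seed makes failures reproducible
    state, log = start, []
    for _ in range(steps):
        action, expected = rng.choice(sorted(MODEL[state].items()))
        log.append((state, action))  # a trail to weed through for failures
        app.perform(action)
        # Assertion about what should occur on the transition:
        assert app.current_screen() == expected, (state, action, log)
        state = expected
    return log

print(random_walk(FakeApp())[:5])  # first few steps of the walk
```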

There are real costs here: building a useful model, creating algorithms for exploring it, building logging systems that let one weed through the output for interesting failures, etc. Whether or not the costs are reasonable has a lot to do with *what questions you want to answer*. In general, start with “What do I want to know? And how can I best learn about it?” rather than looking for a use for an interesting technique.

All that said, some excellent testers have gotten a lot of mileage out of automated model-based tests. Sometimes the important questions we have about the application under test are best explored by automated, high-volume, semi-randomized tests. Here’s one very colorful example from Harry Robinson (one of the leading theorists and proponents of model-based testing) in which he discovered many interesting bugs in Google driving directions using a model-based test written with Ruby’s Watir library: http://model.based.testing.googlepages.com/exploratory-automation.pdf

Robinson has used MBT successfully at companies including Bell Labs, Microsoft, and Google, and has a number of helpful essays here: http://www.harryrobinson.net/

Ben Simo (another great testing thinker and writer) has also written quite a bit worth reading on model-based testing: http://www.questioningsoftware.com/search/label/Model-Based%20Testing

Finally, a few cautions: to make good use of a strategy, one needs to explore both its strengths and its weaknesses. Toward that end, James Bach has an excellent talk on the limits and challenges of Model-Based Testing, “The Unbearable Lightness of Model-Based Testing.” This blog post of Bach’s links to the hour-long talk and its associated slides: http://www.satisfice.com/blog/archives/87

I’ll end with a note about what Boris Beizer calls the Pesticide Paradox: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective.” Scripted tests (whether executed by a computer or a person) are particularly vulnerable to the Pesticide Paradox, tending to find less and less useful information each time the same script is executed. Folks sometimes turn to model-based testing thinking that it gets around the pesticide problem. In some contexts model-based testing may well find a much larger set of bugs than a given set of scripted tests…but one should remember that it is still fundamentally limited by the Pesticide Paradox. Approached with its limits in mind, and starting from questions MBT addresses well, it has the potential to be a very powerful testing strategy.


4 Responses to “Question about Model-Based Testing”

  1. Tim Coulter Says:

    Thanks Jeff. Exactly the writeup I was looking for! Thanks for the links and great ideas.

  2. Lanette Says:

    This is a good summary. I read through James Bach’s slides, and I agree with them, but I’d also like to argue with him about this, because model-based testing isn’t intended to cover EVERYTHING. It is a great way to cover functional testing, but it shouldn’t be the entire test strategy. Any technique seen as “the magic bullet” is going to fall apart. Only with a reasonably balanced diet of appropriate testing for the software under test do we get testing that can scale while assuming the right amount of risk.

    I really LIKE model-based testing because it is way smarter and more flexible than test cases. It has more value over time, and as a tester it helps me understand what I’m testing for, more than the automated testing of some tool does. It helps the tester understand what is going on. I would say that building and knowing the model is often of MORE value, all automation aside, than even reading the code as a way to understand how to test the software. You can look at the model and then create tests. The “what happens if” scenarios start to come up. Also, you can collaborate over a model: circle one section and tell your test partner “go for it in this area,” and they can explore a visual charter.

    My point is, don’t limit the model just to the automation. A visual overview has much more value than that.

  3. Jack Says:

    Thanks for sharing this information, Jeff. I found your blog while searching for the same content on Google.

  4. Simon Ejsing Says:

    Thank you for a great post, with some good references!

    I would argue that the pesticide problem becomes less apparent when working on software that is under constant development. In that case, large regression suites will potentially provide you with much more information at a later stage, when code is being refactored, replaced, or removed.
