Friday, December 14, 2007

Advantages of "highly repeatable tests" ...

I was reading Ben Simo's post on "what is software testing" - a meticulously compiled list of quotes about software testing. The beauty of this post is that it traces the history of software testing. One good way to read it is to evaluate each statement or quote with respect to its relevance to software testing.

I stumbled upon this GEM from James Bach.

“Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else’s footprints minimizes the chance of being blown up by a land mine.”

- James Bach, Test Automation Snake Oil, 1996

So, if you have an excellent set of "highly" repeatable tests - in terms of execution (an automatable sequence of actions) and in terms of results (pass or fail) - congratulations: you have successfully found a set of test cases or scenarios where the software is least likely to fail, meaning you will not (or do not expect to) see bugs or problems.

But ... wait .. is that your testing mission?

I heard someone yelling from behind me: “Yes ... that is what we expect in regression testing. But occasionally we do find a bug - when a developer makes a mistake that the suite catches, or when a tester [by mistake] deviates from the scripted test sequence [a process or discipline issue]."

What do you say?



AlanPa said...

I say that highly repeatable automation is useful for, as you stated, regression testing. For products with a long support cycle and/or high code churn, there is value with that type of automation.

It also leads to the need for data-driven or model-based testing - or any type of testing that can adapt to try to find new places to put the footprints.
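Alan's idea of adapting tests to find new places to put the footprints can be sketched roughly as follows. This is a minimal, hypothetical illustration - the function under test and all names here are invented, not from the discussion: a fixed regression set is augmented with freshly generated cases on each run, so the suite does not keep stepping only in its own old footprints.

```python
import random
import string

def normalize_username(name):
    # Hypothetical function under test: trims whitespace and lowercases.
    return name.strip().lower()

# The fixed regression cases: the well-trodden, "highly repeatable" path.
FIXED_CASES = [("Alice", "alice"), ("  Bob ", "bob")]

def random_case():
    # Generate a fresh input on every run, stepping outside old footprints.
    raw = "".join(random.choice(string.ascii_letters + " ") for _ in range(8))
    return raw, raw.strip().lower()

def run_suite(extra_random_cases=5):
    # Mix the stable regression cases with newly generated ones.
    cases = FIXED_CASES + [random_case() for _ in range(extra_random_cases)]
    return [(inp, expected, normalize_username(inp))
            for inp, expected in cases
            if normalize_username(inp) != expected]

print(run_suite())  # an empty list means every case passed
```

The design point is that the data-driven part (`random_case`) is cheap to bolt onto an existing suite, whereas model-based testing would replace the fixed cases with a formal model of expected behaviour.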

I'm too lazy to type it up, but there's a phenomenon known as the pesticide paradox that is often compared to test automation. A good automation strategy should attempt to avoid the pesticide paradox in some way (and also note how and where "highly repeatable" tests are used).

Anonymous said...


For the past few posts you have been writing about posts and articles written by others.
I would like to see more articles written by you, with your own thoughts in them.
Anyway, it's your call and your blog.
But I am still eagerly waiting for your thoughtful articles.

Shrini Kulkarni said...

Dear anonymous,

I appreciate your advice and will try to post more of my own articles.

I agree that 3 or 4 out of the previous 5-6 posts have been entirely others' work. I quoted them in my blog for two reasons -

1. It helps my blog readers - especially first-time visitors and those who happen upon my blog by accident. It is like spreading the good word.

2. For my own benefit - I would like to preserve a few of these posts as "golden" references.

If you look at my blog and my so-called "original" works, many have been inspired in one way or another by the work of my gurus - James Bach, Cem Kaner and Michael Bolton.

In some cases, my blog posts and articles draw inspiration from the project work that I do, from the people I work with, and sometimes from those who mail me asking for help with topics related to testing.

So while I would like to write about my original thoughts as much as possible, here and there I might quote or reproduce the work of others (giving them appropriate credit) that I like most.

I have taken note of your views - I will surely address them.

I would have loved to know you by name rather than as “anonymous”.


Shrini Kulkarni said...

Hi Alan,

>>> For products with a long support cycle and/or high code churn, there is value with that type of automation.

I understand, but this value comes with a rider - a failure to identify new problems, or problems that are immune to these repeatable tests. Usage of "highly repeatable" tests can distract the tester from "hidden/new" bugs. The real danger from these tests comes when stakeholders start viewing them as having a "supreme" ability to identify "broken" features. One might say, "if this regression set passes, we are *sure* that there are no new bugs".

>>It also leads to the need for data driven, or model-based testing - or any type of testing that can adapt to try to find new places to put the footprints.

In my opinion, tests like "data driven" and model-based (state-machine-based formal model) tests are generally designed separately and as such are not extensions of the "regression suite".

>>>I'm too lazy to type it up, but there's a phenomenon known as the pesticide paradox that is often compared to test automation.

"Pesticide paradox" as a problem, affects any kind of "scripted" testing (manual and automation). "Repeatability" as an attribute of tests, has both pros and cons.

I have observed that "cons" of repeatability are not well understood and articulated.


Cem Kaner said...

I think it was Boris Beizer who coined the phrase "pesticide paradox" to describe the problem that a regression test series gets less and less powerful as you use it over and over.

I think it was Brian Marick who first published the minefield analogy (running the same series of tests over and over is like following one path through a minefield. You might keep that path clear and safe, but the rest of the minefield still has plenty of mines.)

Some companies have serious source control problems. Old bugs reappear because old code versions get slipped back into the builds.

Some companies have seriously unmaintainable code and the same method breaks over and over and over.

If the probability is high that old bugs will come back or code that used to pass tests will fail, then repetitious testing will expose these failures. In my experience, the better solution for the company is to look at root causes (such as weak source control) and fix the development process, rather than wasting huge amounts of money and time on repeated testing. However, the test group lives inside an organization that changes at its own pace. If repetitive testing provides valuable information on an ongoing basis, then it has value.

In my experience, most bugs are found by tests that try something new, a new value for a variable, a new combination of values, a new timing, a new device, a new operational sequence. Testing with old tests is easy and convenient, but unlikely to find bugs that were missed in previous testing.

Anonymous said...


Since you do a lot of surfing and reading on testing, why don't you start something like this for testing:

" Planet Perl is an aggregation of Perl blogs from around the world. Its an often interesting, occasionally amusing and usually Perl related view of a small part of the Perl community. Posts are filtered on perl related keywords. The list of contributors changes periodically. You may also enjoy Planet Parrot or Planet Perl Six for more focus on their respective topics. "

Why don't you aggregate and maintain a site of all testing blogs?

I suggested the same to Pradeep also.

- Bj - said...

Boris Beizer stated that the Pesticide Paradox is "Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual."

In layman's terms, he is stating that no single approach to testing is effective in exposing all defects.

The justification for regression testing is explained in Beizer's corollary law, known as the Complexity Barrier, which states "Software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity," and also in the Organism Principle, which states "When a system evolves to become more complex, this always involves a compromise: if its parts become too separate, then the system's abilities will be limited - but if there are too many interconnections, then each change in one part will disrupt many others."

Basically, eliminating defects makes parts of the software more complex, and the greater the complexity, the greater the probability that a change in one part of the system will affect other parts which were previously tested.

- Bj -