Wednesday, January 16, 2008

Reductionism and Test Techniques - who, what?

Scientific reductionism is an undeniably powerful tool, but it can mislead us too, especially when applied to something as complex as, on the one side, a food, and on the other, a human eater. It encourages us to take a mechanistic view of that transaction: put in this nutrient; get out that physiological result.
- Michael Pollan

I have heard statements like the ones below several times … I am sure many of you have.

- The orthogonal array or pairwise technique reduces the number of test cases and hence can optimize the test effort.

- Use of orthogonal array based testing (a test design technique) has been demonstrated to produce superior test plans that improve testing productivity by a factor of 2.

- Equivalence class partitioning (ECP) is a functional testing technique that systematically reduces the number of tests from all the possible data inputs and/or outputs, and it provides a high degree of confidence that any other data in a particular subset will produce the same result.

- The boundary value technique catches bugs occurring near boundaries. Use of the boundary value technique increases the effectiveness of test coverage.
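To make the first two claims concrete, here is a toy sketch of pairwise selection in Python. All the factor names and values below are made up for illustration; the greedy algorithm is one simple (not optimal) way to build a pairwise set. Note where the "reduction" actually happens: the tester decides that only two-way interactions matter and discards the rest.

```python
from itertools import combinations, product

# Hypothetical configuration model: 3 factors, 3 values each.
# Exhaustive testing would need 3 * 3 * 3 = 27 combinations.
factors = {
    "browser": ["Firefox", "IE", "Opera"],
    "os":      ["Windows", "Linux", "Mac"],
    "locale":  ["en", "de", "ja"],
}

def all_pairs(case, names):
    """All (factor, value) pairs that one test case covers."""
    return {((a, case[i]), (b, case[j]))
            for (i, a), (j, b) in combinations(enumerate(names), 2)}

def greedy_pairwise(factors):
    """Greedily pick cases from the full product until every pair is covered."""
    names = list(factors)
    full = list(product(*factors.values()))
    uncovered = set()
    for case in full:
        uncovered |= all_pairs(case, names)
    chosen = []
    while uncovered:
        # Pick the case that covers the most still-uncovered pairs.
        best = max(full, key=lambda c: len(all_pairs(c, names) & uncovered))
        chosen.append(best)
        uncovered -= all_pairs(best, names)
    return full, chosen

full, suite = greedy_pairwise(factors)
print(len(full))   # 27 exhaustive combinations
print(len(suite))  # a much smaller pairwise suite
```

The smaller suite exists only because the tester chose to assume that three-way interactions do not matter; nothing in the mathematics of orthogonal arrays validates that assumption for a particular product.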

What do you notice in common across all these statements? Faith and reductionism. When you say “technique XXX does this or helps with this [reduces the number of test cases]”, you are forgetting that it is not the technique but YOU, the person using it, who is making an assumption or assertion.

For example, when you say “the orthogonal array technique reduces the number of test cases”, you are actually saying, “I, as a tester using this technique, am making the assumption that only the pairwise test cases generated by this technique matter – the rest I ignore.”

When someone says “the ECP technique systematically reduces the number of test cases”, that is a view of reductionism. It is the tester's hypothesis, judgment and assertion that certain sets of data values CAN be treated equivalently. ECP does not "reduce" ANYTHING by itself, systematically or otherwise. ECP's principle per se is that there are groups of data that can be modeled as being treated identically by the AUT (application under test).
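Here is a minimal sketch of that point, with an invented "age" field and an invented specification (accept integers 0..120). The partition table is the tester's model of the application, written down explicitly; ECP itself supplies no partitions.

```python
# Hypothetical example: the TESTER models partitions for an "age" input
# field that the application is (assumed to be) specified to accept
# only as an integer in 0..120. ECP does not derive these classes;
# the tester asserts them.
partitions = {
    "valid (0..120)":    [0, 35, 120],
    "below range (<0)":  [-1, -100],
    "above range (>120)": [121, 999],
    "non-numeric":       ["abc", ""],
}

def is_valid_age(value):
    """Toy stand-in for the application under test."""
    return isinstance(value, int) and 0 <= value <= 120

# One representative per partition: the "reduction" is the tester's
# assumption that other members of a class would behave the same way.
representatives = {name: values[0] for name, values in partitions.items()}
for name, rep in representatives.items():
    print(name, "->", rep, "valid?", is_valid_age(rep))
```

If the application in fact treats 35 and 120 differently, the fault lies with the tester's partitioning model, not with "ECP".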

I can say that this is a real difference: it shifts the focus from the technique to the person who is using it. You then cannot shift the blame to the technique if it fails.

As testers, we use lots of tools, techniques and heuristics to understand, operate and observe the test objects. Every time I use a technique or a heuristic, it is my explicit choice, and I make a set of assumptions and assertions. The goodness and applicability of the results of the test or technique depend directly upon these assumptions and assertions. The technique or tool has no role in it.

Pradeep Soundararajan and Michael Bolton have it here – a focus on the tester, not on the test or a tool or technique…

If you take a gun and kill a person – what do you say? That the gun killed the person?



Chris said...

I think that the real difficulty is that the tester selects a tool or technique and then relies on that to choose the tests s/he will run. A child could select a loaded gun to pound a peg into a toy, but that in and of itself doesn't make a gun the best implement with which to do that. The child, on the other hand, is using its experience (a hard object with a handle is useful for pounding pegs into toys) to select something in its environment that it believes is able to do what it wants to do. Sad results may ensue.

So, for example, using equivalence partitioning, the tester decides that the quotation mark and the dollar sign are in the same partition, as they are both special characters. The tester chooses to use the dollar sign as input. However, because of the syntax of the programming language used, a quotation mark in an input field exposes a failure.
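Chris's scenario can be sketched with a toy function (entirely made up for illustration) that naively wraps input in quotes, so a quotation mark in the data breaks the record format while a dollar sign is harmless, even though the tester put both in the same "special characters" partition:

```python
# Toy illustration of the point above; the function and its format
# are hypothetical, not from any real system.
def store_comment(text):
    """Naively builds a quoted record; a '"' inside the input breaks it."""
    record = '"' + text + '"'
    # Crude well-formedness check: the record must contain exactly the
    # two delimiter quotes added above.
    if record.count('"') != 2:
        raise ValueError("malformed record: " + record)
    return record

print(store_comment("price is $5"))   # '$' causes no trouble
try:
    store_comment('say "hi"')         # '"' breaks the record format
except ValueError as e:
    print("bug exposed:", e)
```

The partition "special characters" was the tester's model; the implementation never agreed to honour it.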

So, accepting the premise that equivalence partitioning works for you means that you must have sufficient knowledge of the context to choose the partitions so that the maximum number of failures are exposed (That is, make sure that if you're pounding pegs into toys you know what the tool you select to pound them in actually does).

Saying that pairwise test tools reduce the number of test cases is not correct. All the possible test cases are still there. Saying that you choose to believe the test tool when it identifies a subset of all possible tests and that if you run only those tests you will find the maximum number of defects is another thing altogether.

Management believes that, eventually, test tools will be "intelligent" enough that most human testers will be dispensable. They don't say this, but down deep, that's what they think. Test tool vendors (I believe) encourage this opinion, as it's good for their business.

Observations like yours expose the fact that human choices, guided by experience, are required in testing. No substitutes allowed.

Anonymous said...


People do come up with a bunch of techniques to reduce the scope of validation, or to capture only the tests that might be relevant to the context.

We all know that it's difficult to validate all the test items to their fullest extent.

This trend has led towards capturing the tests that might be critical for the context in which we operate, with a set of assumptions in place.

The bad part is that these assumptions are not always explicit.

Tools and techniques are used everywhere (not just by software testers), but what matters at the end of the day is the person who succeeds in his or her deliverables with the help of those tools.

In reality, the blame always goes to the person, not to the tool.


Anonymous said...

Shrini -

Of course, if you use tools or techniques to aid your testing, but blindly follow the results without using your brain, you are destined for failure.

That certainly doesn't mean you should stop using the tools or techniques.

To use an American colloquialism, you seem to often want to throw the baby out with the bathwater.

Anonymous said...

I was going through your post and, because of the reference you gave to Pradeep, also went through his recent post and its comments.

"Testing is questioning a product in order to evaluate it."

Do you feel this definition is complete?

As per Pradeep's reply to a reader's comment, he says it's complete.

Just wanted to know your views on it.

My view: we cannot define testing in any broad way, because any definition will be incomplete, though I might be wrong too.

What do you say?