Sunday, October 07, 2007

Types of Equivalence: Equivalence Class Partitioning - II

Following up on this post of mine, I have been studying this technique in more depth. Here are a few more thoughts related to “equivalence”.

Here is ECP in a nutshell - “Group the tests or data supplied for an application. Assert that all the tests/data belonging to a group will teach you the “same thing” (application behavior). Hence it is “sufficient” to use only one value/test from each group”.

Fundamental to ECP is the concept of “equivalence”. Most authors and proponents of this technique give examples of date and integer fields and demonstrate the identification of classes and equivalence. For example, if you consider a date field in the “NextDate” program, using the “generally accepted rules” governing dates in the Gregorian calendar, you can identify some classes - all the dates in the month of January can be considered equivalent (except the first and last day of January, and the first and last month of the century, which are boundaries). These “canned” classes appear to be applicable to every application that has a date field in a “next date” function. Another example would be a field of integers (1-100) - most authors have mentioned 2-99 as one equivalence class, meaning all numbers in the range 2-99 would be treated “alike”.
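To make the integer example concrete, here is a minimal sketch (my own illustration - the class names and exact out-of-range limits are assumptions, not from any particular author) of modeling the 1-100 field as “universal” classes and picking one representative per class:

```python
# Hypothetical sketch: "universal" equivalence classes for an
# integer field accepting 1-100, derived without any knowledge
# of the application's logic or implementation.
classes = {
    "below-range (invalid)": range(-1000, 1),   # 0 and below
    "lower boundary":        range(1, 2),       # 1
    "middle of range":       range(2, 100),     # 2-99, treated "alike"
    "upper boundary":        range(100, 101),   # 100
    "above-range (invalid)": range(101, 1000),  # 101 and above
}

# ECP asserts that one representative per class is "sufficient".
representatives = {name: next(iter(r)) for name, r in classes.items()}

for name, value in representatives.items():
    print(f"{name}: test with {value}")
```

The point of the sketch is only the shape of the reasoning: every value in a class is asserted to teach the same thing, so one value stands in for the whole class.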

I would call such equivalence, which can be arrived at without knowing anything about the application, its logic, or its programmatic implementation details, “universal equivalence”. It is easier to explain the concept of ECP using “universal equivalence” - date and integer fields are the most popular examples. But I see a danger here - the way ECP is explained using “universal equivalence” leaves out lots of key details, such as the basis for the equivalence.

What are other forms of Equivalence?

Functional logic equivalence - Consider the “Age (1-150)” field. Application logic might enforce that the age range 1-16 is considered one equivalence class (Kids), with others like 17-45 (Adults) and 60-99 (Senior Citizens). This kind of equivalence is very straightforward and easy to derive; often, specifications help us arrive at such equivalence classes. This is where the classic examples of “valid” and “invalid” equivalence classes seem to have originated.

If one were to go by pure functional logic equivalence, it would be sufficient to model the Age parameter as having three equivalence classes; hence, one value taken from each of these classes (3 in all) would provide “complete” test coverage from an ECP perspective.
Dr. Cem Kaner calls this “specified equivalence”.
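A minimal sketch of the Age example (the ranges are the hypothetical ones above; note that the specification leaves gaps, e.g. 46-59 and 100-150, which are themselves worth questioning):

```python
# Hypothetical sketch of functional-logic equivalence for an
# "Age (1-150)" field. The ranges follow the example classes in
# the post; values falling in specification gaps are flagged.
def age_class(age):
    if 1 <= age <= 16:
        return "Kids"
    if 17 <= age <= 45:
        return "Adults"
    if 60 <= age <= 99:
        return "Senior Citizen"
    return "unspecified"  # gaps (46-59, 100-150) and out-of-range values

# One representative per specified class gives "complete" ECP
# coverage of the specified behavior - three tests in all.
for representative in (8, 30, 75):
    print(representative, "->", age_class(representative))
```

Even this tiny model makes the danger visible: a value like 50 belongs to no specified class at all, which pure functional logic equivalence would quietly skip.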

Implementation equivalence – This is where one dives deep into how data is processed (validated, accepted, rejected), passed around (within application components), and eventually stored or discarded after use. Here we would talk about the programming language (data types), the software platform (OS and other related programs), and the hardware platform.
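A small illustration of why implementation details matter - hypothetically, suppose the Age field above is stored in a signed 8-bit integer. The data type then introduces a class boundary (127/128) that neither the specification nor any “universal” view of the field mentions:

```python
import ctypes

# Hypothetical sketch of implementation equivalence: an "Age (1-150)"
# field stored in a signed 8-bit integer. Values above 127 silently
# wrap around to negative numbers - an equivalence-class boundary
# visible only at the implementation level.
def store_age(age):
    return ctypes.c_int8(age).value

print(store_age(120))  # fits: stored faithfully as 120
print(store_age(150))  # overflows: wraps to -106
```

So here the “valid” specified range itself splits into two implementation classes (1-127 stored correctly, 128-150 corrupted), which is exactly the sense in which implementation equivalence overrides the higher-level classes.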

Dr. Kaner identifies another two forms of equivalence - “risk-based” and “subjective”. If the equivalence is in the eyes of the tester (“these two tests appear to teach me the same thing”), it is called “subjective” equivalence. If a notion of equivalence is established targeting a specific class of risks or errors, it is referred to as “risk-based” equivalence.

Thus, one way to apply ECP effectively is to start with universal equivalence and keep refining the sets of equivalence classes as we go deeper into the application and platform (adding, modifying, and deleting classes and their definitions). Implementation equivalence, being the lowest or last in the chain, overrides the class definitions arrived at by the higher levels of equivalence (universal or functional logic).

One question to spice up the discussion – is ECP a black-box technique?
Yes, if we restrict ourselves to “universal and functional logic” equivalence.
No, if we dive deep into the application's code and look around at the platform (software and hardware).

What do you think?

[ Update ]

ECP attempts to “simplify” a big picture (a data domain with an infinite set of possible values). When attempting to apply ECP to a data variable, the best starting point would be: “what is the big picture I am trying to simplify using ECP?” This is a top-down approach - model, understand, analyse, and hypothesize about the big picture, then go to the next level and think about equivalence classes. I have seen people mostly approaching this “bottom up” - think about valid and invalid classes first (or even actual values), then, if possible, think about the big picture.

Which approach do you think is a useful one to start with?

BTW, there is the “equivalence principle” by Einstein, related to the theory of relativity. Can I say that equivalence, as applicable to software tests, is “relative” in nature?

Shrini

6 comments:

amagazine said...

Hi Shrini,
Nice information on Equivalence Class Partitioning. To be honest, I haven't really seen many testers keen to know, explore, and apply equivalence class partitioning very actively in testing. Your attempt is really praise-worthy.
I have seen the application of ECP not only in designing test cases but also, to some extent, in troubleshooting issues. Basically, it stems from James Bach's description of ECP: "Some distinctions don't make a difference."
Here's an example-
Consider a situation in which a tester gives a large string as a text input e.g.
asdfghQWERTY!@#$%^&*)();';,./;:`~
and the test results in a crash. The tester would then troubleshoot to figure out which character actually caused the application to crash. This is where the concept of ECP can be beneficial, i.e. group the similar characters (i.e. similar failure causes) and try the tests again. Possible groupings: lower-case letters, upper-case letters, numeric characters, special characters, etc. Then work to narrow down the problematic characters.
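Here is a rough sketch of that narrowing-down idea. The application under test is simulated by a stand-in function that I have made crash on one specific character, purely for illustration:

```python
import string

# Sketch of using ECP groupings to narrow down a crash-causing
# character. The "application" here is hypothetical: it crashes
# whenever the input contains a semicolon.
def application_under_test(text):
    if ";" in text:
        raise RuntimeError("crash")

crashing_input = "asdfghQWERTY!@#$%^&*)();';,./;:`~"

# Group the characters of the failing input by "equivalence".
groups = {
    "lower-case": [c for c in crashing_input if c in string.ascii_lowercase],
    "upper-case": [c for c in crashing_input if c in string.ascii_uppercase],
    "digits":     [c for c in crashing_input if c in string.digits],
    "special":    [c for c in crashing_input if not c.isalnum()],
}

# Retest each group separately; only the groups that still crash
# need further narrowing down.
suspect_groups = []
for name, chars in groups.items():
    try:
        application_under_test("".join(chars))
    except RuntimeError:
        suspect_groups.append(name)

print("crash reproduced with:", suspect_groups)
```

In this sketch only the "special" group reproduces the crash, so the next round of narrowing would split just that group further.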
This example may seem trivial, but as I see it, the application of ECP indeed goes beyond test design (I have experienced it in troubleshooting bigger issues than the one described above); it is just contextual.

Anuj Magazine
http://anujmagazine.blogspot.com

Shrini Kulkarni said...

Good Point Anuj,

Using the concept of ECP in troubleshooting, or in investigating defects or suspicious behavior, is indeed an interesting application of ECP ...

I appreciate it ..

Shrini

alan said...

Hi Shrini -

I think you may be trying to make the whole thing too complex. I don't really see different types of ECs - ECP is simply a method for reducing the overall number of test cases by treating some groups of values the same.

But I think your last question reveals some of your confusion. ECP can be a black box method or a white box method, or a combination of the two. In many cases, domain knowledge of the area is enough (i.e. if you're a calendar expert, you can come up with ECs entirely from a bbox perspective).

However, when the application is poorly designed (or in some cases even when it's not), the ECs may not be what you expect, so code analysis may be needed in order to be accurate.

Consider an input that takes values from 1-100. From a bbox perspective, there are 3 ECs: 1, 2-99, and 100 (the boundaries, and the middle numbers). The underlying code, however, could be something like this:

if (input > 10)
{
    // 11-100 are all treated the same
    DoSomething();
}
else if (input == 10)
{
    DoSomethingElse();
}
else
{
    // 1-9 are all treated the same
}

In this case, the ECs include the number 10 (as well as the boundaries around 10). If the value is used anywhere else, 1 & 100 may still be valid tests (you'd have to analyze the code to see for sure), but you'd probably end up with ECs of 1, 9, 10, 11, 12-99, and 100.

Then you would have to find out why 10 was so special :}

Alan said...

I received your email, and I think it has inspired me to change the way I explain equivalence class partitioning. ECP does indeed reduce the overall number of test cases needed. In order to do this accurately, it forces the tester to examine the way in which all input is processed within the application. This knowledge and examination of the data within the application, and the tester's understanding of how this data is processed, is the key to ECP.

Once the tester has a thorough understanding of how all of the data types are used within an application, they are able to accurately remove some test cases without fear of missing critical bugs. You can't dive into ECP with an initial intent of drastically reducing test cases. The first thing you must do is examine the data consumed by the application thoroughly enough that you can then reduce test cases in order to save time. Of course, if you are in a situation where you are not constrained by time and have time to run as many test cases as you'd like, you may choose not to do ECP. However, you would be likely to miss out on certain bugs, because you would have failed to take the time to understand how the application works.

James Bach said...

Equivalence classes are not real things. They don't actually exist in the nature of software. They are just assertions by the tester that "if I try one, then probably I don't need to try the others because I already think I know what those test results will be." But on what can such an assertion rationally be based? It seems to be an assertion that we can know the results of a test without running a test. Obviously, that's absurd. It goes against everything testing stands for: ground truth.

Why is it okay for testers not to run tests, but it's not okay to fire all the testers?

More often, what someone calls an equivalence class is simply a class of apparently similar but not identical things. As a way of prioritizing work, a representative of the class is used at first, but later the tester may come back and select many more representatives until time runs out.

They should be called "similarity classes" or "affinity classes".

Michael said...

Equivalence is in the eye of the beholder. Cem Kaner has suggested that two values are equivalent if we expect the program to treat them in the same way. I mostly agree with this. However, I'd say (mis)treat, I'd also want to know who this "we" is, and I'd like to know why we expect the values to be treated in the same way. Equivalence tends to be based on some theory that a certain error is going to occur or not--but what motivates us to believe that that's a theory worth testing?

As James suggests, it's a mistake to treat equivalence as a property or attribute of items. One name for this kind of mistake is "reification": ascribing concrete attributes to things that are merely concepts. Reification is like a pandemic disease in the testing business.

This is why I have to disagree with Alan here. Equivalence class partitioning doesn't reduce the number of test cases needed; it may reduce someone's perception of the test cases needed. But of course that need is subjective in the first place, so to be accurate, we'd have to say it reduces someone's perception of someone's perception of the number of test cases needed. Complexity is also apparently easy to reify.

I do agree with Alan when he says that the equivalence classes may not be what you expect. That's why we test.