Monday, March 17, 2008

18 Myths Associated with Exploratory Testing ...

Myths are public dreams; dreams are private myths – Joseph Campbell
Science must begin with myths, and with the criticism of myths – Karl Popper


Instead of writing about what exploratory testing is, it seems easier to start off by explaining what it is not. For those hearing this term for the first time – let me remind you that this is a testing approach involving simultaneous test design and execution, with an emphasis on learning. The term "exploratory testing" was coined by Dr Cem Kaner around 1983.

Exploratory testing as an approach has a legacy. Its predecessor, "ad hoc" testing, is perceived as the spoilt kid. Until recently, I was under the impression that ad hoc testing was quick, monkey-style testing (often associated with sloppy testing) where you just play with the application in order to find bugs that fall outside the specification. I was proved wrong during a Rapid Software Testing workshop that I attended in Toronto. James Bach mentioned in that class that "ad hoc" means "to the purpose". Most people believe (as I did initially) that ad hoc testing is undirected, random clicking and navigation. This is probably why Dr Kaner decided to name the baby differently, so that the stigma associated with this approach to testing would go away.

Dr Kaner did not stop at just naming the baby; along with James Bach, Michael Bolton, Jonathan Kohl, Jonathan Bach and others, he has done a great deal of research and practice related to exploratory testing. If many of us can talk about it so confidently today, it is because of the pioneering work done by these people.

Let me also remind you that exploratory testing is not a testing technique but an approach, and that the opposite of exploratory testing is "scripted testing".

There is so much confusion, skepticism and disbelief associated with this approach to testing. Based on my interactions with people from all walks of the testing world, I have drawn up a partial list of the myths associated with exploratory testing.

Let us go straight into the list of myths …

Why bother with ET?
1. ET -- everyone does ET while doing testing - why bother? Why have a special name?
2. ET is some kind of snake oil - people use this term to suggest something mystical in order to make money.
3. ET will not work in my context - I am not even willing to give it a try, as I am not convinced we should. I know it for sure.
4. ET does not seem to have come under the radar of Gartner or Forrester -- it might not be that popular.

About the form of ET
5. ET is an unstructured, ad hoc (meaning sloppy) process.
6. ET is nothing but testing without any formal test cases.
7. ET is an instant bug-hunting process.


About ways of doing ET
8. ET should always be done after completion of all planned scripted testing (if time permits). If you have time (while you wait for a new build), do ET to use your time productively.
9. ET is not predictable and repeatable (was it meant to be repeatable?).
10. ET shows scant respect for well-established test techniques - how can you do testing without using any of those time-tested techniques?

About skills required for ET
11. ET requires in-depth domain expertise, hence only domain experts can do it - it is not everyone's cup of tea.
12. ET seems to require special skills (domain knowledge, quick learning, etc.) - hence we do not do it.
13. ET is highly person-dependent, which is unacceptable to us - we are a resource-starved industry.

About suitability of ET in a context
14. Our quality process standards require that every test effort be substantiated by a detailed report of which test cases were executed, on what platform, what data was used, how many test cases passed, how many failed, etc. ET is not good at producing such a report; it does not provide enough evidence/proof that testing was executed.

About the perceived value of ET
15. ET is not process-oriented and is not methodical (it requires highly skilled, disciplined, responsible testers). Anything so person/skill-dependent is a strict no-no in our process-driven environment.
16. ET cannot be outsourced - and even if it is, we cannot assess the progress; it is an uncontrollable process.
17. ET cannot be automated - I am looking to reduce spend on testing, and ET cannot help me there.
18. ET seems to be useful only in environments where there are no requirements, but ours is a very structured process. Why use ET?

Why these are myths and not truths – that is left as an exercise for the reader. I welcome each of you, readers, to challenge my claim that these are myths, and let's debate …

Shrini

9 comments:

Anonymous said...

Point 3 is a more general innovation stopper: "We have always been doing it that way."

Generally speaking, this is something you will always have to discuss when change needs to be introduced.

Erik Petersen said...

The interesting thing is that for a long time Cem thought James Bach had come up with the term, until Cem (or James, I can't remember which) realized it was in Cem's earlier writings...

Your list is a good one. We need to bring in the mythbusters to work on them!

Erik
www.testingspot.net

antigrav_kids KD0FNR said...

Errr... If they just heard the term for the first time, how could you remind them?

~Ashish said...

Very true.
In fact, if exploratory testing is done in a planned way, it can produce great results. It can be planned if we just sweep through the bug database and identify the more error-prone areas, do feature swaps, organize bug hunts, etc.

Aaron Hodder said...

Here are my takes on these points:

1. ET -- everyone does ET while doing testing - why bother? Why have a special name?
Giving it a name can take it from an ad hoc procedure and turn it into a discipline. Why give equivalence partitioning a name? Everyone does it ...

2. ET is some kind of snake oil - people use this term to suggest something mystical in order to make money.
In my case, I advocate the use of ET to save money; to actually spend as much time as we have testing the product, instead of writing about testing the product.


3. ET will not work in my context - I am not even willing to give it a try, as I am not convinced we should. I know it for sure.
I cannot think of a context where at least some ET wouldn't be beneficial.

4. ET does not seem to have come under the radar of Gartner or Forrester -- it might not be that popular.
I don't know who Gartner or Forrester are :/

I'll answer the rest later!


Anonymous said...

I would challenge anyone who says that domain skills are not required for ET.

Bach himself states in his General Functionality and Stability Test Procedure for Microsoft that "Your ability to perform exploration depends upon your general understanding of technology, the information you have about the product and its intended users, and the amount of time you have to do the work."

I tend to agree more with this concept than saying there are no domain skills required. Granted, you may be able to test something for which you have no domain experience, but you will certainly garner better results based on the experience and/or knowledge you have gathered prior to your testing.

Shrini Kulkarni said...

Hi Brent,

I value domain experience and the power that awareness of a domain gives.

Having said that, I do not believe that lack of domain knowledge is a handicap. I would rather use it as a strength. It becomes useful in the context of ET, because I will try things (like any user in a hurry or in a confused state) and discover strange/interesting behavior that a domain expert would IGNORE, thinking no one would do that on purpose.

In my opinion, people who constantly emphasise domain skills are the ones conspiring to keep those sharp but domain-ignorant thinkers from taking center stage.

I would make up for my lack of domain skills (wherever applicable) with quick learning, systems thinking, analysis and questioning skills. In most cases, I would be able to test nearly as well as a domain expert - sometimes even better ...

If you ask me -- as a tester, I am a generalist, or a journalist.

A journalist reports on various things on a daily basis - from wars to Hollywood gossip, from corporate boardroom affairs to a new scientific discovery ... what domain should a journalist master?

I am developing my skills like a journalist ... I do not wish to be called a domain expert in any domain -- I am a tester, a thinker and a learner ...

Shrini

Anonymous said...

Hey Shrini,

You make a great point. Yes, in most cases, domain experience may not be necessary. This is especially so in environments where the product is designed for commercial consumption. I feel that if I am dealing with a commercial product then I should be able to pick it up and use it without even reading the manual, and it should perform as I would expect it to, with my lack of knowledge.

However, there are other cases where products are designed for product experts. I work in an industry that is extremely specialized and the software is very much proprietary to the systems on which it runs. So in this case domain knowledge is an absolute necessity before embarking on any sort of exploratory testing. While I may find an issue or two, I would likely be missing every piece of functionality that would be expected by the field experts.

The same could be said for something like Photoshop. It works great for my purposes, but I'm sure that a domain expert would be able to find errors which are more visible to the industry, for instance certain combinations of filters showing incorrectly, etc.

So while I appreciate your point of view, I think that, just as with most things we do, it needs to be taken with a grain of salt, because there are going to be items in many applications which don't seem intuitive to the consumer but are an absolute requirement for industry experts.

Shrini Kulkarni said...

Brent,

Thanks for your detailed view ... I appreciate it. Yes, there would be some contexts where having an "expert" level of knowledge is a mandatory requirement for a tester (exploratory or otherwise). In those cases, if I am smart enough, I should be able to figure that out through questioning and analysis and quickly get some help.

If I am testing software meant for doctors analysing heart-related problems, I must do lots of reading/study/discussions etc., or maybe partner with an expert cardiologist. That way, each of us can work to our strengths.

A disappointing trend in the testing industry today is the failure to recognise this "pairing" concept. Managers are just pushing the idea of domain knowledge, and lots of mediocre testers are making it into the testing profession in the name of being domain specialists. In reality they are neither testers nor domain experts.

Last but not least - real users are more like non-domain-experts, and even in the case of a product meant for experts (who are, after all, human), there can be situations where the expert gets blinded or biased by what they know and tends to miss things. Add to that human cognitive/emotional/social/cultural considerations - at times an expert too can miss things ...

Not knowing the domain that you test may not be such a bad thing after all ... Right?

Shrini