Saturday, March 24, 2012

Learning from Tenali Raman's crows ...

Like many kids in the southern part of India, I grew up listening to stories of Tenali Raman - a 16th century wise court poet of King Krishnadevaraya of the Vijayanagara empire. Tenali Raman is also known as Vikat Kavi - meaning intelligent poet. Birbal, from King Akbar's court, enjoys a similar cult status in kids' stories in India. This story of counting crows, which I narrated to my 8 year old daughter, made me realize how real Tenali Raman's crows are in our day-to-day life in software.

First, let me quickly run through the story. One day, the king throws a strange puzzle at Tenali - asking him to count and report the number of crows in the city. Tenali thinks for a while and asks for two days' time to come up with the answer. After two days, he comes back and reports to the king that there are one lakh seventy thousand and thirty-three crows in the city (1 lakh = 100,000; 10 lakh = 1 million). At first, the king is frozen and does not know how to respond - after a while, recovering from the shock of the answer, the king asks if Tenali is sure. The king further says that he will conduct a counting (recounting?) and if the number does not agree with Tenali's number, he (Tenali) will be punished. Tenali being Tenali, he responds by qualifying his answer. He says it is possible that the recounted number of crows might differ from his number. If the new number is less than the old number, it is because a few of the city's crows have gone out of station (city) to visit their relatives in nearby cities. If the new number is more than the old number, the additional crows are visitors from nearby cities come to see their relatives in Vijayanagara. Listening to this, the king has a hearty laugh and realizes the flaw in the assignment/problem. As happens in all Tenali stories, Tenali gets the king's praise and some prizes for the witty answer.

Now, let us come back and see how this crow metaphor applies to what we do as project managers, test managers and testers in our day-to-day work.

There are entities we deal with that are similar to crows - in the following respects:

1. Counting/quantifying is a prized puzzle
2. Counting is asked for by an authority, a boss you cannot say "No" to (saying "no" can cost you your job or earn you the label of "incompetent")
3. Often you can fake a number
4. There is no easy, sure way to verify/validate the count
5. Even if someone does a recount and comes up with a new (different) count - you can always "explain" the discrepancy, like Tenali did.

One example that comes to my mind is the count of test cases. Typically, during the test estimation process, as a test manager you would be asked "how many test cases could be written for a given set of requirements". The boss would then do the required math to arrive at the number of testers required and the time required to execute the estimated number of test cases (note - time required to "execute" test cases - not to test). So, wear the hat of Tenali - throw up a number. If asked - show your working (be sure to have your working). You would be OK from then on.
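To make the crow-ness of this math concrete, here is a minimal sketch (in Python) of the arithmetic such a boss might do. Every input below is an assumed, illustrative figure, not data from any real project:

    # A sketch of the "required math" done once a test-case count is reported.
    # All inputs are hypothetical assumptions.
    test_cases = 600                # the Tenali-style number thrown up
    minutes_per_case = 20           # assumed average execution time
    hours_per_tester_per_day = 6    # assumed productive hours per tester
    schedule_days = 15              # assumed execution window

    total_hours = test_cases * minutes_per_case / 60
    testers_required = total_hours / (hours_per_tester_per_day * schedule_days)

    print(f"Execution effort   : {total_hours:.0f} hours")
    print(f"Testers 'required' : {testers_required:.2f}")

The output looks precise (200 hours, about 2.22 testers), yet the key input - the test-case count - is as countable as Tenali's crows.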

There are things we deal with in software that cannot be counted the way we count concrete things. Software requirements, use cases, test cases, lines of code, bugs, ROI from automation - these are abstractions, not concrete objects. Counting them is akin to counting crows in Tenali's story.

[Puzzle : Prove that ROI from automation is a Tenali Raman Crow count]
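One way into the puzzle is to write down a typical automation ROI calculation and ask where each input comes from. The formula and figures below are illustrative assumptions, not a standard:

    # Hypothetical automation ROI calculation - every input is itself an
    # estimate with no sure way to verify it; a crow count feeding a crow count.
    automation_build_cost = 800     # assumed hours to build the suite
    maintenance_cost = 200          # assumed hours of upkeep over the period
    manual_cost_avoided = 1500      # assumed hours of manual execution "saved"

    investment = automation_build_cost + maintenance_cost
    gain = manual_cost_avoided
    roi = (gain - investment) / investment

    print(f"ROI: {roi:.0%}")

A recount (a different set of assumptions) gives a different ROI, and either number can be "explained" - exactly as Tenali explained his crows.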

Cem Kaner says executives are entitled and empowered to choose their metrics. So, the king was perfectly right in asking Tenali to count and report the number of crows - though the king's objective in the story was not to make any important decision for his kingdom. In any case, a crow count metric was sought.

What can a tester/test manager do when asked to count "crows"? While our community develops models and better alternatives to "imperfect metrics", we need to tread a careful path. We should provide alternatives and possible answers to crow counts.

I have come to realize that refusing to give the count might be counterproductive in many cases - trying to ape Tenali Raman might be useful. The need for quantification is here to stay - years of persuasion and reasoning about why counting can be bad in some cases have not managed to contain the problem.

What do you think about "Pass/Fail Counts"?

Shrini

7 comments:

Christopher Smith said...

The trick is to learn to avoid a "no" answer. Instead you say, "yes, but...". In this case, "yes, but what are you trying to accomplish with this?"

James Marcus Bach said...

Executives are not entitled to insist that we participate in a scheme to mislead them. In any case, even if someone says they are, it violates my understanding of the IEEE, ACM, and AST ethical codes to do so.

If I feel that I cannot provide metrics without harming my client, I must refuse.

Shrini Kulkarni said...

Christopher -

Try asking "what are you trying to accomplish with this (some imperfect metric)?" - the typical answer would be something related to cost, quality, risk, timelines, effort etc.

Thus executives are READY to deal with imperfect metrics.

In the absence of valid models with construct validity in most cases, we often end up in a situation where we have to deal with "imperfect metrics".

As Cem's post suggests - providing an answer that can potentially mislead the "asker" does not amount to a lack of ethics.

Shrini

Shrini Kulkarni said...

James -

>>> Executives are not entitled to insist that we participate in a scheme to mislead them.

The problem here, I think, is that executives, while dealing with imperfect metrics, do not believe that they are participating in a scheme to mislead themselves.

What are the possibilities here?

1. Executives believe that asking for imperfect metrics makes them part of a scheme of misleading.

2. Executives believe that asking for imperfect metrics is perfectly fine.

OR

1. Executives know that they are participating in schemes to mislead them.

2. Executives do not believe that they are participating in schemes misleading them. They might believe that they are doing their best, using the methods and models available, to carry on with their job.

Where should the corrective action be, then?

1. Educating executives about the dangers of imperfect metrics, thus eliminating potential ethical issues for both parties (giver and receiver) - our community has been doing this, by and large.

This works partially, when the receiver accepts the potential dangers and ethical issues of using/asking for imperfect metrics.

2. Develop models and metrics that have construct validity and represent reality to a reasonable degree, to meet the "quantification" needs of executives.

This is required because the majority of executives might believe that asking for (imperfect) metrics is a perfectly right and reasonable thing to do. We need to work in this direction, as the stakeholders we serve (including executives) might not believe that they are participating in a scheme to mislead them (and hence see no associated ethical issues).

Hence the basic premise of the argument "Executives are not entitled to insist that we participate in a scheme to mislead them" is now in question.

Executives are entitled to ask us to participate in schemes that they /think/ are not misleading (hence no ethical issues on either side). Aren't they?

>>> If I feel that I cannot provide metrics without harming my client, I must refuse.

So... you seem to be taking the position that you would not provide /those/ metrics that you believe can harm the client, EVEN WHEN the other party (the client) is sure that they are fine and takes responsibility for all the evil that can happen.

Right?

Shrini

Unknown said...

I think rather than quoting a random number and giving a witty explanation to prove or disprove it, the better option can be (in a project context, say estimating the number of test cases):
Make a rough calculation with some approximations and assumptions, and then give the number. The number may not be absolutely correct, but we should be in a position to rationalize the method (the approximations and assumptions) used to reach that number.
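For instance, such a rough calculation might look like the sketch below - all figures are made-up assumptions, written down so the working can be shown when asked:

    # Rough test-case estimate with explicit, challengeable assumptions.
    # All numbers are illustrative, not drawn from any real project.
    assumptions = {
        "requirements": 80,           # count taken from the requirements document
        "complex_fraction": 0.25,     # assumed share of complex requirements
        "cases_per_simple_req": 3,    # assumed average
        "cases_per_complex_req": 8,   # assumed average
    }

    complex_reqs = assumptions["requirements"] * assumptions["complex_fraction"]
    simple_reqs = assumptions["requirements"] - complex_reqs
    estimate = (simple_reqs * assumptions["cases_per_simple_req"]
                + complex_reqs * assumptions["cases_per_complex_req"])

    print(f"Estimated test cases: {estimate:.0f}")
    print("Assumptions:", assumptions)

The number is still a crow count, but at least the working is on record.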

Unknown said...

Yeah, it is true that Tenali Raman's crows story is very much applicable to our daily software development life. It happens that we provide this kind of answer to our seniors and get appraisals in return.

360logica

Kumar said...

Nice to read your article. Thanks for publishing. I have a view which might be of relevance.
Test case metrics based on pass/fail results can sometimes be used for good judgement. It depends on how the pass/fail is derived with respect to the combination of coverage, activity, risk and people behind that particular pass/fail result. It might tell us about behaviours that are relevant to the developer or designer and that go against the operational and development quality criteria. It might also give rise to further information. Hence counting all such pass/fail results can be useful, if the content involved in deriving them is sound and it brings out the desired results.

from
shravan
