Friday, December 15, 2006

Why counting is a bad idea

Let us consider a typical test report - one presented in a meeting for assessing the progress of testing, attended by key stakeholders and team members:

No. of test cases prepared: 1230
No. of test cases executed: 345
No. of test cases failed: 50
No. of bugs reported: 59
No. of requirements analyzed: 45
No. of requirements updated: 50
No. of transactions covered in performance testing: 24
No. of use cases tested: 233

Productivity

No. of test cases prepared per person per hour = 5
No. of test cases executed per person per hour = 15


What do you see here?

Managers love numbers. Numbers give objective information, numbers quantify observations and help in taking decisions (??). Numbers simplify things; one can see trends in numbers.

You might have heard one or more of the above statements (mostly in review and progress meetings, right?). When it comes to testing, followers of the Factory approach to testing are comfortable just counting things - test cases, requirements, use cases, bugs, passed and failed test cases, etc. - and taking decisions about the "quality" of the product.

Why is counting (without qualification) a bad idea in testing? What are the disadvantages of such a practice? Let us briefly look at a few famous, frequently *counted* things.

Count requirements (as in "there are 350 requirements for this project")
• Can we count them at all?
• How do we count? Do we have a bulleted list of requirements? If not, what do we do?
• How do we translate the given requirements into a "bulleted list"?
• How do we account for information loss and interpretation errors while counting requirements?
Count test cases (as in "the test team has written (or designed, or prepared) 450 test cases in the last week")
• Test cases are test ideas. A test case is only a vague, incomplete and shallow representation of the actual intellectual engagement that happens in a tester's mind at the time of test execution (Michael Bolton made this point in his recent Rapid Software Testing workshop at Hyderabad).
• How can we count ideas?
• Test cases can be counted in multiple ways - more often than not, in ways that are "manipulative" - so the count is likely to be misleading (see the sketch after this list).
• When used to know or assess testing progress, such counts are likely to mislead management.
Count bugs (as in "we have discovered 45 bugs in this cycle of testing so far")
• The story or background of a bug is more interesting and valuable than the count of bugs (this again I owe to Michael Bolton - "Tell me the story of this sev 1 bug" is a more informative and revealing question than "How many sev 1 bugs have we uncovered so far?").
• When tied to a tester's effectiveness, the count is likely to cause testers to manipulate bug numbers (as in "Tester 1 is a great tester because he always logs the maximum number of bugs").
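
To make the "manipulative counting" point concrete, here is a minimal sketch - my own hypothetical login example in Python, not taken from Michael Bolton or anyone quoted above. The same checking work can be reported as one test case or as many, so the number that reaches a status report depends mostly on how someone chose to slice the work:

```python
# Hypothetical illustration only: a toy login() so the sketch is self-contained.
def login(user: str, password: str) -> bool:
    return user == "admin" and password == "secret"

# Style A: one "test case" that exercises many invalid inputs.
def test_login_rejects_bad_credentials():
    bad_inputs = [("", ""), ("admin", ""), ("admin", "wrong"), ("admin'--", "x")]
    for user, password in bad_inputs:
        assert login(user, password) is False
    # Reported as: 1 test case executed.

# Style B: the very same checks, written up as one "test case" per input.
def make_single_input_tests():
    bad_inputs = [("", ""), ("admin", ""), ("admin", "wrong"), ("admin'--", "x")]
    tests = []
    for user, password in bad_inputs:
        def single_case(u=user, p=password):
            assert login(u, p) is False
        tests.append(single_case)
    return tests
    # Reported as: 4 test cases executed (or 400, with a data generator).

if __name__ == "__main__":
    test_login_rejects_bad_credentials()
    for case in make_single_input_tests():
        case()
    print("Same checks either way; only the count changes.")
```

Both reports would be "accurate", which is precisely why the raw count, on its own, tells a stakeholder very little about how much testing has actually happened.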
Let us face a fact of life in software testing - there are certain things in testing that cannot be counted the way we count cars in a parking lot, patients visiting a dentist's clinic, or students in a school.

Certain artifacts like test cases, requirements and bugs are not countable things, and any attempt to count them can only lead to manipulation and ill-informed decisions.

Wait --- are there any things at all in testing that we can count without losing the effectiveness and usefulness of the information that a counted number might reveal?

Shrini

Wednesday, December 06, 2006

How can a software tester shoot himself/herself in the foot?

Would you like to know about the self-destructive or suicidal notions of today's tester? Would you like to know how a tester can shoot himself/herself in the foot?

There are many ways – one of them is by “declaring” or "asserting" that -

Software testing is (wholly or in part) an act of software quality assurance.
A few variations of the above –

Software Testing = Software QC (quality control) + Software QA
Software Testing = Verification + Validation.


As an ardent follower or disciple of the Context-Driven school of testing, I swear by the following definitions or views:

• Quality – “Value” to someone (Jerry Weinberg)
• Bug (or defect, or issue) – "something that threatens the value" (I am not sure about the source)
OR "something that bugs somebody" (James Bach)
• Whatever QA is, it is not testing – Cem Kaner
• Testing – the act of questioning a product with the intent of discovering quality-related information for the use of a stakeholder (James Bach / Cem Kaner – I attempted to combine definitions from both James and Cem)

OR
• An act of evaluation aimed at exploring ways in which the value to stakeholders is under threat (this is my own definition – which I discovered quite recently – open for criticism)

• Stakeholder – someone who will be affected by the success or failure of, or by the actions or inactions of, a product or service (Cem Kaner)

• Management is the TRUE QA group in an organization (Cem Kaner)

Now let us see how notions that equate testing with QA, or with a combination of QA and QC roles, are self-destructive – similar to shooting oneself in the foot …

1. Terms like QA and QC appear to have been borrowed from the manufacturing industry – can you measure and assess the attributes of software the same way you do for a mechanical component like a piston or a bolt?
2. You cannot control or assure the quality of software by testing.
3. It can be dangerous and costly to claim, as a tester, that "I assure or control quality by testing", as the claim can backfire when you don't.
4. Unless your position in the organizational hierarchy is very high, you as a tester can NOT take decisions about
a. the resources and cost allocated to the project (budget)
b. the features that go into the product (scope)
c. the time when the product will be shipped out of your doors (schedule)
d. the operations of all related groups – development, business, sales and marketing, etc.

When most or all of the above is out of your hands, how can you assure or control quality?

5. When you claim that you assure or control quality, others can relax. A developer can say, "I can afford to leave bugs in my code – someone is being paid to do the policing job anyway." Or others will say, "Let those testers worry about quality, we have work to do." – Cem Kaner

6. You will become the scapegoat or victim when bugs (or issues or defects) slip past you. One of the stakeholders may ask, "You were paid to do the job of assuring or controlling quality – how did you let these bugs into the product?"

An interesting and relevant reference appears in Cem Kaner’s famous article
The ongoing revolution in software testing

Johanna Rothman says (as quoted in Cem Kaner’s article) -

Testers can claim to do “QA” only if the answer to each of the following questions is YES:
• Do testers have the authority and cash to provide training for programmers who need it?
• Do testers have the authority to settle customer complaints? Or to drive the handling of customer complaints?
• Do testers have the ability and authority to fix bugs?
• Do testers have the ability and authority to either write or rewrite the user manuals?
• Do testers have the ability to study customer needs and design the product accordingly?

Clear enough?

What is the way out ---?

Treat software testing as a service to stakeholder(s), helping them conceptualize, build and enhance the *value* of a product or a service.

Be a reporter or a service provider – don’t be the quality police or a quality inspector on an assembly line …