A tester driven by curiosity and the relentless question "what if"
"My vote for the World’s Most Inquisitive Tester is Shrini Kulkarni" - James Bach
My LinkedIn Profile : http://www.linkedin.com/in/shrinik
For views, feedback - do mail me at shrinik@gmail.com
Saturday, March 21, 2009
When "Process" stops working for you ....
What is happening here?
Most managers (more so in the current economic situation) confuse skill, human ingenuity and expertise with metrics and measurement. When a customer complains about the "value" and quality of the work delivered, she is really complaining about people and their skill, not about metrics and measurements. Yet when word of those complaints arrives, managers suddenly jump and say "let us collate some metrics and show the client that we have delivered value" (which the client will eventually dump) and push the core issue of skill under the carpet. This hide-and-seek game goes on until we lose the client. This pattern has to break, and unfortunately I have no simple solution for that (probably no one has). Only a few of us appear to recognize the root of the problem.
If following processes ensured quality, and if being very serious about metrics were HOLY, then our problems would have been solved long ago. Why do people not follow process? Is it because processes are so tough and stressful to follow? Is it because they are difficult to understand? Probably people do follow process, and we have simply stopped asking whether the process is doing anything useful or not. That is the start of the problem: glorifying process beyond its own utility (ask the process; it would probably say "beyond this, I cannot add any value"). I understand that process (whatever the definition) provides a common framework within which people with diverse educational, technical and social backgrounds can produce consistent output, so that the whole thing can be managed easily. Beyond a certain point (the limit varies with context), process cannot help any further. It then calls for people's skill to deliver; process becomes an enabler, or a mere hygiene factor. Just walking or eating alone cannot keep you healthy all the time. Do you know where the limit is, beyond which "following the process" can no longer help?
There is a big fuss about "using English rather than numbers". Why is there so much faith in numbers? Why are qualitative, subjective words considered such a waste? Why do we not express everything in numbers all the time: our hunger, happiness, intelligence (yes, there is the IQ test), pain, sorrow, emotion (yes, there is the emotional quotient), commitment, enthusiasm, creativity? All of these human attributes are so rich and multidimensional that poor numbers can express only a minute part of them. And yet we refuse to use qualitative measures, saying that "objective is better than subjective". Many would like humans to behave as if they were machines, so that they can be objectively measured. A sad reality, a peril of the advanced economic world. The hunger for objective interpretation of human attributes has probably reached its crescendo. I am waiting for the downfall of that rise. Will it come?
There is a big deal about "improving productivity in testing: we must meet SLAs and show continuous improvement in productivity". I am a STRONG opponent of the use of the word "productivity" in testing in general terms. When people say productivity, they typically mean speed: the number of units produced per unit of time, much like on a shop-floor assembly line. There might be some portions of testing that are "speed sensitive", but by and large skilled testing is less about "speed" than about "coverage", "identifying tough-to-find problems", "asking the right questions", "seeking information", "building on available information", "investigation" and much more. Probably not more than 5% of good testing is speed sensitive; most of it is not. So what does "productivity" mean when it is applied as a sweeping generalization to all of testing? I PROTEST.
Finally, come on, let us accept that there are many ways we can improve (many) things without measuring them at all, at least not in poor numbers. We all do it in our day-to-day interactions with our near and dear ones in the family, and with those outside in society. So there are clear exceptions to the statement "you cannot improve if you cannot measure". I strongly oppose that statement. It is too poor a generalization, one that suits machines and mechanical constructs far better than human beings in a social structure.
Friday, March 06, 2009
C-DLICE'ing in Software Testing
Let me take credit for making this mnemonic up: C-DLICE. I was listening to Michael Bolton's video interview on YouTube. He said testing is more than verification, validation and confirmation; it is about challenging claims, discovery, investigation, learning and exploring. Any skilled tester does one or more of these activities as part of testing. By explicitly chaining them together in a mnemonic, a tester can focus on a specific aspect of the interaction with the test subject.
Let me expand the mnemonic:
C – Confirmation. Beyond the traditional words like verification and validation (whatever the meanings of those terms may be), most people on this planet think that the sole aim of testing is confirmation: confirmation of claims made about the product, confirmation of what developers "felt" they created in response to the requirement specifications they received and interpreted to the best of their abilities, confirmation of some specific user expectations (assumed to be routed through specifications into the software product). In its basic form, confirmation is somewhat like "Click this button, such and such thing should happen. Does it happen?" While confirmation is an important aspect of testing, any testing that focuses only on confirmation becomes a boring, brain-dead and poor way to think about testing. Notions like "anyone can do testing", "process plays the most important role in testing" and "testing without test cases and requirements is not possible" are creations of confirmation-oriented testing. I will not dwell upon the challenges of confirmation-oriented testing here. There is also a big deal called the "reference" against which you confirm: the specifications. If your reference is wrong, ambiguous or incomplete, so will be your confirmation. That is the weakness of confirmatory testing.
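To make that concrete, here is a minimal sketch of what a purely confirmation-oriented check looks like in code. It is plain Python; the place_order function and the expected values are hypothetical, invented only to illustrate the point, not taken from any real product.

# A confirmation-oriented check: one scripted expectation, one pass/fail answer.
# place_order and the expected values are hypothetical, for illustration only.

def place_order(quantity, unit_price):
    """Stand-in for the application behavior under test."""
    return {"status": "CONFIRMED", "total": quantity * unit_price}

def test_place_order_confirms_total():
    # "Click this button, such and such thing should happen. Does it happen?"
    result = place_order(quantity=3, unit_price=10.0)
    assert result["status"] == "CONFIRMED"
    assert result["total"] == 30.0  # confirms only what the reference predicts

if __name__ == "__main__":
    test_place_order_confirms_total()
    print("Confirmation check passed")

The check answers only the question it was scripted to ask; whatever the specification did not predict stays invisible to it, which is exactly why the remaining letters of the mnemonic matter.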
Though my mnemonic is more about DLICE, I will still keep the "C" in there to remind us that confirmation may be as important as the other letters in the mnemonic.
D – Discovery. As we test, we discover information about the software and certain behaviors. It is like discovering an unknown island. As the product grows bigger (in terms of codebase), discovery becomes more important. No user uses the software strictly as per the user manual. Discovering the ways in which the software could be used and misused is an important aspect of testing. Discovery is about finding information about unknown areas. For a growing software application, every time there is more to test than before, more to cover than before. Under such circumstances, you constantly discover the application, its variations, its behaviors and so on.
L – Learning. This is a freaky one. A significant part of testing is implicitly spent on learning everything around the software under test: the business domain, the technology domain, the community of users using the software, the cultural and social set-up of the organization producing the software. We learn all the time. We learn how the software is constructed, deployed, distributed and so on. Often I have seen people downplay the "learning" aspect of testing, as they would like to position themselves as "experts".
I – Investigation. As testers we investigate claims about the product and how people perceive it. We investigate inconsistencies, bugs, and the impact of a new technology or a software change on the overall image of the software. Investigation is about focused information gathering and analysis of certain events, examining the evidence and so on. Investigation starts off open ended.
C – Challenge (used as a verb). As testers we need to constantly challenge the assumptions and beliefs around how people think about the software: what each stakeholder thinks about its capabilities, the premises behind those beliefs and so on. Challenging requires designing tests, experiments and the like to expose the weaknesses of an aspect of the software.
E – Explore. Somewhat similar to discovery, exploration enables any information-gathering exercise: explore market conditions, take a tour of the product. Exploration helps in modeling the problem space, and it is more open ended than investigation.
Notice that each letter has some overlap with the others. You can learn while discovering, challenging a claim or exploring a feature. You can investigate something by exploring it or discovering it. You can challenge something by investigating it or learning about it, and so on. One way to think about DLICE is: discover like Magellan or Columbus, learn as you would learn a new language, investigate like Sherlock Holmes, challenge like a lawyer, explore like exploring the moon's surface or deep African jungles.
A few practical themes for applying DLICE'ing:
- When there is a new thing that most people around you know little about, something you do not understand well: Discover, Explore and Learn.
- When there is something that several others know but you do not: Learn, through exploring and discovering.
- When there is "suspense" or "mystery" about something (a defect, a strange behavior, etc.): Investigate.
- When some claim is "well known" to you (you are pretty sure about it): Challenge it and prove your point, backed up by prior discovery, exploration, investigation and learning.
So, the next time you feel bored doing testing, try switching your focus: do some investigation, discover new ways of using the software, or explore an area of the software. You will find that testing is always interesting; you were told about only one dimension of it (confirm, find bugs, check that it passes tests), so you felt low or bored doing testing that way.
HAPPY C-DLICE'ing
Shrini