Monday, June 27, 2011

When Testing Fails ...

Carl Sagan once famously said, “Science is a self-correcting process – an aperture to view what is right.” Carl Zimmer, in an article in the Indian Express, says, “… science fixes its mistakes more slowly, more fitfully and with more difficulty than Sagan’s words would suggest. Science runs forward better than it does backward.” According to Zimmer, checking the data, context and results of published scientific work is not of much interest to journals that take pride in being “first” to publish groundbreaking new research. For scientists scrambling for grants and tenure, checking published work is not attractive, and often an exercise not worth the effort. As a result, Zimmer says, original work/papers often stand with little or no investigation or scrutiny by peers. This, surely, is bad for science. Zimmer suggests that the community focus on “replication” – setting aside time, money and journal space – to save the image of science as a self-correcting pursuit of knowledge, in Carl Sagan’s words.

Well … does this have anything to do with software testing? I recall James Bach once saying “it is easy for bad testing to go unnoticed”. Like science and the scientific community’s social fabric, the software testing world (the software world in general) has built up layers of wrapping and packaging. It is easy to find some reasons in one or more of these layers in the ensemble of social systems leading to a few missed bugs that cost a few days of outage of, say, a popular e-commerce website. Any query or investigation into the effect of the outage or the episode of missed bugs would set up a nice blame game looping all the way through testers, developers, business analysts, test data, environment, lack of technical/business knowledge and a host of other things. As happens with published scientific work, hunting down the culprit or group of culprits would be a time-consuming job. In any case it is a regressive job and takes valuable resources away from many “progressive” tasks. Right? In Zimmer’s terms, spending time on production bugs is similar to running backwards. Does testing work well when running backwards? Do stakeholders like it when testing is running backwards? I am not sure whether proving published research wrong, or publishing a new perspective on the basis of existing work, can be as productive as hunting down a missed bug and trying to locate the reasons for its birth in the first place.

Process enthusiasts, quality assurance people and Sick-Sigma (oops… Six Sigma) people might protest and insist on full-blown root cause analysis of all bugs reported in production. SEI/CMM folks might make it mandatory to document all lessons learnt from missed bugs and refuse to sign off on project closure unless this is done. In spite of this, in the fast-paced life of software – where cycles between conception and real working software are shrinking, and the number of platforms (don’t forget mobile) is ever increasing – who has time to look at root cause analysis reports and all those missed bugs?

I remember a business sponsor once saying: “I can’t wait for testing to complete. Once development is done, put the stuff into production. If something breaks, we have enough budget provisioned to take care of any production failures.” Here, waiting for “perfect” software to be developed is seen as more disastrous than putting (buggy) software into production. Back to the layers of the social system in software testing – it appears easy to hide bad testing. Unless you screw up very badly, the chances are that your stakeholders will never notice what good testing could have got them.

Looking at the testing that happens in some organizations, I have often wondered: how are they managing to stay afloat with such bad testing? The reason, probably, is that when testing fails, it is difficult to attribute the failure to testing and call it as such. It requires courage. How many testing leaders are willing to admit that their testing sucks, without hiding behind fancy-looking metrics, root cause analysis reports and charts? That is the effect of “software testing” being a “social process”.

Tuesday, June 07, 2011

Sure Ways to Reduce Test Cycle Time through Automation

The world is simple; we complicate it for the sake of it – says a friend. There is a simple concept called “automation” and another one called “testing cycle time”. Why can’t these terms be simply understood without much fuss and inquiry? he often argued with me. This friend is a manager and works for an IT services company. His constant pouncing on me with this topic of test-cycle-time-reduction-through-automation – and the related hype, frustration and feeling of achievement – made me think deeply about the basic laws or golden principles that govern this phenomenon of cycle time reduction. For the benefit of my blog readers, I make them public here. Read them, understand them, implement them and be blessed. A caution here: any criticism and cynicism about these laws will have harmful consequences for the beholder. These are golden principles, axiomatic truths about test automation!

First principle – “About testing”: Strongly believe that software testing is a deterministic, highly repeatable and structured process – somewhat akin to a step-by-step procedure for producing, say, a burger or a car. You have to believe that, given a fixed scope of testing, it always takes a finite and fixed time (effort) to complete testing. Needless to say, you must have absolute faith in testing processes and standards. Your faith in the power of processes and standards to make testing predictable and repeatable is an important success factor. It is also necessary to abandon any misconceptions you might have about the relationship (or dependency) between testing and automation. You should treat testing as a truly mechanical, step-by-step and repeatable process to ensure Quality. Positive ideas about metrics improving testing and consistency are a sure bonus. You should resist and fiercely oppose any attempts to link automation and testing, claiming that automation can and should work independently.

Second principle – “Definition and meaning of testing cycle”: Never challenge or probe the definition and meaning of the term “testing cycle”. Keep it (testing cycle time) loosely defined, so that you can flip in any direction when confronted by a skeptic challenging your claim of cycle time reduction. The vaguer the terms “testing cycle” and “cycle time”, the higher the chances of meeting your goal of reducing cycle time through automation. Any variables and factors that make the “testing cycle” somewhat unclear should be ignored and not discussed. It is important to have faith in the “fact” that “testing cycle time” is a universally known term and does not need to be redefined in any context. You will be looked upon as a genius if you simply talk about “cycle time reduction” and omit the unnecessary qualifier “testing” or “test” (as in “test cycle time”).

Third principle – “Playing to the gallery”: Use phrases like “business needs”, “business outcome”, “business processes”, “success through business alignment” and “time to market” liberally in all communications related to automation and testing cycles. Confront any opposition from skeptics about automation and its connection to cycle time by “appealing to authority” – say “the business/market needs it”. Strongly believe statements like “Automation will help release products and services faster – and hence will improve customer satisfaction and the company’s bottom line”. Your success in achieving cycle time reduction depends upon how often and how strongly you refer to “business” and “business needs”. These powerful words do all the magic required; you need the right rhetoric to spread the message. Another important keyword here is “market”. Make sure you thoroughly mix up the terms “testing cycle time” and “time to market” and use them interchangeably. The more you talk about “time to market” and how automation directly helps it, the more authentic and convincing you look.

Fourth principle – “Motivations and incentives”: Believe economists when they say “incentives” drive change and motivate people. How can this not work here? Last but not least, provide incentives for people to reduce cycle time through automation. Define the performance goals of the individuals involved in automation in terms of the cycle time they reduce through automation. Penalize those who question the idea or fail to meet the performance goals. Automation initiatives tend to succeed in their stated goals of cycle time reduction when they are integrated with the performance goals of the individuals involved in the game – especially the automation team. Also make sure (when automation is done by a decentralized team) to define and impose the performance goals ONLY on the automation team, and suppress any attempts to include the manual testing team. After all, automation and testing are not related in any way, and it is the responsibility of the automation team to bring about cycle time reduction. Right?

If you find these principles and ideas rather strange and are intrigued, one of several possibilities is that you work in a software product organization as opposed to an IT or IT services organization. The folks in software product organizations that build software unfortunately approach automation somewhat “differently”, and “cycle time” might not be a very familiar term to you.

If you are not from a software product company and are still confused about the ideas in this post – if you think they are inconsistent or incoherent with each other – do comment. That is a good sign that you are thinking about the topic.

Let me know if there are other ideas and principles that you have successfully used to reap the benefits of cycle time reduction through (and only through) automation – I would be more than happy to incorporate them with due credit.