Friday, January 25, 2013

Should automation that runs slower than human test execution speed be dumped?


I am working on a piece of automation using Java and a commercial tool to drive a test scenario on an iPad app. The scenario involves entering multiple pages of information and hundreds of fields of data. The automation script takes about 1 hour to run this scenario, whereas a tester who exercises the same scenario on the app "manually" claims it takes only about 30 minutes.

I was asked - if the automation script runs slower than human test execution (however dumb), what is the use of this automation? What do you think?

Here are my ideas around this situation/challenge:
Mobile Automation might not ALWAYS run faster than human test execution -
Many of us in IT have a QTP/WinRunner way of seeing testing as a bunch of keyboard strokes and mouse clicks, with automation as a film that runs like a dream at super fast speed. GUI automation tools that drive Windows desktop application GUIs or web GUIs have consistently demonstrated that it is almost always possible to run a sequence of keyboard and mouse click events faster than a human. Enter the mobile world - we have 3-4 dominant platforms: Android, iOS, BlackBerry and Windows Mobile. When GUI automation enters the world of mobile, it mainly runs on some Windows desktop that communicates with the app (native or web) on a phone connected to the desktop through, say, a USB port. The familiar paradigm of the automation and the AUT running on the same machine/hardware breaks down, and so do our expectations about the speed of test execution. The iOS platform specifically (in non-jailbroken mode) presents several challenges for automation tools, while Android is more programmer friendly. As the technology around automation tools on mobile devices and the associated platforms (desktop and mobile) evolves, we need to be willing to let go of some of the beliefs we hold strongly from GUI automation of web and Windows desktop applications.
Man vs. Machine - factors that might make the machine/program slow
When you see a button on the screen, you know it is there and you touch it (similar to a click on non-touch phones) - as a human tester you regulate the speed of your response depending upon how the app is responding. Syncing with the app, checking that the right object is in view and operating the object - all of this comes naturally to a human. When it comes to automation tools (mobile tools especially), all of this has to be controlled programmatically. We have function calls like "WaitForObject" and plain "Wait" calls to sync the speed of the automation with the speed of the app's responses. Because of this programmatic control of slowing down or speeding up the automation in relation to the app's response, and the checks needed to make sure the automation does not throw exceptions, automation programmers often need to favor robust but slower code that is almost guaranteed to run against all app speeds. This is one of several reasons why automation might run slower than human execution. You might ask how the likes of QTP handle this situation - even tools like QTP need to deal with these issues. Given the state of the technology, the problem is somewhat more acute in the mobile automation space.
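To make the sync problem concrete, here is a minimal, hypothetical Java sketch of the kind of polling wait an automation script ends up using. The condition predicate, timeout and poll interval are assumptions for illustration, not the API of any specific tool; the point is that robustness (poll, re-check, back off) is exactly where execution time accumulates.

```java
// Hypothetical polling wait - not tied to any specific mobile automation tool.
// The predicate stands in for a tool call such as "does this button exist on screen?".
import java.util.function.BooleanSupplier;

public class RobustWait {

    /**
     * Polls the given condition until it returns true or the timeout expires.
     * Returns true if the condition was met, false on timeout.
     */
    public static boolean waitFor(BooleanSupplier condition,
                                  long timeoutMillis,
                                  long pollIntervalMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;                  // object appeared - proceed with the step
            }
            Thread.sleep(pollIntervalMillis); // this is where the "slowness" accumulates
        }
        return false;                         // robust scripts handle this instead of crashing
    }

    public static void main(String[] args) throws InterruptedException {
        // Example usage with a dummy condition; a real script would query the app under test.
        boolean found = waitFor(() -> Math.random() > 0.7, 5_000, 500);
        System.out.println(found ? "Object found - tapping it" : "Timed out - log and recover");
    }
}
```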
Now imagine long, large and highly repeated testing cycles - a human tester would lose out by the 2nd or 3rd iteration due to fatigue and boredom. Consider the current case of multiple pages and hundreds of fields - how long do you think a human tester can stay focused on doing that data entry? Here is where our "tortoise" (slow but steady) automation still adds value. This slow program does not mind working 100 times over and again with different data combinations - it frees up human tester time and effort for you.
Remember - automation and a skilled human tester both have their inherent strengths and shortcomings. A clever test strategy would combine (mix and match) human and automated modes of exercising tests to get maximum output - information about issues, bugs and how the value of the product might be threatened.

If automation runs well unattended - why bother about execution time?
Many of us are used to sitting for hours staring at automation running to see if it works, passes or fails. If it fails - check, correct and rerun. If the automation is robust and runs unattended - why have someone watching the screen as it runs? Why not run it during non-working hours? Why not schedule it to run at a certain time? This frees up human resources that can be deployed in other areas requiring focused human testing. Isn't this a value provided even by slow-running automation - freeing up human testers? Well designed but slow-running automation can still justify the investment because it can run without bothering you.

How can you get the best out of slow-running automation?
  • Optimize the automation to see if speed can be improved - remove unnecessary sync/waits and "object exists" checks (without compromising the robustness of the automation)
  • Identify bottlenecks in the tool and fix them
  • Identify environmental and data related slowness in the automation and fix it
  • Schedule the automation during non-working hours and save human effort (see the sketch after this list)
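As a rough illustration of that last point, here is a small Java sketch (standard library only) that kicks off a test suite at a fixed off-hours time every day. The runSuite method and the 1 a.m. start time are placeholders I made up; in practice you might just as well use the operating system's scheduler or a CI server.

```java
// Minimal off-hours scheduler sketch using java.util.concurrent - illustrative only.
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NightlyAutomation {

    // Placeholder for launching the real (slow but unattended) automation suite.
    static void runSuite() {
        System.out.println("Starting unattended test run at " + LocalDateTime.now());
    }

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Delay until the next 1:00 AM, then repeat every 24 hours.
        LocalDateTime now = LocalDateTime.now();
        LocalDateTime nextRun = now.toLocalDate().atTime(LocalTime.of(1, 0));
        if (!nextRun.isAfter(now)) {
            nextRun = nextRun.plusDays(1);
        }
        long initialDelayMinutes = Duration.between(now, nextRun).toMinutes();

        scheduler.scheduleAtFixedRate(NightlyAutomation::runSuite,
                initialDelayMinutes, TimeUnit.DAYS.toMinutes(1), TimeUnit.MINUTES);
    }
}
```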


Have you come across automation that runs slower than human test execution speed? What did you do with that automation? Did you dump it? I want to hear about your experiences.


Sunday, December 30, 2012

Where do you stand in this debate?



Inspired by Elisabeth Hendrickson's blog post 

[Updated 25th Jan 2013]
I am disappointed to see no responses to this post, as I expected at least a few responses agreeing or disagreeing. Whenever a post of mine does not get any comments, I think of the following possibilities (thanks to Michael Bolton):

1. The post is not very engaging - there is way too much information in it. Everyone and everything is seeking attention; this post simply failed to get any
2. It's a dumb idea - completely useless
3. The post is simply a question which is either too simple to answer (no one wants to feel insulted by answering) or too deep and intriguing (why bother answering)
4. Why is the author not saying anything? Is this a trick to get some free survey done for his homework?
5. No comments

I will attempt to expand on this topic sometime in the future. This situation taught me something - even no comments will make you think.

Dear readers - thanks for not commenting and teaching me something.

Shrini

Sunday, November 04, 2012

A bizarre idea called "Software testing factory"

"Persistence in the face of a skeptical authority figure is priceless" - Seth Godin

Paul Holland (twitter handle @PaulHolland_TWN) shared this amazing video of Seth Godin on education systems. I listened to Mr Godin talking about how the present system of schools evolved from schools churning out laborers for factories. Alas - even in our software testing "industry", we still need laborers as testers, and companies take pride in setting up software testing factories. This post is about how bad and dangerous the idea of a "software testing factory" is.

According to Godin, about 100-150 years ago schools served a different purpose. He says large-scale education was not developed to motivate kids or to create scholars. It was invented to churn out adults who worked well within the system. Scale was more important than quality, just as it was for most industrialists. A day in school started with "good morning", which represented the notion of respect and obedience injected into students as a virtue. School was about teaching compliance, about students fitting into the larger social context when they pass out. Schools, according to Godin, were established as public education to produce people who could work in factories - to create a set of people who would comply, fit in and follow the orders of the supervisor.

Emerging industrialization brought the focus onto profitable factories, Godin points out. Factory owners thought "there aren't enough people; if we get more, we can pay them less - if we can pay less we can make more profits". When we put kids into a factory that is called a school, we indoctrinate them into compliance. Godin points out another key feature of factories - the idea of interchangeable parts - which, when translated to schools, meant producing people who are replaceable just like a "standard part" of a machine. When it comes to work - if you do more, there is always an "ask" for a little more. This is because we are products of the industrial age. The term productivity was brought to the center of things.

The key idea that attracted me in this talk was the factory and how the factory worked. I strongly believe that software and software testing work is "knowledge work", in contrast to "factory work". Here, thinking humans, in collaboration with other humans and assisted by computers, create the stuff we call software, which has changed and continues to change our lives. Through a wholesale lifting of the idea of the factory - thanks to the strong association of "quality" with the likes of Toyota and the promotion of the idea of "sick-sigma" (Cem Kaner used this phrase first, I think) - we have indoctrinated software people as factory workers.
I am troubled by this. When I ask people - "does what we deal with matter: machines/concrete things in a factory versus abstract ideas and machine instructions? Should software be, or is it, produced like a machine on an assembly line?" - I get no clear response. Many simply think that since our industry (software) is immature and nascent, we must learn from engineering disciplines like manufacturing.

I am fine with learning from other disciplines, as I believe software testing is multidisciplinary - we constantly import ideas from multiple fields such as the natural sciences, maths and statistics, behavioral economics, neuroscience, cognitive psychology, philosophy, epistemology, and the list continues. What I am against is the wholesale and mindless import of ideas from areas that deal with a totally different type of thing - there we must exercise caution.

Coming back to the factory - many IT services companies take pride in saying "we have successfully implemented a software testing factory for a client" or "software testing is now commoditized" - what a shame!!! What happens in a software testing factory? There are dozens of "brain dead" people called software test engineers whose job is to produce test cases, bugs, test results, automation code (sorry, the popular word is "script"), metrics and tonnes of reports. The intellectual pursuit of software testing - which seeks to discover, investigate and report interesting and strange problems in software, and requires a thinking, skeptical and open mind - has been reduced to "mindless" factory work. As a passionate tester, I would never want to be associated with this deadly idea.

Am I biased, as a tester, into seeing my profession as some highly complex rocket science? Is my rational mind blocked or misdirected by confirmation bias? I think that is possible. If I were thinking about software testing as a business - like any other business, say hotels, garments, manufacturing or engineering hardware - I would love the idea of factories. I would want to maximize my profits per dollar of investment. I would want to train cheap labour - teach them how to write test cases, report bugs and automate test scripts. I would then deploy them en masse to a client and charge handsome money in the name of testing. This business apparently works, and it is perfectly legal and by and large ethical.

If I imagine myself as a tester in such a factory (flipping my context from factory owner to factory worker or supervisor) - I see a dark future for myself. Just as factory workers are expected to "comply" and follow a set pattern of work - when the factory owner does not need me, I don't have any skills that I can trade outside the factory. Over a period of brain dead work, I have lost my thinking and questioning mind. Unless I gain the skills to become a factory owner myself (that is, business development and management skills) - I must leave the factory quickly and move to an environment where I can grow my skills as a tester and as a thinking individual.

In short - if you are managing software testing as a business, a software testing factory is good for you. If you are a software tester working in a software testing factory - get out of the place fast, or change careers to become a factory owner or supervisor.

The tester in me roars - I wish for an "end of compliance as an outcome - it is too boring for a curious, skeptical mind to simply fall in line".


Additional Notes: 
Following are a few statements I liked that strike a chord with my belief in "software (testing)" as knowledge work as opposed to factory work:
  • Why would we not want our kids to figure it out and go do something interesting?
  • Are we asking our kids to "connect dots" or "collect dots"?
  • We are good at measuring how many dots we collect - how many boxes are collected, how many facts memorized
  • We do not teach kids how to connect the dots. You cannot teach connecting dots through a dummies' guide or textbooks - only by putting kids into situations where they can fail and experiment
  • Grades are an illusion - passion and insight are realities
  • Your work is more important than how well your answer matches the answer key
  • "Fitting in" is a short term strategy to go nowhere.


Do not forget to read the PDF "Stop Stealing Dreams" by Seth Godin.

Sunday, October 21, 2012

Divisions in Testing, Slotting People - How bad is the idea of schools?



This post is an offshoot of a discussion with friends Rahul Verma and Vipul Kochar on twitter. It started off from a blog post by Rahul on "exploratory testing" - one approach to testing that many in the context driven testing community are working hard to be good at. When Vipul joined the debate, two key things stood out for me, as the following "long" tweet of Vipul's suggests - http://www.twitlonger.com/show/jm95dm

" ...classification, definitions are good. When one starts to use them to divide and slot people, it becomes counter-productive."

Vipul followed up with a detailed post here

Divisions amongst people

Take for example - the idea of schools in software testing by Bret Pettichord.

Rahul wrote a good summary and analysis of the schools of testing way back in 2007. Rahul's main complaint was that the schools concept divides people. My view is different. To me, the idea of schools has been very helpful in identifying myself and my approach to testing as distinct from others I see around me. It helped me to develop my skills in the framework of the context driven testing school. I think testing, as a multidisciplinary field, was (and will always be) divided. It is just that a few refused to recognize the differences. Still worse, some insisted that theirs is some sort of universally agreed way of doing testing.

What Bret did is phenomenal, but at the core he simply named the groups/schools that he saw. In other words, the schools of testing idea did not divide people - it gave "names" to different sets of practices "using" the name of testing. Having names for the things around us helps us talk about them, debate about them, understand them and improve them. That is exactly what Bret's idea of schools did for some of us.

If you disagree with the idea of schools - you might be saying one of these:

"There is one universal way of doing testing hence idea of schools is absurd"
"I do not agree with Bret's classification - here is mine"
"I refuse the idea that there are patterns in testing that are distinct"

So it would not be correct to blame the idea of schools in testing for the "division" in our industry - divisions always existed; we now have one model in which these differences can be named. I also argued with Rahul that "divisions" are good for our craft - they work like having multiple political parties in a democratic setup. With divisions, multiple, diverse ideas can co-exist. I am in favor of division in the testing community, as we need diverse mindsets, ideas and philosophies, each offering solutions to unique situations.

Vipul's post on "religions" and his apparent suggestion on being like "water" is indeed support for the view that "divisions" are good. If there are differences and divisions - cherish the diversity instead of trying to bring unification.


Slotting people, calling people by names

As a strong supporter of the schools concept, what I condemn is slotting people where they don't want to belong or identify. There are factory or analytical school practices, not factory or analytical school testers. Likewise, there is Agile Testing (some form of testing that happens in Agile projects) but there are no Agile testers. There is exploratory testing, and testers can choose to be good at it - but when they master it, they don't become exploratory testers - they are testers with mastery over the approach of exploratory testing.

When people get slotted into groups/labels (for example, if we call someone a factory tester) - to a few it sounds "offensive". Personally, I am proud to be a context driven tester. I don't have a problem with being slotted into a category that Bret proposed. But that is only me speaking. By speaking of myself as a context driven tester, I let others know my testing philosophy and, to some extent, help them know what to expect from me. This label is helpful for me to identify my approach and grow it in a framework driven by the principles of my school.

Vipul approaches this from a different direction - he talks about the dangers and obsession of belonging to a school (akin to the type of fundamentalism that we see in religion). He says "the test matters and the test result matters, not the division". Well, I say - how does one test? With what principles and values does one approach the act of testing? The values, beliefs and approaches that one uses in testing define what Bret called a school. These elements of a school are not independent and separate parts of a tester's life and work. When we become conscious of them, we can work to improve them - add a few, modify a few and delete a few. How can one chase the objectives and goals of testing without an individual value system about testing? If you think young testers struggle to define terms like GUI testing or agile testing, or struggle to belong or not belong to any school - it is a sign that they are trying to find their value system.

While a person can be a FREE thinking person who chooses and adapts, I can always see in that person a subtle value and belief system about the world and about work (testing) - a view. Even the choices of a free thinker are subtly guided by these values and beliefs. Instead of trying to deny the existence of these values and beliefs (under the involuntary pretext of the freedom to choose and adapt) - I urge the likes of Vipul and Rahul to explore and find the subtle values that drive them. Bret's idea of schools and the influences of James Bach, Cem Kaner and Michael Bolton personally helped me to find my values or, to be precise, they shaped my fluid and rather vaguely defined testing philosophies, values and beliefs.

I am proud to stand up as a context driven tester - I can talk about my values and beliefs about testing. While I do this, one thing these great teachers (James, Cem, Michael) taught me is not to get biased by one unilateral way of thinking. I constantly question my beliefs and values - I try to hang around with people who think and work differently than I do. I train to be a critical and rational thinker - constantly looking to beat "confirmation bias".

I am reminded of the famous quote of Bertrand Russell, "Do not be absolutely certain of anything" - so, as a tester, I keep doubting my own ideas and those of others - that keeps me learning.

Shrini

Friday, August 24, 2012

How different Software Industry segments see Testing ...

Consider these views about testing expressed by a few real people, cutting across the software industry segments. You (a tester) might be surprised by a few of these comments - but take it from me - they reflect the true state of how stakeholders see testing.

A manager from a software product company: "We follow the Agile model - every member of the team is responsible for quality and will do a bit of testing. We believe in Agile practices like test driven development, continuous integration and automated unit testing - our code naturally comes out with good quality. We do not employ any "plain vanilla" black box testers. That is a waste of our time. We get all our testing done mostly by developers, or in some cases testers cover the rest through automated testing. We don't have anything called a "testing" phase in our process. We hire testers who are capable of writing production level code - as most of their time will be spent writing unit tests and automation to help developers."

A manager from an IT/captive unit: "We believe in providing agility and value to our customers. Testing is one small bit in that whole process. We don't actually worry about how testing is done as long as it aligns with our business purpose. The bulk of the testing that happens is done by our partners. We constantly seek to commoditize testing and aggressively deskill it so that we can gain cost efficiencies in testing. More than testing skills, we value business domain skills. Testers eventually either become managers (and manage customers, IT services delivery/management and other stakeholders) or become business analysts."

A manager/consultant from the IT services industry: "Testing is all about assuring quality and process improvement. We constantly develop tools and frameworks to help our customers do testing efficiently and cheaply. We provide value driven testing services based on our process maturity and experience in setting up large scale test factories. Our number 1 aim is to reduce the cost of quality - we do it by focusing on tools, processes and domain skills."

A consultant from a software tools company: "Testing is an essential part of the SDLC that can gain significantly from tools - automation tools. Aggressive use of automation can help reduce the cost of testing. Software testing tools help in implementing a software test factory so that non-technical and business users can use them and achieve faster cycle times and enhanced quality. Not to forget our strength in terms of Six Sigma, CMMi and other software quality models. We endorse software quality management through rigorous metrics and quantitative measures."

Now, dear tester - identify where you are working and how you are improving your testing skills to suit the industry segment you work in now or hope to work in in the future. Does this sound similar to the view of testing that you read about in textbooks or hear at conferences? Did you know the software industry sees testing from such a variety of perspectives?

Shrini

Sunday, May 06, 2012

A brief introduction of Test Automation...

I was asked by a blog reader to give a quick introduction to how automation helps in testing. Here is how I replied. I thought this might kick off some interesting offshoots...


"Certain portions of testing such data validation etc can be efficiently verified by automation programs than humans in repeated way (humans make mistakes and often are terrible at repeated executions). By carefully identifying portions of application under test that could be "safely" checked (validated) by automation - you can speed up testing (you can run many test cases in parallel, in the night etc) through automation. 

But beware - automation is a dumb (and humble?) servant - it will do exactly what you ask it to do a million times without cribbing - it does not have intelligence. A good tester can recognize something that is not in the test script and looks like a problem. Automation cannot do this."


Do you like it?
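To make the "data validation" part of that reply a little more concrete, here is a minimal, hypothetical Java sketch of a data-driven check: the same validation applied over many input rows without fatigue. The isValidEmail rule and the sample data are illustrative assumptions, not anyone's production code.

```java
// Minimal data-driven check: apply the same validation over many inputs, tirelessly.
import java.util.List;
import java.util.regex.Pattern;

public class EmailFieldCheck {

    // A deliberately simple validation rule - stands in for whatever rule the app is supposed to enforce.
    private static final Pattern EMAIL = Pattern.compile("^[\\w.+-]+@[\\w-]+\\.[\\w.]+$");

    static boolean isValidEmail(String input) {
        return EMAIL.matcher(input).matches();
    }

    public static void main(String[] args) {
        // In a real suite these rows would come from a file or database, not be hard-coded.
        List<String> inputs = List.of("alice@example.com", "bob@", "carol@test.org", "not-an-email");

        for (String input : inputs) {
            boolean ok = isValidEmail(input);
            // The check reports pass/fail mechanically; noticing *unexpected* oddities is still a human job.
            System.out.printf("%-20s -> %s%n", input, ok ? "PASS" : "FAIL");
        }
    }
}
```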

One offshoot I am reminded of when I wrote this piece - automation is like people trying to lose weight. It requires patience, discipline and dedication. There are many quacks operating in both the automation and "weight loss" industries that promise "overnight" benefits.

If you are aware of how weight loss works or does not work - you can safely extend the analogy to benefits of automation.

Do not expect your testing or your application to become slim and trim with automation overnight and, most importantly, do not expect it to remain so with no ongoing investment. The latter part is something neither automation consultants (especially those who sell tools) nor the folks who run the weight-loss industry will tell you.




Shrini

Sunday, April 01, 2012

Testing is Dead - in which world?

A few weeks back I participated in the VodQA event by ThoughtWorks. It was a day filled with lots of power packed sessions and discussions around the topic of testing - sorry, QA (that is what TW calls testing).

My talk was about the alleged death of testing and the implications of that claim for people who live by and make their living from testing.

The slides of the talk are here and the video of the talk is on YouTube (thanks, TW).

The folks organizing the event did a wonderful job of arranging a platform and having people exchange their views on testing. There was an "open house" where an ad hoc group of people assembled to discuss a topic that one of them wanted to talk about. There was passion and energy all around. I said to myself - in such an assembly of about 50-70 people, who could believe "testing is dead"? Testing was a very real thing for the people at the event.

One thing that I wanted the listeners of the talk to take away was this idea of "two worlds of interpretation" - the software makers' world and the software users' world. More about that later in a separate post.



Saturday, March 24, 2012

Learning from Tenali Raman's crows ...

As a kid, like many in the southern part of India, I grew up listening to stories of Tenali Raman - a wise 16th century court-poet of King Krishnadevaraya of the Vijayanagara empire. Tenali Raman is also known as Vikat Kavi - meaning intelligent poet. Birbal from King Akbar's court enjoys a similar cult status in kids' stories in India. This story of counting crows, which I narrated to my 8 year old daughter, made me realize how real Tenali Raman's crows are in our day-to-day life in software.

First, let me quickly run through the story. One day the king throws a strange puzzle at Tenali - asking him to count and report the number of crows in the city. Tenali thinks for a while and asks for two days' time to come up with the answer. After two days, he comes back and reports to the king that there are one lakh (10 lakh = 1 million), seventy thousand and thirty three crows in the city. At first, the king is frozen and does not know how to respond - after a while, recovering from the shock of the answer, the king checks if Tenali is sure about the answer. The king further says that he will conduct a counting (recounting?) and if the number does not agree with Tenali's number, he (Tenali) will be punished. Tenali being Tenali, he responds by qualifying his answer. He says it is possible that the recounted number of crows might differ from his number. If the new number is less than the old number, it is because a few of the city's crows have gone out of station (city) to visit their relatives in nearby cities. If the new number is more than the old number, the additional number is due to crows from nearby cities visiting their relatives in Vijayanagara city. Listening to this, the king has a hearty laugh and realizes the flaw in the assignment/problem. As it happens in all Tenali stories - Tenali gets the king's praise and some prizes for the witty answer.

Now, let us come back and see how this crow metaphor applies to what we do as project managers, test managers and testers in our day-to-day work.

There are entities we deal with that are similar to crows - in the following respects:

1. Counting/quantifying is a prized puzzle
2. Counting is asked for by an authority, a boss, whom you cannot say "no" to (saying "no" can cost you your job or earn you the label of "incompetent")
3. Often you can fake a number
4. There is no easy, sure way to verify/validate the count
5. Even if someone does a recount and comes up with new (different) count - you can always "explain" the discrepancy, like Tenali did.

One example that comes to my mind is the count of test cases. Typically, during the test estimation process, as a test manager you would be asked "how many test cases could be written for a given set of requirements?". The boss would then do the required math to confirm the number of testers required and the time required to execute the estimated number of test cases (note - the time required to "execute" test cases, not to test). So, wear the hat of Tenali - throw up a number. If asked, show your working (be sure to have your working). You would be OK from then on.

There are things we deal with in software that cannot be counted the way we count concrete things. Software requirements, use cases, test cases, lines of code, bugs, ROI from automation - these are abstractions, not concrete objects. Counting them is akin to counting crows as in Tenali's story.

[Puzzle : Prove that ROI from automation is a Tenali Raman Crow count]

Cem Kaner says executives are entitled and empowered to choose their metrics. So, the King was perfectly right in asking Tenali to count and report the number of crows - though the King's objective in the story was not to make any important decision for his kingdom. In any case, a crow count metric was sought.

What can a tester/test manager do when asked to count "crows"? While our community develops models and better alternatives to "imperfect metrics", we need to tread a careful path. We should provide alternatives and possible answers to crow counts.

I have come to realize that refusing to give the count might be counterproductive in many cases - trying to ape Tenali Raman might be useful. The need for quantification is here to stay - years of persuasion and reasoning about why counting can be bad in some cases have not managed to contain the problem.

What do you think about "Pass/Fail Counts"?

Shrini

Wednesday, March 21, 2012

My Views on Testing certification : 2012

A reader of my blog, Arpan Sharma, writes: "What's your take on certifications these days? I see you wrote about this in 2008, which is almost 4 years ago. Do you think the landscape of certifications has changed in recent times?"

Arpan - thanks for writing and reminding me that my stand on certification on this blog is about 4 years old now. It is interesting that you are checking with me to see if I have changed my views. Here is how I summarize my current thinking on certification.


1. First of all, the person seeking certification should be absolutely clear about what they expect the certification to give them - knowledge, skill, skill enhancement, marketing value, a job, an interview.

2. Certifications that do not observe and qualitatively grade a tester in action, "while doing testing", cannot guarantee a certain level of skill in testing. Employers, recruiters, hiring managers - please take note.

3. If you want to learn how to do good testing and how to gain skills across the broad testing landscape - certification is not something you should look for.

4. If there is a certification that gets you a job in a given situation/context, or gets you shortlisted for an interview, you should consider taking that certification. But be aware - once you get your job, you are on your own. You will then be required to display (depending on the type of organization and the nature of the job) skills on the job. The certification's role ceases there.

5. Be critical about what certification material and tests tell you - question them. Form your own ideas and logic about how things work. Do not take everything that is taught or that you read as part of a certification as "universal truth". Why is this important? Only by being critical of the certification course content can you decide what value you intrinsically gained from it and what already existed in you.

6. Reputation is everything in today's world. You gain professional reputation by demonstrating your work and skills to your employer and to the outside world (through networking). Building reputation takes time and real good work. People with confidence in their skills and reputation do not require a third party to endorse their level of skill. In today's world, people with skill and reputation don't need certification. What does that tell you about certification?

7. Take special note of qualifiers like "Advanced" when applied to certifications - check out what is advanced and how. More often than not, it is just more "jargon-laden".

#4 and #5 especially apply to freshers looking for /some/ job and to folks with 1-3 years of experience who either had some software job or lost a testing job.

In terms of the landscape of certifications - I don't think there has been a change. The prime motive of certification providers is to make money - fast and cheap. That has only intensified with so many job seekers. That is fine as a business objective - but we, the target audience of such business ventures, need to be clear about what we want from certifications and how capable these certifications are of delivering on their promises.

I repeat what I said earlier - if you want to learn, acquire skills, enhance skills in testing - certifications are the things that you should avoid. There are better, cheaper ways of doing that.

Did I answer your question Arpan?

Shrini

Thursday, March 01, 2012

Patterns of weakness in approaches to testing

I was reading this testing round table discussion and thought it might make a blog post. Here I go...

To me, the biggest weakness is the perception or idea about what testing is  and why it is required.

Here are a few examples of how companies treat testing.

1. Something that is avoidable to a large extent, or can even be eliminated, if they could get their programmers and analysts to get the spec and code exactly right. The lousy work that these folks do during the SDLC creates the need for testing.

2. Quality Assurance - a straight out-of-the-box comparison with a manufacturing assembly line. For these folks, testing is all about process and nothing else. If you get the process right, you are done. It does not make any difference who does it and when - all they need is to get the process script right.

3. Building quality from the ground up - a variation of #1 above. A growing group of people think that if you have automated tests (checks, actually) you don't really have to worry about testing. You are building quality from the ground up - you cannot test quality in, you need to build it in, right? So the poor testers will not get to define and manage testing (under the name QA).

4. Testing? What? The whole-team approach. This is a creation of the Agile model. Here, testing is everyone's responsibility. There goes "testing" out of the door as a specialist's job. When testing is something anyone in the team does, it is like any other project task.


5. Don't forget this popular rhetoric - testing (the phase) is dead. That is the biggest weakness in approaches to testing. What bigger weakness can there be about something than saying it is dead?


The whole "testing is dead" idea is built around these beliefs and notions (test yourself to see whether you agree or not):


1. Testing (the phase or role) makes developers complacent - a safety net - remove it to make developers responsible.
2. With so much focus on automated unit testing, test driven development and continuous integration, developers are producing quality software anyway.
3. Finding problems is no big deal; we know where the problems are (this is what James Whittaker said in his EuroSTAR 2011 keynote). So what do we need testers for?
4. With the cloud as a popular software delivery model, you don't bother about bugs leaking. The time and effort to fix and turn around a bug is ridiculously LOW - why bother testing?
5. What do you have crowdsourcing and beta testing for? Throw your stuff at users - let them use it and tell us where the bugs are (there should not be many, as we are a group of smart developers and we know where the bugs are).



Thus, the weakness in testing arises out of how we think about it and what we want it to do for us. Thinking idealistically about how software is made and used, applying models from other fields without properly customizing them, and removing or de-emphasizing the human element in the system - these are the key patterns of weakness in testing approaches.





What do you people think?


Shrini

Saturday, November 12, 2011

Cause and Effect - Non Linear Systems

Here are three examples where cause and effect do not appear to line up as expected. Take a look.

  1. Build a flyover on a busy road hoping that traffic will ease - a personal experience [traffic will actually increase with the flyover] 
  2. Dip a thermometer in boiling water. What happens to the temperature reading? - adapted from Gerald Weinberg's book "An Introduction to General Systems Thinking" [the thermometer will initially show a lower reading due to the difference in thermal expansion between the mercury and the enclosing glass tube] 
  3. Making cars safer will cause drivers to become more aggressive and rash - the "Peltzman Effect", adapted from the Freakonomics post "What happens to your head" 
In each of these cases, the effect, or what is expected, is not what happens - but the opposite. There can be explanations - the lesson for testers: think holistically, develop a systems thinking mind.

Why does this happen? I think we often use analytical/reductionist thinking - we just break/divide a thing into its constituents (atoms) and study them. This linear thinking of single cause and effect (usually taken one at a time) can help us understand some aspects of an object or phenomenon. With non-linear systems such as societies, political/cultural systems, business systems etc., this simple cause-effect thinking simply does not hold good. So - think in terms of systems and their interactions.

Shrini

Thursday, August 11, 2011

Do statistics lie?

"There are no whole truths; all truths are half-truths. It is trying to treat them as whole truths that plays the devil" - Alfred North Whitehead

I received an email this morning on this topic of statistics; the content and overall theme of the mail boldly proclaimed "statistics do not lie". When we speak of statistics in the software world, we typically refer to metrics and various numbers representing effort, number of defects, test cases, cost etc. So, instead of talking about statistics, let's talk about "Do numbers lie?"

We have two items here - numbers, and lies or truth.

Let us start with numbers. What is a number, after all? The need for numbers is probably related to (or caused by) the need for counting. In Egypt, from about 3000 BC, records survive in which 1 is represented by a vertical line and 10 is shown as ^.
According to one historical account, pagan priests needed to calculate the frequency of natural phenomena. One of the best known examples of this period is the Stonehenge stone circle in Britain, built by the Druids as a kind of celestial observatory around 1800 BC. For cave men of the prehistoric age, counting probably facilitated sharing food items. The pictures in caves and archaeological finds give us an idea that people in those days counted by drawing lines to indicate "how many". Counting a few items, let us say 6 fruits, fits this idea of counting with fingers or counting by drawing lines. But how would you count the number of fruits on a tree, or the number of people in a village? The discovery of the place value system allowed counting items in the 100s and 1000s. The Hindu-Arabic numerals 0…9 have been known since probably 300 BC. Before Hindu-Arabic numerals, people used "Roman symbols". Interestingly enough, the Greeks and Romans did not know the idea of "place value". And human civilization evolved.

Gonitsora - an initiative of a few students of Tezpur University, India - carried an article on the history of counting that said:

"The first motivation for people to create numbers was the human desire to know the manyness of a set of objects. In other words, to know how many duck's eggs are to be divided amongst family members, or how many days until the tribe reaches the next watering hole, how many days will it be until the days grow longer and the nights shorter, how many arrowheads does one trade for a canoe? Knowing how to determine the manyness of a collection of objects must surely have been a great aid in all areas of human endeavor."

When we say "9", what does that indicate in the purest and most objective sense? Nothing. OK, let us say 9 cars? What does that mean? Extending it, "9 cars parked in front of a house" - what does that mean? "9 cars parked in front of the house of a celebrity in London"? You might say "that might not be significant or interesting", as it is common for a celebrity in London to have that many cars. Now if I say "9 cars parked in front of the house of a politician in Delhi" - something surprising, or some planning around a political discourse, might be happening. Now if I say "9 cars parked in front of the house of a poor man in a remote village in Somalia" - what does that mean? You would really get interested to know what might be happening in that house, who came in those cars, where did they come from? Did this poor man steal those cars? - and all sorts of questions.

Pause for a while and think - did the number 9 reveal any truth in each of these situations? Is the number 9 capable of telling any truth, or for that matter a lie, at all? A number is made meaningful and relevant by the set of objects, people, ideas and events that it points to. Thus it might be totally meaningless and absurd to say "numbers don't lie"; as a matter of fact, numbers are incapable of telling truth or reality independent of context, observers and the recipient of the information.

Let us talk about truth. What is truth - a question that is at the base of all philosophy, science and every root of what we call "knowledge". For the purpose of this post, let me use this (very tentative, provisional) definition - "truth is a qualifier that we can attach to a piece of information about which a group of people, by and large, do not disagree". Back to the software world - give me an example of truth. One might say "this month there were 9 sev 1 incidents in production". Is this a truth? You may point to the live incident tracking system, show the list of incidents reported this month and say "look, here is the truth, there are 9 live incidents". Let me apply my "provisional" definition of truth here. Let us call 10 people - a few programmers, a few business analysts, a project manager, a business unit head, a customer and a sales manager. Let us put these people in 10 separate rooms, show them the list of 9 live incidents and ask them "is this information true?" Let us record each response. What do you think those responses would be? Will all of them agree on the truth of the 9 live incidents? I guess many would say "Yes, I know there are 9 live incidents this month. But ……" What follows after "but" is each person's viewpoint or story of how they view (defend, attack, frown at, shout about, feel sad about etc.) those incidents. How do you extract "truth" from this beautiful, "god-like", impartial number "9" quantifying live incidents?

You would soon have a consultant selling a version of "cost of quality", attaching some dollar figures to these 9 incidents and selling a multi-year "transformational" deal to reduce the cost associated with such incidents. Should you believe him?

Often when executives say "I need statistics, numbers" - it seems to me that they are really (or should be) interested in the stories behind those numbers; they are (or should be) least bothered about the numbers themselves. Numbers are masks for the stories, events and emotions that they represent.

Numbers and statistics are incapable of telling anything in the absence of context, stories, people and their motivations. For now, I can say the issue of whether statistics tell lies (or truths) is settled - they don't tell anything.

An exercise: when I was preparing this post, a colleague of mine, Joy Chakraborty, challenged me and said "company financial results" are objective truths about a company's performance (he did acknowledge the Satyam saga and other irregularities showing how company financial results can be manipulated). He simply asked how the numbers in the statement "Goldman Sachs reported net earnings for Q2 2011 of 1.1 billion USD - 77% up over the same quarter of the previous year" are not objective truths. What do you say?

Is Pythagoras' theorem true? How about Einstein's general theory of relativity?

Monday, June 27, 2011

When Testing Fails ...

Carl Sagan once famously said "Science is a self correcting process - an aperture to view what is right". Carl Zimmer, in an article in the Indian Express, says "… science fixes its mistakes more slowly, more fitfully and with more difficulty than Sagan's words would suggest. Science runs forward better than it does backward." According to Zimmer, checking the data, context and results of a published scientific work is not of much interest to journals that take pride in being "first" to publish ground breaking new research. For scientists scrambling for grants and tenure, checking published work is not attractive and is often an exercise not worth the effort. As a result, Zimmer says, original work/papers often stand with little or no investigation or scrutiny by peers. This, surely, is bad for science. Zimmer suggests that the community focus on "replication", setting aside time, money and journal space to save the image of science as a self correcting pursuit of knowledge, in Carl Sagan's words.

Well … does this have anything to do with software testing? I recall James Bach once saying "it is easy for bad testing to go unnoticed". Like science and the scientific community's social fabric, the software testing world (the software world in general) has built up layers of wrapping and packaging. It is easy to find some reasons, in one or more of these layers of the ensemble of social systems, for a few missed bugs that cost a few days of outage of, say, a popular e-commerce website. Any query or investigation into the effect of the outage or the episode of missed bugs would set up a nice blame game looping all the way through testers, developers, business analysts, test data, environment, lack of technical/business knowledge and a host of other things. As happens with published scientific works, hunting down the culprit or group of culprits would be a time consuming job. In any case it is a regressive job and takes valuable resources away from many "progressive" tasks. Right? In Zimmer's terms, spending time on production bugs is similar to running backwards. Does testing work well when running backwards? Do stakeholders like it when testing is running backwards? I am not sure whether proving published research wrong, or publishing a new perspective on the basis of existing work, can be as productive as hunting down a missed bug and trying to locate the reasons for its birth in the first place.

Process enthusiasts, quality assurance people and Sick-Sigma (oops… Six Sigma) people might protest and insist on a full blown root cause analysis of all bugs reported in production. SEI/CMM folks might make it mandatory to document all lessons learnt from missed bugs and refuse to sign off on project closure unless this is done. In spite of this - in the fast paced life of software, where the cycles between conception and real working software are shrinking and the number of platforms keeps increasing (don't forget the mobile platform) - who has time to look at root cause analysis reports and all those missed bugs?

I remember a business sponsor once saying - "I can't wait for testing to complete; once the development is done, I put the stuff into production. If something breaks, we have enough budget provisioned to take care of any production failures." Here, failing to put (even buggy) software into production is seen as more disastrous than the bugs themselves; waiting for "perfect" software to be developed is not an option. Back to the layers of the social system in software testing - it appears to be easy to hide bad testing. Unless you screw up very badly, the chances are that your stakeholders will never notice what good testing could get them.

I have often wondered, looking at the testing that happens in some organizations - how are they managing to stay afloat with such bad testing? The reason is probably that when testing fails, it is difficult to attribute the failure to testing and call it as such. It requires courage. How many testing leaders are willing to admit their testing sucks, without hiding behind fancy looking metrics, root cause analysis reports and charts? That is the effect of "software testing" being a "social process".

Tuesday, June 07, 2011

Sure Ways to Reduce Test Cycle Time through Automation


The world is simple; we complicate it for the sake of it - says a friend. There is a simple concept called "automation" and another one called "testing cycle time". Why can these terms not be simply understood without much fuss and inquiry? - he often argued with me. This friend is a manager and works for an IT services company. His constant pouncing on me with this topic of test-cycle-time-reduction-through-automation, and the related hype, frustration and feeling of achievement, made me think deeply about the basic laws or golden principles that govern this phenomenon of cycle-time reduction. For the benefit of my blog readers, here I make them public. Read them, understand them, implement them and be blessed. A caution here: any criticism or cynicism about these laws will have harmful consequences for the beholder. These are golden, axiomatic principles of test automation!!!!

First principle - "About Testing": Strongly believe that software testing is a deterministic, highly repeatable and structured process - somewhat akin to a step-by-step procedure to produce, say, a burger or a car. You have to believe that, given a fixed scope of testing, it always takes a finite and fixed time (effort) to complete testing. Needless to say, you have to have absolute faith in testing processes and standards. Your faith in the power of processes and standards to make testing predictable and repeatable is an important success factor. It is also necessary to abandon any misconceptions you might have about the relationship (or dependency) between testing and automation. You should treat testing truly as a mechanical, step-by-step and repeatable process to ensure Quality. Positive ideas about metrics to improve testing and consistency are a sure bonus. You should resist and fiercely oppose any attempts to link automation and testing, claiming that automation can and should work independently.

Second principle - "Definition and Meaning of testing cycle": Never challenge or probe the definitions and meanings of the term "testing cycle". Keep it (testing cycle time) loosely defined so that you can flip in any direction when confronted by a skeptic challenging your claim of cycle time reduction. The vaguer the terms "testing cycle" and "cycle time", the higher the chances of meeting the goal of reducing cycle time through automation. Any variables and factors that make the "testing cycle" somewhat unclear should be ignored and not discussed. It is important to have faith in the fact that "testing cycle time" is a universally known term and does not need to be redefined in any context. You will be looked upon as a genius if you simply talk about "cycle time reduction" and omit the unnecessary qualifier "testing" or "test" (as in "test cycle time").

Third principle - "Playing to the gallery": Use words like "business needs", "business outcome", "business processes", "success through business alignment", "time to market" etc. liberally during all communications related to automation and testing cycles. Confront any opposition by skeptics about automation and its connection to cycle time by appealing to authority - say "Business/the market needs it". Strongly believe in statements like "Automation will help release products and services faster - and hence will improve customer satisfaction and the company's bottom line". Your success in achieving cycle time reduction depends upon how often and how strongly you make reference to "business" and "business needs". These powerful words do all the magic required. You need the right rhetoric to spread the message. Another important keyword here is "market". Make sure you thoroughly mix up and use the terms "testing cycle time" and "time to market" interchangeably. The more you talk about "time to market" and how automation can directly help it, the more authentic and convincing you look.

Fourth principle - "Motivations and Incentives": Believe economists when they say "incentives" drive change and motivate people. How can this not work here? The last but not least of the measures one needs to be aware of is providing incentives for people to reduce cycle time through automation. Define the performance goals of the individuals involved in the automation in terms of the cycle time they reduce through automation. Penalize those who question the idea and fail to meet the performance goals. Automation initiatives tend to be successful in their stated goals of cycle time reduction when they are integrated with the performance goals of the individuals involved in the game - especially the automation team. Also make sure (when automation is done by a decentralized team) to define and impose the performance goals ONLY on the automation team, and suppress any attempts to include the manual testing team. After all, automation and testing are not related in any way and it is the responsibility of the automation team to bring about cycle time reduction. Right?

If you find these principles and ideas rather strange and are intrigued - one of several possibilities is that you might be working in a software product organization as opposed to an IT or IT services organization. The folks in software product organizations that build software unfortunately approach automation somewhat "differently", and cycle time might not be a very familiar term to you.

If you are not from a software product company and are still confused about the ideas in this post, or you think they are inconsistent or incoherent with each other - do comment. That is a good sign that you are thinking about the topic.

Let me know if there are other ideas and principles that you might have successfully used to reap benefits of cycle time reduction through (and only through) automation – I would be more than happy to incorporate them with due credit.

Shrini

Wednesday, May 25, 2011

All wannabe software testers out there …

An anonymous comment posted on my blog read “hi i am non IT background i want to do software testing course in india. if some give me some of the institution in india who teach very well that help u when u get job for software testing and ISTQB test exam.”

Nothing new here; I regularly receive mails asking for suggestions on how to get onto a fast track for software testing and get a job. This particular comment has three dangerous things that I see - which a new entrant to software testing should be aware of and avoid.

"Software Testing course" - Let me tell you from my experience - there is no such course in the world that can make you a tester worth a job overnight. Any course that claims to do so is a complete hoax and fraud. The shorter the course duration and the taller the claims it makes, the deadlier it is. Folks - beware of courses that claim to get you a tester's job - please don't fall into the trap. Another dangerous mix or variation here is the claim of "teaching automation, or one or more world-leading automation tools". If you want to be a software tester - no matter whether you are from an IT or non-IT background - don't waste your money and/or time on such courses.


"ISTQB" (replace this with any popular testing certification) - while much has been written by my colleagues, and there are many real life (not so pleasant) experiences out there, I want to touch upon one thing. ISTQB, or for that matter any testing certification, WILL NOT teach you how to DO testing and how to gain expertise at it. That is their limitation. Please understand it. Considering certifications as businesses making money - teaching testing and assessing the testing skill of a candidate in real time is not their cup of tea. Real certifications and exams that do subjective assessment of testing skill are not scalable, and hence certification people can't make fast money from them.


ISTQB and other certifications have done one thing well - marketing. In India, many organizations and recruiters insist on one or another testing certification as a must for an entry level testing job. In some others, attaining certifications is a criterion for promotion. A sad state of affairs, though. Sad - because it sets a wrong precedent. It creates wrong expectations among entrants and the companies that hire these newbies. It creates a wrong image about software testing in general and at times it trivializes the craft. I have people telling me how easy it was to pass a certification exam when they had no prior background or practice in testing. At most, you get to know some terms used in testing and their fixed meanings as used by a specific group of people. Worse - in some cases they are taught as though those terms have universal meaning and acceptance.


“Getting a job” – this is the third danger wannabes should avoid. While in some cases you might be lucky and get a job on the strength of some testing course or certification (the hiring process for software testing in India needs a lot of improvement – but that is a different topic), you will not survive long unless you practice real testing – get your hands dirty, and learn to feel and think like a tester. I relate software testing skill to that of a musician. If you want to be an expert guitarist – what would you do? Take a 2-week course and a certification (theory) exam and claim a professional guitarist job? Just as a 2-week guitar crash course will not make you competent, a software testing course will not prepare you for a professional career. Similarly, learning a few words of vocabulary related to music and the guitar will not help you give performances – though knowing common words and their meanings can be helpful. It is not much different for software testing – an ISTQB certification can help you learn terms like “regression testing” and “severity of a software bug” – but those are mere words. The role of ISTQB ends there, and the real work of practicing testing starts.

In a nutshell – if you are reading this post and are someone trying to make it into software testing, wanting to get an IT job in software testing – here is what you need to keep in mind.

Don’t do these things:

  1. Look for or ask for a software testing institute that gives a software testing course (the shorter the better; one that guarantees a job – the “gem”).
  2. Take a certification – especially when you have no clue what software testing is.
  3. Look for a job on the basis of 1 and 2 above.
  4. Take a crash course on automation or automation tools in order to get a software testing job.
  5. Take shortcuts.

Do these things (a few of the things that personally helped me in my career) – try them:

  1. Have a time horizon of 1–2 years at the minimum and complete dedication to learning and practicing software testing.
  2. Get a mentor – there are many willing to mentor if you show real passion and dedication. Read software testing blogs and engage in conversation. Use social media to your advantage – blogs, Twitter, Facebook, LinkedIn. Make your presence felt as a hungry, passionate newcomer in software testing – let the world notice you. Build a reputation.
  3. Practice testing – a platform I recommend is weekendtesting.

Any questions?

Shrini

Tuesday, April 12, 2011

How IT deals with Test Automation ...

Here is a short post on how test automation is dealt with in IT/IT services organizations.

For example: a typical test cycle.

Before Automation -- (AS IS statistics - documented)

Number of Manual test cases = 300
Time taken to execute these test cases = 10 Person days
Tester's productivity = 30 Test cases per day (derived data)

After Automation -- (Assume 150 of these test cases are automated)

What do you think the test cycle would look like?

Number of test cases Automated = 150
Number of test cases requiring manual effort = 150


Time taken for 300 test cases = time for automated execution + time for manual test execution

= (how much time the 150 automated test cases take to complete) + 5 days (at the productivity of 30 test cases per day)

= 1 day + 5 days (assuming automated tests run in 1/5th of the manual execution time)

= 6 days

Hence a business case will be developed to justify the automation investment.

The business case would say: with automation, we can save 4 person-days of testing effort per cycle. Depending upon how many cycles of testing are lined up, we can break even on the automation investment.
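To make the arithmetic concrete, here is a minimal Java sketch of the same business-case calculation. The automation build cost (20 person-days) and the 1/5th speed-up factor are my own illustrative assumptions for this sketch, not figures from any real project.

// BreakEvenSketch.java – a hypothetical illustration of the cycle-time arithmetic above.
// The build cost and the speed-up factor are assumptions, not data from a real project.
public class BreakEvenSketch {
    public static void main(String[] args) {
        int totalTestCases = 300;
        int automatedTestCases = 150;
        double manualProductivityPerDay = 30.0;   // test cases a tester executes per day
        double automationSpeedup = 5.0;           // assumption: automated run takes 1/5th of manual time
        double automationBuildCostDays = 20.0;    // assumption: person-days spent building the scripts

        double manualOnlyCycle = totalTestCases / manualProductivityPerDay;                         // 10 days
        double manualPart = (totalTestCases - automatedTestCases) / manualProductivityPerDay;       // 5 days
        double automatedPart = (automatedTestCases / manualProductivityPerDay) / automationSpeedup; // 1 day
        double mixedCycle = manualPart + automatedPart;                                             // 6 days

        double savingPerCycle = manualOnlyCycle - mixedCycle;                                       // 4 person-days
        double cyclesToBreakEven = Math.ceil(automationBuildCostDays / savingPerCycle);

        System.out.printf("Cycle before automation: %.1f person-days%n", manualOnlyCycle);
        System.out.printf("Cycle after automation : %.1f person-days%n", mixedCycle);
        System.out.printf("Saving per cycle       : %.1f person-days%n", savingPerCycle);
        System.out.printf("Cycles to break even   : %.0f%n", cyclesToBreakEven);
    }
}

With these assumed numbers, the sketch reports a 4 person-day saving per cycle and a break-even after 5 test cycles – exactly the kind of neat figure such a business case is built on.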

Oh... what a great feeling... This is what I see day in and day out.

Let me tell you one thing... executives and managers love this stuff... so neat – objectively stated in numbers, justifying in monetary terms why we need automation.

But there is some discomfort, a terrible feeling, in me... what do you people think we are missing here?

Sunday, March 13, 2011

Programmers make Excellent Testers - Arguments and Counter Arguments

Janet Gregory’s post on programmers as testers prompted me to write this one. She mentions in her post that “Programmers make excellent testers”. So I asked myself whether I could think of a few reasons to support the statement and a few others to refute it. Here I go…

Programmers make excellent testers because:

  1. Programmers understand their own code and their fellow programmers’ code /very/ well. Knowledge of the code helps them test /it/ better. The creator knows best about his/her creation, having created it.
  2. Programmers understand /better/ the technicalities of the platform (anything other than the “software application under test”) on which their code runs.
  3. Programmers can /write/ /better/ automation code that tests their product code. Writing automation is part of testing, right? Or test-driven development?
  4. Programmers, being closest to the code, can find and fix a bug in the smallest time possible – hence they can do efficient testing. The first opportunity to find a bug in the code is with the programmers. If there is a problem in the code – they are the ones who should know about it FIRST.
  5. … I can’t think of a 5th one … probably need some help...

Now, let me cross over to other side.

Programmers make not-so-good testers because:

  1. Programmers are /usually/ blind to their own mistakes. Their testing is limited by cognitive bias (confirmation bias).
  2. Programmers are /typically/ good at “construction” work – getting from spec to working code – which is not a key tester skill. Programmer testing is more like a cook tasting the food he/she made before serving it.
  3. Programmer testing is /typically/ happy-path testing – where would they get the time for “out of the box” testing, unusual paths, usability, security and performance related tests (unless explicitly called out as part of the specification)? Programmers /often/ do not see the big picture.
  4. Programmers, all through their professional lives, work to improve their coding skills – so testing is part-time work for them (which programmer would work on improving testing skills unless he/she decided to become a full-time tester?).
  5. It is hard for programmers to /think/ like users (the many types of them) – their mission of Spec2Code limits them to thinking in terms of code.

A bonus: typically, programmers hate or avoid testing (other than writing automation – which is again coding work) as far as they can. Many would say “testing is not my cup of tea, but I need to do it since we are all responsible for quality as a team”. Programmers can’t make excellent testers simply because, after all, it is not their job.

In a sport like cricket there are specialist batsmen, specialist bowlers and also all-rounders (who can do both equally well). That is not true for testing.

Someone might say Janet’s context is the /Agile/ development model – well, how does that change my “for” and “against” arguments here? How much does that matter?

Update: Pete Houghton (Twitter @pete_houghton) mentioned that debugging is a form of testing and that programmers do it well. To me, debugging is the act of hunting down a reported or known problem and fixing it. Thus debugging follows testing – it is not a form of it. By going through the process of debugging repeatedly, programmers gain an understanding of how they make mistakes and how to avoid them. That also helps them come up with interesting ideas about testing, and that is one thing that helps them be better testers.

BTW, working in customer support probably gives the best experience for a tester, through exposure to a wide variety of usage patterns (many of them outside the scope of specifications). End users, unfortunately, do not use applications as per the specification or user guide.

Shrini