Sunday, November 27, 2016

Tester or Laborer?

A friend of mine sent a link to an article on PMP and project managers that beautifully brings out an aspect of our profession, testing. Are we knowledge workers paid for our expertise, or laborers?

How does what Stuart is saying about PMP and project management apply to testing? I believe that, more than certification, the testing profession is hurt by the way we define testing poorly and adopt a model of testing that eliminates the need for skill and focuses on mindless repetition of documented procedures.

Time to reflect. If we accept a definition of testing that systematically undermines the skill element and focuses on process, tools, metrics and so on - there is no doubt that we will become laborers.

Is testing rule-based?

How much of good testing is rule-based?

Saturday, July 09, 2016

Testers are human ... so are Programmers

One important aspect of us humans as testers or programmers is how our day-to-day happenings affect our work at the office. While this is no different for software folks than for any other profession that requires "presence of mind", being occupied with thoughts about the past or future can lead "knowledge workers" to make mistakes and/or forget things.

An incident that occurred this evening made me realize how important it is for testers to be "present" while testing, so that we do not miss things and make mistakes. I went to a nearby shopping mall with my family. While entering, I had an argument with a fellow who, while reversing his car in the parking lot, happened to hit my car. With that incident fresh in my mind, I passed the parking ticket counter, collected the ticket (my mind still full of the car incident from a little while ago) and gave it to my wife. I generally have a designated place in the car where I keep such tickets. This time my wife kept the ticket in a place that I generally cannot reach from the driver's seat. I did not mindfully record where it went, nor did my wife clearly remember where she kept it. A few hours passed. While returning, my wife and kids went to a nearby place and asked me to get the car and pick them up. While walking back to the parking lot, I was confused about where the parking ticket was - the thought of the parking ticket was all over my mind. When I reached the car, I searched my usual places and did not find the ticket. I panicked at the prospect of paying almost a full day's parking fee instead of a few hours' worth. I did a few more rounds of checking around the driver's seat, the usual places I keep tickets, and the passenger seat - no ticket. I finally called my wife to check if the ticket was with them; I was told the ticket should be in the car. I gave up, paid the full-day fare and came out of the parking lot. When my wife got into the car, she reached into the glove box on the passenger side and handed the ticket over to me.

Why did it not occur to me to check the glove box? Why did my blocked mind not contemplate the various possibilities and locations for the ticket - after all, a car is not such a big place? I guess two things happened. One, the argument with the other driver at the mall entrance so filled my mind that I did not mindfully register where my wife kept the ticket; and two, I gave up easily, before exploring all my options.

What did I learn from this incident that I can apply to testing?

Good testing is about having a wide range of testing ideas to cover the mistakes that other folks make while constructing software. Programmers, business analysts and others can make mistakes like I did. I urge testers to be mindful while testing and designing test cases, and to watch out for the mistakes/misses that might lead to bugs. Every now and then, put your mental abilities of test idea generation to the test, and develop the skill of looking for misses and mistakes. Practice mindfulness and be vigilant at all times. This helps in your personal life outside the office as well - you yourself become less likely to make mistakes.
This will save time, money and rework, and will give you a peaceful life. What's more - you can do the same for others.

The role of human emotions in software development and testing has been a point of discussion at many tester meets and conferences. I guess the larger software community needs to acknowledge this and develop measures for being mindful.

I suggest mindfulness meditation and concentration exercises for testers with a high level of mental activity (more often than not, mental noise) - like me. Being mindful and vigilant at all times now seems to be a core skill and capability for testers.


Sunday, May 22, 2016

Chocolate and Prayer - An Anti-Pattern for BDD

In a school, there was a daily morning practice, or ritual, for first graders. The kids would assemble and sit in a designated place as they came in. They needed to say a prayer with closed eyes. When they finished the prayer, each kid would find a chocolate bar in front of her. The kids would happily take it, eat it and proceed to their classes. This ritual ran for several years. The kids thought the chocolate was a prize they earned for saying prayers, and no one questioned the ritual. Years passed by. The prayer became shorter, yet the kids got their share of the prize - the chocolate - nonetheless. Then one day, as the kids assembled in their usual place and were preparing to say the prayer, they saw the chocolate bars already in front of each of them. With no one around, a few kids took the initiative and grabbed the chocolate, while a few sincere ones proceeded with the prayer as usual. After a few days, following the law of diminishing returns, the sincere kids too started to skip the prayer and focused only on eating the chocolate.
After several years of this ritual, one curious kid, unable to contain his thoughts about why they got a chocolate every morning (note - the prayer is long forgotten), asked his friend. "No one knows why. My elder brother tells me there used to be some prayer before they got their chocolate," said the friend, not so interested in the question.



Now, imagine this is a multi-year social experiment conducted by the school authorities in collaboration with educationists - what would you infer? You might say that initially the kids got their prize after the prayer (a good and recommended activity to start the day in school), and that when the chocolate was given before any prayer, the kids simply forgot or dropped the idea of the prayer. Economists would call this an "incentive" to elicit a specific behavior from a group of people.

Let us come back to our world and try to map the prayer and the chocolate to BDD (behavior driven development) and automation. The original proponents of BDD wanted it to solve certain problems, and automation apparently came out as the chocolate - the prize that follows doing BDD.

As I understand it, BDD was intended to bring business analysts into the party, develop a common vocabulary between dev, BAs, testing and stakeholders, and address some perceived problems of BDD's close cousin, TDD (test driven development). Dan North explains the background and the history of how he landed on the idea of BDD. As Dan narrates, the practice of BDD proposes to focus on the behavior (a change from the keywords "test" or "requirement") that software should demonstrate for a feature the client wants. In order to develop a common vocabulary, BDD needed to restrict the representation of this behavior to a set of keywords, and the behavior had to be expressed in a non-technical language (remember, they needed to bring the non-technical BAs into the party). Thus, using a language like Gherkin, a type of DSL (domain specific language), BDD ushered in a practice where the intended software behavior and a corresponding scenario or example were represented in a format like the one below:


As [Role/Stakeholder]
I want [a feature or behavior]
So that [business outcome that is worth paying for]

Scenario:
Given [initial or preconditions]
When [action performed to invoke the feature]
Then [expected result that the software needs to demonstrate]

As Liz Keogh, one of the early collaborators with Dan on the development of BDD, says, a key challenge BDD was intended (broadly, among other things) to solve was to facilitate and improve communication, discussion and debate about what the behavior should be, among developers, testers, business analysts and stakeholders.

That was the prayer - BDD's objective of effective communication.

After looking at the format of a BDD scenario/user story, full of keywords, a smart developer would have thought: "I can parse this and generate skeleton code which can be implemented later as an automated test." This is the chocolate that was promised to everyone in the team. Thus a strong distraction from the original objective of BDD was born, in the form of automated tests generated out of BDD stories/scenarios.
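To make that parse-and-generate step concrete, here is a minimal sketch of the kind of step-definition skeleton such tools produce, written against Python's behave library (the feature text, names and amounts are my own illustrations, not from any real project):

    # features/steps/withdraw_steps.py -- the skeleton a tool like behave
    # generates from a Gherkin scenario such as:
    #   Given the account balance is 100
    #   When the account holder requests 20
    #   Then the ATM should dispense 20
    from behave import given, when, then

    @given("the account balance is {balance:d}")
    def step_set_balance(context, balance):
        context.balance = balance  # initial/precondition state

    @when("the account holder requests {amount:d}")
    def step_request_cash(context, amount):
        context.dispensed = min(amount, context.balance)  # stand-in for the real system call

    @then("the ATM should dispense {expected:d}")
    def step_check_dispensed(context, expected):
        assert context.dispensed == expected

The chocolate is real enough - the point is only that generating and filling in these skeletons is not the same activity as the conversation the Given/When/Then was meant to provoke.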

The theme of automation attached to BDD became so powerful that, with loads of frameworks such as jBehave, Cucumber and others, it overshadowed everything else related to BDD. At some point, doing BDD meant using jBehave or Cucumber and creating automated tests.

The powerful distraction of automation (the chocolate from our story) instantly hijacked the communication and discussion about behavior (the prayer), and practitioners of BDD started doing only automation. This is the anti-pattern that I wanted to highlight in this post. I have seen several instances where testers, developers and BAs worried only about which tool or framework to use for BDD and which automation framework/library to use. The stakeholders, on their part, were sold on the idea that they would get "executable specifications that come with a dual benefit: a representation of behavior and an automated test". They could not ask for more.

Alas, in the process, BAs, testers and developers - instead of sitting together and discussing which "Given" should lead to which "Then", or which "When" leads to which "Then"s - sat in silos and happily created loads of BDD stories, and some tester or developer jumped straight to implementing automation.

I am not complaining about the automation that is embedded in BDD per se - I would like people to reinstate the prayer, the focus on cross-functional collaboration. You can have your chocolate (automation) anyway.

Time to read Dan's post introducing BDD, and also the posts from Liz on the aspect of communication?




Wednesday, April 08, 2015

What do you call something - Name matters !!!

In recent times I have come across two instances where the names/phrases we use in our daily lives as software people - programmers and testers - make a huge impact on what we do. The names we use create objects and actions larger than life.

Unit testing is something that only developers do
A colleague of mine recently demonstrated a testing framework (some code/library that drives some portion of the application under test) to me as a unit testing framework. I applied some knowledge I had gained by reading about unit testing and realized that the framework did not do or support unit testing. A unit test, by definition, is self-contained and attempts to validate the logic supposedly implemented by the piece of code under test, with all other dependencies mocked out. I confronted my colleague about why he was calling this a unit testing framework. His answer surprised me. While agreeing with the definition of a unit test quoted here, he said: "If I do not use the phrase unit testing here, developers will not use this, saying it is the testers' job." Here the phrase "unit testing" is used inappropriately to effect a change in the behavior of programmers/developers. I can sympathize with my colleague - such is the power of the names/words/phrases that we have created.
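For reference, here is a minimal sketch of what a true unit test looks like, using Python's unittest (the function and service names are made up for illustration) - note that the dependency is mocked out, so only the unit's own logic is checked:

    import unittest
    from unittest.mock import Mock

    def apply_discount(price, rate_service):
        # Unit under test: its only dependency is rate_service.
        return price * (1 - rate_service.discount_rate())

    class ApplyDiscountTest(unittest.TestCase):
        def test_applies_rate_from_service(self):
            rate_service = Mock()                           # dependency mocked out
            rate_service.discount_rate.return_value = 0.10
            self.assertAlmostEqual(apply_discount(100.0, rate_service), 90.0)

    if __name__ == "__main__":
        unittest.main()

A framework that drives a running application through its UI or API, with real dependencies in place, is doing something useful - but it is not this.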

Behavior is a more useful word than Test
Dan North, in his introductory article on Behavior Driven Development, says people misunderstood the word "test" in TDD. He observed that removing the word "test" from TDD and replacing it with "behavior" made the whole activity more acceptable to programmers. While there is more to BDD than TDD minus the word "test", this instance struck me as yet another case of how names create effects with a further-reaching impact than we seem to think. I guess the very name "test" makes some programmers think "not my job". All of a sudden the wall between dev and test goes up, and we have stereotyped developers and testers out there.

We need to be more careful when creating or using words and phrases - development, testing, unit testing etc. are a few examples that are creating practices that inhibit effective collaboration between the various functions in a software team. Who said "what is in a name"? We now know that there is something in a name !!!

Saturday, December 13, 2014

Being away from blogging

Ten years since I published my first post - it has been a long journey. Some years were very active with many posts, and some very lean - like this year. I want to avoid creating a weird record of having exactly one post in both the 1st and the 10th year. That is not a happy state to be in.

Work-wise, this has been a very hectic year for me. I get very little time (including the weekends) to reflect and write. Some of it is attributable to writer's block, and some of it to the puzzle of what to write.

Recently I spoke at the QAI STC conference on "Feynmanism for testers" - a phrase to indicate the "Feynman" way of thinking for testers. I had about 30 minutes to cover an idea like this, and I surely struggled to do justice to the topic. However, I had some very interesting discussions and met many nice people after the talk. So my talk did touch a few of these people, who overcame their hesitation to come up to me and talk.

It is nice to see many of these conferences posting their talks on YouTube. While I wait for this year's STC video to appear on YouTube, you can check out my 2012 talk here.

I am planning to start small 3-5 minute video podcast sessions on testing topics as an alternative way to keep this blog going. One very personal reason for this is to improve my presentation skills. Watching yourself give a talk can teach you a lot about how to improve it.

Let us see how this goes... I thank my readers for the interest they have shown in me.

Sunday, June 22, 2014

There is no such thing called Agile Testing

I have struggled for a long time to find a reasonable meaning and definition for the phrase "Agile Testing". So far I have been unsuccessful in finding one definition that can stand my scrutiny. Probably no such thing as "Agile Testing" exists. Possibly yes.

Before I proceed - let me make a distinction between "Agile" and "agile". James Bach has long suggested this difference. The word "agile" is a dictionary word meaning "swift" or "quick" - when applied to software, it simply means what it means in the dictionary. Good and reasonable software people have been attempting to be "agile" in the project context as demanded by stakeholders. This was happening long before the industry invented the buzzword "Agile" (note the capital "A" here). The word "Agile" is more of a marketing term, invented to describe a ceremony-laden model of developing software. It promises continuous, small, iterative and quicker pieces of deployable software - straight to market. It is the fashion of the day, and often seen as a panacea for all the problems of slow, buggy and boring year-long projects draining millions of dollars, where the first 4-6 months of the project would be spent agreeing upon the requirements or initial design. In today's world, the market demands speed and flexibility from businesses making or using software - the days of big upfront design and year-long software projects are getting over.

You can consider "agile" as the drinking water that you get from the tap, and "Agile" as your favorite brand of mineral water, bottled specifically and sold for a price, promising a certain level of purity.

Also, let me define, for the purpose of this post, what testing is. Testing is an open-ended activity of evaluation, questioning, investigation and information gathering around software and its related artifacts. It is typically done to inform stakeholders about potential problems in the product and advise them about risks of failure, as quickly and as cheaply as possible. There is NO one "right" (certified) way to do testing, and no one right time in the project lifecycle to start it. The context of the project, defined by the people in the project including the stakeholders, dictates the form and essence of testing. Testing does not assure ANYTHING; it informs (to the best of the tester's ability and intent) about problems in the software that can threaten its value. Given the constraints of time and money, testing (even though it is an open-ended evaluation/investigation activity) constantly seeks to optimize its course to find problems faster and report them in the right perspective. This requires testers to be good and quick learners, skeptics and thinkers, with a diverse set of skills in business, technology, economics, science, philosophy and maths/statistics, amongst others. In some sense, testing is like a sport or a performing art that becomes better with practice and improvisation. A professional tester needs to practice (meaning do) testing, like a professional musician or sportsperson.

Good testing thus -

  • Focuses on working closely with programmers
  • Uses tools/automation to perform tasks that are best done by a computer
  • Favors a lightweight bug tracking process, primarily focused on a faster feedback cycle to developers and speedy fixing of important bugs (important to stakeholders)

When books, blog posts, articles and conference presentations talk about "Agile testing", it is always in contrast with so-called "traditional testing". Any meaning or interpretation of traditional testing assumes a stereotypical "traditional" tester. So let me attempt to define one.

A traditional tester is one who has worked in a waterfall software project as part of a dedicated (independent) testing team. There would be a wall between the development and testing teams, and code to be tested would be thrown over the wall for testing purposes. Testers used heavily documented test cases and relied on elaborate requirement documentation. Bugs were reported in a formal bug tracking system, and it was a tester's pride to fight in defense of the bugs they logged. Testers resisted changes to requirements in the middle of the project, insisting that changes would make them rework test cases and retest the application, and hence add to the overall cost of the project. Testers assumed the role of quality police and took pride in being the final arbiters of the "ship" decision.

For the uninitiated, a few examples of what (I believe) is NOT Agile testing:

  • Writing unit tests in an xUnit framework - you are not testing
  • Doing xDD - there is a host of 3/4 letter acronyms along the lines of /something/ driven development. As many agile folks admit, these are development methodologies - let me not go deep into explaining why they are not related to testing.
  • If you are working on continuous integration tools and your automation gets kicked off in response to a new build/check-in - you are not testing
  • If you are writing stories or participating in scrum meetings - you are not testing

Finally, here are 3 reasons why I believe there is no such thing called "Agile Testing":

Agile Testing people do not talk about testing skills

If you know what testing is and you do it, it is obvious that you know what skills you need and how to work on improving them. Agile people are often confused about what is testing and what is not. Hence you cannot expect testing skills to be articulated by them. You typically hear things like "collaboration", "programming skills", "think like the customer" etc. I strongly feel that these folks have no clue about testing or testing skills. I bet they are just making it up. Software testing is a special skill in itself. Many people study and practice it as a profession and a lifetime pursuit. Testing conferences happen all over the world. There is a growing body of knowledge about the craft of software testing.

It is sad that Agile folks have no idea about these skills. All they talk about is how developers or team members in Agile projects work and believe. This is what really bothers me about the idea of Agile testing. The idea is badly articulated.

"Something that everyone in the team does" - that is how Agile folks define testing. While everyone in a project team owning responsibility for making the project succeed is a noble and unquestionable idea, making testing everyone's responsibility is shooting ourselves in the foot. Very soon we get into the "everybody-anybody-somebody" type of problem. Expecting developers to excel at their bit of testing is OK; expecting business analysts/story writers to capture requirements well is fine too. But making everyone responsible for testing turns a blind eye to the skills required of professional testers. This idea of everyone-does-testing is rampant in Agile teams. Why call this testing by the special name "Agile testing"? In terms of roles - as everyone does testing - you may not have a designated role called tester.

Agile Testing is different from Traditional Testing – but not quite

Inevitably, I now need to introduce the term "traditional testing". Agile folks would argue that the testing that happens in an Agile project is different from "traditional testing" - they point to testing against user stories as opposed to detailed requirements. Wow - if your testing basis is a story instead of a detailed requirement document, you are doing Agile testing. But how different is that, really?

Much of the trouble for testers transitioning to agile projects comes from their dominant beliefs about testing. For someone who worked in a typical outsourced IT environment, it was difficult to work with stories instead of elaborate requirement documents. It would be challenging to work closely with developers/programmers and speak their language, when all along they had worked with a wall between them and the development team. Automation, for these testers, was something along the lines of QTP or another GUI automation tool, whereas agile teams used the likes of Selenium and API or unit testing.

  • Many testers cannot work with leaner documentation (requirements)
  • When requirements constantly change, they are thrown off track - they cannot test without test cases
  • There is no longer a wall between dev and test - hence a tester is expected to work directly with dev. Some are intimidated by this possibility
  • Testers familiar with GUI automation tools like QTP are suddenly exposed to tools that work under the skin - the expectation is to understand and work with formal programming languages. This is terrifying to many testers.

So, there is no such thing called "Agile Testing", but there is "good testing". If you are a good tester and are asked to work on an Agile project, what do you do? Fit yourself into the project context and keep doing the good testing that you always did. Do not get distracted by the jargon and marketing terms that you might find people and consultants throwing around.

There are some ideas I have not touched upon - Agile testing quadrants, why exploratory testing is such a hit with "Agile" people - well, that is for part 2. Let's see how this pans out.

Sunday, December 08, 2013

Refreshing Schools of testing - A flow chart

I picked this up from one of my old notes on schools of testing, where I had made a sketch in the form of a flowchart. While I was cleaning my book rack I found the paper with the sketch, and thought: why not make it a blog post?

The starting thought was about fundamental ideas about software testing, especially in terms of objectives, tactics, outcomes etc.

If you agree that there are differing "opinions" about software testing amongst practitioners, stakeholders and other parties in the software ecosystem - follow the flowchart and see where you end up. Let me know your views on this.


Sunday, December 01, 2013

Connection between Software Metrics and News

I discovered Maria Popova's Brain Pickings accidentally. I am happy that I did. Fully loaded with stuff that makes you think almost every time you read her blog - that is something that stands out to be noticed. If you have not already signed up for her newsletter and are not aware of Brain Pickings, I strongly recommend you sign up. If you are a curious mind, you cannot afford to miss this "interestingness hunter-gatherer and curious mind at large". Thanks, Maria, for keeping us busy reading and absorbing the stuff that you keep serving to a knowledge-hungry, curious world.

In a recent post she explores (or re-explores) the book "Does My Goldfish Know Who I Am?", and in the narration that follows the central theme of curious and urgent questions from kids, I found a paragraph about news. I could immediately make a connection with how software metrics are produced and consumed.

Thanks to my confirmation bias towards anything that criticizes software metrics, I sat down this Sunday afternoon (while busy finishing all the piled-up work) to write this post. If you feel strongly about an idea to write about (for a blogger), you will find the time to write about it.

For the question "what will newspapers do when there is no news":

"Newspapers don’t really go out and find the news: they decide what gets to count as news. The same goes for television and radio ....The important thing to remember, whenever you’re reading or watching the news, is that someone decided to tell you those things, while leaving out other things. They’re presenting one particular view of the world — not the only one. There’s always another side to the story"


Wow, that seems absolutely right to me. Exactly the same goes for software metrics. The producers of metrics decide what they want the consumers (managers, stakeholders) to see and absorb, while leaving out some unpleasant things that probably matter. How often have you seen testing produce results that confirm what stakeholders are looking for? Zero Sev 1 and Sev 2 bugs in open state, and 2 Sev 3 bugs with clear workarounds. In a release Go/No-Go meeting, what news could be sweeter than this? If, as a stakeholder, you wanted the release to happen, you would not question these numbers at all. Thanks to confirmation bias.

Given management's preference for numbers and summarized data, it is very easy to hide the things that matter. And there is always another side to the story - sorry, to the numbers (numbers themselves are astonishingly incapable of telling any story, let alone the right story). Why does this work (or apparently work)? Our brains are wired for optimism - we like to hear good stories (good numbers) and, most importantly, stories that confirm our existing world view. Here is where critical thinking comes in as a savior. To me, critical thinking is about questioning one's own suppositions and line of thinking. "Am I missing anything here?" or "Is my understanding right? Should I seek contradictory information, if it exists?" are examples of critical thinking. For software testers this is very CRITICAL - we should be the last people to say "all right, this is right".

Sadly, as is the case with news, the metrics madness goes on - consultants, year after year, mint money in the name of software engineering and software process, and metrics rule our lives as software folks.

While I am writing all this, I need to think critically as well - am I being overly negative and dismissive about metrics and news?

Sunday, November 24, 2013

White's Illusion and Importance of Context in Testing and for Testers

I found one of the best examples or illustrations of what context is and why it is important in Keith Klain's EuroSTAR webinar "The Confidence Game: What is the Mission of Testing?"

As Keith explains (slide #7) White's illusion, we can see how our brains and eyes perceive color in the context of the surroundings' color and relative sharpness or dullness. To me, this is similar to the idea of the "context" of, say, our projects, our testing practices and probably everything that we believe to be true. To be context-driven (in testing) is to be aware of the "background" of the statements and ideas presented in all aspects of testing. Conversely, a context-independent approach would "completely" ignore the background of the statements, views and ideas presented, and treat them as absolute and universally applicable. Personally, when I was growing up as a software testing professional, I made the mistake of taking everything I read or heard about software testing from books, blogs and conferences as absolute knowledge, since it came from "experts": authors of books on testing, consultants and folks with great reputations in the industry. After getting into the context-driven world of testing and some initial training, I started understanding the importance of context. Then I figured out how background or context colors every piece of information so differently, and how not to be fooled by the power of presentation of views and ideas that come in a context-independent way.

Another extension of White's illusion and the importance of context relates directly to enemy #1 of rational thinking: "confirmation bias". In simple words, it is the tendency to support and endorse something that you (already) believe to be true, while rejecting or ignoring any contrary evidence. Since childhood, we keep accepting pieces of "information" as knowledge and store them in our brains (tabula rasa, as John Locke suggested). We probably start with an empty slate (I am not sure if the brain of a fetus in the mother's womb has something written on it) and go on accumulating so-called knowledge through our senses and experience. Once the brain accumulates a "critical mass" (probably when kids start going to school, where they are asked to simply follow the rules or memorize what the teacher says), confirmation bias starts kicking in. The brain starts filtering out all information that does not confirm the current (at any given point of time) information "saved" on the brain's slate.

Just as our eyes perceive color in the context of the surroundings, our so-called knowledge is relative to the stuff that we already reckon to be true. This illusion of absolute knowledge is similar to White's illusion. Fortunately, we know that something of this sort (filtering to confirm known stuff) is going on, and we as testers and rational thinkers need to be vigilant about our confidence in what we know to be true. One way that I have been practicing to beat confirmation bias is to hang around with people, and read and listen to ideas, that contradict my existing set of beliefs about, say, testing, software, management, money matters and almost anything that impacts me.

There is a huge knowledge base and body of psychological research on "confirmation". My favorite line is from David McRaney, the author of "You Are Not So Smart": "People like to be told what they already know. Remember that. They get uncomfortable when you tell them new things."

Back to Keith - thanks for the note on White's illusion and the rest of the presentation about the "mission of testing" - I really enjoyed listening to it. In my opinion, the key message of Keith's talk is the "dangers" of seeking confidence. What danger? If you seek confidence, you will get it - an illusory one, though. There is an entire machinery and system called "marketing" to serve you exactly the confidence you are seeking, "cooked" with a recipe that many swear by.

If you are a stakeholder, be wary of any consultant, project manager or anyone else serving you who claims they can "generate" confidence by doing "xyz" (or whatever).

"We never are RIGHT, we can only be sure we are WRONG" - Richard Feynman (quoted in Keith's presentation). I love to quote Feynman and tend to agree with him on almost everything he said.
I need to check if I am falling into confirmation bias with respect to my fascination with Feynman and his approach to knowing.


Sunday, October 20, 2013

Questions about Gamification of Testing

I came across the phrase "gamification", and how we can use it in testing, in the writings of Jonathan Kohl. One day on Twitter, Jonathan and I had a brief exchange and he encouraged me to explore the topic and write about it. Yesterday I read a few articles on the topic and thought it was high time I jumped in and learned this stuff. Here is an initial and very crude attempt to understand the idea of gamification in general, and how it applies to testing.

When I think about the phrase gamify or gamification, what comes to my mind is some sort of application or accommodation of the elements of a game into other systems, to see what happens. That is probably what gamify means - take a thing or system that is a game, or related to a game, and do stuff as though you're playing a game. To put it simply, gamifying testing is running testing as though it were a game.

What does it mean to run/conduct testing for a project as a game?

At a broad level, a game is a competition between players, leading to a player or a team winning or losing against the game or against another player or team. This definition is a provisional one - I might not be considering types of games that do not fall into this category. I wonder if there is any game with no notion of victory or defeat.

Games can be single-player or multi-player, individual or team. It seems to me that the constants are these: there are rules of the game, and there is a definite outcome (mostly time-bound). How do we apply these elements to testing as a game?

In this initial exploration of the gamification of testing, let me analyze the elements of games that I feel strongly about, and how they apply to testing.

First of all, there are rules in a game. I cannot, at least at this moment, think of a game that does not have rules. What are the rules of testing - that is the question that comes to my mind. Then I would ask: who makes these rules? Is there one set of rules that applies to all kinds of testing being done? When we gamify testing, are we considering testing as one single type of game, or as a collection of different games, each having its own set of rules?

Secondly, there are players or groups of players that play the game. There are two-player games such as chess, tennis (there are doubles too), wrestling, boxing, badminton, carrom etc. There are multi-player individual games - athletics, gymnastics, swimming, shooting, archery, car racing etc. Finally there are team games - football, cricket, various forms of hockey, baseball etc. Thus when players and teams play the game, they compete with one another. When we model testing as a game, who are the players, and who competes with whom? I might be tempted to say testing is a game between the dev team and the testing team. With already so much animosity between the two communities, this idea is not a good one to pursue. Whatever model of testing we might use, putting dev and test against each other is not a good idea. So we have a problem here in the gamification of testing: we need to identify the players or teams that compete. Is testing a "friendly" or a "practice" game, where the ultimate goal for each team is not to win - just a warm-up or practice?

How about the goals or objectives of a player or team playing a game in the first place? Winning, of course! There are prizes for the winners, and in some cases even the losers get a prize (say, as runner-up). Winning brings happiness and satisfaction to the players, which can be a motivation or objective of playing. To pep up the losers, there is something called "sportsman/woman spirit". How many times have you heard the statement "it is not important to win or lose; participating and competing to one's best ability is important"? So if you lose, do not feel bad - there is always another chance. When we apply this to testing, how might we formulate the goals, objectives and motivations of playing the game of testing?

The rules of the game throw up conflicts, contradictions and options for the players to exploit using skill and strategy. In simple terms, a strategy deals with options and risks to take the player to a win. We can apply the metaphor of strategy in games to strategy in testing. This is probably the one element where I agree games and testing match. A good test strategy in testing is the same as a winning strategy in games. But then - what is the meaning of winning the game of testing? Against whom?

The most characteristic aspect of playing or viewing a game is the climax and the thrill of the outcome. So when any game ends, we have winners and losers. Winners take the trophy, prize, applause and glory, and losers take some lessons on how to win the next game. How do we transport this idea to our world of testing? When testing ends (as a game), who is the winner, and who is the loser? You might say the "customer" or "business" is the winner - then who is the loser? Are there any games that end with only winners, only losers, or no winner or loser at all? An interesting example that comes to my mind is casino games. It is said that, in the end, the casino wins; even though each player in the casino might win here and there, net-net the casino is always the winner. What does this tell us about the gamification of testing?

Having said all of the above and asked these questions, I do agree with some aspects of games and gamification as they apply to testing.

As Jonathan says, games tickle our emotions, they captivate us, and they encourage us to work hard at solving problems and reaching goals - true. Thus running testing as a game will lead to greater engagement of testers in their work.

Why do games work? Success, and praise for success, generates pleasure in the brain by releasing chemical messengers like dopamine, serotonin etc. In corporate settings, gamification sets employees up in competition with their peers, with rewards, badges etc. I agree with this as well.

So, with this post I have attempted to come up with a few questions about the gamification of testing. In the next few posts, I will explore the idea of gamification in general, and testing as a game in particular.


Shrini

Sunday, August 04, 2013

James Bach's Advice on Tool-supported Testing (aka Test Automation)

James Bach, in response to a question on articulating "test automation and frameworks" to non-technical people, gives these pretty useful pieces of advice.

I thought of sharing them with my readers.

-- Tools don't test. Only people test. Tools perform actions that help people test.
-- You must understand, design, monitor, and fulfill your test strategy. Only people can do that.
-- All testing is manual testing, in that regard. But in another sense most testing is tool-supported, since we use tools to help us in many ways.
-- Tools are capable of directly detecting only very specific bugs. Humans can, in principle, detect any kind of bug (especially when helped by tools).
-- Tools left alone will "detect" lots of things that are not bugs, while missing various huge bugs.
-- Think of tool-supported testing like cruise control-- it helps but the human is still driving the car.
-- Think of tool-supported testing as "tool-mediated" as opposed to naturally mediated. If you test through a tool then it filters out lots of the experience that may otherwise alert you to problems. This is not a bad thing (think of an infrared camera, which is exactly the same tool-mediated concept applied to vision) unless you test through your tool too much (imagine going through your life with infrared goggles on all the time).

Read these along with the following post:

http://www.satisfice.com/blog/archives/58


I wish people would see automation as this sort of tool-assisted testing - then sense would prevail.

Shrini

Sunday, June 09, 2013

10 Random ideas about Test Automation Estimation

I received a mail from a friend asking about an estimation approach for test automation. Wow - what a topic to mess up your head with on a Sunday night. Instead of responding to him in a mail, I thought of writing a post on this so that others can engage in some conversation with me on the topic.

Here you go - 10 random thoughts on the topic (numbered in no particular order). Well, I could extend this list to more than 10 items; for now there are 10.

6. Test automation is writing code - it involves everything that writing code needs. Ask developers what they need. If the development world claims to have cracked this problem, automation folks can simply lift and use that solution.

2. Regardless of what commercial tools claim about "generating code" or similar nonsense around "scriptless" solutions, the fact remains that any sustainable automation code is similar to the software product the automation aims to validate - so do not fall prey to false propaganda around tools that claim easy automation.

4. Ideas/frameworks like data-driven, keyword-driven and hybrid are simple ideas for automation design. You need to go deeper and ask: if I need to write a method/function or a class in automation code, how much time will I need? Do you get an answer from, say, a developer? You might be aware of some crazy metrics around the number of lines of code written per day, or the number of functions/methods/classes written per day. As you can see, it only gets murkier if you start insisting on measuring the productivity of an automation engineer in terms of these meaningless metrics.

9. An important thing to note is that we (folks in the software world) are knowledge workers - meaning we do not work on the manufacturing assembly line of a factory. We deal with abstract things; software cannot be worked on in the same way as, say, cars. So how does that change the way we should view the estimation of developing automation code? Think about it.

5. Depending upon the nature of the piece of automation work you are developing, you will not know in the beginning how much time you will spend thinking and how much you will spend putting your thoughts into compilable code. That is the biggest challenge developers have. So do we, the automation developers.

7. The first question you should ask when developing an estimation model for automation is: what is my smallest unit of work - the unit that is the building block of my whole solution? What answer would you get? A function? A class? The next question would be: are all building blocks similar? How many different types of building blocks do I have?

Compare it to, say, atoms - ask how you can characterize the atoms or molecules of an automation solution.

8. Once you get a clear answer to the above question, you next need to break down your automation solution into building blocks and size them. Then ask: given a competent automation developer, how much time would each unit take to build - hours, days etc.? Then add time for setup, testing, integration etc. You will get a ballpark number that you can go with as a first estimate.
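A minimal sketch of that bottom-up arithmetic, in Python (every unit type, count and per-unit figure here is an invented illustration, not a benchmark):

    # Bottom-up first estimate: size the building blocks, multiply, add overhead.
    building_blocks = {
        # unit type: (count, hours a competent automation developer needs per unit)
        "page/screen wrapper":       (12, 4),
        "reusable action/function":  (30, 2),
        "end-to-end scripted check": (25, 3),
    }

    build_hours = sum(count * hours for count, hours in building_blocks.values())
    overhead_hours = 40 + 0.25 * build_hours  # setup, plus testing/integrating the automation itself

    print(f"first ballpark: {build_hours + overhead_hours:.0f} hours")

Treat the output as a first ballpark to be revised, not a commitment - the thinking time discussed in point 5 hides inside every per-unit number.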

1. What reference should you use for creating your automation solution (or design)? Anything that describes what you want to exercise on the application under test. One approach I have found useful is to create a mind map of application features and attach to each feature what the application can do, what data it processes and what checks (note the word "check") need to be done. This is your skeleton reference. Build it first by collating data/information from the various references, then make sure all the information from each source is accounted for. This is your master reference. Work with multiple sources (requirements, test cases, use cases, or simply a manual walkthrough of the application) to build the map of features.

10. Should you use test cases, test scenarios or test steps as the basis for automation estimation? As James Bach prefers to put it, test cases are like unicorns - how many of them can you fit in a suitcase or a fridge? Without knowing what is inside them, counting test steps or test cases and using that number as the basis for anything useful (let alone automation) is an utterly stupid idea. Never do it - unless you want to mislead someone.

3. A few words about the keyword-driven framework. Personally I think there is a lot of hype around this simple idea. A keyword is a verb (also called an action) that describes some feature of the application under test; in a developer's language it is some basic unit of code - typically a method or function. What is the big deal when you say "let us use a keyword-driven framework"? It's all hype - no real stuff there. There are even more irritating phrases like "keyword-driven (or keyword-based) testing" - so far I have not figured out how to do testing (as opposed to automation) using keywords. The same goes for related buzzwords like data-driven automation (a marketing term for "let us use variables instead of hardcoded values") or the hybrid framework. All these simple ideas had some place 10-20 years back, but not anymore. I personally prefer to develop automation pretty much the way a developer goes about writing product code - no difference. I hate the oversimplification by tool vendors and consultants around so-called "Excel-based automation", scriptless automation, automation-for-your-granny - these are simply empty ideas to bully unsuspecting boardroom folks into signing contracts.
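To show why it is no big deal, here is a minimal sketch of what a keyword-driven framework reduces to, in Python (the keywords and table rows are invented for illustration):

    # A "keyword-driven framework" is essentially a table of verbs dispatched to functions.
    def open_app(name):        print(f"opening {name}")
    def enter_text(field, s):  print(f"typing '{s}' into {field}")
    def verify_title(title):   print(f"checking window title == '{title}'")

    KEYWORDS = {"OpenApp": open_app, "EnterText": enter_text, "VerifyTitle": verify_title}

    # The part vendors put in an Excel sheet: rows of keyword + arguments.
    test_table = [
        ("OpenApp", "calculator"),
        ("EnterText", "display", "2+2"),
        ("VerifyTitle", "Calculator"),
    ]

    for keyword, *args in test_table:
        KEYWORDS[keyword](*args)  # dispatch each row to its implementation

Twenty lines of dispatch code - that is the entire "framework"; everything else is the same programming work any automation needs.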

How will I summarize? There is no simple solution for estimating automation effort. Keep a watch on how the development (programming) community deals with this conundrum, and let us use that to build our own model. At present, with developers working on small units of work like user stories, in an iterative model of churning out working code (theoretically) on a weekly, fortnightly or monthly basis, I think the whole problem of "tell me how much time (and how many resources) you need to develop this solution" will vanish. You would probably say "let me start with 3 people; I will publish a 1-2 week plan of what I will deliver for people to use - let us take it from there".

I think gone are the days when you had 3-6 (or even more) months of lead time before software was deployed for use. In the mobile apps world, development times are even shorter. I doubt anyone would ask you: give me an estimate for automating this app. It seems that we have solved the problem of estimation by going small and going fast.

I am happy to be corrected on any of the views expressed here. Let me not forget to add: when I say automation, my experience has been in the IT/IT services world, mainly working with commercial off-the-shelf automation tools. If it were the likes of Google or Microsoft, it would be a totally different ball game altogether.

Shrini


Friday, June 07, 2013

Are you measuring something that is easy to measure, or something that is important?

Measurement is fabulous - unless you are busy measuring what is easy to measure as opposed to what is important. - Seth Godin


... And what is important (and to whom, and when) is often subjective and context-based.

Thank you, Mr. Godin, for your sound advice, which is useful for software folks.
I have a confirmation bias towards anything that criticizes bad metrics and measurements. We have an obsession with measuring things to demonstrate that we are rational and objective humans (which we are not). It's amazing to see how Seth Godin, in the above post, demonstrates that "measuring something that is easy to measure is waste".

"As an organization grows and industrializes, it's tempting to simplify things for the troops. Find a goal, make it a number and measure it until it gets better. In most organizations, the thing you measure is the thing that will improve"


Many people blame growth and size for the "metrics menace" and say "how can we manage such a volume of work if we do not have the right metrics?". Remember: the thing that you measure will be a victim of gaming and match-fixing - people will change their behaviour to look good in terms of what is being measured. Look at our testing metrics - all easy stuff to measure (sorry, simply count): number of test cases, defects (and all dimensions thereof), number of requirements (this is really bizarre), defect detection percentage, defect leakage rate, cyclomatic complexity - and the list is long. Mostly all easy stuff to measure (in fact, simply count).

What our users care about is how the software works (or does not work) - it is about those emotions (frustration, anger, happiness etc.). Since these are important but difficult to measure (in easily understandable numbers or percentages etc.), we take the easy route: pretend these do not matter at all, or, when confronted, wear the "rational" hat and issue the "scientific/engineering" statement "anything that cannot be measured cannot be improved".

"And this department has no incentive to fix this interaction, because 'annoying' is not a metric that the bosses have decided to measure. Someone is busy watching one number, but it's the wrong one."

-- So true for software - our bosses (influenced by high-flying software engineering/process consultants) have chosen to turn a deaf ear to the real "metrics" (those that are tough to measure). Thus software developers and testers appear to have no incentive to "listen" and to "fix" the important issues that matter to users.


Software is developed, tested, used and maintained in, for and by a social enterprise, and people are irrational, impulsive, greedy and looking for instant gratification. Society (a name given to a large number of people living together) amplifies such individual traits.

We software testers need to adopt a social sciences approach and stop aping the "engineering processes" of a factory assembly line.


- Shrini

Thursday, May 30, 2013

How many heads can you roll off with this Automation?

... This is exactly what one of the managers asked me in a meeting where I was discussing, with a group, the automation that we were developing and maintaining. Hold your breath - this is not an extract from some 10-15 year old fairy tale. It shows the dominant view held amongst business stakeholders, IT execs, consultants and, sadly, many test managers.

For those new to this field and topic: test automation (or simply automation) is the idea of one computer program "checking" the behavior of another computer program (called the application under test), in some sense "replicating" what a sufficiently disengaged or brain-dead tester would do in the name of testing the application. In order to drive home the idea, early automation tools such as WinRunner introduced the notion of "record and playback". Wow - what a way to simplify the really complex and difficult work of testing a software application.

Thanks to IT consultants and managers, whenever the problem of "speed" showed up in meetings, automation was proposed as a potential solution. This has grown to such insanity that today automation is the common solution to almost every problem in software projects. But very soon, people in IT realized that doing automation requires money - money in addition to what you pay a tester.

Then some clever fellow in a consulting company shouted "return on investment" - and from that fateful day, the life of the tester, or of anyone who supports testers through automation, has never been the same. Since automation requires funding over and above what is spent on testing, execs obviously want their pound of flesh in return.

This leads to the popular equation: without automation you need, say, 5 testers to do the testing for a project/release; with "50% automation" you would need 50% fewer people. That is how automation supposedly pays for itself. Since software requires repeated testing when changes happen, automation, once built, can be run repeatedly without paying a human tester. That is how the conventional and most popular thinking on automation goes.
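For completeness, here is the naive arithmetic behind that equation, as a sketch of the stakeholders' reasoning rather than an endorsement (Python; all figures are invented):

    testers = 5
    cost_per_tester_per_cycle = 10_000   # hypothetical fully loaded cost per test cycle
    automation_coverage = 0.5            # "50% of the testing is automated"
    automation_build_cost = 60_000       # the one-time spend execs want back

    saving_per_cycle = automation_coverage * testers * cost_per_tester_per_cycle
    cycles_to_break_even = automation_build_cost / saving_per_cycle
    print(f"break-even after {cycles_to_break_even:.1f} test cycles")

    # Conveniently missing from the model: maintaining the automation code as the
    # application changes, and everything a thinking tester does that a scripted
    # check cannot.

The model looks airtight on a slide, which is precisely why it keeps winning meetings.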

While it is not very difficult to reason out why and when automation cannot reduce the number of testers needed, the proposition of "automatic" testing - removing the need for some dumb tester staring at the screens of an automation run - is simply irresistible.

I have fought many losing (or lost) battles explaining to my stakeholders why automation should not be thought of as a means to reduce the number of testers or the cost of testing. Every time I lost, I was made to understand that the execs had made an explicit choice not to reason, but to keep insisting that if automation cannot reduce the manpower required for testing, it is useless, or at least not worth investing in.

I am thinking of refusing to do automation if the right expectations are not set with stakeholders - will that work? Will I be given the right of first refusal to not do automation where the right level of awareness does not exist?

But then, if you are a business leader or IT manager (not someone with a deep understanding of, or appreciation for, testing and automation), you will believe what a consultant or tool vendor says.

As I close another pessimistic post on automation, I realize it is tough to be in automation, where everyone has a (strong) opinion and I have to force my agenda through... tough...

But I have not given up --- still trying to bring sanity to the mad world of test automation.

Shrini


Sunday, May 19, 2013

How to disagree elegantly and learn something in the process ...

I love my Zite iPad app, which pulls up amazing (and the latest) news on just about anything. By tuning it to topics like science, philosophy, mind-body, software, programming and critical thinking (topics of my interest), I can get hours' worth of reading every day from this app. Thank you, Zite.

I tweeted about Daniel Dennett's thinking tools - an article in the Guardian. Lots of good stuff - take a look at it and, if possible, buy the book and read it.

The idea that most attracted me in this article is "how to effectively criticize/argue with someone". Here is a quick paraphrase.

3 simple rules (attributed to the social scientist Anatol Rapoport, as Daniel notes in the article):

1. Attempt to restate (re-express) your opponent's idea in your own words (so clearly that your opponent says "I wish I could have expressed it like you did - that is precisely the idea")
2. State the points of agreement with your opponent's idea (especially if they are not matters of general/public agreement)
3. State what you have learned (new) from the idea

Only after doing 1, 2 and 3 can you offer any rebuttal or criticism.

Notice what 1, 2 and 3 will do to your opponent.
With #1, you show that you have understood the idea (even better than the opponent herself).
With #2, you establish an emotional connection with your opponent by explicitly stating which portions of the idea you agree with. This opens your opponent up to considering your points - this is the point where she will start actively listening to you.
With #3 - this is the big one - you show your humility and your desire to learn while critiquing an idea.


Through this series of actions, you essentially convert a potentially adversarial exchange into a positive and collaborative interaction.

I will be putting these rules into action in situations where I disagree with someone and offer opposition or criticism. Let me see how it goes.

Pretty sound advice, Daniel. Thanks.

Shrini

Tuesday, May 07, 2013

Book/Reading suggestions ...

A few days ago, a tester friend of mine approached me with a request to suggest some books to read. I responded with a small list that, on the face of it, looks unlikely for a software tester.
I thought I would share the list with you folks ...

Here it is.

1. An Introduction to General Systems Thinking by Gerald M. (Jerry) Weinberg

This book introduces the idea of "systems thinking". For a tester, I think it is most important to know and engage in general systems thinking as we engage in solving problems.


2. Surely You're Joking, Mr. Feynman! by Richard Feynman (as told to Ralph Leighton)

Richard Feynman is a hero for testers, in my opinion. This Nobel-prize-winning American physicist lived the life of a curious child all his years, exploring the world, and never turned away from learning new things. He questioned the things around him like a true tester. The encounters he describes in this book explain what it means to be a curious thinker. Although he openly hated philosophy and made fun of philosophers, we can forgive him for the enthusiasm he showed and the examples he left through his life, demonstrating a human's thirst for knowledge and learning.

You can see the interview he gave for BBC Horizon, "Fun to Imagine" - look it up on YouTube.

3. Outliers by Malcolm Gladwell - This is not a book for testers in any direct sense, but a fascinating book that illustrates the systems thinking of Jerry's book (indicated above as #1). This is one book to read from start to end. Each chapter is an illustration of how to look at publicly available information and create a whole new interpretation of it.
Other books worth reading in this vein are "The Tipping Point", "What the Dog Saw" and "The Turning Point" (this one is a science book).

4. The God Particle by Leon Lederman

This is, again, not a testing book - not a systems thinking book, not a book about software. It is about the amazing journey of the science of understanding the building blocks of our universe. I liked the narration: how to express and articulate heavy scientific stuff through metaphors and examples so that even a 7th grader can understand. What does this have to do with testing? Understanding a tough subject and explaining it in easy language - something testers do all the time: find tough bugs and demystify them for our stakeholders, including developers.

4. "How to think about science" - A CBC series of 14 interviews with scientists, philosophers, Writers - about emerging form of science. If we reckon testing as multidisciplinary - Look no beyond this. Download the series of interviews (mp3) and listen/absorb. These interviews left a long lasting impression on me about how think about an intellectual pursuit like science or software development or testing.

Shrini

Making a food item vs Solving a Puzzle - An attempt to characterize Testing Mindset

A disclaimer: I am going to make some sweeping generalizations about how testers and developers (a generic name covering programmers, designers and business analysts) think and work. This is an attempt to characterize a /typical/ testing (or tester) mindset - a set of dominant thinking patterns, attitudes, biases, choices and behaviors.

I was reading a bedtime story to my 9-year-old daughter from a book of "Akbar-Birbal" stories. In one story, King Akbar poses a puzzle to Birbal after narrating how "giving" typically works: under normal circumstances, the giver's hand is on top and the receiver's hand is below it. Akbar's puzzle: under what circumstances is the giver's hand at the bottom and the receiver's hand on top? How do you solve this puzzle? What goes on in your mind when you encounter stuff like this?

This got me thinking about how solving such puzzles/riddles works in general. When you start solving a puzzle like the one above, your mind is like water gushing out of a pipe - divergent thinking. You work outward from the definition of the problem into vast, open exploration.

Different types of puzzles require different approaches - in some cases you know what the solution looks like, and in other cases you don't:

1. A math problem - solve simultaneous algebraic equations or a differential equation
2. Solve a Sudoku
3. Play chess - from the initial state to a win
4. Play Scramble - how many words can you make from a set of jumbled letters?

Contrast solving puzzles with, say, cooking (or making) a food item from a recipe or with someone's help. Here you have a more or less definite end state, probably seen before, so you know when you are done. You work through mostly known steps or incremental activities from start to end. In other words, you do convergent thinking. Many acts of "construction" go from a known set of conditions to a known end state - you go, say, from "requirements" to "working software".

Contrast that with a testing problem, or with solving a riddle.

Extending these two activities - cooking a food item and solving a puzzle - I think the former describes how developers work and think, whereas the latter characterizes the typical tester's way of thinking.

What do you think?


Support Keith - find answers to questions about ISTQB and more....

Keith Klain is stirring up the world of testing through some smart and witty comments about testing on Twitter. I have enjoyed his discussions with Rex Black and others on ISTQB and other topics that are close to the hearts of testers - especially context-driven ones.

Here is what makes Keith worth a special mention: he is a business/technology leader (not a consultant) at a bank, and he heads a software testing group. Unlike other testing leaders, he talks like a practitioner who does testing day in, day out (not someone who manages someone who manages a team, a few of whom are testers). It is quite a welcome change in the world of business leaders we see around us.

Two things I want to bring to your attention about what Keith is doing.

1. Watch him debate with others on Twitter and notice how he gets people talking. In one tweet, discussing testing and confidence with Rex Black, Michael Bolton and others, Keith says (paraphrasing) "For a change let us swap our positions - how about you (Rex Black) arguing in favour of our position (testing does not build confidence)!!!!"

In a debate, can you take a stand that is totally opposite to what you have believed all your life, and see the world from that angle? Confirmation bias - the number one enemy of testers, or for that matter of any intellectual - can be beaten by hanging around with folks who think differently. Well said, Keith!!!!

2. Sign the petition that Keith has set up questioning some basic ideas about how ISTQB goes about doing its ("non-profit") business. First of all, read the petition and see if it makes sense - if it does, please sign it.

Follow Keith (@KeithKlain) on Twitter and watch out for the interesting debates he kicks off ...

Shrini

Friday, January 25, 2013

Should automation that runs slower than human test execution speed be dumped?


I am working on a piece of automation using Java and a commercial tool to drive a test scenario on an iPad app. The scenario involves entering multiple pages of information and hundreds of fields of data. The automation script takes about an hour to run the scenario, whereas a tester who exercises the same scenario on the app "manually" claims it takes only about 30 minutes.

I was asked: if an automation script runs slower than human test execution (however dumb), what is the use of the automation? What do you think?

Here are my ideas around this situation/challenge:
Mobile automation might not ALWAYS run faster than human test execution

Many of us in IT have the QTP/WinRunner way of seeing testing as a bunch of keystrokes and mouse clicks, and automation as a film that runs like a dream at super-fast speed. GUI automation tools that drive Windows desktop or web GUIs have consistently demonstrated that it is possible to run a sequence of keyboard and mouse events faster than a human. Enter the mobile world, where we have 3-4 dominant platforms: Android, iOS, BlackBerry and Windows Mobile. When GUI automation enters the world of mobile, it mainly runs on some Windows desktop that communicates with the app (native or web) on a phone connected to the desktop through, say, a USB port. The familiar paradigm of the automation and the application under test (AUT) running on the same machine/hardware breaks down, and so do our expectations about the speed of test execution. The iOS platform specifically (in non-jailbroken mode) presents several challenges for automation tools, while Android is programmer friendly. As the technology around automation tools on mobile devices and the associated platforms (desktop and mobile) evolves, we need to be willing to let go of some of the beliefs we hold strongly from GUI automation of web and Windows desktop applications.
Man vs. machine - things that might make the machine/program slow

When you see a button on the screen, you know it is there and you touch it (similar to a click on non-touch phones). As a human tester, you regulate the speed of your responses depending on how the app is responding. Syncing with the app, checking that the right object is in view and operating the object - all of this comes naturally to a human. When it comes to automation tools (mobile tools especially), all of this has to be controlled programmatically. We have function calls like "WaitForObject" and assorted "Wait" calls to sync the speed of the automation with the speed of the app's responses. Between the programmatic control for slowing down or speeding up the automation in step with app responses, and the checks that keep the automation from throwing exceptions, automation programmers often have to favour robust but slower automation code that is almost guaranteed to run at any app speed. This is one of several reasons why automation might run slower than human execution. You might ask how the likes of QTP handle this situation - even tools like QTP have to deal with these issues; given the state of the technology, the problem is just more acute in the mobile automation space.
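To make the trade-off concrete, here is a minimal sketch in Java of the kind of polling wait such tools force you to write. The Driver/Element types and their methods are hypothetical stand-ins for whatever API the commercial tool actually exposes - this illustrates the pattern, not any specific tool's interface.

// Hypothetical stand-ins for the automation tool's API (assumptions for illustration).
interface Element { boolean isVisible(); }
interface Driver  { Element findElement(String locator); } // may return null if not found

public final class SyncHelper {

    // Poll until the object is visible, or give up when the timeout expires.
    // The timeout must be generous enough for the slowest app response, and
    // every poll pays a fixed sleep - robustness bought at the cost of speed.
    public static boolean waitForObject(Driver driver, String locator,
                                        long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            Element e = driver.findElement(locator);
            if (e != null && e.isVisible()) {
                return true;          // object is ready; safe to operate on it
            }
            Thread.sleep(pollMillis); // fixed pause between polls
        }
        return false;                 // timed out; the caller decides how to fail
    }
}

A human simply reacts the moment the button appears; this loop has to budget for the worst case on every step, and across hundreds of fields those conservative pauses add up to the difference between 30 minutes and an hour.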
Imagine long, large and highly repeated testing cycles - a human tester would lose out by the 2nd or 3rd iteration due to fatigue and boredom. Consider the current case of multiple pages and hundreds of fields - how long do you think a human tester can stay focused on the data entry? Here is where our "tortoise" (slow but steady) automation still adds value. This slow program does not mind working 100 times over and over with different data combinations - it frees up human tester time and effort for you.
Remember, automation and the skilled human tester both have their inherent strengths and shortcomings. A clever test strategy would combine (mix and match) human and automated modes of exercising tests to get maximum output - information about issues, bugs and the ways the value of the product might be threatened.

If automation runs well unattended - why bother about execution time?
Many of us are used to sitting for hours staring at automation as it runs, to see whether it works, passes or fails. If it fails - check, correct and rerun. But if the automation is robust and runs unattended, why have someone looking at a screen watching it run? Why not run it outside working hours? Why not schedule it to run at a certain time? This frees up human resources that can be deployed in other areas requiring focused human testing. Isn't that a value provided even by slow-running automation - freeing up human testers? Well-designed but slow-running automation can still justify the investment, because it can run without bothering you.
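As a rough illustration of "schedule it and walk away", here is a sketch in Java using ScheduledExecutorService to kick off the suite at 2 AM every day. The runFullSuite() method is a hypothetical entry point into whatever the automation harness provides; in practice a cron job or CI scheduler achieves the same thing.

import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class NightlyRunner {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Work out how long until the next 2 AM run.
        LocalDateTime now = LocalDateTime.now();
        LocalDateTime nextRun = now.toLocalDate().atTime(LocalTime.of(2, 0));
        if (!nextRun.isAfter(now)) {
            nextRun = nextRun.plusDays(1); // 2 AM already passed today; run tomorrow
        }
        long initialDelayMinutes = Duration.between(now, nextRun).toMinutes();

        // Run the (slow) suite once a day, unattended.
        scheduler.scheduleAtFixedRate(NightlyRunner::runFullSuite,
                initialDelayMinutes, TimeUnit.DAYS.toMinutes(1), TimeUnit.MINUTES);
    }

    private static void runFullSuite() {
        // Placeholder: invoke the real automation harness here.
        System.out.println("Starting nightly automation run...");
    }
}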

How can you get the best out of slow-running automation?
  • Optimize the automation to see if its speed can be improved - remove unnecessary syncs/waits and "object exists" checks (without compromising the robustness of the automation)
  • Identify bottlenecks in the tool and fix them
  • Identify environmental and data-related slowness in the automation and fix it
  • Schedule the automation outside working hours (as in the sketch above) and save human effort


Have you come across automation that runs slower than human test execution? What did you do with the automation - dump it? I want to hear about your experiences.


Sunday, December 30, 2012

Where do you stand in this debate?



Inspired by Elisabeth Hendrickson's blog post 

[Updated 25th Jan 2013]
I am disappointed to see no responses to this post; I expected some, whether agreeing or disagreeing. Whenever a post of mine gets no comments, I think of the following possibilities (thanks to Michael Bolton):

1. The post is not very engaging - there is way too much information out there, everyone and everything is seeking attention, and this post simply failed to get any
2. It's a dumb idea - completely useless
3. The post is simply a question that is either too simple to answer (no one wants to feel insulted by answering it) or too deep and intriguing (why bother answering?)
4. Why is the author not saying anything? Is this a trick to get some free survey done for his homework?
5. No comments

I will attempt to expand on this topic sometime in the future. This situation taught me something: no comments will still make you think.

Dear readers - thanks for not commenting and teaching me something.

Shrini