Saturday, July 09, 2016

Testers are human ... so are Programmers

One important aspect of being human, as testers or programmers, is how our day-to-day happenings affect our work at the office. In this, we software folks are no different from any other profession that requires "presence of mind" - being preoccupied with thoughts about the past or the future can lead "knowledge workers" to make mistakes and forget things.

An incident that occurred this evening made me realize how important it is for testers to be "present" while testing, so that they do not miss things or make mistakes. I went to a nearby shopping mall with my family. While entering, I had an argument with a fellow who, while reversing his car in the parking lot, happened to hit my car. With that incident fresh in my mind, I passed the parking ticket counter, collected the ticket (my mind still full of the car incident from a little while earlier) and gave it to my wife. I generally have a designated place in the car where I keep such tickets. This time my wife kept the ticket in a place that I generally cannot reach from the driver's seat. I did not mindfully register where she kept it, nor did my wife remember clearly. A few hours passed by. While returning, my wife and kids went to a nearby place and asked me to get the car and pick them up. While walking back to the parking lot, I was confused about where the parking ticket was - the thought of the ticket was all over my mind. When I reached the car, I searched my usual places and did not find the ticket. I panicked at the prospect of paying almost a full day's parking fee instead of a few hours'. I did a few more rounds of checking around the driver's seat, the usual places where I keep the ticket, and the passenger seat - and did not find it. Finally I called my wife to check if the ticket was with them. I was told the ticket should be in the car. I gave up, paid the full-day fare and came out of the parking lot. When my wife got into the car, she reached into the glove box on the passenger side and handed the ticket over to me.

Why did it not occur to me to check the glove box? Why did my blocked mind not contemplate the various possibilities and locations for the ticket - after all, a car is not such a big place? I guess two things happened. One, the argument with the other driver at the mall entrance filled my mind, so I did not mindfully register where my wife kept the ticket; and two, I gave up easily before exploring my options.

What did I learn from this incident that I can apply to testing?

Good testing is about having a wide range of test ideas to cover the mistakes that other folks make while constructing software. Programmers, business analysts and others can make mistakes like I did. I urge testers to be mindful while testing and designing test cases, and to watch out for mistakes and misses that might lead to bugs. Every now and then, put your ability to generate test ideas to the test and develop the skill of looking for misses and mistakes. Practice mindfulness and be vigilant at all times. This helps in your personal life outside the office as well - you yourself are less likely to make mistakes.
This will save time, money and rework, and will give you a more peaceful life. What is more - you can do the same for others.

The role of human emotions in software development and testing has been a point of discussion at many tester meets and conferences. I guess the larger software community needs to acknowledge this and develop practices for being mindful.

I suggest mindfulness meditation and concentration exercises for testers who, like me, have a high level of mental activity (more often than not - noise). Being mindful and vigilant at all times now seems to be a skill and capability in its own right for testers.

Sunday, May 22, 2016

Chocolate and Prayer - An Anti Pattern for BDD

In a school, there was a daily morning ritual for the first graders. As the kids came in, they would assemble and sit in a designated place. They had to say a prayer with closed eyes. When they finished the prayer, each kid would find a chocolate bar in front of her. The kids would happily take it, eat it and proceed to their classes. This ritual ran for several years. The kids thought the chocolate was a prize they earned for saying the prayer, and none questioned the ritual. Years passed by. The prayer became shorter, but the kids got their share of the prize - the chocolate - nonetheless. Then one day, as the kids assembled in their usual place and were preparing to say the prayer, they saw the chocolate bars already lying in front of them. With no one around, a few kids took the initiative and grabbed the chocolate, while a few sincere ones proceeded with the prayer as usual. After a few days - following the law of diminishing returns - even the sincere kids started to skip the prayer and focused only on eating the chocolate.
After several years of this ritual, one curious kid, unable to contain his curiosity about why they got a chocolate every morning (note - the prayer is long forgotten), asked his friend. "No one knows why; my elder brother tells me there used to be some prayer before they got their chocolate," said the friend, not so interested in the question.

Now, imagine this was a multi-year social experiment conducted by the school authorities in collaboration with educationists - what would you infer? You might say that initially the kids got their prize after the prayer (a good and recommended activity to start the school day), and when the chocolate was given before any prayer, the kids simply forgot or dropped the idea of the prayer. Economists would call this an "incentive" to elicit a specific behavior from a group of people.

Let us come back to our world and try to map the prayer and the chocolate onto BDD (behavior-driven development) and automation. The original proponents of BDD wanted it to solve certain problems, and automation apparently turned out to be the chocolate - the prize that follows doing BDD.

As I understand it, BDD was intended to bring business analysts into the party, to develop a common vocabulary between Dev, BA, Testing and stakeholders, and to address some of the perceived problems of BDD's close cousin - TDD, test-driven development. Dan North explains the background and history of how he arrived at the idea of BDD. As Dan narrates, the practice of BDD proposes to focus on the behavior (a change from the keywords "test" or "requirement") that software should demonstrate for a feature the client wants. In order to develop a common vocabulary, BDD needed to restrict the representation of this behavior to a set of keywords, and the behavior had to be expressed in a non-technical language (remember, they needed to bring non-technical BAs into the party). Thus, using a class of languages (a meta-language, I guess) like Gherkin, which is a type of DSL (domain-specific language), BDD ushered in a practice where the intended software behavior and a corresponding scenario or example were represented in a format like the one below:

As [Role/Stakeholder]
I want to [A feature or behavior]
So that [business outcome that is worth paying for]

Given [Initial or Preconditions]
When [Action performed to invoke the feature]
Then [Expected result that software needs to demonstrate]
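
A filled-in example (entirely hypothetical, just to show the template at work) might read:

As an account holder
I want to withdraw cash from my account
So that I can get money without visiting a branch

Given the account has a balance of 100
When the account holder withdraws 20
Then the remaining balance should be 80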

As Liz Keogh, one of the early collaborators with Dan on the development of BDD, says - the key challenge BDD was intended (broadly, among other things) to solve is to facilitate and improve communication, discussion and debate about what the behavior should be, among developers, testers, business analysts and stakeholders.

That was the prayer - BDD's objective of effective communication.

After looking at the format of a BDD scenario/user story, full of keywords, a smart developer would have thought, "I can parse this and generate skeleton code which can be implemented later as an automated test". This is the chocolate that was promised to everyone on the team. Thus a strong distraction from the original objective of BDD was born, in the form of automated tests generated from the BDD story/scenario.
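
To make that chocolate concrete, here is a minimal sketch of the kind of skeleton such parsing could produce, assuming a recent Cucumber-JVM and the hypothetical withdrawal scenario above; the class name and step texts are purely illustrative, not anyone's actual framework code.

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Skeleton step definitions generated by parsing the Given/When/Then
// keywords of a scenario. The temptation is to jump straight into
// filling these in as automated tests and call that "doing BDD".
public class WithdrawalSteps {

    @Given("the account has a balance of {int}")
    public void the_account_has_a_balance_of(Integer balance) {
        // TODO: set up the account state
    }

    @When("the account holder withdraws {int}")
    public void the_account_holder_withdraws(Integer amount) {
        // TODO: invoke the withdrawal feature
    }

    @Then("the remaining balance should be {int}")
    public void the_remaining_balance_should_be(Integer expected) {
        // TODO: assert the observed balance against the expected one
    }
}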

The theme of automation attached to BDD became so powerful that a load of frameworks such as jBehave, Cucumber and others overshadowed everything else related to BDD. At some point, doing BDD came to mean using jBehave or Cucumber and creating automated tests.

The powerful distraction of automation (the chocolate from our story) instantly hijacked the communication and discussion about behavior (the prayer), and practitioners of BDD started doing only automation. This is the anti-pattern that I wanted to highlight in this post. I have seen several instances where testers, developers and BAs worried only about which tool or framework to use for BDD and which automation framework/library to use. The stakeholders, on their part, were sold on the idea that they would get "executable specifications that come with a dual benefit - a representation of behavior and an automated test". They could not ask for more.

Alas, in the process, instead of sitting together and discussing which "Given" should lead to which "Then", or which "When" leads to which "Then"s, BAs, testers and developers sat in silos and happily created loads of BDD stories, and some tester or developer jumped straight into implementing automation.

I am not complaining about the automation embedded in BDD per se - I would like people to reinstate the prayer, the focus on cross-functional collaboration. You can have your chocolate (automation) anyway.

Time to read Dan's post introducing BDD, and also Liz's posts on the communication aspect?

Wednesday, April 08, 2015

What do you call something - Name matters !!!

In recent times I came across two instances where the names and phrases we use in our daily lives as software people - programmers and testers - make a huge impact on what we do. The names we use for things create objects and actions larger than life.

Unit testing is something that only developers do
A colleague of mine recently demonstrated to me a testing framework (some code/library that drives some portion of the application under test) as a unit testing framework. I applied some knowledge I had gained by reading about unit testing to this framework and realized that it did not do or support unit testing. A unit test, by definition, is self-contained and attempts to validate the logic supposedly implemented by the piece of code under test, with all other dependencies mocked out. I asked my colleague why he was calling the framework a unit testing framework. His answer surprised me - while agreeing with the definition of a unit test I quoted here, he said, "If I do not use the phrase unit testing here, developers would not use this, saying it is the tester's job." Here the phrase "unit testing" is used inappropriately to effect a change in the behavior of programmers/developers. I can sympathize with my colleague - such is the power of the names, words and phrases that we have created.
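
For contrast, here is a minimal sketch of what I mean by a self-contained unit test - written with JUnit 4 and Mockito, with a hypothetical PriceCalculator whose tax-rate dependency is mocked out; the names are purely illustrative.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical collaborator that would normally hit a database or a service.
    interface TaxRateProvider {
        double rateFor(String region);
    }

    // Hypothetical unit under test: pure logic, with the dependency injected.
    static class PriceCalculator {
        private final TaxRateProvider taxRates;
        PriceCalculator(TaxRateProvider taxRates) { this.taxRates = taxRates; }
        double grossPrice(double netPrice, String region) {
            return netPrice * (1 + taxRates.rateFor(region));
        }
    }

    @Test
    public void addsTaxForTheGivenRegion() {
        // The external dependency is mocked out - no database, no network.
        TaxRateProvider taxRates = mock(TaxRateProvider.class);
        when(taxRates.rateFor("IN")).thenReturn(0.18);

        PriceCalculator calculator = new PriceCalculator(taxRates);

        assertEquals(118.0, calculator.grossPrice(100.0, "IN"), 0.001);
    }
}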

Behavior is a more useful word than Test
Dan North, in his introductory article on Behavior Driven Development, says people misunderstood the word "test" in TDD. He observed that removing the word "test" from TDD and replacing it with "behavior" made the whole activity more acceptable to programmers. While there is more to BDD than TDD and the word "test", this instance struck me as yet another case of how names create effects with a farther-reaching impact than we seem to think. I guess the very name "test" makes some programmers think "not my job". All of a sudden the wall between dev and test goes up, and we have stereotypical developers and testers out there.

We need to be more careful while creating and using words and phrases - development, testing and unit testing are a few examples that are creating practices which inhibit effective collaboration between the various functions in a software team. Who said "what's in a name"? We now know that there is something in a name !!!

Saturday, December 13, 2014

Being away from blogging

It has been ten years since my first post - a long journey. Some years were very active, with many posts, and some very lean - like this year. I want to avoid creating a weird record of having exactly one post in both the 1st and the 10th year. That is not a happy state to be in.

Work-wise, this has been a very hectic year for me. I get very little time (including the weekends) to reflect and write. Some of it is attributable to writer's block and some of it to the puzzle of what to write.

Recently I spoke at the QAI STC conference on "Feynmanism for testers" - a phrase I use for the "Feynman" way of thinking applied to testers. I had about 30 minutes to cover an idea like this, and I surely struggled to do justice to the topic. However, I had some very interesting discussions and met many nice people after the talk. So my talk did touch a few of these people, who overcame their hesitation to come up to me and talk.

It is nice to see many of these conferences posting their talks on YouTube. While I wait for this year's STC video to appear, you can check out my 2012 talk here.

I am planning to start small 3-5 minute video podcast sessions on testing topics as an alternative way to keep this blog going. One very personal reason for this is to improve my presentation skills. Watching yourself give a talk can teach you a lot about how to improve it.

Let us see how this goes... I thank my readers for the interest they have shown in me.

Sunday, June 22, 2014

There is no such thing called Agile Testing

I have struggled for a long time to find a reasonable meaning and definition for the phrase "Agile Testing". So far I have been unsuccessful in finding one definition that can stand up to my scrutiny. Probably no such thing as "Agile Testing" exists. Possibly so.

Before I proceed – let me make a distinction between "Agile" and "agile". James Bach has long been suggesting this distinction. The word "agile" is a dictionary word meaning "swift", "quick" – when applied to software, it simply means what it means in the dictionary. Good and reasonable software people have been attempting to be "agile" in their project context, as demanded by stakeholders, since long before the industry invented the buzzword "Agile" (note the capital "A" here). The word "Agile" is more of a marketing term invented to describe a ceremony-laden model of developing software. It promises continuous, small, iterative and quicker pieces of deployable software – straight to market. It is the fashion of the day and is often seen as a panacea for all the problems of slow, buggy and boring year-long projects draining millions of dollars, where the first 4-6 months of the project would be spent agreeing upon the requirements or the initial design. In today's world the market demands speed and flexibility from businesses making or using software – the days of big upfront design and year-long software projects are getting over.

You can consider "agile" as the drinking water that you get from the tap, and "Agile" as your favorite brand of mineral water, specially bottled and sold at a price while promising a certain level of purity.

Also, let me define, for the purpose of this post, what testing is. Testing is an open-ended activity of evaluation, questioning, investigation and information gathering around software and its related artifacts. This is typically done to inform stakeholders about potential problems in the product and to advise them about risks of failure, as quickly and as cheaply as possible. There is NO one "right" (certified) way to do testing and no one right time in the project lifecycle to start it. The context of the project, defined by the people in the project including stakeholders, dictates the form and essence of testing. Testing does not assure ANYTHING; it informs (to the best of the tester's ability and intent) about problems in the software that can threaten its value. Given constraints of time and money, testing (even though an open-ended evaluation/investigation activity) constantly seeks to optimize its course to find problems faster and report them in the right perspective. This requires testers to be good and quick learners, skeptics, and thinkers with a diverse set of skills in business, technology, economics, science, philosophy and maths/statistics, amongst others. In some sense testing is like a sport or a performing art that becomes better with practice and improvisation. A professional tester needs to practice (meaning do) testing the way a professional musician or sportsperson does.

Good testing thus -

  • Focuses on working closely with programmers
  • Uses tools/automation to perform tasks that are best done by a computer
  • Favors a lightweight bug tracking process, primarily focused on a faster feedback cycle to developers and speedy fixing of important bugs (important to stakeholders)

When books, blog posts, articles and conference presentations talk about "Agile testing", it is always in contrast with so-called "traditional testing". Any meaning or interpretation of traditional testing assumes a stereotypical "traditional" tester. So let me attempt to define one.

A traditional tester is one who has worked in a waterfall software project and was part of a dedicated (independent) testing team. There would be a wall between the development and testing teams, and the code to be tested would be thrown over the wall for testing purposes. Testers used heavily documented test cases and relied on elaborate requirement documentation. Bugs were reported in a formal bug tracking system, and it was a matter of the tester's pride to fight to defend the bugs logged. Testers resisted changes to requirements in the middle of the project, insisting that such changes would force them to rework test cases and retest the application, and hence add to the overall cost of the project. Testers assumed the role of quality police and took pride in being the final arbitrators of the "ship" decision.

For the uninitiated, a few examples of what (I believe) is NOT Agile testing:

  • Writing unit tests in an xUnit framework – you are not testing
  • Doing xDD – there is a host of 3/4-letter acronyms along the lines of /something/-driven development. As many agile folks admit, these are development methodologies – let me not go deep into explaining why they are not related to testing.
  • If you are working with continuous integration tools and your automation gets kicked off in response to a new build/check-in – you are not testing
  • If you are writing stories or participating in scrum meetings – you are not testing

Finally, here are 3 reasons why I believe that there is no such thing as "Agile Testing" -

Agile Testing people do not talk about testing skills

If you know what testing is and you do it, it is obvious that you know what skills you need and how to work on improving them. Agile people are often confused about what is testing and what is not; hence you cannot expect them to articulate testing skills. You typically hear things like "collaboration", "programming skills", "think like the customer" etc. I strongly feel that these folks have no clue about testing or testing skills. I bet they are just making it up. Software testing is a special skill in itself. Many people study and practice it as a profession and a lifetime pursuit. Testing conferences happen all over the world. There is a growing body of knowledge about the craft of software testing.

It is sad that Agile folks have no idea about these skills. All they talk about is how developers or team members in Agile projects work and what they believe. This is what really bothers me about the idea of Agile testing - the idea is being badly articulated.

"Something that everyone in the team does" – that is how an Agile folks define testing. While everyone in a project team owning responsibility to make sure project succeeds is a noble and unquestionable idea – making testing as everyone's responsibility is shooting on own foot. Very soon we will get into "everybody-anybody-somebody" type of problem. Expecting developers do excel in their bit of testing is OK, expecting business analysts/story writers in capturing requirements well is fine too. But making everyone responsible for testing is about turning blind eye to skills required for professional testers. This idea of everyone-does-testing is rampant in Agile teams. Why call this testing with a special name "Agile testing"? In terms of roles – as everyone does testing – you may not have a designated role called testers.

Agile Testing is different from Traditional Testing – but not quite

Inevitably, I now need to introduce the term "traditional testing". Agile folks would argue that the testing that happens in an Agile project is different from "traditional testing" – they point to testing against user stories as opposed to detailed requirements. Wow – if your testing basis is a story instead of a detailed requirement document, you are doing Agile testing. But how different is that, really?

Much of the trouble for testers transitioning to agile projects comes from their dominant beliefs about testing. For someone who worked in a typical outsourced IT environment, it was difficult to work with stories instead of elaborate requirement documents. It was challenging to work closely with developers/programmers and speak their language when all along there had been a wall between them and the development team. Automation, for these testers, was something along the lines of QTP or some other GUI automation tool, whereas agile teams used the likes of Selenium and API or unit testing.

  • Many testers cannot work with leaner documentation (requirements)
  • When requirements constantly change, they are thrown off track – they cannot test without test cases
  • There is no longer a wall between dev and test – hence a tester is expected to work directly with developers. Some are intimidated by this possibility
  • Testers who are familiar with GUI automation tools like QTP are suddenly exposed to tools that work under the skin – the expectation is to understand and work with formal programming languages. This is terrifying to many testers.

So, there is no such thing as "Agile Testing", but there is "good testing". If you are a "good tester" asked to work on an Agile project, what do you do? Fit yourself into the project context and keep doing the good testing that you always did. Do not get distracted by the jargon and marketing terms that you might find people and consultants throwing around.

I think there are some ideas that I have not touched upon – Agile testing quadrants, and why exploratory testing is such a hit with "Agile" people – well, that is for part 2. Let's see how this pans out.

Sunday, December 08, 2013

Refreshing Schools of testing - A flow chart

I picked this up from one of my old notes on schools of testing, where I had made a sketch in the form of a flowchart. While I was cleaning my book rack, I found the paper with the sketch and thought, why not make it a blog post.

The starting thought was about fundamental ideas in software testing, especially in terms of objectives, tactics, outcomes etc.

If you agree that there are differing "opinions" about software testing amongst practitioners, stakeholders and other parties in the software ecosystem, follow the flowchart and see where you end up. Let me know your views on this.

Sunday, December 01, 2013

Connection between Software Metrics and News

I discovered Maria Popova's Brain Pickings accidentally, and I am happy that I did. It is fully loaded with stuff that makes you think almost every time you read her blog - something that stands out and deserves notice. If you have not already signed up for her newsletter and are not aware of Brain Pickings, I strongly recommend you sign up. If you are a curious mind, you cannot afford to miss this "interestingness hunter-gatherer and curious mind at large". Thanks, Maria, for keeping us busy reading and absorbing the stuff that you keep serving to a knowledge-hungry, curious world.

In a recent post she explores (or re-explores) the book "Does My Goldfish Know Who I Am?", and in the narration that follows the central theme of curious and urgent questions from kids, I found a paragraph about news. I could immediately make a connection with how software metrics are produced and consumed.

Thanks to my confirmation bias towards anything that criticizes software metrics, I sat down this Sunday afternoon (while busy finishing all the piled-up work) to write this post. If you feel strongly about an idea (for a blogger), you will find the time to write about it.

To the question "what will newspapers do when there is no news?" -

"Newspapers don’t really go out and find the news: they decide what gets to count as news. The same goes for television and radio ....The important thing to remember, whenever you’re reading or watching the news, is that someone decided to tell you those things, while leaving out other things. They’re presenting one particular view of the world — not the only one. There’s always another side to the story"

Wow, that seems absolutely right to me. Exactly the same thing goes for software metrics. The producers of the metrics decide what they want the consumers (managers, stakeholders) to see and absorb, while leaving out some unpleasant things that probably matter. How often have you seen testing produce results that confirm what stakeholders are looking for? Zero Sev 1 and Sev 2 bugs in the open state and 2 Sev 3 bugs with clear workarounds. In a release Go/No-Go meeting, what news can be sweeter than this? If, as a stakeholder, you wanted the release to happen, you would not question these numbers at all. Thanks to confirmation bias.

Given management's preference for numbers and summarized data, it is very easy to hide the things that matter. And there is always another side to the story - sorry, the numbers (numbers themselves are astonishingly incapable of telling any story, let alone the right story). Why does this work (or apparently work)? Our brains are wired for optimism - we like to hear good stories (good numbers) and, most importantly, stories that confirm our existing world view. Here is where critical thinking comes in as a savior. To me, critical thinking is about questioning one's own suppositions and line of thinking. "Am I missing anything here?" or "Is my understanding right? Should I seek contradictory information, if it exists?" are examples of critical thinking. For software testers this is very CRITICAL - we should be the last to say "all right, this is right".

Sadly, as is the case with news, the metrics madness goes on - consultants mint money year after year in the name of software engineering and software process, and metrics rule our lives as software folks.

While I am writing all this, I need to apply critical thinking to myself as well - am I being overly negative and dismissive about metrics and news?