Saturday, December 13, 2014

Being away from blogging

It has been ten years since I wrote my first post - a long journey. Some years were very active with many posts, and some very lean, like this year. I want to avoid setting a strange record of having exactly one post in both the 1st and the 10th year. That is not a happy state to be in.

Work-wise, this has been a very hectic year for me. I get very little time (including weekends) to reflect and write. Some of it is attributable to writer's block, and some of it to the puzzle of what to write about.

Recently I spoke at the QAI STC conference on "Feynmanism for testers" - a phrase I use for the "Feynman" way of thinking applied to testers. I had about 30 minutes to cover an idea of that size, and I surely struggled to do justice to the topic. However, I had some very interesting discussions and met many nice people after the talk. So my talk did touch a few people, who overcame their hesitation to come up to me and talk.

It is nice to see many of these conferences posting their talks on YouTube. While I wait for this year's STC video to appear there, you can check out my 2012 talk here.

I am planning to start small 3-5 minute video podcast sessions on testing topics as an alternative way to keep this blog going. One very personal reason for this is to improve my presentation skills: watching yourself give a talk can teach you a lot about how to improve it.

Let us see how this goes... I thank my readers for the interest they have shown in my writing.

Sunday, June 22, 2014

There is no such thing as Agile Testing

I have long struggled to find a reasonable meaning and definition for the phrase "Agile Testing". So far I have been unsuccessful in finding one definition that can stand my scrutiny. Probably no such thing as "Agile Testing" exists. Possibly so.

Before I proceed, let me make a distinction between "Agile" and "agile" - a difference James Bach has long suggested. The word "agile" is a dictionary word meaning "swift" or "quick"; when applied to software, it simply means what it means in the dictionary. Good, reasonable software people have been attempting to be "agile" in their project contexts, as demanded by stakeholders, since long before the industry invented the buzzword "Agile" (note the capital "A" here). "Agile" is more of a marketing term, invented to describe a ceremony-laden model of developing software. It promises continuous, small, iterative and quicker pieces of deployable software, delivered straight to market. It is the fashion of the day, often seen as a panacea for slow, buggy and boring year-long projects that drain millions of dollars, where the first 4-6 months would be spent just agreeing on requirements or an initial design. In today's world, the market demands speed and flexibility from businesses making or using software - the days of big upfront design and year-long software projects are ending.

You can consider "agile" as the drinking water you get from the tap, and "Agile" as your favorite brand of mineral water, bottled and sold for a price with a promise of a certain level of purity.

Also, for the purpose of this post, let me define testing. Testing is an open-ended activity of evaluation, questioning, investigation and information gathering around software and its related artifacts. It is typically done to inform stakeholders about potential problems in the product and advise them about risks of failure, as quickly and as cheaply as possible. There is NO one "right" (certified) way to do testing, and no one right time in the project lifecycle to start it. The context of the project, defined by the people in the project including stakeholders, dictates the form and essence of testing. Testing does not assure ANYTHING; it informs (to the best of the tester's ability and intent) about problems in the software that can threaten its value. Given constraints of time and money, testing (even though an open-ended evaluation/investigation activity) constantly seeks to optimize its course so as to find problems faster and report them in the right perspective. This requires testers to be good, quick learners, skeptics, and thinkers with a diverse set of skills in business, technology, economics, science, philosophy, and maths/statistics, among others. In some sense testing is like a sport or a performing art that gets better with practice and improvisation. A professional tester needs to practice (meaning do) testing, just as a professional musician or sportsperson does.

Good testing, thus:

  • Focuses on working closely with programmers
  • Uses tools/automation to perform tasks that are best done by a computer (see the sketch after this list)
  • Favors a lightweight bug tracking process, focused primarily on a faster feedback cycle to developers and the speedy fixing of important bugs (important to stakeholders)
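To make the second bullet concrete, here is a minimal sketch (in Python, against a hypothetical localhost endpoint of my own invention) of the kind of repetitive measurement task that is best left to a computer. The tool gathers and summarizes; judging whether the numbers are acceptable remains the human tester's job.

```python
# A sketch of "a task best done by a computer": timing an endpoint
# repeatedly and summarizing the results for a human to interpret.
# The URL and sample count are hypothetical placeholders.
import time
import urllib.request

URL = "http://localhost:8080/health"  # hypothetical endpoint
SAMPLES = 50

def measure_once(url: str) -> float:
    """Return the elapsed time (in seconds) for one GET request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

def main() -> None:
    timings = sorted(measure_once(URL) for _ in range(SAMPLES))
    # The tool reports; the human decides whether this is a problem.
    print(f"min={timings[0]:.3f}s "
          f"median={timings[SAMPLES // 2]:.3f}s "
          f"max={timings[-1]:.3f}s")

if __name__ == "__main__":
    main()
```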

When books, blog posts, articles and conference presentations talk about "Agile testing", it is always in contrast with so-called "traditional testing". Any meaning or interpretation of traditional testing assumes a stereotypical "traditional" tester. So let me attempt to define one.

A traditional tester is one who has worked on a waterfall software project as part of a dedicated (independent) testing team. There was a wall between the development and testing teams, and code to be tested was thrown over the wall. Testers used heavily documented test cases and relied on elaborate requirement documentation. Bugs were reported in a formal bug tracking system, and it was a tester's pride to defend every bug logged. Testers resisted changes to requirements in the middle of a project, insisting that changes would force them to rework test cases and retest the application, adding to the overall cost of the project. Testers assumed the role of quality police and took pride in being the final arbiters of the "ship" decision.

For the uninitiated, here are a few examples of what (I believe) is NOT Agile testing (a minimal unit-test sketch follows this list):

  • Writing unit tests in an xUnit framework - you are not testing
  • Doing xDD - there is a host of three- and four-letter acronyms along the lines of /something/-driven development. As many agile folks admit, these are development methodologies, so let me not go deep into explaining why they are not related to testing.
  • If you are working with continuous integration tools and your automation gets kicked off in response to a new build/check-in - you are not testing
  • If you are writing stories or participating in scrum meetings - you are not testing
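For readers unfamiliar with the jargon in the first bullet, here is a minimal sketch of an xUnit-style unit check, written with Python's unittest (one of the xUnit family) against a hypothetical discount function of my own invention. In the distinction drawn above, this is a programmer's change detector, not the open-ended investigation defined earlier as testing.

```python
# A sketch of "writing unit tests in an xUnit framework".
# The function under test is hypothetical, not from any real project.
import unittest

def discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # 25% off 200.0 should be 150.0
        self.assertAlmostEqual(discount(200.0, 25), 150.0)

    def test_rejects_out_of_range_percent(self):
        with self.assertRaises(ValueError):
            discount(100.0, 120)

if __name__ == "__main__":
    unittest.main()
```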

Finally, here are the reasons why I believe there is no such thing as "Agile Testing" -

Agile Testing people do not talk about testing skills

If you know what testing is and you do it, it is obvious that you know what skills you need and how to work to improve them. Agile people are often confused about what is testing and what is not; hence you cannot expect them to articulate testing skills. You typically hear things like "collaboration", "programming skills", "think like the customer" and so on. I strongly feel that these folks have no clue about testing or testing skills - I bet they are just making it up. Software testing is a special skill in itself. Many people study and practice it as a profession and a lifetime pursuit. Testing conferences happen all over the world, and there is a growing body of knowledge about the craft of software testing.

It is sad that Agile folks have no idea about these skills. All they talk about is how developers or team members in Agile projects work and what they believe. This is what really bothers me about the idea of Agile testing: the idea is badly articulated.

"Something that everyone in the team does" – that is how an Agile folks define testing. While everyone in a project team owning responsibility to make sure project succeeds is a noble and unquestionable idea – making testing as everyone's responsibility is shooting on own foot. Very soon we will get into "everybody-anybody-somebody" type of problem. Expecting developers do excel in their bit of testing is OK, expecting business analysts/story writers in capturing requirements well is fine too. But making everyone responsible for testing is about turning blind eye to skills required for professional testers. This idea of everyone-does-testing is rampant in Agile teams. Why call this testing with a special name "Agile testing"? In terms of roles – as everyone does testing – you may not have a designated role called testers.

Agile Testing is different from Traditional Testing – but not quite

Inevitably, I now need to introduce the term "traditional testing". Agile folks argue that the testing that happens on an Agile project is different from "traditional testing"; they point to testing against user stories as opposed to detailed requirements. Wow - if your testing basis is a story instead of a detailed requirement document, you are doing Agile testing. But how different is that, really?

Much of the trouble for testers transitioning to agile projects comes from their dominant beliefs about testing. For someone who worked in a typical outsourced IT environment, it is difficult to work with stories instead of elaborate requirement documents. It is challenging to work closely with developers/programmers and speak their language after years of working with a wall between themselves and the development team. Automation, for these testers, meant something along the lines of QTP or another GUI automation tool, whereas agile teams use the likes of Selenium and API or unit testing.

  • Many testers cannot work with leaner documentation (requirements)
  • When requirements change constantly, they are thrown off track - they cannot test without test cases
  • There is no longer a wall between dev and test, so a tester is expected to work directly with developers. Some are intimidated by this possibility
  • Testers familiar with GUI automation tools like QTP are suddenly exposed to tools that work under the skin, with an expectation to understand and work with formal programming languages. This is terrifying to many testers (see the sketch after this list)
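For contrast with record-and-playback GUI tools like QTP, here is a minimal sketch of the "under the skin" scripting agile teams lean on: Selenium's Python bindings driving a hypothetical login page. The URL, element locators and credentials are illustrative assumptions, not from any real application.

```python
# A sketch of driving a browser through Selenium's WebDriver API.
# Everything application-specific here is a hypothetical placeholder.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a Chrome driver is available
try:
    driver.get("http://localhost:8080/login")  # hypothetical app
    driver.find_element(By.NAME, "username").send_keys("tester")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # The script performs the actions; judging whether the landing
    # page actually looks right is still the tester's job.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```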

So there is no such thing as "Agile Testing", but there is "good testing". If you are a good tester asked to work on an Agile project, what do you do? Fit yourself into the project context and keep doing the good testing you always did. Do not get distracted by the jargon and marketing terms that people and consultants throw around.

There are some ideas I have not touched upon - the Agile testing quadrants, and why exploratory testing is such a hit with "Agile" people. Well, that is for part 2. Let's see how this pans out.

Sunday, December 08, 2013

Revisiting the schools of testing - a flowchart

I picked this up from one of my old notes on the schools of testing, where I had made a sketch in the form of a flowchart. While cleaning my book rack I found the paper with the sketch, and thought: why not make it a blog post?

The starting thought was about fundamental ideas in software testing, especially in terms of objectives, tactics, outcomes and so on.

If you agree that there are differing "opinions" about software testing among practitioners, stakeholders and other parties in the software ecosystem, follow the flowchart and see where you end up. Let me know your views on this.


Sunday, December 01, 2013

Connection between Software Metrics and News

I discovered Maria Popova's Brain Pickings accidentally, and I am happy that I did. It is fully loaded with stuff that makes you think almost every time you read it - something that stands out and deserves notice. If you have not already signed up for her newsletter and are not aware of Brain Pickings, I strongly recommend you sign up. If you have a curious mind, you cannot afford to miss this "interestingness hunter-gatherer and curious mind at large". Thanks, Maria, for keeping us busy reading and absorbing the stuff you keep serving to a knowledge-hungry, curious world.

In a recent post she explores (or re-explores) the book "Does My Goldfish Know Who I Am?", and in the narration that follows its central theme - curious and urgent questions from kids - I found a paragraph about news. I could immediately make a connection with how software metrics are produced and consumed.

Thanks to my confirmation bias toward anything that criticizes software metrics, I sat down this Sunday afternoon (while busy finishing all the piled-up work) to write this post. If you feel strongly about an idea (as a blogger), you will find the time to write about it.

In answer to the question "what will newspapers do when there is no news?" -

"Newspapers don’t really go out and find the news: they decide what gets to count as news. The same goes for television and radio ....The important thing to remember, whenever you’re reading or watching the news, is that someone decided to tell you those things, while leaving out other things. They’re presenting one particular view of the world — not the only one. There’s always another side to the story"


Wow, that seems absolutely right to me. Exactly the same thing goes for software metrics. The producers of metrics decide what they want the consumers (managers, stakeholders) to see and absorb, while leaving out some unpleasant things that probably matter. How often have you seen testing produce results that confirm what stakeholders are looking for? Zero Sev 1 and Sev 2 bugs in an open state, and two Sev 3 bugs with clear workarounds - at a release Go/No-Go meeting, what news could be sweeter than this? If, as a stakeholder, you wanted the release to happen, you would not question these numbers at all. Thanks to confirmation bias.

Given management's preference for numbers and summarized data, it is very easy to hide the things that matter. And there is always another side to the story - sorry, the numbers (numbers themselves are astonishingly incapable of telling any story, let alone the right one). Why does this work (or apparently work)? Our brains are wired for optimism: we like to hear good stories (good numbers), and most importantly, stories that confirm our existing worldview. This is where critical thinking comes in as a savior. To me, critical thinking is about questioning one's own suppositions and line of thinking. "Am I missing anything here?" or "Is my understanding right? Should I seek contradictory information, if it exists?" are examples of critical thinking. For software testers this is very CRITICAL - we should be the last people to say "all right, this is right".

Sadly, as is the case with news, the metrics madness goes on - consultants mint money year after year in the name of software engineering and software process, and metrics rule our lives as software folks.

While writing all this, I need to think critically as well: am I being overly negative and dismissive about metrics and news?

Sunday, November 24, 2013

White's Illusion and the Importance of Context in Testing and for Testers

I found one of the best examples or illustrations of what context is and why it is important in Keith Klain's EuroSTAR webinar, "The Confidence Game: What is the Mission of Testing?"

As Keith explains (slide #7) with White's illusion, our brains and eyes perceive color in the context of the surrounding colors and their relative sharpness or dullness. To me, this is similar to the idea of "context" for, say, our projects, our testing practices and probably everything we believe to be true. To be context-driven (in testing) is to be aware of the "background" of the statements and ideas presented in all aspects of testing. Conversely, a context-independent approach would be to completely ignore the background of the statements, views and ideas presented, and treat them as absolute and universally applicable. Personally, when I was growing up as a software testing professional, I made the mistake of taking everything I read or heard about software testing from books, blogs and conferences as absolute knowledge, since it came from "experts": authors of books on testing, consultants, and folks with great reputations in the industry. After getting into the context-driven world of testing and some initial training, I started understanding the importance of context. Then I figured out how background, or context, colors every piece of information so differently, and how not to be fooled by the power of presentation of views and ideas that arrive in a context-independent way.

Another extension of White's illusion and the importance of context relates directly to enemy #1 of rational thinking: confirmation bias. In simple words, it is the tendency to support and endorse something you already believe to be true, while rejecting or ignoring any contrary evidence. Since childhood we keep accepting pieces of "information" as knowledge and storing them in our brains (the tabula rasa that John Locke suggested). We probably start off with an empty slate (I am not sure whether the brain of a fetus in the mother's womb already has something written on it) and go on accumulating so-called knowledge through our senses and experience. Once the brain accumulates a "critical mass" (probably when kids start going to school, where they are asked to simply follow the rules or memorize what the teacher says), confirmation bias starts kicking in. The brain starts filtering out all information that does not confirm the current (at any given point in time) information "saved" on the brain's slate.

Just as our eyes perceive color in the context of its surroundings, our so-called knowledge is relative to the stuff we already reckon to be true. This illusion of absolute knowledge is similar to White's illusion. Fortunately, we know that something of this sort (filtering to confirm known stuff) is going on, and we as testers and rational thinkers need to be vigilant about our confidence in what we know to be true. One way I have been practicing to beat confirmation bias is to hang around with people, and to read and listen to ideas, that contradict my existing beliefs about, say, testing, software, management, money matters and almost anything else that affects me.

There is a huge knowledge base and body of psychological research on confirmation bias. My favorite line on it is from David McRaney, the author of "You Are Not So Smart": "People like to be told what they already know. Remember that. They get uncomfortable when you tell them new things."

Back to Keith - thanks for the note on White's illusion and the rest of the presentation about the "mission of testing"; I really enjoyed listening to it. In my opinion, the key message of Keith's talk is the danger of seeking confidence. What danger? If you seek confidence, you will get it - an illusory one, though. There is an entire machinery and system called "marketing" to serve you exactly the confidence you are seeking, "cooked" with a recipe they swear by.

If you are a stakeholder, be wary of any consultant, project manager or anyone else serving you when they claim they can "generate" confidence by doing "xyz" (or whatever).

"We never are RIGHT we can only be sure we are WRONG" - Richard Feynman (quoted in Keith's presentation). I love to quote Feynman and tend to agree with him almost on all things that he said.
Need to check If I am falling into confirmations bias with respect to my fascination with Feynman and his approach of knowing.


Sunday, October 20, 2013

Questions about Gamification of Testing

I came across the phrase "gamification", and how we can use it in testing, in the writings of Jonathan Kohl. One day on Twitter, Jonathan and I had a brief exchange, and he encouraged me to explore the topic and write about it. Yesterday I read a few articles on the topic and thought it was high time I jumped in and learned this stuff. Here is an initial and very crude attempt to understand the idea of gamification in general, and how it is applied to testing.

When I think about the words gamify or gamification, what comes to my mind is some sort of application or accommodation of the elements of games in other systems, to see what happens. That is probably what gamify means: take a thing or system that is a game, or related to a game, and do stuff as though you are playing a game. To put it simply, gamifying testing is running testing as though it is a game.

What does it mean to run/conduct testing for a project as a game?

At a broad level, a game is a competition between players, leading to a player or a team winning or losing against the game or against another player or team. This definition is a provisional one - I might not be considering types of games that do not fall into this category. I wonder if there is any game with no notion of victory or defeat.

Games can be single-player or multi-player, individual or team-based. It seems to me the constants are that there are rules of the game and there is a definite (mostly time-bound) outcome. How do we apply these elements to testing as a game?

In this initial exploration of the gamification of testing, let me analyze the elements of games that I feel strongly about, and how they apply to testing.

First of all, there are rules in a game. I cannot, at least at this moment, think of a game that does not have rules. What are the rules of testing - that is the question that comes to my mind. Then I would ask: who makes these rules? Is there one set of rules that applies to all kinds of testing being done? When we gamify testing, are we considering testing as one single type of game, or as a collection of different games, each with its own set of rules?

Secondly, there are players or groups of players who play the game. There are two-player games such as chess, tennis (there are doubles too), wrestling, boxing, badminton, carrom and so on. There are multi-player individual games: athletics, gymnastics, swimming, shooting, archery, car racing. Finally there are team games: football, cricket, various forms of hockey, baseball. When players and teams play a game, they compete with one another. When we model testing as a game, who are the players, and who competes with whom? I might be tempted to say testing is a game between the dev team and the testing team. With so much animosity already between the two communities, this idea is not worth pursuing - whatever model of testing we use, pitting dev and test against each other is not a good idea. So we have a problem here in the gamification of testing: we need to identify the players or teams that compete. Is testing a "friendly" or a "practice" game, where the ultimate goal for each team is not to win - just a warm-up or practice?

What about the goals or objectives of a player or team playing a game in the first place? Winning, of course! There are prizes for the winners, and in some cases even the losers get a prize (say, as runner-up). Winning brings happiness and satisfaction to the players, which can be a motivation or objective of playing. In order to pep up the losers, there is something called "sporting spirit". How many times have you heard the statement "it is not important whether you win or lose; participating and competing to the best of one's ability is what matters"? So if you lose, do not feel bad - there is always another chance. When we apply this to testing, how might we formulate the goals, objectives and motivations of playing the game of testing?

The rules of a game throw up conflicts, contradictions and options for the players to navigate using skill and strategy. In simple terms, a strategy deals with options and risks to take the player toward winning. We can apply the metaphor of game strategy to strategy in testing. This is probably the one element where I agree games and testing match: a good test strategy in testing is like a winning strategy in games. But then, what is the meaning of winning the game of testing? And against whom?

The most characteristic aspect of playing or watching a game is the climax and the thrill of the outcome. When a game ends, we have winners and losers. Winners take the trophy, prize, applause and glory; losers take some lessons on how to win the next game. How do we transport this idea to our world of testing? When testing ends (as a game), who is the winner and who is the loser? You might say the "customer" or the "business" is the winner - then who is the loser? Are there any games that end with only winners, only losers, or no winner or loser at all? An interesting example that comes to mind is casino games. It is said that in the end the casino wins: even though each player in a casino might win here and there, the casino is always the net winner. What does this tell us about the gamification of testing?

Having said all of the above and asked these questions, I do agree with some aspects of games and gamification as applied to testing.

As Jonathan says, games tickle our emotions, they captivate us, and they encourage us to work hard at solving problems and reaching goals - true. Running testing as a game will thus lead to greater engagement of testers in their work.

Why do games work? Success, and praise for success, generates pleasure in the brain by releasing chemical messengers such as dopamine and serotonin. In corporate settings, gamification sets employees up in competition with their peers, with rewards, badges and the like. I agree with this as well.

So, with this post I have attempted to come up with a few questions about the gamification of testing. In the next few posts I will explore the idea of gamification in general, and testing as a game in particular.


Shrini

Sunday, August 04, 2013

James Bach's Advice on Tool-supported Testing (aka Test Automation)

James Bach, in response to a question about articulating "test automation and frameworks" to non-technical people, gives these pretty useful pieces of advice.

I thought of sharing them with my readers.

-- Tools don't test. Only people test. Tools perform actions that help people test.
-- You must understand, design, monitor, and fulfill your test strategy. Only people can do that.
-- All testing is manual testing, in that regard. But in another sense most testing is tool-supported, since we use tools to help us in many ways.
-- Tools are capable of directly detecting only very specific bugs. Humans can, in principle, detect any kind of bug (especially when helped by tools).
-- Tools left alone will "detect" lots of things that are not bugs, while missing various huge bugs.
-- Think of tool-supported testing like cruise control-- it helps but the human is still driving the car.
-- Think of tool-supported testing as "tool-mediated" as opposed to naturally mediated. If you test through a tool then it filters out lots of the experience that may otherwise alert you to problems. This is not a bad thing (think of an infrared camera, which is exactly the same tool-mediated concept applied to vision) unless you test through your tool too much (imagine going through your life with infrared goggles on all the time).
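To ground the first point above ("tools don't test; they perform actions that help people test"), here is a minimal sketch in Python. The log file name and the patterns are hypothetical assumptions of mine; the point is that the tool gathers and flags, while the judgment about what is actually a problem stays with the tester.

```python
# A sketch of a tool that helps a person test: instead of issuing a
# pass/fail verdict, it scans a log and surfaces suspicious lines for
# a human to investigate. File name and patterns are placeholders.
import re

LOG_FILE = "app.log"  # hypothetical log from the system under test
SUSPICIOUS = re.compile(r"error|exception|timeout|retry", re.IGNORECASE)

def flag_suspicious_lines(path: str) -> list[tuple[int, str]]:
    """Return (line number, text) pairs a human may want to review."""
    flagged = []
    with open(path, encoding="utf-8") as log:
        for number, line in enumerate(log, start=1):
            if SUSPICIOUS.search(line):
                flagged.append((number, line.rstrip()))
    return flagged

if __name__ == "__main__":
    for number, line in flag_suspicious_lines(LOG_FILE):
        # The tool only points; deciding whether this is a bug is
        # testing, and that part stays with the human.
        print(f"line {number}: {line}")
```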

Read these along with the following post:

http://www.satisfice.com/blog/archives/58


I wish people would see automation as a form of tool-assisted testing; then sense would prevail.

Shrini