Sunday, October 20, 2013

Questions about Gamification of Testing

I came across the phrase “gamification”, and how we can use it in testing, in the writings of Jonathan Kohl. One day on Twitter, Jonathan and I had a brief exchange and he encouraged me to explore the topic and write about it. Yesterday I read a few articles on the topic and thought it was high time I jumped in and learned this stuff. Here is an initial and very crude attempt to understand the idea of gamification in general and how it is applied to testing.

When I think about the phrase gamify or gamification, what comes to my mind is some sort of application of the elements of games to other systems, to see what happens. That is probably what gamify means – take the elements of a game and apply them to something that is not a game, running it as though you were playing a game. To put it simply, gamifying testing is running testing as though it were a game.

What does it mean to run/conduct testing for a project as a game?

At a broad level, a game is a competition between players, leading to a player or a team winning or losing against the game or against another player or team. This definition is a provisional one – I might not be considering types of games that do not fall into this category. I wonder if there is any game with no notion of victory or defeat.

Games can be single player or multi player, individual or team. It seems to me that the constants are these: there are rules of the game, and there is a definite (mostly time-bound) outcome. How do we apply these elements to testing as a game?

In this initial exploration of gamification of testing, let me analyze the elements of games that I feel strongly about and how they apply to testing.

First of all, there are rules in a game. I cannot, at least at this moment, think of a game that does not have rules. What are the rules of testing – that is the question that comes to my mind. Then I would ask: who makes these rules? Is there one set of rules that applies to all the kinds of testing being done? When we gamify testing, are we considering testing as one single type of game or as a collection of different games, each with its own set of rules?

Secondly, there are players or groups of players that play the game. There are two-player games such as chess, tennis (there are doubles too), wrestling, boxing, badminton, carrom etc. There are multi-player individual games – athletics, gymnastics, swimming, shooting, archery, car racing etc. Finally there are team games – football, cricket, various forms of hockey, baseball etc. When players and teams play the game, they compete with one another. When we model testing as a game, who are the players and who competes with whom? I might be tempted to say testing is a game between the dev team and the testing team. With so much animosity already between the two communities, this idea is no good to pursue. Whatever model of testing we might use, putting dev and test against each other is not a good idea. So we have a problem here in the gamification of testing: we need to identify the players or teams that compete. Is testing a “friendly” or a “practice” game where the ultimate goal for each team is not to win – just a warm-up or practice?

How about the goals or objectives of a player or team playing games in the first place? Winning, of course! There are prizes for the winners, and in some cases even losers get a prize (say, as runner-up). Winning brings happiness and satisfaction to the players, which can be a motivation or objective of playing. In order to pep up the losers, there is something called “sporting spirit”. How many times have you heard the statement “it is not important whether you win or lose; participating and competing to the best of one's ability is what matters”? So if you lose, do not feel bad – there is always another chance. When we apply this to testing, how might we formulate the goals, objectives and motivations of playing the game of testing?

The rules of the game throw up conflicts, contradictions and options for the players to play with, using skill and strategy. In simple terms, a strategy deals with options and risks in order to take the player to a win. We can apply the metaphor of strategy in games to strategy in testing. This is probably one element where games and testing match, and I agree with it. A good test strategy in testing is the same as a winning strategy in games. But then, what is the meaning of winning the game of testing? Against whom?

The most characteristic aspect of playing or viewing a game is the climax and the thrill of the outcome. When any game ends, we have winners and losers. Winners take the trophy, prize, applause and glory; the loser takes some lessons on how to win the next game. How do we transport this idea to our world of testing? When testing ends (as a game), who is the winner and who is the loser? You might say the “customer” or the “business” is the winner – then who is the loser? Are there any games that end with only winners, only losers, or no winner or loser at all? An interesting example that comes to my mind is casino games. It is said that in the end the casino wins: even though each player in the casino might win here and there, net-net the casino is always the winner. What does this tell us about gamification of testing?

Having said all of the above and asked these questions, I do agree with some aspects of games and gamification as they apply to testing.

As Jonathan says, games tickle our emotions, they captivate us, and they encourage us to work hard at solving problems and reaching goals – true. Thus running testing as a game should lead to greater engagement of testers in their work.

Why do games work? Success, and praise for success, generates pleasure in the brain by releasing chemical messengers like dopamine and serotonin. In corporate settings, gamification sets up employees in competition with their peers, with rewards, badges and so on. I agree with this as well.

So, with this post I have attempted to come up with a few questions about gamification of testing. In the next few posts, I will explore the idea of gamification in general and testing as a game in particular.


Shrini

Sunday, August 04, 2013

James Bach's Advice on Tool-supported Testing (aka Test Automation)

James Bach, in response to a question on articulating "test automation and frameworks" to non-technical people, gives these pretty useful pieces of advice.

I thought of sharing them with my readers.

-- Tools don't test. Only people test. Tools perform actions that help people test.
-- You must understand, design, monitor, and fulfill your test strategy. Only people can do that.
-- All testing is manual testing, in that regard. But in another sense most testing is tool-supported, since we use tools to help us in many ways.
-- Tools are capable of directly detecting only very specific bugs. Humans can, in principle, detect any kind of bug (especially when helped by tools).
-- Tools left alone will "detect" lots of things that are not bugs, while missing various huge bugs.
-- Think of tool-supported testing like cruise control-- it helps but the human is still driving the car.
-- Think of tool-supported testing as "tool-mediated" as opposed to naturally mediated. If you test through a tool then it filters out lots of the experience that may otherwise alert you to problems. This is not a bad thing (think of an infrared camera, which is exactly the same tool-mediated concept applied to vision) unless you test through your tool too much (imagine going through your life with infrared goggles on all the time).

Read these along with the following post:

http://www.satisfice.com/blog/archives/58


I wish people would come to see automation as a form of tool-assisted testing – then sense will prevail.

Shrini

Sunday, June 09, 2013

10 Random ideas about Test Automation Estimation

I received a mail from a friend who asked about an estimation approach for test automation. Wow – what a topic to mess up your head with on a Sunday night. Instead of responding to him in a mail, I thought of writing a post on this so that others can engage in some conversation with me on the topic.

Here you go – 10 random thoughts on the topic. Well, I could extend this list to more than 10 items, but for now there are 10.

6. Test automation is writing code – it involves everything that writing code needs. Ask developers what they need. If the development world claims to have cracked this (estimation) problem, automation folks can simply lift and use that solution.

2. Regardless of what commercial tools claim about "generating code", or similar nonsense around "scriptless" solutions – the fact remains that any sustainable automation code is a software product, similar to the one the automation aims to validate. So do not fall prey to false propaganda around tools that claim easy automation.

4. Ideas/frameworks like data-driven, keyword-driven and hybrid are simple ideas for automation design. You need to go deeper and ask: if I need to write a method/function or a class in automation code, how much time will I need? Try getting an answer to that from, say, a developer. You might be aware of some crazy metrics around the number of lines of code written per day, or the number of functions/methods/classes written per day. As you can see, it only gets murkier if you start insisting on measuring the productivity of an automation developer in terms of these meaningless metrics.

9. An important thing to note is that we (folks in the software world) are knowledge workers – meaning we do not work on the assembly line of a factory. We deal with abstract things; software cannot be worked on in the same way as, say, "cars". So how does that change the way we should view estimation of developing automation code? Think about it.

5. Depending upon the nature of the piece of automation you are developing, you will not know in the beginning how much time you will spend thinking and how much you will spend putting your thoughts into compilable code. That is the biggest challenge developers have. So do we, the automation developers.

7. The first question you should ask when developing an estimation model for automation is: what is my smallest unit of work – the unit that is the building block of my whole solution? What answer would you get? A function? A class? The next question would be: are all building blocks similar? How many different types of building blocks do I have?

Compare it to, say, atoms – ask how you can characterize the atoms or molecules of your automation solution.

8. Once you get a clear answer to the above question, you next need to break down your automation solution into building blocks and size them. Then ask: given a competent automation developer, how much time would each unit take to build – hours, days etc.? Then add time for setup, testing, integration etc. You will get some ballpark number that you can go with as a first estimate.
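The block-sizing roll-up described above can be sketched in a few lines. The unit types, counts and hours below are hypothetical illustrations, not recommendations:

```python
# Rough bottom-up estimate for an automation solution, built from
# sized building blocks plus overhead for setup, testing and integration.
# Unit names and hour figures are made up for illustration.

UNIT_HOURS = {
    "simple_function": 2,   # e.g. a wrapper around one screen action
    "complex_function": 6,  # e.g. a multi-step workflow with validations
    "utility_class": 12,    # e.g. a reporting or data-access helper
}

def estimate_hours(blocks, overhead_factor=0.3):
    """Sum per-unit effort, then add a percentage for setup,
    testing and integration work."""
    base = sum(UNIT_HOURS[kind] * count for kind, count in blocks.items())
    return base * (1 + overhead_factor)

solution = {"simple_function": 20, "complex_function": 5, "utility_class": 2}
# 20*2 + 5*6 + 2*12 = 94 base hours, plus 30% overhead
print(f"{estimate_hours(solution):.1f} hours")
```

The point is not the arithmetic but the discipline: you must name your building blocks and size them before any number means anything.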

1. What reference should you use for creating your automation solution (or design)? Anything that describes what you want to exercise on the application under test. One approach I found useful is to create a mind map of application features and attach to each feature what the application can do, what data it processes, and what checks (note the word check) need to be done. This is your skeleton reference. Build it first by collating data/information from various references, then make sure all information from each source is accounted for. This is your master reference. Work with multiple sources of reference (requirements, test cases, use cases, or simply a manual walkthrough of the application) to build the map of features.
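The "master reference" described above can be captured as a simple data structure; here is a minimal sketch, with made-up feature names and checks:

```python
# A master-reference skeleton: each feature lists what the application
# can do (actions), what data it processes, and which checks to perform.
# Feature names, fields and checks are invented for illustration.

master_reference = {
    "login": {
        "actions": ["enter credentials", "submit", "reset password"],
        "data": ["username", "password"],
        "checks": ["valid login lands on home page",
                   "invalid login shows error message"],
    },
    "search": {
        "actions": ["enter query", "apply filter"],
        "data": ["query string", "filter values"],
        "checks": ["result count matches filter",
                   "empty query shows validation message"],
    },
}

# A quick sanity pass: every feature must account for all three aspects.
for feature, details in master_reference.items():
    missing = {"actions", "data", "checks"} - details.keys()
    assert not missing, f"{feature} is missing: {missing}"
print(len(master_reference), "features accounted for")
```

Whether you keep this in a mind map, a spreadsheet or code, the shape is the same: feature, actions, data, checks.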

10. Should you use test cases, test scenarios or test steps as the basis for automation estimation? As James Bach likes to put it, test cases are like unicorns – how many of them can you fit in a suitcase or a fridge? Without knowing what is inside, counting test steps or test cases and using that number as the basis for anything useful (let alone automation) is an utterly stupid idea. Never do it – unless you want to mislead someone.

3. A few words about the keyword-driven framework. Personally I think there is a lot of hype around this simple idea. A keyword is a verb (also called an action) that describes some feature of the application under test. In a developer's language it is some basic unit of code – typically a method or function. What is the big deal when you say "let us use a keyword-driven framework"? It's all hype – no real stuff there. There are even more irritating phrases like "keyword-driven (or keyword-based) testing" – so far I have not figured out how to do testing (as opposed to automation) using keywords. The same goes for other related buzzwords like data-driven automation (a marketing term for "let us use variables instead of hardcoded values") or the hybrid framework. Note that all these simple ideas had some place 10-20 years back, but not anymore. I personally prefer to develop automation pretty much the way a developer goes about writing product code – no difference. I hate the oversimplification by tool vendors and consultants of so-called "Excel-based automation", script-less automation, automation for your granny – they are simply empty ideas to bully unsuspecting boardroom folks into signing contracts.
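To see why this idea is so simple: a keyword-driven "framework" is little more than a verb-to-function lookup table, and "data-driven" just means the values live in the table rather than in the code. A minimal sketch (all names invented; the actions only print, where real automation would drive the app):

```python
# A keyword-driven "framework" in miniature: each keyword is a verb
# mapped to a function, and the "script" is a table of (keyword, args) rows.

def open_app(name):
    print(f"opening {name}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def verify_title(expected):
    print(f"checking title == '{expected}'")

KEYWORDS = {
    "open_app": open_app,
    "enter_text": enter_text,
    "verify_title": verify_title,
}

# Data-driven: the values sit in the table, not hardcoded in the functions.
script = [
    ("open_app", ["orders"]),
    ("enter_text", ["customer", "Acme Corp"]),
    ("verify_title", ["Order Entry"]),
]

for keyword, args in script:
    KEYWORDS[keyword](*args)
```

Twenty lines of dispatch code: useful, but hardly worth the "framework" label vendors attach to it.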

How would I summarize? There is no simple solution for estimating automation effort. Keep a watch on how the development (programming) community deals with this conundrum, and let us use that to build our own model. At present, with developers working on small units of work like user stories in an iterative model, churning out working code (theoretically) on a weekly, fortnightly or monthly basis, I think the whole problem of "tell me how much time (and resources) you need to develop this solution" will vanish. You would probably say "let me start with 3 people; I will publish a 1-2 week plan of what I will deliver for people to use – let us take it from there".

I think gone are the days when you had 3-6 (or even more) months of lead time before some software was deployed for use. In the mobile apps world, development times are even shorter. I doubt anyone would ask you to give an estimate for automating such an app. It seems we have solved the problem of estimation by going small and going fast.

I am happy to be corrected on any of the views expressed here. Let me not forget to add: when I say automation, my experience has mainly been in the IT/IT-services world, working with commercial off-the-shelf automation tools. At the likes of Google or Microsoft, it would be a totally different ball game altogether.

Shrini


Friday, June 07, 2013

Are you measuring something that is easy to measure or something that is important?

Measurement is fabulous – unless you are busy measuring what is easy to measure as opposed to what is important. – Seth Godin


... And what is important (and to whom and when) is often subjective and context based.

Thank you, Mr. Godin, for your sound advice – it is useful for software folks.
I have a confirmation bias for spotting bad metrics and measurements. We have an obsession with measuring things to demonstrate that we are rational and objective humans (which we are not). It's amazing to see how Seth Godin, in the post quoted above, demonstrates that measuring something just because it is easy to measure is waste.

"As an organization grows and industrializes, it's tempting to simplify things for the troops. Find a goal, make it a number and measure it until it gets better. In most organizations, the thing you measure is the thing that will improve"


Many people blame growth and size for the "metrics menace" and say "how can we manage such a volume of work if we do not have the right metrics?". Remember: the thing that you measure will be a victim of gaming and match-fixing – people will change their behaviour to look good in terms of what is being measured. Look at our testing metrics – all easy stuff to measure (sorry, simply count): number of test cases, defects (and all dimensions thereof), number of requirements (this one is really bizarre), defect detection percentage, defect leakage rate, cyclomatic complexity... the list is long, and it is mostly easy stuff to measure (in fact, simply count).

What our users care about, though, is how the software works (or does not work) – it is about those emotions (frustration, anger, happiness etc.). Since these are important but difficult to measure (in easily understandable numbers or percentages), we take the easy route: pretend as though they do not matter at all, or, when confronted, wear the "rational" hat and issue the "scientific/engineering" statement "anything that cannot be measured cannot be improved".

"And this department has no incentive to fix this interaction, because 'annoying' is not a metric that the bosses have decided to measure. Someone is busy watching one number, but it's the wrong one."

-- So true for software – our bosses (influenced by high-flying software engineering/process consultants) have chosen to turn a deaf ear to the real "metrics" (the ones that are tough to measure). Thus software developers and testers appear to have no incentive to "listen" and "fix" the important issues that matter to users.


Software is developed, tested, used and maintained in, for and by a social enterprise – and people are irrational, impulsive, greedy and look for instant gratification. Society (a name given to a large number of people living together) amplifies such individual traits.

We software testers need to adopt a social-sciences approach and stop aping the "engineering processes" of a factory assembly line.


- Shrini

Thursday, May 30, 2013

How many heads can you roll off with this Automation?

... This is exactly what one of the managers in a meeting asked me when I was discussing, with a group, the automation that we were developing and maintaining. Hold your breath – this is not an extract from some 10-15 year old fairy tale. It shows the dominant view held amongst business stakeholders, IT execs, consultants and, sadly, many test managers.

For those starting out in this field and with this topic: test automation (or simply automation) is the idea of some computer program "checking" the behavior of another computer program (called the application under test), in some sense "replicating" what a sufficiently disengaged or brain-dead tester would do in the name of testing the application. In order to drive the idea home, early automation tools such as WinRunner introduced "record and playback". Wow – what a way to simplify the really complex and difficult work of testing a software application.

Thanks to IT consultants and managers, whenever the problem of "speed" showed up in meetings, automation was proposed as a potential solution. This has grown to such insanity that today automation is the common solution to almost every problem in software projects. But very soon people in IT realized that doing automation requires money – money beyond what you pay a tester.

Then some clever fellow in a consulting company shouted "Return on Investment" – and from that fateful day, the life of the tester, or of anyone who supports testers through automation, has never been the same. Since automation requires funding over and above what is spent on testing, execs obviously want their pound of flesh in return.

This leads to the popular equation: if without automation you need 5 testers to do the testing for a project/release, then with 50% automation you need 50% fewer people. That is how automation pays for itself. Since software requires repeated testing as changes happen, automation, once done, can be used repeatedly without paying a human tester. That is how conventional, and most popular, thinking on automation goes.
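The "popular equation" can be written out explicitly. To be clear, this is the naive exec arithmetic the post is criticizing, not a recommendation; the numbers are illustrative:

```python
# The "popular equation" in code form - the naive ROI arithmetic that
# equates automation coverage with headcount reduction.

def naive_tester_savings(testers_needed, automation_coverage):
    """The exec's claim: automating X% of testing removes X% of testers."""
    return testers_needed * (1 - automation_coverage)

print(naive_tester_savings(5, 0.5))  # the claim: 5 testers -> 2.5 testers
```

The equation looks tidy precisely because it ignores everything that matters: automation has to be written, maintained and interpreted by people.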

While it is not very difficult to reason about why and when automation cannot reduce the number of testers needed, the very proposition of "automatic" testing – removing the need for some dumb tester staring at the screen during an automated test run – is simply irresistible.

I have fought many losing (or lost) battles explaining to my stakeholders why automation should not be thought of as a means to reduce the number of testers or the cost of testing. Every time I lost, I was made to understand that the execs had made an explicit choice not to reason, but to continue to insist that if automation cannot reduce the manpower required for testing, it is useless, or at least not worth investing in.

I am thinking of refusing to do automation when the right expectations are not set with stakeholders – will it work? Will I be given the right of first refusal, to not do automation, where the right level of awareness does not exist?

But then, if you are a business leader or IT manager (not someone with a deep understanding of, or appreciation for, testing and automation), you will believe whatever a consultant or tool vendor says.

As I close another pessimistic post on automation, I realize it is tough to be in automation, where everyone has a (strong) opinion and I have to force my agenda through... tough...

But I have not given up – I keep trying to bring sanity to the mad world of test automation.

Shrini


Sunday, May 19, 2013

How to disagree elegantly and learn something in the process ...

I love my Zite iPad app, which pulls up amazing (and the latest) news on just about anything. By tuning it to topics like science, philosophy, mind-body, software, programming and critical thinking (topics of my interest), I can get hours' worth of reading every day from this app. Thank you, Zite.

I tweeted about Daniel Dennett's thinking tools – an article in the Guardian. Lots of good stuff – take a look at it and, if possible, buy the book and read it.

The idea from this article that most attracted me is about "how to effectively criticize/argue with someone". Here is a quick paraphrase.

3 simple rules (attributed to the social scientist Anatol Rapoport, as Daniel notes in the article):

1. Attempt to restate (re-express) your opponent's idea in your own words (so clearly that your opponent says "I wish I could have expressed it like you did – that is precisely the idea")
2. State the points of agreement with your opponent's idea (especially if they are not matters of general/public agreement)
3. State what (new) you have learned from the idea

Only after doing 1, 2 and 3 can you offer any rebuttal or criticism.

Notice what #1, #2 and #3 do to your opponent:
By #1 you have shown that you have understood the idea (perhaps even better than the opponent herself).
By #2 you have established an emotional connection with your opponent by explicitly stating which portions of the idea you agree with. This will open your opponent up to considering your points. This is the point where she will start actively listening to you.
By #3 – this is the big one – you show your humility and your desire to learn while critiquing an idea.


Through this series of actions, you essentially convert a potentially adversarial interaction into a positive and collaborative one.

I will be putting these rules into action in situations where I disagree with someone and offer opposition or criticism. Let me see how it goes.

Pretty sound advice, Daniel. Thanks.

Shrini

Tuesday, May 07, 2013

Book/Reading suggestions ...

A few days ago, a tester friend of mine approached me with a request to suggest some books for him to read. I responded with a small list that, on the face of it, looks unlikely for a software tester.
I thought I would share the list with you folks ...

Here it is.

1. An Introduction to General Systems Thinking by Gerald M. (Jerry) Weinberg

This book introduces the idea of "systems thinking". For a tester, I think it is most important to know and engage in general systems thinking as we engage in solving problems.


2. Surely You're Joking Mr. Feynman by Ralph Leighton and others

Richard Feynman is, in my opinion, a hero for testers. This Nobel Prize-winning American physicist lived the life of a curious child, exploring the world all his life and never turning away from learning new things. He questioned things around him like a true tester. The encounters he describes in this book explain what it means to be a curious thinker. Although he openly hated philosophy and made fun of philosophers, we can forgive him for the enthusiasm he showed and the examples he left through his life demonstrating a human's thirst for knowledge and learning.

You can see the interview he gave for BBC Horizon, "Fun to Imagine" – look it up on YouTube.

3. Outliers by Malcolm Gladwell – This is not a book for testers in a direct sense, but a fascinating book that illustrates the kind of systems thinking that Jerry's book (#1 above) describes. This is one book that I read from start to end. Each chapter is an illustration of how to look at publicly available information and create a whole new interpretation of it.
Other books from the same author that are worth reading are "The Tipping Point", "What the Dog Saw" and "Turning Point" (a science book).

4. God Particle by Leon Lederman

This again is not a testing book, not a systems-thinking book, not a book about software. It is about the amazing journey of science toward understanding the building blocks of our universe. I liked the narration – heavy scientific stuff expressed and articulated through metaphors and examples so that even a 7th grader can understand. What does this have to do with testing? Understanding a tough subject and explaining it in easy language is something testers do all the time – find tough bugs and demystify them for our stakeholders, including developers.

5. "How to Think About Science" – a CBC series of 14 interviews with scientists, philosophers and writers about an emerging form of science. If we reckon testing is multidisciplinary, look no further than this. Download the series of interviews (mp3) and listen/absorb. These interviews left a long-lasting impression on me about how to think about an intellectual pursuit like science, software development or testing.

Shrini

Making a food item vs Solving a Puzzle - An attempt to characterize Testing Mindset

A Disclaimer: I am going to make some sweeping generalizations about how testers and developers (generic name including programmers, designers and business analysts) think and work. This is an attempt to characterize a /typical/ testing (or tester) mindset - a set of dominant thinking patterns, attitudes, biases, choices and behaviors.

I was reading a bedtime story to my 9-year-old daughter from a book of "Akbar-Birbal" stories. In one story, King Akbar poses a puzzle to Birbal about how "giving" typically works: under what circumstances is the giver's hand at the bottom and the receiver's hand at the top? Under normal circumstances, the giver's hand would be on top and the receiver's hand below it. How do you solve this puzzle? What goes on in your mind when you encounter stuff like this?

This got me thinking about how solving such puzzles/riddles works in general. When you start solving a puzzle like the one above, your mind is like water gushing out of a pipe – divergent thinking. You work from the definition of the problem out into vast, open exploration.

Different types of puzzles require different approaches to a solution – in some cases you know the answer, and in other cases you don't.

1. Math problem - Solve a simultaneous algebraic equation or solve a differential equation
2. Solve Sudoku
3. Play Chess - from initial state to win.
4. Play Scramble – how many words can you make from a set of jumbled letters?

Contrast solving puzzles with, say, cooking (making) a food item from a recipe or with someone's help. Here you have a more or less definite, probably previously seen, end state, and you know when you are done. You work through mostly known steps or incremental activities from start to end. In other words, you do convergent thinking. Many acts of "construction" go from some known set of conditions to some known end state – you go, say, from "requirements" to "working software".

Contrast that to a testing problem or solving a riddle.

Extending these two activities – cooking a food item and solving a puzzle – I think the former describes how developers work and think, whereas the latter characterizes a typical tester's way of thinking.

What do you think?


Support Keith - Find answers for questions about ISTQB and more....

Keith Klain is stirring up the world of testing with some smart and witty comments about testing on Twitter. I have enjoyed his discussions with Rex Black and others on ISTQB and other topics close to the hearts of testers – especially context-driven ones.

Here is what makes Keith worth a special mention: he is a business/technology leader (not a consultant) at a bank and heads a software testing group. Unlike other testing leaders, he talks like a practitioner who does testing day in, day out (not someone who manages someone who manages a team, a few of whom are testers). It is quite a welcome change in the world of business leaders we see around us.

Two things I want to bring to your attention about what Keith is doing.

1. Watch him debate with others on Twitter and notice how he gets people talking. In one tweet discussion about testing and confidence with Rex Black, Michael Bolton and others, Keith says (paraphrased) "For a change, let us swap positions – how about you (Rex Black) arguing in favour of our view (that testing does not build confidence)!"

In a debate, can you take a stand totally opposite to what you have believed all your life and see the world from that angle? Confirmation bias – the number-one enemy of testers, or for that matter of any intellectual – can be beaten by hanging around with folks who think differently. Well said, Keith!

2. Sign the petition that Keith has set up questioning some basic ideas about how ISTQB goes about doing its ("non-profit") business. First of all, read the petition and see if it makes sense – if it does, please sign up.

Follow Keith (@KeithKlain) on Twitter and watch out for the interesting debates he kicks off ...

Shrini

Friday, January 25, 2013

Should automation that runs slower than human test execution speed be dumped?


I am working on a piece of automation, using Java and a commercial tool, to drive a test scenario on an iPad app. The scenario involves entering multiple pages of information and hundreds of fields of data. The automation script runs the scenario in about 1 hour, whereas a tester who exercises the same scenario on the app “manually” claims it takes only about 30 minutes.

I was asked: if the automation script runs slower than human test execution (however dumb), what is the use of this automation? What do you think?

Here are my ideas around this situation/challenge:
Mobile automation might not ALWAYS run faster than human test execution
Many of us in IT have this QTP/WinRunner way of seeing testing as a bunch of keyboard strokes and mouse clicks, with automation as a film that runs like a dream at super-fast speed. GUI automation tools that drive a Windows desktop application GUI or a web GUI have consistently demonstrated that it is always possible to run a sequence of keyboard and mouse-click events faster than a human. Enter the mobile world – we have 3-4 dominant platforms: Android, iOS, BlackBerry and Windows Mobile. When GUI automation enters the world of mobile, it mainly runs on some Windows desktop that communicates with the app (native or web) on a phone connected to the desktop through, say, a USB port. The familiar paradigm of the automation and the AUT running on the same machine/hardware breaks down, and so should our expectations of test execution speed. The iOS platform specifically (in non-jailbroken mode) presents several challenges for automation tools, while Android is more programmer-friendly. As the technology around automation tools on mobile devices and the associated platforms (desktop and mobile) evolves, we need to be willing to let go of some strongly held beliefs formed from GUI automation of web and Windows desktop applications.
Man vs. Machine – items that might make machine/program slow
When you see a button on the screen, you know it is there and you touch it (similar to a click on non-touch devices) – as a human tester you can regulate the speed of your response depending upon how the app is responding. Syncing with the app, checking that the right object is in view, and operating the object – all of this comes very naturally to a human. When it comes to automation tools (mobile tools especially), all of this has to be controlled programmatically. We have function calls like “WaitForObject” and assorted “Wait” calls to sync the speed of the automation with the speed of the app's responses. With all this programmatic slowing down or speeding up of the automation relative to the app's responses, plus the checks to make sure the automation does not throw exceptions, automation programmers often need to favor robust but slower automation code that is almost guaranteed to run at any app speed. This is one of several reasons why automation might run slower than human execution. You might ask how the likes of QTP handle this situation – even those tools have to deal with these issues. Given the state of the technology, the problem is somewhat acute in the mobile automation space.
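The “WaitForObject”-style synchronization described above boils down to a polling loop: check a condition, sleep briefly, retry until a timeout. A minimal sketch (this is a generic illustration, not any particular tool's API; the simulated button check is invented):

```python
import time

# A generic "wait for" helper of the kind mobile automation forces you to
# write: poll a condition until it holds or a timeout expires, instead of
# sleeping for a fixed, worst-case interval.

def wait_for(condition, timeout=10.0, poll_interval=0.25):
    """Return True as soon as condition() holds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False

# Example: wait until a (simulated) button becomes visible.
state = {"visible": False}

def button_visible():
    state["visible"] = True  # a real check would query the app under test
    return state["visible"]

assert wait_for(button_visible, timeout=2.0)
```

Robustness is why such loops are everywhere in automation code, and why each one adds a little latency that a human tester never pays.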
Now imagine long, large and highly repetitive testing cycles - a human tester would flag by the 2nd or 3rd iteration due to fatigue and boredom. Consider a multi-page form with hundreds of fields: how long do you think a human tester can stay focused on the data entry? Here is where our "tortoise" (slow but steady) automation still adds value. This slow program does not mind working a hundred times over with different data combinations, freeing up human testers' time and effort for you.
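The tortoise's advantage is easy to sketch. Assuming a hypothetical `submit_form` function standing in for the real form-filling steps, a data-driven loop grinds through every combination without fatigue:

```python
import itertools

def submit_form(name, country, amount):
    # Stand-in for driving the real app; a negative amount is rejected.
    return "ok" if amount >= 0 else "error"

names = ["alice", "bob"]
countries = ["IN", "US", "DE"]
amounts = [0, 100, -5]

results = {}
for name, country, amount in itertools.product(names, countries, amounts):
    # Iteration 3 or iteration 300 - the program does not get bored.
    results[(name, country, amount)] = submit_form(name, country, amount)

failures = [combo for combo, status in results.items() if status == "error"]
print(len(results), "runs,", len(failures), "failures")  # → 18 runs, 6 failures
```

The data sets here are tiny and made up; in practice the same loop would be fed hundreds of combinations read from a file or spreadsheet.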
Remember - automation and skilled human testers each have their inherent strengths and shortcomings. A clever test strategy combines (mixes and matches) human and automated modes of exercising tests to get maximum output: information about issues, bugs and how the value of the product might be threatened.

If automation runs unattended well – why bother about execution time?
Many of us are used to sitting for hours staring at automation running, to see if it works, passes or fails; if it fails, we check, correct and rerun. If the automation is robust and runs unattended, why have someone watching the screen at all? Why not run it outside working hours? Why not schedule it to run at a set time? That frees up human resources for other areas requiring focused human testing. Isn't this a value provided even by slow-running automation - freeing up human testers? A well-designed but slow-running automation can still justify the investment, as it runs without needing your attention.

How can you get the best out of slow-running automation?
  • Optimize the automation to see if speed can be improved - remove unnecessary sync/wait and "object exists" calls (without compromising robustness)
  • Identify bottlenecks in the tool and fix them
  • Identify environmental and data-related slowness in the automation and fix it
  • Schedule automation outside working hours and save human effort


Have you come across automation that runs slower than human test execution? What did you do with it - dump it? I want to hear about your experiences.


Sunday, December 30, 2012

Where do you stand in this debate?



Inspired by Elisabeth Hendrickson's blog post 

[Updated 25th Jan 2013]
I am disappointed to see no responses to this post - I expected some agreement or disagreement. Whenever a post of mine gets no comments, I think of the following possibilities (thanks to Michael Bolton):

1. The post is not very engaging - there is way too much information out there. Everyone and everything is seeking attention; this post simply failed to get any.
2. It's a dumb idea - completely useless.
3. The post is simply a question that is either too simple to answer (no one wants to feel insulted by answering) or too deep and intriguing (why bother answering).
4. Why isn't the author saying anything? Is this a trick to get some free survey done for his homework?
5. No comments.

I will attempt to expand on this topic sometime in the future. This situation taught me something: no comments will make you think.

Dear readers - thanks for not commenting and teaching me something.

Shrini

Sunday, November 04, 2012

A bizarre idea called "Software testing factory"

"Persistence in the face of a skeptical authority figure is priceless" - Seth Godin

Paul Holland (twitter handle @PaulHolland_TWN) shared this amazing video of Seth Godin on education systems. I listened to Mr Godin talk about how the present system of schools evolved from schools churning out laborers for factories. Alas, even in our software testing "industry", we still want laborers as testers, and companies take pride in setting up software testing factories. This post is about how bad and dangerous the idea of a "software testing factory" is.

According to Godin, about 100-150 years ago schools served a different purpose. Large-scale education, he says, was not developed to motivate kids or to create scholars. It was invented to churn out adults who worked well within the system. Scale was more important than quality, just as it was for most industrialists. A school day starting with "good morning" represented the notion of respect and obedience injected into students as a virtue. School was about teaching compliance, about students fitting into the larger social context once they passed out. Schools, according to Godin, were established as public education to produce people who could work in factories - a set of people who would comply, fit in and follow the supervisor's orders.

Emerging industrialization brought the focus onto profitable factories, Godin points out. Factory owners thought: "there aren't enough people; if we get more, we can pay them less; if we can pay less, we can make more profit." When we put kids into a factory called school, we indoctrinate them into compliance. Godin points out another key feature of factories - the idea of interchangeable parts - which, translated to schools, meant producing people who are replaceable, just like a "standard part" of a machine. And when it comes to work, if you do more there is always an "ask" for a little more. This is because we are products of the industrial age. The term productivity was brought to the center of things.

The key idea that attracted me in this talk was the factory and how the factory worked. I strongly believe that software and software testing work is "knowledge work", in contrast to "factory work". Here, thinking humans, collaborating with other humans and assisted by computers, create the stuff we call software, which has changed and continues to change our lives. Through wholesale lifting of the factory idea - thanks to the strong association of "quality" with the likes of Toyota and the promotion of "sick-sigma" (Cem Kaner used this phrase first, I think) - we have indoctrinated software people as factory workers.
I am troubled by this. When I ask people, "Does what we deal with matter - concrete machines in a factory versus abstract ideas and machine instructions? Should software be produced like a machine on an assembly line? Is it?", I get no clear response. Many simply think that since our (software) industry is immature and nascent, we must learn from engineering disciplines like manufacturing.

I am fine with learning from other disciplines, as I believe software testing is multidisciplinary - we constantly import ideas from fields such as the natural sciences, maths and statistics, behavioral economics, neuroscience, cognitive psychology, philosophy, epistemology, and the list continues. What I am against is the wholesale, mindless import of ideas from areas that deal with a totally different type of thing; there we must exercise caution.

Coming back to the factory: many IT services companies take pride in saying "we have successfully implemented a software testing factory for a client" or "software testing is now commoditized" - what a shame! What happens in a software testing factory? There are dozens of "brain-dead" people called software test engineers whose job is to produce test cases, bugs, test results, automation code (sorry, the popular word is "script"), metrics and tonnes of reports. The intellectual pursuit of software testing - which seeks to discover, investigate and report interesting and strange problems in software, and which requires a thinking, skeptical and open mind - has been reduced to "mindless" factory work. As a passionate tester, I would never want to be associated with this deadly idea.

Am I biased, as a tester, in seeing my profession as some highly complex rocket science? Is my rational mind blocked or misdirected by confirmation bias? I think that is possible. If I thought of software testing as a business - like any other business, say hotels, garments, manufacturing or engineering hardware - I would love the idea of factories. I would want to maximize my profit per dollar of investment. I would train cheap labour - teach them how to write test cases, report bugs and automate test scripts - then deploy them en masse to a client and charge handsome money in the name of testing. This business apparently works, and it is perfectly legal and, by and large, ethical.

But if I imagine myself as a tester in such a factory (flipping my context from factory owner to factory worker or supervisor), I see a dark future for myself. Just as factory workers are expected to "comply" and follow a set pattern of work, when the factory owner no longer needs me, I have no skills I can trade outside the factory. Over a period of brain-dead work, I have lost my thinking and questioning mind. Unless I gain the skills to become a factory owner myself (business development and management skills), I must leave the factory quickly and move to an environment where I can grow my skills as a tester and as a thinking individual.

In short: if you are managing software testing as a business, a software testing factory is good for you. If you are a software tester working in a software factory, get out of the place fast, or change career to become the factory owner or supervisor.

The tester in me roars: I wish for an end of compliance as an outcome - it is too boring for a curious, skeptical mind to simply fall in line.


Additional Notes: 
Following are a few statements I liked that strike a chord with my belief in software (testing) as knowledge work, as opposed to factory work:
  • Why would we not want our kids to figure it out and go do something interesting?
  • Are we asking our kids to "connect dots" or "collect dots"?
  • We are good at measuring how many dots are collected - how many boxes ticked, how many facts memorized.
  • We do not teach kids how to connect the dots. You cannot teach connecting dots from a dummies' guide or a textbook - only by putting kids into situations where they can fail and experiment.
  • Grades are an illusion - passion and insight are realities.
  • Your work is more important than your answer's congruence with the answer key.
  • "Fitting in" is a short-term strategy that goes nowhere.


Do not forget to read the pdf "Stop Stealing Dreams" by Seth Godin.

Sunday, October 21, 2012

Divisions in Testing, Slotting People - How bad is the idea of schools?



This post is an offshoot of a discussion with friends Rahul Verma and Vipul Kochar on twitter. It started from a blog post by Rahul on "exploratory testing" - an approach to testing that many in the context-driven testing community are working hard to be good at. When Vipul joined the debate, two key things stood out for me, as the following "long" tweet of Vipul's suggests - http://www.twitlonger.com/show/jm95dm

" ...classification, definitions are good. When one starts to use them to divide and slot people, it becomes counter-productive."

Vipul followed up with a detailed post here

Divisions amongst people

Take, for example, the idea of schools of software testing by Bret Pettichord.

Rahul wrote a good summary and analysis of the schools of testing way back in 2007. Rahul's main complaint was that the schools concept divides people. My view is different. To me, the idea of schools has been very helpful in identifying myself and my approach to testing as distinct from others I see around me. It helped me develop my skills within the framework of the context-driven school. Testing, as a multidisciplinary field, was (and will always be) divided. It is just that a few refused to recognize the differences. Still worse, some insisted that theirs is some sort of universally agreed way of doing testing.

What Bret did is phenomenal, but at the core he simply named the groups/schools that he saw. In other words, the schools-of-testing idea did not divide people - it gave "names" to different sets of practices "using" the name of testing. Having names for the things around us helps us talk about them, debate them, understand them and improve them. That is exactly what Bret's idea of schools did for some of us.

If you disagree with the idea of schools, you might be saying one of these:

"There is one universal way of doing testing hence idea of schools is absurd"
"I do not agree with Bret's classification - here is mine"
"I refuse the idea that there are patterns in testing that are distinct"

So it would not be correct to blame the idea of schools of testing for the "division" in our industry - divisions always existed; we now have one model in which these differences can be named. I also argued with Rahul that "divisions" are good for our craft - they work like multiple political parties in a democratic setup. With divisions, multiple, diverse ideas can co-exist. I am in favor of division in the testing community, as we need diverse mindsets, ideas and philosophies, each offering solutions to unique situations.

Vipul's post on "religions", and his apparent suggestion to be like "water", is indeed support for the view that "divisions" are good. If there are differences and divisions, cherish the diversity instead of trying to force unification.


Slotting people, calling people by names

As a strong supporter of the schools concept, what I condemn is slotting people where they don't want to belong. There are factory or analytical school practices, not factory or analytical school testers. Likewise, there is Agile testing (some form of testing that happens in Agile projects), but there are no Agile testers. There is exploratory testing, and testers can choose to be good at it - but when they master it they don't become exploratory testers; they become testers with mastery over the exploratory approach.

When people get slotted into groups/labels (for example, if we call someone a factory tester), to a few it sounds "offensive". Personally, I am proud to be a context-driven tester. I have no problem being slotted into a category that Bret proposed - but that is only me speaking. By speaking of myself as a context-driven tester, I let others know my testing philosophy and, to some extent, what to expect from me. This label helps me identify my approach and grow it within a framework driven by the principles of my school.

Vipul approaches this from a different direction - he talks about the dangers of obsession with belonging to a school (akin to the fundamentalism we see in religion). He says, "The test matters and the test result matters, not the division." Well, I say: how does one test? With what principles and values does one approach the act of testing? The values, beliefs and approaches one uses in testing define what Bret called a school. These elements of a school are not independent, separate parts of a tester's life and work. When we become conscious of them, we can work to improve them - add a few, modify a few, delete a few. How can one chase the objectives and goals of testing without an individual value system about testing? If young testers struggle to define terms like GUI testing or agile testing, or struggle over whether to belong to any school, it is a sign that they are trying to find their value system.

While a person can be a FREE-thinking person, able to choose and adapt, I can always see in that person a subtle value and belief system about the world and about work (testing) - a worldview. Even the choices of a free thinker are subtly guided by these values and beliefs. Instead of denying the existence of these values and beliefs (in the involuntary pretext of freedom to choose and adapt), I urge the likes of Vipul and Rahul to explore and find the subtle values that drive them. Bret's idea of schools and the influences of James Bach, Cem Kaner and Michael Bolton personally helped me find my values - or, to be precise, they shaped my fluid and rather vaguely defined testing philosophies, values and beliefs.

I am proud to stand up as a context-driven tester - I can talk about my values and beliefs about testing. While I do this, one thing these great teachers (James, Cem, Michael) taught me is not to get stuck in one unilateral way of thinking. I constantly question my beliefs and values; I try to hang around people who think and work differently than me. I train to be a critical and rational thinker, constantly looking to beat "confirmation bias".

I am reminded of this famous quote of Bertrand Russell: "Do not absolutely be certain of anything." So, as a tester, I keep doubting my own ideas and those of others - and that keeps me learning.

Shrini

Friday, August 24, 2012

How different Software Industry segments see Testing ...

Consider these views about testing expressed by a few real people across the software industry's segments. You (a tester) might be surprised by a few of these comments - but take it from me, they reflect the true state of how stakeholders see testing.

A manager from a software product company: "We follow the Agile model - every member of the team is responsible for quality and will do a bit of testing. We believe in Agile practices like test-driven development, continuous integration and automated unit testing - our code naturally comes out with good quality. We do not employ any 'plain vanilla' black box testers; that is a waste of our time. We get all our testing done by developers mostly, or in some cases testers cover the rest through automated testing. We don't have anything called a 'testing' phase in our process. We hire testers who are capable of writing production-level code, as most of their time will be spent writing unit tests and automation to help developers."

A manager from an IT/captive unit: "We believe in providing agility and value to our customers. Testing is one small bit of that whole process. We don't actually worry about how testing is done as long as it aligns with our business purpose. The bulk of the testing that happens is done by our partners. We constantly seek to commoditize testing and aggressively deskill it so that we can gain cost efficiencies. More than testing skills, we value business domain skills. Testers eventually either become managers (managing customers, IT services delivery/management and other stakeholders) or become business analysts."

A manager/consultant from the IT services industry: "Testing is all about assuring quality and process improvement. We constantly develop tools and frameworks to help our customers test efficiently and cheaply. We provide value-driven testing services based on our process maturity and our experience in setting up large-scale test factories. Our number one aim is to reduce the cost of quality - we do it by focusing on tools, processes and domain skills."

A consultant from a software tools company: "Testing is an essential part of the SDLC that can gain significantly from tools - automation tools. Aggressive usage of automation can help reduce the cost of testing. Software testing tools help in implementing a software test factory, so that non-technical and business users can use them and achieve faster cycle times and enhanced quality. Not to forget our strength in Six Sigma, CMMi and other software quality models. We endorse software quality management through rigorous metrics and quantitative measures."

Now, dear tester, identify where you are working and how you are improving your testing skills to suit the industry segment you work in now or hope to work in. Does this sound similar to the view of testing that you read in textbooks or hear at conferences? Did you know the software industry sees testing from such a variety of perspectives?

Shrini

Sunday, May 06, 2012

A brief introduction of Test Automation...

I was asked by a blog reader to give a quick introduction to how automation helps in testing. Here is how I replied. I thought this might kick off some interesting offshoots...


"Certain portions of testing, such as data validation, can be verified more efficiently by automation programs than by humans, in a repeatable way (humans make mistakes and are often terrible at repeated executions). By carefully identifying portions of the application under test that can be "safely" checked (validated) by automation, you can speed up testing (you can run many test cases in parallel, at night, and so on).

But beware - automation is a dumb (and humble?) servant. It will do exactly what you ask it to do, a million times, without cribbing - but it has no intelligence. A good tester can recognize something that is not in the test script and looks like a problem. Automation cannot do this."
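As a sketch of that last point (the names here are made up; no real tool's API is implied): a scripted check verifies only the value it was told to verify, and sails right past a garbled label that a human tester would spot instantly.

```python
def checked_total(cart):
    """The only thing this 'automation' ever looks at: the cart total."""
    return sum(item["price"] for item in cart)

cart = [
    {"name": "book", "price": 250},
    # A human would immediately question the mangled product name below;
    # the scripted check never looks at it.
    {"name": "p#@!n", "price": 50},
]

assert checked_total(cart) == 300  # the check passes; the garbled name goes unreported
```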


Do you like it?

One offshoot I am reminded of as I wrote this piece: automation is like people trying to lose weight. It requires patience, discipline and dedication. There are many quacks operating in both the automation and "weight loss" industries who promise "overnight" benefits.

If you are aware of how weight loss works (or does not work), you can safely extend the analogy to the benefits of automation.

Do not expect your testing or your application to become slim and trim with automation overnight - and, most importantly, do not expect it to remain so without ongoing investment. That latter part, neither automation consultants (especially those who sell tools) nor the folks who run the weight-loss industry will tell you.




Shrini

Sunday, April 01, 2012

Testing is Dead - in which world?

A few weeks back I participated in the VodQA event by ThoughtWorks. It was a day filled with lots of power-packed sessions and discussions around the topic of testing - sorry, QA (that is what TW calls testing).

My talk was about the alleged death of testing and its implications for people who live by and make their living from testing.

The slides of the talk are here and the video of the talk is on youtube (thanks, TW).

The folks organizing the event did a wonderful job of arranging a platform for people to exchange their views on testing. There was an "open house" where an ad-hoc group of people assembled to discuss a topic one of them wanted to talk about. There was passion and energy all around. I said to myself: in such an assembly of 50-70 people, who could believe "testing is dead"? Testing was a very real thing for the people at that event.

One thing I wanted the listeners of the talk to take away was this idea of "two worlds of interpretation" - the software makers' world and the software users' world. More about that later in a separate post.



Saturday, March 24, 2012

Learning from Tenali Raman's crows ...

As a kid, like many in the southern part of India, I grew up listening to stories of Tenali Raman - a 16th-century wise court-poet of King Krishnadevaraya of the Vijayanagara empire. Tenali Raman is also known as Vikat Kavi, meaning intelligent poet. Birbal, from King Akbar's court, enjoys a similar cult status in kids' stories in India. This story of counting crows, which I narrated to my 8-year-old daughter, made me realize how real Tenali Raman's crows are in our day-to-day life in software.

First, let me quickly run through the story. One day the king throws a strange puzzle at Tenali, asking him to count and report the number of crows in the city. Tenali thinks for a while and asks for two days to come up with the answer. After two days, he comes back and reports to the king that there are one lakh seventy thousand and thirty-three crows in the city (10 lakh = 1 million). At first the king freezes, not knowing how to respond; after a while, recovering from the shock of the answer, the king checks whether Tenali is sure of it. The king further says that he will conduct a count (recount?), and if the number does not agree with Tenali's, Tenali will be punished. Tenali, being Tenali, responds by qualifying his answer. He says it is possible that the recounted number of crows might differ from his number. If the new number is less than the old one, it is because a few of the city's crows have gone out of station to visit their relatives in nearby cities. If the new number is more, the additional crows are visitors from nearby cities come to see their relatives in the city. Hearing this, the king has a hearty laugh and realizes the flaw in the assignment. As in all Tenali stories, Tenali gets the king's praise and some prizes for his witty answer.

Now, let us come back and see how this crow metaphor applies to what we do as project managers, test managers and testers in our day-to-day work.

There are entities we deal with that are similar to crows, in the following respects:

1. Counting/quantifying them is a prized puzzle.
2. The count is asked for by an authority, a boss you cannot say "no" to (saying "no" can cost you your job, or earn you the label of "incompetent").
3. Often you can fake a number.
4. There is no easy, sure way to verify/validate the count.
5. Even if someone does a recount and comes up with a new (different) count, you can always "explain" the discrepancy, as Tenali did.

One example that comes to my mind is the count of test cases. Typically, during test estimation, as a test manager you would be asked "how many test cases could be written for a given set of requirements?" The boss would then do the required math to arrive at the number of testers required and the time required to execute the estimated number of test cases (note: time required to "execute" test cases, not to test). So wear the hat of Tenali - throw up a number. If asked, show your working (be sure to have one). You would be OK from then on.
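To see the kind of "required math" the boss does, here is a sketch with entirely made-up figures (no real project data) - note how confidently precise the arithmetic looks, even though the input is a Tenali crow count:

```python
import math

test_cases = 480               # the "crow count" you threw up
cases_per_tester_per_day = 20  # assumed execution rate
deadline_days = 10

# Effort in tester-days, then the headcount needed to fit the deadline.
effort_days = test_cases / cases_per_tester_per_day      # 24.0 tester-days
testers_needed = math.ceil(effort_days / deadline_days)  # 3 testers

print(testers_needed)  # → 3
```

The division and the ceiling are beyond dispute; the "480" is the part nobody can verify - which is exactly the crow-counting problem.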

There are things we deal with in software that cannot be counted the way we count concrete things. Software requirements, use cases, test cases, lines of code, bugs, ROI from automation - these are abstractions, not concrete objects. Counting them is akin to counting the crows in Tenali's story.

[Puzzle : Prove that ROI from automation is a Tenali Raman Crow count]

Cem Kaner says executives are entitled and empowered to choose their metrics. So the king was perfectly within his rights to ask Tenali to count and report the number of crows - though the king's objective in the story was not to make any important decision for his kingdom. In any case, a crow-count metric was sought.

What can a tester/test manager do when asked to count "crows"? While our community develops models and better alternatives to "imperfect metrics", we need to tread a careful path. We should provide alternatives alongside possible answers to crow counts.

I have come to realize that refusing to give the count can be counterproductive in many cases - trying to ape Tenali Raman might be more useful. The need for quantification is here to stay; years of persuasion and reasoning about why counting can be bad have not managed to contain the problem.

What do you think about "Pass/Fail Counts"?

Shrini

Wednesday, March 21, 2012

My Views on Testing Certification: 2012

A reader of my blog, Arpan Sharma, writes: "What's your take on certifications these days? I see you wrote about this in 2008, which is almost 4 years ago. Do you think the landscape of certifications has changed in recent times?"

Arpan - thanks for writing, and for reminding me that my stand on certification on this blog is about 4 years old now. It is interesting that you are checking whether I have changed my views. Here is a summary of my current thinking on certification.


1. First of all, the person seeking certification should be absolutely clear about what they expect the certification to give them - knowledge, a skill, skill enhancement, marketing value, a job, an interview.

2. Certifications that do not observe and qualitatively grade a tester in action, "while doing testing", cannot guarantee any level of testing skill. Employers, recruiters, hiring managers - please take note.

3. If you want to learn how to do good testing and how to gain skills across the broad testing landscape, certification is not what you should look for.

4. If a certification gets you a job in a given situation/context, or gets you shortlisted for an interview, consider taking it. But be aware: once you get the job, you are on your own. You will then be required to display skills on the job (depending on the type of organization and the nature of the work). The certification's role ceases there.

5. Be critical about what certification material and tests tell you - question them. Form your own ideas and logic about how things work. Do not take everything taught or read as part of a certification as "universal truth". Why is this important? Only by being critical of the certification course can you decide what value you intrinsically gained from it and what already existed in you.

6. Reputation is everything in today's world. You gain professional reputation by demonstrating your work and skills to your employer and to the outside world (through networking). Building reputation takes time and real, good work. People with confidence in their skills and an established reputation do not need a third party to endorse their level of skill. What does that tell you about certification?

7. Take special note of qualifiers like "Advanced" applied to certifications - check what exactly is advanced, and how. More often than not, it is merely more jargon-laden.

#4 and #5 especially apply to freshers looking for /some/ job, and to those one-to-three-years-experience folks who either had some software job or lost a testing job.

In terms of the landscape of certifications, I don't think there has been a change. The prime motive of certification providers is to make money - fast and cheap. That has only intensified with the growing number of job seekers. That is fine as a business objective; we, the target audience of such business ventures, need to be clear about what we want from certifications and how capable they are of delivering on their promises.

I repeat what I said earlier: if you want to learn, acquire skills or enhance skills in testing, certifications are things you should avoid. There are better, cheaper ways of doing that.

Did I answer your question, Arpan?

Shrini

Thursday, March 01, 2012

Patterns of weakness in approaches to testing

I was reading this testing round-table discussion and thought it might make a blog post. Here I go...

To me, the biggest weakness is the perception, or idea, of what testing is and why it is required.

Here are a few examples of how companies treat testing.

1. Something avoidable to a large extent, or even eliminable, if only they could get their programmers and analysts to get the spec and code exactly right. The lousy work these folks do during the SDLC creates the need for testing.

2. Quality assurance - a straight-out-of-the-box comparison with the manufacturing assembly line. For these folks, testing is all about process and nothing else. Get the process right and you are done. It makes no difference who does it and when - all they need is to get the process script right.

3. Building quality in from the ground up - a variation of #1 above. A growing group of people think that if you have automated tests (checks, actually) you don't really have to worry about testing. You are building quality in from the ground up - you cannot test quality in, you have to build it in, right? So poor testers need not define and manage testing (under the name of QA).

4. Testing? What's that? The whole-team approach. This is a creation of the Agile model. Here, testing is everyone's responsibility. There goes "testing" out the door as a specialist's job. When testing is something anyone in the team does, it becomes like any other project task.


5. And don't forget this popular rhetoric: testing (the phase) is dead. That is the biggest weakness in approaches to testing. What bigger weakness can there be about something than to declare it dead?


The whole "testing is dead" idea rests on beliefs and notions like these (test yourself on whether you agree):


1. Testing (as a phase or role) makes developers complacent - a safety net; remove it to make developers responsible.
2. With so much focus on automated unit testing, test-driven development and continuous integration, developers are producing quality software anyway.
3. Finding problems is no big deal; we know where the problems are (this is what James Whittaker said in his EuroSTAR 2011 keynote). So what do we need testers for?
4. With the cloud as the popular software delivery model, you need not worry about bugs leaking. The time and effort to fix and turn around a bug is ridiculously LOW - why bother testing?
5. What do you have crowdsourcing and beta testing for? Throw your stuff at users, let them use it and tell you where the bugs are (there should not be many, as we are a group of smart developers and we know where the bugs are).

Thus, the weakness in testing arises out of how we think about it and what we want it to do for us. Thinking idealistically about how software is made and used, applying models from other fields without properly customizing them, and removing or de-emphasizing the human element in the system - these are the key patterns of weakness in testing approaches.

What do you people think?


Shrini

Saturday, November 12, 2011

Cause and Effect - Non Linear Systems

Here are three examples where cause and effect do not line up the way you would expect. Take a look.

  1. Build a flyover on a busy road hoping that traffic will ease - a personal experience. [Traffic will actually increase with the flyover]
  2. Dip a thermometer in boiling water. What happens to the temperature reading? - adapted from Gerald Weinberg's book "An Introduction to General Systems Thinking". [The thermometer will first show a lower reading, because of the difference in thermal expansion between the mercury and the enclosing glass tube]
  3. Making cars safer will cause drivers to become more aggressive and rash - the "Peltzman Effect", adapted from the Freakonomics post "What happens to your head".
 In each of these cases, the effect is not what is expected but the opposite. There are explanations - and a lesson for testers: think holistically, and develop a systems-thinking mind.

Why does this happen? I think we often approach things with analytical/reductionist thinking - we break or divide a thing into its constituents (atoms) and study them. This linear, single cause-and-effect thinking (usually taken one cause at a time) can help us understand some aspects of an object or phenomenon. But with non-linear systems such as societies, political and cultural systems, and business systems, this simple cause-effect thinking simply does not hold good. So think in terms of systems and their interactions.

Shrini