Sunday, June 09, 2013

10 Random ideas about Test Automation Estimation

I received a mail from a friend who asked about an estimation approach for test automation. Wow - what a topic to mess up your head on a Sunday night. Instead of responding to him by mail, I thought of writing a post on this so that others can engage in a conversation with me on the topic.

Here you go - 10 random thoughts on the topic. Well, I could extend this list beyond 10 items, but for now there are 10.

6. Test automation is writing code - it involves everything that writing code needs. Ask developers what they need. If the development world claims it has cracked this problem, automation folks can simply lift and use that solution.

2. Regardless of what commercial tools claim about "generating code" or similar nonsense around "scriptless" solutions - the fact remains that any sustainable automation code is similar to the software product the automation aims to validate - so do not fall prey to false propaganda around tools that promise easy automation.

4. Ideas/frameworks like data-driven, keyword-driven and hybrid are simple ideas for automation design. You need to go deeper and ask: if I need to write a method/function or a class in automation code, how much time will I need? Try getting an answer from, say, a developer. You might be aware of some crazy metrics around the number of lines of code written per day or the number of functions/methods/classes written per day. As you can see, it only gets murkier if you start insisting on measuring the productivity of an automation person in terms of these meaningless metrics.

9. An important thing to note is that we (folks in the software world) are knowledge workers - meaning we do not work on the manufacturing assembly line of a factory. We deal with abstract things; software cannot be worked on in the same way as, say, "cars". So how does that change the way we should view estimation of developing automation code? Think about it.

5. Depending upon the nature of the piece of automation work you would be developing, you would not know in the beginning how much time you will spend thinking and how much you will spend putting your thoughts into compilable code. That is the biggest challenge developers have - and so it is for us, automation developers.

7. The first question you should ask when developing an estimation model for automation is: what is my smallest unit of work - the unit that is the building block of my whole solution? What answer would you get? A function? A class? The next question would be: are all building blocks similar? How many different types of building blocks do I have?

Compare it to, say, atoms - ask how you can characterize the atoms or molecules of your automation solution.

8. Once you get a clear answer to the above question, you next need to break down your automation solution into building blocks and size them. Then ask: given a competent automation developer, how much time would each unit take to build - hours, days, etc.? Then add time for setup, testing, integration and so on. You will get a ballpark number that you can go with as a first estimate (see the sketch below).
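To make the arithmetic concrete, here is a minimal sketch (in Python) of that first-cut sum. The unit types, counts, per-unit hours and overhead fractions are purely illustrative assumptions, not recommendations.

# Rough, illustrative estimation sketch: size each building block in hours,
# then add setup/testing/integration overheads on top of the base effort.
# All unit names and numbers below are made-up assumptions.
building_blocks = {
    # unit type:           (count, hours per unit for a competent automation developer)
    "page/screen object":  (12, 4),
    "reusable helper":     (8, 6),
    "end-to-end flow":     (20, 3),
}

base_hours = sum(count * hours for count, hours in building_blocks.values())

# Overheads expressed as fractions of the base effort (again, illustrative).
overheads = {
    "environment/tool setup": 0.10,
    "testing the automation itself": 0.25,
    "integration and review": 0.15,
}

total_hours = base_hours * (1 + sum(overheads.values()))
print(f"Base effort        : {base_hours} hours")
print(f"First-cut estimate : {total_hours:.0f} hours")

Treat the output as nothing more than a starting number to refine as the building blocks become clearer.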

1. What reference should you use for creating your automation solution (or design)? Anything that describes what you want to exercise on the application under test. One approach I found useful is to create a mind map of application features and attach to each feature what the application can do, what data it processes and what checks (note the word check) need to be done. This is your skeleton reference. Build it first by collating data/information from various references, then make sure all information from each source is accounted for. This is your master reference. Work with multiple sources (requirements, test cases, use cases, or simply a manual walkthrough of the application) to build the map of features. A small sketch of such a map follows.
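If you want to keep that skeleton reference in a machine-readable form alongside the mind map, a simple nested structure is enough. The feature names, actions, data and checks below are invented examples, not taken from any real application.

# A skeleton "master reference": for each feature, record what the application
# can do (actions), what data it processes, and what checks need to be done.
# Every feature, action and check here is a hypothetical example.
feature_map = {
    "login": {
        "actions": ["submit credentials", "reset password"],
        "data": ["username", "password"],
        "checks": ["valid user lands on dashboard", "invalid user sees an error"],
    },
    "search": {
        "actions": ["search by keyword", "filter results"],
        "data": ["query string", "filter criteria"],
        "checks": ["results match the query", "empty query is handled gracefully"],
    },
}

# Quick sanity summary of the skeleton: actions and checks per feature.
for feature, details in feature_map.items():
    print(f"{feature}: {len(details['actions'])} actions, {len(details['checks'])} checks")

A structure like this also gives you something countable to feed into the building-block estimate above.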

10. Should you use test cases, test scenarios or test steps as the basis for automation estimation? As James Bach prefers to put it, test cases are like unicorns - how many of them can you fit in a suitcase or a fridge? Without knowing what is inside, counting test steps or test cases and using that number as the basis for anything useful (let alone automation) is an utterly stupid idea. Never do it - unless you want to mislead someone.

3. A few words about the keyword-driven framework. Personally, I think there is a lot of hype around this simple idea. A keyword is typically a verb (also called an action) that describes some feature of the application under test. In a developer's language it is some basic unit of code - typically a method or function. So what is the big deal when you say "let us use a keyword-driven framework"? It's all hype - no real substance there (the small sketch below shows how thin the idea is). There are even more irritating phrases like "keyword-driven (or keyword-based) testing" - so far I have not figured out how to do testing (as opposed to automation) using keywords. The same goes for related buzzwords like data-driven automation (a marketing term for saying "let us use variables instead of hardcoded values") or hybrid frameworks. All these simple ideas had some place 10-20 years ago, but not anymore. I personally prefer to develop automation pretty much the way a developer goes about writing product code - no difference. I hate the oversimplification by tool vendors and consultants around so-called "Excel-based automation", script-less automation, or automation for your granny - they are simply empty ideas used to bully the unsuspecting boardroom folks who sign contracts.
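To show how little is behind the phrase, here is a minimal keyword-driven sketch: a keyword is just a name bound to an ordinary function, and the "data-driven" part is simply passing values in as arguments. All keywords, arguments and the test table are invented examples with no real application behind them.

# Minimal "keyword-driven" core: a keyword is nothing more than a name
# mapped to an ordinary function. Everything here is hypothetical.
def open_app(url):
    print(f"opening {url}")

def login(user, password):
    print(f"logging in as {user}")

def verify_title(expected):
    print(f"checking that the page title is {expected!r}")

KEYWORDS = {
    "OpenApp": open_app,
    "Login": login,
    "VerifyTitle": verify_title,
}

# A "test" expressed as rows of (keyword, arguments) - the kind of table
# vendors like to put in Excel. Swapping hardcoded values for these
# arguments is all that "data-driven" really means.
test_rows = [
    ("OpenApp", ["https://example.test"]),
    ("Login", ["alice", "secret"]),
    ("VerifyTitle", ["Dashboard"]),
]

for keyword, args in test_rows:
    KEYWORDS[keyword](*args)   # look up the function for the keyword and call it

Strip away the vocabulary and what remains is plain function dispatch - exactly the kind of code any developer writes anyway.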

How would I summarize? There is no simple solution for estimating automation effort. Keep a watch on how the development (programming) community deals with this conundrum and let us use that to build our own model. At present, when developers work on small units of work like user stories, in an iterative model of churning out working code (theoretically) on, say, a weekly, fortnightly or monthly basis, I think the whole problem of "tell me how much time (and resources) you need to develop this solution" will vanish. You would probably say "let me start with 3 people, I will publish a 1-2 week plan of what I will deliver for people to use - let us take it from there".

I think gone are the days when you had 3-6 (or even more) months of lead time for a piece of software to be deployed for use. In the mobile apps world, development times are even shorter. I doubt anyone would ask you, "give me an estimate for automation of this app". It seems that we have solved the problem of estimation by going small and going fast.

I am happy to be corrected on any of the views expressed here. Let me not forget to add: when I say automation, my experience has been in the IT/IT services world, mainly working with commercial off-the-shelf automation tools. If it were the likes of Google or Microsoft, it would be a totally different ball game altogether.

Shrini


Friday, June 07, 2013

Are you measuring something that is easy to measure or something that is important?

Measurement is fabulous – unless you are busy measuring what is easy to measure as opposed to what is important. – Seth Godin


... And what is important (and to whom and when) is often subjective and context-based.

Thank you, Mr. Godin, for your sound advice - it is useful for software folks. I have a confirmation bias for bad metrics and measurements. We have an obsession with measuring things to demonstrate that we are rational and objective humans (which we are not). It's amazing to see how Seth Godin in the above post demonstrates that "measuring something that is easy to measure is waste".

"As an organization grows and industrializes, it's tempting to simplify things for the troops. Find a goal, make it a number and measure it until it gets better. In most organizations, the thing you measure is the thing that will improve"


Many people blame growth and size for the "metrics menace" and say "how can we manage such a volume of work if we do not have the right metrics?" Remember, the thing that you measure becomes a victim of gaming and match-fixing - people will change their behaviour to look good in terms of what is being measured. Look at our testing metrics - all easy stuff to measure (sorry - simply count): number of test cases, defects (and all dimensions thereof), number of requirements (this one is really bizarre), defect detection percentage, defect leakage rate, cyclomatic complexity - and the list is long. Mostly all easy stuff to measure (in fact, simply count).

Meanwhile, what our users care about is how the software works (or does not work) - it is about those emotions (frustration, anger, happiness, etc.). Since these are important but difficult to measure (in easily understandable numbers or percentages), we take the easy route: pretend as though they do not matter at all, or, when confronted, wear the "rational" hat and issue the "scientific/engineering" statement "anything that cannot be measured cannot be improved".

"And this department has no incentive to fix this interaction, because 'annoying' is not a metric that the bosses have decided to measure. Someone is busy watching one number, but it's the wrong one."

-- So true for software - our bosses (influenced by high-flying software engineering/process consultants) have chosen to turn a deaf ear to the real "metrics" (the ones that are tough to measure). Thus software developers and testers appear to have no incentive to "listen" and "fix" the important issues that matter to users.


Software is developed, tested, used and maintained in, for and by a social enterprise - and people are irrational, impulsive, greedy and look for instant gratification. Society (a name given to a large number of people living together) amplifies such individual traits.

We, software testers, need to adopt a social sciences approach and stop aping the "engineering processes" of a factory assembly line.


- Shrini