Friday, January 25, 2013

Should automation that runs slower than human test execution speed be dumped?


I am working on a piece of automation using Java and a commercial tool to drive a test scenario on an iPad app. The scenario involves entering multiple pages of information and hundreds of fields of data. The automation script takes about an hour to run this scenario, whereas a tester who exercises the same scenario on the app "manually" claims it takes only about 30 minutes.

I was asked: if the automation script runs slower than human test execution (however dumb), what is the use of this automation? What do you think?

Here are my ideas around this situation/challenge:
Mobile automation might not ALWAYS run faster than human test execution
Many of us in IT have a QTP/WinRunner way of seeing testing as a bunch of keyboard strokes and mouse clicks, with automation as a film that replays them at super-fast speed. GUI automation tools that drive Windows desktop or web GUIs have consistently demonstrated that a sequence of keyboard and mouse events can be replayed faster than a human can perform it. Enter the mobile world, where we have three or four dominant platforms: Android, iOS, BlackBerry and Windows Mobile. When GUI automation enters mobile, it mainly runs on a desktop that communicates with the app (native or web) on a phone connected to the desktop through, say, a USB port. The familiar paradigm of the automation and the AUT running on the same machine breaks down, and so should our expectations about test execution speed. The iOS platform in particular (in non-jailbroken mode) presents several challenges for automation tools, while Android is more programmer-friendly. As the technology around mobile automation tools and the associated platforms (desktop and mobile) evolves, we need to be willing to let go of some strongly held beliefs formed by GUI automation of web and Windows desktop applications.
Man vs. machine: what might make the machine/program slow
When you see a button on the screen, you know it is there and you touch it (similar to a click on non-touch devices). As a human tester you can regulate the speed of your response depending on how the app is responding. Syncing with the app, checking that the right object is in view and operating that object: all of this comes naturally to a human. For automation tools (mobile tools especially), all of it has to be controlled programmatically. We end up with function calls like "WaitForObject" and assorted "Wait" calls to sync the speed of the automation with the speed of the app's responses. Because of this programmatic slowing down and speeding up in relation to the app's response, and the checks needed to make sure the automation does not throw exceptions, automation programmers often have to favor robust but slower code that is almost guaranteed to run at any app speed. This is one of several reasons why automation might run slower than human execution. You might ask how the likes of QTP handle this situation; even tools like QTP have to deal with these issues. Given the state of the technology, the problem is especially acute in the mobile automation space.
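As an illustration, here is a minimal sketch in plain Java of the robust-but-slower wait logic described above. It is not tied to any particular commercial tool; the 200 ms polling interval and the condition being checked are illustrative assumptions.

    import java.util.concurrent.Callable;

    public class RobustWait {

        // Polls a condition until it is true or the timeout expires.
        // Robust against varying app response times, but each call can add
        // up to timeoutMillis of waiting; one reason automation can run
        // slower than a human who reacts the instant the screen is ready.
        public static boolean waitFor(Callable<Boolean> condition, long timeoutMillis)
                throws Exception {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (System.currentTimeMillis() < deadline) {
                if (condition.call()) {
                    return true;   // object found; proceed immediately
                }
                Thread.sleep(200); // illustrative polling interval
            }
            return false;          // timed out; the caller decides how to fail
        }
    }

Note the trade-off: a generous timeout lets the script survive a slow app, but a genuinely missing object holds the script for the full timeout before it can fail.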
Now imagine long, large and highly repeated testing cycles. A human tester would flag by the second or third iteration due to fatigue and boredom. Consider the current case of multiple pages and hundreds of fields: how long do you think a human tester can stay focused on that data entry? Here is where our "tortoise" (slow but steady) automation still adds value. The slow program does not mind working a hundred times over with different data combinations, and it frees up human tester time and effort for you.
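A minimal sketch of such a data-driven loop, again in plain Java; the testdata.csv file and the fillForm call into the automation tool are hypothetical.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class DataDrivenRun {
        public static void main(String[] args) throws IOException {
            // Each line of the CSV is one data combination for the scenario.
            // A human tester tires after a couple of passes; this loop runs
            // the same multi-page data entry for every row without losing focus.
            BufferedReader reader = new BufferedReader(new FileReader("testdata.csv"));
            String row;
            int iteration = 0;
            while ((row = reader.readLine()) != null) {
                String[] fields = row.split(",");
                iteration++;
                System.out.println("Iteration " + iteration + ": entering "
                        + fields.length + " field values");
                // fillForm(fields); // hypothetical call into the automation tool
            }
            reader.close();
        }
    }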
Remember: automation and a skilled human tester both have inherent strengths and shortcomings. A clever test strategy combines (mixes and matches) human and automated modes of exercising tests to get maximum output: information about issues, bugs and threats to the value of the product.

If automation runs well unattended, why worry about execution time?
Many of us are used to sitting for hours watching automation run to see whether it works, passes or fails; if it fails, we check, correct and rerun. If the automation is robust and runs unattended, why have someone staring at the screen watching it run? Why not run it during non-working hours? Why not schedule it to run at a set time? This frees up people who can be deployed in other areas requiring focused human testing. Isn't freeing up human testers a value provided even by slow-running automation? Well-designed but slow automation can still justify the investment, as it runs without bothering you.
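One way to make that concrete: wrap the suite in a small runner that logs its outcome and reports it through the exit code, and let the operating system's scheduler (cron, Windows Task Scheduler) start it overnight. A minimal sketch, in which runSuite() stands in for whatever call drives the real scenario:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.Date;

    public class NightlyRunner {
        public static void main(String[] args) {
            boolean passed;
            try {
                passed = runSuite();
            } catch (Exception e) {
                passed = false;    // a crash counts as a failed run
            }
            // Append the outcome so the team can review results in the morning.
            try (PrintWriter log = new PrintWriter(new FileWriter("nightly.log", true))) {
                log.println(new Date() + " suite " + (passed ? "PASSED" : "FAILED"));
            } catch (IOException ignored) {
            }
            System.exit(passed ? 0 : 1); // exit code lets the scheduler flag failures
        }

        private static boolean runSuite() {
            // Placeholder: drive the real multi-page scenario here.
            return true;
        }
    }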

How can you get the best out of slow-running automation?
  • Optimize the automation to see if speed can be improved: remove redundant sync/wait and "object exists" checks (without compromising the robustness of the automation)
  • Identify bottlenecks in the tool and fix them (a timing sketch follows this list)
  • Identify environmental and data-related slowness in the automation and fix it
  • Schedule the automation during non-working hours and save human effort
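To find the bottlenecks in the second point, it helps to know where the hour actually goes. A minimal sketch of per-step timing, assuming nothing about the tool beyond the ability to wrap each step in a Runnable:

    public class StepTimer {

        // Wraps a named automation step and records how long it took.
        // Running this over a full scenario shows where the minutes go:
        // tool overhead, app response time, or avoidable fixed waits.
        public static void timed(String stepName, Runnable step) {
            long start = System.currentTimeMillis();
            step.run();
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(stepName + " took " + elapsed + " ms");
        }

        public static void main(String[] args) {
            timed("open page 2", new Runnable() {
                public void run() {
                    // hypothetical: tool call that navigates to the next page
                }
            });
        }
    }

Totaled over a full run, these numbers show whether the time sits in fixed waits, tool overhead or the app itself, which tells you which of the points above to attack first.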


Have you come across automation that runs slower than human test execution? What did you do with it? Dump it? I want to hear about your experiences.


9 comments:

Joe said...

Yes, I've had automation that runs at slower-than-human speed.

No, I didn't dump it.

Even if slow, automation can test things so that I don't have to. Even if slow, unattended automation can use overnight time that I have better uses for (like sleeping, for example).
Even if slow, automation can test the same thing over and over again without getting bored, often with new data being fed in.

Fast is often nice, but seldom critical for automated testing. I spend some time optimizing, but I don't waste time over-optimizing.

Raghu said...

How do you convince the Project Manager who always looks at the savings made by automating? The first thought that comes to the Project Manager's mind is: what saving am I making by investing in a tool that is still in its infancy?
It is very hard for the Test Manager to convince the PM if the tool is not robust enough and the automation is at the mercy of the tool vendor to deliver fixes.
As a Test Manager I am more than happy to work with a so-called slow automation tool and use off-business hours to run tests.
The main issue here is:

"Is the tool in question robust enough?"
1. To run unattended
2. Does it recognise all objects by their attributes and methods
3. Maintainability: how easy is it to update the code when changes are introduced to the GUI, especially changes to coordinates, as the current tool seems to rely mainly on x,y coordinates
4. Apples-to-apples comparison: do we have a similar apple in the market to which we can compare this apple, to check if a better apple is out there

Anonymous said...

Nice topic about test automation...

I will start with test automation this year and I'm curious whether it will work out for me or not!
The challenge for me is that I have to handle more than one technology (e.g. different programming languages used in the AUT), and that I need to use test automation for test levels higher than module testing.
Sometimes I wish to use test automation only to produce test data that I, as a tester, can use the next day. That would be another challenge.

I think test automation will be useful for me only when these challenges can be met, never mind what time it costs, since the test automation scripts can run overnight. But all of this also depends on the stability of the AUT.

And don't forget that test automation can't find bugs! That is still your task as a manual tester!

kind regards from Germany, Ralf

Rahul Gupta said...

Nice write-up Shrini. The interesting thing is, you asked the questions and provided most of the answers too. I don't have a new solution, but since you asked to "hear about your experiences", here is what we experienced over the last six months.
I will explain with a real problem. We had a big database application where one of the processes was data migration from Sybase to Oracle. The migration was not simply 1:1; it included 1:1, 1:M, M:1 and M:M mappings, i.e. all possible combinations. There were more than 2,000 tables, each with about 50K records on average. It was humanly impossible to validate every record, and the migration was critical to the product. So we decided to automate the data validation, and with the help of some customized tools we could test each and every record under migration.
Initially it was a very slow-running system, and as QA we were limited in our technical abilities, so we asked the DBAs and developers to help us optimize our approach. The following steps were taken (a sketch of the chunked comparison in step 2 follows the list):
1) Fine-tuned the SQL so the code took less time to fetch data from the DB.
2) Put filters wherever applicable and compared data in chunks rather than all at once.
3) Kept the data comparison and the report generation separate from each other.
4) Wrote a feedback mechanism to cross-examine reports, which also involved log validation.
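A minimal sketch of the chunked comparison in step 2, using plain JDBC; the table names, key column and connections are illustrative assumptions, not the project's actual schema.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class ChunkedCompare {

        // Compares row counts for one key range in the source and target
        // tables. Filtering to a chunk keeps each query small; only the
        // chunks that mismatch need to be re-examined row by row.
        static boolean chunkMatches(Connection sybase, Connection oracle,
                                    long fromId, long toId) throws SQLException {
            return countRows(sybase, "SRC_CUSTOMER", fromId, toId)
                    == countRows(oracle, "TGT_CUSTOMER", fromId, toId);
        }

        static long countRows(Connection conn, String table,
                              long fromId, long toId) throws SQLException {
            String sql = "SELECT COUNT(*) FROM " + table + " WHERE id BETWEEN ? AND ?";
            PreparedStatement ps = conn.prepareStatement(sql);
            ps.setLong(1, fromId);
            ps.setLong(2, toId);
            ResultSet rs = ps.executeQuery();
            rs.next();
            long count = rs.getLong(1);
            rs.close();
            ps.close();
            return count;
        }
    }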
Initially we used a popular test management tool to run our tests, but we later realized the tool itself posed performance issues, so we started running the checks independently. Other operational optimizations included running the scripts during off hours.
Initially we reported all PASS/FAIL checks, but after a few rounds of validating all the PASS checks, we started reporting only failures and saved a lot of time. For report validation we asked the entire team to spend time and share ownership of quality, to which they agreed. Everybody from the BAs to the production support team chipped in.
It was a challenging exercise. The application had 15+ developers, one full-time tester and one part-time automation engineer, and the approach followed was Scrum. Automation was desperately needed to make sure all functional aspects worked, and the data validation was too critical for a sample check. But our approach worked, and I am proud to say the entire project team made the solution work within time and budget.

Unknown said...

Nice post Shrini.

Even though one objective of automation is fast execution, the other objectives are unattended runs and a level of accuracy that humans cannot guarantee.

I have had the experience of automating a mainframe application that also included report validations. It was horribly slow, as I needed to scan the entire report and validate almost every individual element of it. But we scheduled it to run in the off hours on a separate machine. That way we saved some time, and most importantly the functional team had enough time to do exploratory testing.

Regards
Rajaraman R
Test Data Management Blog

Unknown said...

I would still go ahead with automation. You can always:
1. Schedule nightly executions of tests
2. Have a separate environment for executing tests

Unknown said...

Yes, I too will go ahead with automation even though it is slow. You can always:
1. Run it nightly
2. Have a separate environment where it executes, so that it does not disturb the work you are doing

Sweeper For Hospital In Pune said...

Hi Shrini,
I don't think slow automation should be dumped.
Automation has its own importance and humans have theirs; sometimes automation can do what we can't, even if it is slower than we are.

Unknown said...

Excellent write-up. I would recommend checking out this post: http://bit.ly/1TnrPQx