I am working on a piece of automation using Java and a commercial tool to drive a test scenario on an iPad app. The scenario involves entering multiple pages of information and hundreds of fields of data. The automation script runs this scenario in about 1 hour, whereas a tester who exercises the same scenario on the app "manually" claims that it takes only about 30 minutes.
I was asked: if the automation script runs slower than human test execution (however dumb), what is the use of this automation? What do you think?
Here are my ideas around this situation/challenge:
Mobile automation might not ALWAYS run faster than human test execution
Many of us in IT have a QTP/WinRunner way of seeing testing as a bunch of keyboard strokes and mouse clicks, with automation as a film that runs like a dream at super-fast speed. GUI automation tools that drive Windows desktop or web GUIs have consistently demonstrated that a sequence of keyboard and mouse events can be replayed faster than a human could perform it. Enter the mobile world: we have 3-4 dominant platforms – Android, iOS, BlackBerry and Windows Mobile. When GUI automation enters the world of mobile, it mainly runs on a desktop machine that communicates with the app (native or web) on the phone, which in turn is connected to the desktop through, say, a USB port.
The familiar paradigm of the automation and the AUT (application under test) running on the same machine/hardware breaks down, and so should our expectations about the speed of test execution. The iOS platform specifically (in non-jailbroken mode) presents several challenges for automation tools, while Android is more programmer friendly. As the technology around automation tools on mobile devices and their associated platforms (desktop and mobile) evolves, we need to be willing to let go of some of our strongly held beliefs about GUI automation formed on web and Windows desktop applications.
Man vs. Machine – items that might make machine/program slow
When you see a button on the screen, you know it is there and you touch it (similar to a click on non-touch devices); as a human tester you can regulate the speed of your response depending on how the app is responding. Syncing with the app, checking that the right object is in view and operating the object all come naturally to a human. When it comes to automation tools (mobile tools especially), all of this has to be programmatically controlled. We end up with function calls like "WaitForObject" and assorted "Wait" calls to sync the speed of the automation with the speed of the app's responses. Because of this programmatic control of slowing down or speeding up the automation in relation to app responses, plus the checks needed to make sure the automation does not throw exceptions, automation programmers often have to favor robust but slower code that is almost guaranteed to run at any app speed. This is one of several reasons why automation might run slower than human execution. You might ask how the likes of QTP handle this situation – even tools like QTP have to deal with these issues. Given the state of the technology, the problem is simply more acute in the mobile automation space.
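To make that trade-off concrete, here is a minimal Java sketch of such a "wait for object" helper. The Driver and Element interfaces are hypothetical stand-ins for whatever object model the commercial tool exposes; the polling pattern, not the specific calls, is the point.

```java
// A minimal sketch of a robust "wait for object" helper, in plain Java.
// The Driver and Element interfaces are hypothetical stand-ins for the
// commercial tool's API; only the polling pattern is the point here.
public final class SyncHelper {

    public interface Driver {
        Element find(String locator);   // returns null if the object is not on screen yet
    }

    public interface Element {
        boolean isVisible();
        void tap();
    }

    // Poll until the object is visible or the timeout expires. Polling with a
    // short pause is slower than blindly tapping, but it survives variable app
    // response times - the robustness-versus-speed trade-off described above.
    public static Element waitForObject(Driver driver, String locator, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            Element element = driver.find(locator);
            if (element != null && element.isVisible()) {
                return element;
            }
            Thread.sleep(250);   // a fixed pause like this adds up over hundreds of fields
        }
        throw new IllegalStateException("Object not visible within " + timeoutMs + " ms: " + locator);
    }
}
```

Every field in a hundreds-of-fields scenario pays that polling cost, which is how a one-hour run time accumulates even when each individual wait looks harmless.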
Now imagine long, large and highly repeated testing cycles – a human tester would start losing out by the 2nd or 3rd iteration due to fatigue and boredom. Consider the current case of multiple pages and hundreds of fields – how long do you think a human tester can stay focused on that data entry? Here is where our "tortoise" (slow but steady) automation still adds value. The slow program does not mind working through the scenario 100 times over with different data combinations, freeing up human tester time and effort for you.
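As a sketch of that "tortoise" advantage, here is a plain Java loop that re-runs one scripted scenario over many data combinations; runScenario is a hypothetical placeholder for the multi-page, hundreds-of-fields scenario in this post.

```java
import java.util.List;
import java.util.Map;

// A sketch of the data-driven "tortoise" run: the same scripted scenario
// repeated over many data combinations.
public class DataDrivenRun {

    public static void runAll(List<Map<String, String>> dataSets) {
        for (Map<String, String> row : dataSets) {
            // A human tester fatigues after a couple of iterations of a form
            // this size; the script will happily repeat it for every row.
            runScenario(row);
        }
    }

    // Hypothetical placeholder for the multi-page data entry scenario.
    static void runScenario(Map<String, String> fieldValues) {
        // drive the app field by field using fieldValues
    }
}
```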
Remember – automation and a skilled human tester both have their inherent strengths and shortcomings. A clever test strategy combines (mixes and matches) human and automated modes of exercising tests to get maximum output: information about issues, bugs and ways the value of the product might be threatened.
Also note – a good manual test cannot be automated; if you claim that you could automate one, then it could not have been a good manual test.
If automation runs well unattended – why bother about execution time?
Many of us are used to sitting for hours staring at automation as it runs, to see if it works, passes or fails. If it fails – check, correct and rerun. If the automation is robust and runs unattended, why have someone looking at the screen watching it run? Why not run it during non-working hours? Why not schedule it to run at a certain time? This frees up human resources that can be deployed in other areas requiring focused human testing. Isn't that a value provided by slow-running automation – freeing up human testers? A well-designed but slow-running automation can still justify the investment, because it can run without bothering you.
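As one illustration, assuming the suite can be started from a plain Java entry point and that a hypothetical runSuite() method kicks it off, here is a sketch of scheduling a nightly unattended run.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// A sketch of an unattended nightly run driven from plain Java. runSuite is a
// hypothetical hook that launches the automated scenario.
public class NightlyRun {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Next 2 AM, i.e. a non-working hour when nobody is watching the screen.
        LocalDateTime next = LocalDateTime.now().with(LocalTime.of(2, 0));
        if (next.isBefore(LocalDateTime.now())) {
            next = next.plusDays(1);
        }
        long initialDelayMinutes = Duration.between(LocalDateTime.now(), next).toMinutes();

        // Run once a day; whether it takes 30 minutes or 2 hours no longer matters.
        scheduler.scheduleAtFixedRate(NightlyRun::runSuite,
                initialDelayMinutes, TimeUnit.DAYS.toMinutes(1), TimeUnit.MINUTES);
    }

    static void runSuite() {
        // launch the multi-page data entry scenario here
    }
}
```

The same effect is usually achieved more simply with a CI job or an OS scheduler (cron, Task Scheduler); the point is only that a scheduled, unattended run takes execution time out of anyone's working day.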
How can you get the best out of slow-running automation?
- Optimize the automation to see if speed can be improved – remove unnecessary sync/wait calls and "object exists" checks (without compromising the robustness of the automation)
- Identify bottlenecks in the tool and fix them
- Identify environment- and data-related slowness in the automation and fix it
- Schedule the automation during non-working hours and save human effort
Have you come across automation that runs slower than human test execution? What did you do with that automation? Dumped it? I want to hear about your experiences.