Continuing the discussion on test effort and manual testing – there is another popular variation in which automation tool vendors claim "improved time to market" and/or "reduced cycle time". In this post, let me dissect these claims and see how true and credible they are.
First of all, let us fix what one means by "time to market" and "cycle time". Let me define the terms as below, in the context of a traditional/waterfall model of software development.

Time to market is the window between the time you start development (requirements and so on) and the time you ship the product for general public consumption. Depending upon whether you are doing a major release or a minor release, this window may span from a few months to a few years (as in the case of Windows Vista).

Cycle time (where "cycle", when used without any qualification, indicates a cycle of development and testing) is a time window for a software product under development. Cycle time can be divided into development time and testing time. A development cycle starts with deliberation of requirements and design of the features to be implemented. A test cycle starts when the development team "releases" the code for testing to begin, and ends when the test team completes the planned testing for that cycle. During this period the development team can fix the bugs reported. Hence, for all practical purposes, cycle time is the window between the start of design/requirements and the point at which the test team completes testing and the development team is ready to release the next build.
So it is apparent from the above definitions that cycle time is a subset of time to market.
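To make that relationship concrete, here is a minimal back-of-the-envelope sketch. All the numbers are hypothetical, chosen only to illustrate that the development-and-test cycles automation can touch are just one slice of the total time to market:

```python
# Hypothetical durations, in weeks, for one release of a product.
# Every number here is made up purely for illustration.
requirements_and_design = 6
dev_and_test_cycles = [8, 6, 4]   # three development + testing cycles
stabilization_and_release = 4     # release candidate, packaging, launch

cycle_time = sum(dev_and_test_cycles)
time_to_market = (requirements_and_design
                  + cycle_time
                  + stabilization_and_release)

print(f"cycle time:     {cycle_time} weeks")      # 18
print(f"time to market: {time_to_market} weeks")  # 28

# Even if automation magically halved every test cycle, the other
# components of time to market would be left untouched.
assert cycle_time < time_to_market
```

The point of the sketch: even a dramatic reduction in test-execution time operates on only part of the total, so any claim about "improved time to market" has to account for everything outside the test cycles.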
Automation can reduce cycle time only if all of the following hold (check how many of these items or situations "automation" can actually control):

- Automated tests run without reporting ANY bugs in the software (a bug reported by automation means investigation, and confirmation by manual execution that the bug is indeed a bug in the application).
- Automated tests DO NOT report any runtime errors (a runtime error in an automated test means investigation and a re-run).
- The development team INSTANTLY fixes any bugs reported by automation, and these fixes are so well done that no further verification (manual or automated) is required.
- Manual testing (a very small portion indeed) that happens in the cycle does not report any bugs. All the manual tests pass.
- If manual testing does report some bugs, those bugs are fixed INSTANTLY without requiring any further verification. Bug reporting, triage, and investigation time (if any) is so small as to be negligible.
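The conditions above can be turned into a simple model. The sketch below, again with entirely hypothetical numbers, shows how bug investigation, re-runs, and triage eat into the raw execution-time savings the moment any of those "only if" conditions fails:

```python
# Hypothetical per-cycle numbers, in hours. Illustration only.
manual_execution = 80       # running the whole suite by hand
automated_execution = 8     # the same suite run by an automation tool

bugs_found = 12             # each automated failure needs manual confirmation
investigate_per_bug = 2
runtime_errors = 5          # script/environment failures, not product bugs
rerun_per_error = 1.5
triage_and_reporting = 10

overhead = (bugs_found * investigate_per_bug
            + runtime_errors * rerun_per_error
            + triage_and_reporting)

automated_total = automated_execution + overhead

print(f"manual:    {manual_execution} h")
print(f"automated: {automated_total} h "
      f"(of which {overhead} h is human overhead)")

# The savings shrink with every bug or runtime error reported;
# in a buggy build, the "reduced cycle time" claim can evaporate.
```

With these made-up figures the automated cycle still comes out ahead, but more than 80% of its cost is human overhead that the automation tool does not control, which is exactly the point.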
Automation can reduce/improve time to market only if –

- All the conditions mentioned under "cycle time" hold, and
- Business stakeholders do not worry about outstanding bugs. They decide to ship the product as soon as the automated test cycle is completed (because the automation cycle is NOT expected to report any bugs), so that at the end of the automated test cycle, shipping the product is the logical thing to follow.
If you analyze these situations, you will notice that many of the factors that influence cycle time or time to market are not under the control of "test automation". These factors have to do with the development team, the quality of the code, the quality of the requirements, the number and nature of the bugs reported by both manual and automated test execution and, above all, the stakeholders' decision about those bugs. One cannot claim that there will be a cycle-time reduction or improved time to market JUST because x% of test cases are automated. That is a big and unrealistic generalization, one that only an automation tool vendor can afford to make.
So the next time someone says automation reduces time (either cycle time or time to market) – do quiz them and ask, "What do you mean?"
Bonus Question: Can automation accelerate your testing? If yes, under what circumstances?
Next Post: Can automation address the IT problem of limited (human) resources and tight deadlines?