Tuesday, May 27, 2008

Can Automation reduce cycle time or improve time to market?

Continuing the discussion on test effort and manual testing – there is another popular variation in which automation tool vendors claim “improved time to market” and/or “reduced cycle time”. In this post, let me dissect these claims and see how true and credible they are.

First of all, let us fix what we mean by “time to market” and “cycle time”. Let me define the terms as below, in the context of a traditional/waterfall model of software development.

Time to market is the window between the time you start development (requirements and so on) and the time you ship the product for general public consumption. Depending on whether you are doing a major or a minor release, this window may span from a few months to a few years (as in the case of Windows Vista).

Cycle time (where “cycle”, used without any qualification, means a cycle of development and testing) is a time window for a software product under development. It can be divided into development time and testing time. A development cycle starts with deliberation of requirements and design of the features to be implemented. A test cycle starts when the development team “releases” the code for testing to begin, and ends when the test team completes the planned testing for that cycle. During this period the development team can fix the bugs reported. Hence, for all practical purposes, cycle time is the window between the start of design/requirements and the point where the test team has completed testing and the development team is ready to release the next build.

So it is apparent from the above definitions that cycle time is a subset of time to market.

Automation can reduce cycle time (the lesser of the two claims) only if all of the following hold (check how many of these items or situations “automation” can actually control):

  • Automated tests run without reporting ANY bugs in the software (a bug reported by automation means some investigation, plus confirmation by manual execution that the bug reported by automation is indeed a bug in the application).
  • Automated tests DO NOT report any runtime errors (a runtime error in an automated test means some investigation and a re-run).
  • The development team INSTANTLY fixes any bugs reported by automation, and these fixes are so well done that no further verification (manual or automated) is required.
  • Manual testing (a very small portion indeed) that happens in the cycle does not report any bugs; all the manual tests pass.
  • If manual testing reports some bugs, those bugs are fixed INSTANTLY without requiring any further verification.
  • Bug reporting, triage, and investigation time (if any) is so small as to be negligible.
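The arithmetic behind these conditions can be sketched with a toy model (all numbers are assumed, purely illustrative): automation can only compress the execution portion of the cycle, while the per-bug investigation, fixing, and re-verification time – which it does not control – stays constant.

```python
# Toy model of one test cycle, in hours (illustrative numbers only).
# Automation shrinks execution time, but every reported bug still costs
# human investigation, a developer fix, and re-verification.

def cycle_time(execution, bugs_found, investigate_per_bug,
               fix_per_bug, reverify_per_bug):
    """Total cycle time = raw test execution + overhead per reported bug."""
    overhead = bugs_found * (investigate_per_bug + fix_per_bug + reverify_per_bug)
    return execution + overhead

# Manual cycle: 80 hours of execution, 20 bugs found.
manual = cycle_time(execution=80, bugs_found=20,
                    investigate_per_bug=1, fix_per_bug=4, reverify_per_bug=1)

# Automated cycle: execution drops to 8 hours, but the same 20 bugs
# still need the same human handling.
automated = cycle_time(execution=8, bugs_found=20,
                       investigate_per_bug=1, fix_per_bug=4, reverify_per_bug=1)

print(manual)     # 80 + 20*6 = 200
print(automated)  # 8 + 20*6 = 128
```

Even with execution cut by 90%, the bug-handling overhead (120 of the 200 hours here) is untouched; only when no bugs are reported does the cycle shrink by the full execution saving – which is exactly the set of conditions listed above.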

Automation can reduce/improve time to market only if –

  • All the items mentioned under “Cycle time” and
  • Business stakeholders do not worry about outstanding bugs. They decide to ship the product as soon as the automation test cycle is completed (because the automation cycle is NOT expected to report any bugs). So at the end of the automation test cycle, shipping the product is the logical next step.

If you analyze these situations, you will notice that many of the factors that influence cycle time or time to market are not under the control of “test automation”. These factors have to do with the development team, the quality of the code, the quality of the requirements, the number and nature of the bugs reported by both manual and automated test execution, and above all, the stakeholders’ decisions about the bugs reported. One cannot claim that there will be cycle time reduction or improved time to market JUST because x% of test cases are automated. That is a big, unrealistic generalization – one that only an automation tool vendor can afford to make.

So the next time someone says automation reduces time (either cycle time or time to market) – do quiz them and ask, “What do you mean?”

Bonus Question: Can automation accelerate your testing? If yes, under what circumstances?

Next Post: Can automation address the IT problem of limited (human) resources and tight deadlines?



Anonymous said...

I don't understand why or how automation that doesn't report any failures - or those that it does are fixed instantly - has anything to do with cycle time.

If you are testing an application that ships in one or two configurations, and only one or two languages, I don't think automation has any ROI for you. If your automation is only UI automation, I doubt that you will get any ROI on the investment even with 10 configurations or languages.
However - if your automation is at the API (or even object model) level - and/or you are testing on multiple configurations, it seems to me that you will get benefit from automation even if it does find bugs that developers have to take a few hours to fix.
I would guess that the cycle time improvement assumes that the original plan was to test most every possible configuration and API input parameter manually. If the original plan was to spot check across platforms, then you are correct - automation does not improve cycle time or time to market.

Shrini Kulkarni said...

>>>I don't understand why or how automation that doesn't report any failures - or those that it does are fixed instantly - has anything to do with cycle time.

If automation were to save cycle time, it should not report any failures. This is because any bug reported by automation will increase the cycle time due to bug investigation, bug fixing, and re-verification. Can automation control the time that gets added to the cycle? So my point is: if you wish automation to reduce the cycle time, then make sure it does not report any bugs. (In reality, automation at times does report bugs – and that is a good thing. A bug discovery means more testing, hence increased cycle time. Hence automation CANNOT reduce the cycle.) The same argument goes for developers fixing those bugs. For automation to reduce the test cycle, automation should first not report any bugs; even if it does (by mistake), those bugs should get fixed immediately. In other words, the speed at which developers fix the bugs reported (by automation or otherwise) DIRECTLY affects the cycle time. The slower the developers' response to bugs, the longer the cycle time. The presence of automation may not make bug fixing any quicker.

>>> it seems to me that you will get benefit from automation even if it does find bugs that developers have to take a few hours to fix.

This discussion on benefit from automation is more applicable in an IT scenario (maybe a bit less in a product scenario like Microsoft's), where both automation tool vendors and outsourced IT service providers claim that "automation can reduce cycle time" (with no strings attached).

This post is meant to disprove that claim. The assumption of automation not reporting any bugs and developers fixing those bugs instantly is NECESSARY to drive home this point.

>>> I would guess that the cycle time improvement assumes that the original plan was to test most every possible configuration and API input parameter manually.

I would say that regardless of the original plan, automation is LEAST likely to reduce cycle time. This is because there are other items in the test cycle besides test execution. Though automation can help gain some cycle time in the case of multi-platform configurations (a gain in execution time alone), the other time factors often dominate the cycle time.

Please comment back if there are any follow up questions ....

Alan - a question to you ... if I were to gain cycle time through automation, what do I need to ensure?
How should the AUT behave, and how fast do the developers need to be?


Anonymous said...

I guess I concur with Alan.

It is an accurate statement that automation will not improve cycle time if it finds bugs that would otherwise have gone undetected.

However, I believe the claims automation vendors make are based on the assumption that manual testing would also uncover those bugs. So the cycle time reduction offered by automated tests comes from automating the manual testing tasks that would otherwise require human intervention. If your manual test activities and automated test activities are equivalent, the bug should be discovered in both scenarios, so the ROI is based on the time to run the tests, re-run the tests, and run the tests across different configurations.
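The break-even arithmetic behind this kind of ROI claim can be sketched as follows (a hypothetical model with assumed numbers, not vendor data): the one-time cost of building the automation is weighed against repeated manual execution across configurations and re-runs.

```python
# Back-of-the-envelope break-even for automation ROI across configurations
# (all numbers are assumed, purely illustrative).

def manual_cost(run_hours, configurations, runs):
    """Total hours to execute the suite by hand, every configuration, every run."""
    return run_hours * configurations * runs

def automated_cost(build_hours, tend_hours_per_run, configurations, runs):
    # Building the automation is paid once; each automated run still needs
    # some human time to kick it off, babysit it, and triage the results.
    return build_hours + tend_hours_per_run * configurations * runs

# Example: a 10-hour manual pass, 5 configurations, 6 runs per release.
print(manual_cost(10, 5, 6))            # 10 * 5 * 6 = 300 hours
print(automated_cost(200, 0.5, 5, 6))   # 200 + 0.5*30 = 215 hours
```

With one configuration and one run, the same model gives 10 manual hours against 200-plus automated hours – which is the point above about small deployments having no ROI.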

Anonymous said...

I get it, and although I missed your point, I agree that most automation vendors are selling snake oil.

Here's the answer I should have given.

Poor automation does not decrease cycle time. In fact, bad automation increases cycle time because so much time must be invested in identifying and isolating the failures - especially when so many are due to bugs in the test automation rather than the product.

Good automation towards a specific goal (e.g. testing against multiple configurations), however, generally does decrease cycle time. By good automation, I mean that when a test fails, 99% of the time it indicates a product bug. Furthermore, the test contains sufficient and appropriate logging as to indicate exactly what the error is - e.g. expected and actual results, or relevant environment information. Finally, "good" automation is maintainable and can be used on subsequent releases - auto-generated test "code" never has this quality.
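As a minimal illustration of that kind of logging (the helper and field names here are invented for the example, not taken from any particular framework), a failing check should carry the expected value, the actual value, and the relevant environment:

```python
# A failure message that is diagnosable without re-running the test by hand:
# it names the test, the expected and actual results, and the environment.

def check_equal(actual, expected, context):
    """Raise an AssertionError carrying expected/actual and environment info."""
    if actual != expected:
        raise AssertionError(
            f"{context['test']}: expected {expected!r}, got {actual!r} "
            f"(os={context['os']}, locale={context['locale']}, build={context['build']})"
        )

ctx = {"test": "price_total", "os": "WinXP-SP2", "locale": "de-DE", "build": "1234"}
check_equal(2 + 2, 4, ctx)  # passes silently; a mismatch would raise with full context
```

A failure report like this points straight at a product bug (or at the test's own assumption), instead of forcing an hour of manual reproduction to find out what "test failed" meant.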

I haven't had extensive experience in an IT environment, but my hunch is that for most applications, I probably wouldn't do much automation at all.

Serge said...

Even if the automation software has some bugs, there are experts who would take care of those, so that your efforts would still get you the customers interested in your product.