Tuesday, December 22, 2009

Why Is GUI Test Automation Popular and Tempting? Part I - Business Context

It is rare to find someone in testing in the IT world who is not aware of "QTP, WinRunner, Rational Robot, etc.". For most in the IT world, the word automation is synonymous with GUI automation. Some might even ask, "Are there any other forms of automation other than GUI automation?" Much of how the IT world sees automation has to do with how it sees testing itself. You might have heard this "celebrated" statement: "Regression testing is more suitable for automation (read: GUI automation), so that we don't have to have our IT staff do that unnecessary testing manually."

To my surprise, having been associated with IT for many years now, people here still believe in the perfect requirement, the perfect test case, and hence "doing it right the first time". In contrast, the software product world (the likes of Microsoft and Google) treats test automation, and hence testing, in a totally different way. These two probably operate from the two extreme ends of the automation spectrum. While a discussion of how automation is perceived and realized in the IT world versus the software product world could be a separate post in itself, I propose to elaborate here on my views on the popularity of GUI automation in IT.

Testing is a necessary evil and should get done quickly

This is reason #1 driving GUI automation in IT. Business spends on getting IT application features developed and hence often fails to see a reason why it should spend on "testing" at all. Business feels that IT should get its act together, get aligned with business (hence IT-business alignment is an often well understood but poorly executed theme), and use the money towards delivering business value. I have even heard some business folks say, "We pay for the application features, not for their testing." Hence funding testing becomes IT's problem. That is why they look for ways to reduce testing (or the spend on it). As simple as it sounds: IT pays for testing, not the business.

Testers (read: business users or analysts) are hard to find, and they hate regression testing

Traditionally, in IT, business analysts (aka BAs) are hot commodities. They do most of what we call "testing". With the outsourcing wave sweeping IT, some BA roles are taken over either by BAs from outsourcing vendors or by junior testers working under the supervision of a BA through highly scripted test procedures. The problem is that these BA resources are few and in great demand. But as software releases happen and changes come in, testing (regression testing) has to happen, and BAs hate regression testing (or all of testing). So an IT manager needs to find a way to get this regression testing done somehow.

How to get testing done faster and cheaper?

"If you want to do something fast, give it to a machine or a computer program" - so goes the typical approach of an IT manager. So automation could be the answer. If you position automation as an aid to reduce cycle time to market, your chances of getting funding are very high. Surprisingly, this claim is so powerful that, when made, it often goes unchallenged. The magic phrase "reduction of cycle time" (that too without any qualification) is so tempting that it makes many completely overlook the bottlenecks in development (programming), bug fixing, investigation and other non-testing activities. Speaking in business language is thus central to the proliferation and apparent success of GUI automation in IT.

With GUI automation, you can easily make a business case for automation.

To me, terms like test cases, regression testing and test cycle time are ones that business stakeholders appear to understand very well. Compared to unit tests and other forms of non-GUI automation, it is easy to make a case to business stakeholders for investment in GUI automation. Let us say you have 5000 test cases for regression coverage that take 5 person-days to complete; automate 50% of them, and you can straight away knock 50% off your testing cycle time. This claim, with its simple math and logic, makes a compelling case for GUI automation, or simply "automation". What could be more appealing than investing once in automation (for the sake of the business case - while downplaying the maintenance costs forever) and saving an amount proportional to the percentage automated, over hundreds of future cycles?
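To see how seductive that arithmetic is, and what it quietly leaves out, here is a minimal sketch of the pitch. All figures are hypothetical, chosen only to mirror the claim above:

```python
# The naive GUI-automation ROI pitch vs. a version that includes maintenance.
# All figures are hypothetical, for illustration only.

manual_days_per_cycle = 5.0       # full manual regression cycle
automated_fraction = 0.5          # "automate 50% of the test cases"
cycles = 100                      # "hundreds of cycles in the future"

build_cost_days = 60.0            # one-time automation build effort
maintenance_days_per_cycle = 2.0  # script upkeep per cycle (the downplayed part)

saved_per_cycle = manual_days_per_cycle * automated_fraction

# The pitch: invest once, save proportionally forever.
naive_saving = saved_per_cycle * cycles - build_cost_days

# Reality: every cycle also pays a maintenance tax.
real_saving = (saved_per_cycle - maintenance_days_per_cycle) * cycles - build_cost_days

print(f"Pitch-deck saving: {naive_saving:.0f} person-days")   # 190
print(f"With maintenance:  {real_saving:.0f} person-days")    # -10
```

The moment a realistic per-cycle maintenance tax enters the model, the "invest once, save forever" curve flattens, and can even go negative.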

Tool vendors help you by talking in business language

Enter the tool vendors - with fancy-looking charts, scary quotes like "more than 60% of project cost goes to testing; what are you doing to reduce it?", and the likes of Gartner and Forrester to support the claims with numbers, life for an IT manager in charge of testing has never been so easy. Just buy a tool, engage an outsourcing vendor to do GUI automation, and all your testing worries are over in one go. The dreaming never ends there. Take a look at any brochures, websites or commercial literature - they are full of business terms like ROI, cost of testing, business impact, time to market and so on, making a perfect connection with those who matter in making decisions about spending.

Now try doing this with non-GUI tests such as API tests or xUnit-framework-based unit tests - what will business say? What will IT managers say? "Well, developers have got to do that testing anyway." So there is no apparent business case on which to sell the automation. Writing unit tests will not visibly speed up your testing cycle and will not reduce time to market; it might reduce some portion of that "boring" regression testing, but that is all. So there is no financially oriented incentive to do any automation that is non-GUI. And since the GUI automation paradigm has so hardened into dogma, very few challenge it.
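For contrast, here is what that hard-to-sell, non-GUI automation looks like: a minimal xUnit-style example using Python's unittest (the discount rule is a made-up stand-in for any business logic). There is no screen to point at in a demo, which is precisely why it is a poor fit for the business-language pitch:

```python
# A minimal xUnit-style unit test - non-GUI automation that is hard to
# pitch in business language. The discount rule is a made-up example.
import unittest

def discount(order_total):
    """Business rule: 10% off orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

class DiscountTests(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(discount(99), 99)

    def test_discount_at_threshold(self):
        self.assertEqual(discount(100), 90)

if __name__ == "__main__":
    unittest.main()
```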

If you are someone working in a software product company, you would say "do less GUI automation; it is very brittle and costly to maintain". You would be surprised to hear that the opposite works (or at least appears to work) in the IT world.

It is "power of talking in business language, very few challengers for the claims made about automation and excellent support by tool vendors" that makes GUI automation popular and tempting to attempt. Even if you fail – you can blame it on almost anyone or anything around – test cases, tool, automation folks, application changes, changing requirements, test data, operating system patches and so on.

To be continued in Part II

Saturday, December 12, 2009

Eurostar 2009 – A Trip Report

I am just back from EuroSTAR in Stockholm. As a ritual, and like an obedient delegate and speaker, I am narrating the experience. It was my first visit to Sweden, home of Alfred Nobel. The conference started with a bang: a welcome speech by Dot Graham, the program chair for the 2009 edition of the biggest European software testing event. Having been its first program chair 17 years ago, she still appears to breathe the same energy as someone attending her first testing conference.

Lee Copeland, the veteran of testing conferences in the US, who followed Dot with a keynote, was at his usual calm and composed best. He spoke on the next few big things in testing - a good list to watch out for. His talk also contained slides on good books for testers to read. It has probably become a habit for him to promote his book on test design in each of his talks (I have heard 3-4 of them). He might call it a "shameless plug" or say "No… I didn't mean to include it here"; I think he should stop doing it, as the book sells on its own merit. Thereafter there were parallel tracks, and everyone had a choice of which talk to attend. This time, like many others, I had the power of Twitter to spread information about the conference. Under the hash tag #esconfs, I was one of the many active users tweeting about the talks and happenings at the conference. Check out all tweets related to this conference here.

As happens at every conference, much of the action took place outside the track sessions: people meeting, exchanging cards, and fiercely debating topics they hold close to their hearts. There I was, roaming around trying to see which group to join. The EuroSTAR Test Lab was an interesting addition to this year's conference. Run by James Lyndsay and Bart Knaack, it was a big distraction (constantly pulling passionate testers towards it and away from the track sessions). I spent a few hours testing there and spent some good time with James Lyndsay, one of my favorite exploratory testing proponents. I logged a few bugs too. It was a good experience. James reminded me to explore and find out for myself when I had a question about a feature of the application I was testing. Here is a report from Michael Bolton on the lab.

Michael Bolton was an attraction, true to his image as a great speaker and testing wizard. I rarely saw him alone; he was always surrounded by a few people, whom he kept engaged with his teaser questions, and people kept coming to him. His talk, "Burning Issues of the Day", was a presentation built from a collection of whiteboard statements by conference attendees, presented in immaculate MB style. He poked fun at several pieces of testing folklore and argued convincingly against standardized and over-structured software testing practices. One thing that was really remarkable in that talk was Michael's ease and confidence as he took the podium and started the presentation. He was totally at ease. Many wannabe speakers should watch him start a speech and finish one. To me, Michael gave an example of what it means to be oneself. Good learning there.

It was great to meet/see the likes of Jonathan Kohl, James Lyndsay, Fiona Charles and Ray Arell. At the conference I made a few new friends: Johan Johansson, Tobias Fors, Daniel and others. That evening we got together and headed for dinner, followed by a meet-up at a friend Pabalo's house. That was a chance for me and others to have a go at musical instruments. I settled in at the drums; Michael and Pabalo took guitars. The concert went on until midnight or 1 AM. I decided to call it a day, as I needed to catch some sleep before my conference talk the next day.

It was a big day for me. Getting to speak at EuroSTAR had been my dream. My talk was about metrics and how they can be dangerous if used inappropriately. I wasn't nervous and was sure that I would speak from my heart. I requested Ray Arell and Fiona to help me with a few photos. The presence of Michael and a few others at my talk helped. I managed to finish my 15-odd slides right on the dot: 39 minutes. A few questions, and it ended with applause. I was greatly relieved, having done my duty at the conference. I plan to write a separate post about it later.

I met this guy Daniel at the conference - he did not carry his business card, so I do not know his coordinates. Daniel and I argued for hours about the nature of software: physical thing or abstraction? Daniel insisted that software has a 3D physical existence. He even said that he had confirmed this with 2-3 physicists he knew. Software, when reduced to its "atoms" or basic units, is a sequence of 1's and 0's stored on a magnetic material. When we power up a computer or computer-like device, electric current passes through transistors and other electronic components and brings the software to life. Daniel argued that since the hardware components that held the sequence were real and physical, so was the software. We could never take the argument to a conclusion. But it was a good argument.

The third day of the conference started with a keynote on agile adoption at Intel, with experience sharing by Ray Arell. It was a good session with lots of advice on getting agile right. I had to do lots of running between the office and the conference venue (yes, I had some official meetings set up that day). I track-chaired a session by Aslak Hellesøy on Cucumber. The session was good and well received. The name puzzled me - why "Cucumber"? Aslak clarified during the presentation that it was the name suggested to him by his fiancée/girlfriend. It has become fashionable to give tools, frameworks and companies totally unrelated names; Steve Jobs' success with Apple continues to inspire many to give their creations names with which you would typically never make a connection. In the evening there were the usual conference rituals: panel discussions, prizes, votes of thanks and the announcement of next year's program chair, John Fodeh. Zeger van Hese's paper won the best paper award, Naomi Karten won the best tutorial award, and Michael Bolton won the best bug of the conference. Michael wrote about it here.

In all, it was an excellent experience being at EuroSTAR 2009 and in Stockholm. I will carry many memories from it for years to come… Thanks, EuroSTAR!

Don't forget to check out the photo gallery

Check out a few reports about this conference:

Female Funtestic Fanatic

Rikard Edgren's Test Eye

Star Tester issue 45

Note: Narrating a story or an event like a conference is tough, I think. I attempted to do it to see if you like my story.


 

Thursday, October 15, 2009

Defining the problem: How business works

Lee Jack, a software testing consultant and IT services sales manager, is tense about his upcoming meeting with Mark Johnson, a senior IT manager for a large IT group. Lee, after some positive signals on a cold call, is looking forward to selling testing services to Mark's group, and from there to the entire IT organization. Here is how the conversation goes.

Lee Jack: What is your problem statement?
Mark Johnson: Not sure. In our recent conversation, you mentioned that your company provides testing services - what can you offer me?


LJ: You need to focus on independent testing.
MJ: We have been managing with developers doing the testing for the last 6 years - no problems whatsoever. Why do I need to bother now?

LJ: Maybe your developer testing is improper or inadequate, or it is costly?
MJ: How can you say that?

LJ: Tell us about your production defects and the quality of your applications in general…
MJ: I told you already. We have no production problems that we worry about. Our site goes down every now and then; we wait 30-40 minutes, start everything over, and it works fine. Why bother?

LJ: Do you say your applications are of high quality?
MJ: I would say our applications are of “good enough quality”. I get what I pay for and I have the quality for the application that my users care for. So far so good.

LJ: What are your "business plans" going forward? New growth, etc.? What you seem to be saying is that "good enough" may not be good enough in the future?
MJ: Maybe. I don't think that far ahead….

LJ: Have you thought about the cost of testing? Do you have any mandate from your boss about reducing the IT budget, and hence a reduced budget for testing?
MJ: Oh! Yes. We have taken care of the larger part of the cost problem. We have replaced nearly all of our commercial software stack with open source equivalents. The same is true for tools. So we can say we run our entire shop on open source tools. That is our achievement.

LJ: Tell us about your development/testing model.
MJ: We are pretty advanced on that front. We follow a Scrum-based agile development model - outsourced. The developers do all the required testing and development, and everything is fine. I hear you talking about QA again and again without me asking about it. We follow an agile model, and as such there is no distinction between development and QA. So, no, we do not think cost is our concern.


LJ: I did not say “QA”, I said “testing” – I suppose you know the difference.
MJ: Fine … here we use these terms interchangeably. Well, does that matter in agile?

LJ: How can we help you?
MJ: You have been saying that you provide QA - sorry, testing - services. Do you provide white box testing services?

LJ: Our testers are well trained in both black box and white box techniques - that should not be a problem.
MJ: Let me make it very clear - in the name of QA or testing, I don't want some so-called tester coming in, doing "negative" testing and showing hundreds of bogus UI-level bugs. That is not what I want.
LJ: Our testers are trained in exploratory testing and can flush out hidden bugs.
MJ: I said I do not want negative testing - UI-level keyboard banging. I want white box testers.

LJ: What, according to you, is white box testing?
MJ: I am surprised that you are asking this silly question. It is testing done with knowledge of the product internals - something that developers do. Don't you have such testers… I mean developers?

LJ: Well, the reason I asked is that there is no universally accepted meaning of the term "white box testing", and I wanted to make sure I understood your version. OK, why can't your current developers do that?
MJ: Our developers are already stressed out. We have very well defined agile processes. Our developers work very smart, and JUnit-based automated tests are our key to success.

LJ: OK… what can we do for you? It seems that you have everything you need.
MJ: How about your folks doing an assessment and telling us what our maturity is? How well are we placed with respect to industry standards?

LJ: That sounds like a different problem…. Anyway, are you sure you want an assessment?
MJ: Yes. I would like our processes to be the best as per industry standards.

LJ: What are your expectations out of this assessment?
MJ: We need to add a few white box testers to our existing team, as we are weak in that area. We are sure we do not need QA resources that do negative or exploratory testing. We are also sure we do not need testers. We need developers who can do white box testing.

LJ: Oh! So that is what you want? I can give you developers who can do white box testing. Why do you need an assessment?
MJ: You see, we are looking for a consultant to make the case for us to get budget for these resources. We really do not worry about testing, QA, white/black box testing, etc. Can you get us a consultant?

LJ: Great… I will have a top-notch consultant at your office tomorrow.
MJ: Thank you. Make sure you get someone who understands our problem very well. I don't want to go through all these questions again.

LJ: So, let me recap. What is your problem?
MJ: Damn it…. I need a business case for increasing my team's headcount, and I need white box testers. Is that clear?

LJ: Yes … Sure …
MJ: Thank you very much.

What do you think … what is the problem here?

Update: I made a mistake and swapped the names of MJ and LJ. This has been corrected. See how it reads now.

Shrini

Tuesday, October 06, 2009

Necessary Tester skills ....

Curiosity? Keen and close observation? Skepticism? Creativity? Arguing/debating? Imagination? Analytical and holistic thinking? Skill at probing and investigation?

"None," would say many of the IT testers I meet on a regular basis.

Today's IT testers seem to have forgotten these terms. Looked at macroscopically, at the group level, today's test managers and delivery heads appear to have lost sight of these core skills for testers. When I ask a typical IT tester what they are doing to enhance their skills, the typical answers I get are: learning a domain, the usage of a specific tool, a programming language, or some process/methodology stuff. That is not wrong, but acquiring any of these without the basic skills required of testers means we are developing armies of robots who are very good at following orders and who, over time, stop thinking on their own.

NY police officers (a profession similar to testing) are taking a course in "observing and describing". What are our testers doing to sharpen their testing skills? How are they putting their cognitive skills to work in their day-to-day work?

http://www.smithsonianmag.com/arts-culture/Teaching-Cops-to-See.html

Shrini

A process is what you actually do. A process document describes what someone would like you to do, ideally. They rarely coincide exactly, and sometimes don't overlap at all - Jerry Weinberg

Friday, September 11, 2009

Who decides what is a bug and what should be fixed?




I was submitting a proposal for a paper for the STC 2009 conference. After completing all the fields - about 20 of them - I accidentally hit the Clear button (by convention, the "OK" or "Submit" button appears first in such forms), and everything I had entered was gone. Worse, there was no way to recover it.


Is this a bug?

Another catch: for the date field, no format is specified, nor is there a calendar control. How do I find out the required format? Try one and get to know the format.

Is this a bug?

If you were a tester who strictly goes by "test cases" generated from "specifications", you are likely to miss such bugs - remember, there is something called "requirements-based testing". Alternatively, you might argue that these are not bugs, or that they are bugs of low severity. Who has the final authority to say what is a bug, which ones should be fixed, and when?

Shrini

Wednesday, August 19, 2009

Your user acceptance testing is fixed – Is that a problem?

"This troublesome user acceptance testing has got to be made easier. It is really a pain," said a colleague of mine, quoting an IT manager responsible for release management and user acceptance testing. The major problem, according to the IT manager, was getting the "users" to do their job: testing a defect fix, a feature enhancement or a new release. Users are always busy with their "day job", and for most of them testing is a low-priority activity. I have heard many similar stories regarding this important aspect of "end user" responsibility.

In traditional IT organizations, user acceptance testing is a formal stage in the testing life cycle in which business users test the proposed changes to production software applications. The IT organization that delivers the software to meet the needs of business users requires a (formal) approval of proposed changes, which come in the form of defect fixes, enhancements and new releases. Since it is the business users who ask for changes to their existing applications and who fund the development/testing work, they have the final say on "acceptance" of proposed changes to production applications.

The primary purpose of user acceptance testing is a (cursory, final) check of the proposed changes to the production software by the very users who asked for the changes, so that last-minute changes and surprises are avoided. The user acceptance test is the last gate before software goes live, so it provides the last opportunity for all involved (both IT and business) to make corrections if required. Depending on the nature of the business supported, the nature of the software application and the nature of the changes proposed, UAT may last from a few days to a few weeks. Software that fails in UAT is typically assumed to be insufficiently tested and is invariably returned to IT for fixes and more testing.

The problem of UAT
Business users, who hold the responsibility of approving software updates to production systems, are typically not testers and as such do not come with testing skills. Depending on the nature of the software updates, the IT team will ask business users to carry out testing to check the features of the new system. Most user acceptance testing tends to be repetitive, and that is what drives "real users" crazy. For the development team (IT), their work is not complete until the piece of code is user tested and accepted. Hence they literally chase the user group to perform UAT, while the users complain about the time crunch and their "important" business work getting impacted. This creates a situation where both parties (IT and business) want UAT to be somehow completed so that each can carry on with business as usual. Hence UAT often gets "fixed", with the consent of both parties having stakes in the activity.

Forms of fixing UAT
Between business and IT, UAT can be fixed in a number of ways - some of these are reasonably business-driven models, given the constraints.
1. UAT is performed by members of the IT team, and business only reviews the results of the test. If the results are OK, then UAT is deemed to have been completed.
2. UAT is performed by a third party, such as someone from the support staff. Business reviews the results and accepts the proposed changes if the results are OK.
3. Business provides a prescribed set of test scripts that can be used by IT staff or support staff. The results are verified by the business.
4. UAT is done as a demo to the users, where IT staff execute some pre-approved test scenarios related to the proposed changes.
5. The IT team trains the business in the new features proposed to be introduced. Business users, after training, test (use) the proposed software and accept it.

Why is fixing user acceptance testing a bad thing?

Why bother with UAT at all? For IT, more often than not, it is a formality to be completed before they push the code to production.

In my opinion, it is the spirit and purpose of UAT that get compromised. Typically, in spite of all best efforts, the depth and frequency of interactions between IT and business throughout the project remain low. When business users do not participate in UAT in full spirit, lots of things go unnoticed into production. This can result in users (the non-participating ones especially) getting surprised when they see the product.

What has been your experience? Does it bother you if you see that your UAT is fixed?

Shrini

Sunday, July 26, 2009

Tom DeMarco’s confession

Confession is probably a harsh word to describe what Tom DeMarco, the creator of that celebrated punch line of software managers - "You can't control what you can't measure" - wrote recently. But if you read Tom's recent article "Software Engineering: An Idea Whose Time Has Come and Gone?" in IEEE Software, you would probably say something similar. In this short two-page article, you will find Tom DeMarco in a reflective and retrospective mood.

In the book "Controlling Software Projects: Management, Measurement, and Estimation (Prentice Hall/Yourdon Press, 1982) ", Tom talked about "Controlling" and "measuring" in Software Engineering. Nearly 40 years later, he now appears to admit that he pushed the notion of "control" and "measurement" too much.

This "confession" has apparently caught the attention of many. Jeff Atwood writes an obituary to "software engineering". More than his post, the comments for the post of Jeff Atwood are interesting to read. How come, suddenly so many are accepting now that "software" is human centric, people oriented, "engineering" is not the right term to use and so on? Michael Bolton's recent article (three kinds of measurements and two ways of using them) on stickyminds appears have been triggered by Tom's "confession". Matt Heusser writes about metrics here, here and here.

Managing vs controlling

By his own admission, Tom seems to distinguish between controlling and managing. He gives the example of the upbringing of a teenager: as any family therapist would recommend, you manage your teenage kid rather than control them. As in most human endeavors (including software programming and testing), you can manage many more things than you can control. I think we can manage (and even control a bit) WITHOUT measuring ANYTHING at all. I like Matt Heusser's example of the haircut: many of us control and manage our hair (and hairstyle) without measuring.

Manage people, control money and timelines

This is Tom's recipe for managing a project without controlling it. While breaking a project into human elements and non-human elements (such as code, schedule, money) is a welcome change, I am not sure how you can do it - manage people while controlling money and timelines. To me, these two sets of things are NOT mutually exclusive, such that you could treat each in a different way. Controlling time and money impacts people, and managing people impacts timelines and money.

Some software is really engineered!!!

Dave Markle makes an interesting point in the comments on Jeff Atwood's post: "IMO you can't say that programming languages themselves aren't engineered based on solid computer science. You can't say that something like LINQ hasn't been engineered. Whenever you use a FSM in your software, you are applying computer science, which makes you a software engineer"

Programming languages are engineered, operating systems are engineered, and so are software algorithms. It is probably the size of the user base that decides engineered vs. crafted.

Dave uses the paint analogy nicely to drive home the point: the paint an artist uses is engineered (developed and mass-produced using the principles of chemistry and physics), whereas the "art" produced by the artist is NOT. This stirs up the debate over what is engineering and what is craftsmanship. Are engineering and craftsmanship mutually exclusive? I am afraid NOT.

Tom, the damage has been done and is still happening. Many still abuse your punch line to push loads of documentation, process, approvals, meetings and, of course, endless charts and graphs of metrics - all in the name of "control". Probably the time has come to step back and be sensible about "measurements" in software.

Shrini

Tuesday, June 16, 2009

To tweet or to blog ...?

Due to my crazy traveling and work, I have not been able to write for the blog as frequently as I would have liked. At least half a dozen potential blog posts are waiting to see the light of day. It is just not working out.

I have been quite active on Twitter in recent days (for starters, you can think of it as quick blogging, or microblogging). Happy to see many following me now. It suits me for now, as I need not feel guilty about not being able to discharge my duties as a "blogzen" (no, this word has not been used by anyone previously) of the software testing blogosphere.

So till I get back to full-time blogging, please follow my thoughts on Twitter. I have added a Twitter feed to this blog to make it easy for my readers to catch up with what I am working on....

Thanks for being my blog reader ....

Shrini

Thursday, May 14, 2009

10 ways to make automation difficult or ineffective

Here goes another 10-item list for automation. If you are in the IT or IT services space and manage or deliver automation solutions, make sure you stay away from these, as they have a high likelihood of making automation inefficient, ineffective and difficult.

This list is an extension of an earlier topic and of this list (of 10 items again) on test automation outsourcing.

10. A wild desire to automate 100%.

9. Attempting to automate existing test cases without scrutinizing them for "suitability" for automation.

8. Mapping test cases to scripts in a 1:1 linear model - falling prey to deceptive traceability and gold-plated reporting.

7. Not building the automation solution bottom-up, leaving it without identifiable building blocks (see the sketch after this list).

6. Trying only one type of automation, or attacking only one layer of the application - the farther you go from the code, the messier it gets.

5. Focusing only on test-execution-related tasks.

4. Treating automation as mere scripting - ignoring generally accepted good software development practices and hygiene.

3. Failing to involve developers from the beginning - not attending to the testability or automatability of the application.

2. Jumping to automation to speed up testing or to save cost before fixing a testing process that is inadequate, inefficient and broken.

1. Failing to arrive at (formulate) the right mix of human testing and automated test execution.

0. Using automation as the solution to testing problems.
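As a concrete counterpoint to items 7 and 8, here is a minimal sketch of the bottom-up, building-block style, written against Selenium's Python bindings; the URL and element IDs are hypothetical, and the point is the structure, not the specific framework:

```python
# Sketch of bottom-up GUI automation: reusable building blocks instead of
# one monolithic script per test case. Uses Selenium's Python bindings;
# the URL and element IDs below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """One identifiable building block: all knowledge of the login screen
    lives here, so a UI change is fixed in exactly one place."""
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.test/login")  # hypothetical URL
        return self

    def login(self, user, password):
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()
        return self

# Many test cases reuse the same block - no 1:1 test-case-to-script mapping.
def test_valid_login(driver):
    LoginPage(driver).open().login("alice", "s3cret")
    assert "Welcome" in driver.page_source

def test_locked_account(driver):
    LoginPage(driver).open().login("mallory", "wrong")
    assert "locked" in driver.page_source.lower()

if __name__ == "__main__":
    driver = webdriver.Chrome()  # requires a local browser driver
    try:
        test_valid_login(driver)
    finally:
        driver.quit()
```

Because many test cases share the same block, a change to the login screen is absorbed in one place instead of rippling through a 1:1 test-case-to-script mapping.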

I reiterate that these apply mostly to COTS-tool-driven GUI functional test automation, which is typical in IT/IT services environments. We might have to rewrite some of these for xUnit-style formalized unit testing (that is also automation, and some even call it "testing").

Shrini

Wednesday, May 13, 2009

Is this a bug?


I was flying from Cape Town to Bangalore on an Emirates flight. Online check-in is convenient; I do it so I can choose an aisle seat. But for the second time, I ran into a problem while doing online check-in with Emirates. Probably the Internet connection was slow - on both occasions, the Emirates online application did not respond, and I had to close the browser after 5-10 minutes of frustrating wait, staring at the screen.

Here is the bug that frustrated me…

· I try to do an online check-in and want to change the seats.
· The application hangs when trying to save the changes.
· I close the browser.
· I try again to check in.
· I get a message that the passengers have already been checked in.

Fine - but how will I know my seat numbers? How do I view my check-in details?

Apparently there is no easily reachable way to get this information. Probably there is none. Where do I find "view check-in details" or "view eBoarding pass" or something similar? I tried the site map, tried "Help" and tried "Search"… I could not figure out the link for viewing check-in details.


Is this a bug? If you were a tester, would you catch this bug? If you were a developer, would you accept that this is a bug? I am sure most people will say, "If this is intended functionality (I think it is), then it should be documented in the requirement specifications. Once it is there, the tester can write the test case, and the developer will make sure that the functionality is coded and tested." Some testers might say this is a "nice to have" feature…

What might have happened here? A requirements problem? A development problem? Or a testing problem?

Sunday, April 19, 2009

10 ways to reduce cost of software testing

In the current economic situation, IT folks worry about one thing: "reduce cost". I have been frequently asked how to reduce testing cost. A no-brainer answer would be "do not do testing… at all". How many would buy that idea? Can the current breed of IT applications survive with less testing, or no testing at all? When I use the term testing, I am referring to "non-programmer" testing.

Here is my draft list of suggestions ...

1. Work closely with developers; do some parallel testing with them as the product/feature is being developed.
2. Identify and eliminate non-testing activities that occur in the name of process, documentation, management, metrics, etc.
3. Analyze and profile every application in the portfolio to determine its "stable" and "well tested" areas. These areas should receive the least testing effort, or none.
4. Analyze the test script suite and remove redundant, worn-out scripts. Aim to keep the scripted test repository as small as you can (a naive way to spot duplicates is sketched below).
5. Review and reduce "regression testing" on the basis of the "well tested/stable" areas of the application.
6. Switch from a resource-intensive, highly scripted testing approach to highly improvisational exploratory/rapid testing approaches.
7. Plan testing in small but frequent cycles (a session-based exploratory testing approach) - reduce planning and management overheads.
8. Analyze and reduce the usage of costly tool licenses, especially those that do not help testing directly (test management tools).
9. Cut down on lengthy test plans, testing reports and dashboards - switch to simple but frequent test reporting.
10. Simplify the defect management process - shorten the defect life cycle - resort to informal, quick defect communication.
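As a minimal illustration of item 4 (the test-case fields and the normalization rule here are assumptions, not a prescription), exact-duplicate scripted tests can be flagged by normalizing each test's steps and hashing the result:

```python
# Naive duplicate-test detector: flag scripted test cases whose normalized
# steps are identical. Field names and normalization are illustrative only.
import hashlib
from collections import defaultdict

def fingerprint(steps):
    """Normalize step text (case, whitespace) and hash the sequence."""
    normalized = "\n".join(" ".join(s.lower().split()) for s in steps)
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

def find_duplicates(test_cases):
    """test_cases: mapping of test-case id -> list of step strings."""
    groups = defaultdict(list)
    for tc_id, steps in test_cases.items():
        groups[fingerprint(steps)].append(tc_id)
    return [ids for ids in groups.values() if len(ids) > 1]

if __name__ == "__main__":
    suite = {
        "TC-101": ["Open login page", "Enter valid user", "Click Submit"],
        "TC-205": ["open  LOGIN page", "enter valid user", "click submit"],
        "TC-300": ["Open report page", "Export as CSV"],
    }
    print(find_duplicates(suite))  # [['TC-101', 'TC-205']]
```

Anything such a naive check misses (near-duplicates, subset tests) still needs a human reviewer, which is rather the point of the list.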


Some of this advice might look like simple common sense (eliminate waste; focus on tasks that impact the end result DIRECTLY). But with so much selling happening around "testing tools", "factory models" and "cheap and best testing services", even common sense is difficult to come by.

How would the IT community react to these suggestions? The most likely response would be, "This would not work. How can we reduce testing - those test cases, processes, metrics, management practices?" These suggestions are most likely to be rejected on the grounds that testing cost needs to be reduced without "compromising quality". Many IT folks think that quality comes from test scripts, processes, metrics, testing tools, automation, etc. I am afraid quality is not such a simple thing.
Again, there are no free lunches here. If you are thinking about reducing the cost of testing, there is always a risk of impacting quality (roughly, the goodness of, or confidence in, the product) in one way or another. If you approach the problem (cost vs. quality) from the quality side (improve testing - test better, deeper and wider), then you are more likely to achieve good quality and also "some" cost benefit. However, if you approach it from the "cost" side of the equation, you might achieve the savings, albeit with some impact on the overall goodness/quality of the delivered product.

Note that some suggestions on this list call for smart testers who can think on their feet and work with the least supervision and the least (optimum) documentation and process. I think the focus should shift from process, tools, management and documentation to skill. There can be problems in getting such resources in an IT scenario (especially in the outsourced/offshored world).

You have a choice… from which side would you like to approach the problem?

[Update 20/Apr/2009] A colleague of mine reacted to this list, saying, "These are too-risky suggestions, and I would not recommend any of them. Business prudence is totally missing."

I think he was expecting to see some "low risk, high return" suggestions - like those "cheap" and "best" items. I still do not understand: there can be no risk-free way of reducing (testing) cost - unless you are spending like crazy without any thinking. We do not seem to have such risk-free free lunches; why fool ourselves and the client into believing in such "non-existent" things?

Another suggestion that came up was "let us use standardized processes". How can standardization reduce cost? And what is the cost of bringing in standardization itself?
Maybe the expectation is that standardization will make each tester behave identically to the next, like robots. Are robots cheaper? Maybe, maybe not… at least they do not whine about working on weekends :)

Shrini

Saturday, March 21, 2009

When "Process" stops working for you ....

The other day I overheard a test manager speaking to his team: "As a CMMi Level 5 company, I don't think we are following processes. We often talk in English rather than showing supporting metrics, which are the backbone of any CMMi Level 5 organization. If you don't measure, then how can you improve? Considering the economic slowdown, it is high time we started showcasing our continuous productivity improvements, or else we will lose the client."

What is happening here?

Most managers somehow (more so in the current economic conditions) confuse skill, human ingenuity and expertise with metrics and measurement. When a customer complains about the "value" and quality of the work delivered, she is really complaining about people and their skill (not about metrics and measurements). When people hear of customer complaints, managers suddenly jump up and say, "Let us collate some metrics and show the client that we have delivered the value" (which the client will eventually dismiss), and push the core issues about skill under the carpet. This game of hide and seek goes on until we lose the client. This pattern has to break, and unfortunately I have no simple solution for that (probably no one has). A few of us appear to know the root of the problem now.

If following processes ensured quality, and if being very serious about metrics were HOLY, then our problems would have been solved long ago. Why do people not follow process? Is it because processes are so tough and stressful to follow? Is it because they are difficult to understand? Probably people do follow process, and we have simply stopped being critical of whether the process is doing anything useful or not. That is the start of the problem: glorifying process beyond its own utility (ask the process; it would probably say, "beyond this, I cannot add any value"). I understand that process (whatever the definition) provides a common framework within which people with diverse educational, technical and social backgrounds work to produce consistent output, so that the whole thing can be managed easily. Beyond a certain point (this limit varies with context), process cannot help any further. It calls for people's skill to deliver; process then becomes an enabler, or a mere hygiene factor. Just walking or eating alone cannot keep you healthy all the time. Do you know where the limit is, beyond which "following process" can no longer help?

There is a big fuss about "using English rather than numbers". Why is there so much faith in numbers? Why are qualitative, subjective wordings such a waste? Why not express everything in numbers all the time - our hunger, happiness, intelligence (yes, there is the IQ test), pain, sorrow, emotion (yes, there is the emotional quotient), commitment, enthusiasm, creativity and what not? All human attributes are so rich and multi-dimensional that poor numbers can express only a minute part of them. And we refuse to use qualitative measures, saying that "objective is better than subjective". Many would like humans to behave as if they were machines, so that they can be objectively measured. A sad reality… the perils of the advanced economic world. The hunger for objective interpretation of human attributes has probably reached its crescendo. I am waiting for the downfall from that rise. Will it come?

There is a big deal made about "improving productivity in testing… We must meet SLAs and show continuous improvement in productivity." I am a STRONG opponent of the use of the word "productivity" in testing in general terms. When people say productivity, they typically mean speed: the number of units produced per unit of time, much like on a shop-floor assembly line. There might be some portions of testing that are "speed sensitive", but by and large, skilled testing is not about "speed" as much as it is about "coverage", "identifying tough-to-find problems", "asking the right questions", "seeking information", "building on available information", "investigation" and much more. Probably no more than 5% of good testing is speed sensitive; most of it is not. Then what is the meaning of "productivity" when it is applied as a sweeping generalization to all of testing? I PROTEST…

Finally, come on, let us accept that there are many ways we can improve (many) things without measuring them at all, at least not in poor numbers. We all do it in our day-to-day interactions with our near and dear ones in the family and with those outside in society. So there are clear exceptions to the statement "you cannot improve what you cannot measure". I strongly oppose the statement. It is too poor a generalization - one that suits machines and mechanical constructs well, but not human beings in a social structure.

Friday, March 06, 2009

C-DLICE'ing in Software Testing

Let me take credit for making this mnemonic up: C-DLICE. I was listening to Michael Bolton's video interview on YouTube. He said testing is more than verification, validation and confirmation: it is about Challenging claims, Discovery, Investigation, Learning and Exploring. Any skilled tester would do one or more of these activities as part of testing. By explicitly chaining them in a mnemonic, a tester can focus on a specific aspect of the interaction with the test subject.

Let me expand the mnemonic -

C - Confirmation. Beyond traditional words like verification and validation (whatever the meanings of those terms may be), most people on this planet think that the sole aim of testing is "confirmation": confirmation of the claims made about the product; confirmation of what developers "felt" they created in response to requirement specifications they received and interpreted to the best of their abilities; confirmation of some specific user expectations (assumed to be routed through specifications into the software product). In its basic form, confirmation is something like "Click this button; such and such a thing should happen. Does it happen?" While confirmation is an important aspect of testing, any testing that focuses only on confirmation becomes a boring, brain-dead and poor way to think about testing. Notions like "anyone can do testing", "process plays the most important role in testing", and "testing without test cases and requirements is not possible" are creations of confirmation-oriented testing. I will not dwell here on the challenges of confirmation-oriented testing. And there is the big deal called the "reference" against which you confirm: the specifications. If your reference is wrong, ambiguous or incomplete, so will be your confirmation. That is the weakness of confirmatory testing.

Though my mnemonic is more about DLICE, I will still keep the "C" in there to remind us that confirmation may be as important as the other letters in the mnemonic.

D - Discovery. While we test, we discover information about the software and its behaviors. It is like discovering an unknown island. As the product grows bigger (in terms of codebase), discovery becomes more important. No user uses the software strictly as per the user manual. Discovering the ways in which the software could be used and misused is an important aspect of testing. Discovery is about finding information about unknown areas. For a growing software application, every time there is more to test than before - more to cover than before. Under such circumstances, you constantly discover the application, its variations, its behaviors and so on.

L - Learning. This is a freaky one. A significant part of testing is implicitly spent on learning about everything around the software under test. Be it the business domain, the technology domain, the community of users using the software, or the cultural and social setup of the organization producing the software - we learn all the time. We learn how the software is constructed, deployed, distributed and so on. Often, I have seen people downplay the "learning" aspect of testing, as they would like to position themselves as "experts".

I - Investigation. As testers, we investigate the claims about the product. How do people perceive the product? We investigate inconsistencies. We investigate bugs, the impact of a new technology, or the effect of a software change on the overall image of the software. Investigation is about focused information gathering and analysis around certain events - examining the evidence, and so on. Investigation starts off open-ended but quickly becomes focused.

C - Challenge (used as a verb). As testers, we need to constantly challenge the assumptions and beliefs around how people think about the software. What does each stakeholder think about the capabilities of the software? Challenge the premises, and so on. Challenging requires designing tests, experiments, etc. to expose the weaknesses of some aspect of the software.

E - Explore. Somewhat similar to discovery, exploration enables any information-gathering exercise. Explore market conditions. Exploring is about taking a tour. Exploration helps in modeling the problem space. Exploration is more open-ended than investigation.

Notice that each letter has some overlap with the others. You can learn while discovering, challenging a claim or exploring a feature. You can investigate something by exploring it or discovering it. You can challenge something by investigating it or learning about it, and so on. One way to think about DLICE is: Discover like Magellan or Columbus, Learn like learning a new language, Investigate like Sherlock Holmes, Challenge like a lawyer, Explore like exploring the moon's surface or deep African jungles.

A few practical themes for applying DLICE'ing:

  1. When there is a new thing that most people around you know little about - something you do not understand well - then Discover, Explore and Learn.
  2. When there is something that several others know but you do not - Learn through exploration and discovery.
  3. When there is "suspense" or "mystery" about a thing - Investigate: a defect, a strange behavior, etc.
  4. When there is something "well known" to you (you are pretty sure) about some claim - Challenge it and prove your point (backed up by prior discovery, exploration, investigation and learning).

So, the next time you feel bored doing testing, try switching your focus. Try doing some investigation, discover new ways of using the software, or explore an area of the software, and so on. You will find that testing is always interesting; you were just told about only one dimension of it (confirm, find bugs, check that it passes tests), so you felt low or bored about doing testing that way.

HAPPY C-DLICE'ing

Shrini

Monday, January 05, 2009

Context Driven Testing gets a boost – to grow stronger…

It is a fantastic new year gift (and also a weekend feast for the many like me who are spending the better part of Christmas and New Year in front of a laptop) for all testers. Dr. Cem Kaner (along with James Bach) has posted about context-driven testing with some new definitions and articulation. I belong to the context-driven testing community, along with many others. The context-driven testing community has proclaimed its philosophy and guiding principles since its inception. As the years went by, various groups of people (quite a few "unidentified" ones) started pushing false propaganda about our community. This, I believe, might have driven Cem and James to rework the original principles and the overall articulation about the context-driven community.

I have personally been part of many discussions where I heard people attacking the theme of context-driven testing by saying, "Everything in this software world happens within a context. Only a fool would work without context… so talking big and calling oneself a context-driven tester is no big deal. Context is there for everyone… everyone applies the context to the best of their ability and knowledge." The new articulation of "context-aware" attempts to describe such people. If you think about a practice first, and then tailor or modify that practice to the context, you would be "context-aware".

Another interesting point this post makes is that some people in the agile software development community have found so much in common with context-driven testing that they were claiming agile and context-driven testing are one and the same. With so much focus on people and their interactions, agile and context-driven testing can be said to have common roots: "people focus" (as against a process/standards focus). I would say Cem's articulation could have been much stronger when it comes to the differences between agile and context-driven testing. Insistence on 100% automated unit tests, compulsory standup meetings, TDD as a must, and all sorts of "standard" stuff is clearly a deviation from the agile manifesto (the choice of people over processes), as articulated by James Bach at STAR West 2008. I see the context-driven testing community distancing itself from the agile community in these areas.

Also look at the new definition of context-driven testing: "Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing." I am sure most of my friends who are not in the CDT (context-driven testing) community will say, "Oh yes… that is common sense. We work for stakeholders and customize our testing approaches to the context… what is the big deal? Why are you making common sense a big thing?" My response to such people would be, first, that the so-called common sense in this case is not so common, so there is a big deal here. Secondly, thinking about practices first and then the context is "context-aware", not "context-driven".

The critical focus on "best practice" has been rather mild, I would say (it appears only 4 times in the post). Cem apparently let the proponents of best practices off lightly by saying, "Context-driven testers reject the notion of best practices, because they present certain practices as appropriate independent of context…. However, when someone looks to best practices first and to project-specific factors second, that may be context-aware, but not context-driven."

Context-imperial testing is somewhat similar to what I normally refer to as "goal displacement" (a term I learnt from James Bach). Instead of designing and adapting the practices to the project, organization or group context, context imperialists would RATHER change the project, group or organization itself to suit the "best practices" they are aware of. A recent example: someone said, "We cannot afford to do testing this way, as it will not allow us to collect the metrics that we need; so let us change the process so that we can collect and use the metrics." It is a sad state of affairs that people get away with such "context imperialism".

The context-driven testing community's viewpoint on "detailed specifications, detailed test script documentation, etc." has always been perceived as a "means to promote exploratory testing". Many people I interacted with would immediately say, "Oh! You mean to say, do exploratory/ad hoc testing then," in response to my statement that "in the context-driven community, we do not insist on detailed specifications… in fact, some of us think it is a crime to refuse to test without specifications." In reality, the crux of the matter is that in CDT, testers need to cope with whatever information they get and start from there to gather the information they need. In those situations where time, information and people with knowledge are rare commodities, context-driven testing is the clear winner. It prepares its testers for such eventual realities.

Finally, the assertion that "there are no context-driven techniques" should put to rest all those statements and viewpoints such as "exploratory testing is a context-driven testing technique". Neither ET nor context-driven testing is a testing technique in itself.

Overall, this post is a great milestone in the history of context-driven testing and should be mentioned on the context-driven website.

Shrini

Sunday, January 04, 2009

MS Outlook as Alarm Clock: Is this a bug?


[Background first… those who are interested in reading about the bug I am talking about can skip the first few paragraphs and go straight to the bug.]

There is a saying that "software users, most of the time, do not use the software as conceived by the designers or analysts". I happened to deploy Microsoft Outlook as an alarm clock to help me with a wake-up call. I am away from home, not carrying a cell phone; an iPod is not a good tool for this purpose; and buying an alarm clock for a short stay would be a waste… hence I zeroed in on Outlook's meeting reminder feature as a software solution to give me a wake-up call.

I set up a recurring meeting of 0 minutes at, say, 7:00 AM every morning, with a reminder of zero minutes. I thought this would work. I needed another hack: a sound alarm long enough to wake me up. Most of the small sound files shipped with Windows and MS Office did not serve this purpose. I had Yahoo Messenger on my machine, with an audio file that plays for a few seconds. I decided to use that as the reminder sound file. Even there, there was a problem: I thought 2-3 seconds of the Yahoo sound file might not be long enough to wake me up. So I played the file repeatedly, say 7-8 times, and recorded the whole thing, to get a sound file that plays for 8-10 seconds.

I went live with this setup, thinking that my worries about waking up at a specified time the next day were over. It seems that the UAT was not proper: the first morning, the alarm did not wake me up at all. When I investigated, I discovered that the previous night, before sleeping, I had muted the laptop's audio. So far so good. I waited for the next day to see if it worked… Bingo, it did, exactly the way I wanted. No big deal, I thought. Still, I was happy to have discovered a zero-cost technology solution to the problem at hand: a software alarm clock.

I am sure there are better ways; one could have written a small program in VB, Perl or Python to do this, even with a snooze feature (a rough sketch of such a program follows below). But I think my solution worked for me… only to fail the next day. What? Yes, the next day, the important day on which I had some important meetings to attend, the Outlook alarm failed. Is that a bug? It looks like I accidentally stumbled onto this bug…
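For the curious, here is roughly what that "small program" could look like: a minimal Python sketch of an alarm clock with a snooze. It assumes a Windows laptop (the standard library's winsound module is Windows-only), and the times and tones are arbitrary:

```python
# Minimal alarm clock with snooze - a sketch, Windows-only (winsound).
import datetime
import time
import winsound

def wait_until(alarm_time):
    """Sleep until the next occurrence of alarm_time (a datetime.time)."""
    now = datetime.datetime.now()
    target = datetime.datetime.combine(now.date(), alarm_time)
    if target <= now:                       # already past today: ring tomorrow
        target += datetime.timedelta(days=1)
    time.sleep((target - now).total_seconds())

def ring():
    """Beep for roughly 10 seconds - the 'long enough to wake me up' part."""
    for _ in range(10):
        winsound.Beep(1000, 800)            # 1000 Hz tone for 800 ms
        time.sleep(0.2)

if __name__ == "__main__":
    wait_until(datetime.time(7, 0))         # 7:00 AM wake-up call
    while True:
        ring()
        answer = input("Awake? Type 'y' to stop, anything else to snooze: ")
        if answer.strip().lower() == "y":
            break
        time.sleep(5 * 60)                  # snooze for 5 minutes
```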

[BUG] I think that a few hours before the scheduled appointment (and its alarm) was due to kick off, a pop-up message (a junk email alert) came up, prompting the user to take action. This is a modal window; until that dialog box is dismissed, Outlook cannot proceed with any other pop-up windows, such as meeting reminders. So my wake-up alarm did not get activated, because a modal window was waiting to be acted upon.

Is this a bug? Maybe, or maybe not. Some might argue that this is not a bug, as Outlook is not supposed to be used this way. A developer may say that the junk email feature was required to be implemented as a modal dialog box, and it was assumed that people would act upon it, as they do with any other modal dialog box. Others may say it is too small a problem to worry about. Some may even point out a workaround for me: check "Please do not show this dialog again", so that my alarms will work without any problem. The fix might be (superficially) simple - just make it a non-modal window - or there could be other implications; I do not know. What if I had missed my flight back to India due to this software problem? What if I had been late to an all-important meeting because of it, leading to a huge financial loss?

What do you think?

Shrini