Wednesday, December 06, 2006

How can a software tester shoot himself/herself in the foot?

Would you like to know about the self-destructive, even suicidal, notions of today's tester? Would you like to know how a tester can shoot himself/herself in the foot?

There are many ways – one of them is by “declaring” or "asserting" that -

Software testing is an act of (the whole or part of) software quality assurance.
A few variations of the above –

Software Testing = Software QC (quality control) + Software QA
Software Testing = Verification + Validation.


As an ardent follower and disciple of the Context-Driven school of testing, I swear by the following definitions and views –

• Quality – “Value” to someone (Jerry Weinberg)
• Bug (or defect or Issue) – “Something that threatens the Value” (I am not sure about the source)
OR “something that bugs somebody” (James Bach)
• Whatever is QA – that is not Testing – Cem Kaner
• Testing – the act of questioning a product with the intent of discovering quality-related information for the use of a stakeholder (James Bach / Cem Kaner – I attempted to combine the definitions of both James and Cem)

OR
• An act of evaluation aimed at exploring the ways in which the value to stakeholders is under threat (this is my own definition, which I arrived at quite recently – open for criticism)

• Stakeholder – someone who will be affected by the success or failure, or by the actions, of a product or service (Cem Kaner)

• Management is the TRUE QA group in an organization (Cem Kaner)

Now let us see how notions that cast testing as QA, or as a combination of QA and QC roles, are self-destructive – the equivalent of shooting oneself in the foot …

1. Terms like QA and QC appear to have been borrowed from the manufacturing industry – can you measure and assess the attributes of software the same way you do those of a mechanical component like a piston or a bolt?
2. You cannot control or assure the quality of software by testing.
3. It can be dangerous and costly for a tester to claim “I assure or control quality by testing”, as the claim can backfire when you don't.
4. Unless your position in the organizational hierarchy is very high, you as a tester CANNOT take decisions about
a. Resources and Cost that is allocated for the project (Budget)
b. Features that go into the product (Scope)
c. Time when product will be shipped out of your doors (Schedule)
d. Operations of all related Groups - Development, Business, Sales and Marketing etc.

When none (or almost none) of the above is in your hands, how can you assure or control quality?

5. When you claim that you assure or control quality, others can relax. A developer can say, “I can afford to leave bugs in my code – someone is being paid to do the policing anyway.” Or others will say, “Let those testers worry about quality; we have work to do.” – Cem Kaner

6. You will become a scapegoat or victim when bugs (or issues or defects) leak past you. One of the stakeholders may ask, “You were paid to assure or control quality – how did you let these bugs into the product?”

An interesting and relevant reference is mentioned in Cem Kaner's famous article
The ongoing revolution in software testing

Johanna Rothman Says (as quoted in Cem Kaner’s article) -

Testers can claim to do “QA” only if the answer to each of the following questions is YES:
• Do testers have the authority and cash to provide training for programmers who need it?
• Do testers have the authority to settle customer complaints? Or to drive the handling of customer complaints?
• Do testers have the ability and authority to fix bugs?
• Do testers have the ability and authority to either write or rewrite the user manuals?
• Do testers have the ability to study customer needs and design the product accordingly?

Clear enough?

What is the way out ---?

Treat software testing as a service to Stakeholder(s) to help them conceptualize, build and enhance the *value* of a product or a service.

Be a reporter or service provider – Don’t be Quality Police or Quality Inspector of an assembly line …

Thursday, November 30, 2006

Launching Indian Software Testing bloggers community ...

Are you someone from India or of Indian origin?
Do you work in/for Software Industry?
Do you do or have interest in Software Testing?
Do you read blogs on software Testing?
Do you blog?
Is software testing your passion?
Do you believe in sharing knowledge in software testing community in India?


Friends – if the answer to one or more of the above questions is “Yes”, please send me an email. I am launching the “Indian Software Testing Bloggers” community – a platform for all the passionate Indian bloggers out there. I need your support, energy and passion to build this community.


I have thought of one or two ideas about how to host the community on the web, what the charter of the community should be, etc. … please share your comments …

Let us start with this small step – who knows, one day it might take the shape of a big “Revolution” in Indian software testing.

I already have notable people like Pradeep S (who recently blogged here, calling on Indian testers to start blogging) with me, along with the guidance and blessings of world-renowned visionaries like James Bach and Michael Bolton.

Here are my contact details –

Shrini Kulkarni
Email: shrink@gmail.com
Cell: 91-9945841931

“What topic in testing do you want to blog about today?”

Wednesday, November 29, 2006

Story of a Test case ....

A test case, or test, is an important entity that we as testers create, use and work with as part of our testing activities. What if a test case were to come alive, like a living thing or a ghost, and tell its story – its life cycle?

Here is how it MIGHT go ...


• Born – in a Word document, an Excel sheet, a text file, or the HTML form of a web-based test tool – and only rarely in the mind of a tester.
• One of my parents is some form of reference document (requirements, design or functional spec); my other parent is the application behavior that I am supposed to check and verify.
• I exist to prove that both my parents agree and have no conflicts between them.
• My body structure is such that when one of my parents changes its shape or form, I need to change too; otherwise I get temporarily dumped and cease to exist – I get invalidated.
• I live in different forms – manual, automated, semi-automated, documented, undocumented, versioned and unversioned, in a test management tool, in informal documentation.
• I am a countable *thing*, though I differ from my siblings, cousins and friends in many ways.
• Named by a tester and pushed into some repository, I get an ID.
• I am referred to in many ways – test, test case, test idea, test spec, test procedure, test pack, test set.
• Sometimes I am so detailed that a school kid could execute me by following the steps, and sometimes I can be very tricky. Sometimes I become very lengthy, and sometimes I am just a one-liner – “Verify this …”
• Sometimes I have hard-coded data, and sometimes I have no expected results. Sometimes my internal parts contradict each other.
• I get classified as “simple, medium or complex” so that people can measure the time for creating, modifying, automating or executing me.
• I get a graduate degree when people start calling me a “regression test” – then I need to pass every time I am executed.
• Developers hate me when I fail, and managers would like to see only “passed” tests.
• Sometimes I am called a unit test, and sometimes an end-to-end scenario.
• Someone will review me to confirm that I was indeed born to my rightful parents.
• I tend to lose my identity when someone automates me and then forgets that I ever existed.
• Some tester adopts me, uses me and abuses me, cruelly compares me against my parents, and declares that I pass or fail.
• I go into hibernation every now and then (when either or both of my parents change their form and shape).
• I am at the mercy of a tester to spot the changes required to bring me back to life (out of hibernation).
• When one or both of my parents change their form and shape beyond recognition, or when one or both of them die (the feature is deleted from the application and the reference document) – that is the end of my life – I get deleted.

Interesting, right?

What is the story of your Test case?

Shrini

Some more interview tips - Questions that you should ask the interviewer ...

Continuing from my last post on this topic, I would like to touch upon an interesting and important aspect of a job interview from the job aspirant's point of view.

1. Asking questions about Employer and his company
(Demonstrate that you have researched the company and (already) know any publicly available information)

i)Nature of business, Size, office locations, Company's history
ii)Company's achievements in the recent past
iii)Company's Financials
iv)Organizational hierarchy and where the position for which you are being interviewed fits
v)Information about competitors
vi) Ask about customers and the company's standing in its operating domain
vii) Company's future plans about consolidation, diversification, expansion etc


2. Other high impact questions

i) What are the immediate challenges that you [the manager doing the interview] are going to face in the next 3 months? Are there ways that someone in the position you're hiring for could help address those challenges?

(Source: a blog post by Johanna Rothman)

ii) A year from now, how will you evaluate if I have been successful in this position? (Source : Louise Fletcher's Bluesky Resumes Blog)

iii) What is the next step? Where do we go from here?


Suggestions, views and comments welcome.

Shrini

Thursday, November 09, 2006

Context Driven thinking in Testing ...

I have been discussing/arguing with BJ Rollison about the issue of "Schools of Testing" here ...

http://blogs.msdn.com/imtesty/archive/2006/10/20/end-segregation.aspx

BJ suggests ending the segregation of the four schools of testing - with which I strongly disagree.

James Bach blogged on this here http://www.satisfice.com/blog/archives/74

and look at this simple explanation (by James again) equating adaptation to a context with "parenting" here ...

http://www.satisfice.com/blog/archives/60#comments

"There’s only one context that matters– the one that you are in at the moment. If that changes, then you adjust accordingly. It’s like parenting. You don’t have to figure out how to parent every child, just the ones that belong to you. The context-specific attitude says adapt to your children and then stop adapting. The context-driven attitude, taken to its logical conclusion, is like a child psychologist’s approach. Child psychologists need to know how to adapt to any given child (normal and strange) who walks in the door.


For the same reason, if you figure out how to report coverage on your project in a way that works for you, you can’t assume that the same method will work for me, nor do you need to worry about whether it would work for me. You can’t tell me “James, I have discovered the right way and you should do it my way, too.” What you say, instead, is “James, would you like me to describe some experiences I’ve had with coverage reporting? I feel good about how I do it, over here in my project.
"

Expect more on this in coming days

Shrini

Hola SPAIN - QA&TS international conference and me ...

Hola SPAIN….

I was in Bilbao, Spain, for the QA&Testing conference on embedded systems, Oct 18-20. I spoke about “Test Case Design and Automation” – a topic I have been working on for the last 3-4 months. My talk, and others in the test automation track, were the ones that focused on “non-embedded” software systems. The paper was an initial attempt to explore two big, complex and largely misunderstood concepts in software testing – test case design and test automation. I will continue to work on this topic and explore it further. I was also given an opportunity to express my views on the “Future of Software Testing and its Challenges” at a round-table discussion on one of the evenings of the conference.

I met a few nice and interesting people at the conference – Paul Jorgensen, Scott Barber, Ray Arell, Doron. I discussed with them, at length, various topics ranging from automation and test design to testing and automation in the embedded systems space.

Paul Jorgensen (author of the book "Software Testing: A Craftsman’s Approach"), with his depth of experience in testing (from both the telecom industry and the university), was a pleasure to listen to and talk with. He gave my nagging questions on test design topics a patient hearing. His presentation on “All-Pairs Testing” was one of the thought-provoking papers of the conference.

Ray Arell of Intel was very lively and quickly mixed with the group – I never felt that we were meeting for the first time. His presentation on “How to Expand and Improve Your Test Capabilities” was another great one of the conference. Ray is a highly experienced professional, with about 21 years of experience and a book to his credit on “Change-Based Test Management”. With his witty comments, Ray kept the participants hooked the whole time. My discussions with him related to testing practices in the microprocessor/semiconductor industry. We traveled together all the way from Bilbao to Bangalore.

Scott Barber (www.perftestplus.com) managed to be at the conference on the second day and was another interesting person to meet. Scott's proximity to people like James Bach, Michael Bolton and Cem Kaner especially made me spend more time with him to understand his current areas of interest. He is a performance testing guru, and gave me lots of good tips and hints on topics ranging from performance testing, test automation and the challenges of independent test consulting to future trends in test automation – which he prefers to call “computer-assisted testing” (and I agree with him). Thank you, Scott, for all those valuable suggestions.

One very pleasant side effect of my presence at this conference was exploring the beauty of the city of Bilbao. I am a nature freak and love to hang around greenery, water bodies, lakes, etc., and a few locations in Bilbao gave me a perfect opportunity to be with nature. I struggled with the language on a few occasions, and also with the food – it was difficult to find the 100% vegetarian food that I need. At every dinner and lunch the conference organizers made a special effort to find the closest thing to vegetarian food for me. They did a great job of organizing this conference. In all, it was a very enjoyable and educational trip for me. I definitely look forward to participating in the QA&Test 2007 conference … bye, Bilbao, till then.

Monday, September 25, 2006

Bug or a Feature ?

Differentiating between a bug and a feature is, more often than not, the result of someone (typically a developer or a tester) trying to prove some other person (again, typically a developer or a tester) wrong. Somebody says, "See, this seems to be a bug to me - I am trying to be like a typical end user", whereas somebody else yells back, "Look, this is as per this document, and nowhere is it mentioned that the feature should work like this."

In simple words, the difference between a bug and a feature - or "desirable" and "undesirable", or "expected" and "not expected" - is with respect to some REFERENCE. What is that reference? Who defined it? How credible is it? The moment all the parties and entities involved in a bug-or-feature conflict agree upon this reference, the distinction becomes very, very clear.

In most cases, a requirements document, a market survey or some expert opinion is taken as the reference. The confusion, most of the time, is due to the lack of a reference (an oracle) - and, still worse, the lack of awareness that "there is no oracle".

If I find myself in such a situation, I simply say: "This is my observation on this feature. I think we should analyze and explore whether anything is wrong here. I would be happy to participate in this exploration. I will not get into the issue of whether it is a bug or a feature - I am not the right person to decide that. I only report the observations that I make with respect to the features I test."

So next time you hear an argument like this, just ask: "Can all of us first agree upon the reference?" I bet you will sound a lot more intelligent in the crowd ....

Shrini

Tuesday, September 19, 2006

Are you making the most of Test automation?

Test automation is a beautiful and very handy concept. Making the computer do things while you watch for all those things you would otherwise not observe can be a very powerful thing in testing. Deploying automation can magnify the reach of manual tests in lots of ways. Unfortunately, in lots of places automation has been deployed with the intention of replacing (not supplementing, enhancing or extending) human testing.

If you are using test automation for the following cases, you are making the BEST use of the investment made in creating and maintaining automation suites …

1. Tests that have a high chance of human error – features that undergo minute changes, frequently, which a human eye is likely to miss.
2. The other extreme of (1) – routine and mundane tasks like installation and smoke tests.
3. Frequently repeated, high-volume repetitive tests.
4. Covering multiple platforms.
5. Cases where it would be impossible to perform the test manually –
a. Simulating some behavior that exposes a risk, e.g. a memory leak.
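Case 3 above can be sketched in a few lines. This is a toy illustration, not a real test suite: `format_price` and the invariants checked are invented for the example, standing in for real application code.

```python
# High-volume repetitive testing sketch: check one routine over a range of
# inputs far larger than any human would verify by hand.

def format_price(cents):
    """Hypothetical application code: format integer cents as a currency string."""
    return "${}.{:02d}".format(cents // 100, cents % 100)

def run_high_volume_check():
    """Exercise format_price over 100,000 inputs, collecting any violations."""
    failures = []
    for cents in range(0, 100000):
        result = format_price(cents)
        # Check invariants, rather than a hand-written expected value per case.
        if not result.startswith("$") or len(result.split(".")[1]) != 2:
            failures.append((cents, result))
    return failures

if __name__ == "__main__":
    print(len(run_high_volume_check()))
```

The point of the sketch is the shape of the loop: per-input invariants instead of per-input expected values, which is what makes this volume of repetition cheap for a machine and hopeless for a human.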


Less desirable or efficient Usages of Automation

1. Automating a set of low-power manual tests just to cut down the cycle time of running those tests manually – without evaluating those tests.
2. Automating every test that can possibly be automated, aiming to cover the maximum of testing by automation – a notion of “more is better”.

Shrini

Tuesday, September 12, 2006

Top burning issues in Software Testing ...

Top 10 Burning issues in Software testing ….

While thinking of a thought-provoking question that would make James Bach write about it, I got this idea. Here is a post that lists the top 7 (I want to reach 10) burning issues in software testing today. I solicit other issues, and thoughts about how to address them … write to me, and I will consolidate them and post them back on this blog …

1. External Issues

a. Business pressure (cost reduction, quick time to market, proliferation of computers –hence growing complexity of software systems hence growing complexity in Testing)
b. Tester’s place in overall Software Engineering eco-system (conflicting roles and responsibilities)
i. Developers
ii. Business analysts
iii. Sales and Marketing
iv. Stakeholders
v. End users
2. Project management
a. Predictability - questions like: When will you be done? How far is there to go? How much time will it take, and how much will it cost?
b. Dependency on Business Analysts (specs) and Developers (code delivery) and Project manager expectations (deadlines)
c. Accountability issue – What if a bug is missed from testing?
d. Impossibility of 100% Testing
e. Testing resources are limited, added late in the project cycle
f. When development is delayed, testing time gets chopped while the deadline remains – and when testing misses a bug, heads roll in testing.

3. Justification for Existence issue

a. Objective of testing
b. “Anyone can do” Notion
c. Testing as quality Gatekeeper
d. Awareness issue – how to make others, especially stakeholders, understand the value of testing.
e. Outsourcing

4. Hiring and Managing testers (Performance monitoring)

a. Skill issue: what is the most important skill to look for in a good tester – technical knowledge, business domain knowledge, test process knowledge (all the folklore and legacy), formal techniques and methodologies in testing, good learning and thinking capabilities, and so on …
b. Objective goals – how do you know a tester has done a good job?
c. Measuring the performance of testers by the # of bugs logged.
d. The notion that “a tester needs deep technical knowledge (not mere process stuff) so that he can get respect from development”.
e. The notion that “testers need to be business domain experts” – the modern-day interview questions: “What domains have you worked in? Do you have any experience in the telecom (billing, at that) domain, or health care? Do you have knowledge of capital markets?” And so on …
f. Notion of “We need testers who can code”

5. Tools and mechanical Part of testing

a. Automated Testing (not Automated Test execution)
b. Regression Testing ( Repeatability Argument)
c. Testing is a branch of Computer science (yes or no?)


6. Philosophy of Testing Issue – what do you think a tester should do?

a. Handling Process and Metrics fanatics
b. Testing as a factory assembly line (test cases IN, results/metrics OUT)
c. Find bugs – more bugs – better Testing
d. Prove that software works
e. Quality Assurance/Quality

7. Lack of Education and Skill development programs

a. General Awareness among the community
b. Problems with certifications
c. Formal university Programs
d. Research

Shrini

Thursday, September 07, 2006

Ask James Bach - a question on Testing...

James Bach has a post on his blog inviting questions on Testing. This open invitation comes with a rider ... only *interesting* questions will be answered, the rest will be ignored, and the best ones will be *rewarded* with James writing a whole new blog post. I am still trying to come up with a question that will make James write a separate blog post - that would be a real question.

Just to remind you of the commenting policy on James' blog --- any comment that makes it to his blog post (after moderation) is considered useful to the readers of the blog (as endorsed by James himself). In my opinion, it is like a "treat" for me when my comment makes it to the comments list.

So what are you waiting for ... Just grab the opportunity ... Ask James a nice question on Testing ...

Thursday, August 10, 2006

A good bug report ...

Read this bug report for Mozilla Browser ---

https://bugzilla.mozilla.org/show_bug.cgi?id=154589

What makes this bug report a special example ...

1. It clearly explains the background of the bug - with an example.
2. It makes a strong case for why the bug is important from the user's perspective.
3. It is persuasive enough for a stakeholder to press for a fix.
4. It gives examples and references from other sources.

As Cem Kaner puts it -- a good bug report makes developers want to fix the bug. If you have managed to draw the attention of the developers, the PM and other stakeholders, you have made a strong beginning - so make the bug report look appealing.

BTW, bugzilla.mozilla.org is one of the best places to learn:

0. It is an open bug database -- a huge knowledge repository.
1. Good bug reports - look for bug patterns and learn from them.
2. Learn about security vulnerabilities and browser issues.
3. Learn and brush up on the fundamentals of the Web and the standards that make up the "Internet".


Shrini

Sunday, August 06, 2006

Test Automation - takes its toll on Microsoft Testers ....

I read this old story (reported by the Seattle Times in and around January 2004) about Microsoft laying off 62 testers in the Windows group.

http://seattletimes.nwsource.com/html/microsoft/2002155249_mslayoffs20.html

Because (as reported by the Seattle Times - www.seattletimes.com):

1. They had automation, so testers were not required.
2. They needed to cut costs - either send jobs to India (the low-cost option) or aggressively automate...

It is pretty sad to note that a company like Microsoft (I am an ex-Microsoftee) would take a step like this. Conventional wisdom, and all the classical/contemporary literature on test automation, makes it clear that "automation cannot replace human beings and the human part of testing". I am at a loss to understand why Microsoft (some groups in MS) thought that automation could replace testers.

This story was published more than a year ago, and it is not an official communication from the Redmond-based software giant.

But The Seattle Times is the largest daily newspaper in Washington state and the largest Sunday newspaper in the Northwest. Well respected for its comprehensive local coverage, and the winner of seven Pulitzer Prizes, it is also recognized nationally and internationally for in-depth, quality reporting and award-winning photography and design.

I am afraid it sends the wrong signal -- Microsoft should have (and might have) done something to set this right...

Anyone listening?

Shrini

James Bach on automation ...

James Bach has posted the following blog post on manual tests and automation.

http://www.satisfice.com/blog/archives/58

It is pretty interesting stuff. Read the post, and my comments on that post.

Especially the Rules of Automation as per James:

Test Automation Rule #1: A good manual test cannot be automated.

Rule #1B: If you can truly automate a manual test, it couldn’t have been a good manual test.

Rule #1C: If you have a great automated test, it’s not the same as the manual test that you believe you were automating.

Note the comment chain on that blog post --- really thought-provoking.

More on this later ...

Shrini

Sunday, July 23, 2006

W3C Markup Validator ...

Michael Bolton (www.developsense.com) mentioned this tool while we were having lunch together in Toronto. I was looking for ways to assess, report on, and hence improve the testability of web applications - and thereby their automatability. I found this to be close to what I was looking for.

http://validator.w3.org/

Using this tool, we can verify the web pages of an application against W3C and other web standards. In my opinion, the conformance of web applications to standards like the W3C's is useful in the following respects:

1. Application upgrades, and future enhancements to the platform and core technologies, become less painful and provide cost advantages - for example, using a new web/application server, supporting a new mobile platform, or technology upgrades in J2EE and .NET.

2. Improved testability - This is a big issue in automation. If the applications that are candidates for automation are not built for testability (simple things like having unique IDs for GUI controls and windows, so that the automation tool can recognize them), automation will be difficult and will require lots of custom code to be written - costly, in the end, in terms of both development and maintenance of the automation solutions for web applications.
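The testability point above is mechanically checkable: duplicate `id` attributes are exactly the kind of thing that breaks an automation tool's ability to locate controls. Here is a minimal sketch, using only the Python standard library; the sample HTML is invented for illustration.

```python
# Detect duplicate id attributes in an HTML page - a testability smell for
# GUI automation, since tools locate controls by unique id.
from html.parser import HTMLParser
from collections import Counter

class IdCollector(HTMLParser):
    """Collect every id attribute value seen in an HTML document."""
    def __init__(self):
        super().__init__()
        self.ids = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "id":
                self.ids.append(value)

def duplicate_ids(html):
    """Return the ids that appear more than once, sorted."""
    collector = IdCollector()
    collector.feed(html)
    return sorted(i for i, n in Counter(collector.ids).items() if n > 1)

sample = '<form><input id="name"><input id="name"><input id="email"></form>'
print(duplicate_ids(sample))  # ['name']
```

A report like this, run against every build, gives the automation team an early warning before scripting effort is wasted on unidentifiable controls.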

In addition you can also find other free tools at

http://www.w3.org/QA/Tools/

shrini

Friday, July 14, 2006

Some web security related stuff

Information about third-party cookies

http://www.mvps.org/winhelp2002/cookies.htm


Using hosts file to block third party cookies

http://www.mvps.org/winhelp2002/hosts.htm
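The hosts-file trick works because the hosts file is consulted before DNS: pointing an ad or tracking domain at an unroutable address means its content never loads and its cookies are never set. A minimal sketch, with made-up domain names rather than a real blocklist:

```
# Windows: C:\WINDOWS\system32\drivers\etc\hosts    Unix: /etc/hosts
# The domains below are illustrative placeholders, not a real blocklist.
0.0.0.0    ads.example.com
0.0.0.0    tracker.example.net
```

The winhelp2002 page above distributes a maintained blocklist in exactly this format.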


General security issues in Windows and IE

http://www.mvps.org/winhelp2002/security.htm#Firewall

Thursday, June 08, 2006

Adaptive Automated Testing ....

I came across this topic on automation
http://www.aberrosoftware.com/aat.html

Most of the white paper looks like a sales pitch for the Aberro product, but it does seem to be a new and interesting concept.

A brief review by me ----

Objectives of automation (as stated in the paper), with my comments:

• Provide high coverage of the application under test, cost-effectively --- Accepted.
• Enable early deployment in the development cycle, when defects are less expensive to fix --- This does not come under the traditional test automation that we know today. I would rather apply extensive skilled manual testing and reviews to limit defects early in the cycle – not automation.
• Be fast and inexpensive to develop and maintain tests – Fast, yes; but “inexpensive to develop” will likely remain on the “forever wishlist”.
• Eliminate the requirement for programming skills – Very bad advice, and a nearly impossible goal to achieve.
• Adapt well to changes in application functionality – A very good one, and somewhat likely to be achieved.
• Enable fast, unattended test execution – Accepted, and achieved by most of the tools available in the market today.
• Provide strong verification capability – Verification capability is strongly related to the testability of the application. Most of the tools in the market today do not have this as a key feature.



Comparison of various automation techniques (page 11) – a good one.

What Adaptive automation means –

1. No test authoring – no manual test cases are required for automation.
2. It can be adopted in any phase of the development cycle – even when the application is unstable.
3. It is almost insensitive to application changes – the tool adapts to the application.

I am not sure how it works; it appears to be interesting.


Finally, a few not-so-good points in the paper:

1. The paper's title has the keyword “Automated Testing” instead of “Test Automation” – there is a difference. Testing as an activity simply cannot be automated; only the execution part can be. Anything in the software world that talks even remotely about “automated testing” is suspicious, and is a sales pitch.
2. The paper starts off selling “testing” by citing a famous figure about a 60-billion-dollar loss to software defects. Do we need to sell testing by quoting some survey done in 2002 and talking about defects? I am sure there is a better way ….
3. It mentions that “manual testing is labor-intensive, hence the cost”. So is quality. Even development is labor-intensive – why is there no concept of “automated development” that sells? Saying that manual testing is costly, hence go for automation, is a bad argument in favor of automation. Further, equating manual testing to “brute force” is an insult to the craft of manual testing – I strongly object to this.
4. It mixes up QA and testing throughout the document.


Shrini

Wednesday, May 17, 2006

A note on Test estimation ...

I was discussing test case estimation with one of my colleagues, and a question was raised: “How do we handle creep in the number of test cases? Say we initially estimate x test cases, and at a later point this number becomes 2x.”

My response was -

Estimation is always an iterative process. You typically estimate in terms of test cases early in the test cycle - that is, in the planning phase. Make it clear to all the stakeholders that "estimates are based on the current understanding of the application and the test requirements, and are likely to change". Have this as the main "disclaimer" in the test plan or estimation document.

This gives you the flexibility, later in the cycle, to ask for more resources and time. If your PM and other stakeholders crib or complain, tell them that as the test team progresses it will gain more understanding, and the initial estimates are likely to (and mostly will) change. Just say "I told you so" and show them that line in the test plan.

This is a diplomatic way of handling future uncertainties in test estimates.

The other side of this situation is that the increase from, say, 2000 to 4000 test cases will happen in the test design phase. So you can write all 4000 test cases if time permits, and execute only the important ones during the execution phase.

So far, as I have seen in this industry, test effort estimation happens through experience, manipulation and intelligent planning. It is more about negotiation and communication skills than about any science or proven method.

When development - in spite of having half a dozen estimation techniques, international bodies of knowledge, and certifications like the PMP - fails more often than not at estimating development effort, we in test can, while estimating, start with some good value and keep it open for future updates.

Please spare the testing community from being subjected to scrutiny over estimation... we are learning.

Shrini

Wednesday, April 26, 2006

Windows Registry hacks ..

Here is a lazy post but quite useful ....

Your one stop shop for all registry hacks on Windows

http://www.winguides.com/registry/

My favorite registry hack is blocking access to a specific hard drive (let us say C) for unauthorised users ....

Another one is preventing right-click on the Desktop....
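Both hacks boil down to DWORD values under the Explorer policies key. The snippet below is a reconstruction from memory; the value names are assumptions to verify against the winguides.com pages, and exporting a registry backup first is the sane precaution.

```
Windows Registry Editor Version 5.00

; Assumed value names (verify before applying):
;   NoViewOnDrive     - bitmask of drives to block (A=1, B=2, C=4, D=8, ...)
;   NoViewContextMenu - disables the Explorer/desktop right-click menu
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoViewOnDrive"=dword:00000004
"NoViewContextMenu"=dword:00000001
```

Because these live under HKEY_CURRENT_USER, they restrict only the user whose hive is loaded, which is what makes them usable against "unauthorised users" on a shared machine.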

Shrini

Wednesday, March 08, 2006

automationjunkies ....

Found this interesting site --

http://www.automationjunkies.com/index.shtml

It looks a bit outdated (it appears to have last been updated in 2004), but it is a good collection of resources related to automation...

Check it out ...

Shrini

Thursday, March 02, 2006

QA and Testing - Debate continues ...

Further to this discussion on QA vs Testing, Michael Bolton makes a very interesting statement about what QA can do that a tester cannot do, or is not empowered to do.

For myself, I don't like the term "quality assurance", and will do what I can to make sure that I'm called a tester. Unless I have authority and control over schedule, budget, staffing, product content, and product direction, I don't have the ability to assure quality in a product. I can report on it, though--and that's what a tester does, in my view.

Here is the complete Google group discussion thread on the topic
http://groups.google.com/group/comp.software.testing/browse_thread/thread/71c357a1c4d1b342

Shrini

Wednesday, February 22, 2006

Estimation for Test Automation Part 1 ...

Test estimation itself is a mystery, a magic tool that every test and project manager is trying to master these days, with very little success I should say. Test estimation for automation gets a little more complicated, as it involves another piece of software: the automation tool. It is like estimating for a full-fledged application development and testing project. Treat it as nothing less. Ask your manager or his/her colleague PMs how they estimate a complete project (dev and test), and take some leaves out of their experience.

Some thoughts to get you started:

1. Like a software development project, automation has its own similar lifecycle:
a. Automation planning
b. Test case analysis (= requirements phase)
c. Automation design (= design phase)
d. Coding and unit testing of scripts
e. Automation builds
f. Source control, testing and defect management
g. Deployment on the test lab - trial run, fix and re-run
h. Sign-off
So be sure to factor in time for all of these; focusing only on test cases and their complexity surely leads to underestimation.

2. The following are commonly assumed to be factored in by the PM; make sure you check whether your team is actually there:
a. Required tool licenses
b. Team trained in the usage of QTP, with some development experience
c. References like coding guidelines, configuration management guidelines and other dos-and-don'ts kinds of documents
d. Setting up the test environment - this is very important. We mostly assume it is there, and when starting the project you will spend more than your estimated time getting the test environment up and running for the whole group. Take this point a little more seriously if you are running an "offshore-onsite" type of automation project.
e. Framework - supporting structure other than the tool; decide whether you need it or not
f. Other tool and software requirements, like VSS, a database, a shared drive to keep common stuff, etc.

3. Test cases - you need to look at test case complexity from a different angle when you are automating. It does not help to classify test cases as simple, complex, etc. You should look at it from an overall common-code development perspective. Take a set of logically related test cases and think about how many reusable functions you will require for navigation, data input and verification. The more functions a test case needs, the more complex it will be to automate, and hence the more time it will take. While estimating, always consider a group of test cases, not individuals. Within a test case, the items that need to be considered are the number of steps, the number of inputs, and the number and type of verification points.
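The "count reusable functions per group, not individual cases" idea can be turned into a toy calculator. All the numbers below (hours per shared function, per-case wiring cost) are invented placeholders you would calibrate from your own project history:

```python
def automation_estimate(groups, hrs_per_function=4.0, hrs_per_case_wiring=0.5):
    """Estimate automation effort for groups of related test cases.

    groups: list of dicts like {"cases": 12, "reusable_functions": 5}.
    Shared functions (navigation, data input, verification) are paid for
    once per group; each case then only pays a small "wiring" cost of
    test data plus calls to the shared functions.
    """
    total = 0.0
    for g in groups:
        total += g["reusable_functions"] * hrs_per_function
        total += g["cases"] * hrs_per_case_wiring
    return total

effort = automation_estimate([
    {"cases": 12, "reusable_functions": 5},   # e.g. login + navigation flows
    {"cases": 30, "reusable_functions": 8},   # e.g. data-entry screens
])
print(effort)  # 73.0  (13 functions * 4 h + 42 cases * 0.5 h)
```

Notice how the group with more cases is not proportionally more expensive: once the shared functions exist, extra cases are cheap, which is exactly why estimating case-by-case misleads.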

When you are asked for an automation estimate, ask for time to analyze the target test cases and then make a judgment call. If you are asked to estimate in a quick and dirty way, shoot back asking how much error in the estimate they are willing to take: 20-30%? Tell them you will refine your estimates once you have had a complete look at the test cases. As happens in development, where you revise your estimate after requirements (if that is allowed), do it after a thorough test case analysis...

Rest in part 2 ...

Shrini

Monday, February 20, 2006

Pairwise Testing ...

This is an interesting topic in testing, related to testing a feature that is influenced by multiple variables. Here is a blog post from Apoorva Joshi, which is like a single reference that points to several other notable ones, including the two famous Michaels in the testing community: Michael Hunter of Microsoft and Michael Bolton.

http://criticsden.blogspot.com/2005/02/pairwise-testing.html

Right now I am too eager to publish this post as is; I will come back later with my comments...

Here are my quick questions about pairwise testing:
1. Why are pairs important? What about triplets, or 4 variables at a time?
2. How is the concept of "Orthogonal" or "Taguchi" arrays related to pairwise testing?
I will study and come back on this... Meanwhile, enjoy reading the above thread.
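To make question 1 concrete: the promise of pairwise testing is that covering every *pair* of values needs far fewer tests than the full cross-product, while covering triplets grows the suite much faster. Here is a rough, greedy all-pairs picker in Python (a toy, not a replacement for real tools such as PICT or AllPairs; the parameter names and values are made up):

```python
from itertools import combinations, product

def pairwise(params):
    """Greedy all-pairs picker: from the full cartesian product, repeatedly
    keep the row that covers the most not-yet-covered value pairs.
    Not optimal (real tools do better), but it shows the idea."""
    names = sorted(params)
    all_rows = [dict(zip(names, vals))
                for vals in product(*(params[n] for n in names))]
    uncovered = {((a, row[a]), (b, row[b]))
                 for row in all_rows for a, b in combinations(names, 2)}
    chosen = []
    while uncovered:
        def gain(row):
            return sum(1 for a, b in combinations(names, 2)
                       if ((a, row[a]), (b, row[b])) in uncovered)
        best = max(all_rows, key=gain)
        chosen.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
    return chosen

params = {"OS": ["XP", "Win2k", "Linux"],
          "Browser": ["IE6", "Firefox"],
          "DB": ["Oracle", "SQLServer"]}
tests = pairwise(params)
print(len(tests), "tests instead of", 3 * 2 * 2)  # e.g. 6 or 7 instead of 12
```

With more parameters the saving becomes dramatic: 10 parameters of 4 values each is over a million exhaustive combinations, but all pairs fit in a few dozen tests.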

Shrini

Server Virtualization

In computing, virtualization is the process of presenting a logical grouping or subset of computing resources so that they can be accessed in ways that give benefits over the original configuration. This new virtual view of the resources is not restricted by the implementation, geographic location or the physical configuration of underlying resources. Commonly virtualized resources include computing power and data storage.

A good example of virtualization is modern symmetric multiprocessing computer architectures that contain more than one CPU. Operating systems are usually configured in such a way that the multiple CPUs can be presented as a single processing unit. Thus software applications can be written for a single logical (virtual) processing unit, which is much simpler than having to work with a large number of different processor configurations.

Virtualization is about running an operating system (the guest OS) on top of another OS (the host OS). This technique enables running several virtual machines with different OSes at the same time on the same hardware. VMWare, MacOnLinux and Xen are examples of virtualizer software. Virtualization requires guest OSes to be built for the host machine's processor. It should not be confused with emulation, which does not have this requirement: when an OS runs on top of a virtualizer, its code runs unchanged on the processor, whereas an emulator has to interpret the guest OS code. MAME and Basilisk are examples of emulators. Binary compatibility is yet another feature: it is the ability of an OS to run applications from another OS. For instance, BSD systems are able to run Linux binaries.

Avenues for Virtualization software

1. MS Virtual Server 2005 - http://www.microsoft.com/windowsserversystem/virtualserver/default.mspx

2. VMWare – www.vmware.com
3. XenSource - http://www.xensource.com/ (from Xen Open Source community)


References:
http://www.virtualization.info/
A great article with an introduction to Virtualization [kernelthread.com]
Microsoft Virtual server Road Map :
http://www.entmag.com/reports/article.asp?EditorialsID=87


Advantages of Virtualization

1. Increased utilization of existing server hardware
2. Easier maintenance
3. Helps in business continuity and disaster recovery initiatives


Shrini

Friday, February 17, 2006

Automation of Setup programs ..

Let me make a blanket statement - my first impression and view is that "setup programs" in general are not suitable for automation. They are best tested manually, unless you work for a company like InstallShield, whose products themselves help in creating setup programs. Setup programs with 5-6 steps that take just a folder name as input are not going to provide a return on investment for automation.

Suppose you decide to play devil's advocate, say "I don't agree with you", and insist on automation. Here is one approach.

1. Identify how big the setup program is. How many steps are there - 10? 20? More than 20? How many variations are possible - 100+? If yes, automation will help you.

2. Now identify the most dense and cluttered screen/step in the setup, the one that takes the largest number of inputs. Automate that screen first, then proceed to automate, say, the top 5 critical screens with 5 scripts.

3. In my opinion, there is no need to automate the setup flow unless there are more than 20 steps and 100+ possible variations.
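The rule of thumb in points 1-3 fits in a few lines of code; note that the 20-step and 100-variation thresholds are this post's own heuristic, not universal constants:

```python
def should_automate_setup(steps, variations):
    """Rule of thumb from the post: automate the setup flow only when it
    is long (more than 20 steps) AND has many input variations (100+).
    Tune the thresholds to your own context."""
    return steps > 20 and variations >= 100

print(should_automate_setup(6, 4))     # False - typical small installer
print(should_automate_setup(25, 150))  # True  - long setup, many variations
```

Even when the answer is False, the counting exercise itself (how many steps? how many variations?) is the valuable part, as the next paragraph argues.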

Look at the beauty and usefulness of such analysis: you are trying to do automation, and in the process you ask so many questions. You think up and create scenarios which would otherwise never have been explored if you were following a structured, scripted test plan. See the value here. In the end you may or may not automate all those scenarios, but while trying to automate, and trying to convince others that automation is the way to go, you have tested and enriched your test scenarios.

To quote Michael Bolton (www.developsense.com), a friend and mentor: "Often automation in itself may not lead to good testing or value directly, but during the course of preparation, whatever analysis you do and the questions you ask - 'How can I verify this?', 'Why should I automate this?', 'What can go wrong here?' - are VALUABLE and should be done."

How? By always playing devil's advocate - "why" and "why not". If you stop questioning and just accept what is being told to you, you stop learning and cease to be a tester...

Shrini

Sunday, February 12, 2006

Shrini at STeP IN

Tips for Developer Testing ....

Do stop by this blog post to get Braidy Tester Michael Hunter's (Microsoft) tips for developer testing...

http://blogs.msdn.com/micahel/archive/2006/01/25/TestingForDevelopers.aspx

Shrini

In quest of automation tools ...

As automation catches on like wildfire in the software testing field, people are frantically searching for cost-efficient ways to do automation. Some cash-rich companies are investing in industry-standard, proven automation solutions from leading tool vendors like Mercury, Rational, Compuware and Segue; other "not_so_rich" companies are making do with free "open source" tools. Some product companies, like Microsoft and CISCO, invest in developing their own in-house tools. So, broadly, we have three categories of automation tools in testing: commercial automation tools, open source tools and in-house tools. The first two are available to the testing community at large. Information about the commercial tools is rather well known and is available at the respective websites.

Mercury - http://www.mercury.com

Compuware - http://www.compuware.com

Rational - http://www-306.ibm.com/software/awdtools/tester/functional/index.html
(It is quite surprising to see that information about the "once highly popular automation tool" Rational Robot has gone so deep into the IBM site - it is a hardly visible link from the IBM main site.)

Segue - http://www.segue.com


Here are some free tools on the web (a partial list, based on my own searching for such free tools).

1. Watir – Web application testing in Ruby http://wtr.rubyforge.org/
2. Watir Web Recorder - http://www.mjtnet.com/watir_webrecorder.htm
3. Open source testing tools http://opensourcetesting.org/
(Note that these are “TESTING” tools not “AUTOMATION” tools)
4. Selenium- A test tool for web applications- http://www.openqa.org/selenium/
5. TestMaker – Framework for Test automation of web based applications and Web services. http://www.pushtotest.com/Downloads/

This is an ever-growing list. The good thing is that more and more people are investing in developing open source tools. This will build up pressure on commercial automation tool vendors to offer tools that are cheaper and richer in functionality.

Shrini

Further on the road in becoming a finest software tester ...

One of my blog readers asked for my views on the things a good tester should invest in, and on the personal traits and qualities of a good tester. I did write about it in this post on my blog. Here is a sequel to it.

1. The first important thing in becoming a great tester is to question. Question anything around you - be it a door knob, a watch, your vehicle, your access card, your rice cooker, gas stove, TV, cell phone. Get curious about everything around you. Find bugs in everything that you see. The world is a giant piece of software, and it is full of bugs. Find bugs there; then finding them in software will become fun.

2. Powerful observation: can you observe things that others don't see? Can you notice the little bit of colour fading on a billboard hoarding? Can you find an error or mistake in the Prime Minister's reported speech? How about in the annual report of Infosys or IBM? Be aware of everything around you and see deep into it. Smart testers find bugs by observing carefully and are always curious about things - they never stop.

3. Memory and analytical skills. Take tests or training to improve your memory. Solve puzzles; play chess, Sudoku, jigsaw puzzles. A few of these tricks are mentioned in this blog post.

4. Becoming a good tester should be your goal - "logging tonnes of bugs" or "gaining expertise in automation" will come as side effects; they will be natural to you. Remember, there are no shortcuts: be a lifetime student of learning, questioning and observation.

Have you read the articles from James Bach and Michael Bolton on critical thinking? If not, they would be a good beginning. Don't expect immediate results. Depending upon how committed you are and how fast you get into the mode of questioning and observation, it could take about 1-2 years before you can call yourself a decent tester. That is my rough estimate.

Current industry trends focus on process and factory thinking, and want to make testing routine and predictable - but real testing is always interesting, instantaneous and dynamic. You cannot do good testing by following a "pre-laid-out list" - you need to think.


Shrini

Wednesday, January 25, 2006

Testing ideas for Database Stored Procedures ...

I happened to see a query in the Google group on software testing regarding the testing of stored procedures (as a part of database testing). Here are my views on this topic.

In one way, stored procedure (SP) testing is like testing an API. So apply all the rules that you would apply to an API to SPs as well. You can design test cases treating the stored procedure as a black box: design all possible combinations of valid and invalid inputs and observe the outputs. Be sensitive to the fact that when an SP is tested like an API, you will supply certain inputs which are otherwise not likely to be fed to it when it is used by a middleware component. So a developer may reject a bug related to an SP, saying that the SP will never be called with such a strange set of inputs - the client/middleware layer will filter out all bad or unlikely inputs.
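To make the black-box idea concrete, here is a minimal Python sketch. SQLite has no stored procedures, so a plain function stands in for the SP under test (the schema, the `sp_get_order_total` name and its validation rule are all invented for illustration); with a real database you would invoke the procedure through your driver instead:

```python
import sqlite3

# A plain function stands in for the stored procedure under test; with a
# real database you would call the procedure via your driver instead.
def sp_get_order_total(conn, order_id):
    if not isinstance(order_id, int) or order_id <= 0:
        raise ValueError("invalid order id")
    row = conn.execute(
        "SELECT SUM(qty * price) FROM order_items WHERE order_id = ?",
        (order_id,)).fetchone()
    return row[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_items (order_id INT, qty INT, price REAL)")
conn.executemany("INSERT INTO order_items VALUES (?, ?, ?)",
                 [(1, 2, 10.0), (1, 1, 5.0), (2, 3, 1.0)])

# Black-box style: feed valid and invalid inputs, observe the outputs.
assert sp_get_order_total(conn, 1) == 25.0      # valid id
assert sp_get_order_total(conn, 99) is None     # unknown id: SUM over no rows
for bad in (0, -1, "1; DROP TABLE order_items", None):
    try:
        sp_get_order_total(conn, bad)
        assert False, "expected rejection of bad input"
    except ValueError:
        pass
```

Note the last loop: those are exactly the "strange inputs" a developer may claim the middleware will filter out - which is the discussion worth having.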

For structural Testing (white box) following things come to my mind...

1. You might want to consider measuring the cyclomatic complexity of the code (the loops and branches in it). I believe there are some tools that measure this; here are a few links related to CC measurement. By the way, this is also referred to as "code complexity". The higher the code complexity, the higher the testing effort required to validate it.

http://www.sei.cmu.edu/str/descriptions/cyclomatic_body.html

http://www.linuxjournal.com/article/8035
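The metric itself is easy to illustrate. Here is a rough McCabe-style counter for Python code, built from the standard library `ast` module alone (the real tools at the links above handle SQL and many more node types and edge cases; this is just to show what "decision points + 1" means):

```python
import ast

DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    """McCabe-style approximation: 1 + number of decision points.
    An 'a and b and c' boolean chain adds len(values) - 1 branches."""
    tree = ast.parse(source)
    count = 1
    for node in ast.walk(tree):
        if isinstance(node, ast.BoolOp):
            count += len(node.values) - 1
        elif isinstance(node, DECISIONS):
            count += 1
    return count

snippet = """
def classify(x):
    if x > 0 and x < 10:
        return "small"
    for i in range(x):
        if i % 2:
            return "odd seen"
    return "other"
"""
print(cyclomatic_complexity(snippet))  # 5: base 1 + if + and + for + inner if
```

A value of 10 or more is the usual rough signal that a routine deserves extra testing attention.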


2. Database products like MS SQL Server (Oracle - I am not sure) provide monitoring/profiling tools that help measure the time taken for SP execution and other runtime parameters. This gives a good idea of the SP's runtime performance. I have used the MS SQL Server Profiler tool to monitor SP execution.

3. You could also test SPs for security and access control.


Shrini

Tuesday, December 27, 2005

QA != Testing: the mother of all testing-related debates is here ....

Whatever QA is, it is not testing – Cem Kaner
Do you want a better or more authentic source of opinion on this topic? Maybe not. It is time to open our eyes. Let us face it: testing has grown big enough to be separated from its big brother QA. My sole aim in writing this post is to address the misconception caused by using the wrong terminology all over the place.

My versions of differences:

QA: an entity that observes the activities of software engineering (while they are being performed, or at the end of some logical milestone) and verifies that all documented or known rules and procedures are adhered to. This entity does not "touch" the work product being produced; it watches the process.

Test: something that gets its hands dirty, works with the product, uses it, abuses it, checks all relevant documents, etc. Something similar to the development team that produced the work product, but from an evaluation angle.


These definitions are my versions, and I have attempted to the best of my abilities to represent the mass opinion about these terms. I could be wrong... I am ready for a debate.


Problems of using QA word for testing:

Testing cannot assure quality. Testing only measures, or attempts to measure, it. Quality has to be built in from the beginning, not tested in later.
Assuring quality is everybody's (in the team) responsibility. How can the testing team alone be entrusted with that job? In a true sense, it is management that is the real QA.
If it is strongly believed that testing is responsible for ensuring quality, then others can say: it is the testing team's job to ensure quality, not ours.
It sets up wrong expectations about the role of testing.


Notable posts that discuss this topic

http://www.testingreflections.com/node/view/827
http://blogs.msdn.com/micahel/archive/2004/10/27/248564.aspx
http://www.stickyminds.com/sitewide.asp?ObjectId=3543&Function=DETAILBROWSE&ObjectType=COL

Shrini

Saturday, December 24, 2005

AJAX - Making web experience more interactive ..

AJAX - a technique for bringing the desktop software experience to web users. With it, it is possible to do things like editing data in the client browser and getting an update without the web server resending the whole page. Be it "editing photos in Yahoo photo albums" or using "Google Maps", it is AJAX at work. Ajax is not a technology in itself, but a term that refers to the use of a group of technologies together.

AJAX stands for "Asynchronous JavaScript and XML". JavaScript is the default scripting language for web pages, used to perform most client-side data handling, rendering and other local processing. When combined with XML for exchanging data with the server in the background, it can transform the user experience greatly.

AJAX is often considered a threat to Macromedia's Flash technology. Flash is a more popular technology for rendering dynamic content, in the form of multimedia and video-based content, on web pages.

Companies like Microsoft, AOL and Amazon are already off the blocks in deploying this technology to spice up their websites, and there is an increasing trend among web designers to embrace it. Ajax applications look almost as if they reside on the user's machine, rather than across the Internet on a server.

Read here more about it --
http://en.wikipedia.org/wiki/AJAX
http://www.adaptivepath.com/publications/essays/archives/000385.php

Happy Christmas and New Year 2006 ...

Shrini

Friday, December 23, 2005

Risk based Testing ....some points to ponder

Risk-based testing - I can see lots of eyes rolling and eyebrows rising on seeing this term. The likes of project managers and delivery managers simply love to hate the word "risk". My boss, Manjunath Hebbar, the other day gave this beautiful definition of risk: "A risk is a piece of information, as of today (the present), regarding tomorrow's (future) problem", and he further said that, in contrast, an issue is a "problem in hand". Well, that discussion with him was about "project management", not about testing. I am not that competent to talk about project management, so let me be happy with what I am good at: testing.


Risk-based testing is all about testing the risks in the object that you are testing. That is simple to say. In reality, risk-based testing involves:

1. A methodical approach to identifying risks
2. Designing tests to explore the risks - to know more about the identified risks - as quickly as possible
3. Precipitating the failures resulting from the risks - executing the tests - as quickly as possible
4. Exploring other possible (unknown at the beginning) risks - as quickly as possible
5. Repeating steps 1-4 until all risks evaporate to a reasonable degree OR you run out of time/budget.
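Steps 2 and 3 above imply ordering your tests by risk so the important failures precipitate "as quickly as possible". A minimal sketch, assuming risk is scored as probability times impact (the test names and numbers are invented):

```python
def order_by_risk(tests):
    """tests: list of (name, probability 0-1, impact 1-10).
    Run the tests most likely to precipitate important failures first."""
    return sorted(tests, key=lambda t: t[1] * t[2], reverse=True)

tests = [
    ("export report",    0.2, 3),   # unlikely, low impact
    ("login",            0.1, 10),  # unlikely, catastrophic
    ("payment rounding", 0.6, 9),   # likely AND costly -> run first
    ("help page link",   0.5, 1),   # likely, trivial
]
for name, p, i in order_by_risk(tests):
    print(f"{p * i:4.1f}  {name}")
# payment rounding comes out first, help page link last
```

The scoring model is deliberately crude; the point of the loop in steps 1-5 is that the scores get revised as the early tests teach you more about the product.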


If you claim to have mastered the risk-based testing technique, check that you have the following:

1. A proven method of identifying the risks in the product domain - most of the proven ones have a strong theoretical or empirical background (mathematical/statistical or similar); a method you can rely on.
2. A test design methodology mature enough to design tests that explore and precipitate these risks.
3. An exit criterion to know when you are "reasonably done".
4. Knowledge of the "risks" associated with this risk-based (ironic, isn't it?) approach to testing.


There seem to be two broad categories of Risk based testing methodologies

1. Exhaustive risk modelling and analysis - approaches with involved mathematical and statistical models, along the lines of those used for "reliability engineering".

2. Heuristic risk-based testing - the rather simple one. James Bach has written an excellent article on this topic. As described in the article, this is one of the approaches that can be adopted (in combination with other "formal" techniques) when you have a resource and time crunch. He cautions that any heuristic is fallible, more provisional and plausible than guaranteed; it tends to explore its way toward a solution.

I happened to discuss this issue with others in the Google group on software testing, here. There are some very interesting and thought-provoking ideas from Andy Tinkham in that thread.


Read the following lines carefully before you finish reading this post, and tell me what you think...
1. Testing is mostly about finding important problems that threaten the value of the product - issues that bug someone who matters in the context.
2. A risk is a problem that might happen - the higher the probability of the risk, the higher the probability of failure.
3. Should not all testing be risk-based? Why would one want to test where there is no risk? When would non-risk-based testing be appropriate?

Shrini

Monday, December 12, 2005

Group Test Manager - Role Description

I happened to get a mail from a head hunter asking me if I am interested in a "Group Test Manager" position at a reputed company. When I looked at the job description, what surprised me is that not even once did the word "testing" appear in the profile of a job that requires managing a group of test managers. I am not sure whether this "profile acrobatics" is from a newbie head hunter or from an HR trainee who was given the task of drafting the job requirements for a Group Test Manager.
What is the problem here? What if the word "testing" does not appear? What is the big deal if I confuse "testing" with "quality assurance" or "SQA" or "production management" or "software process facilitation"? The danger of such a job posting is that it is a lose-lose game, as there will be an expectation mismatch on both sides. It would be good if the expectations were settled before the person starts the job. If that does not happen, chances are that either the company or the candidate will start looking for alternatives within months of the new job starting. Think of the cost of recruitment, training, relocation and the other indirect costs involved in bringing a person on board, especially at such a senior level. If, at the beginning, the hiring manager and HR fail to put the right effort into drafting the job requirements and a matching title, there will be dark clouds over the future.
Notice that in this job description the prerequisites (excellent communication, interpersonal and analytical skills) are mentioned last, under a "needless to mention" category. It is unfortunate that these essential requirements for the job are taken for granted - is that really the case?

====================================================
Position - Group test manager
Time line - very urgent
Experience 9 to 14 years
BRIEF PROFILE: Must have
Degree in Computer Science or related field or equivalent work experience and
Minimum of 8 years of related experience in SQA and/or Production Management required Combination of equivalent training and related work experience with experience managing technical teams.
Good software engineering process facilitation skills.
Excellent problem resolution, judgment, and decision making skills required.
Strong mentoring, team building, and counseling skills on software engineering and career issues.·
Excellent written and oral communication skills required.
Willingness to travel internationally on a regular basis (at least twice a year)
Nice to have· Overseas experience
Work experience in offshore and outsourced environment
Full SDLC experience· ITSM or Six-Sigma knowledge
Needless to mention: Excellent Communication, Interpersonal, Analytical Skills are pre-requisites.
============================================

Here is another example of a vague and ambiguous job description. This one is from a very reputed internet MNC.

Need QA Engineers with 2-6 experience in following

- Experience in Linux, Solaris and Unix
- Experience in White box
- Experience in Seibel and QTP

================================================

May God bless the company that requires people with the above-mentioned skills, the head hunter who hunts for the people, the interviewer who does the interviews, and finally the poor candidate applying for this job (like someone who walks into an automobile "spare parts" shop asking for medicines).

Finally, if you are somebody who is currently drafting job descriptions for your test positions and need help reviewing and fine-tuning them, feel free to contact me at shrinik@gmail.com for my services in this regard.

Shrini

Saturday, December 10, 2005

10 Classic Reasons why Test Automation projects Fail ...

Yes, this is the title of the paper that I will be talking about at the STeP IN Summit 2006 International Conference on Software Testing at Bangalore, INDIA.

Follow this link for the conference details

http://www.stepin.qsitglobal.com/speakers.html

Be there ....

Shrini

www.crazytestingideas.com ?

Hold on!!! This is not the name of my new website, nor is it another site I want you to rush off and have a look at. Then what the heck is it? Read on...

At times, when you as a tester were frustrated at not being able to find bugs, have you done something funky, totally out of the box, outrageously creative or totally unimaginable, and did that lead you to some interesting bugs? When was it that you jumped out of your chair and yelled "Eureka!!! I found a BUG - look at this!"?

Something on the lines of the shoe testing James Bach mentions in his Rapid Testing class?

One such thing I tried was similar to shoe testing. I was testing an application screen that lists items whose number and status were constantly changing - something like the flight arrival and departure status on a giant airport display screen. There was a "Refresh" button. I used a pen cap or a clip to hold the Enter key down so that it stayed pressed. Before doing that, I started monitors like Filemon and Task Manager on the Performance tab (CPU usage and memory), and left for lunch.

When I came back after an hour or so, the application had died with some weird message. It looked like a memory leak issue. No one would have imagined that the application would be used in this way. Through this I came to know something interesting about the application - a very unusual type of testing, but sometimes powerful. A developer might react to this and say, "No one would ever do this!", "This is not a typical user scenario". I say: "Look, this weird test exposed some unknown behaviour of this application which no one knew about until today. I leave it up to you whether you want to fix it or not."
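The stuck-Enter-key trick is essentially a soak test: repeat an action for a long time and watch memory. A small Python sketch of the same idea, using the stdlib tracemalloc module and a deliberately leaky, invented stand-in for the real "Refresh" action:

```python
import tracemalloc

_cache = []  # stands in for whatever the application forgets to free

def refresh_screen():
    """Hypothetical stand-in for the 'Refresh' action; it leaks on purpose."""
    _cache.append("x" * 10_000)  # forgotten on every refresh

def soak(action, presses, sample_every=100):
    """Hold down 'Enter' in software: repeat the action many times and
    sample traced memory, so a steady climb shows up as a leak."""
    tracemalloc.start()
    samples = []
    for i in range(1, presses + 1):
        action()
        if i % sample_every == 0:
            current, _peak = tracemalloc.get_traced_memory()
            samples.append(current)
    tracemalloc.stop()
    return samples

samples = soak(refresh_screen, presses=1000)
growing = all(b > a for a, b in zip(samples, samples[1:]))
print("memory keeps growing:", growing)  # True -> smells like a leak
```

With a real application under test you would watch the process from outside (Task Manager, Perfmon, or a tool like psutil) rather than instrumenting it, but the pattern - repeat, sample, look for a monotonic climb - is the same.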

Have you come across such crazy test ideas? Do share –

How about hosting a website - http://www.crazytestingideas.com/ ? Volunteers please? OR Venture capitalist to fund this new company that sells crazy testing ideas on web :-):-)

Shrini

Tuesday, December 06, 2005

Vision of an expert Tester?

Have you heard any skilled tester claim this vision: "I can test anything, under any conditions and within any time frame"? I have. Read this, another of the "James gems" articles. Very cleverly, he supports this tough goal with qualifiers. What would you do as an expert tester if asked to test a nuclear power plant in 30 minutes? How would you react to testing "IBM's Deep Blue chess software" in 2 hours? James gives a few hints in this article on how to handle such seemingly impossible testing goals.

BTW, thinking of it, all testers should slowly march towards developing the ability to "test anything, anywhere, within any time frame" - yes, with your own qualifiers. It is a tough goal, but worth pursuing, in my opinion.

Thank you, James, for setting all skilled testers off on this journey towards the MOON - every tester is invited.

Shrini

Sunday, November 27, 2005

Testing ideas for Webservices ...

I happened to stumble upon this link on Mike Kelly's blog, where he describes a method of using JUnit and XMLUnit to test web services. An idea worth copying...

http://www.testingreflections.com/node/view/2966
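The XMLUnit half of the idea - comparing two XML responses while ignoring formatting noise - can be approximated with Python's standard library alone (Python 3.8+, using C14N canonicalization; the sample payloads below are invented):

```python
import xml.etree.ElementTree as ET

def xml_equivalent(xml_a, xml_b):
    """Compare two XML payloads the way XMLUnit does at its simplest:
    canonicalize both (normalizing attribute order and quoting, and
    stripping ignorable whitespace) and compare the results."""
    canon = lambda s: ET.canonicalize(s, strip_text=True)
    return canon(xml_a) == canon(xml_b)

expected = "<order id='42'><item sku='A1' qty='2'/></order>"
actual = """
<order id="42">
    <item qty="2" sku="A1"/>
</order>
"""
print(xml_equivalent(expected, actual))  # True: same infoset, different layout
```

For a real web service test you would capture `actual` from the HTTP response and keep `expected` as a fixture; XMLUnit adds niceties like detailed difference reports on top of this basic comparison.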

Interestingly, this post is about "automation". Read on...

Shrini

why skilled testers do not like scripted tests that much ...

Jonathan Kohl has posted an interesting article on "scripted procedural test scripts". In the post, Jonathan takes us through a storyline that points at developers: how about giving a "step by step", clear and detailed set of instructions to developers - something written long before the development begins? Will it work? The development manager who heard this said, "I would fire a developer who would need that amount of hand-holding". A developer decides what is best and does it, based on good judgment and skill.

Yes, exactly - why can we not apply that to testing? As a skilled tester, would you like to be constrained by procedural tests? Would you like the hand-holding? Notice the keywords that Jonathan mentions in the post - tester's skill, heuristics, seeking information when it is not readily available, judgment, the mission of testing. They are very powerful.

He finally calls out to skilled testers. Do you identify yourself with the skilled-tester community? Are you somebody who thinks that testing is about "critical thinking", and are you interested in improving and nourishing that skill? If yes, read the books, articles and blogs of James Bach, Cem Kaner and Michael Bolton. Here are the websites for your reference...

James Bach - http://www.satisfice.com/
Cem Kaner - http://www.kaner.com/ and http://www.testingeducation.com/
Michael Bolton - http://www.developsense.com/. And don't forget to check out Michael's articles on stickyminds.com. They are like a big bang on "Rapid Testing" and critical thinking...

Shrini

Testing is not a mechanical Activity ....

I was reading this article posted on Cem Kaner's site. I wish to stress the following statement made in the article:

"Testing is a cognitive Activity - not a mechanical one"

Here is the meaning of the term cognition, from the web:

Cognition: 1. The mental process of knowing, including aspects such as awareness, perception, reasoning, and judgment. 2. That which comes to be known, as through perception, reasoning, or intuition; knowledge.

Note the keywords: perception, intuition, reasoning, knowledge, awareness, judgment - all of these represent "testing". True testing reflects these basic qualities of HUMAN INTELLIGENCE.

In today's world of software, with its focus on "processes" and "standards", we all try to reduce the practice of testing to a mechanical activity - be it test planning, test case design and especially execution (automation). Given a chance, the whole non-testing world (and some even in the testing group) would replace all smart, thinking testers with "nicely programmed robots" - writing test plans and test cases, executing them, logging bugs, making reports, attending meetings too?? They follow processes, do things 100% predictably all the time and don't crib about "burnout" - all at the push of a button.

Such is the craze and lack of understanding about testing in the current Industry.

Remember, "real testing" is about cognitive thinking and is very "HUMAN" - don't try to mechanize it. Today's software systems have become so complex that mechanized testing loses its effectiveness. Next time someone talks about "standardization" or creating some kind of robots - point them to this article.

Be a tester, a thinker and a human (no pun intended for non-testers - they are human too).


Shrini

Friday, November 25, 2005

LUC - what is Least-privileged User Account?

Today's Tip is related to "Access control and security":

Most of us who test applications on the Windows platform use an account that has administrative privileges on the machine from which we run the tests. This means that the application has access to "everything on the computer". Try running the application that you are testing after logging in to Windows with a non-admin account. You might see serious issues - the application may not launch, it may give weird messages, and a host of other problems may appear that you would never notice while working with an account that has admin privileges on the machine.

Why is it important to test (sometimes in a test cycle) by logging in with a non-admin account?

As a golden rule of access control and security, a program should demand only those account privileges that are just sufficient to execute the required functionality - nothing more. Developers typically work with admin accounts, develop code that assumes admin-level access and put that code on a production box. But the moment somebody with a non-admin account uses the application, some part of it may break. A tester should explore the ways in which the application uses the access model and investigate where things look suspicious.

What's the problem with a developer having administrator privileges on her local machine?

The problem is that developing software in that kind of environment allows Least-Privileged User Account (LUA) bugs to arise in the earliest stages of a project and grow and compound from there.
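To make this concrete, here is a minimal, hypothetical probe sketch in Python (the function name and marker file are my own illustration): an application that silently assumes admin rights often writes to protected locations such as its install folder, so checking whether the current account can actually write there - while logged in as a non-admin - exposes that assumption early.

```python
import os
import tempfile

def can_write(directory):
    """Return True if the current account can create a file in `directory`.

    A quick least-privilege probe: run it as a non-admin user against
    the locations your application writes to (install folder, shared
    data directories, etc.) to find hidden admin-only assumptions.
    """
    probe = os.path.join(directory, ".lua_probe.tmp")
    try:
        with open(probe, "w") as f:
            f.write("probe")
        os.remove(probe)
        return True
    except OSError:  # PermissionError is a subclass of OSError
        return False

# The temp directory should be writable by any account; a protected
# location (for example the application's install folder) may not be.
print(can_write(tempfile.gettempdir()))
```

This is only a sketch of the idea; a real LUA test pass would also cover registry keys, named pipes and other securable objects, not just the file system.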

Following is an excerpt from the book Computer Security: Art and Science by Matt Bishop, ISBN 0-201-44099-7, copyright 2003.

Definition 13-1. The Principle of Least Privilege states that a subject should be given only those privileges needed for it to complete its task. If a subject does not need an access right, the subject should not have that right. Further, the function of the subject (as opposed to its identity) should control the assignment of rights. If a specific action requires that a subject's access rights be augmented, those extra rights should be relinquished immediately upon completion of the action.
This is the analogue of the "need to know" rule: if the subject does not need access to an object to perform its task, it should not have the right to access that object. More precisely, if a subject needs to append to an object, but not to alter the information already contained in the object, it should be given append rights and not write rights. In practice, most systems do not have the needed granularity of privileges and permissions to apply this principle precisely. The designers of security mechanisms then apply this principle as best they can. In such systems, the consequences of security problems are often more severe than the consequences on systems which adhere to this principle. This principle requires that processes should be confined to as small a protection domain as possible.

Read this article on Least Privilege Access http://www.windowsecurity.com/articles/Implementing-Principle-Least-Privilege.html and this https://buildsecurityin.us-cert.gov/portal/article/knowledge/principles/least-privilege.xml

Try your LUC (luck?) next time by logging in with a user that does not have admin permissions, and see if you notice something suspicious. Don't forget to send me a mail about the bug that you caught...

Shrini

Friday, November 18, 2005

Sanitising your Resume...

I was going through a set of resumes for a test engineer position. Following are a few things that I find really unappealing and that kill my interest. I suggest to my fellow test professionals (and others in general) - watch out for these irritants, and if you are convinced that what I am saying makes sense, clean up your resume TODAY....

1. Open your resume in Word and search (Ctrl+F) for words like "Involved", "Participated" etc., and delete the sentences containing them. The recruiter is not interested in what you were involved in or participated in - he/she would like to see what you achieved by doing it. Here is a way to rate your resume: give it one negative mark every time you encounter such a word. How much did your resume score? Now do you understand why you are not getting enough interview calls?

2. I will not press very hard for this one, but it is better if you do it. Get rid of phrases like "was responsible for" or any variant of "responsibility". What attracts a recruiter is an action word - "Achieved zero downtime for the systems I maintained" v/s "I was responsible for maintaining systems and ensuring that downtime was low". Notice the power of action. You deliver the same message, but in a power-packed way. That catches the eyes of those who "matter" in getting you your new "dream" job. It is very important that you load your resume with these power-packed action words, lots of them - especially in the first 1-2 pages.

3. This one is the most useless part of a resume, if present: writing paragraphs about the application that you tested, with names, versions, modules and detailed functionalities. It looks like a copy-paste from the functional specification or SRS (System Requirement Specification) of the software product that you tested. Watch out - sometimes this might land you in legal trouble, with your employer dragging you to court for leaking strategic product information to the public via your resume. It is also a big TURN OFF for the reader - especially a recruiter who processes and sees thousands of resumes in a day.
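The scoring idea from point 1 can even be automated. Here is a toy Python sketch (the word list and the sample text are my own illustration - extend the list with your own pet peeves):

```python
import re

# Words that describe passive involvement rather than achievement.
WEAK_WORDS = ["involved", "participated", "responsible", "responsibility"]

def resume_score(text):
    """One negative mark for every weak-word occurrence in the resume text."""
    hits = 0
    for word in WEAK_WORDS:
        hits += len(re.findall(r"\b" + word + r"\b", text, re.IGNORECASE))
    return -hits

sample = ("Involved in testing module X. Was responsible for regression. "
          "Achieved 30% reduction in escaped defects.")
print(resume_score(sample))  # -2
```

The closer your score is to zero, the better - and the "Achieved..." sentence shows the kind of wording that costs nothing.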

Don't forget the thumb rule - one page of resume for every two years of experience. So a person with 8-10 years of experience should not have a resume that exceeds five pages. Short and crisp is better and easier to read.

Enough? Open your resume and sanitize it TODAY.
May God bless you with job offers and interview calls pouring all the way !!!


Update: 21 Nov 2007 -- I referred someone to this post and at the same time happened to read another related and useful post on the same topic ...

http://steve-yegge.blogspot.com/2007/09/ten-tips-for-slightly-less-awful-resume.html


Shrini

Thursday, October 27, 2005

Getting Buggy ... Some gyan about Bugs ...

This post is in response to a few typical interview questions posted in various user groups. I typically don't jump in and answer them, but in some cases I just cannot remain silent - some questions and answers make me speak up, as if they beg for answers. Here is one such occasion, and here is how I responded....

1. What is the difference between a bug and a defect: Many definitions float around; there is no simple, universally accepted definition for either term. When used in an informal environment, both "defect" and "bug" mean the same thing: some unwanted, unexpected behavior that bugs somebody who matters. This definition of a bug does not change with the phase of the SDLC. A bug is a bug is a bug. The same holds for a defect. I quote Cem Kaner, James Bach and Michael Bolton in this connection. Believe me, they say the same thing - and few would dare question their knowledge of the testing field. As Michael Bolton puts it: "I say that you may define "defect" in any way that you like, as long as the person that you're speaking with or writing to understands your definition."

Michael Bolton in a Google group post --

A bug is something that threatens the value of the product, or, if you like, a bug is something that bugs someone who matters. Both of these definitions come from James Bach. Your definition may differ. "We" depends on the context of the project. On a typical project, someone (the project manager) has the authority to determine whether something (a bug, failure, fault, defect, and symptom) is serious enough to merit attention. In my context, an intermittent problem is a bug if the project manager says it's a bug. James also wrote an article on intermittence in his blog; try http://blackbox.cs.fit.edu/blog/james


Following is an excerpt from Cem Kaner's blog. Note that according to him, using the word "defect" in a more formal context carries legal implications - if there is a defect in software, an end user can sue the producer of the software. He recommends the word "bug" as the more informal term.

Quote: Cem Kaner --

I have two objections to the use of the word defect.

(a) First, in use, the word "defect" is ambiguous. For example, as a matter of law, a product is dangerously defective if it behaves in a way that would be unexpected by a reasonable user and that behavior results in injury. This is a failure-level definition of "defect." Rather than trying to impose precision on a term that is going to remain ambiguous despite IEEE's best efforts, our technical language should allow for the ambiguity.

(b) Second, the use of the word "defect" has legal implications. While some people advocate that we should use the word "defect" to refer to "bugs", a bug-tracking database that contains frequent assertions of the form "X is a defect" may severely and unnecessarily damage the defendant software developer/publisher in court. In a suit based on an allegation that a product is defective (such as a breach of warranty suit, or a personal injury suit), the plaintiff must prove that the product is defective. If a problem with the program is labeled "defect" in the bug tracking system, that label is likely to convince the jury that the bug is a defect, even if a more thorough legal analysis would not result in classification of that particular problem as "defect" in the meaning of the legal system.

We should be cautious in the use of the word "defect", recognize that this word will be interpreted in multiple ways by technical and non-technical people, and recognize that a company's use of the word in its engineering documents might unreasonably expose that company to legal liability.

Unquote Cem Kaner.


2. Bug severity v/s priority - who assigns them: When developers have plenty of time and the bug arrival rate is lagging behind the fixing rate, nobody really cares about "priority" and, to some extent, "severity". Both are filtering mechanisms to select a few bugs from the whole lot so that only the important ones are fixed first. Severity is a way of grading bugs from the "bug impact" point of view, so that the tester can influence the fix - say, "this one is more serious and needs to be fixed first". After all, as very few people realize, the real value of a tester lies in getting a bug fixed, not in simply logging it. Severity is assigned by the tester and can be modified by the test lead if there is a real need. Thereafter it is in the developer's court. Developers use a rating called "priority" to pick the top bugs to fix. So priority is set by the dev lead in consultation with the program/project manager; sometimes even the client gets involved. This cannot happen without buy-in from the test lead. What I am describing is the IDEAL situation; in a mature test organization this happens. I have seen it happen (and been a party to it) in companies like Microsoft. In Microsoft (as in many other organizations), they use a process (a ceremony, in the Agile world) called "bug triage", where the dev lead, test lead and PM sit across the table with the list of bugs and deliberate on severity and priority. More often than not, the discussion is oriented more towards priority than severity. The bug triage meeting is a formal platform to change the severity and priority levels.

In most of the test groups I have seen, the severity rating unfortunately becomes a measure of the performance of a developer or tester - as in "this tester has logged 10 severity-1 bugs" or "the module developed by you had the maximum number of severity-1 bugs". But that is another big topic of debate.

Last but not least - all those who wanted to know about bugs but did not know whom to ask should read "Bug Advocacy" by Cem Kaner - the bible on bug management. You will never have any doubts about bugs for the rest of your life.

www.kaner.com/pdfs/bugadvoc.pdf

Buggy all the way, isn't it? I love to talk about bugs and the way they are managed...

Ideas and views are welcome...

Shrini

Wednesday, October 19, 2005

Testability: A way to enhance your product's Utility Value

Testability, to me, is a feature of the product that makes it testable (a synonym for "usable") in multiple contexts and ways. A mature testing process would always vouch for building testability right into the product at the design stage, when developers are more willing and accommodating towards "requests" from testers.

Typically, only bugs make developers listen to testers. Another important benefit of testability is that it makes test automation easier and more efficient. Testers doing automation need not spend hours and hours creating custom code to verify a test that could have been implemented as a test hook - say, a log file, a registry entry, a database record or a status bar text. Ask for a test hook.

Just to give an example - I was looking at an automation idea where the test was to kick off a batch file, wait for it to complete and then, based on the result of the batch file, let the automation script proceed. The problem the team was facing was how to make the script wait for the batch file to finish. One option given by a tool vendor was to use a wait command with a hard-coded wait period.

Here is where this idea of a testability feature struck me. I simply asked the developer to include a feature of creating an empty text file or registry entry to mark the end of batch processing. The developer happily included it, which made our "wait" task easier and more efficient than a dumb wait(100)-style command. Another example of a test hook is a command line interface or API to product features that are normally driven through the GUI. This eases automation script maintenance when the GUI is unstable or changing.
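The automation side of that marker-file hook can be sketched in a few lines of Python (the function and marker names are my own illustration, not the actual tool code we used):

```python
import os
import time

def wait_for_marker(marker_path, timeout=60.0, poll_interval=0.5):
    """Wait until the batch job drops its 'done' marker file.

    Returns as soon as the marker appears, instead of burning a fixed
    wait(100); raises TimeoutError if the job never finishes in time.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(marker_path):
            return True
        time.sleep(poll_interval)
    raise TimeoutError("batch job did not finish within %s seconds" % timeout)
```

The script proceeds the moment the marker appears, and a hung batch job fails the test loudly instead of silently passing after an arbitrary sleep.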

One more example: we were planning to automate a feature where the user fills out a form with lots of data, which is eventually submitted to the web server as an XML file. We wanted to simulate load on the web server by submitting a large number of such forms in a given span of time. When we asked the developer for help, he said that such a feature was not available. So we asked the development team for an API that would take the path to an XML file as a parameter and submit the contents of the file to the web server, simulating a form submission from the client end. This made the whole automation effort easy. Later on, this testability feature was extended to include many interesting test hooks that make the job of testing easier.
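A load driver built on top of such a submission API could look roughly like this Python sketch. The actual API is hypothetical here, so it is passed in as a callable (`submit_fn`) rather than hard-coded:

```python
import threading

def submit_forms(xml_paths, submit_fn, threads=5):
    """Submit many XML form files in parallel to put load on the server.

    `submit_fn(path)` wraps whatever submission API the development team
    exposes (an HTTP POST of the file contents, for instance); keeping it
    as a parameter makes the driver independent of that detail.
    """
    results = []
    lock = threading.Lock()

    def worker(chunk):
        for path in chunk:
            outcome = submit_fn(path)
            with lock:
                results.append(outcome)

    # Deal the files round-robin across the worker threads.
    chunks = [xml_paths[i::threads] for i in range(threads)]
    workers = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results
```

With the hook in place, ramping up load is just a matter of generating more XML files and raising the thread count - no GUI automation needed at all.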


Michael Hunter talks about testability on his blog - it makes for good reading.

Monday, October 17, 2005

Resume v/s Ad

Here are some points about resume writing that I collected from various sources. I am not sure about the last point, though - people (me too) do include their personal (contact) information in the resume so that they are "contactable".

I have been observing that people write resumes that are typically 8-10 pages. As a hiring manager, I mostly see the first 2-3 pages; if the profile has not attracted me by then, I will not read any further. My thumb rule is one page for every two years of experience. I have also observed that people write about their current and past employers at length - often about half a page per employer. This is a waste and makes your resume less readable and less attractive. Sell yourself in the resume, not your current or past employers.

The final point: treat your resume as an ad that you put on TV to market yourself in the job market. Decide what should go into such an ad. Do you think people sit and watch ads that are an hour long and lack focus?

Read on...

Resume Writing: Seven errors common to the average resume

  • Too wordy. A rĂ©sumĂ© should be one page in length (one side only), or two pages at the most. A rĂ©sumĂ© is primarily an introduction - in the same way an advertisement is primarily an introduction - and should be under conscious control every inch of the way. Basic outline: Position Desired; Summary of Qualifications; Education; Skills; and, Employment.
  • Contains salary requirement. This is a big mistake. If you list a salary requirement it may well appear, to someone who has yet to appreciate your real value, to be too high or too low, and you may never get the chance to explain or elaborate. The thing to do is first make a favorable impression, and evoke some corporate response. There will always be time later to negotiate your salary - after the company decides it likes you and wants you and you're in some kind of bargaining position. It may be that their offer will not require negotiation.
  • "Me-oriented". Excessive use of the words "me" or "I", and prominent use of phrases such as "I seek," "my objective," etc., are to be avoided. Employers want to know what you can do for them. You must lead off with, and elaborate on, your benefit to the employer; play up to what you think are the employer's objectives.
  • Assumes too much reader comprehension. This takes the form of listing and explaining numerous accomplishments, courses taken, etc., not necessarily related to your position objective.
  • Contains unnecessary and confusing information. (Different from being too wordy). You must be specific. Everything in your rĂ©sumĂ© should support and point to a single skill/expertise. In advertising, the simplest ad is best. No ad, no matter how high-powered, can sell several concepts at once. Neither can a rĂ©sumĂ©.
  • Stiff, formal language. Don't be flip, but make it readable. Aim for your audience and the people you want to impress. In short, communicate.
  • Includes personal information. Do not include any personal information. Name, home address, and home phone number - that's it.

Friday, October 14, 2005

Advice for budding software test professionals ...

Here are a few points from a post I made to the QTP Yahoo group, in response to a query for FAQs/interview questions on QTP. My main motivation for posting this is that most new entrants in this field do not know where to invest time and effort to learn the field of testing. Often they end up reading some FAQ or typical interview questions posted on some site and think that they have arrived. Software testing today suffers from a lack of education and awareness about "what it takes to be a software tester" and "how to successfully carry out testing and add value to the overall process of software development".

Read on .....

Here is my advice to all aspiring QTP or test engineers and professionals. These are lessons I learnt personally, and they are useful for any software professional who is serious about testing.

1. Do not look for shortcuts to learn and gain knowledge. Have a long-term plan to get good mileage out of this profession. FAQs etc. are good to read only for knowing the top line. To succeed in an interview, you have to win it from inside your heart: invest honestly in studying and then expect the fruits. Banking on FAQs, interview questions etc. may get you the job but will not keep you there.

2. Most important for a tester is to understand: what makes a good tester? How is he/she different from a developer? What value does the tester bring to the table? How do you find talent in testing and nurture it? How is testing different from QA or any flavor of process (CMM, Six Sigma), etc.?

3. Invest in sharpening your problem-solving and "thinking out of the box" abilities. Read good stuff on testing. Participate in conferences, discuss with test professionals in other companies, participate in activities at SPIN, etc. Solve puzzles (jigsaw or Shakuntala Devi). Never stop learning.

4. Sharpen your technology skills. Know about how the web works, DNS, networking, protocols, XML, web services, cryptography, databases, data warehousing, UNIX commands, the fundamentals of J2EE and .NET, system administration - the list is endless. Today testers are expected to know the basics. I take a lot of interviews for various positions, and most people do not have these basics. It is difficult to survive in the world of testing banking only on "automation tool" knowledge.

5. Learn programming languages like C# and Java, and scripting languages like Perl, Python, Unix shell etc. This will increase your utility value. Developers and PMs will respect you.

6. Improve your communication skills - take an English class. Improve your vocabulary. Read, read and read.
Most people I have seen ignore this important skill. They cannot write a paragraph on their own without spelling and grammatical mistakes. Make it a habit to learn a new word a day.

7. Read and write blogs (Google the term if you don't already know what a blog is). Here are a few blogs that I suggest for every test professional.

http://www.testingeducation.org/ - Cem Kaner's free testing courses.
http://www.testing.com/cgi-bin/blog - Brian Marick's site
http://blackbox.cs.fit.edu/blog/james/ - James Bach - highly respected visionary in testing.
http://www.qualitytree.com/index.html
http://blogs.msdn.com/micahel/ - Microsoft's famous Braidy Tester
http://www.developsense.com/blog.html - Michael Bolton - testing in plain English
http://www.stickyminds.com
http://www.kohl.ca/blog/ - Jonathan Kohl
http://www.io.com/~wazmo/blog/ - Bret Pettichord - test automation guru
and

my own blogs -
http://blogs.msdn.com/shrinik (my Microsoft blog - closed since I left that company)
http://shrinik.blogspot.com

Last but not least, be a person with a positive outlook on life. Believe in yourself; otherwise nobody else will believe in you.

All the best. Let us build the next generation of the test professionals' community and change the way the world does testing today.

Tuesday, August 09, 2005

Definitions of software testing - tracing the history

As I read more and more on testing and explore it, one thing that always fascinates me is the way in which "testing" is defined - starting with this one from Glenford Myers, "The process of executing a program or system with the intent of finding errors", to this one from James Bach, "Testing is questioning a product in order to evaluate it".

It seems quite a big journey. The move from a predominantly "bug finding" approach to an "evaluation based, information gathering" approach indicates a clear shift in thinking. In my view, these definitions are not mere "theoretical" definitions meant to be memorized and used in job interviews. They paint a picture of the thoughts, directions and perceptions about "testing" at their respective times. They are indicative of the levels of knowledge and the trends prevalent in each period.

I am currently collecting these definitions (with source and time period) in order to present a chronological study and account of how software testing has evolved since the days of Glenford Myers (book: "The Art of Software Testing", 1979) and Bill Hetzel (book: "Program Test Methods", 1973), through the days of Boris Beizer, Cem Kaner, James Bach, Bret Pettichord, Brian Marick and other visionaries in testing. Help me by sending interesting definitions of testing along with their source and time period - I will consolidate and post them on this blog.

Critique this one – “A software engineering discipline positioned strategically in SDLC to help developer, in all possible ways, to ship better quality software”.

I know, words like "help", "SDLC" and "strategic" in this definition are open for debate. The theme here is to "help the developer" - the poor guy who is nicely sandwiched between the "tester" and the "project manager" (with conflicting interests, most of the time) while we expect him to deliver "defect free" software on time and within budget - all the time!!!!

Monday, August 08, 2005

Jerry Weinberg ....

I was led to Jerry Weinberg's site by James Bach. I had heard of Jerry Weinberg in most software-testing-related discussions, with people often quoting him, but I was too lazy to google and find his site. James' blog post (titled "How to investigate intermittent problems") came in handy for locating it. I have just finished reading some stuff from his site. No wonder he is such a big hit in the software community. I especially enjoyed this humor piece related to the wisdom of dogs.....



“A German Shepherd Dog went to a telegraph office, took out a blank form and wrote: 'Woof, woof woof woof, Woof woof woof woof woof.'

The clerk looked at the paper and politely told the dog, 'There are only nine words here. You could send another 'woof' for the same price.'

'But that wouldn't make any sense at all,' replied the dog.“