- Self-driven, with a high level of inner drive for learning new things – no fear of the unknown.
- Spontaneous – thinks on their feet – good in emergency response.
- Agile and adaptable.
- Love for science (physics/chemistry), mathematics and philosophy.
- Love for problems and puzzles.
- Hunger for self-expression – writing, speaking.
- Organized skepticism – constantly challenging their own thoughts.
A tester driven by curiosity and the relentless question "what if?"
"My vote for the World’s Most Inquisitive Tester is Shrini Kulkarni" - James Bach
My LinkedIn Profile : http://www.linkedin.com/in/shrinik
For views, feedback - do mail me at shrinik@gmail.com
Monday, December 31, 2007
7 Habits of successful Testers
Saturday, December 29, 2007
Exploratory Testing challenged - Part I
Here is a conversation I had recently with one of my managers. We were discussing the merits and demerits of “Exploratory Testing” (ET) as a testing approach. This manager is a die-hard fan of “scripted testing” and apparently swears by the “Quality/Factory” school of testing.
Me: We should propose ET for this
Manager: Why? What are the benefits?
Me: ET will extend test coverage over traditional scripted testing; you will be able to discover those bugs that are not “catchable” by scripted tests.
Manager: Why would our test scripts fail to find those bugs? I trust that our specifications are exhaustive and that our scripts are thoroughly reviewed and signed off by the business experts. The test scripts should be able to find nearly all the bugs that I consider important to fix.
Me: Our scripts are based on specifications, which are one narrow, fallible source of information about “intended software behavior”. Since specifications are written in English, there can be misinterpretations. Since our specifications are fallible, so are our scripts. There is a human limitation in understanding and interpreting specifications (objectively) and designing test cases that cover the entire test space. So there is a good possibility that scripts will not find all the bugs that could potentially be discovered.
Manager: What are other benefits of ET?
Me: ET extends test coverage and provides enhanced bug-finding capability over scripted testing – especially by nullifying the “pesticide paradox” associated with scripts.
Manager: How? What is this pesticide paradox?
Me: Just as pests in soil acquire immunity to a specific pesticide over a period of repeated application, and fail to die or show up, software bugs become immune to the repeated application of specific test cases. Over time, developers become aware of the test cases being executed and *specifically* test to make sure that the new build of the application is just good enough to pass those tests. As a result, there is a “false” sense of stability and quality in the application.
Manager: So... test cases wear out … why so?
Me: Test cases wear out because they have no built-in mechanism to adapt themselves to a changing product environment. Test scripts cannot think, infer, improvise or get frustrated the way intelligent human testers do. Hence test scripts cannot keep on finding bugs the way a human tester can.
Manager: What else …?
Me: Quoting James Bach – “The scripted approach to testing attempts to mechanize the test process by taking test ideas out of a test designer's head and putting them on paper. There's a lot of value in that way of testing. But exploratory testers take the view that writing down test scripts and following them tends to disrupt the intellectual processes that make testers able to find important problems quickly.”
Manager: What are other attributes of ET?
Me: Cem Kaner lists these attributes of exploratory testing:
- Interactivity
- Concurrence of cognition and execution
- Creativity
- Drive towards fast results
- De-emphasis of archived testing materials
Manager: I heard another related term, “ad hoc testing”. Is this similar to ET?
Me: Yes and no. Yes, in that ad hoc testing is a well-known predecessor of ET. Ad hoc testing normally refers to a process of improvised, impromptu bug searching; by definition, anyone can do ad hoc testing. Cem Kaner coined the term “exploratory testing”, in “Testing Computer Software”, around the early ’80s, because there was a lot of confusion about this kind of “impromptu” testing that does not rely on predefined scripts. “Exploratory testing” refers to a sophisticated, thoughtful approach to ad hoc testing.
Manager: What is specific about ET vis-à-vis scripted testing?
Me: ET is more of an investigative approach, whereas scripted testing is more “validation” or “conformance” oriented. In scripted testing, the tests, sequences, data, variations etc. are pre-defined, whereas in ET, test design, execution and learning all happen more or less at the same time.
Manager: I heard that ET requires “experience” and “domain knowledge”. Can an average tester do good ET?
Me: I am not sure how you define “average tester”, “experience” and “domain knowledge”. I believe ET requires skills such as “questioning”, “modeling” and “critical thinking”, among others. Domain knowledge certainly helps in ET, but I do not consider it mandatory.
Manager: Fair enough … what types of bugs can ET discover?
Me: It depends upon what kind of bugs you want to discover. ET can be performed in controlled, small time-boxed sessions with specific charters to explore a specific feature of the application. ET can be configured to cater to specific investigative missions. You could use a few ET sessions to develop software product documentation, or to analyse and isolate performance test results.
Manager: I notice that all along you have argued like a “purist” in testing. I am more of a business owner; I need to relate every dollar I spend to the return or benefit it gives.
Me: No … I would not call myself a purist, at least not here. I bring plenty of business considerations into my recommendations related to testing. ET provides a way of optimizing testing effort through time-boxed sessions with charters. Depending upon the nature of the information the stakeholders are looking for, ET sessions can be planned quite precisely.
Manager: …
Me: Let us say you have 5000 scripts for an application and they pass all the time. Would you be worried?
Manager: Ummmm … it depends upon the context. But mostly I would not worry about it. I would interpret that as a sign of enhanced maturity in those specific application areas. It is quite possible that there are no bugs I should worry about – the “scripts passing” is a “confirmation” of that fact.
Me: What if this trend continues and the next 5 cycles of testing also do not produce any bugs? Would you be worried then?
Manager: No … not at all. In fact, I would reduce the number of scripts being executed to, say, half – 2500 – as the application has become stable. That is an indication for me to "cut down" the testing effort; I could possibly look at automation as well.
Me: Here is a twist: what if ALL your scripts are passing but you are seeing bugs (either detected by other means or by the customers)? Would you not doubt your test cases?
Manager: It depends upon the kinds of bugs I see … If I were to doubt something or someone at all, I would doubt the test results, the testers’ integrity and project management in general. The test scripts are less likely to be at “fault”. That would be a process issue – we would need to tighten the process.
Me: OK … what corrective action would you take then? What steps would you follow?
Manager: I would immediately order a thorough root cause analysis of the defects to identify what is causing them in the first place, and tighten the development, configuration and deployment processes. I would strictly enforce the (improved) processes in testing and mandate that the testers correctly and meticulously execute the scripts and report the relevant results accurately.
Me: What if you still find bugs outside your scripts?
Manager: That would be a “hypothetical question” – not likely to happen. In any case, my focus would be to improve the testing process and strengthen the test scripts. Again, if you are still finding bugs, they would probably be of the “obscure” type – I might not have to bother about them …
Manager: Good … I am still not convinced that ET can give bang for the buck. As someone interested in the predictability and repeatability of testing, I want a test process that can scale.
Me: Ummmmm … OK … what is a testing process? Is it something that “actually happens” or “something that is intended”? Are repeatability and predictability all you care about?
Manager: You are too much … there is a limit to asking questions … I don’t think this discussion is leading anywhere good … let us talk about it some other time. [walks out of the room]
I will continue my discussion with this manager and post the views and the continued discussion in Part 2 ....
A very happy new year to all …
Shrini
Friday, December 14, 2007
Advantages of "highly repeatable tests" ...
I stumbled upon this GEM from James Bach.
“Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else’s footprints minimizes the chance of being blown up by a land mine.”
- James Bach, Test Automation Snake Oil, 1996
So, if you have an excellent set of "highly" repeatable tests – in terms of execution (an automatable sequence of actions) and in terms of results (pass or fail) – congratulations: you have successfully managed to find a set of test cases or scenarios where the software is least likely to fail, meaning you will not (or do not expect to) see bugs/problems.
But ... wait .. is that your testing mission?
I can hear someone yelling from behind me: “Yes .... that is what we expect in regression testing. But occasionally we do find a bug, when a developer made a mistake that was caught, or when a tester [by mistake] deviated from the scripted test sequence [a process issue or discipline issue].”
What do you say?
Shrini
Wednesday, November 21, 2007
Dr Kaner on Software Metrics ...
A rare insight into the metrics world ....
http://www.artima.com/forums/flat.jsp?forum=106&thread=218013&start=30#287847
This is in response to a thread by Alberto Savoia
http://www.artima.com/forums/flat.jsp?forum=106&thread=218013&start=0&msRange=15
I keep looking for such insightful replies and posts ... I hope readers are enjoying them and benefiting from them ...
Shrini
Saturday, November 17, 2007
Further on Testing as a career ..
Jeff Fry's post is itself a very good one that goes into detail about testing, careers, enjoyment and a few suggestions for what testers should study and read.
Following are Steve Sandvik's comments, which are worth "consideration":
"...Yes, it may be my first formal job testing software, but as so many people in testing like to point out, nearly any experience or learning has some translation to testing, if you know how to apply it. 15 years of power plant operation and maintenance experience provides an awfully large number of troubleshooting and investigation opportunities.
Identify the fields outside of your industry where, for lack of a better description, good forensic skills and an agile mind (not to be confused with an Agile mind) are at a premium. Industrial equipment field service, process and generation operations, and auditing are a few I can think of off the top of my head. "
And these comments about "in-born" testing qualities:
" ...I’m not sure whether truly great testers are born or made, but I think there’s at least a component of most of them that falls firmly into the born camp–in the same way most writers would write whether or not they were paid to do it, I suspect that most people who seriously take up testing as a career for its own sake rather than as a stepping stone to something else approach the world in a certain way even when they’re not formally testing. I know I approach things from what seems to me to be a testing perspective most of the time."
I am filling my blog with a few interesting career-related suggestions ... I hope my readers are enjoying them ...
Shrini
Friday, November 16, 2007
Michael Bolton on "Software Bugs"
Here are his views about "software bugs". (Again, these are Michael's views in response to a question on the forum about bugs – his whole reply stands on its own, I believe.)
Personally, this is the best advice I have ever seen with respect to how testers handle bugs and how that decision impacts other stakeholders ... Read on ...
[Michael Bolton : Quote]
When I'm a tester, I'm concerned about trying to drive the project. As a project manager, that was my job. As a tester, my job was--and is--to ferret out information of any kind about the application that helps the project manager to achieve her goals. For me, this has a couple of implications.
First, I don't merely observe the product; I have to observe the things around the product--the platform, the systems with which the product interacts, the business processes, the anticipated or unanticipated users of the product, and so on. I try to be leery of recommendations to fix specific bugs, because in the past I spent too long going the other way--believing that I'm running the project when I'm not. (I'm arguably not a multimillionaire at least in part because in one company where I worked, company project managers had abdicated quality decisions to the testers and developers, which meant that we had a great, largely bug-free product that missed its market window by about a year.)
Second, there is one particular kind of bug that I will try to sell: bugs that make testing harder or slow it down. My goal is to reveal information about the product. Even if we do great testing, there are some things that we won't know about the product. Things that impinge on testing pose the risk of us knowing even less than we would otherwise.
So, with at least one eye firmly fixed on the context and the best judgement I can muster, I will advocate strongly
- to fix immediately bugs that block deeper or broader testing;
- to add testability (logging, scriptable interfaces, configurability, controllability, installability) to the product such that we can increase test coverage;
- to fix immediately trivial-looking bugs that add distraction and noise to the project effort--for example, typos that absolutely everyone will notice and report, such that the reporting and processing of the report will take time away from other coverage.
However, I also remind myself that we testers are vulnerable to representativeness bias--bugs that look trivially simple might be hard to fix, bugs that seem gnarly might be insignificant to the end-user, bugs that look hideously complex might have easy fixes, and so on. So I try to tell the absolute best story that I can about the bug and its worst ramifications, but I also acknowledge that I might not have the whole story about the technical or business reasons to fix or to defer a bug.
[Michael Bolton: Unquote]
Shrini
Dr. Cem Kaner on Software Testing as a Career
There were actually two replies by Dr Kaner – I am taking the liberty of rearranging a few paragraphs from both replies in order to give a specific flow to the whole thing. The purpose of this post is to share these words of wisdom and experience with all those who would like to pursue a career in “software testing”.
[Dr Kaner: Quote]
Let me start by distinguishing between a CAREER and a JOB. A CAREER involves a long-term, intentional focus on a field or type of work. A JOB is a temporary assignment with a particular employer. My career is focused on improving the satisfaction and safety of software users and developers. My current job is as a professor. I have also held jobs as a tester, test manager, programmer, human factors analyst, software development manager, technical publications manager, development director, organization development consultant, salesperson, software development consultant, and attorney focused on the law of software quality. Each of these has addressed different aspects of what has been, to me, the same career. People define their own careers. Many people define their career in terms of traditional categories (programmer, tester, lawyer, teacher), but the choice belongs to the person, not the category.
When you make a choice ("I am an X" or "My career is X"), that choice is both inclusive (Xness is in your path) and exclusive (if Yness is not part of Xness, and Xness is not part of Yness, then "I am X" means also "I am not Y"). When someone defines their career as "tester," I think that definition is too narrow.
I see software development as a bundle of coordinated tasks, including programming, design, testing, usability evaluation, modeling, documentation, development of associated training, project management, etc. Very few people would do all of these as part of the same job. Fewer would do them all on the same project or in the same week. But working at one company as a tester and at another company later as a programmer is not inconsistent with calling myself a software developer at either/both companies.
I don't generally encourage my students to pursue software testing AS A CAREER. They can make that decision later, after they have more experience. I prefer to encourage them to try SOFTWARE DEVELOPMENT as a career -- to me, development includes testing. And that they take a job doing serious, skilled testing as PART of that career. Most of the best testers I know have significant experience outside of testing and apply that experience to what they do as testers or test managers.
I think that testing is a fine choice for a first job--for some people--but that doesn't make it a first career. It becomes a first career only for the person who says, "This, testing, is my career." I don't recommend that people make a decision to narrow their career that much, early in their career. Let them explore the field more, in their next few jobs, before they lock themselves into something.
I think that some people are good at both programming and testing, some people are good at both writing and testing, some people are good at design and testing, very few people are good at every software development task. So I think it is inappropriate to say that someone shouldn't be considered a software developer because they are good at some aspects of development but not others. Most (all?) of the hidebound process-pushers that I know in the field have never done serious professional work outside of testing. From their narrow perspective, they think they know more about how to manage a development project than the people who retain their testing services. Instead of trying out their ideas as project managers (where they will be accountable if they fail) these process advocates undermine the projects they work on by trying to control things they don't understand with rigid policies and procedures, standards and superstitions, whose costs and impacts are beyond their imagination. We have too many of these people in our field. We need more people who have a broader view of the tremendous value that testing can offer--within its limited role--and are glad to embrace a service-provider role that provides that value.
I think some fresh engineers should start their career with a job in programming, others with testing, others writing, others with human factors assessment, others with configuration management, others with data analysis. I think that choice should depend on what motivates the particular person.
What makes testing worth spending time on--as a job and maybe as a career?
We are professional investigators. Rather than building things, we find ways to answer difficult questions about the quality of the products or services we test. Our job--if we choose to do it well--requires us to constantly learn new things, about the product, its market, its implementation, its risks, its usability, etc. To learn these, we are constantly developing new skills and new cognitive structures in a diversity of fields. It also requires us to communicate well to a diverse group of people. We ALSO get to build things (test tools), but very often, we build to our own designs, which can be more satisfying than building an application that does something we'll never personally do (or want to do). Learning to do good software testing requires learning to do critical thinking well, and to back it up with empirical research. Not everyone will like to do testing. Not every engineer or programmer will have the skills or the interest to do professional-level testing. But for those of us who enjoy critical thinking, experimentation, and keeping the human relevance of what we do always in mind, there is nothing else like it in software development (except, for some people on some projects, requirements analysis backed with rapid prototyping and prototype-based research).
[Dr Kaner: Unquote]
Shrini
Monday, November 05, 2007
Tester's world of Possibilities
The future belongs to those who see possibilities before they become obvious. -- John Sculley
Recently, I challenged a fellow tester about testing the “Notepad -> File -> Save As” functionality. I asked him to zoom in on (focus on) text files only, investigate file names (say, the base file name – the one without the extension) and come up with testing ideas.
He started with a domain testing approach and said that the file names that can be supplied to the program could be classified as:
Valid file names – those for which file creation succeeds.
Invalid file names – those for which file creation fails and no new file gets created.
Then he went on thinking about possible values in terms of these “classes”. I argued with him about his classification – why think only about valid and invalid file names? Can you think of other possibilities …?
A few examples that I gave are –
What if the file is created but it cannot be opened for viewing?
What if the file is created but is read-only?
What if the file is created but the Notepad application crashes during file creation?
What if the file is created but Notepad crashes while opening it?
What if the file is created but it takes 10 minutes to load?
What if the file is created but cannot be found using Windows Search?
And so on …
My friend said, “Well, all of these can be considered invalid file names” … To that I said, “But as per your initial classification, no file gets created for an invalid name ….!!!!”
My friend continued, “These are all possibilities, but not real values …”. I said, “That is exactly my point. As a curious tester, I think about all the possibilities and then investigate those possibilities. My work starts from the point where others wash their hands saying ‘We are done’.”
What do you think are the possibilities when a file name is entered in the Notepad -> File -> Save As dialog? Keep your investigation along the lines of probing the file name parameter … Can you describe the "big picture"?
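To make the probing concrete, here is a minimal Python sketch of the idea – my own hypothetical illustration, not part of the original exercise; the candidate names and observation channels are assumptions, and a real session would grow many more of both. The point is that each name is checked against several independent outcomes, not just "created or not":

```python
import os
import tempfile

# Hypothetical candidate base names, each probing a different risk:
# a plain name, a Windows reserved device name ("con"), a very long name,
# a trailing dot, surrounding spaces, and a non-ASCII name.
CANDIDATES = ["report", "con", "a" * 255, "file.", "  spaces  ", "naïve"]

def probe(base_name, directory):
    """Try to create <base_name>.txt and observe several independent outcomes."""
    path = os.path.join(directory, base_name + ".txt")
    outcome = {"name": repr(base_name), "created": False, "readable": False}
    try:
        with open(path, "w", encoding="utf-8") as f:
            f.write("hello")
        outcome["created"] = True
        with open(path, "r", encoding="utf-8") as f:
            outcome["readable"] = (f.read() == "hello")
    except OSError as error:
        outcome["error"] = str(error)  # creation failed -- also worth reporting
    return outcome

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        for name in CANDIDATES:
            print(probe(name, tmp))
```

Each row of output is a small investigation: a "created but not readable" result, for instance, falls into neither of my friend's two classes.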
Shrini
Tuesday, October 23, 2007
100 Questions or 100/10000 Test cases in 20 minutes …
If every question is a test case, or represents, say, 10 test cases, can you create 100-1000 test cases in, say, 20 minutes? Any James Bach student, context-driven tester or rapid tester will demonstrate that it is possible ….
Without any KA/KT, training or domain knowledge, and without pages of documentation …? Plain vanilla testing at its best. Unbelievable, right?
"That is cheating!" you may scream. "That is not a test case. Where are the steps? Where is the detailed information about the application? Can a novice, low-skilled tester use this? Can this be used for the next five years? Can it be automated and run at night while people are sleeping, giving the results in the morning (in a plain pass/fail way)?"
The answer is "No".
Please note: while we create lots of "frills" or decoration around a real test and call it a test case, at its core a test is a question that we ask the program, and the program responds to it with one or more answers. Some of these answers we can observe, assess and report, while lots of others go unnoticed. And while this happens, the environment or platform also responds to the question.
Questioning is a key attribute of a skilled tester. Questioning is also an important aspect of learning. Unfortunately, right from our childhood (remember your father and mother saying "this kid is too much – asks too many questions. You will know when you grow up"!!!), through our education and now on the job, questioning is not encouraged.
Why?
- Questioning is considered disrespect or contempt
- Questioning is seen as indiscipline
- Questioning is disturbing
- Questioning is embarrassing when the answer is not available
- Questioning is sometimes considered silly and stupid
- Intelligent people do not respond to silly questions
- Questioning at times makes everyone think – and that stalls progress in some cases
- Questioning in a group is considered bad
- The questioner is labeled a "trouble creator"
- Question = trouble, more work, roadblock
- "Oh gosh, we have not thought about this at all … this is terrible, what do we do now?" is not a response that comes easily to a question
Today’s testers are forced to follow processes, documents, checklists and other “standard” things. If one were to follow the kind of thinking that Pradeep displayed, testing could happen with minimal information – a lot can happen in a short time. I have heard of testers who cannot start testing (or test design) until they get the specification, training and other supporting material. This is a damaging trend for the testing profession.
As the famous punch-line of Café Coffee Day (a popular chain of coffee joints in India) goes – “A lot can happen over coffee” – can I say, “A lot of testing can happen in 20 minutes for any application”?
Just give me the *stuff* to test … I will flood you with queries that can potentially lead (if investigated and answered) to an arsenal of information about the application under test …
Shrini
Sunday, October 07, 2007
Types of Equivalence: Equivalence Class Partitioning - II
Here is ECP in a nutshell – “Group a set of tests or data supplied to an application. Assert that all the tests/data belonging to a group will teach you the ‘same thing’ (about application behavior). Hence it is ‘sufficient’ to use only one value/test from the group.”
Fundamental to ECP is the concept of “equivalence”. Most authors or proponents of this technique give examples of date and integer fields and demonstrate the identification of classes and equivalence. For example, if you consider a date field in a “NextDate” program, using the “generally accepted rules” governing dates in the Gregorian calendar, you can identify some classes – all the dates in the month of January can be considered equivalent (except the first and last day of January and the first and last month of the century, which are boundaries). These “canned” classes appear to be applicable to every application that has a date field in a “next date” function. Another example is a field of integers (1-100) – most authors mention 2-99 as one equivalence class, meaning all numbers in the range 2-99 are treated “alike”.
I would call such equivalence – the kind that can be arrived at without knowing anything about the application, its logic or its programmatic implementation details – “universal equivalence”. It is easier to explain the concept of ECP using “universal equivalence”; date and integer fields are the most popular examples. But I see a danger here – the way ECP is explained using “universal equivalence” leaves out lots of key details, such as the basis for the equivalence.
What are the other forms of equivalence?
Functional logic equivalence – Consider an “Age (1-150)” field. The application logic might enforce that the age range 1-16 is considered one equivalence class (Kids), with others like 17-45 (Adults) and 60-99 (Senior Citizens). This kind of equivalence is very straightforward and easy to derive; often the specifications help us arrive at such equivalence classes. This is where the classic examples of “valid” and “invalid” EQ classes seem to have originated.
If one were to go by pure functional logic equivalence, it would be sufficient to model the Age parameter as having three EQ classes, and hence one value taken from each of these classes (3 in all) would provide “complete” test coverage from an ECP perspective.
Dr Cem Kaner calls this “specified equivalence”.
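Here is a minimal sketch of specified equivalence for the Age field above. The class names and ranges are the illustrative ones from the example (plus two assumed invalid classes of my own), not a definitive partitioning:

```python
from dataclasses import dataclass

@dataclass
class EqClass:
    name: str
    low: int
    high: int

    def representative(self):
        # ECP's core assumption: any one member of the class teaches us
        # the same thing as any other, so a single value "covers" it.
        return (self.low + self.high) // 2

# Specified (functional-logic) classes for Age (1-150), per the example above,
# plus two assumed invalid classes outside the specified range.
AGE_CLASSES = [
    EqClass("kids", 1, 16),
    EqClass("adults", 17, 45),
    EqClass("seniors", 60, 99),
    EqClass("invalid-low", -100, 0),
    EqClass("invalid-high", 151, 300),
]

for c in AGE_CLASSES:
    print(f"{c.name}: test with age = {c.representative()}")
```

Note how the model itself exposes questions: what happens at ages 46-59 and 100-150, which the "specified" classes do not cover?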
Implementation equivalence – This is where one dives deep into how the data is processed (validated, accepted, rejected), passed around (within application components) and eventually stored or discarded after use. Here we would talk about the programming language (data types), the software platform (the OS and other related programs) and the hardware platform.
Dr Kaner identifies two further kinds of equivalence – “risk-based” and “subjective”. If the equivalence is in the eyes of the tester (“these two tests appear to teach me the same thing”), it is called “subjective” equivalence. If a notion of equivalence is established targeting a specific class of risks or errors, it is referred to as “risk-based” equivalence.
Thus, one way to apply ECP effectively is to start with universal equivalence and keep refining the set of EQ classes as we go deeper into the application and platform (adding, modifying and deleting classes and their definitions). Implementation equivalence, seemingly the lowest or last in the chain, overrides the class definitions determined by the higher levels of equivalence (universal or functional-logic).
One question to spice up the discussion – Is ECP a black box technique?
Yes, if we restrict ourselves to “universal” and “functional logic” equivalence.
No, if we dive deep into the code of the application and look around at the platform (software and hardware).
What do you think?
[ Update ]
ECP attempts "simplify" a big picture (data domain with infinite set of possible values"). When attempting to apply ECP for a data variable, best starting point would be "what is that big picture I am trying simplify using ECP"? This is a top-down approach - model, understand, analyse, hypothesize the big picture then go to next level and then think about EQ. classes. I have people mostly approaching this from "bottom up" approach - think about valid and invalid classes first (or even actual values) then if possible think about the big picture.
Which approach you think is a useful one to start with?
BTW, there is "Equivalance principle" by Einstein related to theory of relativity. Can I say equivalence as applicable software tests is "relative" in nature?
Shrini
Sunday, September 30, 2007
More on definition of Test Automation ...
(You would also find some debate about "model-based testing" in the comments.)
Test automation involves two things – the design and the execution of automated tests ...
Design: an act of translating some question that one asks about a feature of a software application into programmable instructions, so that the question being asked is modeled with reasonable accuracy (courtesy: Michael Bolton). In simple words: translating a testing question into a program.
Execution: the use of a machine (hence a set of computer programs) to support any aspect of testing, with testing at the center of the whole scheme of things.
(derived from the definitions by James Bach and Michael B)
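As a minimal sketch of the "design" half – my own assumed example, not Michael's or James's – take the testing question "if I save some text to a file and read it back, do I get the same text?" and translate it into a program. The program only models the question; deciding what a surprising answer means is still a human's job:

```python
import os
import tempfile

def round_trip_ok(text):
    """Model of the question: does the text survive a save/load round trip?"""
    fd, path = tempfile.mkstemp(suffix=".txt")
    os.close(fd)
    try:
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)
        with open(path, "r", encoding="utf-8") as f:
            return f.read() == text
    finally:
        os.remove(path)

print(round_trip_ok("hello"))            # True -- as expected
print(round_trip_ok("line1\r\nline2"))   # may be False: newline translation!
```

The second call is the interesting one: a False here is not a verdict but a new question (about newline handling) that only a thinking tester can pursue.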
As I continue to write about definitions and terminology around test automation, let me reiterate – nothing called "Automated Testing" exists in this world ....
Shrini
Saturday, September 29, 2007
Simple things and me ...
Consider the following –
Software project status – Red, Green and Yellow
Project size – Small, Medium, Big
Test cases – Simple, Medium, Complex
Testing – gather the requirements, write the test cases (sometimes even "prepare" them), execute them and check if they pass; if they pass, report and go home; if they fail, log a bug and go home
Automation – create an automated script using a state-of-the-art automation tool. Store the script in a test management tool and execute it from there. See the result logged automatically. If the script fails (I mean, reports a failure), another script automatically logs the bug
Knowledge transfer – as simple as a fund transfer in a bank. Transfer the relevant documents and the subject is – transferred. If there are clarifications, there is always our best friend – the "issue list", "action item tracker", "clarification list" – many names, but all are one
Knowledge acquisition – as simple as one country invading and acquiring another. Just acquire – by force
What is common to each one of these? Simplicity.
I become restless when
- People tell me about testing project status in terms of Red, Green or Yellow.
- People approach testing as a sequence of actions – test design, execution, bug logging and regression.
- People describe a bunch of test cases or software features as Simple, Medium or Complex – the SMC model.
- Testers get classified as great or lousy based on the number of test cases executed or the number (not the type) of bugs logged.
To me, all of these are simplifications of some complex things that we are trying to understand. My frustration is: "How can things be so simple?" "There must be something hidden behind this simple thing."
Imagine if Newton, when the apple fell on his head, had thought: "No big deal – it had to fall, so it fell down. Gosh, my head is aching. Why did I sit below this tree?" The same applies to Archimedes. Jump out of the bathtub and run …?
While thinking about the simplicity of the things we deal with helps us get started, it is important to understand that that is just a beginning.
While testers and most managers are happy with such simplifications, a skilled tester is always wary of them and often attempts to find the loopholes in simplified models, notions, beliefs, descriptions etc. …
Nature's simplest things have the greatest and deepest mysteries hidden inside them. Exploring them would be a fascinating journey – we just need a vehicle capable of moving into it: our imagination and curiosity. Today's skilled testers are blessed with this imagination and curiosity. The journey has just begun …
Do you think this post has horribly and outrageously simplified the seriousness and intent behind it? Well, I cannot be too serious …
Thursday, September 27, 2007
Automation Dreams - Thinking about the END in the beginning ...
- Napoleon Hill
Thinking about the "end" helps me in most cases, such as planning, checking and carrying out various day-to-day tasks: performing 4-5 interconnected tasks, leaving home for a week- or month-long journey, arranging my daughter's birthday etc. Thinking about the "END" while you are about to begin a task can help you visualize the entire stretch of the journey from start to end and anticipate the various milestones, problems and as many details as possible. This can prove to be a powerful mind-modeling technique if you train yourself to visualize the END and work backward from there to the beginning.
Let us apply this to Test Automation …..
- How do you describe successful automation?
- How can one describe steady-state automation in deployment?
- Can you trace the journey from the beginning of an automation initiative to the end, where your automation has become "obsolete" or reached a steady state?
- Is there any END to test automation as a software project?
Automation initiatives have end goals to achieve – cut testing cost by a certain percentage, increase test coverage by a certain percentage etc. In a way, every automation project has some dreams to realize. Can we describe those automation dreams?
Let me give it a try ….
I will have an automation suite that runs "unattended" for up to 8 hours. (Length of execution)
My automation suite covers 40% of my regression tests for product X. (Automation coverage)
I will have an automation suite whose results I can trust the most, or: when my automated test fails, I am sure it is a bug. (Reliability and trust)
I have 40% of my test cases automated; hence my testing effort from now on will be 40% less. (You can dream, and these are DREAMS ….)
My smoke test suite is automated – I can now take code changes more frequently, even when there is a time crunch.
And so on ..
What are your automation dreams? Can you describe them?
Thursday, September 13, 2007
Physicians, Surgeons, X Ray Lab Technicians ....
“We would like our business users, domain experts and subject matter experts to write automated scripts (tests?)”
“Our next-generation automation tool allows persons with ZERO programming and testing knowledge to generate automation scripts in MINUTES”
Think about it …. Would you ask a physician (medical doctor) or a surgeon to work as an X-ray or ultrasound lab technician, and vice versa? Or an anesthetist to perform a surgical operation?
Every professional is known for certain core competencies. The best bet is to utilize each one according to their strengths ….
Business users know the business domain, while automation engineers and testers know programming and software testing.
IMHO, asking business users to do “mainstream” testing and automation is a sure recipe for “failure”.
Any views? Do you think the analogy presented here is logical?
More as I hear from you ….
Shrini
Tuesday, September 11, 2007
Inattentional Blindness ...
Sajjd mentions that "exploratory testing is supposed to be better at minimizing inattentional blindness". This kicked off a thread of thinking in my mind: why might that be so? What elements of ET help one minimize inattentional blindness?
Let me take a guess and answer --
Maybe it is the "thinking between alternating polarities" while doing ET – doing vs explaining, fast vs slow, reading vs doing, focusing vs defocusing. In my opinion, inattentional blindness happens due to "heavy" focus on one or more "atomic" aspects of the bigger object under observation. One possible solution is to defocus as often as you can.
A related phrase (a kind of antonym) I use is "your eyes will see what you would like to see". This is a more dominant theme in scripted testing, where a tester is "pre-programmed" to observe only the expected results.
Here is the Wikipedia page: http://en.wikipedia.org/wiki/Inattentional_blindness
Do not forget to read James Bach's comments for this post -
"Testing is a lot like fishing. You try to create conditions that maximize your chances for catching something tasty. Exploratory testing is a little more like fishing, because just as with real life fishermen ......"
Shrini
Monday, August 27, 2007
Presenting a tutorial @ QAI STC 2007
The details of the conference tutorials are as follows:
http://www.qaiasia.com/Conferences/STC_conference_2007/tutorials_kulkarni.htm
http://www.qaiasia.com/Conferences/STC_conference_2007/Tutorials.htm
This half-day tutorial has been created out of my last few years of "on the floor" experience managing automation projects – right from conception through UAT and deployment. I hope to expand the material into a day-long program ..
See if you can make it and share your thoughts ...
Sorry for the last-minute announcement .. :(
Shrini
Friday, July 27, 2007
A mystery called Automated Testing ...
- Answers.com
I hope you have heard the phrase “Automated Testing” many times (in some places I have even heard “Automatic Testing”).
Have you ever thought about what it means? Let us dig a little and try to figure out what each term in the above phrase *might* mean.
We have two words in the above phrase – “Automated” (or “Automatic”) and “Testing”. Let us explore the possible meanings of these terms ….
(A) Automated – something that happens without any human intervention; something that is done by a machine/software program, or by itself.
Answers.com says -
"automated" (adj )
Definition: made or done by a machine
Antonyms: by hand, manual
Automation (from ancient Greek: "self-dictated")
The word "testing" has many meanings – I present a few contrasting definitions:
(B) Testing is an act of technical investigation performed on behalf of stakeholders in order to reveal quality-related information. It is an act of questioning – an infinite search for problems; problems that can annoy/irritate/destroy/damage a stakeholder.
(C) Testing is an act of executing a test case derived from a specification and verifying whether the test case passes or fails.
(D) Testing is an act of confirming that the software behaves in the way prescribed by the underlying specification/design.
(E) Testing proves that the software works as desired.
Now think about constructing the meaning of the phrase “Automated Testing” from the definitions (A), (B), (C), (D), (E) above …
Automated Testing = (A) + (B) – Possible? I believe not!!!
A computer/program cannot question, improvise, learn, adapt, think or empathize.
Automated Testing = (A) + (C) – Seems possible, but in a narrow way. Using another program (an automation tool), one can execute an automated test … but that is only part of the whole story – what about test design? Defect investigation? And a host of other activities that fall under the testing umbrella …
Automated Testing = (A) + (D)
Automated Testing = (A) + (E)
Both of the above seem “not possible”, as checking or proving that software works as intended, or as per a spec (written in English – hence open to interpretation in an infinite number of ways), using “automation” alone is not practically possible.
So … think twice before using the term “Automated Testing” – nothing really EXISTS in this world of testing called as such …
Challenge me … think of ways you can justify the use of “automated testing” – come up with definitions of “automation” and “testing” and link them!!!
BTW what do you think about "automated tests" ??
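For what it is worth, here is a minimal sketch – an assumed example of mine, not anyone's definition – of the narrow (A) + (C) sense in which an "automated test" honestly exists: a machine executing a pre-designed check and comparing one observed output against one expected value, and nothing more:

```python
def add(a, b):
    """Stand-in for the application under test."""
    return a + b

def automated_check():
    # The question was designed by a human, once; the machine only re-asks it.
    expected = 4
    actual = add(2, 2)
    return "PASS" if actual == expected else "FAIL"

# The machine reports PASS, but it says nothing about the questions
# nobody encoded: overflow, types, performance, usability ...
print(automated_check())
```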
Shrini
Tuesday, July 10, 2007
A Blogger Bug
The title field is "non-editable" – hence this post and my previous posts are going out without any title ...
I am investigating this -- any clues?
Update, 13th July: The title field becomes editable when I edit the post with the blank title (this is one of the ways I figured out – quick edit from the blog).
Shrini
Regression Testing and Insanity ....
"Insanity is doing the same thing over and over again and expecting different results."
- Albert Einstein (attributed to Ben Franklin too)
Does this sound similar to what we do in the name of "regression testing"? In regression testing (more so in regression testing whose test execution is automated – note that I am not using the term "automated regression testing"), we believe that by repeating some things that we did previously, we can find new results.
Thanks to the inherent complexity of software and the involvement of humans in its development, testing and usage, this regression insanity appears to be *working*.
This is one of Francis Bacon's idols – the Idols of the Theater: errors formed from dogma (institutionalized doctrine) and flawed demonstrations.
Read the following to know more about Francis Bacon and how his idols are relevant to testing:
Francis Bacon’s New Organon (James Bach)
Bacon and boundary testing (Mike Kelly)
The four Idols of Francis Bacon
The Four Idols of Sir Francis Bacon by Ben Chambers and Zeb Dahl.
More on this later...
Shrini
My acrobatics on the Boundaries ..
"The boundary condition of the universe is that it has no boundary." The universe would be completely self-contained and not affected by anything outside itself. It would neither be created nor destroyed. It would just BE.
- Stephen W. Hawking, A BRIEF HISTORY OF TIME
"Science is a differential equation. Religion is a boundary condition." - Alan Turing
"Boundaries are actually the main factor in space, just as the present, another boundary, is the main factor in time." - Eduardo Chillida
"Earth has its boundaries, but human stupidity is limitless." - Gustave Flaubert
I would read the above quote from the 19th-century French novelist (Flaubert was regarded as the prime mover of the realist school of French literature and is best known for his masterpiece Madame Bovary) by replacing the word "stupidity" with "intelligence" or "imaginative power".
Mike Kelly has an interesting post related to boundary testing here
I blogged about this topic a few months ago under the name BVE, and I think it makes perfect sense to link these two posts and get a new perspective on boundary test design (part of domain testing).
Note the following important points in Mike's post (with my views/comments) ---
- Understanding, identifying and working with boundaries in software is a modeling problem. There can be multiple ways in which the boundaries in a software system can be modeled.
- Depending upon how *closely* your model matches the system behavior, your (boundary) testing will be incomplete or wrong to *that* extent. Since there can be boundaries outside your model, you will not notice them.
- There can be (are) multiple boundaries – what you notice is limited by your model (ref. #1).
- No boundary exists in isolation.
- All boundaries – those explicitly identified in the model, and hence known to you, and those lying outside your model – INTERACT and ALTER/AFFECT system behavior in some *significant* way. This introduces complexity into the model. In other words, there exists some RELATIONSHIP between these boundaries.
- Inputs for identifying (in fact, for modeling) the boundaries come from various sources, such as technical specifications, user expectations, requirement specifications and device/OS specifications. If you narrowly focus on any one source of information, you will miss modeling the other boundaries (those outside the model) and hence you will miss the bugs/system behaviors associated with them.
- A weak boundary analysis would start with the boundaries first, without explicitly thinking about the model.
- A strong or useful boundary analysis would start with the model and identify the boundaries resulting from all possible sources of information, as the sketch below illustrates.
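Here is a minimal sketch of that model-first idea – my own illustration, not from Mike's post; the two models and their ranges are assumptions. The same probe generator produces entirely different boundary values depending on which model you feed it:

```python
def boundary_probes(low, high):
    # Classic domain-testing picks: each boundary, just inside, just outside.
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# Model 1: the spec says the field accepts 1..100 (an assumed spec).
print(boundary_probes(1, 100))     # [0, 1, 2, 99, 100, 101]

# Model 2: the implementation stores the value in a signed 8-bit integer
# (an assumed implementation detail). This model exposes boundaries
# (-128, 127) that the spec-based model never shows you.
print(boundary_probes(-128, 127))  # [-129, -128, -127, 126, 127, 128]
```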
Don’t forget to note the following ---
All models (especially those having a predominant influence of domain testing) APPEAR to be deterministic (algorithmic/mathematical representations – sets, graphs, state machines etc.) but are indeed HEURISTIC models, as the act of modeling seems to follow (in most cases) a heuristic approach …
Shrini