Tuesday, November 24, 2020

What's the big deal with testers? Hit that punching bag!

Once every year or two, the testing team or function comes under "question". I have seen this consistently for the last 20 years. For business leaders and heads of business verticals, testing has been a punching bag. Whenever leadership changes, or sales dip, or profits fall, suddenly "efficiency" and throughput per person become important metrics. Why do we need such a big testing team? Why can't developers do all the testing required? Why not have end users do the testing? Why can't we automate this stuff and take testing head count out of the equation? Why can't we do crowdsourcing? These are the questions business leaders ask. Testing leads and managers start running from pillar to post to justify why they exist, coming up with all sorts of weird metrics and process maps to show why testing is required. But business leaders are hell bent on cutting that "extra" fat in the system. There goes the testing team, out of the company.

What is the big deal about testing or testers, they ask. First of all, let us ask what is required to do testing. Knowledge of the system? The business domain? The technology used to build it? Let us explore...

Business analysts know the product inside out, as they have "defined" it (by writing the BRD, the business requirements document). They should be the ones best suited to do all the required testing. No?

Developers too claim that they know the system inside out, as they have created it. They know the nuts and bolts of the system and where each one fits. Should they be the best ones to test the system?

End users are best placed to test the system, as they are the ones that eventually use it and they know what should be there in the application. Are they the best party to test?

Before we respond to the above possibilities, let us also look at how industries shaped views about testing. In the software world (or software enabled businesses), there are broadly three types of organizations: software product companies, IT services companies, and IT organizations or captive units of other (non-software) businesses, the likes of banks, pharma or automobiles. Each of these has a very specific and unique "culture" about software and hence about testing.

Software product companies call software development "engineering", and all activities around creating software are put under the one umbrella of "software engineering". Testing is part of engineering. The pre-agile era had testing as part of the "development" life cycle, and developers did all the testing that they could. The agile and post-agile (now DevOps) era killed "testing" as a function, skill or requirement through a few beliefs and practices. Agilists extended the logic that all testing should be done by developers and said "quality" is everyone's responsibility. Through practices like TDD and formalized unit testing, testing was seen as exercising or executing every line of code (that they wrote). Testing would mean writing code, hence programming and testing at some point merged into one skill. If you were in an engineering team and did not do coding, you were either a business analyst or a project manager (or a scrum master, not a ring master! :)

IT organizations or captive units are the next big community. They looked at testing as either a necessary evil or something that programmers or technologists should do. Business leaders of these groups saw the entire software development activity as an alien thing and kept a safe distance from it. For them, software or technology was a support function, and any investment in activities (like testing) related to software development was a "distraction" from their main business. Technology teams in such organizations had a strange problem, or approach, too. They thought all the required knowledge of the business domain or application resides with the business, so technology's job was sincerely translating whatever the business gave as requirements. No questions asked. They also propagated a view that technology testing is shallow happy-path and basic validation, and believed it was not their cup of tea to learn the business domain. So where did the bulk of testing go? To business and end users, in the form of "User Acceptance Testing".

IT services companies did not have any such challenges in thinking about domain, business, technology and testing. They simply supplied what their clients wanted. Hence IT services companies had a mix of both IT org and product company cultures; they did not have to have a testing culture of their own. In the beginning (pre-2000), riding the outsourcing wave, they went and told their outsourcing bosses: "testing is low risk work - we can take it away and do it for very low cost". I have heard of pricing models for testing services on a unit basis: xx $$ per test case. How crazy was that? When they ran out of business supplying "low skilled" (almost brain dead) "executors" of test cases (called manual testers), they started selling them as business analysts. In that parlance, a business analyst did some testing but knew the business domain and could "write" BRDs. Even today many testers think that business analysts are one notch above testers in terms of salary and org hierarchy. That option too soon ran out. These companies then started selling failed or low skilled developers as "automation engineers". This goes on even today... skilled testing has been dying... not dead yet.

I ask: what is the big deal about testing, and what makes testing "tick" in today's situation? Share your views.



 


Sunday, November 08, 2020

My journey of Software Testing - Looking 20 years back - Part 1

“You seem to have a good eye and an aptitude for finding bugs in software. Why don’t you pursue a career in Testing/QA?” said Premjit, my manager at i2 Technologies, roughly 20 years ago. As it turns out, he was damn right in assessing my special skill and interest in looking for bugs in software. At that time, like many of my peers, I felt Premjit was trying to put me down by asking me to pick up a career in Testing/QA while I was aspiring to be a software developer and create awesome applications. At that time (true of most jobs even now), a developer was meant to write/design code while a tester was meant to “test” (use) those applications to find bugs. Honestly, I felt it was a let-down for me.

I reluctantly took a full time testing role (at that time, in software product companies, testing was a part time job and everyone was called a software engineer). I have to admit that it was a decision born out of fear of losing my job. But in retrospect, here I am 20 years later, celebrating that shift Premjit suggested to me. After that shift, the initial few years were bumpy; I kept looking at my developer friends and used to envy them. In those years I took roles that were closer to development while my full-time role was to test. I was the software librarian for a new product development, teaching and establishing software versioning and build management. It was interesting, as I could hear a few developers appreciating my knowledge of development practices, IDE (Visual Studio) know-how, etc. Soon, the testing component of my job increased and I became well moulded into doing testing. I went on investing in learning about testing more deeply. I took a course (a 2-day workshop) by T Ashok of Stag Software, who happens to be my first guru on software testing. He still continues to be a great friend, guide and mentor. I attended a testing conference by STeP-In in 2004 and got convinced that the career I had taken up indeed had good prospects, looking at and hearing all those leaders from industry who were managing testing teams. I was introduced to leaders like Srinivasan Desikan, with whom I still continue to be in touch. In another interesting encounter the same year, at the QAI conference, I met Vipul Kochar, a good friend with whom I continue to share good collaboration on software testing related initiatives in India. Vipul, in his talk at that QAI conference on exploratory testing, introduced me to the works of James Bach, who still continues to shape my thinking and influence my “fearless and think-for-yourself” approach to testing. That, I think, happens to be a major turning point in my career in testing.
That QAI conference was a unique experience, as it was my first instance of speaking at a conference. Guess what, the topic of my talk was “Agile Testing – is history repeating itself”. I also met Krishna Rajan at that conference, who six years later would give me a job at Barclays. So, in many ways, 2004 was an eventful year for me.

In the meantime I made the transition into the IT services world and started learning more about testing by reading through the works of James Bach and Cem Kaner. I then came to know about the Context Driven school of testing. There used to be a Yahoo group called “software testing” where James, Cem and others from CDT were regular contributors.

I vividly recall a post (a query, actually) in that group where I expressed my doubts about testing as I saw it during that period, 2005. This interaction on that forum with James Bach was a clear eye opener for me and would set me on a quest of knowing and learning more about testing in the coming years.

Here, I share a few snippets of my questions and the answers by James. Read carefully. This is how a newbie tester starts off.

My first question about testing was about the role of domain expertise. The major view at that time around me (and it largely persists even now) was that to do good testing and excel at it, one needs to be a domain expert.

I asked: “In testing, domain expertise is important - for example, without knowledge of stock markets you cannot test an application that is meant for stock markets. I agree with this notion but banking only on this is a recipe for failure.”

James responded to this: “Someone without specialized domain expertise can contribute to a test project within any domain. Besides, some domains aren't that challenging to pick up, and in some cases, the test oracles aren't that difficult. However, trying to test, say, a radiology imaging system using only testers who have no knowledge of radiology would seem dangerous to me.”

That was an important lesson for me: while having domain knowledge in the application's area is a good thing, it is not a mandatory thing. You can contribute to testing through various other skills in spite of being a newbie in that business domain.

My next question was [paraphrase] “Any developer can do testing (he/she does unit testing and other forms of developer testing anyway). So, if we train him/her on automation tools, we are ready for state-of-the-art testing. Why do we need a separate role for testing?”

James responded – “Any developer can do testing. So can any plumber, any housewife, or any politician. Anyone at all can do testing. Do they do it well? That's the question. Developers bring certain skills to testing that I like to have in my test group. They also bring certain biases.”

This taught me why it is important not just to do testing but to do it well, like a specialist would. Anyone can do testing; it is a fundamental human trait to check things. But a professional tester does testing “well”. Developer testing has its own biases. I was introduced to the idea of what developers often miss when they test: conformance oriented or confirmation heavy testing. When you are a professional tester, you look at the task of testing as if your whole world depends upon how well you do it. It’s no longer a “task” that has to be done. This thought made a huge impact on my thinking about testing.

I then asked about another popular field, “QA Process”: why do certain companies bank on quality jargon - CMM, ISO, Six Sigma - saying that testing and QA are one and the same? Further, they claim that since they follow CMM/ISO/Six Sigma, quality is an obvious outcome: “we don’t consider testing a specialized skill; we are fine with our developers doing all the required testing”.

James dismissed this by saying “Well, that's just corporate religion. I'm frustrated with that, too.”

This was a burning question that I had at that time: “As mentioned in points 1 and 2 above, if we have developers who can do testing with automation tools, and business experts who bring the necessary business knowledge and speed/efficiency in testing - where is the scope for a tester? How do I emphasise/sell the concept of "testing" being a unique software life cycle activity that brings definite value to the end product? In other words, in what way is a tester different (in skill sets and nature of work) from developers and business consultants or analysts? What is the need for hiring testers instead of having developers do the testing too?”

James responds: “I think if you don't have a clear idea, yourself, about what testers do and what skills they have, you won't be able to convince anyone else. Here's one skill: the skill of making models. This is a sub-skill, in my reckoning, of general systems thinking. Another skill is critical thinking, which involves using logic, of course, including the process of abductive inference. These skills are difficult to evaluate in an interview. Hiring people who aren't developers is worthwhile for a few reasons: there's more people to choose from, they probably have different biases, they won't be trying to get into a programming job, they will probably be better at testing overall and they will become better testers over time. I like hiring people with a philosophy education, when I can find them. You have to *demonstrate* the value of testing culture, in order to sell testing culture. Read the articles and class materials on my website. Go to www.testingeducation.org and take the BBST online testing class.”

I learnt through this response from James that I did not have a clear idea of what good testing was myself. I was just observing industry trends, what I heard in conferences and what my peers spoke about testing. I was also constantly distracted by “jobs” that were related to testing but were not actually testing.

Looking back, I laugh at myself and my raw thinking. But these interactions would lay a good foundation for my future learning. My continued asking about my doubts about testing with the likes of James and others in the Context Driven Testing community in the coming years would make me a better tester and would put me in a position where I can answer questions like these for myself.

To be continued ….

 


Monday, June 24, 2019

There is no such thing as a defect/bug in the Machine Learning/AI domain

One question that comes up again and again in the testing world today is about the role of testing in the domain of Machine Learning and Artificial Intelligence applications. To be precise, many in the testing community are curious and somewhat confused about what they need to do differently (if at all) and what skills they need to acquire additionally. This post is an initial attempt to share my thoughts in this direction.

What is an ML Application ?
(Machine Learning is considered to be a branch of Artificial Intelligence, hence I omit using AI along with ML)
The term "Machine Learning" is not new; it was coined by Arthur Samuel in 1959. The definition given by Arthur was the "ability of computers to learn without being explicitly programmed". In reality, computers do not learn; software programs learn - a small difference, if you choose to care. How do programs gain such a human-like ability to learn? Can any and every program be made to "learn" like this? What in today's computer technology has enabled such a possibility to be realized? Answers to these questions take the post beyond the topic of ML, testing and defects/bugs. In short, I would say the ability of computers to store and process large volumes of data at transaction-processing speed has enabled machine learning as Arthur Samuel might have envisaged.

What is a Machine Learning application then? A program that uses a set of algorithms to process sets of specially selected and curated data about the problem the program intends to solve. Under the hood, the algorithms "fit" the data to some selected mathematical "function", called a "model", such that the program's logic is data driven, not hard coded. When I say hard coded, in ML parlance you will not find explicit chunks of if-else or select-case or do-while depicting the rules of the logic. The "model", through "fitting", generates the logic that the data presented to it shall comply with.
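As a minimal sketch of "logic from data" versus hard-coded logic, here is a toy, entirely hypothetical fraud-threshold example (the amounts, labels and fitting rule are invented for illustration; real ML libraries fit far richer models):

```python
# Hard-coded logic: the rule lives in an explicit if-else chosen by a programmer.
def is_fraud_hardcoded(amount):
    if amount > 10000:
        return True
    return False

# Data-driven logic: the "rule" (a single threshold) is fitted from labelled data.
def fit_threshold(amounts, labels):
    # Take the midpoint between the largest genuine amount and the
    # smallest fraudulent amount seen in the training data.
    genuine = [a for a, y in zip(amounts, labels) if y == 0]
    fraud = [a for a, y in zip(amounts, labels) if y == 1]
    return (max(genuine) + min(fraud)) / 2

amounts = [120, 450, 900, 15000, 22000]
labels  = [0,   0,   0,   1,     1]      # 1 = fraudulent, 0 = genuine
threshold = fit_threshold(amounts, labels)   # the "learned" parameter

def is_fraud_learned(amount):
    return amount > threshold
```

Note that no if-else encodes the fraud rule in the learned version; change the training data and the behaviour changes, with no code edit.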

What kinds of problems can ML programs solve? Largely two categories: prediction and suggestion. A machine learning program can classify a bunch of financial transactions (say credit card) as (potentially) fraudulent or genuine, recognize faces in a picture, or auto-complete what you are typing in a search box on a web page.

What does it mean for a program to learn ?
In simple language, learning for a program is to discover the parameters of the mathematical function the program uses to establish the relation between input and output. Let us take an example of classification that aims to predict whether an image contains text or not. In this case the image and its properties (what each pixel tells about the whole picture) are the inputs, and the output is a binary decision on whether the image contains text (1 or 0). For a human eye it is easy to make the decision, whereas for a computer the problem needs to be presented as (for example) a mathematical function like y = f(x). This function will have parameters that the program needs to compute. For this purpose the program needs to be presented with loads of data (input images and the decision on whether text is there or not). By processing this data the program is expected to identify the relation between "y" and "x" as a mathematical function like y = mx + c (here m and c are the parameters of the function).
This process of arriving at the parameters of the function by working through data is called "learning". Once the program learns the relationship, it can predict "y" - the decision on whether an image contains text or not - for any new image that the program has not "seen" before.
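The y = mx + c example can be sketched in a few lines: fit m and c from training data using the closed-form least-squares formulas, then predict y for an x the program has not "seen" (the training numbers below are made up for illustration):

```python
# "Learning" as parameter estimation: fit m and c in y = m*x + c
# from training data using ordinary least squares.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    c = mean_y - m * mean_x
    return m, c

# Training data generated by the hidden relationship y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]

m, c = fit_line(xs, ys)      # learning: the program discovers m = 2, c = 1

prediction = m * 10 + c      # predicting y for an unseen x = 10
```

Note that after fitting, only m and c remain; the training data itself is no longer needed, which is exactly the point made below about the program keeping only the "essence" of the data.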

Needless to say, the computer (program) does not "see" the image like a human eye; it sees the image as a matrix of numbers that indicate pixel color scale or density. There are easy Python modules/programs that can convert an image into a matrix of numbers that a learning program can consume.
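As a sketch, here is the program's-eye view of a tiny 3x3 grayscale "image": literally just nested lists of pixel intensities. (In practice a library such as Pillow with NumPy, e.g. numpy.asarray(Image.open(path)), produces this matrix from a real image file; the hand-written matrix here is purely illustrative.)

```python
# A 3x3 grayscale "image" as the program sees it:
# each number is a pixel intensity (0 = black, 255 = white).
image = [
    [255, 255, 255],
    [255,   0, 255],   # one dark pixel in the middle
    [255, 255, 255],
]

# Flatten the matrix into the feature vector a learning program consumes.
features = [pixel for row in image for pixel in row]
```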

Also, it is important to note that the data the program has "seen" or processed during "learning" does not stay with the program. What is left in the program is just the "essence" of the data that leads to establishing the relationship y = f(x), in the form of the parameters of the function. The data the program uses to "learn" the relationship is called "Training Data" - how innovative!

Coming back to the main topic of the post: what does a bug mean in this context? When a program incorrectly flags an image as containing text when it does not, do we call that behavior an application bug? An ML programmer would probably call it "the program is learning" or "the program needs to see more data to increase its prediction accuracy". In this way, every failure is an opportunity for the program to learn. Like we say a lawyer or doctor is "practicing", an ML program probably never "performs" but is always in the process of "learning"!

What do you say? If the program does the learning (I dislike the term "machine learning", as it is not the machine that learns; it is the program that learns. Try saying program learning, or software learning - it sounds funny), what do testers need to learn? What is left for testers to learn if programs become intelligent?

Wednesday, May 15, 2019

Industrialisation of Testing, Heuristics and Mindfulness

Over the last two weekends, The Test Tribe (a popular testing community) hosted two sessions on Facebook: one from T Ashok on Smart QA and the other from James Bach on "Testing Heuristics". Both sessions were well received, and interestingly I could see some connection between the ideas that were part of these two sessions.

Industrialisation of Testing - Up until now, I thought of industrialisation in testing as bringing the "factory" metaphor into what we do as testers: an intellectual search for problems in the products we test. T Ashok in his session took a different position. He says industrialisation in testing is about doing less by exploiting work done by fellow testers in the form of tools, test ideas, methods, etc. He drew a parallel with how the software development community, through its open source revolution, makes it possible to build applications by writing less and less code. He stressed creating an open source revolution in testing so that testers can share their ideas, so that we can use, reuse and grow a testing repository. That would be true industrialisation. There has been such work happening in our community; what we need is a platform and active participation/contribution.

Mindfulness - Ashok in his session urged testers towards mindfulness: acting with awareness of how we work and why we do what we do. The very nature of the mind is such that it wants to wander, and then programs in the subconscious mind take over and run what we do without our conscious engagement. Testers, through their habits, go about their day's business without being consciously aware of the decisions and choices they make. Through mindfulness, testers can break the autopilot mode and carefully watch every step; this will enhance their skill and productivity and reduce the errors they make in their work. Rarely have I seen such advice given to testers - indeed a point to note.

Heuristics - James Bach in his session on heuristics went into detail to explain how all testing, software development and engineering is rooted in heuristics: fallible methods for solving problems. Those who follow the context driven testing community are well aware of this term. James explained how heuristics need human judgement, not mere rule following, as heuristics can fail. James said that in our daily life we use many heuristics without being aware of it. He urges, from his own training and experience, to be aware and to name a heuristic when you use one.

Here is where I am reminded of the mindfulness that Ashok suggested we use. By being mindful, we can recognize the heuristics we use; when we recognize them, we can name them; when we name them, we can share them with fellow testers. That leads to a community movement which manifests as testing industrialisation. It is exciting to see these two testing gurus' ideas connected in unimaginable ways.

Sunday, September 16, 2018

Testers don't and can't prevent bugs: Altruism or Sense of Pride?

One of the fashion statements associated with testing these days is "testers should focus on preventing bugs rather than finding them". This is a very tricky idea and is full of traps for testers. Recently a post came up in the software testing Yahoo group that somehow got into this topic of preventing bugs.

Coming from the context driven school of testing, and trained by the likes of James Bach, Cem Kaner, Michael Bolton and others, I was skeptical about testers preventing bugs. The fundamental idea of our school of testing has been that as testers we bring to light information about bugs and risks in the software we test. Then we report it in a way that lets stakeholders (the powers that be) act on it.

Many testers fall into the trap and take upon themselves (maybe due to role/corporate hierarchy pressure) the task of preventing bugs. After all, who does not like someone who prevents bugs over someone who simply reports them? Borrowing from the manufacturing industry, many business leaders in IT and IT enabled business firmly believe that prevention is better than cure. Who can resist the nobleness of saving "nine" by stitching in time?

Let us consider following two cases -

Testers prevent bugs in the requirements by asking questions about ambiguity in the requirements. Requirement bugs might not be counted as bugs by many; they might be termed unclear requirements. Calling out what is not clear in the requirements is one of the valuable contributions of testers.

When pairing with developers, testers prevent bugs as and when the bugs occur. For example, a tester may shout: "hey, you are missing the exception handling code for that exception" or "hey, you got that if condition wrong". That is the closest you can get to preventing bugs.

In an email conversation, Michael says -

"we do not use a binary model “pass or fail”.  People who do that are setting themselves up for bad testing.  A product—any product—can “pass” a test but still have terrible problems.  A product can “fail” a test, yet there’s no problem.  (For instance:  the square root of 2 is 1.4142136, right?  Well, it isn’t; the square root of two is not a rational number; it never ends, and certainly not at the seventh decimal place.  But for many—even most—circumstances, 1.4142136 is good enough; just fine; not a problem.)"
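Michael's square-root example can be rendered as a tiny sketch: an exact-equality check "fails" even though the approximate value is perfectly good for most purposes, while a tolerance-based check reflects "good enough in this context":

```python
import math

approx = 1.4142136

# Binary pass/fail oracle: exact equality. This "fails"...
exact_check = (approx == math.sqrt(2))

# ...yet a context-aware oracle with a tolerance says there is no problem.
good_enough = math.isclose(approx, math.sqrt(2), abs_tol=1e-6)
```

The two oracles disagree about the same product and the same value, which is exactly the point: "fail" does not automatically mean "problem".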

This has been a great learning: testers throw light on ambiguity; that does not mean they prevent bugs from happening. Similarly, in pair testing, testers spot the bug in the shortest time possible, but they did not prevent it from happening... that is "early bug detection".

Thanks James and Michael - lesson re-affirmed.

Sunday, March 04, 2018

Chief Value Officer vs. Chief Feelings Officer - Perils of Reification!

"Yet the danger of reification is all too real. We fall in love with our models, yet we need to be reminded that they are just models of the real world." - Lynn Chiu


A good friend of mine, Ray Arell, in a tweet asked "why not have a CVO - Chief Value Officer?". The word "value" has always evoked a very strong internal response in me when I see it used the way Ray used it. This word, like others in the same league such as "quality" and "customer experience", is notorious as a victim of being reified (not rectified). Michael Bolton first introduced me to this word when we were discussing abstract vs. concrete things. I learned from Michael that reification leads to gross misrepresentation of an idea/word and leads to "gamification". It is a thinking fallacy, and all intellectuals/thinkers need to be alert to it happening.

What is Reification?
In simple terms, reification refers to treating an abstract idea as though it is a concrete, countable, measurable thing. It is about wrongly understanding an idea as a thing. For example, counting how many ideas are generated in a brainstorming session is an act of reification: an idea is not a thing, so counting and doing all sorts of maths around it does not make sense. Here we say the "idea" is reified into a thing. Other examples include making objects out of subjective human experiences like emotions, feelings, and values (say family values, social values).

The Marxist definition of reification is about the "thingification" of social relationships. Among the several perspectives on and meanings of this term, I would like to use this definition for the purpose of this post: "a fallacy of treating an abstraction as though it is a real thing".

Why is reification problematic?
First of all, reification is a misinterpretation of the nature of what we are dealing with. It is a fallacy, an error in thinking and communicating. Reifying an idea into an object strips off the subjectivity, mystery and complex richness of the idea. One common outcome of reified communication is that the giver and receiver end up with two different meanings and interpretations of what is being conveyed.

Consider yet another term: "quality". In the software testing world we are all familiar with this word. There are dozens of definitions of it, each fitting a specific context, and that demonstrates how the term quality offers itself to reification. Quality stands as a mask for many desirable attributes of a thing or a service. Instead of adjectives like "fast", "robust", "flexible", "easy to understand", "cheap", we can say "quality" and get away without bothering about the specificity and correctness of what we actually want. That is the power of reification, but it is an incorrect, manipulative and bad way to communicate.

Similarly, with respect to motivation and change management, we often commit the reification error. A social construct such as "percentage of work completion" is often regarded as a measurement of a real object when it is at best an idea.

In the world of testing, there are famous victims of reification: requirements, test cases and bugs. All of these are complex ideas generated as part of our quest to create software from requirements specified in natural language that get interpreted and implemented in a formal computer language. In the world of agile, we have stories that now replace requirements. A development lead announces in the first sprint meeting of a project, "we plan to deliver 18 stories in this sprint". 18 what? Stories. A test lead is asked, "how many test cases does your team plan to execute in this release?". In another case, during a project postmortem meeting, a comparison is made between the number of bugs logged in a given release and the number for the previous release, to assess the quality of "this" release. 18 stories, 3000 test cases, 270 bugs: these are examples of how, in today's software world, we ruthlessly reify abstract ideas and do math with the resulting numbers. The act of reification lets us use numbers that have no inherent meaning of their own once context, giver, recipient and time are removed from them. What happens thereafter is a pure game of manipulation.

Reification is a thinking error... it is a fallacy.

Value vs. Feelings 
In today's business world, the word "value" is attractive and sexy. We have terms like value stream, value proposition, value added service, etc. Behind each of these phrases hides a very clear objective, object or concrete thing. It might be some money, a timeline commitment, or a specific characteristic or outcome of a good or service. It has become fashionable to use the term "value" instead. Why? Since the meaning of value is subjective and open to interpretation, it allows one to use the word value to imply one thing and later, for the same value, imply something else. In a sense, using value allows one to manipulate the situation to one's advantage while not being wrong or incorrect about what is being conveyed through this loaded word "value".

To understand the full and correct meaning of the word value, we require the context and whom we are addressing. By reifying the word value, we strip off that richness, context and complexity. Then we start using the phrase to indicate multiple, sometimes disconnected and contradictory ideas, as we have left behind the context and the recipient(s).


Instead of having a Chief Value Officer, let us have a Chief Feelings Officer who can understand and deal with customers' feelings and emotions about the delivery of a good or service. With this role, corporates can truly claim that they care about the individual views and feelings of customers, rather than a rolled-up, convoluted, metricized measure such as customer experience.

Many still think that giving a good customer experience means having a great looking GUI and exciting animation. Real customer experience, in my opinion, is about caring for the individual's experience in its bare essentials, with all the richness of emotion and context.

CFO - Chief Feelings Officer. Anyone ?

Saturday, November 18, 2017

The computer does what the programmer asks it to do: why are there bugs?

A colleague of mine said something so extraordinary about software bugs that I have never seen anyone talk about software bugs that way. The discussion was about how current technologies and advances in Big Data, Machine Learning and AI have changed, or will change, the way we do testing, and how they can help testers in testing. One of the underlying applications of these technologies is a two-fold approach: first mimic human action (vision, speech, hearing and thinking!) and then make predictions about what will happen next.

When it comes to prediction and testing, the obvious topic is "defect/bug prediction". Bugs are the hardest things to predict, due to their very definition and nature. This colleague of mine said something that captures this sentiment very well: "There are no bugs, in the sense that the computer (he wanted to say software... these days it has become a fashion to replace the word software with machine at all possible instances) does not malfunction on its own (barring hardware/power failures etc). The computer does what the programmer wants it to do, or coded it to do. The problem then lies with the human programmer's mind (or brain) that gave the computer an incorrect instruction."

Where does this take us? It follows from my colleague's logic that the problem lies with the programmer's mind that gave the computer the "wrong" instruction. Predicting a bug then would mean predicting when a programmer gives a wrong instruction. This is a hopeless pursuit, as guessing when a human will make a mistake is an unsolvable puzzle; at most you have some heuristics.

Let us go back to the idea that a software bug occurs when the programmer gives a wrong instruction to the computer. This line of investigation is remarkable. First of all, how do we identify a wrong instruction? It turns out that a wrong instruction cannot be identified by, say, an algorithm or a mathematical approach. An instruction (such as open a file, send a message to an inbox, save a picture) becomes "wrong" not by itself but through the context, logic, user need or requirement. This takes us straight to the mechanism by which we specify that context, need or logic. That is the realm of "natural language".
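The point that an instruction is "wrong" only relative to a requirement can be shown with a tiny hypothetical sketch (the function name and the requirement below are invented for illustration):

```python
# The instruction below is not "wrong" by itself: sorted() does exactly
# what it was asked to do. It becomes a bug only when judged against a
# requirement stated in natural language.
def top_scores(scores):
    return sorted(scores)        # programmer's instruction: ascending order

# Requirement (natural language): "show the highest scores first".
result = top_scores([40, 95, 70])
# The computer faithfully produces ascending order, yet against the
# requirement this is a bug: the translation, not the machine, failed.
```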

Software bugs happen when the programmer "wrongly" translates a requirement, which is in natural language, into the world of a computer language. If we were to predict bugs using the likes of Machine Learning or AI, we would need tools to spot this incorrect translation.

Looks promising, right? The state of the art in Natural Language Processing (NLP) is about how closely computers (software, actually) can understand natural language. There are already stunning applications of NLP.

When NLP comes close to understanding human language at its fullest, we move a step forward in the puzzle of spotting the incorrect translation of a software requirement into a computer instruction. I hope so...

But then nature (the human) leaps to the next puzzle for computers: the limits of human intelligence and the vastness of human communication. Even with the brightest of human testers, we often fail to spot bugs in software; how can an approximate and "artificial" system that mimics a portion of human capability do better at spotting bugs? An area to ponder...
BTW, was my colleague right in saying "the computer does exactly what the programmer has asked it to do"? Really?