Saturday, November 18, 2017

Computer does what the programmer asks it to do: so why are there bugs?

A colleague of mine said something so striking about software bugs that I have never seen anyone talk about bugs that way. The discussion was about how current advances in Big Data, Machine Learning and AI have changed, or will change, the way we do testing, and how these technologies can help testers. One underlying application of these technologies is a two-fold approach: first mimic human action (vision, speech, hearing and thinking!), then make predictions about what will happen next.

When it comes to prediction and testing, the obvious topic is "defect/bug prediction". Bugs are the hardest things to predict, due to their very definition and nature. This colleague of mine said something that captures the sentiment very well: "There are no bugs, in the sense that the computer (he wanted to say software... these days it has become fashionable to replace the word 'software' with 'machine' at every possible instance) does not malfunction on its own (barring hardware or power failures and the like). The computer does what the programmer wants it to do, or coded it to do. The problem then lies with the human programmer's mind (or brain) that gave the computer an incorrect instruction."

Where does this take us? It follows from my colleague's logic that the problem lies with the programmer's mind that gave the computer the "wrong" instruction. Predicting a bug would then mean predicting when a programmer gives a wrong instruction. That is a hopeless pursuit, as guessing when a human will make a mistake is an unsolvable puzzle - at most you have some heuristics.

Let us go back to the idea that a software bug occurs when the programmer gives a wrong instruction to the computer. This line of investigation is remarkable. First of all, how do we identify a wrong instruction?
It turns out that a wrong instruction cannot be identified by, say, an algorithm or a mathematical approach. An instruction (such as open a file, send a message to an inbox, save a picture) becomes "wrong" not by itself but through the context, logic, user need or requirement. This takes us straight to the mechanism by which we specify that context, need or logic. That is the realm of "natural language".
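To make this concrete, here is a toy example of my own (not from the discussion). Suppose the requirement, in natural language, is "show the average rating, e.g. 3.5 for ratings of 3 and 4". The instruction below is perfectly valid Python, and the computer executes it exactly as written - it is only "wrong" when held against the requirement:

```python
# Requirement (natural language): "Show the average rating,
# e.g. 3.5 for ratings of 3 and 4."
# A programmer translates this as:

def average_rating(ratings):
    # Bug: '//' is floor division, so the fractional part is discarded.
    # The computer faithfully does what it was coded to do; the
    # instruction is "wrong" only relative to the requirement above.
    return sum(ratings) // len(ratings)

print(average_rating([3, 4]))  # prints 3, but the requirement expects 3.5
```

Nothing in the code itself is malformed; only the requirement makes it a bug.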

Software bugs happen because the programmer "wrongly" translates a requirement, which is in natural language, into the world of a computer language. If we are to predict bugs using the likes of Machine Learning or AI, we need tools that can spot this incorrect translation.
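What might such a tool even look like? Here is a deliberately naive sketch of my own - far from real ML or NLP - just to show the shape of the problem: compare the words in a requirement against the identifiers in the code, and flag a low overlap as a possible mistranslation. The function names and example strings are my own illustrations, not any real tool's API:

```python
import re

def tokens(text):
    """Lowercase word tokens from free text or source code."""
    return set(re.findall(r"[a-z]+", text.lower()))

def translation_overlap(requirement, code):
    """Naive heuristic: fraction of the requirement's words that also
    appear among the code's identifiers. A low score hints that the
    code may not reflect the stated requirement."""
    req, src = tokens(requirement), tokens(code)
    return len(req & src) / len(req) if req else 0.0

req = "delete the temporary file after the upload completes"
code = "def cleanup(): remove_temporary_file_after_upload()"
print(round(translation_overlap(req, code), 2))
```

A heuristic this crude drowns in false positives, of course - which is exactly why spotting incorrect translation is a hard research problem rather than a solved one.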

Looks promising... right? The state of the art in Natural Language Processing (NLP) is about how closely computers (software, actually...) can understand natural language. There are stunning applications of NLP already.

When NLP comes close to understanding human language to its fullest, we move a step forward in the puzzle of spotting the incorrect translation of a software requirement into a computer instruction. I hope so...

But then nature (the human) leaps ahead to the next puzzle for computers: the limits of human intelligence and the vastness of human communication. Even with the brightest of human testers, we often fail to spot bugs in software - how can an approximate and "artificial" system that mimics only a portion of human capability do better at spotting bugs? An area to ponder...
By the way - was my colleague right in saying "the computer does exactly what the programmer has asked it to do"? Really?