One question that comes up again and again in the testing world today is about the role of testing in applications built on Machine Learning and Artificial Intelligence. To be precise, many in the testing community are curious, and somewhat confused, about what they need to do differently (if at all) and what additional skills they need to acquire. This post is an initial attempt to share my thoughts in this direction.
What is an ML Application?
(Machine Learning is considered a branch of Artificial Intelligence, hence I omit "AI" and use "ML" throughout.)
The term "Machine Learning" is not new, it was coined by Arthur Samuel in 1950. Definition given by Arthur was "ability of computers to learn without being explicitly being programmed". In reality, computers do not learn, but software programs learn - a small difference, if you chose to care. How do programs gain such ability to demonstrate such human-like ability to learn? Any any or every program be made to "learn" like this? What has enabled today's computer's technology enabled such possibility being realized? Answers to these questions take the post beyond the topic about ML, Testing and defects/bugs. In short - I would say ability of computers to store and process large volumes of data at the speed needed at processing transactions - has enabled Machine learning as Arthur Samuel might have envisaged.
What, then, is a Machine Learning application? A program that uses a set of algorithms to process specially selected and curated data about the problem the program intends to solve. Under the hood, the algorithms "fit" the data to a selected mathematical "function" called a "model", so that the program's logic is data driven, not hard coded. When I say hard coded in ML parlance - you will not find explicit chunks of if-else, select-case, or do-while depicting the rules of the logic. The "model", through "fitting", derives the logic that the data presented to it complies with.
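To make the contrast concrete, here is a minimal sketch in Python, assuming scikit-learn is installed; the transaction amounts, labels, and threshold are made-up illustrations, not a real fraud rule:

    from sklearn.tree import DecisionTreeClassifier

    # Hard-coded logic: the rule lives in an explicit if-style condition.
    def is_fraud_hard_coded(amount):
        return amount > 10000  # threshold written by a programmer

    # Data-driven logic: the "model" derives its rule by fitting data.
    amounts = [[120], [95], [15000], [40], [22000], [18]]  # toy inputs
    labels = [0, 0, 1, 0, 1, 0]                            # 1 = fraudulent

    model = DecisionTreeClassifier()
    model.fit(amounts, labels)       # "fitting": logic comes from data
    print(model.predict([[17500]]))  # [1] - a rule learned, not written

Note that nowhere did we write "if amount > 10000"; the fitted model arrived at an equivalent decision boundary from the data itself.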
What kinds of problems can ML programs solve? Largely two categories: prediction and suggestion. A machine learning program can classify a bunch of financial transactions (say credit card) as potentially fraudulent or genuine, recognize faces in a picture, or auto-complete what you are typing in a search box on a web page.
What does it mean for a program to learn?
In simple language, learning for a program is discovering the parameters of the mathematical function the program uses to establish the relation between input and output. Let us take an example of classification that aims to predict whether an image contains text or not. In this case the image and its properties (what each pixel tells about the whole picture) are the inputs, and the output is a binary decision on whether the image contains text (1 or 0). For a human eye the decision is easy, whereas for a computer the problem needs to be presented as (for example) a mathematical function like y = f(x). This function will have parameters that the program needs to compute. For this purpose the program needs to be presented with loads of data (input images and the decision on whether text is there or not). By processing this data the program is expected to identify the relation between "y" and "x" as a mathematical function like y = mx + c (here m and c are the parameters of the function).
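A minimal sketch of this idea in Python, assuming NumPy; the data points are made up to roughly follow y = 2x + 1, and the program "learns" m and c from them:

    import numpy as np

    # Toy data that (roughly) follows y = 2x + 1.
    x = np.array([0, 1, 2, 3, 4, 5])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 11.0])

    # "Learning": discover the parameters m and c from the data.
    m, c = np.polyfit(x, y, deg=1)
    print(f"learned m = {m:.2f}, c = {c:.2f}")  # close to m=2, c=1

    # Prediction for an input the program has not "seen" before.
    print(m * 10 + c)

Real classification uses richer functions than a straight line, but the principle is the same: the parameters come out of the data, not out of the programmer's head.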
This process of arriving at the parameters of the function by working through data is called "learning". Once the program learns the relationship, it can predict "y" - the decision on whether an image contains text or not - for any new image that the program has not "seen" before.
Needless to say, the computer (program) does not "see" the image like a human eye does - it sees the image as a matrix of numbers that indicate pixel colour scale or intensity. There are easy Python modules that can convert an image into a matrix of numbers that a learning program can consume.
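For instance, a minimal sketch using the Pillow and NumPy modules (the file name "photo.png" is a hypothetical example):

    import numpy as np
    from PIL import Image

    img = Image.open("photo.png").convert("L")  # "L" = 8-bit grayscale
    pixels = np.asarray(img)                    # matrix of pixel intensities

    print(pixels.shape)   # (height, width)
    print(pixels[0, 0])   # top-left pixel's intensity, 0-255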
It is also important to note that the data the program has "seen" or processed during "learning" does not stay with the program. What is left in the program is just the "essence" of that data, which leads to establishing the relationship y = f(x), in the form of the parameters of the function. The data the program uses to "learn" the relationship is called "Training Data" - how innovative !!!
Coming back to the main topic of the post - what does a bug mean in this context? When a program incorrectly calls an image as containing text when it does not, do we call that behaviour an application bug? An ML programmer would probably call it "the program is learning" or "the program needs to see more data to increase its prediction accuracy". In this way, every failure is an opportunity for the program to learn; just as we say a lawyer or doctor is "practicing", an ML program probably never "performs" but is always in the process of "learning" !!!
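One practical consequence for testers: the oracle shifts from exact expected outputs to statistical thresholds. Below is a hedged sketch of such a check; the model object, data names, and the 95% threshold are illustrative assumptions, not a prescribed standard:

    # Instead of asserting one exact output, measure accuracy on
    # labelled examples the program did not train on.
    def accuracy(model, images, expected_labels):
        correct = sum(
            1 for image, expected in zip(images, expected_labels)
            if model.predict(image) == expected  # hypothetical predict API
        )
        return correct / len(expected_labels)

    # A "bug" is then not a single wrong answer but accuracy falling
    # below an agreed level, e.g.:
    # assert accuracy(model, test_images, test_labels) >= 0.95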
What do you say? If the program does the learning (I dislike the term "machine learning", as it is not the machine that learns - it is the program. Try saying "program learning" or "software learning" !!! It's funny) - what do testers need to learn? What is left for testers to learn if programs become intelligent?