Background

Artificial Intelligence:
Rise of the Machines

Four million years ago, our ape-like ancestors took a hard decision: they climbed down from the tree branches and embarked on a two-legged journey across the Earth's surface. From then on, over millions of years of struggle against countless challenges, we became the most dominant species on Earth, while our ancient relatives still spend their whole lives under the tree canopies.

Varunakantha Kittanpahuwa
Associate Tech Lead

Undoubtedly, the root cause is Intelligence. We proudly call ourselves the most intelligent species on this planet, since we have continuously been inventing new things and unveiling nature's secrets, gradually understanding the universe. Yet if someone asks what Intelligence actually is, the question falls into the same bucket as other famously unanswered questions: what is Time, Gravity, Consciousness or Mind?

Intelligence has remarkable characteristics such as Reasoning, Learning, Natural Language Processing, Creativity, Imagination and Numerical Calculation. In a layman's view, all of these can be reinterpreted as results of information processing, yet there is one distinguishing factor which implies that Intelligence is not mere information processing: its advanced forms occur in conscious creatures such as vertebrates and arthropods, but not in plants, algae or microbes. Therefore, Intelligence is best seen as a cognitive context, while its characteristics are cognitive tasks.

Based on the above, it is not difficult to deduce the meaning of Artificial Intelligence (A.I.): a form of intelligence which emerges in non-biological, man-made systems such as computers or any other kind of information processing system. Despite its synthetic nature, A.I. also shares many characteristics with natural intelligence, as discussed in the next part.

Reasoning

As humans we reason in different ways, so Logic as a subject has multiple disciplines: Propositional Logic for IF, ELSE, AND, OR style reasoning; First-Order Logic with the more expressive Universal quantifier (intuitively, for-all) and Existential quantifier (intuitively, there-exists); Temporal Logic, which treats time as a factor; and so on. Computers can also perform these kinds of logic, often in a far more systematic manner. In the 1970s, a new programming paradigm called Logic Programming came to the arena; Prolog, introduced in 1972, became its flagship language, building on earlier symbolic languages such as LISP.

Then in the 1980s, Expert Systems achieved great success in A.I. with new theoretical concepts such as Knowledge Bases, Inference Engines, Forward Chaining and Backward Chaining. As a result, several game-changing products such as MYCIN and XCON were developed and succeeded in their respective domains.
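The forward-chaining idea behind such inference engines can be sketched in a few lines of Python: rules fire whenever all of their premises are already in the fact base, adding new facts until nothing changes. The rule and fact names below are invented for illustration, not taken from MYCIN or XCON.

```python
# Minimal forward-chaining sketch. Each rule is (set of premises, conclusion).
# Fact and rule names here are hypothetical, purely for illustration.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # keep sweeping until no rule fires
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # derive a new fact
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_rash"}, rules)
print(derived)
```

Note how the second rule fires only after the first has added its conclusion; backward chaining would instead start from a goal and search for rules that could establish it.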

Learning

The human brain consists of tens of billions (roughly 86 billion) of microscale information processing units called Neurons.

Neurons are interconnected with each other, forming sophisticated computational structures called Neural Networks, and a single neuron may connect with thousands of others. These networks change continuously, making and breaking numerous connections between neurons; this process is known as Neuroplasticity, the ability to change throughout life, or, in an abstract sense, Learning. Thus, from birth to death we are on a continuous journey of learning and adapting to new environments, and our brains fully support this requirement.

Having analyzed this, in 1943 two scientists, Warren McCulloch and Walter Pitts, introduced a computational model of the neuron that later grew into Artificial Neural Networks (ANNs). The idea was developed further by many computer scientists, and a new field of A.I., Machine Learning (ML), came to the arena. Today ML has achieved tremendous success, together with associated fields such as computational statistics, data analytics and predictive analytics.

The core concept of ML is giving computers the ability to learn. ML offers a handful of techniques for different learning mechanisms, and the most prominent is the ANN discussed above; let's see how it works. In the usual diagram, each circle represents a single neuron, each column represents a layer of neurons, and the connections between neurons are drawn as arrow-lines. Numerical values are fed into the system as inputs. Each arrow-line carries a weight, an initially random value; inside a neuron, the incoming values are multiplied by their weights and summed, and a Bias is added if required. The result is then passed through a pre-selected mathematical function called an Activation Function, whose output travels onward to the next layer. This entire process is called Feed-Forward Propagation.
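A feed-forward pass of this kind can be sketched in plain Python. The sigmoid activation and the tiny two-layer network below use arbitrary, untrained weights chosen purely for illustration.

```python
import math

def sigmoid(z):
    # A classic activation function: squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # One neuron: weight each input, sum, add the bias, then activate.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# A tiny pass: two hidden neurons feeding one output neuron.
# All weights and biases here are illustrative values, not trained ones.
x = [0.5, -1.2]
h1 = neuron(x, [0.4, 0.3], 0.1)
h2 = neuron(x, [-0.6, 0.9], 0.0)
y = neuron([h1, h2], [1.0, -1.0], 0.2)
print(y)
```

Because the sigmoid squashes its input, every intermediate value and the final output land strictly between 0 and 1; with randomly chosen weights the particular number is meaningless until the network is trained.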

Since the initial weights are chosen randomly, the result of Feed-Forward Propagation (after a single execution cycle) does not match the result we anticipate, so another process called Backpropagation is used to adjust the weights and minimize the difference between the anticipated result and the actual result. After many cycles, the weights become tuned to deliver the best output; this entire process is called Training the network.
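The adjust-and-repeat idea can be seen in a drastically simplified setting: a single linear neuron y = w·x trained by gradient descent, which is the one-weight special case of backpropagation. The data, starting weight and learning rate are all illustrative.

```python
# Training sketch for a single linear neuron y = w * x.
# Targets follow y = 2x, so the weight should converge towards 2.0.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.1           # "randomly" initialised weight
lr = 0.01         # learning rate: how big each adjustment step is

for cycle in range(200):            # many feed-forward / adjust cycles
    for x, target in samples:
        prediction = w * x          # feed-forward step
        error = prediction - target
        w -= lr * error * x         # gradient of squared error w.r.t. w

print(round(w, 2))  # prints 2.0
```

Each cycle shrinks the gap between prediction and target a little; after enough cycles the weight settles at the value that best fits the data, which is exactly what "training the network" means, scaled down to one weight.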

Mathematically, a relationship between a Domain and a Range is defined as y = f(x), and in computer science this corresponds to finding the proper Algorithm which produces the required output for given inputs. In an ANN, however, a single neuron computes y = f(w₁x₁ + w₂x₂ + … + wₙxₙ + b), where the w values are the respective weights that need to be tweaked and adjusted during the training process.

This is a highly abstract explanation of how learning is achieved with ANNs; each concept, such as activation functions, weights and layers, is inspired by the natural neural networks found in animal brains.

Further, a sophisticated version of the ANN, the Deep Neural Network (DNN), has led to a revolutionary sub-discipline of ML called Deep Learning. Unlike conventional ANNs, DNNs comprise many hidden layers, which makes them capable of more complex tasks such as Image Processing and Voice Recognition.

For tech geeks, Google's recently introduced TensorFlow would be a good choice for playing with these amazing technologies. (https://www.tensorflow.org)

Natural Language Processing

One of the distinguishing behaviors between humans and other intelligent species such as chimpanzees, dolphins and elephants is the use of natural languages. Recent estimates count almost seven thousand (7,000) living languages, and the interesting fact is that none of them was deliberately created by linguists; they emerged and evolved through the generations. This implies that language is an emergent property of human cognition rather than a deliberate, manual creation. In complex systems, emergence is defined as the whole having properties that its parts do not have. For example, snowflakes are an emergent property of our physical system: there is no creator behind these complexly beautiful shapes, yet they emerge from the dynamics of the weather.

In A.I., Natural Language Processing (NLP) plays a significant role, since the famous Turing Test is entirely based on it. In 1950, Alan Turing, the famous computer scientist, proposed a test of a machine's ability to exhibit intelligent behavior in an utterly simple way: when humans can hold live conversations with a computer system and cannot even distinguish whether the other party is a human or a machine, that computer system is said to be intelligent.

In computer science, NLP is studied as a collective discipline of linguistics, statistics and cognitive science. In recent developments, Machine Learning has also made a significant contribution to Machine Translation; Google Translate is a good example.

Language Understanding is another important topic in NLP, where extracting the semantics, or meaning, of a given sentence is a quite complex process. For example, the sentence he eats rice is fully meaningful to a person who knows that rice is an edible food, whereas to a person who does not know whether rice is a food, the sentence is only partly meaningful. A common strategy for extracting semantics is to first detect the Subject, Object and Verb of a given sentence, since this classification is independent of the language (whether Norwegian, Sinhala or English); then convert it into First-Order-Logic statements; and finally process it through a suitable logic programming language such as Prolog against a large ontology. Ontologies are complex, hierarchically structured or mesh-like entity graphs which can be used to derive additional knowledge from known information. For example, the word human can be passed to an ontology and its associated information retrieved: humans are mammals, humans are the most intelligent species, and so on. The state of the art here is IBM Watson, a comprehensive NLP system that can understand English. In 2011 it competed against and beat human contestants on the famous question-answering show Jeopardy!. Today, Watson is available as a series of cloud services. Tech geeks, please visit: https://www.ibm.com/watson
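The pipeline described above, detect subject/verb/object, then consult an ontology, can be sketched in miniature. The sentence pattern and the tiny hand-made ontology below are invented for illustration; a real system would use a full parser and a large knowledge graph.

```python
# Toy semantics pipeline: SVO extraction plus an is_a ontology walk.
# The ontology entries here are hypothetical examples.
ontology = {
    "human":  {"is_a": "mammal"},
    "mammal": {"is_a": "animal"},
    "rice":   {"is_a": "food"},
}

def parse_svo(sentence):
    # Assumes a bare "subject verb object" sentence, e.g. "he eats rice".
    subject, verb, obj = sentence.lower().split()
    return subject, verb, obj

def ancestors(term):
    # Walk the is_a links to derive additional knowledge about a term.
    chain = []
    while term in ontology:
        term = ontology[term]["is_a"]
        chain.append(term)
    return chain

s, v, o = parse_svo("He eats rice")
print(s, v, o)             # he eats rice
print(ancestors("rice"))   # ['food']
print(ancestors("human"))  # ['mammal', 'animal']
```

Knowing that rice is_a food is precisely the background knowledge that makes "he eats rice" fully meaningful to a listener; the ontology supplies what the sentence itself leaves unsaid.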

Image Processing

As humans, we perceive the world visually in a rather complex manner; indeed, vision is our most prominent way of perceiving the world. Biologically, the brain processes vision in two different ways.

The abstract vision, which detects only relative motion, is effectively hardwired into the brainstem and triggers reflexive behaviors such as the instantaneous hand movements that protect the eyes or head when a collision is imminent. This type of vision is more prominent in reptilian brains, as it is necessary for their survival.

The complex vision, which detects and analyzes objects and understands their associations, is processed in the Visual Cortex. In fact, around thirty (30) different areas inside the visual cortex perform parallel processing and together create our sophisticated visual system.

In A.I., a separate field called Image Processing performs the above tasks in radically different ways.

Inspired by the neural architecture of human and other animal brains, Convolutional Neural Networks (CNNs) are trained with powerful GPUs and massive data sets to identify different shapes: human faces, handwritten text, possible cancer cells in X-rays, possible glaucoma symptoms in retina images, and so on.

CNNs are deep neural networks comprising many hidden layers (roughly 10 to 150). Each layer learns to identify different features of an image, such as particular edges, curves and sub-shapes. While the middle layers are busy extracting features, the final layers handle the classification and grouping tasks.
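The feature-extraction step at the heart of each CNN layer is a convolution: slide a small kernel over the image and sum the element-wise products at each position. The 5×5 "image" and the vertical-edge kernel below are toy values chosen to make the effect visible; real CNNs learn their kernel values during training.

```python
# One convolution step, the building block of a CNN layer.
# The image is black (0) on the left, white (1) on the right.
image = [
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]
kernel = [            # a classic kernel that responds to vertical edges
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):        # slide down
        row = []
        for j in range(len(image[0]) - kw + 1): # slide across
            acc = sum(kernel[a][b] * image[i + a][j + b]
                      for a in range(kh) for b in range(kw))
            row.append(acc)
        out.append(row)
    return out

feature_map = convolve(image, kernel)
print(feature_map)  # each row is [3, 3, 0]: the edge lights up
```

The feature map is large exactly where the black-to-white edge sits and zero over the uniform region, which is how an early CNN layer "detects particular edges"; deeper layers combine many such maps into curves, sub-shapes and eventually whole objects.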

As a climax of object recognition, Google's Inception v3/v4 network models, written in TensorFlow, have been meticulously trained on a thousand (1,000) classes of everyday objects such as laptops, mice, pens and desks. This trained knowledge is freely available, so developers can play with it:


https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android

Synthetic Life & Synthetic Psyche

Our psyche seems to be the most complex and least understood subject of the modern era. The unsolved (and almost untouched) mysteries of how Mind, Consciousness and Cognition really work are the top secrets that need to be unlocked in order to fully understand the nature of intelligence.

Conclusively, the Holy Grail of A.I. would be a synthetic form of psyche that perceives the world through various electronic sensors, generates thought-like pieces of information from those perceptions, and subsequently triggers all the cognitive features such as Reasoning, Learning, Imagination and Creativity.

Finally, this can be thought of as a synthetic form of life which provides the medium for a synthetic form of intelligence, what we usually refer to as Artificial Intelligence, to emerge and evolve one day in an electronic medium like cyberspace.
