Four million years ago, our ancestors were an ape-like species that took a hard decision.
Associate Tech Lead
Undoubtedly, the root cause is Intelligence. We proudly say that we are the most intelligent species on this planet, since we have continuously been inventing new things and unveiling nature’s secrets, hence understanding the entire universe. Yet when someone asks, “What is Intelligence?”, the question goes into the same bucket where other famous unanswered questions have been dropped, such as: what is Time, Gravity, Consciousness or Mind?
Intelligence has amazing characteristics such as Reasoning, Learning, Natural Language Processing, Creativity, Imagination, Numerical Calculation...etc. In a layman’s view, all these things can be reinterpreted as the results of information processing, yet there is a single distinguishing factor which implies that Intelligence is not merely information processing: its advanced forms occur in conscious creatures such as vertebrates and arthropods, but not in plants, algae or microbes. Therefore, it is reasonable to regard Intelligence as a cognitive context, while its characteristics are rather cognitive tasks.
Based on the above facts, it is not difficult to deduce the meaning of Artificial Intelligence (A.I): a form of intelligence which emerges in non-biological, man-made systems such as computers or any other kind of information processing system. Despite its synthetic nature, A.I shares many characteristics with natural intelligence, as discussed in the next part.
As humans we reason in different ways, and thus Logic as a subject has multiple disciplines: Propositional Logic for IF, ELSE, AND, OR type reasoning; First-Order Logic with the more expressive Universal quantifier (the physical intuition of for-all) and Existential quantifier (the physical intuition of there-exists); and Temporal Logic, which treats Time as a factor...etc. Computers can also perform these types of reasoning, in an even more systematic manner. In the 1970s, a new programming paradigm called Logic Programming came to the arena; LISP (from the late 1950s) was one of its predecessors, and later Prolog took its place as the paradigm’s flagship language.
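These kinds of reasoning map directly onto ordinary program constructs; as a minimal sketch in Python (the facts and domain below are invented purely for illustration), propositional connectives become boolean expressions, and quantifiers over a finite domain become `all()` and `any()`:

```python
# Propositional logic: IF/ELSE, AND, OR reasoning as boolean expressions.
def implies(p, q):
    """Material implication: p -> q is equivalent to (not p) or q."""
    return (not p) or q

# First-order-style quantifiers over a finite domain:
# "for-all" becomes all(), "there-exists" becomes any().
domain = [2, 4, 6, 8]
all_even = all(x % 2 == 0 for x in domain)   # universal quantifier
exists_gt_5 = any(x > 5 for x in domain)     # existential quantifier

print(implies(True, False))   # False: a true premise cannot imply a false conclusion
print(all_even, exists_gt_5)  # True True
```

Of course, real logic programming languages such as Prolog go much further, resolving queries against rules by unification rather than by evaluating pre-written expressions.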
Then in the 1970s and 80s, Expert Systems achieved great success in A.I with new theoretical concepts such as Knowledge Bases, Inference Engines, Forward-chaining and Backward-chaining...etc. As a result, several game-changing products such as MYCIN and XCON were developed and succeeded in their respective domains.
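Forward-chaining can be sketched as repeatedly applying IF-THEN rules to a knowledge base of facts until no new facts appear. The toy facts and rules below are invented for illustration, loosely in the spirit of medical systems like MYCIN:

```python
# A toy forward-chaining inference engine: a knowledge base of facts
# plus IF-THEN rules of the form (premises -> conclusion).
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

changed = True
while changed:                      # keep chaining until a fixed point
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # fire the rule, deriving a new fact
            changed = True

print(facts)  # now also contains 'flu_suspected' and 'recommend_rest'
```

Backward-chaining runs the same rules in the opposite direction: it starts from a goal and works back to the facts that would support it.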
The human brain consists of roughly a hundred billion microscale information processing units called Neurons.
Having analyzed this, in 1943 two scientists, Warren McCulloch and Walter Pitts, introduced a new computational model of the neuron which grew into Artificial Neural Networks (ANN). The idea was later developed by many computer scientists, and thus a new field in A.I, Machine Learning (ML), came to the arena. Today, tremendous success has been achieved in ML together with several associated fields such as computational statistics, data analytics and predictive analytics...etc.
The core concept of ML is giving computers the ability to learn. ML has a handful of different techniques for different learning mechanisms, and the most prominent is the ANN discussed above, so let’s see how it works. In the usual diagram of an ANN, each circle represents a single neuron, each column represents a single layer of neurons, and the connections between neurons are represented by arrow-lines. Numerical values are fed to the system as inputs. Each arrow-line carries a weight, an initially random value by which the travelling value is multiplied; inside each neuron the weighted values are summed, a Bias is added if required, and the result goes through a pre-selected mathematical function called an Activation function. This entire process of data travelling from one layer to the next is called Feed-Forward-Propagation.
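Feed-Forward-Propagation can be sketched in a few lines of NumPy. The layer sizes here are arbitrary, the weights start random exactly as described above, and the sigmoid is just one common choice of activation function:

```python
import numpy as np

def sigmoid(z):
    """A common activation function, squashing any value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# A tiny network: 3 inputs -> 4 hidden neurons -> 1 output neuron.
# Weights begin as random values; biases begin at zero.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def feed_forward(x):
    h = sigmoid(W1 @ x + b1)   # layer 1: weighted sum + bias, then activation
    y = sigmoid(W2 @ h + b2)   # layer 2: the same pattern again
    return y

print(feed_forward(np.array([0.5, -1.0, 2.0])))  # some value in (0, 1)
```

With random weights the output is, of course, meaningless; making it meaningful is exactly what training, described next, is for.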
Since the initial weights are selected randomly, the result given by Feed-Forward-Propagation (after a single execution cycle) does not match the result that we anticipate, so a second process called Backpropagation is used to adjust the weights in order to minimize the difference between the anticipated result and the actual result. After many such cycles the weights become adjusted to deliver the best output, and this entire process is called Training the network.
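Training can be sketched with a single sigmoid neuron learning the logical AND function. The learning rate and cycle count below are arbitrary choices, and the weight-update rule is the hand-derived gradient of the squared-error loss rather than full multi-layer backpropagation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Truth table for logical AND: the inputs and the anticipated results.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(1)
w, b = rng.normal(size=2), 0.0    # randomly initialized weights and bias

for _ in range(5000):             # many execution cycles, as described above
    y = sigmoid(X @ w + b)        # feed-forward
    err = y - t                   # difference from the anticipated result
    grad = err * y * (1 - y)      # chain rule: squared error through sigmoid
    w -= 0.5 * (X.T @ grad)       # adjust the weights against the gradient
    b -= 0.5 * grad.sum()         # adjust the bias the same way

print(np.round(sigmoid(X @ w + b)))  # approaches [0, 0, 0, 1]
```

The same adjust-against-the-gradient idea, propagated backwards through every layer, is what Backpropagation does in a full multi-layer network.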
Mathematically, a relationship between a Domain and a Range is defined as y = f(x), and in computer science this approach tallies with finding the proper Algorithm which produces the required output for the given inputs. But in an ANN, each neuron instead computes y = f(∑ wᵢxᵢ + b), where the sum runs over i = 1…n, the wᵢ values are the respective weights that need to be tweaked and adjusted during the training process, and b is the bias.
This is a highly abstract explanation of how Learning is achieved with an ANN, and each concept, such as Activation functions, Weights and Layers, is inspired by the natural neural networks found in animal brains.
Further, a sophisticated version of the ANN called the Deep Neural Network (DNN) leads to a revolutionary sub-discipline of ML called Deep Learning. Unlike conventional ANNs, DNNs comprise many hidden layers, sometimes hundreds, which make them capable of more complex tasks such as Image Processing and Voice Recognition...etc.
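A “deep” network simply stacks the feed-forward pattern many times. The sketch below builds a forward pass through five hidden layers; the sizes are arbitrary, and ReLU is used here as the activation commonly favoured in deep networks:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(z):
    """ReLU activation: passes positives through, clips negatives to zero."""
    return np.maximum(0.0, z)

# Layer sizes: 8 inputs, five hidden layers of 16 neurons, 1 output.
sizes = [8, 16, 16, 16, 16, 16, 1]
layers = [(rng.normal(scale=0.5, size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def deep_forward(x):
    for W, b in layers[:-1]:
        x = relu(W @ x + b)       # each hidden layer: weighted sum, bias, ReLU
    W, b = layers[-1]
    return W @ x + b              # linear output layer

out = deep_forward(rng.normal(size=8))
print(out.shape)  # a single output value
```

In practice one would not hand-roll this; frameworks such as TensorFlow, mentioned below, build, train and run such stacks automatically.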
For Tech Geeks, Google’s recently introduced TensorFlow would be a good selection to play with these amazing technologies. (https://www.tensorflow.org)
One of the distinguishing behaviors between Humans and other intelligent species such as Chimpanzees, Dolphins and Elephants is the ability to use natural languages. The latest statistics show that almost seven thousand (7000) different languages exist, and the interesting fact is that none of them was deliberately created by linguists; they emerged and evolved throughout the generations. This implies that language is an Emergent Property of human cognition rather than a deliberate, manual creation. In complex systems, emergence is defined as the whole having properties that its parts do not have. For example, snowflakes are one of the emergent properties of our physical system: there is no creator for these complex structures; they simply emerge.
As humans, we perceive the world visually in a pretty complex manner. Indeed, vision is our most prominent way of perceiving the world. Biologically, the brain processes vision in two different ways.
The abstract vision, which only detects relative motion, is literally hardcoded into the brainstem and triggers reflexive behaviors such as instantaneous hand movements to protect the eyes or head when a collision is about to happen. This type of vision is more prominent in reptilian brains, as it is necessary for their survival.
The complex vision, which detects and analyzes objects and understands their associations, is processed in the Visual Cortex. In fact, there are about thirty (30) different areas inside the visual cortex which perform parallel processing and together create our sophisticated visual system.
In A.I, a separate field called Image Processing performs the above tasks in radically different ways.
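As a tiny taste of that field, the sketch below detects an edge in a synthetic image by convolving it with a Sobel kernel, one of the classic image-processing operations. The image itself is made up for the example:

```python
import numpy as np

# A synthetic 8x8 grayscale image: dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# Sobel kernel for horizontal intensity changes -- a classic edge detector.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    """Naive 'valid' 2-D convolution (strictly, cross-correlation, as is
    conventional in image-processing libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

edges = convolve2d(img, sobel_x)
print(np.abs(edges).max())  # the strongest response sits on the dark/bright boundary
```

Notably, the same convolution operation, with kernels learned during training rather than fixed by hand, is the core of the convolutional DNNs used for modern image recognition.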
Our psyche seems to be the most complex and least understood subject of the modern era. The unsolved (and almost untouched) mysteries of how Mind, Consciousness and Cognition really work are the top secrets that need to be unlocked in order to fully understand the nature of intelligence.
Conclusively, the Holy Grail of A.I will be a synthetic form of a psyche that perceives the world through various electronic sensors, generates thoughts as pieces of information from those perceptions, and subsequently triggers all the cognitive features such as Reasoning, Learning, Imagination, Creativity...etc.
Finally, this can be thought of as a synthetic form of life which provides the medium for a synthetic form of intelligence, or what we usually refer to as Artificial Intelligence, to emerge and evolve in an electronic medium like cyberspace, one day.