Evolution of AI

Since the invention of the wheel, every decade has brought striking new technologies. The world has evolved from agrarian societies through industrialization to computers, vehicles, portable devices, and AI. While AI is among the most popular fields of this decade, the idea dates back to 1950, when Alan Turing posed a simple question: "Can a machine imitate human intelligence?" In his seminal paper, "Computing Machinery and Intelligence", he described a game still well known today as the "Turing Test", which he himself called the "Imitation Game". In this game, a human interrogator's goal is to distinguish a human from a computer. Turing predicted that by the year 2000, an average interrogator would have no more than a 70% chance of making the correct identification after five minutes of questioning. Turing was not alone in questioning and believing in the power of computers and the field of AI; several other pioneers developed subsequent programs that helped people believe in the thinking power of computers.

AI was not always what it is today. Its dynamics, methodologies, and results have changed dramatically from what they were in the beginning. DARPA (the Defense Advanced Research Projects Agency) has devised a scheme for thinking about AI in light of four attributes:

  • Perceiving
  • Learning
  • Abstracting
  • Reasoning

On the basis of these four attributes, the evolution of AI is usually divided into three waves.

Handcrafted Knowledge – The first wave of AI:

This wave depended on human experts to translate their knowledge into programs. It was the era of rule-based algorithms, where computers were made to reason through the eyes of a human expert. There was no reasoning, perception, or learning involved on the computer's end; machines were essentially 'crammed' with static rules to follow. The technique is nevertheless very useful and is still in use today, but it demands keen attention, since every step the program may take must be written down explicitly.
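To make this concrete, here is a minimal sketch of first-wave, handcrafted knowledge in Python. The scenario, the rules, and the thresholds are all hypothetical, invented purely for illustration; the point is only that every decision path is written down by a human expert in advance, and the program never learns anything from data.

```python
# First-wave AI sketch: a rule-based classifier whose every decision
# path was hand-written by a human "expert". Thresholds are hypothetical.

def diagnose_loan_risk(income, debt, years_employed):
    """Classify loan risk using fixed, hand-crafted expert rules."""
    if income <= 0:
        return "high"            # rule 1: no income means high risk
    debt_ratio = debt / income
    if debt_ratio > 0.5:
        return "high"            # rule 2: heavy debt load
    if years_employed < 2:
        return "medium"          # rule 3: short employment history
    return "low"                 # no rule fired: default to low risk
```

Notice that improving the system means a human editing the rules; the program itself perceives and learns nothing, which is exactly the limitation of this wave.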

Statistical Learning – The second wave:

This wave has come to dominate the machine learning and deep learning fields in recent decades. The architecture requires huge datasets, which in turn allow computers to 'learn', 'perceive', and respond accordingly. The essence lies in the computer's power to differentiate and identify trends in the data and formulate decisions based on them. This is the highest level of understanding achieved to date. But can computers replicate us even further?
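The contrast with the first wave can be sketched in a few lines of Python. The toy dataset and the nearest-mean classifier below are illustrative assumptions, not any particular production method; what matters is that the decision boundary is estimated from labelled data rather than hand-written as rules.

```python
# Second-wave AI sketch: the "rules" (here, one mean per class) are
# learned from data instead of being written down by an expert.

def fit_nearest_mean(samples, labels):
    """Learn the mean of each class from labelled 1-D samples."""
    means = {}
    for label in set(labels):
        values = [x for x, y in zip(samples, labels) if y == label]
        means[label] = sum(values) / len(values)
    return means

def predict(means, x):
    """Assign x to the class whose learned mean is closest."""
    return min(means, key=lambda label: abs(x - means[label]))

# Toy training data: small values labelled "cat", large ones "dog".
model = fit_nearest_mean([1.0, 1.2, 0.8, 5.0, 5.5, 4.8],
                         ["cat", "cat", "cat", "dog", "dog", "dog"])
```

Feeding the model more or different data changes its behaviour with no human rewriting any rule, which is precisely the shift this wave introduced.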

Next Generation – Cognitive Thinking:

Natural language processing has been a recurring theme throughout AI, first using symbolic techniques from the first wave, then applying statistical methods from the second. However, these approaches fall well short of human reading capabilities, as they miss easily observed elements of how we read and understand text. A new approach is needed to bring together all the elements that humans use for reading.

AI in the present decade has its roots not only in the IT industry but also in applications across various business domains. This has given rise to a number of sub-fields of AI. To learn more, have a look at Sub Fields of AI.
