Michael Wooldridge’s book on Artificial Intelligence (AI) is one of the first books that instilled in me a love for computer science (CS) and AI. It walked me through the entire development of AI, examining the ideas and enthusiasm of the computer scientists and AI pioneers, and showing how their successes and failures shaped the field as we see and use it today.

Here are some interesting elements from this book that sparked my interest as a CS enthusiast and made me want to delve deeper into the subject:

Alan Turing created a test for human-like machine intelligence

The British mathematician introduced the idea of ‘thinking’ machines. He designed the Turing test, which holds that a machine is intelligent if its responses to a series of questions are ‘indistinguishable’ from those of a human.

The test was a great idea but has its shortcomings, as some simple programs may fool us into seeing human-like intelligence. The chatbot Eugene Goostman is claimed to have passed the test. I wonder how Alexa or Siri would perform on the test!! I am also curious whether Botnik Studios’ program, which created the chapter ‘Harry Potter and the Portrait of What Looked Like a Large Pile of Ash’, would pass it.

Searle argued that machines cannot have human awareness and understanding

John Searle disputed the cognitive abilities of machines, arguing that human-like machines only ‘simulate’ the thought and comprehension of the human mind in ‘as-if’ scenarios, without its true understanding and consciousness.

He argued this through the Chinese Room example: a computer is like a man who answers a Chinese questionnaire without knowing or understanding the language, passing the test simply by following instructions written in English. So weak AI like Google Assistant is actually based on the manipulation of symbols, and its behavior is syntactic, not intelligent.
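Searle’s point can be made concrete with a toy sketch (my own illustration, not from the book): a program that answers Chinese questions by pure lookup in a fixed rule book, producing correct replies with no grasp of their meaning.

```python
# A toy "Chinese Room": map input symbols to output symbols via a rule book.
# The rule book below is invented for illustration -- the program "passes"
# by symbol manipulation alone, without understanding Chinese.

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "I am fine, thanks."
    "你叫什么名字?": "我叫小明.",     # "What is your name?" -> "I am Xiaoming."
}

def chinese_room(question: str) -> str:
    """Answer by pure lookup: syntax, not semantics."""
    return RULE_BOOK.get(question, "对不起, 我不明白.")  # "Sorry, I don't understand."

reply = chinese_room("你好吗?")   # a fluent answer, with zero understanding
```

The room’s behavior is entirely determined by the table; nothing inside it “knows” what any symbol refers to, which is exactly Searle’s objection.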

Pioneers worked on components of intelligence rather than ‘General AI’

AI based on general human intelligence is too complex and broad to understand and simulate. So the AI pioneers focused on only a few elements of human intelligence, like perception, reasoning, or problem solving, and created programs based on those selective components. For example, a program playing a board game would use only problem-solving skills.

Heydays of AI brought in the ‘Search tree’ concept

During the golden age of AI development, some of the best computer enthusiasts brainstormed great ideas at the summer school organized by John McCarthy, working toward understanding human intelligence and developing computer programs and tools like McCarthy’s LISP around it.

The ‘search’ technique was developed: generating a search tree and tackling it with rules of thumb, or heuristics. This was the basis of IBM’s Deep Blue architecture, which could process 200 million positions/sec, creating a search tree that Kasparov’s mind could not comprehend!!
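The idea can be sketched in a few lines: minimax search with a depth cutoff, where a heuristic scores the positions at the frontier instead of searching to the end of the game. The “game” below is a made-up toy, not chess, and the heuristic is invented for illustration.

```python
# Heuristic game-tree search: expand the tree to a fixed depth, then score
# the frontier positions with a heuristic instead of playing to the end.

def minimax(position, depth, maximizing, children, heuristic):
    """Search the game tree to `depth`, scoring leaves with `heuristic`."""
    kids = children(position)
    if depth == 0 or not kids:                # cutoff or terminal position
        return heuristic(position)
    scores = [minimax(c, depth - 1, not maximizing, children, heuristic)
              for c in kids]
    return max(scores) if maximizing else min(scores)

# Toy game: positions are integers; a move adds 1 or doubles the value.
children = lambda p: [p + 1, p * 2] if p < 50 else []
heuristic = lambda p: p                       # "bigger is better" for the maximizer

best = minimax(1, 4, True, children, heuristic)
```

Deep Blue’s real evaluation function and pruning were vastly more sophisticated, but the skeleton is the same: a tree of possibilities, cut off by depth, scored by heuristics.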

Another remarkable creation was SHAKEY, the first mobile autonomous robot, which could accomplish real-world tasks. But it did not meet the goals of AI enthusiasts because of its cumbersome design and its limited ability to perceive and interpret its environment.

Knowledge-based and logical systems emerged after ‘AI Winter’

The optimism of AI enthusiasts did not translate into enough practical applications. But soon researchers developed knowledge-based expert systems like MYCIN, which could solve problems using rules derived from human expert knowledge about blood diseases. Researchers focused on such systems believing they would surpass human performance, but they still needed human intervention to validate their conclusions, and each system was restricted to the domain it was built for. For instance, when I use an automated translator for German-to-English translations, I need to correct the output according to the scenario, based on my knowledge of German grammar and genders.
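The core mechanism of such systems can be sketched as forward chaining over IF-THEN rules: rules fire whenever their conditions match the known facts, until no new conclusions appear. The rules and fact names below are invented for illustration; they are not MYCIN’s actual rules.

```python
# A minimal rule-based expert system: IF all conditions hold THEN add the
# conclusion; repeat until the set of facts stops growing (forward chaining).
# Rule contents are hypothetical, made up for this sketch.

RULES = [
    ({"gram_negative", "rod_shaped"}, "likely_e_coli"),
    ({"likely_e_coli", "urinary_infection"}, "recommend_treatment_X"),
]

def forward_chain(facts, rules):
    """Fire rules whose conditions are all satisfied until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

conclusions = forward_chain({"gram_negative", "rod_shaped", "urinary_infection"}, RULES)
```

The brittleness the book describes is visible here: the system can only ever conclude what its hand-written rules cover, which is why a human expert still had to validate the output.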

Mathematical logic, along with tools and languages like PROLOG, was developed, but researchers still could not use it to tackle ‘common sense’ problems, let alone general AI.

Robotics, behavioral AI, driverless cars ushered in revolutionary AI

Researchers like Rodney Brooks were not satisfied with the growth trajectory of knowledge- and logic-based AI and suggested creating machines that could act in the physical world. He introduced a robotics-based ‘new AI’ that could exhibit intelligence beyond knowledge and logic; for example, autonomous cleaning robots that explore their surroundings using a combination of behavioral rules.
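The flavor of this behavior-based approach can be sketched as a priority list of simple behaviors, where the highest-priority behavior that applies wins and there is no world model or symbolic reasoning. The sensor readings and behavior names here are my own invention, loosely in the spirit of Brooks’s architecture.

```python
# Behavior-based control sketch: check simple behaviors in priority order;
# the first one that fires decides the action. No planning, no world model.
# Sensor keys and action names are hypothetical.

def avoid_obstacle(sensors):
    return "turn_left" if sensors.get("obstacle_ahead") else None

def recharge(sensors):
    return "seek_dock" if sensors.get("battery_low") else None

def wander(sensors):
    return "move_forward"                     # default: always applies

BEHAVIORS = [avoid_obstacle, recharge, wander]   # highest priority first

def control(sensors):
    """Return the action of the highest-priority behavior that fires."""
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

action = control({"obstacle_ahead": True, "battery_low": True})
```

Seemingly purposeful exploration emerges from the interaction of these trivial rules with the environment, which was exactly Brooks’s point.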

A significant milestone of this new AI approach was STANLEY, a computer-controlled vehicle that could travel 132 miles autonomously. The journey from the SHAKEY robot to STANLEY brought in the idea of driverless cars, which became a revolutionary success.

Machine learning became the focal point of the modern era

One of the greatest achievements of present-day AI is that machines can learn to perform tasks without explicit human instructions.

It could be ‘supervised learning’, where we provide abundant data for computers to train themselves to perform tasks like face recognition, or the more advanced ‘reinforcement learning’, where a machine makes decisions and continuously improves based on the feedback it receives. The best example is DeepMind’s AlphaGo program, which trained itself to play the intuitive game of ‘Go’ through reinforcement learning using neural networks, ultimately defeating the world Go champion Lee Sedol. This was a mind-boggling AI feat. Dr Demis Hassabis’ Strachey lecture (https://podcasts.ox.ac.uk/artificial-intelligence-and-future) about the journey of AlphaGo reaffirmed my interest in computer science and its application in the gaming industry.
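The reinforcement-learning idea of improving from feedback can be sketched with tabular Q-learning, its simplest form (AlphaGo, by contrast, combines it with deep neural networks and tree search). The tiny corridor environment below is my own invention: the agent is rewarded only for reaching the rightmost cell, and learns from that feedback alone.

```python
# Tabular Q-learning sketch: the agent learns action values purely from
# reward feedback, with no instructions about which moves are good.

import random

N_STATES, ACTIONS = 5, [-1, +1]             # a 5-cell corridor; move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2       # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move within the corridor; reward 1 only on reaching the last cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                         # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPSILON:        # explore occasionally
            a = random.choice(ACTIONS)
        else:                                # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # learn from feedback
        s = nxt

# The greedy policy read off the learned Q-table.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

After training, the greedy policy moves right from every cell: the agent has discovered the rewarding behavior entirely from trial, error, and feedback.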

Artificial Intelligence debate goes on…

AI has become indispensable, and we use it almost everywhere, in digital assistants, GPS, online shopping, and trading systems, often without even realizing it. I liked the term ‘cognitive prosthetics’ from the book, describing how AI can help us in decision making. The concerns about unemployment, privacy, algorithmic bias, and autonomous weapons are the negative sides that are already being debated. But the biggest fear is the ‘Singularity’, where machines supersede humans. I understood this further after listening to Stuart Russell’s TED talk, ‘3 principles for creating safer AI’. It calls for measures like the ‘Three Laws of Robotics’ and some strong principles around ethical AI.
