It is remarkable that we use so much technology in our daily lives whilst most of us know very little about the underlying mechanisms that make it work. From online dating to ordering our shopping through Alexa to being given suggestions by Netflix of shows we might like, we are interacting with AI all the time. According to experts in the field, the intelligence of machines is developing at an extraordinary pace. Because of this explosion in machine intelligence, it’s important for everyone — from your grandma to a 5th grader — to know the basics of AI. This is not only because we use so many different applications of AI in our daily lives, but because, at the current pace, the influence of AI on the future of humanity is only going to grow. Furthermore, we are all part of this grand experiment, with the majority of us spending ever more of our work and personal lives online. We are choosing and shaping our digital existence, so it makes sense to understand the underlying technology at play.
1- What does artificial intelligence mean? Artificial intelligence refers to the intelligence of machines. It is often described as the capacity of machines to learn and solve problems, in a way loosely comparable to a human’s or an animal’s brain. Unlike a human or an animal, a machine needs an incredible amount of programming to recognize objects and understand how they relate to other variables, but the capacities of machines for these kinds of tasks — object recognition, facial recognition, gesture recognition — are increasing rapidly. The term artificial intelligence was coined by American computer scientist John McCarthy. He organized a meeting in the summer of 1956, known as the Dartmouth Conference, that would later be viewed as the founding of AI as a field of science. Other recognized pioneers included Alan Turing, Marvin Minsky, Allen Newell and Herbert A. Simon, with Minsky, Newell and Simon championing the approach known as “symbolic reasoning”. Alan Turing, known for many important achievements — such as code breaking during WW2 — also proposed the Turing Test as a standard for judging whether a machine is intelligent. He suggested that rather than copying an adult brain, it would be better to simulate a child’s brain and then teach it to learn. Nifty programmers have been teaching machines to learn increasingly complex tasks ever since. Today, when applications ask us to verify that we are human by selecting pictures in a CAPTCHA test, we are effectively being given a reversed Turing Test.
2- How is artificial intelligence different from biological intelligence? Biological intelligence refers to the somatic intelligence that characterizes humans and animals and that operates independently of our conscious control. Our hair grows, our cells repair themselves, our blood is oxygenated as we breathe. We have multiple systems operating in parallel: our immune, cardiovascular, hormonal and muscular systems are interconnected and self-regulating. Biological intelligence is highly complex and geared towards the survival of the species. Robots, in contrast, do not have the same systemic complexity.
3- What is an algorithm? The buzzword of our current zeitgeist, the word algorithm originates from the name of the 9th-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī. An algorithm refers to a set of rules or instructions that defines a sequence of operations. A computer follows these instructions to solve a given problem, and there are usually many possible algorithms for the same problem; the craft lies in finding the one that reaches the goal most efficiently. Not all instructions look the same, however, because there are many different programming languages, and each language has its own way of organizing commands, known as its syntax. Common programming languages include JavaScript for game and app development, Python for AI and machine learning, R for statistics, and C++ for game development and graphics.
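To make this concrete, here is a small illustrative sketch in Python (my own toy example, not taken from the article) showing two different algorithms for the same problem, looking up a value in a sorted list: a simple linear search and a more efficient binary search.

```python
# Two different algorithms for the same problem: finding a value in a sorted list.
# Linear search checks each item in turn; binary search halves the search space
# at every step, so it needs far fewer comparisons on long lists.

def linear_search(items, target):
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # not found

def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # not found

numbers = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(linear_search(numbers, 23))  # 5
print(binary_search(numbers, 23))  # 5
```

Both functions give the same answer, but binary search gets there in far fewer steps on long lists, which is exactly the sense in which one algorithm can reach a goal more efficiently than another.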
4- What is deep learning? Deep learning can be defined as “a technique of machine learning that involves multiple, hierarchically organized layers of artificial neurons”. It relies on brain-like structures called artificial neural networks, which are fed huge amounts of data and trained to recognize patterns. According to Jeff Dean, the “deep” in deep learning refers to the fact that these neural networks have multiple layers.
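As a rough illustration of that layered structure, here is a minimal Python sketch (using NumPy, with made-up random weights rather than anything trained) of data passing through several layers of artificial neurons.

```python
import numpy as np

# A minimal sketch of the "deep" in deep learning: an input passes through
# several layers of artificial neurons, each layer transforming the output
# of the one before it. The weights here are random, so the network is
# untrained; learning would adjust these numbers until the output is useful.

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each neuron sums its weighted inputs, adds a bias, and applies a
    # non-linearity (here ReLU: negative values become zero).
    return np.maximum(0, inputs @ weights + biases)

x = rng.random(4)                            # a tiny "input", e.g. 4 pixel values
w1, b1 = rng.random((4, 8)), rng.random(8)   # layer 1: 4 inputs -> 8 neurons
w2, b2 = rng.random((8, 8)), rng.random(8)   # layer 2: 8 -> 8 neurons
w3, b3 = rng.random((8, 3)), rng.random(3)   # layer 3: 8 -> 3 outputs

h1 = layer(x, w1, b1)
h2 = layer(h1, w2, b2)
output = layer(h2, w3, b3)
print(output)  # three numbers summarizing what the network "makes" of the input
```

In a real system, training would repeatedly adjust those weights so that the final numbers become meaningful, for example the likelihood that an image contains a particular object.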
5- What has been achieved through deep learning? Deep learning has led to major scientific and commercial achievements across a range of practical domains. One of the most advanced examples is speech recognition. Many of us use speech recognition software to help with everyday tasks, whether it is Siri on an iPhone or Alexa in the home, and it is becoming increasingly accurate. The technology has been applied in accessibility tools for people who are deaf or have other disabilities, as well as in military contexts. An application that features heavily in the news is game playing: the computer Deep Blue beat Garry Kasparov at chess in 1997 and, more recently, AlphaGo, which relies on deep learning, beat Lee Sedol at the game of Go. As mentioned, deep learning requires a large amount of data to detect patterns. By processing medical records, computers can detect the risk of a disease by identifying early risk factors, so there is much scope for applying deep learning in healthcare; through early identification, it is proposed that AI could reduce medical costs and, in theory, prevent disease. Do you remember when Facebook asked you to tag pictures of yourself or your friends, whereas now the name of the person is suggested to you? That, again, is a deep learning algorithm: from the analysis of many pictures of your friends with their names and profiles, it is now possible for AI to recognize faces.
6- What are neural networks? A human brain cell is called a neuron, and human brains are highly sophisticated at processing information: we can effortlessly recognize numbers written on a page. For a computer, recognizing a handwritten number is much harder. To achieve this kind of task, artificial neural networks were created to enable computers to learn things such as recognizing handwritten digits. Neural networks have been widely used for the analysis of images; they detect patterns and make sense of them. Input is taken into the network, abstractions are built up across multiple layers, and then an output is produced.
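For a flavour of what that looks like in practice, here is a small illustrative example in Python using scikit-learn’s built-in collection of 8x8 handwritten digit images, a deliberately tiny stand-in for the large systems described above: a neural network with one hidden layer learns to map pixel inputs to digit labels.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A small neural network learning to recognize handwritten digits (8x8 images).
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# One hidden layer of 64 artificial neurons sits between the 64 pixel inputs
# and the 10 possible digit outputs (0-9).
network = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
network.fit(X_train, y_train)         # learn from labelled example images
print(network.score(X_test, y_test))  # accuracy on digits it has never seen
```

Running this typically gives an accuracy well above 90% on unseen digits, which is the kind of pattern recognition the paragraph above describes, just at a miniature scale.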
7- What is reinforcement learning? The idea of reinforcement originally comes from psychology, where animals can be trained to perform behaviours in return for rewards and, conversely, behaviour can be discouraged through negative consequences; reinforcement, in other words, can be both positive and negative. In the context of AI, reinforcement learning operates on the same principle: an algorithm, often called an agent, takes actions towards a goal and obtains rewards along the way. Remarkably, the reward can be delayed, and the algorithm may have to carry out a long sequence of steps before receiving it. The computer optimizes to collect as much reward as possible: it repeats actions that increase its chances of winning and avoids actions that reduce them. Thus, the agent has to learn which sequences of actions lead to winning the game. It is noteworthy that such algorithms require a huge amount of data, typically many played games, to learn how to win.
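To show the idea of delayed reward in code, here is a minimal toy sketch in Python (my own illustration, not any system mentioned above) of tabular Q-learning: an agent on a five-square corridor is rewarded only when it reaches the rightmost square, so it must learn that a whole chain of “move right” actions pays off.

```python
import random

# A minimal tabular Q-learning sketch: an agent on a 5-square corridor earns
# a reward only when it reaches the rightmost square, so it must learn that
# a whole sequence of "right" moves leads to the delayed reward.

N_STATES, ACTIONS = 5, ["left", "right"]
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    new_state = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if new_state == N_STATES - 1 else 0.0
    return new_state, reward

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Mostly pick the action that looks best so far, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        new_state, reward = step(state, action)
        best_next = max(q_table[(new_state, a)] for a in ACTIONS)
        # Nudge the estimate for this state/action towards reward + future value.
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = new_state

# After training, the preferred action should be "right" in every square.
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)})
```

A lookup table works here because the corridor has only a handful of situations; deep reinforcement learning replaces the table with a neural network when the number of possible situations, as in Go, becomes astronomically large.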
Whilst AI is a highly technical, ever-expanding and ever-deepening field, that should not stop the general public from being aware of some of its key concepts, a small fraction of which have been covered here. The more we use AI and become intertwined with it in our daily lives, the more it makes sense to understand some of the mechanics behind the techno wizardry.