Sunday, 7 January 2018

Do You Believe in AI Fairy Tales?


https://xkcd.com/1613/
Automatic speech transcription, self-driving cars, a computer program beating the world champion Go player, and computers learning to play video games better than humans: astonishing results that make you wonder what Artificial Intelligence (AI) can achieve now and in the future. Futurist Ray Kurzweil predicts that by 2029 computers will have human-level intelligence and by 2045 computers will be smarter than humans, the so-called "Singularity". Some of us are looking forward to that; others think of it as their worst nightmare. In 2015 several top scientists and entrepreneurs called for caution over AI, as it could be used to create something that cannot be controlled. Scenarios envisioned in movies like 2001: A Space Odyssey or The Terminator, in which AI turns against humans, violating Asimov's first law of robotics, are not the ones we're looking forward to. The question is whether these predictions and worries about the capabilities of AI, now or in the future, are realistic or just fairy tales.

What is AI?

AI is usually defined as the science of making computers do things that require intelligence when done by humans. To get a computer to do anything it requires software; to get it to do smart things it needs algorithms. Today the most common approaches used in AI are supervised learning, transfer learning, unsupervised learning and reinforcement learning. Note that the nowadays popular term Deep Learning is just a form of supervised learning using (special forms of) neural nets.

Supervised learning takes both input and output data (labelled data) and uses algorithms to create computer models that are able to predict the correct label for new input data. Typical applications are image recognition, facial recognition, automatic transcription of audio (speech to text) and automatic translation. Supervised learning takes a lot of data: about 50,000 hours of audio are needed to train a speech transcription system that performs at a human level. Transfer learning is similar to supervised learning, but stores knowledge gained while solving one problem and applies it to a different but related problem, for example applying knowledge gained while learning to recognise cars to recognising trucks.

Unsupervised learning doesn't use labelled data and tries to find patterns in the data; there are, however, few if any successful practical applications of it. Reinforcement learning also doesn't use labelled data, but uses feedback mechanisms to let the computer programme "learn" how to improve its behaviour. Reinforcement learning is used in AlphaGo (the programme that beat the Go world champion) and in teaching computers to play video games. It is even more data-hungry than the other AI techniques, and besides playing (video) games there are no practical applications of reinforcement learning yet.
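To make the supervised learning idea concrete, below is a minimal sketch in Python: labelled examples go in, and out comes a model that predicts labels for data it has never seen. The scikit-learn digits dataset and the logistic regression model are my own illustrative choices, not something taken from any of the systems mentioned above.

```python
# Minimal supervised learning sketch: learn from labelled data, then predict
# labels for new data. Dataset and model are arbitrary choices for illustration.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # images of handwritten digits (inputs) with their labels (outputs)
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)  # a simple supervised learner
model.fit(X_train, y_train)                # fit the model to the labelled training data
print("accuracy on unseen data:", model.score(X_test, y_test))
```

The same pattern, with far bigger models and far more data, underlies the image recognition and speech transcription applications mentioned above.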


What makes AI successful?

As Andrew Ng, Coursera founder and Adjunct Professor at Stanford University, indicates, the most successful applications of AI in practice use supervised learning. He estimates that 99% of the economic value created with AI today uses this approach. The AI-supported optimisation of ad placements on webpages is by far the most successful in terms of the additional revenue it generates for its users. Very little economic value is created with the remaining techniques, despite the high level of attention these have had in the media.

Today's "rise" of AI may have struck you as a surprise. A couple of years ago we were not even aware of the practical usability of AI, let alone imagined that we would have AI on our phone (Siri) or in our house (Alexa) supporting us with everyday tasks. However, AI is nothing new; it has been researched since the 1960s. The current leading algorithm used to train Deep Learning neural networks, backpropagation, was popularised by Geoffrey Hinton in 1986, but has its roots in the 1960s. Lack of data and computational power made the algorithm impractical. This has changed: the availability of (labelled) data has grown tremendously and, more importantly, computing power has increased significantly with the introduction of GPU computing. These two factors are the key reasons AI is successful today. So it's not research-driven progress, but engineering-driven progress. Still, for the best-performing supervised learning applications, supercomputers or High Performance Computing (HPC) systems are required, because huge neural nets need to be constructed and trained. To illustrate, Google's AlphaGo programme ran on special hardware with 1,202 CPUs and 176 GPUs when playing against Go champion Lee Sedol. Many experts, among them roboticist and AI researcher Rodney Brooks, question whether much progress can be expected, as computational power is not expected to increase much further. Therefore, it could be that we're not at the beginning of an AI revolution, but at the end of one.
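For readers curious about what backpropagation actually does, here is a toy sketch with plain numpy: a tiny two-layer network learns XOR by repeatedly pushing the prediction error backwards through the layers and nudging the weights. The network size, learning rate and number of steps are arbitrary choices for this illustration; real Deep Learning systems apply the same idea to networks millions of times larger, which is where the GPUs and HPC systems come in.

```python
# Toy backpropagation: a two-layer network learning XOR with plain numpy.
# Architecture, learning rate and step count are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels (XOR)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # forward pass: compute the network's prediction
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backward pass: propagate the error gradient from the output back through the layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent update of the weights
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```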

What can we expect from AI in the future?

Browsing through the newspapers and other media, the number of stories on the achievements of AI and how it will impact the world is huge. Futurist predictions about what AI will allow us to do in the future are mind-boggling. Will we really be able to upload our mind to a computer and live forever, or learn Kung Fu like Neo in the Matrix movie? Most of these predictions state that AI will increase in power quickly, assuming it is driven by an exponential law of progress similar to Moore's law. This is doubtful, because for AI to acquire the predicted powers it not only requires faster computers, it also requires smarter and more capable software and algorithms. Trouble is, research progress doesn't follow a law or pattern and therefore can't be predicted. Deep Learning took 30 years to deliver value, and many AI researchers see it as an isolated event. As Rodney Brooks says, there is no "law" that dictates when the next breakthrough in AI will happen. It can be tomorrow, but it can also take 100 years. I think most futurists make the same prediction mistake as many of us do: we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run (Roy Amara's law). Take computers, for example. When they were introduced in the 1950s there was widespread fear that they would take over all jobs. Now, 60 years later, most jobs are still there, new jobs have been created due to the introduction of computers, and we have applications of computers we never even imagined.
https://www.warnerbros.com/matrix/photos

As Niels Bohr said many years ago, "Predictions are hard, especially if they are about the future." This also applies to predicting how Artificial Intelligence will develop in the coming years. AI today is capable of performing very narrow tasks well, but the success is very brittle: change the rules of the task slightly and it needs to be retrained and tuned all over again. For sure there will be progress, and more of the activities we do will get automated. Andrew Ng has a nice rule of thumb for this: any mental activity that takes a human about a second of thought can be automated with AI. This will impact jobs, but at a much slower rate than many predict. That will give us the time to learn how to safely design and use this technology, similar to the way we learned to use computers. So, when we are realistic about what AI can do in the future, there is no need to get too excited or upset; sit back and enjoy Hollywood's AI doomsday movies and other fairy tales about AI. If you have the time, I recommend reading some of the work AI researchers publish, for example Rodney Brooks, Andrew Ng, John Holland, or scholars like Jaron Lanier or Daniel Dennett.
