Although dated by today’s standards, Wargames still has plenty of lessons to teach us about the limits of AI. A hit in 1983, the movie tapped into several trends that were fashionable at the time: video games, nuclear war and, yes, artificial intelligence.
It tells the story of a teenage gamer and computer geek who uses an early modem to hack into the Pentagon’s network. Thinking that he’s playing a new video game, he fools the Pentagon’s AI system into a nuclear launch countdown that plays out on the giant screens of NORAD’s underground headquarters until the inevitable last-minute reprieve.
While it’s great entertainment, Wargames illustrates many of the misunderstandings and myths surrounding AI that persist to this day. Even forty years ago, we were being told that the singularity was imminent. In the film, the AI software attains consciousness and brings the planet to the brink of thermonuclear destruction.
It’s an illustration of what we now call the paperclip problem, which occurs when a machine pours all available resources into achieving its goal, even if that means the destruction of humanity. (By the way, Wargames appeared about a year before the original Terminator movie and the first fictional appearance of the equally homicidal Skynet.)
Wargames also came to mind when I read this week that Google’s DeepMind has mastered 57 Atari video games from the early 1980s, the era when the film first played in cinemas. Although the games are crude by today’s standards, that’s still quite an achievement. Even so, mastering 40-year-old puzzles is far from the singularity that experts and journalists have been predicting for just as long.
The hype around the singularity also puts me in mind of nuclear fusion, which is always said to be thirty years away. As in Zeno’s paradox, we keep closing the distance but somehow never arrive. And that’s the problem. As journalists and human beings, we prefer the next big story, especially one dragged from the plot of a science-fiction movie where AI achieves super-human powers.
Good at one thing. Not so great at others
Instead we should remember that most operational deep-learning systems are good at one specific activity (recommending new music, recognizing speech, playing simple video games) but not very versatile.
Take another DeepMind story from this week, which revealed how machine learning can now fill the gaps in voice conversations held over the internet, in this case in Google’s Duo software.
We all know that bandwidth often fluctuates during a conversation, leading to dropouts that irritate the brain and disrupt comprehension. The technology is now smart enough to recognize when these tiny gaps occur and to bridge them in real time so that the conversation sounds smoother.
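To get a feel for the problem, here is a toy sketch of packet-loss concealment in Python. This is emphatically not Google’s system, which uses a learned generative model to predict the missing audio; the function below is an assumed, classical baseline that simply repeats the last good audio frame with a fade, and the frame format and `decay` parameter are illustrative inventions.

```python
def conceal(frames, decay=0.5):
    """Fill lost frames (marked None) with a faded copy of the last good frame.

    A crude stand-in for real packet-loss concealment: repeating the previous
    frame, and fading it on consecutive losses to avoid a robotic buzz.
    """
    out, last = [], [0.0]
    for frame in frames:
        if frame is None:                        # dropout detected
            last = [s * decay for s in last]     # fade repeated audio
            out.append(last)
        else:                                    # frame arrived intact
            last = frame
            out.append(frame)
    return out

# Two frames lost mid-stream; the gap is bridged with quieter echoes.
stream = [[0.2, 0.4], None, None, [0.1, 0.3]]
print(conceal(stream))
```

The crude approach works for a single lost frame but degrades audibly over longer gaps, which is exactly where a model that has learned the statistics of human speech can do much better.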
What I really like about this solution is that it does all the things that good machine learning is meant to do. First, it solves a difficult problem: filling in small gaps in speech is much harder than it sounds. Second, it’s incredibly useful: millions of people benefit from the enhancements to their daily conversations.
It’s also unobtrusive. Working quietly in the background, it improves, rather than disrupts, the user experience. Finally, it’s timely. Trapped in some form of lockdown, millions of people now rely on voice over the internet and video conferencing to keep in touch with friends, family and work colleagues.
It’s also a powerful reminder that most often machine learning doesn’t do anything fundamentally original in terms of its operational features. It usually starts by replicating an existing human activity and then evolves to the point where the feature becomes ubiquitous, driving down the cost to the consumer. Sure, that’s not the source of a Hollywood blockbuster, but the impact of the technology on our everyday lives can be just as exciting. You just need to know where to look.