Google's AI Reasons Its Way around the London Underground

DeepMind’s latest technique uses external memory to solve tasks that require logic and reasoning—a step toward more humanlike AI

DeepMind's AI uses external memory to accomplish tasks that require reasoning, such as learning to navigate the London Underground.

Artificial-intelligence (AI) systems known as neural networks can recognize images, translate languages and even master the ancient game of Go. But their limited ability to represent complex relationships between data or variables has prevented them from conquering tasks that require logic and reasoning.

In a paper published in Nature on October 12, the Google-owned company DeepMind in London reveals that it has taken a step towards overcoming this hurdle by creating a neural network with an external memory. The combination allows the neural network not only to learn, but to use memory to store and recall facts and make inferences, much as a conventional algorithm does. This in turn enables it to tackle problems such as navigating the London Underground without any prior knowledge, and to solve logic puzzles. Though solving these problems would be unremarkable for an algorithm programmed to do so, the hybrid system accomplishes them without any predefined rules.

Although the approach is not entirely new—DeepMind itself reported attempting a similar feat in a preprint in 2014—“the progress made in this paper is remarkable”, says Yoshua Bengio, a computer scientist at the University of Montreal in Canada.



Memory magic

A neural network learns by strengthening connections between virtual neuron-like units. Without a memory, such a network might need to see a specific London Underground map thousands of times to learn the best way to navigate the tube.

DeepMind's new system—which the researchers call a 'differentiable neural computer'—can make sense of a map it has never seen before. It first trains its neural network on randomly generated map-like structures (which could represent stations connected by lines, or other relationships), in the process learning how to store descriptions of these relationships in its external memory as well as answer questions about them. Confronted with a new map, the DeepMind system can write these new relationships—connections between Underground stations, in one example from the paper—to memory, and recall them to plan a route.
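The write-then-recall pattern can be loosely illustrated with conventional code. The sketch below is only an analogy: the station names are invented, the "memory" is an ordinary Python list of link facts, and breadth-first search stands in for the route-planning behaviour that the real network learns end-to-end rather than being programmed with.

```python
from collections import deque

# Hypothetical "external memory": link facts written down after seeing a
# new map. Station names are invented for illustration.
memory = [
    ("Oxford Circus", "Bond Street"),
    ("Oxford Circus", "Tottenham Court Road"),
    ("Bond Street", "Baker Street"),
    ("Tottenham Court Road", "Holborn"),
]

def plan_route(start, goal):
    """Recall the stored links, then search for a path between stations."""
    links = {}
    for a, b in memory:                      # read facts back from memory
        links.setdefault(a, []).append(b)
        links.setdefault(b, []).append(a)    # Tube links run both ways
    frontier = deque([[start]])
    seen = {start}
    while frontier:                          # breadth-first search
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None                              # no route recorded in memory

print(plan_route("Baker Street", "Holborn"))
# → ['Baker Street', 'Bond Street', 'Oxford Circus', 'Tottenham Court Road', 'Holborn']
```

The point of the comparison is the division of labour: the facts live in a separate, rewritable store, while a general procedure reads them back to answer queries about maps it has never seen.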

DeepMind’s AI system used the same technique to tackle puzzles that require reasoning. After training on 20 different types of question-and-answer problems, it learnt to make accurate deductions. For example, the system deduced correctly that a ball is in a playground, having been informed that “John picked up the football” and “John is in the playground”. It got such problems right more than 96% of the time. The system performed better than ‘recurrent neural networks’, which also have a memory, but one that is woven into the fabric of the network itself, and so is less flexible than an external memory.
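The style of deduction in the football example can be sketched symbolically. This toy snippet is not the neural method—the system learns such inferences from examples rather than following hand-written rules—but it shows the chaining involved: combining "who holds what" with "who is where" to infer where an object is.

```python
# Toy illustration of chained deduction (not the learned neural method).
# Facts are stored as (subject, relation, object) triples.
facts = [
    ("John", "picked up", "the football"),
    ("John", "is in", "the playground"),
]

def locate(obj):
    """Infer an object's location by chaining two stored facts."""
    # Step 1: find who is holding the object.
    holder = next((s for s, v, o in facts
                   if v == "picked up" and o == obj), None)
    if holder is None:
        return None
    # Step 2: find where that person is; the object is there too.
    return next((o for s, v, o in facts
                 if s == holder and v == "is in"), None)

print(locate("the football"))  # → the playground
```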

Although the DeepMind technique has proven itself only on artificial problems, it could be applied to real-world tasks that involve making inferences from huge amounts of data. It could answer questions whose answers are not explicitly stated in the data set, says Alex Graves, a computer scientist at DeepMind and a co-author on the paper. For example, to determine whether two people lived in the same country at the same time, the system might collate facts from their respective Wikipedia pages.
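The Wikipedia example boils down to combining facts that no single page states outright. A minimal sketch, with entirely hypothetical people and dates (and ignoring the country component for brevity), shows the shape of such an inference—checking whether two life spans overlap:

```python
# Hypothetical life spans gleaned from two biography pages (invented dates).
lives = {
    "Person A": (1879, 1955),
    "Person B": (1867, 1934),
}

def lived_concurrently(p, q):
    """True if the two people's life spans overlap by at least one year."""
    (b1, d1), (b2, d2) = lives[p], lives[q]
    # The later birth must fall before the earlier death.
    return max(b1, b2) <= min(d1, d2)

print(lived_concurrently("Person A", "Person B"))  # → True
```

Neither biography states that the two were alive at the same time; the answer only emerges by combining dates recalled from both sources—the kind of inference Graves has in mind.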

Although the puzzles tackled by DeepMind’s AI are simple, Bengio sees the paper as a signal that neural networks are advancing beyond mere pattern recognition to human-like tasks such as reasoning. “This extension is very important if we want to approach human-level AI.”

This article is reproduced with permission and was first published on October 13, 2016.

Elizabeth Gibney is a senior physics reporter for Nature magazine.

First published in 1869, Nature is the world's leading multidisciplinary science journal. Nature publishes the finest peer-reviewed research that drives ground-breaking discovery, and is read by thought-leaders and decision-makers around the world.
