The Minecraft video game was familiar to José Hernández-Orallo long before he started using it for his own research. The computer scientist, who devises ways to benchmark machine intelligence at the Polytechnic University of Valencia in Spain, first watched his own children play inside the 3D virtual world, which focuses on solving problems rather than shooting monsters.
In 2014, Microsoft bought Minecraft, and its science arm, Microsoft Research, gave its own researchers access to a new version of the game that allowed computer programs, as well as people, to explore and customize the 3D environment. After first inviting a small group of outside researchers, including Hernández-Orallo, to download the machine-friendly version of the world, Microsoft made it freely available to anyone last July, with the goal of speeding up progress in artificial intelligence (AI).
Now other companies have followed suit. On 3 December, DeepMind, a unit of Google headquartered in London, opened up its own 3D virtual world, DeepMind Lab, for download and customization by outside developers. The company initially created the world to train its own AI programs. Two days later, OpenAI, a research company in San Francisco, California, co-founded by entrepreneur Elon Musk, released a ‘meta-platform’ that enables AI programs to easily interact with dozens of 3D games originally designed for humans, as well as with some web browsers and smartphone apps.
All three releases provide researchers and software developers with easy ways to test programs in previously unseen situations, and for the programs to acquire new skills by teaching themselves to navigate novel situations that resemble real-world scenarios. “Environments like these have a very important role to play in the future of AI,” says Pedro Domingos, a machine-learning researcher at the University of Washington in Seattle.
Games have been test beds for AI for decades, but, typically, the algorithms have played them following predefined strategies. In recent years, the focus has shifted to machines that could learn from their own experience. In early 2015, DeepMind unveiled an algorithm that taught itself how to play classic Atari arcade games better than any human, by trial and error, without being told the goals of the games.
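The trial-and-error approach behind that Atari result is reinforcement learning, which can be sketched in miniature: the agent is never told where the goal is, only a numeric reward, and it gradually learns which actions pay off. The 4×4 grid, reward values and learning parameters below are invented for illustration and bear no relation to DeepMind's actual system.

```python
import random

SIZE = 4
GOAL = (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    # Move within the grid; the agent only ever sees the scalar reward.
    r, c = state
    nr = max(0, min(SIZE - 1, r + action[0]))
    nc = max(0, min(SIZE - 1, c + action[1]))
    new_state = (nr, nc)
    reward = 1.0 if new_state == GOAL else -0.01
    return new_state, reward, new_state == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    q = {}  # (state, action) -> estimated long-term value
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)  # explore
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward, done = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def greedy_path_length(q):
    # Follow the learned policy greedily; cap at 50 steps in case it loops.
    state, steps = (0, 0), 0
    while state != GOAL and steps < 50:
        action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        state, _, _ = step(state, action)
        steps += 1
    return steps

random.seed(0)
q = train()
print(greedy_path_length(q))  # the optimal corner-to-corner route is 6 moves
```

After a few hundred episodes of blundering about, the learned table steers the agent on a short route to the corner it was never explicitly told to reach.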
Such games are simple 2D worlds, though. ‘First-person’ 3D video games such as Minecraft—which visually embed the player in the environment—are a much closer approximation to the real world, and so make more sophisticated test beds.
Minecraft enables users to interact with virtual bricks, and use them to build structures, in addition to navigating and interacting with predefined structures. The version now available to developers, called Malmo, allows algorithms to do the same. Hernández-Orallo, for example, is using Malmo to explore whether the environment can be used to create benchmarks for machine intelligence. Algorithms could compete to arrange bricks into something that looks the most like a certain object, say, or to navigate a maze—testing a much wider range of skills than the Turing test, the most famous test of machine intelligence, which focuses on the ability of an AI to chat like a human.
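One way such a block-arrangement contest could be scored, as a sketch only (Malmo prescribes no such metric), is the overlap between the set of blocks an agent places and a target shape:

```python
# Illustrative scoring for a hypothetical block-arrangement benchmark:
# the intersection-over-union between the grid cells an agent filled
# and the cells of a target shape. Not part of Malmo itself.

def shape_score(built, target):
    built, target = set(built), set(target)
    union = built | target
    return len(built & target) / len(union) if union else 1.0

target = {(0, 0, 0), (0, 1, 0), (0, 2, 0)}   # a three-block column
attempt = {(0, 0, 0), (0, 1, 0), (1, 0, 0)}  # two blocks right, one stray

print(shape_score(attempt, target))  # 2 shared / 4 total = 0.5
```

A perfect reproduction scores 1.0, an empty or entirely misplaced attempt scores 0, and competing algorithms could be ranked on the same target shapes.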
One of the things that made Minecraft attractive for conversion into an AI test bed is that it already enabled players to communicate using text messages. This could help an AI to learn to collaborate with humans in the real world, says computer scientist Katja Hofmann of Microsoft Research in Cambridge, UK, who led the team that created Malmo.
Virtual worlds are also particularly useful for developing AIs that are destined to eventually operate as physical robots, says Hofmann, because such environments are cheaper to customize, and faster and safer to practise in than the real world. They also allow robotics researchers to focus purely on the intelligence part of the equation—the mechanical challenges of physical robots can be a distraction.
In addition to Hernández-Orallo's group, Microsoft Research is collaborating with a handful of research labs that are using Malmo in their projects. But Hofmann suspects that many more, perhaps around 100, are using it.
DeepMind Lab similarly allows researchers to create structures such as mazes, and their algorithms can learn to collect rewards as well as to navigate. DeepMind has also been experimenting with integrating “more naturalistic elements”, such as undulating terrains and plants, into the platform, says a spokeswoman. Now that the environment is open, the company hopes that other researchers will help to make the environments more challenging for the algorithms. “By open-sourcing it, we are allowing the wider research community to get involved in shaping this,” she says.
OpenAI’s meta-platform, Universe, takes things even further. By providing multiple, radically different environments for the same AI to sample, it could help to attack one of the hardest problems in the field: creating algorithms that can use previous experience when faced with new situations. For instance, deep neural networks, which mimic the layers of brain cells in the visual cortex, can quite quickly learn to navigate a 3D maze, but cannot transfer the knowledge to navigate another maze. “If you change the colour of the maze, the system is completely lost,” says Hernández-Orallo. “State-of-the-art technology fails dramatically.”
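The brittleness Hernández-Orallo describes can be caricatured with a toy example: a policy learned over raw observations that happen to encode wall colour has nothing to say about an identically shaped maze that has merely been recoloured. The observation encoding and policy table below are invented for illustration.

```python
# Toy caricature of transfer failure: the "policy" is a lookup table keyed
# on raw observations that mix position with task-irrelevant wall colour.
# Change the colour and every observation becomes unseen. Invented example.

def observe(position, wall_colour):
    return (position, wall_colour)  # raw observation includes colour

# Suppose training in a blue maze produced these observation -> action entries.
learned_policy = {
    observe((0, 0), "blue"): "right",
    observe((0, 1), "blue"): "down",
    observe((1, 1), "blue"): "down",
}

def act(policy, position, wall_colour):
    # Returns None when the observation was never encountered in training.
    return policy.get(observe(position, wall_colour))

print(act(learned_policy, (0, 0), "blue"))  # familiar maze: "right"
print(act(learned_policy, (0, 0), "red"))   # same layout, new colour: None
```

Real deep networks fail less starkly than a lookup table, but the underlying problem is the same: knowledge tied to the surface form of the observations does not carry over, which is the gap that sampling many varied environments through Universe is meant to help close.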
Microsoft is now working to make Malmo available through Universe. “Having a community platform will accelerate everyone,” says Greg Brockman, co-founder and chief technology officer of OpenAI.
This article is reproduced with permission and was first published on December 14, 2016.