Facebook is well known for its early and expanding use of artificial intelligence. The social media site uses AI to pinpoint the individual interests of its billion-plus users and tailor content accordingly, automatically scanning their news feeds, identifying people in photos and targeting them with precision ads. And now, behind the scenes, the social network’s AI researchers are trying to take this technology to the next level: from pure data-crunching logic to a nuanced form of “common sense” rivaling that of humans.

AI already lets machines do things like recognize faces and act as virtual assistants that track down information on the Web for smartphone users. But to perform even these basic tasks, the underlying learning algorithms rely on massive amounts of training data selected and labeled by humans, an approach known as supervised machine learning. For machines to truly have common sense (the ability to figure out how the world works and make reasonable decisions based on that knowledge) they must be able to teach themselves without human supervision. Though this will not happen on a significant scale anytime soon, researchers are taking steps in that direction. In a blog post published Monday, for example, Facebook director of AI research Yann LeCun and research engineer Soumith Chintala describe efforts at unsupervised machine learning through a technique called adversarial training.

This approach employs two artificial neural networks, so called because they use algorithms designed to help them function a little like a human brain. A “generator” network creates images from the random data it is fed. A second “discriminator” network is trained, through machine learning, to tell the difference between a real image and a data file containing nonsensical patterns of shapes and colors. The discriminator then analyzes a series of files, some drawn from a database of real images and others created by the generator. Initially the generator is not very good at creating realistic images, and the discriminator easily flags them as fakes. Eventually, however, the generator is supposed to learn from the discriminator’s responses and produce increasingly realistic images. In this way the generator and discriminator are adversaries, with the former trying to fool the latter and the latter trying to avoid being fooled, according to Chintala.
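In code, that adversarial back-and-forth is an alternating training loop. The sketch below, written in PyTorch, is a minimal illustration of the idea, not Facebook’s actual code; the network sizes, learning rates and the use of simple fully connected layers are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only: 64-dimensional noise, flattened 28x28 images.
NOISE_DIM, IMG_DIM = 64, 784

# Generator: turns random noise into a synthetic "image."
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores an input as real (close to 1) or fake (close to 0).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fakes = G(torch.randn(batch, NOISE_DIM)).detach()  # freeze G on this pass
    d_loss = (loss_fn(D(real_images), real_labels) +
              loss_fn(D(fakes), fake_labels))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator: it is rewarded
    #    when the discriminator labels its output "real."
    g_loss = loss_fn(D(G(torch.randn(batch, NOISE_DIM))), real_labels)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```

Each call to train_step nudges the discriminator toward sharper judgments and the generator toward images that survive those judgments, which is the adversarial dynamic Chintala describes.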

Adversaries

Adversarial network generators tested up to now in AI labs, at Facebook and elsewhere, have typically failed to improve significantly even after interacting with discriminators. In an attempt to remedy this, Facebook researchers built generators with specially crafted structures of interconnected layers, an arrangement known in the AI community as a “deep convolutional generative adversarial network,” or DCGAN. Each layer applies a particular algorithm to the input it receives from the layer before. The generator builds an image hierarchically: its first layers turn the random input into coarse patterns of simple visual motifs, the next layer combines those motifs into slightly more complex arrangements, and subsequent layers form parts of objects, assemble them into objects and compose scenes, until the entire image is created. “There’s a hierarchy of layers, which is where the word ‘deep’ comes from,” LeCun says.
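To make that layered design concrete, here is a minimal sketch of a DCGAN-style generator in PyTorch. It illustrates the hierarchy LeCun describes rather than reproducing the researchers’ actual model; the layer widths and the 64-by-64-pixel output size are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

NOISE_DIM = 100  # size of the random input vector (an assumption)

generator = nn.Sequential(
    # 1x1 noise -> 4x4: coarse patterns of simple motifs
    nn.ConvTranspose2d(NOISE_DIM, 512, 4, stride=1, padding=0),
    nn.BatchNorm2d(512), nn.ReLU(),
    # 4x4 -> 8x8: motifs combined into more complex arrangements
    nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1),
    nn.BatchNorm2d(256), nn.ReLU(),
    # 8x8 -> 16x16: parts of objects
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
    nn.BatchNorm2d(128), nn.ReLU(),
    # 16x16 -> 32x32: objects
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
    nn.BatchNorm2d(64), nn.ReLU(),
    # 32x32 -> 64x64: the full scene, as an RGB image scaled to [-1, 1]
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
    nn.Tanh(),
)

noise = torch.randn(1, NOISE_DIM, 1, 1)  # random input
image = generator(noise)                 # shape: (1, 3, 64, 64)
```

Each transposed-convolution layer doubles the spatial resolution, so the random input is progressively elaborated from coarse motifs into a complete image, mirroring the hierarchy of layers that gives deep learning its name.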

The researchers found that their DCGANs could, among other things, learn to draw specific objects as the training progressed. They also got a better understanding of what happens as data move from layer to layer within the neural network. In addition, LeCun, Chintala and their colleagues tested their generator’s predictive capabilities by having it use raw data to produce video frames. In one experiment they fed the generator four frames of video and had it produce the next two frames based on those data. The resulting AI-generated frames looked like a realistic continuation of the action, whether it was a person walking or simply making head movements.
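The frame-prediction experiment can be pictured as a network that consumes the four conditioning frames and emits the two that follow. The sketch below is a deliberately simplified stand-in for the researchers’ model: the layer sizes are assumptions, and in practice such predictors are trained adversarially, with a discriminator judging whether the continuation looks real, because plain pixel-wise losses tend to yield blurry frames.

```python
import torch
import torch.nn as nn

IN_FRAMES, OUT_FRAMES, CHANNELS = 4, 2, 3  # four RGB frames in, two out

# Stack the input frames along the channel axis and map them, through a
# few convolutional layers, to the channels of the two predicted frames.
predictor = nn.Sequential(
    nn.Conv2d(IN_FRAMES * CHANNELS, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, OUT_FRAMES * CHANNELS, kernel_size=3, padding=1), nn.Tanh(),
)

def predict_next(frames):
    # frames: (batch, 4, 3, height, width) -> (batch, 2, 3, height, width)
    b, _, _, h, w = frames.shape
    x = frames.reshape(b, IN_FRAMES * CHANNELS, h, w)
    return predictor(x).reshape(b, OUT_FRAMES, CHANNELS, h, w)
```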

More intelligent assistants

LeCun thinks such predictive abilities could enhance Facebook’s ability to engage users by applying the common sense the site develops to make educated guesses about them. “If we know how to build dialogue systems that have an idea of what the person dialoguing wants or thinks, that means we can have chatbots that are actually useful and interact with you in a natural way,” LeCun says. Better prediction could likewise strengthen Facebook M, the company’s virtual assistant, which faces growing competition from Apple’s Siri, Google’s upcoming Google Assistant, Amazon’s Alexa and Microsoft’s Cortana.

“There is still a long way—very long way—to go before [machines have common sense], but I share with [LeCun and his colleagues] the belief that exploring better unsupervised learning algorithms is a crucial key towards human-level AI,” says Yoshua Bengio, a University of Montreal computer science professor and a co-author of the 2014 study that introduced much of the AI world to generative adversarial networks. That work was led by Ian Goodfellow, a University of Montreal PhD student at the time and currently a research scientist at OpenAI, a non-profit AI research organization co-founded in December by Elon Musk and Y Combinator president Sam Altman. Bengio, who was not involved in the Facebook AI research, addressed deep learning’s progress in the June 2016 Scientific American article titled “Machines Who Learn.”

Facebook’s interest in unsupervised machine learning is part of a larger trend that has some of the largest Internet companies, including Amazon, Apple, Google, Microsoft and Twitter, buying AI start-ups and investing in research of their own. Earlier this month Microsoft Research announced an effort to develop a system that can tell a story based on a series of related images. Google’s AlphaGo program made headlines in March when it convincingly beat one of the world’s best Go players at his own game. AI likewise plays a crucial role in efforts by Alphabet, Inc., Google’s parent company, to develop a driverless car. And Apple executives, speaking this past week at the company’s Worldwide Developers Conference, discussed how to advance AI in Apple’s products without undermining the company’s widely publicized commitment to customer privacy. “It’s obvious [that AI] is likely to completely change their business as well as the whole world’s economy in a major way,” Bengio says.

LeCun agrees that AI is clearly a very strategic technology for any company that operates on the Web or has any kind of digital presence. “Not just for user interfaces or content filtering but in general,” he says. “People will interact with machines in a very natural way, and we need to get machines to understand people.”

In other words, LeCun adds, “We need machines to have common sense.”