If someone showed you a single character from an unfamiliar alphabet and asked you to copy it onto a sheet of paper, you could probably do it. A computer, though, would be stumped—even if it were equipped with state-of-the-art deep-learning algorithms such as those that Google uses to categorize photographs. These machine-learning systems require training on enormous sets of data to make even rudimentary distinctions between images. That may be fine for machines in the post office that sort letters by zip code. But for subtler problems, such as translating between languages on the fly, an approach that could learn from a handful of examples would be much more efficient.

Computers are closer to making this leap because of a machine-learning framework called Bayesian program learning, or BPL. A team of researchers at New York University, M.I.T. and the University of Toronto has shown that a computer using BPL can perform better than humans at recognizing and re-creating unfamiliar handwritten character sets based on exposure to a single example. (“Bayesian” refers to a kind of probabilistic reasoning that can be used to update uncertain hypotheses based on new evidence.)
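In plainer terms, Bayesian updating starts with prior beliefs over a set of hypotheses and reweights them by how well each hypothesis explains the new evidence. The short Python sketch below illustrates the arithmetic on a made-up example; the candidate "stroke programs," prior probabilities and likelihood numbers are invented for illustration and are not taken from the researchers' model.

```python
# Illustrative sketch of Bayesian updating (hypothetical numbers, not the BPL model):
# each hypothesis is a candidate way of drawing a character, and a single observed
# example shifts probability toward the hypotheses that explain it best.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities given priors and likelihoods of the evidence."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical hypotheses: three candidate stroke "programs" for an unknown letter.
priors = {"two_strokes": 0.5, "three_strokes": 0.3, "four_strokes": 0.2}
# How well each program explains the single example we were shown (made-up numbers).
likelihoods = {"two_strokes": 0.1, "three_strokes": 0.7, "four_strokes": 0.3}

posterior = bayes_update(priors, likelihoods)
print(posterior)  # probability mass shifts toward "three_strokes"
```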

The BPL approach to machine learning is fundamentally different from deep learning, which roughly models the human brain's basic pattern-recognition capabilities. Instead, BPL takes inspiration from the ability of the human brain to infer a set of actions that might produce a given pattern. For example, it would recognize that the letter A can be built out of two angled strokes connected at the top, with a short horizontal stroke in the middle. “The computer represents the A by assembling a simple program that generates examples of that letter, with different variations every time you run the code,” says Brenden Lake, a Moore-Sloan Data Science Fellow at N.Y.U. who collaborated on the research. Bayesian processes allow the software to cope with the uncertainty of re-creating unfamiliar letters out of smaller, previously known parts (for example, the horizontal stroke in an A).
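To make the idea concrete, here is a hypothetical Python sketch of such a letter-generating "program" for A: two angled strokes joined at an apex plus a short horizontal crossbar, each perturbed by random jitter so that every run yields a slightly different character. The coordinates, function names and jitter values are invented for illustration; they stand in for, but are not, the actual BPL model.

```python
import random

# Illustrative sketch (hypothetical, not the researchers' code): a tiny "program"
# for the letter A, built from known parts -- two angled strokes joined at the top
# and a short horizontal crossbar -- with random jitter so every run differs.

def stroke(start, end, jitter=0.05):
    """Return a stroke as a (start, end) pair of points with small random perturbation."""
    wobble = lambda p: (p[0] + random.uniform(-jitter, jitter),
                        p[1] + random.uniform(-jitter, jitter))
    return wobble(start), wobble(end)

def generate_letter_a():
    """Generate one noisy example of the letter A from its stroke 'program'."""
    apex = (0.5, 1.0)
    left = stroke(apex, (0.0, 0.0))               # left angled stroke
    right = stroke(apex, (1.0, 0.0))              # right angled stroke
    crossbar = stroke((0.25, 0.5), (0.75, 0.5))   # short horizontal stroke
    return [left, right, crossbar]

# Each call produces a slightly different A, as the article describes.
for example in (generate_letter_a() for _ in range(2)):
    print(example)
```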

This kind of machine learning is more versatile, as well as more efficient. The same processes that BPL software uses to deconstruct and then re-create an unknown letter could someday power AI applications that can infer cause-and-effect patterns in complex phenomena (such as the flow of a river) and then use them to address completely different systems. Human beings regularly employ this kind of abstract “lateral thinking”; BPL could unlock similar capabilities for computers. “We're trying to get computers to be able to learn concepts that can then be applied to many different tasks or domains,” Lake says. “That's a core aspect of human intelligence.”