What if you stopped learning after graduation? It sounds stultifying, but that is how most machine-learning systems are trained. They master a task once and then are deployed. But some computer scientists are now developing artificial intelligence that learns and adapts continuously, much like the human brain.

Machine-learning algorithms often take the form of a neural network, a large set of simple computing elements, or neurons, connected by links that vary in strength, or “weight.” Consider an algorithm designed to recognize images. If it mislabels a picture during training, the weights are adjusted. When mistakes are reduced below a certain threshold, the weights are frozen at their final values.
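The train-then-freeze cycle described above can be sketched in a few lines. This is a minimal, illustrative example, not the researchers' code: a single linear neuron adjusts its weights to reduce error, and once the error falls below a threshold, the weights stop changing.

```python
import numpy as np

# Illustrative sketch: a single linear neuron learns a target mapping.
# Weights update until the error drops below a threshold, then training
# stops and the weights are effectively frozen.
rng = np.random.default_rng(0)
inputs = rng.normal(size=(100, 3))          # 100 training examples
true_weights = np.array([0.5, -1.0, 2.0])   # mapping to be learned
targets = inputs @ true_weights

weights = np.zeros(3)
learning_rate = 0.1
for _ in range(1000):
    preds = inputs @ weights
    error = np.mean((preds - targets) ** 2)
    if error < 1e-6:                        # threshold reached:
        break                               # weights are now frozen
    # adjust each weight to reduce the mistakes (gradient descent)
    grad = 2 * inputs.T @ (preds - targets) / len(inputs)
    weights -= learning_rate * grad
```

After this loop, `weights` closely matches `true_weights`, and in a traditional system it would never change again, no matter what new data arrived.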

The new technique splits each weight into two values that combine to influence how much one neuron can activate another. The first value is trained and frozen as in traditional systems. But the second value continually adjusts in response to surrounding activity in the network. Critically, the algorithm also learns how adjustable to make these weights. So the neural network learns patterns of behavior, as well as how much to modify each part of that behavior in response to new circumstances. The researchers presented their technique in July at a conference in Stockholm, Sweden.
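The idea of a two-part weight can be sketched as follows. This is a hedged toy example under assumed details: the fixed part and the plasticity coefficients stand in for quantities the network would learn by training, and the continually adjusting part is modeled as a simple Hebbian trace (connections between co-active neurons strengthen). The names `w_fixed`, `alpha`, `hebb`, and `eta` are illustrative, not taken from the paper's code.

```python
import numpy as np

# Toy "plastic" layer: the effective weight is a fixed, trained part
# plus a learned plasticity coefficient times a Hebbian trace that
# keeps changing with network activity.
rng = np.random.default_rng(1)
n = 4                                     # neurons in a tiny layer
w_fixed = rng.normal(size=(n, n)) * 0.1   # trained, then frozen
alpha = rng.normal(size=(n, n)) * 0.1     # learned "how adjustable"
hebb = np.zeros((n, n))                   # fast-changing trace
eta = 0.05                                # trace update rate

x = rng.normal(size=n)                    # current activity
for _ in range(20):                       # run the network for a few steps
    effective = w_fixed + alpha * hebb    # the two values combine
    y = np.tanh(effective @ x)            # next activity
    # Hebbian update: links between co-active neurons strengthen
    hebb = (1 - eta) * hebb + eta * np.outer(y, x)
    x = y
```

Even with `w_fixed` and `alpha` frozen, the effective weights keep shifting as `hebb` tracks recent activity, which is what lets such a network absorb new information after deployment.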

Applying the technique, the team created a network that learned to reconstruct half-erased photographs after seeing the full images only a few times. In contrast, a traditional neural network would need to see many more images before it could reconstruct the original. The researchers also created a network that learned to identify handwritten alphabet letters—which are nonuniform, unlike typed ones—after seeing one example.

In another task, neural networks controlled a character moving in a simple maze to find rewards. After one million trials, a network with the new semiadjustable weights found rewards three times as often per trial as a network with only fixed weights did. The static parts of the semiadjustable weights apparently learned the structure of the maze, whereas the dynamic parts learned how to adapt to new reward locations. “This is really powerful,” says Nikhil Mishra, a computer scientist at the University of California, Berkeley, who was not involved in the research, “because the algorithms can adapt more quickly to new tasks and new situations, just like humans would.”

Thomas Miconi, a computer scientist at the ride-sharing company Uber and the paper's lead author, says his team now plans to tackle more complicated tasks, such as robotic control and speech recognition. In related work, Miconi wants to simulate “neuromodulation,” an instant networkwide adjustment of adaptability that allows humans to sop up information when something novel or important happens.