It took just a few decades for computers to evolve from room-size vacuum tube–based machines that cost as much as a house to cheap chip-powered desktop models with vastly more processing power. Similarly, the days of "personal robots"—inexpensive machines that can help out at home or the office—may be closer than we think. But first, says Alexander Stoytchev, an assistant professor of electrical and computer engineering at Iowa State University in Ames, robots have to be taught something we know instinctively: how to learn.

"A truly useful personal robot [must have] the ability to learn on its own from interactions with the physical and social environment," says Stoytchev, whose field of developmental robotics combines developmental psychology and neuroscience with artificial intelligence and robotic engineering. "It should not rely on a human programmer once it is purchased. It must be trainable."

Stoytchev and a team of graduate students are developing software to teach robots to learn about the world roughly as well as a two-year-old child does. Their platform is a humanoid robot that sprouts two 60-pound (27-kilogram) Whole Arm Manipulators (WAMs) made by Cambridge, Mass.–based Barrett Technology, Inc., each tipped with a 2.6-pound (1.2-kilogram) three-fingered BarrettHand.

In one set of experiments, the robot was presented with 36 different objects, including hockey pucks and Tupperware. It could perform five different actions on each one—grasping, pushing, tapping, shaking and dropping—and had to identify and classify the objects based only on the sounds they made. After just one action the robot had a 72 percent success rate, but its accuracy soared with each successive action, reaching 99.2 percent after all five. The robot had learned to use a perceptual model to recognize and classify objects—and it could use the same model to estimate how similar two objects were from sound alone.
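The accuracy gain from repeated interactions can be pictured as simple evidence accumulation. The toy Python sketch below is an illustration of that idea, not the team's actual method; the class labels, the per-action probabilities and the naive Bayes combination rule are all assumptions:

```python
from collections import defaultdict

def combine_action_evidence(per_action_probs):
    """Multiply per-action class likelihoods (naive Bayes with a uniform
    prior) and return the most likely label plus normalized scores."""
    scores = defaultdict(lambda: 1.0)
    for probs in per_action_probs:      # one dict of P(sound | class) per action
        for label, p in probs.items():
            scores[label] *= p
    total = sum(scores.values())
    best = max(scores, key=scores.get)
    return best, {k: v / total for k, v in scores.items()}

# A single noisy observation can favor the wrong object...
one_action = [{"puck": 0.45, "tupperware": 0.55}]
# ...but several mostly correct observations overwhelm the early error.
five_actions = one_action + [{"puck": 0.7, "tupperware": 0.3}] * 4

print(combine_action_evidence(one_action)[0])    # -> tupperware
print(combine_action_evidence(five_actions)[0])  # -> puck
```

Each extra action contributes another independent clue, which is why the reported accuracy climbs from 72 percent after one action toward 99.2 percent after five.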

Another set of experiments showed the robot could learn to tell whether or not something was a container. The team presented the machine, topped with a 3-D camera, with objects of different shapes. By dropping a small block on each one and then pushing it, the robot learned to classify objects either as containers—those that moved together with the block ("co-moved") more often when pushed—or as noncontainers. The robot could then use this knowledge to judge whether unfamiliar objects could hold things; in other words, it had learned, roughly, what makes a container a container.
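The co-movement test boils down to a very simple decision rule. This Python sketch is for illustration only; the 0.5 threshold and the trial data are assumptions, not the lab's numbers:

```python
def is_container(co_move_flags, threshold=0.5):
    """Label an object a container if the dropped block moved together
    with it in more than `threshold` of the push trials."""
    rate = sum(co_move_flags) / len(co_move_flags)
    return rate > threshold

bowl_trials = [True, True, True, False, True]   # block usually rode along inside
board_trials = [False, False, True, False]      # block usually got left behind
print(is_container(bowl_trials))   # -> True
print(is_container(board_trials))  # -> False
```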

When personal robots finally hit retail chains, they might look something like HERB, the "Home Exploring Robotic Butler" created at an Intel lab in Pittsburgh. It is part of the company's Personal Robotics Project, whose goal is to make a truly autonomous robotic assistant that can perform routine tasks at human speeds in cluttered environments like homes or offices.

The three-foot (one-meter) machine balances a Barrett WAM arm and a BarrettHand atop the base of a Segway personal transporter with two small training wheels. To find its way around dynamic environments, HERB uses two laser range finders and a camera that let it tell people apart from fixed and movable objects like walls and chairs. (A rough layout map of the space is first programmed into the bot.) By observing how people move, the robot uses learning algorithms and probability distributions to predict where they'll go next to avoid running into them. "HERB knows people have intent, that they don't just move in a straight line," says Intel research scientist Sidd Srinivasa, one of the project's co-leaders. To figure out what an object is, HERB compares its live camera image with a set of 3-D models in its database, built up from representative images that researchers showed it earlier.
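How such a prediction might work can be pictured with a much simpler stand-in than HERB's learned probability distributions over trajectories: a constant-velocity model that extrapolates a person's next position from their last two observed positions. This Python sketch is an assumption for illustration, not Intel's code:

```python
def predict_next(positions):
    """Extrapolate one step ahead from the last two observed (x, y) points,
    assuming the person keeps moving at their most recent velocity."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

track = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.2)]   # person crossing a room
print(predict_next(track))  # roughly (1.5, 0.3)
```

A real system would layer uncertainty on top of such an estimate, letting the robot keep a safety margin around wherever a person is likely to be next.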

Manipulating objects in cluttered settings, like carrying a pitcher through a house without spilling anything, takes two skills. First, HERB has randomized planning algorithms to determine the best way to grip or move something as quickly as possible. For example, the robot might be given 30 seconds to "think" of a way to pick up a mug; if it finds one in 15, it then has 15 more seconds to improve its plan. "They're not optimal algorithms, but practical," Srinivasa says.
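That time-budgeted strategy, finding any feasible plan quickly and then spending the remaining seconds improving it, is the hallmark of so-called anytime planners. The Python sketch below illustrates the idea on a toy grasp-angle problem; the cost function and the random-sampling "planner" are assumptions for demonstration, not HERB's algorithms:

```python
import random
import time

def anytime_plan(cost_of, sample_plan, budget_seconds, min_samples=1000):
    """Keep sampling candidate plans until the time budget runs out
    (trying at least `min_samples`), returning the cheapest found so far."""
    best, best_cost = None, float("inf")
    deadline = time.monotonic() + budget_seconds
    samples = 0
    while samples < min_samples or time.monotonic() < deadline:
        plan = sample_plan()
        cost = cost_of(plan)
        if cost < best_cost:
            best, best_cost = plan, cost
        samples += 1
    return best, best_cost

# Toy problem: choose a grasp angle as close as possible to 90 degrees.
plan, cost = anytime_plan(
    cost_of=lambda angle: abs(angle - 90.0),
    sample_plan=lambda: random.uniform(0.0, 180.0),
    budget_seconds=0.05,
)
print(cost)  # close to zero; a longer budget shrinks it further
```

Because such a planner can be interrupted at any point and still hand back its best answer so far, giving it 30 seconds instead of 15 simply buys a better plan rather than being a prerequisite for having any plan at all.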

HERB also uses imitation learning to figure out how to handle objects by watching how people handle them. "We're much better at demonstrating actions than explaining them," Srinivasa says. "HERB takes human examples and learns to generalize from them. It's not just repeating what you're doing." This helps the robot deal with new, unfamiliar items. During a daylong public demonstration in October, HERB moved around in a model kitchen, opened cabinets and a refrigerator, and handed objects to visitors or put them in a recycling bin—all with only a few missteps.

Srinivasa would eventually like HERB to learn some simple social rules—such as knowing to go around a group of people rather than through them—as well as how to cope with completely unfamiliar environments, even in the dark. A useful robotic assistant is about a decade away, he estimates. "Moore's law"—the rule of thumb first posited by Intel co-founder Gordon Moore in 1965 that the number of transistors on a chip doubles every two years—"is on our side," he says.

In the meantime, learning robots will sometimes surprise their own creators. Srinivasa recalls how an early version of HERB puzzled researchers while it was grabbing coffee cups to place them in a dishwasher rack: it used a strange hand position, with one of its "thumbs" pointing down. Then they realized this was a "far more efficient motion" used by professional bartenders, Srinivasa says: "They lift from underneath and pour in a single smooth motion, like in the movie Cocktail." He calls these surprises "one of the joys of doing manipulation research."