With the invention of the Roomba vacuum cleaner, it is no longer far-fetched to imagine robots helping us carry out daily chores—not to mention more complex tasks such as assisting in surgery. But nobody wants an unpleasant robot in his or her life—any more than one wants to be saddled with a disgruntled human helper. Enter robots with personality, capable of developing emotional relationships with humans.

Sound futuristic? Well, the future could be here sooner than you think. A consortium of researchers, psychologists and computer scientists has just launched a $13-million project dubbed "Living with Robots and Interactive Companions" (LIREC) to study interactions between humans and robots. The goal: to come up with information they can use to design robo-companions with whom people will feel comfortable. "What we are developing is a technology," says project coordinator Peter McOwan, a computer science professor at Queen Mary, University of London, in England. "We believe it'll improve the quality of human life."

Humans have been making automatons (neuter of automatos, "acting of itself") for thousands of years: The ancient Greeks had wooden pigeons moved by steam; the Prague Astronomical Clock, now known for its hourly parade of animated apostles around the top, has been a tourist attraction since the 15th century; and the first commercial robots used on assembly lines were introduced in the 1950s. But it was not until the late 1990s that the groundbreaking social robot Kismet was built at the Massachusetts Institute of Technology. Kismet's design drew on psychological and behavioral studies of infants. A decade later, universities from Yale to Carnegie Mellon all have social robotics labs.

Researchers in these labs envision a world in which robots routinely assist humans with domestic tasks or act as companions in, say, nursing homes or hospitals. Robots are already used to teach autistic children, to deliver medicine in hospitals and to act as personal trainers. If robots such as these had personality, researchers explain, people would be less hesitant to trust them and more likely to use them, making them more effective and valuable. A robot "with personality is not just a toy," says Dag Sverre Syrdal, research assistant and psychologist with the Adaptive Systems Research Group (ASRG), a LIREC member group, at the University of Hertfordshire in England. "We are not just seeing if it can be done. It does have an application."

At the moment, robots have very general personalities, such as "introverted" or "extroverted," using exaggerated expressions to communicate with humans. Humans, however, are able to take these uncomplicated stereotypes and anthropomorphize them. "You build up simple connections, and you develop and assign personality," McOwan says. "We [researchers] are not building robots with emotions, we are building [them] to mimic emotions." When designing robots, the mimicry goes beyond making emotionally savvy machines. Researchers also codify societal norms—we need robots with "robotiquette."

In a 2007 paper Kerstin Dautenhahn, head of the ASRG, describes "robotiquette" as social rules for robot behavior. These social norms are as straightforward as walking on the right in a hallway, or saying "excuse me" if you bump into someone. Like emotional intelligence, social mores such as these are written into a robot's software as algorithmic rules, lending the machines a veneer of humanity. But just how humanlike do we want our robots to be?
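In spirit, rules of this kind can be reduced to simple condition-to-action mappings. The sketch below is purely illustrative—the event names and behaviors are invented for this example and do not come from LIREC's software—but it shows how "robotiquette" norms like the hallway and "excuse me" rules could be encoded:

```python
def robotiquette_action(event: str) -> str:
    """Map a social situation to a polite robot behavior.

    Hypothetical rule table inspired by the norms Dautenhahn describes;
    the event names and responses here are illustrative, not LIREC code.
    """
    rules = {
        "approaching_person_in_hallway": "keep to the right side",
        "bumped_into_person": 'say "excuse me" and pause',
        "person_speaking": "stop moving and face the speaker",
    }
    # If the situation is unrecognized, fall back to ordinary behavior.
    return rules.get(event, "continue current task")

print(robotiquette_action("bumped_into_person"))
```

A real system would, of course, sit atop perception and planning layers far more complex than a lookup table, but the principle—explicitly programmed social conventions—is the same.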

In 1970 Japanese researcher Masahiro Mori proposed a hypothesis called "the uncanny valley." It states that if someone were to graph robot likeability against robots' resemblance to humans, likeability would steadily increase until a point when robots became much too humanlike. At this undefined precipice, the curve would abruptly plunge, with likeability dropping below zero, before recovering only as the resemblance approaches that of a real human. "There's a constant tension of making robots that appear intelligent, but aren't too intelligent," says Reid Simmons, a research professor at Carnegie Mellon's Robotics Institute.
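Mori's hypothesis describes a curve, not a formula, so any equation for it is a guess. As a rough sketch, the shape can be mimicked with a rising line minus a sharp dip near—but short of—full human resemblance (the dip's position and depth below are invented numbers, chosen only to reproduce the qualitative shape):

```python
import math

def affinity(h: float) -> float:
    """Toy model of Mori's uncanny-valley curve.

    h is human-likeness on a 0-to-1 scale. Affinity rises steadily with h,
    but a narrow Gaussian 'valley' centered at h = 0.85 (an arbitrary,
    illustrative choice) drags it below zero before it recovers near h = 1.
    """
    rising = h                                            # steady increase
    valley = 1.6 * math.exp(-((h - 0.85) ** 2) / 0.004)   # sharp dip
    return rising - valley

# The curve rises, plunges below zero near the valley, then recovers.
for h in (0.3, 0.6, 0.85, 1.0):
    print(f"h = {h:.2f}  affinity = {affinity(h):+.2f}")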

To avoid this valley, LIREC plans to devote a significant amount of time to studying human–robot interactions to find out what situations make people uncomfortable. They will analyze human relations with robots such as Pleo, an interactive toy dinosaur, and KASPAR, a childlike machine conceptualized by ASRG's Dautenhahn. A group in Hungary also plans to study human relations with pet dogs and apply their findings to robots.

One of the biggest challenges, Queen Mary's McOwan says, will be studying a principle known as migration—the movement of a robot between platforms: for instance, from a robotic body in your living room to a graphical face on your computer screen. LIREC is the first group ever to probe how humans react as a "familiar" robot changes from a physical into a virtual being. But to get there, science will first have to make robots with familiar personalities.