In a 2007 paper Kerstin Dautenhahn, head of the ASRG, describes "robotiquette" as social rules for robot behavior. These social norms are as straightforward as keeping to the right in a hallway or saying "excuse me" after bumping into someone. Like emotional intelligence, social rules such as these are encoded in the robot's software, nudging the machines a step closer to behaving like us. But just how humanlike do we want our robots to be?
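To make the idea concrete, here is a minimal, hypothetical sketch of how a few robotiquette rules might be encoded as a lookup from perceived situations to polite behaviors. The `Situation`, `RULES`, and `choose_behavior` names are illustrative only, not the ASRG's actual software.

```python
# A toy sketch (not the ASRG's real code) of "robotiquette" as a rule table:
# each perceived social situation maps to a polite response.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Situation:
    """A toy description of what the robot currently perceives."""
    event: str      # e.g. "approaching_person", "bumped_person"
    location: str   # e.g. "hallway", "doorway"

# Hypothetical rule table: event -> behavior to execute.
RULES: Dict[str, Callable[[Situation], str]] = {
    "approaching_person": lambda s: "keep_right" if s.location == "hallway" else "slow_down",
    "bumped_person":      lambda s: "say('Excuse me')",
}

def choose_behavior(situation: Situation) -> str:
    """Return the socially appropriate action, or a neutral default."""
    rule = RULES.get(situation.event)
    return rule(situation) if rule else "continue"

if __name__ == "__main__":
    print(choose_behavior(Situation("approaching_person", "hallway")))  # keep_right
    print(choose_behavior(Situation("bumped_person", "doorway")))       # say('Excuse me')
```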
In 1970 Japanese researcher Masahiro Mori proposed a hypothesis called "the uncanny valley." It predicts that if someone were to graph robots' likeability against their resemblance to humans, likeability would climb steadily until the robots became almost, but not quite, human. At that ill-defined precipice, likeability would abruptly plummet below zero. "There's a constant tension of making robots that appear intelligent, but aren't too intelligent," says Reid Simmons, a research professor at Carnegie Mellon's Robotics Institute.
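Mori proposed no equation, but a toy piecewise function makes the hypothesized shape easy to see: affinity rises with human likeness, plunges below zero in the valley, then recovers as the resemblance becomes convincing. The breakpoints and slopes below are arbitrary illustrations, not values from Mori's paper.

```python
# A purely illustrative model of the uncanny-valley curve.

def affinity(likeness: float) -> float:
    """Hypothetical affinity score for a human likeness in [0, 1]."""
    if likeness < 0.7:                        # steadily more likeable
        return likeness
    if likeness < 0.9:                        # the valley: eerily almost-human
        return 0.7 - 6.0 * (likeness - 0.7)
    return -0.5 + 15.0 * (likeness - 0.9)     # recovery toward fully human

if __name__ == "__main__":
    for x in (0.0, 0.5, 0.7, 0.8, 0.9, 1.0):
        print(f"likeness={x:.1f}  affinity={affinity(x):+.2f}")
```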
To avoid this valley, LIREC plans to devote a significant amount of time to studying human–robot interactions to find out what situations make people uncomfortable. They will analyze human relations with robots such as Pleo, an interactive toy dinosaur, and KASPAR, a childlike machine conceptualized by ASRG's Dautenhahn. A group in Hungary also plans to study human relations with pet dogs and apply their findings to robots.
One of the biggest challenges, Queen Mary's McOwan says, will be studying a principle known as migration—the movement of a robot between platforms: for instance, from a robotic body in your living room to a graphical face on your computer screen. LIREC is the first group ever to probe how humans react as a "familiar" robot changes from a physical into a virtual being. But to get there, science will first have to make robots with familiar personalities.
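As an illustration of what migration implies for software, the sketch below assumes the companion's "self" is a portable state object that any embodiment, whether a robot body or an on-screen face, can load. The class and function names are hypothetical and are not LIREC's actual architecture.

```python
# A minimal sketch of migration: the companion's identity travels between
# platforms while each embodiment merely hosts it for a while.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class CompanionState:
    """The 'personality' and memories that must survive a platform change."""
    name: str
    memories: Dict[str, str] = field(default_factory=dict)

class Embodiment:
    """Any platform that can host the companion: robot body, screen avatar."""
    def __init__(self, label: str):
        self.label = label
        self.state: Optional[CompanionState] = None

    def receive(self, state: CompanionState) -> None:
        self.state = state
        print(f"{state.name} is now embodied in the {self.label}.")

def migrate(source: Embodiment, target: Embodiment) -> None:
    """Hand the companion's state from one platform to another."""
    if source.state is None:
        raise ValueError("source embodiment holds no companion to migrate")
    target.receive(source.state)
    source.state = None  # the old body goes dormant

if __name__ == "__main__":
    robot = Embodiment("living-room robot")
    screen = Embodiment("on-screen avatar")
    robot.receive(CompanionState("companion", {"owner": "its user"}))
    migrate(robot, screen)
```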