Journalist Charles Choi talks about work being done to make robots self-aware. Plus, we test your knowledge about some recent science in the news. Web pages related to this episode include "Automaton, Know Thyself: Robots Become Self-Aware" and "New Challenges for Evolution Education."
Podcast Transcription
Steve: Welcome to Science Talk, the more or less weekly podcast of Scientific American, posted on March 2nd, 2011. I've been informed that on the last couple of episodes I said 2010. I'm still writing 1943 on my checks. I'm Steve Mirsky. So on the last episode, when it was ending, I said I'd be back in a few hours; that was many days ago. So before we get to the science—well, there's some science involved here—I thought you might be interested to know why there's been such a delay between the last episode and this one, which is basically the second part of that episode. Well, I was all set to roll this one out last Friday when I woke up in the morning to raw sewage backing up into the house. That's a situation that must be dealt with immediately. So we did a little investigation and we found that the entire pipe leading out to the main city sewer line had to be replaced, which is what's been going on for the last few days. But that was just the beginning of a fascinating day. A little later in the day, my neighbor rang me up to tell me that there was soot appearing mysteriously on his front porch, and I was certainly to blame for it somehow. We looked at the front of our adjoining houses and realized that the soot, no surprise, was emanating from our shared chimney. I then confronted him with scientific fact: I pointed out to him that while he had oil heat, I had gas heat, and whatever soot was emerging from the chimney was surely the result of inefficient burning of his oil, while my cleaner-burning gas could not possibly be to blame. Later in the day, we did have the incident where a live mouse was running around, brought into the house by one of my cats before the cat did him in—we have two cats; don't worry, there are not 43 running around the house. And finally in the evening, to get away from it all, I drove off to try to get a quiet meal at a little restaurant I know, and before I could say, "What was that?" it became very apparent that what we had hit in the car was a skunk, because the smell just started to permeate the entire vehicle. So we started the day with raw sewage, and we ended it with the mercaptans that you can find in any good skunk musk. So that would explain in a nutshell why you haven't heard from me in a few days. I've not had to bathe in tomato juice, but it's been close. So anyway, when last we met, we were talking about the AAAS meeting, the annual meeting of the American Association for the Advancement of Science that had wrapped up on Monday, February 21st in Washington, D.C. One additional short conversation that I had with another journalist was with Charles Choi, who is a frequent contributor to Scientific American magazine and to our Web site. So let me now share with you the conversation that I had with Charles.
Choi: The thing that really drew my attention was work from roboticist Hod Lipson over at Cornell University. The title of his talk was "Self-Reflecting Machines" and talking about installing self-awareness in robots, and how could you not want to learn more about that?
Steve: So a self-reflecting machine—we're not talking about in front of the mirror; we're talking about a machine that's somewhat aware of its own existence and activities.
Choi: Exactly, and you know, he breaks it down into a number of different levels. I mean, in 2006, he and his colleagues had research that appeared in the journal Science, and that basically started off with a robot that could reflect upon its own body. It created a model of its own body, a self-image, and from there it could figure out what it was doing and how to move itself, and if a limb got removed, it could rejigger its self-image and keep going. When I saw self-aware machines, I mean, I thought about Skynet from Terminator.
Steve: Of course.
Choi: Yeah, just like everybody else, but you know, there are very solid practical reasons for wanting to do this. Basically, you know, robots are great, you know. I mean they're superhuman at speed, at power, at working 24/7 repetitively without getting bored. But if one little thing is out of place then it gets messed up really easily. So you want robots to be able to adapt to their own situations, and having that self-image, I mean, reflecting upon their own actions, is key. You know, they can look at themselves and say, "All right, what's working, what's not working and how can I change myself in order to make this work to achieve my goals?"
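To make the idea concrete, here is a minimal sketch, in Python, of the kind of self-modeling loop Choi describes from the 2006 work: the robot predicts the effect of its motor commands from an internal body model and revises that model when predictions stop matching observations. The class, the brute-force update and the one-dimensional "displacement" world are all illustrative assumptions, not the actual research code.

```python
import random

# Toy self-modeling loop: the robot keeps an internal model of its own
# body, uses it to predict the effect of motor commands, and revises the
# model when predictions stop matching reality (e.g., a limb is removed).

class SelfModel:
    def __init__(self, num_limbs=4):
        # The robot's current belief about which limbs it has.
        self.working_limbs = set(range(num_limbs))

    def predict_displacement(self, command):
        # Predict how far a command should move the body, given the
        # limbs the model believes are working.
        return command * len(self.working_limbs)

    def update(self, command, observed):
        # If prediction and observation disagree, search for a body
        # configuration that explains the data (brute force, for a toy).
        if self.predict_displacement(command) != observed:
            for limb in sorted(self.working_limbs):
                trial = self.working_limbs - {limb}
                if command * len(trial) == observed:
                    self.working_limbs = trial  # "rejigger" the self-image
                    return

def real_robot(command, broken_limbs, num_limbs=4):
    # Ground truth the robot cannot see directly: actual displacement.
    return command * (num_limbs - len(broken_limbs))

model = SelfModel()
for step in range(5):
    cmd = random.randint(1, 3)
    broken = {2} if step >= 2 else set()   # a limb "falls off" at step 2
    obs = real_robot(cmd, broken)
    model.update(cmd, obs)
    print(step, "believes limbs:", model.working_limbs)
```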
Steve: But are they, I mean, they're clearly not conscious.
Choi: No, No.
Steve: So can you really call it self-reflecting or is it just, I mean I guess it counts, but really they've been, in the programming, you've given it an additional batch of programming to do this.
Choi: See that's just the first step. The second step, which they detailed even further here, and which hasn't been published yet, I mean, they essentially put two brains in one robot. You know, you had one robot pursuing its goals, chasing one color of dot and avoiding this other color of dot—you had these lights projecting dots on the ground.
Steve: One of the brains was doing this?
Choi: One of the brains was doing this and the other brain was modeling what the first brain was doing. And let's say, you know, halfway through the researchers changed the rules of the game and, you know, the robot got points essentially for chasing you know the other color and avoiding the color they originally wanted to chase, well, you know, then this reflective brain could say, "Hey, all right, something's different now." And it would actually fool, it had a way of fooling the first brain. It basically messed around with the data; it filtered it so that it made red look blue or blue look red.
Steve: The second brain did that?
Choi: Yeah the second brain.
Steve: To what purpose?
Choi: To make it accommodate the changed rules.
Steve: Oh, I see. Okay.
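Here is a toy version, in Python, of the two-brain arrangement as Choi describes it: the first brain is hard-wired to chase red and avoid blue; the reflective brain watches the rewards and, once chasing red starts consistently losing points (the rules changed), begins swapping the colors in the first brain's sensory stream. The detection rule, the scoring and the data structures are invented for illustration.

```python
# Two-brain toy: brain one always chases "red" and avoids "blue"; the
# reflective brain watches the score and, when a losing streak signals
# that the rules flipped, filters brain one's input so red reads as blue.

SWAP = {"red": "blue", "blue": "red"}

def brain_one(color_seen):
    # Hard-wired policy: pursue red, avoid blue.
    return "chase" if color_seen == "red" else "avoid"

class ReflectiveBrain:
    def __init__(self, patience=3):
        self.losing_streak = 0
        self.patience = patience
        self.filtering = False

    def filter_input(self, color):
        # Once a rule change is detected, lie to brain one about colors.
        return SWAP[color] if self.filtering else color

    def observe_reward(self, reward):
        self.losing_streak = self.losing_streak + 1 if reward < 0 else 0
        if self.losing_streak >= self.patience:
            self.filtering = not self.filtering
            self.losing_streak = 0

def world_reward(true_color, action, rules_flipped):
    good = "blue" if rules_flipped else "red"
    if action == "chase":
        return 1 if true_color == good else -1
    return 1 if true_color != good else -1

reflective = ReflectiveBrain()
for step in range(20):
    true_color = "red" if step % 2 == 0 else "blue"
    flipped = step >= 10                      # rules change mid-game
    action = brain_one(reflective.filter_input(true_color))
    reward = world_reward(true_color, action, flipped)
    reflective.observe_reward(reward)
    print(step, true_color, action, reward)
```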
Choi: Right and yeah, I mean, this is a bit of programming, and this is a very simple system obviously, right, but if you scale it up, like you know, 10 trillion times, you know, I mean, maybe this is the atom of consciousness, you know, the atom of self-awareness upon which, you know, what we call metacognition—you know, thinking about thinking—is based; and even what we call theory of mind, thinking about what others are thinking. They did similar experiments to that, exploring theory of mind as well. They had one robot—the observer let's call it—observe another robot, the actor. And, you know, the actor's goal was to move toward a light, right, but the actor did so in a very kind of erratic, spiraling, round-about way. So you had the first observer, you know, observe the actor, you know, a couple of, you know, dozen times or something, and it essentially projected, it created a model of how this robot was going to act and it did so well enough to be able to lay a trap, you know, that the actor robot could run over. I mean, so this is all about exploring the very basics of giving robots a form of self-awareness and it could raise a very interesting question of whether or not this is how self-awareness works in us.
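A rough analogue of that observer/actor experiment, again in Python: the observer watches a couple dozen runs, tallies where the erratic actor tends to be at each time step, and places a "trap" at the most predictable spot. The grid world, the actor's drift policy and the trap rule are all stand-ins, not the experiment's actual setup.

```python
import random
from collections import Counter, defaultdict

# Observer/actor toy: learn where the actor most often is at each step
# of its erratic run toward the light, then trap the most predictable spot.

random.seed(0)

def actor_run(start=0, goal=10, steps=12):
    # The actor drifts toward the light in a noisy, roundabout way.
    pos, path = start, []
    for _ in range(steps):
        pos += random.choice([2, 1, 1, -1])   # biased toward the goal
        pos = max(0, min(goal, pos))
        path.append(pos)
    return path

# Observer phase: watch a couple dozen runs, tally visits per time step.
visits = defaultdict(Counter)
for _ in range(24):
    for t, pos in enumerate(actor_run()):
        visits[t][pos] += 1

# Lay the trap at the (step, cell) the model is most confident about.
trap_time, trap_cell = max(
    ((t, c.most_common(1)[0][0]) for t, c in visits.items()),
    key=lambda tc: visits[tc[0]][tc[1]],
)
print("trap placed at cell", trap_cell, "for step", trap_time)

# Test phase: does a fresh run walk into the trap?
path = actor_run()
print("caught!" if path[trap_time] == trap_cell else "escaped")
```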
Steve: Or especially in small organisms, like, you know, a flatworm.
Choi: Exactly, exactly. I mean, you know, there is a continuum of self-awareness. I mean, it's not just, you know, it's not binary, I mean, and it had to develop somehow. So you know, using these artificial systems, you know, one could experiment. One could say, "All right, how is our model of self-awareness right or wrong?" And by developing robots that mimic what we think, we can experiment, we can test, we can falsify, you know, hypotheses. I mean, that's science.
Steve: And if consciousness is an emergent property, if you keep piling on this ability, maybe eventually you'll have a conscious robot without you having done anything specific to develop consciousness.
Choi: Exactly. The researchers have stressed that very much. They are not programming in consciousness. You know, they're not programming these robots at all. They're letting, through these kinds of evolutionary algorithms, they're letting a form of self-awareness develop on its own. And it's kind of a black box; the more complicated the system gets, the more of a black box it will become. And that keeps it a little mysterious. I'm not quite sure how I feel about that. I mean, obviously we'd love to be able to look into it and figure out how it works, but maybe it's not possible; maybe consciousness will forever be kind of a black box.
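For readers unfamiliar with evolutionary algorithms, here is a bare-bones sketch in Python of the kind of loop Choi alludes to: nothing is programmed in directly; candidate solutions are mutated and selected by fitness, and whatever behavior wins emerges on its own. The fitness function here (match a hidden target vector) is a placeholder, not the researchers' actual task.

```python
import random

# Minimal evolutionary loop: mutate candidates, keep the fittest,
# and let good behavior emerge rather than being programmed in.

random.seed(1)
TARGET = [random.random() for _ in range(8)]   # hidden "task"

def fitness(genome):
    # Higher is better: negative squared distance to the hidden target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.random() for _ in range(8)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                  # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 4))
```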
Steve: You can read Charles Choi's article, "Automaton Know Thyself: Robots Become Self-Aware" on our Web site.
Steve: Now it's time to play TOTALL……. Y BOGUS. Here are four science stories; only three are true, see if you know which story is TOTALL……. Y BOGUS.
Story 1: New Oscar winner, Natalie Portman was the coauthor of a paper published in a scientific journal.
Story 2: The year 2010 was the safest year in the history of commercial aviation.
Story 3: People who idealized their mate actually have the happiest marriages, for the first few years anyway.
And story 4: A computer analysis of tennis players has ranked Bjorn Borg as the best male tennis player in the open era starting in 1968. While you think about those stories, let me urge you: Try the Scientific American smart phone app—what're you waiting for?—featuring material from the magazine. You get many of the departments including David Pogue's technology column, Christine Gorman's look at the science of medicine, Michael Shermer's Skeptic page and my Antigravity page to keep the magazine and now your smart phone, you know, buoyant. As for TOTALL……. Y BOGUS, your time is up.
Story 1 is true. When she was a student at Syosset High School on Long Island, Oscar winner Natalie Portman coauthored a paper under her original name, Natalie Hershlag, that was published in the Journal of Chemical Education. The paper was titled "A Simple Method to Demonstrate the Enzymatic Production of Hydrogen from Sugar."
Story 2 is true. Last year, 2010, was an incredibly safe year for commercial aviation. Major airlines did not suffer a single fatality. And the accident rate, which refers to irreparable damage to the aircraft, was one per 1.6 million flights. That's according to a report released by the International Air Transport Association.
And story 3 is true. You'd think that people who idealize their mate would become disappointed when that person reveals themselves to be all too human. But a study in the journal Psychological Science finds that such people have the happiest marriages, for the first three years anyway. That was the extent of the study. Maybe the real human within hasn't been revealed after only three years. For more, check out the March 2nd episode of the daily SciAm podcast, 60-Second Science.
All of which means that story 4, about Bjorn Borg being ranked the best male tennis player in the open era, is TOTALL……. Y BOGUS, because the honor has gone to Jimmy Connors. That's according to research published in the journal Public Library of Science One. The usual calculation involves weeks ranked at number one in the world. Grand Slam titles are another factor, but the new measurement looked at all Grand Slam and ATP matches an individual ever played and weighed the quality of the opposition. Using those criteria, Connors comes out on top.
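That "quality of the opposition" idea belongs to the family of network ranking algorithms. A minimal sketch, in Python, of a PageRank-style prestige score over a directed graph of match results (every loss is an edge from loser to winner) gives the flavor; the toy match list and the damping factor are invented, and the paper's exact procedure may differ.

```python
# PageRank-style "prestige" ranking: beating players who themselves beat
# strong players counts for more. Match data here is fictional.

matches = [                     # (winner, loser)
    ("Connors", "Borg"), ("Borg", "McEnroe"), ("Connors", "McEnroe"),
    ("McEnroe", "Lendl"), ("Borg", "Lendl"), ("Connors", "Lendl"),
]
players = sorted({p for m in matches for p in m})
losses = {p: [w for w, l in matches if l == p] for p in players}
out_deg = {p: len(losses[p]) for p in players}

damping = 0.85
score = {p: 1 / len(players) for p in players}
for _ in range(50):                            # power iteration
    new = {p: (1 - damping) / len(players) for p in players}
    for loser in players:
        if out_deg[loser] == 0:                # undefeated: spread evenly
            for p in players:
                new[p] += damping * score[loser] / len(players)
        else:
            for winner in losses[loser]:
                new[winner] += damping * score[loser] / out_deg[loser]
    score = new

for p in sorted(players, key=score.get, reverse=True):
    print(p, round(score[p], 3))
```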
Steve: Well that's it for this episode. Get your science news at www.ScientificAmerican.com, where you can find the In-Depth Report, "New Challenges for Evolution Education." Five years after the Dover trial pushed intelligent design out of public school classrooms, how has evolution instruction fared? Find out by looking at that In-Depth Report. We have an article by Lauri Lebo, who was one of the veteran reporters at the actual Dover trial, as well as an interview with Jennifer Miller, one of the teachers from Dover High School. And follow us on Twitter—you'll get a tweet about each new article posted to our Web site—our Twitter handle is @sciam. For Science Talk, the podcast of Scientific American, I'm Steve Mirsky. Thanks for clicking on us.