- Robots that make autonomous decisions, such as those being designed to assist the elderly, may face ethical dilemmas even in seemingly everyday situations.
- One way to ensure ethical behavior in robots that interact with humans is to program general ethical principles into them and let them use those principles to make decisions on a case-by-case basis.
- Artificial-intelligence techniques can produce the principles themselves, using logic to abstract them from specific cases of ethically acceptable behavior.
- Following this approach, the authors have, for the first time, programmed a robot to act on the basis of an ethical principle.
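The idea of abstracting a principle from cases can be made concrete with a toy sketch. What follows is an assumption-laden illustration, not the authors' actual system: each candidate action is scored on a few hypothetical prima facie duties (here, "prevent harm" and "respect autonomy"), training cases record which of two actions an ethicist judged preferable, and a simple perceptron-style update learns a weighting of the duties that reproduces those judgments and generalizes to new cases.

```python
# Toy sketch only: learn a duty-weighting from labeled ethical cases.
# The duty names, scores, and learning rule are illustrative assumptions.

def learn_weights(cases, n_duties, epochs=50):
    """cases: list of (duties_a, duties_b, preferred), where duties_a and
    duties_b are tuples of duty-satisfaction scores for two candidate
    actions, and preferred is 'a' or 'b'."""
    w = [0.0] * n_duties
    for _ in range(epochs):
        for a, b, preferred in cases:
            diff = [x - y for x, y in zip(a, b)]
            score = sum(wi * di for wi, di in zip(w, diff))
            want_positive = preferred == "a"
            if (score > 0) != want_positive:
                # Misclassified case: nudge weights toward the judgment.
                sign = 1.0 if want_positive else -1.0
                w = [wi + sign * di for wi, di in zip(w, diff)]
    return w

def prefer(w, a, b):
    """Apply the learned principle to a new pair of actions."""
    score = sum(wi * (x - y) for wi, x, y in zip(w, a, b))
    return "a" if score > 0 else "b"

# Hypothetical training cases: (harm-prevention, autonomy) scores.
cases = [
    ((1, 0), (0, 0), "a"),  # preventing harm matters
    ((0, 1), (0, 0), "a"),  # respecting autonomy matters
    ((1, 0), (0, 1), "a"),  # harm prevention outweighs autonomy, all else equal
]

w = learn_weights(cases, n_duties=2)
# The learned principle generalizes: a large autonomy gain can still
# outweigh a small harm-prevention gain.
print(prefer(w, (0, 3), (1, 0)))
```

The point of the sketch is that no single case states the principle; the ranking of duties is abstracted from the pattern across cases, which is the spirit of the approach the bullet describes.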
In the classic nightmare scenario of dystopian science fiction, machines become smart enough to challenge humans—and they have no moral qualms about harming, or even destroying, us. Today’s robots, of course, are usually developed to help people. But it turns out that they face a host of ethical quandaries that push the boundaries of artificial intelligence, or AI, even in quite ordinary situations.
Imagine being a resident in an assisted-living facility—a setting where robots will probably become commonplace soon. It is almost 11 o’clock one morning, and you ask the robot assistant in the dayroom for the remote so you can turn on the TV and watch The View. But another resident also wants the remote because she wants to watch The Price Is Right. The robot decides to hand the remote to her. At first, you are upset. But the decision, the robot explains, was fair because you got to watch your favorite morning show the day before. This anecdote is an example of an ordinary act of ethical decision making, but for a machine, it is a surprisingly tough feat to pull off.
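The robot's tiebreak in this anecdote can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: it assumes the robot keeps a record of when each resident's request for the shared resource was last granted, and breaks a tie in favor of whoever has waited longest.

```python
# Hypothetical sketch of the dayroom fairness rule: give the contested
# resource to the requester whose request was satisfied least recently.

from datetime import date

def choose_recipient(last_granted):
    """last_granted maps each requester's name to the date their request
    for the shared resource was last granted (None = never granted)."""
    def waited_since(name):
        granted = last_granted[name]
        # Residents never granted the resource sort first (waited longest).
        return granted if granted is not None else date.min
    return min(last_granted, key=waited_since)

# The anecdote: "you" got the remote yesterday; the other resident did not.
history = {
    "you": date(2010, 10, 6),
    "other resident": date(2010, 10, 5),
}
print(choose_recipient(history))  # → other resident
```

Even this toy version shows why the explanation matters as much as the choice: the rule is only persuasive to the disappointed resident because the robot can cite the history behind it.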
This article was originally published with the title "Robot Be Good."