Isaac Asimov’s first law of robotics states that a robot may not harm a human or, through inaction, allow a human to come to harm. The purpose behind the law is to avert robot-apocalypse scenarios and, more generally, to assuage people’s fear of artificial life. The problem with the law, though, is implementation. Robots don’t speak English—how would one program such a law, especially given how vague the notion of harm is? Does taking jobs from humans constitute harm? In Asimov’s short story “Liar!,” a mind-reading robot realizes that harm can also be emotional, and lies to humans to spare their feelings, which of course only harms them more in the long run. All of this raises the bigger question of whether robots can be programmed or taught to behave ethically, a subject of ongoing debate among roboticists. A recent experiment by Alan Winfield of the UK’s Bristol Robotics Laboratory sheds some light on this question, and raises a new one: do we really want our robots to try to be ethical?
The experiment revolved around a task designed to exemplify Asimov’s first law, except that instead of interacting with humans, the robot subject interacted with robot proxies. The rule remained the same: the study robot, A, was programmed to move toward a goal at the opposite end of the table, and to “save” any of the human-substitute robots (h-robots), whenever possible, as they moved toward a hole.
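The decision rule described above can be sketched in a few lines of code. This is a hypothetical toy model, not Winfield’s actual controller: the positions, `DANGER_RADIUS`, and `choose_target` logic are all assumptions made for illustration. Robot A heads for its goal unless an h-robot strays too close to the hole, in which case A diverts toward the most endangered one.

```python
import math

# Hypothetical table layout (assumed coordinates, not from the experiment)
HOLE = (5.0, 5.0)
GOAL = (10.0, 0.0)
DANGER_RADIUS = 3.0  # how close to the hole counts as "in danger" (assumed)

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def choose_target(a_pos, h_positions):
    """A first-law-style rule: if any h-robot is within DANGER_RADIUS of the
    hole, head for the one nearest the hole; otherwise pursue the goal."""
    in_danger = [h for h in h_positions if dist(h, HOLE) < DANGER_RADIUS]
    if in_danger:
        return min(in_danger, key=lambda h: dist(h, HOLE))
    return GOAL

# One h-robot drifting toward the hole: A abandons the goal and diverts.
print(choose_target((0.0, 0.0), [(4.0, 4.0)]))  # -> (4.0, 4.0)
# No h-robot near the hole: A simply heads for the goal.
print(choose_target((0.0, 0.0), [(0.0, 9.0)]))  # -> (10.0, 0.0)
```

With a single endangered h-robot the rule is unambiguous; as the article goes on to explore, the interesting cases arise when the robot must weigh competing rescues against one another.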