Isaac Asimov’s Robot Visions Is Truly Visionary

By Joelle Renstrom

Most people are familiar with Isaac Asimov’s I, Robot, a collection of nine short stories first published together in 1950 (nearly all of the individual stories originally appeared during the 1940s). The collection includes Asimov’s groundbreaking robot tales, as well as principles such as the Three Laws of Robotics, which have influenced pretty much every robot story since. His book Robot Visions combines those stories with short works of nonfiction in which he reflects on everything from the feasibility of the Three Laws to his predictions about the roles robots will play in the future. The combination of fiction and nonfiction provides a wonderful lens into Asimov’s mind, and it raises points and questions about robots that grow more pressing and relevant every year.

Asimov was highly influenced by R.U.R., Karel Čapek’s 1920 play that introduced the word “robot” (killer robots who overthrow humanity, to be specific). In a short essay called “Robots I Have Known,” Asimov references Čapek’s work and describes the idea of robots that emerged from the play and from other robot fiction as “a sinister form, large, metallic, vaguely human, moving like a machine and speaking with no emotion.” It’s this description that Asimov seeks to challenge, particularly through his creation of the laws that constrain robots and thus protect humanity. First, a robot may not harm a human or, through inaction, allow a human to come to harm (Asimov later added a “zeroth” law, which substitutes “humanity” for “human,” allowing robots to act for the collective good rather than simply the individual good). Second, a robot must obey orders given by humans, unless doing so would violate the first law. Third, a robot must protect its own existence, so long as doing so doesn’t violate the first or second law.
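
Because each law yields to the ones above it, the hierarchy behaves like a lexicographic ordering, which is easy to sketch in code. The snippet below is purely illustrative and not anything from Asimov’s text: the Action class, the numeric “violation” scores, and the choose function are all hypothetical, a minimal model of the precedence the laws describe.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action with hypothetical 'violation' scores for each law."""
    name: str
    harm_to_humans: float  # First Law: injury caused or allowed (0 = none)
    disobedience: float    # Second Law: degree to which an order is ignored
    self_damage: float     # Third Law: risk to the robot itself

def choose(actions: list[Action]) -> Action:
    # Tuples compare lexicographically, so any reduction in harm to humans
    # outweighs obedience, which in turn outweighs self-preservation.
    return min(actions, key=lambda a: (a.harm_to_humans, a.disobedience, a.self_damage))

# A "Runaround"-style dilemma: obeying the order endangers the robot.
options = [
    Action("retrieve the selenium", harm_to_humans=0.0, disobedience=0.0, self_damage=0.9),
    Action("retreat to safety",     harm_to_humans=0.0, disobedience=1.0, self_damage=0.0),
]
print(choose(options).name)  # obedience outranks self-preservation
```

Notably, “Runaround” gets its plot from the fact that the real ordering isn’t this clean: a casually phrased order weakens the second law’s pull while heightened danger strengthens the third, and the robot Speedy ends up circling the equilibrium point between them.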

About the common perception of robots, Asimov says, “The key word in the description is ‘sinister’ and therein lies a tragedy, for no science-fiction theme wore out its welcome as quickly as did the robot.” It’s ironic that the master of robot fiction would ever say such a thing, but in reading his nonfiction it’s clear that the problem isn’t really the robot itself, but rather that the only “robot-plot that seemed available to the average author [was] the mechanical man that proved a menace, the creature that turned against its creator, the robot that became a threat to humanity.” In his opinion, far too many stories contained the “weary moral that ‘there are some things mankind must never seek to learn.’”

Asimov believed that such a moral was overly simplistic, clichéd, and unrealistic. Sure, it makes for some dramatic plot twists, but he didn’t agree that this was the natural progression of robots or of the robot-human relationship. In Robot Visions, he lays out his belief that robots are “story material, not as blasphemous imitations of life, but merely as advanced machines.” While so much robot fiction focuses on the lack of distinction between man and machine, that line remained clear for Asimov. It holds even in short stories such as “Reason,” in which a robot essentially develops its own religion because it cannot accept that an inferior being such as man created it, and “Runaround,” in which a robot becomes paradoxically trapped by a situation in which it must break one of the laws to obey another. The latter is the first story in which the Three Laws explicitly appear.

Asimov’s robots are sentient. They understand what they are and that they’re different from humans. They also understand that they are machines, and they don’t confuse that with being human. In “Reason,” the robot QT-1 (“Cutie”) has a superiority complex, which spawns its doubts that humans really created it, but even then QT-1 performs perfectly, preventing catastrophic harm from befalling the humans with whom it argued about its origin.

Such stories illustrate Asimov’s view that “a machine does not ‘turn against its creator’ if it is properly designed.” The operative word being “if,” of course. He goes on to acknowledge that “when a machine, such as a power-saw, seems to do so by occasionally lopping off a limb, this regrettable tendency toward evil is combatted by the installation of safety devices. Analogous safety devices would, it seemed obvious, be developed in the case of robots.” His fail-safe is the Three Laws. Sure, it’s not like the safety on a gun, and implementing the Three Laws would require careful and deliberate programming of a robot’s brain (the “positronic brain,” as he calls it), but Asimov argues that such programming is exactly the safety measure humans are looking for: “I have managed to convince myself that the Three Laws are both necessary and sufficient for human safety in regard to robots.”

But Asimov knows that even if humankind is able to program robots with some version of the three laws, our fear won’t simply disappear. “Mankind may know of the existence of the Three Laws on an intellectual level and yet have an ineradicable fear and distrust for robots on an emotional level,” he says, referring to the term he coined: the “Frankenstein complex.”

He also understands that there are practical and logistical concerns with regard to robots, such as “the possible replacement of human labor by robot labor.” But this isn’t necessarily a bad thing, according to Asimov. “Who says…that all teachers must be human beings or even animate?” he asks, when considering how to keep people “imaginative and creative.”

He doesn’t think robots will have any desire to wipe out humanity. As he demonstrates in his story “The Evitable Conflict,” something like the zeroth law will continue to guide robots; of that story he writes, “The robots finally win the mastery after all, but only for the good of man.” Asimov goes so far as to suggest that people develop “laws of humanics” that guide humans’ treatment of each other (“a human being must never injure another human being”), as well as laws that guide humans’ treatment of robots, like “a human being must not harm a robot.” He suggests that it’s ridiculous that we have laws to constrain robots when humans themselves seem unable to obey similar edicts. He has a point there.

Asimov’s thoughts and words about robots serve as a relevant guide to the future, particularly when it comes to coexisting with our mechanical counterparts. And while Asimov acknowledges that “fear of supplantation” or being made “obsolete” is understandably humankind’s greatest worry, he poses a thought-provoking question in one of his essays: “If a computer can be built to be as intelligent as a human being, why can’t it be made more intelligent as well?…Maybe that’s what evolution is all about…Maybe it is time we were replaced.”

Video: Isaac Asimov interviewed by Bill Moyers (1988), from MOOC-Ed on Vimeo.