Stephen Hawking Warns Against Artificial Intelligence

By Joelle Renstrom


When Stephen Hawking issues a warning about something, we generally listen carefully. This time he’s not talking about black holes or time travel or telling us that the human race has to expand to other planets to prevent our extinction. This time, he’s warning us against developing artificial intelligence and predicting that if we do, the consequences could be disastrous.

In an article for the Independent, Hawking refers to Transcendence as a movie that, despite its grim technological consequences, may make us less inclined to take the ramifications of artificial intelligence seriously. He points to a number of recent developments, such as driverless cars, Siri, Google Now, and Watson, as examples of how quickly AI is progressing. While none of these are particularly threatening, he says that they’re “symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation,” and that these advancements are only the beginning. He doesn’t specifically mention the deep learning software used by Google, Facebook, and Microsoft, nor does he mention Google and Facebook racing to bring the Internet to everyone, though I imagine these thoughts may not have been far from his mind when he noted the “IT arms race.”

He says that the “potential benefits are huge” when human intelligence is augmented by artificial intelligence, and counts ending war, disease, and poverty among the best-case outcomes. Still, he cautions that the act of creating AI “might also be [humanity’s] last, unless we learn how to avoid the risk.” He refers to autonomous weapons systems and the economic implications of artificial intelligence, acknowledging that “there are no fundamental limits to what can be achieved.” He’s talking here about the Singularity, which scientists such as Ray Kurzweil predict will be upon us by approximately 2045, at which point artificial intelligence will vastly eclipse human intelligence, with AIs becoming advanced enough to improve themselves and hasten their own evolution into beings we have no control over.

Hawking says it’s not difficult to imagine AI “outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” and says that while we’re interested in controlling the technology in the short term, in the long term the real question is whether anything or anyone can control it. He then talks about something my students and I discuss all the time—whether experts and scientists are working to make sure that these apocalyptic AI scenarios don’t come to pass. He argues that unfortunately, this isn’t happening—that experts are pretty nonchalant about the arrival of such AI and are more or less “leaving the lights on.”

He points to the Centre for the Study of Existential Risk (CSER), the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute as organizations devoted to studying the impact of AI, but says that other than these examples, “little serious research is devoted to these issues.” He echoes many of the concerns Bill Joy raises in his wonderful essay “Why the Future Doesn’t Need Us,” in which he cautions scientists against the blind pursuit of GNR (genetics, nanotechnology, and robotics) technology. Among other things, Joy argues that scientists such as Kurzweil are dangerously optimistic about the outcomes of their work and don’t fully engage with the negative consequences, which means they’re not preparing for them.

Unless we can figure out a way to pass laws or guidelines regulating AI, it seems the average person can do little to “improve the chances of reaping the benefits and avoiding the risks,” so it does indeed appear as though our future is in the hands of the technological elite. Hawking isn’t particularly comforted by that notion, which is worrisome. It’s tempting to say something such as “only time will tell,” but that’s exactly what Hawking argues we shouldn’t do.