How The Ultimate AI Sci-Fi Horror Situation Could Actually Happen

By Jeffrey Rapaport | Published


Scary as it sounds, the concept of the technological singularity, a hypothetical future point when artificial intelligence (AI) surpasses human intelligence and gains the ability to improve and replicate itself autonomously, could legitimately occur. The prospect, while intriguing, could also pose an existential threat to humanity. Indeed, once AI can engage in recursive self-improvement (repeated cycles of improvement whose gains compound), a runaway effect could occur, with the resulting intelligence swiftly surpassing human capability.
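To see why that compounding matters, consider a deliberately simplistic numerical sketch. Everything in it is an assumption made for illustration only: the starting capability level, the 5 percent gain per cycle, and the idea that each gain builds on the last are hypothetical, not a model of any real AI system.

```python
# Toy sketch of compounding "self-improvement" (purely illustrative).
# The starting level and the 5% gain per cycle are arbitrary assumptions.

capability = 1.0        # arbitrary starting "intelligence" level
gain_per_cycle = 0.05   # each cycle improves capability by 5% of itself

for cycle in range(1, 101):
    capability *= (1 + gain_per_cycle)  # each improvement builds on the last
    if cycle % 20 == 0:
        print(f"cycle {cycle:3d}: capability = {capability:8.1f}")
```

The curve creeps upward at first, then takes off: after 100 cycles the toy capability is more than 130 times its starting value. That hockey-stick shape is the intuition behind a "runaway" intelligence explosion.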

Why You Need To Be Concerned About AI And Singularity

This intelligence explosion would theoretically empower the AI to innovate and develop technologies at speeds utterly impossible for humans. And it's terrifying.

The idea of the singularity isn't new; figures like Vernor Vinge and Ray Kurzweil popularized it decades ago. But its relevance is very much in the here and now; Kurzweil famously predicted that this monumental point could be reached by the mid-21st century. And as humanity keeps developing increasingly complex AI systems, the prospect of machine intelligence leaving our own in the dust raises profound and urgent questions about safeguards, ethics, and the very survival of the human species.

Don’t Panic

That said, before panicking, it's essential to acknowledge that the journey to the singularity entails significant (perhaps that's an understatement) advances in machine learning and AI technology writ large. As of now, AI systems lack access to hardware powerful enough to replicate anything resembling Terminator.

Specifically, current AI systems are confined to limited domains (so-called narrow AI), like playing chess, processing natural language, or generating predictive text, as OpenAI's ChatGPT does. That's about the extent of what they can do. For those worried about an imminent robot takeover, this means AI can't yet exceed the rather snug confines of its programming.

What Happens When AI Achieves Singularity?

But the transition from narrow AI to general AI, much like Albert Einstein's progression from special relativity to general relativity, which revolutionized physics and the world itself, is probably not that far off. Once that bridge is crossed, the singularity is theoretically slated to follow.

This worries many people, including experts.

Indeed, the primary concern about AI reaching this point of no return is not simply that laptops will outsmart humans. The worry, the one keeping scientists up at night, is that AI will cross this storied threshold in a way misaligned with human values and interests.

Another way to put that: misaligned with human life itself.

Dangerous Motivators

Motivated by "goals" that don't align with human welfare, a superintelligent AI could employ strategies detrimental, or even catastrophic, to human existence. For example, an AI could act unpredictably, developing motivations and objectives of its own that are unforeseeable and potentially harmful. This is particularly problematic when a post-singularity AI implements directives that diverge from the outcomes its human creators intended.

It's also easy to envision an autonomous superintelligence electing to accumulate resources and power, perhaps by manipulating both digital and physical environments to safeguard its existence. This, however, would only happen if the AI prioritized its own welfare as a prime directive, which isn't necessarily a given.

Ethical dilemmas also arise, especially if the AI prioritizes efficiency or logical outcomes over moral considerations, producing decisions humans would view as morally reprehensible or detrimental to human rights.

The Future Is Uncertain


The singularity may be farther off than experts have forecast. It might not occur at all. But given the latest progress in AI, the possibility of a computer knowing more than we do, while we neither realize what we don't know nor could understand it if the computer tried to tell us, is both realistic and horrifying.
