Extinction Level Threat From AI Confirmed By Government Report

By Sckylar Gibby-Brown | Published


A report commissioned by the U.S. government warns of significant national security risks posed by AI, suggesting it could lead to an “extinction-level threat to the human species.”

Urgent Action Needed

The report advises urgent decisive action to avert these potential risks. It proposes radical policy actions, including making it illegal to train AI models using excessive computing power and requiring government permission for deploying new models.

Published on March 11, the report echoes the growing concerns surrounding the development of advanced AI and artificial general intelligence (AGI). Over the last couple of years, there has been growing fear surrounding how AI will change the world.

While many have been concerned that AI could take over creative jobs held by artists, writers, and actors, now there's an even bigger concern: total human extinction.

Dangerous As Nuclear Weapons?


AI is growing very smart, very fast. The report warns that if development continues unchecked at this pace, AI could become as dangerous as nuclear weapons.

Authored by three specialists who spent over a year conducting extensive research and interviews with key stakeholders, the report sheds light on the alarming implications of unchecked AI progression and how it could lead to the extinction of life as we know it.

Increased Policy Measures

One of the report's main recommendations for averting this predicted extinction calls for unprecedented policy measures regulating the AI industry.

It proposes legislative measures to limit the computing power used for training AI models, suggesting that exceeding certain thresholds should require government permission.

Additionally, the report suggests outlawing the publication of inner workings or “weights” of powerful AI models and tightening controls on the manufacture and export of AI chips.

No Government Response, Yet

The report, which was delivered to the State Department on February 26, 2024, urges swift governmental intervention to mitigate the risks associated with AI development.

However, the State Department has not yet responded to inquiries regarding the report’s recommendations. 

Still Controversial?

While it’s clearly important to mitigate the risk of AI leading to our extinction, the recommendations outlined in the report are not without controversy.

They're strict, far stricter than current policies. And some experts, such as Greg Allen from the Wadhwani Center for AI and Advanced Technologies, question the feasibility of implementing such stringent measures. He points to existing governmental approaches that favor transparency and monitoring over outright bans.

Need To Act


The authors of the report, Jeremie and Edouard Harris, founders of Gladstone AI, acknowledge the challenges associated with their recommendations. They argue that the potential risks posed by uncontrolled AI development, including the risk of human extinction, outweigh the industry's pursuit of rapid innovation.

As the debate surrounding AI regulation intensifies, stakeholders across various sectors will need to grapple with the complex ethical, security, and economic implications of AI advancement, including the potential that the technology could lead to our extinction.

Failure to act, the report warns, could have far-reaching and irreversible consequences for humanity.

Source: Gladstone