AI Picture Of The Pentagon Exploding Briefly Rattled The Stock Market

An AI-generated photo of an explosion near the Pentagon is being blamed for a brief drop in the U.S. stock market.

By Charlene Badasie | Published


An AI-generated photo of an explosion near the Pentagon in Washington, D.C., caused a brief dip in the U.S. stock market. The image was shared by a verified Twitter account called Bloomberg Feed, accompanied by a misleading caption that read, “Large Explosion Near the Pentagon Complex in Washington, DC – Initial Report.”

Although Twitter verification no longer signals authenticity, since anyone can pay for a blue checkmark, the AI-generated Pentagon image still had real-world consequences. According to The Byte, the “news” went viral after a user with over 650,000 followers shared the photo at 10:06 am. Four minutes later, the stock market fell by 0.26%.

The Arlington Police Department acted quickly to quell the panic by stating that the AI Pentagon image was indeed a fake. “There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public,” the department said on social media.

While law enforcement hasn’t confirmed whether the image was made using artificial intelligence tools, it features several hallmarks of AI-generated images: the columns on the building vary in size, and the fence blends into the sidewalk in places. The post, which was also shared by a Russian state-media Twitter account with more than 3 million followers, has since been deleted.

Fortunately, the markets swiftly bounced back once the AI Pentagon photo was revealed to be a hoax. Bitcoin also dipped briefly as the fake news circulated, falling to $26,500 before stabilizing; it is currently trading at $26,882.

This form of online deception has heightened concerns among critics of unregulated AI development. Experts have long warned that bad actors could misuse advanced AI systems to spread misinformation and sow chaos in online communities.

The AI-generated Pentagon image isn’t the first such hoax. Other fake viral images that misled the public include photos of Pope Francis in a Balenciaga jacket, a picture of former President Donald Trump being arrested, and deepfake videos of celebrities like Elon Musk endorsing cryptocurrency scams. Fake X-rated video footage of Harry Potter star Emma Watson also surfaced online.

As a result, hundreds of tech experts have called for a six-month pause on advanced AI development until proper safety guidelines are established. Dr. Geoffrey Hinton, known as the Godfather of AI, left his role at Google so he could speak freely about the technology’s risks without reflecting on his former employer.

Instances of misinformation like the AI Pentagon image add fuel to the ongoing debate over a comprehensive ethical and regulatory framework for artificial intelligence. As the technology becomes an increasingly prominent tool for disinformation agents, the consequences could be far more chaotic than a temporary market dip.

A lack of transparency, accountability, and ethical considerations could amplify these risks. But until some form of regulation is implemented worldwide, instances of fake news and other dangerous trends are bound to increase.