People Are Being Arrested For Using AI To Write News

A man in China was arrested for using AI to create a fake news story.

By Zack Zagranis | Published


AI can seem like a fun tool to mess around with when you want to see Harry Potter as a Jedi or a baby, but in the wrong hands, it can be dangerous and even lead to jail time. According to The Byte, a man in China was arrested for using ChatGPT to write a fake news story. The story falsely claimed that a train accident had killed nine people.

The man, identified only as Hong, was arrested in China’s Gansu province for “using artificial intelligence technology to concoct false and untrue information,” according to local police, who charged Hong with “picking quarrels and provoking trouble.” It’s a rather broad charge that carries a sentence of five to ten years in prison if Hong is convicted.

The incident marks the first time an arrest was publicly made under China’s new AI regulations. The regulations are an attempt by the Chinese government to curb the use of “deep synthesis” technology to spread disinformation online.

China’s new Administrative Provisions on Deep Synthesis for Internet Information Service make it illegal for anyone to use AI to make deepfakes unless the content is explicitly labeled as such and can be traced back to its original source. The provisions also require anyone using deep synthesis technology to alter someone’s voice or image to contact that person first and gain their consent.

Deepfakes are defined as videos in which a person’s face and/or body has been digitally altered using AI so that they appear to be someone else. The process has so far largely been used for entertainment purposes, such as Matt Stone and Trey Parker’s web series Sassy Justice, a political satire created by the South Park duo to poke fun at Donald Trump and other political figures.


But deepfakes can be used for nefarious purposes as well. Take the current deepfake trend infecting the adult entertainment industry. The internet is riddled with pornographic videos featuring the heads of celebrities like Scarlett Johansson and Taylor Swift grafted onto the bodies of adult film stars. Most reputable adult sites prohibit deepfake content, but there’s no way to stop it all.

And that’s the problem, stopping it. China’s approach reads as overly severe, but what’s the alternative? How can legislators make sure AI doesn’t completely overrun the internet without infringing on people’s individual liberties? Is it even possible?

At the rate that AI programs like ChatGPT are growing and learning, it might not be. Even China’s draconian approach might not be enough to prevent the spread of AI-manipulated media.

But is that such a bad thing? While AI, like any other tool, has the potential for evil, it also carries the potential for good. AI can augment the capabilities of differently abled individuals or perform mundane tasks, giving people more free time.

Whether it’s used to generate fake news stories like the one in China or pictures of Jason Momoa as a Minecraft character, one thing we can be sure of is that AI isn’t going away anytime soon. Let’s just hope the end result is more Star Trek than Terminator.