The rise of AI has been annoying enough in creative spaces, where it has convinced plenty of tech bros that they became amazing artists overnight. However, the scarier possibility of AI has always been its ability to create deliberate misinformation: considering how easily millions of people regularly get bamboozled by misleading emails and social media posts, AI has the potential to make all of that worse. According to Deadline, YouTube agrees, and the streaming giant is issuing “new guidelines for AI-enhanced videos uploaded to its platform” and providing “labels to inform viewers when the video they’re seeing has been altered or synthetically created.”
Strict AI Guidelines
Theoretically, these new guidelines will apply to any YouTube videos enhanced by AI. That includes fairly innocuous cases where, for instance, someone uses AI to enhance a low-resolution image or video. Unsurprisingly, though, the company is more worried about artificial intelligence being used to spread misinformation about very sensitive subjects.
According to Vice Presidents of Product Management Jennifer Flannery O’Connor and Emily Moxley, one area of concern is the use of public officials’ images: such videos could make thousands or even millions of people believe that, say, Joe Biden or Donald Trump said or did something scandalous when they did no such thing. Speaking of those two, YouTube is also concerned about how the use of AI could impact elections. With the presidential election next year, it’s important to put certain controls in place before AI-enhanced political misinformation spreads like digital wildfire.
Stopping The Spread Of Misinformation
Other areas of concern for YouTube include AI videos covering “ongoing conflicts and public health crises.” It’s easy to imagine artificial intelligence misleading people about the conflict between Israel and Palestine or about public health emergencies such as the COVID-19 pandemic. In extreme cases, such misinformation could be more than just annoying: it could possibly get someone killed.
Penalties For Not Disclosing The Use Of AI
What does YouTube plan to do when content creators don’t disclose their use of AI? There are a variety of potential penalties, including the forced removal of the offending content and the removal of the creators themselves from the YouTube Partner Program. However, the company has stressed that it will be working closely with content creators to review the new requirements, which will hopefully minimize surprise suspensions.
While such restrictions sound very useful and necessary, it’s worth noting that YouTube has a more complex relationship with AI than the average user may suspect. For example, it has added “a searchable cache of AI material called Dream Screen designed for the Shorts platform” in the hopes that more users who love TikTok will flock to YouTube’s new alternative.
When YouTube CEO Neal Mohan was asked about the potential of users abusing this to create deepfakes and other misleading AI videos, he simply noted “there might be challenges” but that all such videos would be “subject to YouTube community guidelines.”
Speaking of deepfakes, one way YouTube hopes to keep AI in check is by adding the option for users to make a privacy request to remove “AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.” Hopefully, this will help individuals keep their faces and voices from being stolen for content while also restricting the proliferation of copycat content (where, say, a would-be musician uses AI to sound more like a famous singer).
The AI Frenzy
Some might say YouTube hasn’t gone far enough, but these restrictions are a welcome first step toward reining in AI before it runs completely amok. Collectively, we’ve all learned that the AI takeover looks a lot less like a killer Terminator and a lot more like a deepfake of Arnold Schwarzenegger trying to sell us something. With any luck, YouTube’s new policy means we won’t be hearing “I’ll be back” from the worst content creators who rely on AI as a crutch for their awful YouTube videos.