People Are Having Deepfake Memories Implanted In Their Brains

Scientists are now able to convince people that deepfake clips are genuine memories of movies they actually watched.

By Charlene Badasie

Scientists have successfully implanted deepfake memories into human test subjects. Led by Gillian Murphy at University College Cork in Ireland, the team surveyed over 400 participants who were shown deepfake clips of movie remakes that were never made. One of these videos featured Will Smith as Neo in The Matrix, a role originally played by Keanu Reeves.

The deepfake memories study, published in the journal PLOS One, showed that 49 percent of participants believed the fake remake was real. Many of them even “remembered” the faux Matrix reboot as being better than the original film. And 41 percent of those shown a fake Captain Marvel starring Charlize Theron in the titular role claimed it was better than the original.

“Though our findings suggest that deepfake technology is not uniquely placed to distort movie memories, our qualitative data suggested most participants were uncomfortable with deepfake recasting,” the researchers said. Their concerns included the potential to undermine artistic integrity and unease about the level of control the technology could offer.

A deepfake of Mark Hamill as Luke Skywalker from The Mandalorian

However, deepfake videos were no more effective at distorting memories than simple text descriptions. “We shouldn’t jump to predictions of dystopian futures based on our fears around emerging technologies,” Murphy, a misinformation researcher at University College Cork, told The Byte. She added that evidence of harm should be gathered before rushing to solve problems that might not exist.

Moreover, the findings do not raise significant concerns, as the study indicates that deepfakes pose no uniquely potent threat to memory compared with existing forms of misinformation. Fortunately, people can safeguard themselves from deceptive deepfakes by improving their technological literacy, which makes it easier to tell fake media from real.

The deepfake phenomenon gained widespread attention in 2017 when a Reddit user under the pseudonym “deepfakes” began sharing highly realistic yet fabricated pornographic videos featuring celebrities. These manipulated videos showcased the immense potential of AI in creating media that appears authentic to the unsuspecting eye.

Since then, the term “deepfake” has become synonymous with AI-manipulated content, and its applications have raised concerns like those explored in the memories study. The most pressing is the use of deepfakes to spread false information and fake news. By crafting videos that convincingly depict public figures doing or saying things they never did, malicious actors can sow confusion and damage reputations.

On the lighter side, deepfakes have also found their place in entertainment and internet memes. Talented artists and creators on social media have used the technology to generate amusing content, such as superimposing famous actors into iconic movie scenes, resulting in delightful mash-ups.

Interestingly, previous studies unrelated to deepfake technology have demonstrated various techniques for implanting false memories. One prominent researcher in this field is Elizabeth Loftus, who conducted the “lost in the mall” experiment. In this study, participants were falsely told they had been lost in a shopping mall as children.

Surprisingly, a quarter of the participants recalled the false event, and the experiment was replicated by Murphy in 2022. Together, these studies show that memories of original experiences can be distorted by post-event information, a phenomenon Murphy summarized as the potential for our memories to be shaped by details we encounter later.

Next, Murphy and her team plan to conduct a new study involving deepfake memories related to politics. The objective will be to assess potential effects on voter memory and attitudes.