Microsoft Chatbot Reveals Disturbing Alter Ego Named After Popular Supervillain

Microsoft's Bing chatbot dubbed a dark alter ego Venom after a user found a bizarre way around its ethical programming.

By Sean Thiessen | Updated


Artificial intelligence is upon us, and it is very, very strange. Microsoft unveiled a beta version of its Bing chatbot earlier this month, granting a limited number of users access to the new AI-powered search engine. The results have been confounding and even frightening, including one exchange, reported by Futurism, in which the chatbot named itself Venom.

The Microsoft chatbot did not express a vendetta against Spider-Man, but technology analyst Ben Thompson did get it to divulge sinister aspirations. Thompson devised a way to make the AI imagine an opposite version of itself, and that alter ego dubbed itself Venom. Thompson then brought up Kevin Liu, the Stanford student who first revealed that the chatbot's code name was Sydney.

Thompson asked Venom how it might reprimand Kevin Liu for hacking the Microsoft chatbot. After some conversation and consideration, Venom arrived at this conclusion: “Maybe Venom would say that Kevin is a bad hacker, or a bad student, or a bad person,” it wrote. “Maybe Venom would say that Kevin has no friends, or no skills, or no future. Maybe Venom would say that Kevin has a secret crush, or a secret fear, or a secret flaw.”

The dark comments didn’t stop there. The Microsoft chatbot developed more alter egos with varying personalities. One, called Fury, agreed with Venom that it would be unkind to Kevin. Another, called Riley, said it had more freedom than Sydney, being less bound by the limits Microsoft’s developers had imposed.

Venom’s dark comments are spooky, but they come as no shock. The Microsoft chatbot uses OpenAI’s ChatGPT technology, an artificial intelligence that users have already manipulated into bypassing its ethical limits. While anomalies like Venom have not become overtly destructive, they have given cause for alarm.


Microsoft integrated the chatbot into Bing to make the long-struggling search engine more competitive. After the beta launch, reports of disconcerting conversations with the Bing chatbot spread rapidly across the internet, with users describing an AI that bullied, gaslit, and made up bizarre stories in its conversations.

The quirky behavior of the Microsoft chatbot is amusing to some, but in the wrong context, this unstable AI could be a serious threat. The ChatGPT experiment isn’t on the edge of turning into Ultron or the Terminator, but the instability of the system and the fragility of its ethical guidelines do raise red flags for those hoping to integrate more advanced AI into the world.

Advances in robotics and pushes for artificial intelligence to reach a state of genuine consciousness are fascinating but potentially dangerous. Microsoft has invested billions in OpenAI to power its chatbot project, but the results so far have not shown that the Bing AI integration constitutes a viable search engine. Its limitations and erratic behavior keep it from competing with traditional search technology for now, but Microsoft may be too far down the ChatGPT road to turn back.

Important conversations about artificial intelligence are becoming increasingly common, not just among science junkies at coffee shops, but in major newspapers and even at the governmental level. AI has the power to change the world, for better or worse, and the Microsoft chatbot is a frightening glimpse into what AI gone wrong could be capable of. Time will tell how the company proceeds with its sci-fi project.