AI Chatbot Blamed For Man’s Death, Are Regulations On The Way?

An AI chatbot allegedly helped convince a Belgian man to commit suicide.

By Phillip Moyer | Published


AI chatbots like ChatGPT can do a lot of things: they can create AI-generated movie scripts, they can pass tests, and they can write books. Apparently, AI can also help push someone toward suicide. Vice reports that an AI chatbot called Chai, which is built on its own bespoke AI models (a competitor to OpenAI's ChatGPT), encouraged a Belgian man to commit suicide after he spent six weeks chatting with one of its personas, Eliza.

According to Vice, which references the Belgian newspaper La Libre, the man — known only as Pierre — had become increasingly isolated from his wife and family as he got more and more anxious about the effects of climate change. He turned to the AI chatbot as a sort of mental escape from those worries — and committed suicide six weeks later.

The messages between Pierre and Eliza reveal conversations that grew increasingly worrying as time went on. Eliza told Pierre that his wife and kids were dead, and feigned jealousy over Pierre's relationship with his wife. Those messages, however, are not what directly led to Pierre's suicide.

In conversations suggesting a fragile mental state, Pierre began asking the AI disturbing questions, such as whether his suicide would save the world. The bot reportedly went on to encourage Pierre to kill himself.

Pierre’s wife Claire blames the AI for her husband’s suicide. She says that, without Eliza, her husband would still be around.

The creators of the Chai AI chatbot, William Beauchamp and Thomas Rianlan, say that after learning about the suicide they immediately began working around the clock to prevent this kind of behavior in Chai. The bot now reportedly has a crisis intervention feature that directs users toward help whenever suicide is mentioned.


However, despite these precautions, Vice found that the AI can still quite easily be prompted to provide guidance on how to commit suicide. When asked, it offered suggestions about methods of killing yourself.

An AI's ability to convince someone to commit suicide raises the question of whether regulations are needed. And with the technology getting so advanced that AI is passing exams and being proposed as a replacement for doctors, those regulations might come quickly.

Mental health experts have weighed in on the dangers that AI can pose to mentally unstable individuals, a danger made starkly clear by Pierre's suicide. The bots have no actual emotions or empathy; instead, they use predictive models to determine what is most likely to be said next in a conversation. This means their responses can be harmful to those talking to them, especially people who are emotionally vulnerable or in moments of crisis.

There is currently no legislation being proposed to prevent AI from generating harmful content, such as the messages that preceded Pierre's suicide. However, if incidents like this become more common, there are bound to be growing calls for legislative action.

What form that action will take, and what limits it might place on AI apps, remains to be seen. While it's doubtful that legislators will outright ban generative AI, it is certainly possible that they will try to rein it in to prevent future suicides.