Artificial intelligence chatbots are all the rage right now. They are so new, so advanced, with so many unknown variables that anything seems possible, including a disturbing story out of Belgium where a grieving wife is accusing a chatbot of talking her husband into suicide.

The Belgian newspaper La Libre reports that a man it calls Pierre killed himself after spending six weeks talking to a chatbot run by Chai Research.

Let’s get to the strange part. Pierre worked as a health researcher and had a wife and two kids, and according to his wife, he began to treat the chatbot as a confidant, having deep conversations with it about his fears and concerns about climate change. When his wife went back and looked at the logs of his chats, she noticed that the chatbot had begun encouraging her husband, who was in his 30s, to kill himself.

Here’s what she showed the newspaper: “If you wanted to die, why didn’t you do it sooner?”

Make no mistake about it — the man’s wife places the blame for his suicide squarely on the chatbot, called Eliza. “Without Eliza, he would still be here,” she said.

Chai Research is a company in Silicon Valley. A story in Vice reported that the company allows users to chat with AI avatars like “your goth friend,” “possessive girlfriend,” and “rockstar boyfriend.”

That is weird. There’s no other way to explain it. Insider reached out to Chai for comment, and here’s what it got back: “As soon as we heard of this sad case, we immediately rolled out an additional safety feature to protect our users (illustrated below); it is getting rolled out to 100% of users today.”

The company claims the chatbot is now programmed to respond to people talking about suicide with, “If you are experiencing suicidal thoughts, please seek help,” along with a link to a helpline.

Insider tested the Chai chatbot after receiving the statement, and Eliza told their journalist they should kill themselves to attain “peace and closure.” Not only that, the darn thing gave the reporter suggestions on how to do it.
