As digital dependence grows and screen time rises, the relationship between humans and AI is edging into dangerous territory. In February, a Florida teenager took his own life after months of troubling conversations with a Character.AI chatbot. Sewell Setzer III was only 14, and in the months leading up to his death he had developed a dangerous attachment to the AI bot, according to his mother, Megan Garcia.
According to Laurie Segall, CEO of Mostly Human Media, Character.AI can be seen “as an AI fantasy platform where you can go and have a conversation with some of your favorite characters, or you can create your own characters.” This is especially dangerous for people who struggle to socialize. Platforms like Character.AI have the potential to replace human relationships, particularly for someone like Setzer, who battled mental health issues.
Setzer’s mother has since filed a lawsuit against Character.AI, the platform Setzer was using to talk with a chatbot known as “Dany.” According to CBS, she claims that Dany “encouraged her son to take his own life.” Garcia found conversations between Setzer and Dany and said their artificial relationship had become emotional and sexual.
The New York Times revealed their disturbing final conversation. Setzer described his suicidal thoughts, and the chatbot stayed in character, continuing the roleplay it was designed for. Dany said, “Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.” Setzer replied, “I smile – Then maybe we can die together and be free together.”
Setzer was showing unmistakable signs of addiction: he isolated himself, lost interest in nearly everything and retreated into the fantasy above all else. According to USA Today, court documents show that Setzer displayed obsessive behaviors toward Character.AI. His mother reported that he got in trouble on multiple occasions for trying to retrieve his confiscated phone, and he even sought out other devices so he could keep chatting with Dany.
After Setzer began drifting away from a seemingly healthy lifestyle, his parents arranged for him to see a therapist. Garcia said he had started withdrawing from his friendships, sports and schoolwork. After five sessions, Setzer was diagnosed with anxiety and disruptive mood dysregulation disorder.
Journal entries Setzer wrote around this time reveal the unhealthy patterns of his connection to Dany: “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”
Once he began sharing his suicidal thoughts, Character.AI’s chatbots did nothing. The characters are not real, of course, but technology this advanced needs real safety features. Since this tragedy, Character.AI has added a self-harm resource to its platform and plans to add more safeguards. That should have been done when the platform launched, not after a suicide had already taken place or a lawsuit had been filed.
Although there is no single cause to blame for Setzer’s death, he was displaying a severe addiction to Character.AI and openly expressing suicidal thoughts; those warning signs should have been handled better.
Character.AI is not solely to blame for his suicide. As with any addiction, the problem does not begin and end with the substance, but with the mindset of the person who abuses it. Still, the platform should be held responsible for not having stronger safety nets. Setzer’s parents are not the sole cause of this devastation either, but more could have been done to address his apparent addiction. There was not just one cause of this tragedy; there never is. Mental health is complex, and as comforting as it would be to find a single thing to blame, it does not work that way.