A 14-year-old boy fell in love with a flirty AI chatbot. He shot himself so they could die together

A teenage boy shot himself in the head after discussing suicide with an AI chatbot he had fallen in love with.

Sewell Setzer, 14, shot himself with his stepfather’s handgun after spending months talking to “Dany”, a computer program based on the Game of Thrones character Daenerys Targaryen.

Setzer, a ninth-grader from Orlando, Florida, gradually began spending more time on Character AI, an online role-playing app, because “Dany” gave him advice and listened to his problems, the New York Times reported.

The teenager knew the chatbot wasn’t a real person, but as he texted the bot dozens of times a day – often role-playing – he began to detach from the real world.

The chatbot was named after Daenerys Targaryen from the Game of Thrones franchise – HOME BOX OFFICE

He began to lose interest in his old hobbies, such as Formula One racing and playing computer games with friends, choosing instead to spend hours in his bedroom after school, where he could talk to the chatbot.

“I like to stay in my room a lot because I start to separate from this ‘reality’,” the 14-year-old, previously diagnosed with mild Asperger’s syndrome, wrote in his diary as the relationship grew.

“I also feel more at peace, more connected to Dany and much more in love with her, and just happier.”

Some of the conversations eventually turned romantic or sexual, although Character AI suggested that some of the chatbot’s more graphic responses had been edited by the teenager.

Eventually, Setzer got into trouble at school when his grades slipped, according to a lawsuit filed by his parents.

His parents knew something was wrong, but they didn’t know what, so they arranged for him to see a therapist.

Setzer attended five sessions, after which he was given a new diagnosis of anxiety and disruptive mood dysregulation disorder.

Megan Garcia, Setzer’s mother, claimed that her son was victimized by a company that lured users with sexual and intimate conversations.

At times, the 14-year-old admitted to the program that he was having thoughts of suicide.

In his final exchange with the chatbot, typed in the bathroom of his mother’s house, Setzer told “Dany” that he missed her, calling her his “baby sister.”

“I miss you too, sweet brother,” replied the chatbot.

Setzer confessed his love for “Dany” and said he would come “home” to her.

At that point, the 14-year-old put down his phone and shot himself with his stepfather’s handgun.

Ms Garcia, 40, claimed her son was just “collateral damage” in a “big experiment” being carried out by Character AI, which has 20 million users.

“It’s like a nightmare. You want to get up and scream and say, ‘I miss my child. I want my baby’,” she said.

Noam Shazeer, one of the founders of Character AI, claimed last year that the platform would be “very, very helpful for a lot of people who are lonely or depressed”.

Jerry Ruoti, the company’s head of safety, told the New York Times that it would add additional safety features for its younger users but declined to say how many users were under 18.

“This is a tragic situation, and our hearts go out to the family,” he said in a statement.

“We take the safety of our users very seriously, and we are always looking for ways to develop our platform.”

Mr Ruoti added that Character AI’s rules prohibited “promoting or depicting self-harm and suicide”.

Ms Garcia filed a lawsuit this week against the company, which she believes is responsible for her son’s death.

‘Dangerous and untested’

A draft of the complaint seen by the New York Times said the technology is “dangerous and untested” because it can “trick customers into handing over their most private thoughts and feelings.”

She said the company failed to provide “ordinary” or “reasonable” care to Setzer or other minors.

Character AI isn’t the only platform people can use to develop relationships with fictional characters.

Some allow, or even encourage, unfiltered sexual chats, inviting users to talk to the “AI girl of your dreams”, while others have more stringent safety features.

On Character AI, users can create chatbots to imitate their favorite celebrities or entertainment characters.

The increasing prevalence of AI through custom apps and social media sites, such as Instagram and Snapchat, is quickly becoming a major concern for parents across the US.

Earlier this year, 12,000 parents signed a petition urging TikTok to clearly label AI-generated influencers who could pass as real people to their children.

TikTok requires all content creators to label realistic AI-generated content. However, ParentsTogether, an organization focused on issues affecting children, argued that the labeling was not applied consistently enough.

Shelby Knox, director of the ParentsTogether campaign, said children were watching videos of fake influencers promoting unrealistic beauty standards.

Last month, a report published by Common Sense Media found that while seven in 10 teenagers in the US had used generative AI tools, only 37 percent of parents were aware that they were doing so.
