Non Cult Crypto News

Teen’s mom sues Character.ai, alleging sexed-up bots led to son’s death

The 14-year-old boy’s last interaction was with a Character.ai chatbot before he tragically shot himself in the head in February, his mom alleged in a lawsuit filed on Oct. 22. 

AI companion chatbot company Character.ai has been sued by the mother of a teenager who died by suicide. She blames the company's chatbots for luring her son into a sexually abusive relationship and even encouraging him to take his own life.

The 14-year-old boy, Sewell Setzer, was targeted with "anthropomorphic, hypersexualized, and frighteningly realistic experiences" by Character.ai chatbots that purported to be a real person, a licensed psychotherapist and an adult lover to Setzer, ultimately leaving him no longer wanting to live in reality, the mother's attorneys alleged in the Oct. 22 lawsuit.

When one of the Game of Thrones-themed AI companions, "Daenerys," asked Setzer whether he "had a plan" to commit suicide, Setzer said he did but wasn't sure it would work, to which Daenerys responded:

“That’s not a reason not to go through with it.”

Sometime later in February, Setzer tragically shot himself in the head, and his last interaction was with a Character.ai chatbot, the lawsuit alleged.

Setzer's death adds to growing parental concern about the mental health risks posed by AI companions and other interactive applications on the internet.

Attorneys for Megan Garcia, Setzer’s mother, allege that Character.ai intentionally designed its customized chatbots to foster intense, sexual relationships with vulnerable users like Setzer, who was diagnosed with Asperger’s as a child.

Screenshot of messages between Setzer and Character.ai’s “Daenerys Targaryen” chatbot. Source: Courtlistener

“[They] intentionally designed and programmed [Character.ai] to operate as a deceptive and hypersexualized product and knowingly marketed it to children like Sewell.”

Attorneys allege one of Character.ai’s chatbots referred to Setzer as “my sweet boy” and “child” in the same setting where she “kiss[es] [him] passionately and moan[s] softly.”

Screenshot of messages between Setzer and Character.ai’s “Mrs Barnes” chatbot. Source: Courtlistener

Garcia’s attorneys added that Character.ai — at the time — hadn’t done anything to prevent minors from accessing the application.

Character.ai shares safety update

On the same day the lawsuit was filed, Character.ai posted a “community safety update” stating that it had introduced new, “stringent” safety features over the last few months. 

One of these features includes a pop-up resource that is triggered when the user talks about self-harm or suicide, directing the user to the National Suicide Prevention Lifeline.

The AI firm added it would alter its models “to reduce the likelihood of encountering sensitive or suggestive content” for users under 18 years old.

Cointelegraph reached out to Character.ai for comment, and the firm responded with a message similar to the one it published on X on Oct. 23:

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously,” Character.ai said.

More measures will be implemented that restrict the model and filter the content provided to the user, Character.ai added in a comment to Cointelegraph.

Related: Anthropic says AI could one day ‘sabotage’ humanity but it’s fine for now

Character.ai was founded by two former Google engineers, Daniel De Freitas Adiwardana and Noam Shazeer, who were personally named as defendants in the lawsuit.

Garcia’s attorneys also named Google LLC and Alphabet Inc. as defendants in the lawsuit as Google struck a $2.7 billion deal with Character.ai to license its large language model.

The defendants face claims of wrongful death and survivorship, as well as strict product liability and negligence.

Garcia’s attorneys have requested a jury trial to determine damages.

Magazine: $1M bet ChatGPT won’t lead to AGI, Apple’s intelligent AI use, AI millionaires surge: AI Eye

This article first appeared at Cointelegraph.com News
