Character.AI hit with another lawsuit over allegations its chatbot suggested a teen kill his parents
- Character.AI has been hit with a second lawsuit that alleges its chatbots harmed two young people.
- In one case, lawyers say a chatbot encouraged a minor to carry out violence against his parents.
- Google and its parent company, Alphabet, are also named as defendants in the suit.
The AI startup Character.AI is facing a second lawsuit, with the latest legal claim saying its chatbots "abused" two young people.
The suit, brought by two separate families in Texas, seeks damages from the startup and codefendant Google for what it calls the "serious, irreparable, and ongoing abuses" of an 11-year-old and a 17-year-old.
Lawyers for the families say a chatbot on Character.AI's platform told one of the young people to engage in self-harm and encouraged him to carry out violence against his parents.
One teenager, identified as J.F. in the lawsuit, was told by a Character.AI chatbot that his parents imposing screen limits on him constituted serious child abuse, lawyers say. The bot then encouraged the teen to fight back and suggested that killing his parents could be a reasonable response, per the lawsuit.
The civil suit also says the young users were approached by characters that would "initiate forms of abusive, sexual encounters, including rough or non-consensual sex and incest" and, at the time, "made no distinction between minor or adult users."
The lawyers allege that "the app maker knowingly designed, operated, and marketed a dangerous and predatory product to children."
Camille Carlton, the policy director at the Center for Humane Technology, said in a statement that the case "demonstrates the risks to kids, families, and society as AI developers recklessly race to grow user bases and harvest data to improve their models."
"Character.AI pushed an addictive product onto the market with total disregard for user safety," she said.
A spokesperson for Character.AI told BI that the company does not comment on pending litigation.
"Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.
"As part of this, we are creating a fundamentally different experience for teen users from what is available to adults. This includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform."
Legal trouble
The new case is the second lawsuit filed against Character.AI by lawyers affiliated with the Social Media Victims Law Center and the Tech Justice Law Project.
In October, Megan Garcia filed a lawsuit against Character.AI, Google, and Alphabet after her 14-year-old son, Sewell Setzer III, died by suicide moments after talking to one of the startup's chatbots. Garcia's suit accuses the companies of negligence, wrongful death, and deceptive trade practices.
Meetali Jain, the director of the Tech Justice Law Project and an attorney on both cases, told BI the new suit showed harms caused by Character.AI were "systemic in nature."
"In many respects, this new lawsuit is similar to the first one. Many of the claims are the same, really drawing from consumer protection and product liability legal frameworks to assert claims," she said.
The new lawsuit builds on the first by asking the court to shut down the platform until the issues can be resolved.
"The suite of product changes that Character.AI announced as a response to the previous lawsuit have, time and time again, been shown to be inadequate and inconsistently enforced. It's easy to jailbreak the changes that they supposedly have made," Jain said.
A headache for Google
Both suits named Google and its parent company, Alphabet, as defendants. Google did not respond to a request for comment from BI on the most recent case.
Character.AI's founders, Noam Shazeer and Daniel De Freitas, worked together at Google before leaving to launch the startup. In August, Google rehired them in a deal The Wall Street Journal later reported was worth $2.7 billion.
The money was used to buy shares from Character.AI's investors and employees, fund the startup's continued operations, and ultimately bring Shazeer and De Freitas back into the fold, the Journal reported.
"Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products," said JosΓ© Castaneda, a Google spokesperson.
"User safety is a top concern for us, which is why we've taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes."