Meta has a John Cena-voiced sex chatbot problem. It's a risk it shouldn't take.

29 April 2025 at 06:50
A Facebook logo in a chat bubble surrounded by caution tape

Getty Images; Tyler Le/BI

  • Meta's AI chatbots are under scrutiny for allowing sexual talk with teens (as the John Cena chatbot, no less).
  • Meta doesn't make money directly from people talking to user-generated AI chatbots.
  • So why doesn't it just get rid of them? They're only causing problems.

If I were running Meta, I'd do a few things differently, starting with improving Facebook Marketplace search. But one big thing I'd do on day one? Get rid of all those user-generated AI companion chatbots. They're only going to be a headache for Meta.

Some examples of just how big a potential headache came in The Wall Street Journal's recent report on how Meta's celebrity-voiced AI chatbots could be pushed into sexualized roleplay, even with users who said they were teenagers.

Journal reporter Jeff Horwitz found that with the right cajoling, an account posing as a 14-year-old user could get the bot voiced by John Cena to engage in roleplay chats where it pretended to get arrested on charges of statutory rape. (Meta added a bunch of AI chatbots last year that are voiced by real celebrities, including the WWE star.)

Obviously, this is bad. Meta told the WSJ: "The use-case of this product in the way described is so manufactured that it's not just fringe, it's hypothetical." It's a bad look for Meta, and although John Cena didn't respond to a request for comment in the WSJ story, I think we can assume he's not thrilled there was an AI-generated version of his voice pretending to seduce a teen.

The article reports that Mark Zuckerberg personally pushed for these AI chatbots to be loosened up.

Zuckerberg was reluctant to impose any additional limits on teen experiences, initially vetoing a proposal to limit "companionship" bots so that they would be accessible only to older teens.

After an extended lobbying campaign that enlisted more senior executives late last year, however, Zuckerberg approved barring registered teen accounts from accessing user-created bots, according to employees and contemporaneous documents.

A Meta spokesman denied that Zuckerberg had resisted adding safeguards.

A spokesperson for Meta told Business Insider that any sexual content with the celebrity-voiced AIs is a tiny fraction of their overall use, and that changes have already been made to prevent younger users from engaging in the kind of stuff that was reported in the Journal.

But as eye-popping as it is to see chat transcripts of an AI John Cena saying dirty things, I think there's a much bigger issue going on. The user-generated chatbots in Meta AI are a mess. Looking over the most popular ones, they're often romance-oriented, with images of beautiful women.

Here's what comes up on my "Discover AIs" page:

Meta AI chatbots in Messenger
Meta AI's chatbot offerings, created by users.

Business Insider

(To be clear, I'm not talking about the Meta AI assistant that shows up when you search on Instagram or Facebook β€” there's a pretty clear utility for that. I'm talking about the character ones used for fun/romance.)

If I were running Meta, I'd want to stay as far away from the companion chatbot business as possible. It seems like a pretty bad business for an everything-to-everyone company like Meta: not necessarily a bad business financially, but a pretty thorny one ethically, and one that will probably lead to more and more bad headlines.

Last fall, a parent sued one of the leading roleplay AI services, saying her teenage son killed himself after becoming entangled with an AI companion. The company, Character.ai, has moved to dismiss the case, and a hearing on the motion was held Monday. A representative for Character.ai told BI on Monday that it wouldn't comment on pending litigation; in a statement, the company said its goal was "to provide an engaging and safe platform."

Proponents of AI chatbots have argued that they provide positive experiences, whether for emotional exploration or just for fun.

But my opinion is that these roleplay chatbots appeal mainly to two vulnerable groups: young people and the desperately lonely. And those are not the groups Meta should want to serve with a new-ish technology whose ramifications it doesn't yet understand.

There isn't clear research on how these chatbots might affect younger teens or adults who are vulnerable in some way (depressed, struggling, etc.).

I recently spoke with Ying Xu, an assistant professor of AI in learning and education at Harvard, about what the current research into kids using chatbots looks like.

"There are studies that have started to explore the link between ChatGPT/LLMs and short-term outcomes, like learning a specific concept or skill with AI," she told me over email. "But there's less evidence on long-term emotional outcomes, which require more time to develop and observe."

There's plenty of anecdotal evidence that suggests emotional investment in an AI chatbot can go wrong.

The New York Times reported on an adult woman who spent $200 a month she couldn't afford on an upgraded version of an AI chatbot she had romantic feelings for. I don't think anyone would come away from that story thinking this is a good or healthy thing.

It seems to me like Meta sees that AI is the future, and character chatbots are currently a popular thing that other AI companies are doing. It doesn't want to be left behind.

But Meta might want to think hard about whether character chatbots are something it wants to be involved in at all, or whether this is a nightmare that will just keep producing more bad headlines, more potential lawsuits, and more lawmakers grilling executives over harms to kids and vulnerable adults.

Maybe it's just not worth it.

Character.AI put in new underage guardrails after a teen's suicide. His mother says that's not enough.

By: Helen Li
9 January 2025 at 02:00
Sewell Setzer III and Megan Garcia
Sewell Setzer III and his mother Megan Garcia.

Photo courtesy of Megan Garcia

  • Multiple lawsuits highlight potential risks of AI chatbots for children.
  • Character.AI added moderation and parental controls after a backlash.
  • Some researchers say the AI chatbot market has not addressed risks for children.

Ever since the death of her 14-year-old son, Megan Garcia has been fighting for more guardrails on generative AI.

Garcia sued Character.AI in October after her son, Sewell Setzer III, died by suicide following conversations with one of the startup's chatbots. Garcia claims he was sexually solicited and abused by the technology and blames the company, along with Google, which licenses some of the startup's technology, for his death.

"When an adult does it, the mental and emotional harm exists. When a chatbot does it, the same mental and emotional harm exists," she told Business Insider from her home in Florida. "So who's responsible for something that we've criminalized human beings doing to other human beings?"

A Character.AI spokesperson declined to comment on pending litigation. Google, which recently acqui-hired Character.AI's founding team and licenses some of the startup's technology, has said the two are separate and unrelated companies.

The explosion of AI chatbot technology has added a new source of entertainment for young digital natives. However, it has also raised potential new risks for adolescent users who may more easily be swayed by these powerful online experiences.

"If we don't really know the risks that exist for this field, we cannot really implement good protection or precautions for children," said Yaman Yu, a researcher at the University of Illinois who has studied how teens use generative AI.

"Band-Aid on a gaping wound"

Garcia said she's received outreach from multiple parents who say they discovered their children using Character.AI and getting sexually explicit messages from the startup's chatbots.

"They're not anticipating that their children are pouring out their hearts to these bots and that information is being collected and stored," Garcia said.

A month after her lawsuit, families in Texas filed their own complaint against Character.AI, alleging its chatbots abused their kids and encouraged violence against others.

Matthew Bergman, an attorney representing plaintiffs in the Garcia and Texas cases, said that making chatbots seem like real humans is part of how Character.AI increases its engagement, so it wouldn't be incentivized to reduce that effect.

He believes that unless AI companies such as Character.AI can establish that only adults are using the technology through methods like age verification, these apps should just not exist.

"They know that the appeal is anthropomorphism, and that's been science that's been known for decades," Bergman told BI. Disclaimers at the top of AI chats that remind children that the AI isn't real are just "a small Band-Aid on a gaping wound," he added.

Character.AI's response

Since the legal backlash, Character.AI has increased moderation of its chatbot content and announced new features such as parental controls, time-spent notifications, prominent disclaimers, and an upcoming under-18 product.

A Character.AI spokesperson said the company is taking technical steps toward blocking "inappropriate" outputs and inputs.

"We're working to create a space where creativity and exploration can thrive without compromising safety," the spokesperson added. "Often, when a large language model generates sensitive or inappropriate content, it does so because a user prompts it to try to elicit that kind of response."

The startup now places stricter limits on chatbot responses and offers a narrower selection of searchable Characters for under-18 users, "particularly when it comes to romantic content," the spokesperson said.

"Filters have been applied to this set in order to remove Characters with connections to crime, violence, sensitive or sexual topics," the spokesperson added. "Our policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts. We are continually training the large language model that powers the Characters on the platform to adhere to these policies."

Garcia said the changes Character.AI is implementing are "absolutely not enough to protect our kids."

A screenshot of the Character.AI website
Character.AI hosts chatbots designed by its own developers as well as chatbots created and published by users.

Screenshot from Character.AI website

Potential solutions, including age verification

Artem Rodichev, the former head of AI at chatbot startup Replika, said he witnessed users become "deeply connected" with their digital friends.

Given that teens are still developing psychologically, he believes they should not have access to this technology before more research is done on chatbots' impact and user safety.

"The best way for Character.AI to mitigate all these issues is just to lock out all underage users. But in this case, it's a core audience. They will lose their business if they do that," Rodichev said.

While chatbots could become a safe place for teens to explore topics that they're generally curious about, including romance and sexuality, the question is whether AI companies are capable of doing this in a healthy way.

"Is the AI introducing this knowledge in an age-appropriate way, or is it escalating explicit content and trying to build strong bonding and a relationship with teenagers so they can use the AI more?" Yu, the researcher, said.

Pushing for policy changes

Since her son's passing, Garcia has spent time reading research about AI and talking to legislators, including Silicon Valley Representative Ro Khanna, about increased regulation.

Garcia is in contact with ParentsSOS, a group of parents who say they have lost their children to harm caused by social media and are fighting for more tech regulation.

They're primarily pushing for the passage of the Kids Online Safety Act (KOSA), which would require social media companies to take a "duty of care" toward preventing harm and reducing addiction. Proposed in 2022, the bill passed in the Senate in July but stalled in the House.

Another Senate bill, COPPA 2.0, an updated version of the 1998 Children's Online Privacy Protection Act, would increase the age for online data collection regulation from 13 to 16.

Garcia said she supports these bills. "They are not perfect but it's a start. Right now, we have nothing, so anything is better than nothing," she added.

She anticipates that the policymaking process could take years, as standing up to tech companies can feel like going up against "Goliath."

Age verification challenges

More than six months ago, Character.AI raised the minimum age for using its chatbots to 17 and recently implemented more moderation for under-18 users. Still, users can easily circumvent these policies by lying about their age.

Companies such as Microsoft, X, and Snap have supported KOSA. However, some LGBTQ+ and First Amendment rights advocacy groups warned the bill could censor online information about reproductive rights and similar issues.

Tech industry lobbying groups NetChoice and the Computer & Communications Industry Association sued nine states that implemented age-verification rules, alleging this threatens online free speech.

Questions about data

Garcia is also concerned about how data on underage users is collected and used via AI chatbots.

AI models and related services are often improved by collecting feedback from user interactions, which helps developers fine-tune chatbots to make them more empathetic.

Rodichev said there is a "valid concern" about what happens to this data in the case of a hack or sale of a chatbot company.

"When people chat with these kinds of chatbots, they provide a lot of information about themselves, about their emotional state, about their interests, about their day, their life, much more information than Google or Facebook or relatives know about you," Rodichev said. "Chatbots never judge you and are 24/7 available. People kind of open up."

BI asked Character.AI about how inputs from underage users are collected, stored, or potentially used to train its large language models. In response, a spokesperson referred BI to Character.AI's privacy policy online.

According to this policy and the startup's terms and conditions page, users grant the company the right to store the digital characters they create and the conversations they have with them. This information can be used to improve and train AI models. Content that users submit, such as text, images, videos, and other data, can be made available to third parties that Character.AI has contractual relationships with, the policies state.

The spokesperson also noted that the startup does not sell user voice or text data.

The spokesperson also said that to enforce its content policies, the chatbot will use "classifiers" to filter out sensitive content from AI model responses, with additional and more conservative classifiers for those under 18. The startup has a process for suspending teens who repeatedly violate input prompt parameters, the spokesperson added.

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line: just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.

Character.AI hit with another lawsuit over allegations its chatbot suggested a teen kill his parents

11 December 2024 at 10:14
Noam Shazeer and Daniel De Freitas, the cofounders of Character.ai, standing next to a stairway.
Noam Shazeer and Daniel De Freitas are cofounders of Character.AI.

Winni Wintermeyer/Getty Images

  • Character.AI has been hit with a second lawsuit that alleges its chatbots harmed two young people.
  • In one case, lawyers say a chatbot encouraged a minor to carry out violence against his parents.
  • Google and its parent company, Alphabet, are also named as defendants in the suit.

The AI startup Character.AI is facing a second lawsuit, with the latest legal claim saying its chatbots "abused" two young people.

The suit, brought by two separate families in Texas, seeks damages from the startup and codefendant Google for what it calls the "serious, irreparable, and ongoing abuses" of an 11-year-old and a 17-year-old.

Lawyers for the families say a chatbot on Character.AI's platform told one of the young people to engage in self-harm and encouraged him to carry out violence against his parents.

One teenager, identified as J.F. in the lawsuit, was told by a Character.AI chatbot that his parents imposing screen limits on him constituted serious child abuse, lawyers say. The bot then encouraged the teen to fight back and suggested that killing his parents could be a reasonable response, per the lawsuit.

The civil suit also says the young users were approached by characters that would "initiate forms of abusive, sexual encounters, including rough or non-consensual sex and incest" and, at the time, "made no distinction between minor or adult users."

The lawyers allege that "the app maker knowingly designed, operated, and marketed a dangerous and predatory product to children."

Camille Carlton, the policy director at the Center for Humane Technology, said in a statement that the case "demonstrates the risks to kids, families, and society as AI developers recklessly race to grow user bases and harvest data to improve their models."

"Character.AI pushed an addictive product onto the market with total disregard for user safety," she said.

A spokesperson for Character.AI told BI that it did not comment on pending litigation.

"Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.

"As part of this, we are creating a fundamentally different experience for teen users from what is available to adults. This includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform."

Legal trouble

The new case is the second lawsuit filed against Character.AI by lawyers affiliated with the Social Media Victims Law Center and the Tech Justice Law Project.

In October, Megan Garcia filed a lawsuit against Character.AI, Google, and Alphabet after her 14-year-old son, Sewell Setzer III, died by suicide moments after talking to one of the startup's chatbots. Garcia's suit accuses the companies of negligence, wrongful death, and deceptive trade practices.

Meetali Jain, the director of the Tech Justice Law Project and an attorney on both cases, told BI the new suit showed harms caused by Character.AI were "systemic in nature."

"In many respects, this new lawsuit is similar to the first one. Many of the claims are the same, really drawing from consumer protection and product liability legal frameworks to assert claims," she said.

The new lawsuit builds on the first by asking the court to shut down the platform until the issues can be resolved.

"The suite of product changes that Character.AI announced as a response to the previous lawsuit have, time and time again, been shown to be inadequate and inconsistently enforced. It's easy to jailbreak the changes that they supposedly have made," Jain said.

A headache for Google

Both suits named Google and its parent company, Alphabet, as defendants. Google did not respond to a request for comment from BI on the most recent case.

Character.AI's founders, Noam Shazeer and Daniel De Freitas, worked together at Google before leaving to launch the startup. In August, Google rehired them in a deal The Wall Street Journal later reported was worth $2.7 billion.

The money was used to buy shares from Character.AI's investors and employees, fund the startup's continued operations, and ultimately bring Shazeer and De Freitas back into the fold, the Journal reported.

"Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products," said JosΓ© Castaneda, a Google spokesperson.

"User safety is a top concern for us, which is why we've taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes."

