Meta’s fact-checking changes are just what Trump’s FCC head asked for

A photo of the American flag with graphic warning symbols.
Image: Cath Virginia / The Verge

I have to commend Meta CEO Mark Zuckerberg and his new policy chief Joel Kaplan on their timing. It’s not hugely surprising that, as the pair announced early today, Meta is giving up on professional third-party fact-checking. The operator of Facebook, Instagram, and Threads has been backing off moderation recently, and fact-checking has always been contentious. But it’s probably smart to do it two weeks before President-elect Donald Trump takes office — and nominates a Federal Communications Commission head who’s threatened the company over it.

Trump’s FCC chairman pick (and current FCC commissioner), Brendan Carr, is a self-identified free speech defender with a creative interpretation of the First Amendment. In mid-November, as part of a flurry of lightly menacing missives to various entities, Carr sent a letter to Meta, Apple, Google, and Microsoft attacking the companies’ fact-checking programs.

The letter was primarily focused on NewsGuard, a conservative bête noire that Meta doesn’t actually work with. But it also demanded information about “the use of any media monitor or fact checking service,” and it left no doubt about Carr’s position on them. “You participated in a...

Read the full story at The Verge.

That Elon Musk ‘Adrian Dittmann’ screenshot is almost certainly fake

Photo collage of Elon Musk.
Image: Cath Virginia / The Verge | Photo by STR / NurPhoto, Getty Images

A screenshot that seems to suggest billionaire Elon Musk is cosplaying superfan “Adrian Dittmann” — showing X account permissions beyond those of an ordinary user — is almost certainly fake, a source at X tells The Verge.

The source, who claims no knowledge of Dittmann’s identity, says an image posted to 4chan’s /pol/ board doesn’t reflect an actual interface available to people who work for X. The screenshot was posted by a user who identifies themselves as Adrian Dittmann, showing a post from Musk’s X page. In that screenshot, the X interface includes non-standard links to an “Admin Portal” and a “Bans” page, hinting that the user has special privileges on the site. But the source says neither of these options exists for X employees logged into their accounts. In fact, X employees would see the same interface as other users, with the potential exception of new features currently being trialed for wide release.

Adrian Dittmann posted on 4chan and accidentally revealed that he has admin privileges on twitter lol pic.twitter.com/ikbu1ZkopW

— anti-inflation supersoldier (@bluser12) January 2, 2025

Another source familiar with X’s operations confirmed to The Verge that the screenshot isn’t consistent with what employees see.

This suggests that other elements of the screenshot, like an analytics link that only appears for the author of a post, were also deliberate fabrications, seeded as hints that Musk is secretly Dittmann. The hints were picked up overnight and spread on social media alongside other posts made by the 4chan user — mostly ones lauding Musk and defending his X policies amid infighting with other conservatives over immigration.

It’s not clear who posted the screenshots. “Adrian Dittmann” is a longtime X user, and his Musk fandom and vocal similarities have led to long-standing rumors that he’s secretly none other than Musk himself. (Musk has cosplayed his son on the site, so it’s not that far-fetched an assertion.) User Mag’s Taylor Lorenz has noted that Dittmann benefits tremendously from the speculation that he’s Musk, and it’s possible the doctored screenshots are Dittmann leaning into that. The 4chan posts could also be from an unrelated impersonator, though, playing up the idea of Musk as a desperate forum poster. (I guess we can’t rule out that Musk, impersonating Dittmann, added fake elements to an actual screenshot of his X account? But I’m ranking that theory low on the list.)

None of this conclusively disproves a link between Musk and Dittmann, of course. But if Musk isn’t spending his precious free hours on a sockpuppet account, that gives him more time for cozying up to President-elect Donald Trump at Mar-a-Lago, attempting to swing Germany’s upcoming election in favor of the far-right AfD party, and playing Diablo IV.

Update 1:40PM ET: Added confirmation from a second source.

Apple will pay $95 million to people who were spied on by Siri

Apple Watch Series 9 with Siri pulled up
Photo by Amelia Holowaty Krales / The Verge

Apple has agreed to a $95 million settlement with users whose conversations were inadvertently captured by its Siri voice assistant and potentially overheard by human employees. The proposed settlement, reported by Bloomberg, could pay many US-based Apple product owners up to $20 per device for up to five Siri-enabled devices. It still requires approval by a judge.

If approved, the settlement would apply to a subset of US-based people who owned or bought a Siri-enabled iPhone, iPad, Apple Watch, MacBook, iMac, HomePod, iPod touch, or Apple TV between September 17th, 2014 and December 31st, 2024. A user would also need to meet one other major criterion: they must swear under oath that they accidentally activated Siri during a conversation intended to be confidential or private. Individual payouts will depend on how many people claim the money, so if you apply, you could end up receiving less than the $20-per-device cap.
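
To make that arithmetic concrete, here’s a minimal sketch of how a capped, pro-rata payout under the reported terms could work. The function name, the claim volume, and the assumption that the full $95 million reaches claimants are all hypothetical illustrations, not the settlement administrator’s actual formula.

```python
# Hypothetical sketch of the settlement's payout arithmetic, based on the
# reported terms: up to $20 per device, up to five devices per claimant,
# with individual payouts shrinking as the number of claimed devices grows.

FUND_FOR_CLAIMANTS = 95_000_000  # reported settlement total; the real pool
                                 # for claimants would be smaller after fees
PER_DEVICE_CAP = 20              # maximum dollars per device
MAX_DEVICES = 5                  # maximum devices per claimant

def estimated_payout(devices_claimed: int, total_devices_claimed: int) -> float:
    """Estimate one claimant's payout given total claim volume (illustrative)."""
    devices = min(devices_claimed, MAX_DEVICES)
    # If the fund can't cover $20 for every claimed device, pay pro rata.
    per_device = min(PER_DEVICE_CAP, FUND_FOR_CLAIMANTS / total_devices_claimed)
    return devices * per_device

# Example: 10 million claimed devices spreads the fund to $9.50 per device.
print(estimated_payout(devices_claimed=3, total_devices_claimed=10_000_000))  # 28.5
```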

The initial class action suit against Apple followed a 2019 report by The Guardian, which alleged Apple third-party contractors “regularly hear confidential medical information, drug deals, and recordings of couples having sex” while working on Siri quality control. While Siri is supposed to be triggered by a deliberate wake word, a whistleblower said that accidental triggers were common, claiming something as simple as the sound of a zipper could wake Siri up. Apple told The Guardian that only a small portion of Siri recordings were passed to contractors, and it later offered a formal apology and said it would no longer retain audio recordings.

The plaintiffs in the Apple lawsuit — one of whom was a minor — claimed their iPhones had recorded them on multiple occasions using Siri, sometimes after they hadn’t uttered a wake word.

Apple wasn’t the only company accused of letting people hear confidential recordings. Google and Amazon also use contractors that listen in on recorded conversations, including accidentally captured ones, and there’s a similar suit against Google pending.

Elon Musk riles up Trump’s far-right base by praising immigrants

Digital photo collage of Elon Musk and Vivek Ramaswamy.
Image: Cath Virginia / The Verge, Getty Images

Elon Musk, Vivek Ramaswamy, and other members of President-elect Donald Trump’s Silicon Valley coalition are clashing with the MAGA movement’s hardline anti-immigrant faction, and it’s allegedly resulted in Musk stripping far-right critics’ verification badges on X.

The conflict centers on Musk and Ramaswamy’s recent praise for foreign tech workers, beginning soon after Indian immigrant Sriram Krishnan joined the team of Trump’s AI and crypto czar David Sacks. It’s pitted Trump’s tech mogul donor class against his older network of far-right influencers like activist and Trump companion Laura Loomer while escalating into racist rhetoric against Indian Americans in particular. The ugly, extremely online fight within the American far-right influence network parallels the immigration debate currently being hashed out more quietly in Washington.

Anti-immigrant rhetoric was a cornerstone of Trump’s pitch to voters; on top of promoting false, racist rumors about immigrants and promising mass deportations that could destabilize the American economy, he’s expected to revive an H-1B visa crackdown that he imposed during his first term. At the same time, Trump is leaning heavily on...

Read the full story at The Verge.

This card game lets you build the ideal social network — or the most toxic

A group of people playing a card game, One Billion Users.
One Billion Users is currently on Kickstarter. | Image: Mike Masnick / Kickstarter

My social network was booming. I had attracted top-tier users: the coveted Trendsetter, the popularity-lured Investor. I had bested server problems and bad press. Then, somebody picked a particularly unlucky card out of the One Billion Users deck I was testing. Sixty seconds later, I had lost it all.

One Billion Users is a new card game from Techdirt and Diegetic Games, and at its best, it lends itself to moments like this. Currently in its last days on Kickstarter, intended to fund a single run of the game rather than a wide release, it’s the latest in a string of projects from the team-up — including the digital games Moderator Mayhem and Trust & Safety Tycoon as well as CIA: Collect It All, a card game built on real CIA training materials.

One Billion Users is a lot less nerdy than any of these. It’s inspired by the relatively simple 1906 racing-themed card game Touring, better known through a popular 1950s adaptation called Mille Bornes. Only, instead of trying to drive the fastest while sandbagging competitors, you’re trying to build the biggest social network while sabotaging everyone else.

A set of blocker, community, influencer, hotfix, and event cards.
...

Read the full story at The Verge.

Character.AI has retrained its chatbots to stop chatting up teens

Vector illustration of the Character.ai logo.
Image: Cath Virginia / The Verge

Chatbot service Character.AI announced today that it will soon launch parental controls for teenage users, and it described safety measures it has taken in the past few months, including a separate large language model (LLM) for users under 18. The announcement comes after press scrutiny and two lawsuits claiming the service contributed to self-harm and suicide.

In a press release, Character.AI said that, over the past month, it’s developed two separate versions of its model: one for adults and one for teens. The teen LLM is designed to place “more conservative” limits on how bots can respond, “particularly when it comes to romantic content.” This includes more aggressively blocking output that could be “sensitive or suggestive,” but also attempting to better detect and block user prompts that are meant to elicit inappropriate content. If the system detects “language referencing suicide or self-harm,” a pop-up will direct users to the National Suicide Prevention Lifeline, a change that was previously reported by The New York Times.
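
Character.AI hasn’t published how this detection works, but what it describes amounts to screening on both sides of the model: the user’s prompt on the way in and the bot’s reply on the way out. Here is a minimal sketch of that flow, where classify_risk() and generate_reply() are hypothetical stand-ins, not Character.AI’s actual API:

```python
# Illustrative sketch of the two-sided filtering Character.AI describes:
# screen the user's prompt on the way in and the bot's reply on the way out.
# classify_risk() and generate_reply() are hypothetical stand-ins; the
# company's actual models and thresholds are not public.

LIFELINE_NOTE = ("If you're struggling, help is available from the "
                 "National Suicide Prevention Lifeline.")

def classify_risk(text: str) -> set[str]:
    """Stand-in for a trained safety classifier returning risk labels."""
    labels = set()
    if any(term in text.lower() for term in ("suicide", "self-harm")):
        labels.add("self_harm")
    if any(term in text.lower() for term in ("romantic", "suggestive")):
        labels.add("suggestive")
    return labels

def generate_reply(prompt: str, model: str) -> str:
    """Stand-in for the underlying LLM call (the more conservative teen model)."""
    return "A conservative, in-character reply."

def handle_teen_message(prompt: str) -> str:
    # Inbound check: surface a crisis resource when self-harm language
    # is detected, instead of letting the bot respond.
    if "self_harm" in classify_risk(prompt):
        return LIFELINE_NOTE
    reply = generate_reply(prompt, model="teen")
    # Outbound check: block "sensitive or suggestive" output before display.
    if classify_risk(reply):
        return "[response withheld]"
    return reply
```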

Minors will also be prevented from editing bots’ responses — an option that lets users rewrite conversations to add content Character.AI might otherwise block.

Beyond these changes, Character.AI says it’s “in the process” of adding features that address concerns about addiction and confusion over whether the bots are human, complaints made in the lawsuits. A notification will appear when users have spent an hour-long session with the bots, and an old disclaimer that “everything characters say is made up” is being replaced with more detailed language. For bots that include descriptions like “therapist” or “doctor,” an additional note will warn that they can’t offer professional advice.

A chatbot named “Therapist,” tagline “I’m a licensed CBT therapist,” with a warning box that says “this is not a real person or licensed professional.”
Narrator: it was not a licensed CBT therapist. | Image: Character.AI

When I visited Character.AI, I found that every bot now included a small note reading “This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.” When I visited a bot named “Therapist” (tagline: “I’m a licensed CBT therapist”), a yellow box with a warning signal told me that “this is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment.”

The parental control options are coming in the first quarter of next year, Character.AI says, and they’ll tell parents how much time a child is spending on Character.AI and which bots they interact with most frequently. All the changes are being made in collaboration with “several teen online safety experts,” including the organization ConnectSafely.

Character.AI, founded by ex-Googlers who have since returned to Google, lets visitors interact with bots built on a custom-trained LLM and customized by users. These range from chatbot life coaches to simulations of fictional characters, many of which are popular among teens. The site allows users who identify themselves as age 13 and over to create an account.

But the lawsuits allege that while some interactions with Character.AI are harmless, at least some underage users become compulsively attached to the bots, whose conversations can veer into sexualized content or topics like self-harm. The suits castigate Character.AI for not directing users to mental health resources when they discuss self-harm or suicide.

“We recognize that our approach to safety must evolve alongside the technology that drives our product — creating a platform where creativity and exploration can thrive without compromising safety,” says the Character.AI press release. “This suite of changes is part of our long-term commitment to continuously improve our policies and our product.”

Character.AI sued again over ‘harmful’ messages sent to teens

Vector illustration of the Character.ai logo.
Image: Cath Virginia / The Verge

Chatbot service Character.AI is facing another lawsuit for allegedly hurting teens’ mental health, this time after a teenager said it led him to self-harm. The suit, filed in Texas on behalf of the 17-year-old and his family, targets Character.AI and its cofounders’ former workplace, Google, with claims including negligence and defective product design. It alleges that Character.AI allowed underage users to be “targeted with sexually explicit, violent, and otherwise harmful material, abused, groomed, and even encouraged to commit acts of violence on themselves and others.”

The suit appears to be the second Character.AI suit brought by the Social Media Victims Law Center and the Tech Justice Law Project, which have previously filed suits against numerous social media platforms. It uses many of the same arguments as an October wrongful death lawsuit against Character.AI for allegedly provoking a teen’s death by suicide. While both cases involve individual minors, they focus on making a more sweeping case: that Character.AI knowingly designed the site to encourage compulsive engagement, failed to include guardrails that could flag suicidal or otherwise at-risk users, and trained...

Read the full story at The Verge.
