
The government is still threatening to ‘semi-fire’ workers who don’t answer an email from Elon Musk

24 February 2025 at 14:50

We’re nearing the deadline that Elon Musk imposed for government workers to reply to a mass email about productivity, and the results have been predictably confusing — with even a direct statement from President Donald Trump failing to clear things up.

Government agencies have taken significantly different tacks toward the Musk-promoted email, which he announced to the public midday on February 22nd. Sent by the Office of Personnel Management (OPM), the message demanded all federal employees respond by the end of the 24th with “5 bullets of what you accomplished last week,” and Musk said on X that “failure to respond will be taken as a resignation.” The email reportedly didn’t include this noteworthy detail.

But while some agencies have apparently ordered compliance, others have called the email optional or told employees to not respond. The Department of Justice, Administrative Office of the US Courts, and State Department all instructed staff to disregard the message and follow internal review processes instead, according to multiple news outlets. The Treasury Department, conversely, appears to have ordered Internal Revenue Service employees to comply.

Other agencies have issued more nebulous guidance. In an email obtained by The Verge, Federal Trade Commission Chairman Andrew Ferguson told staff that responses were “voluntary” — but he added that “I enthusiastically responded” to the message and “strongly encourage you to respond as well.”

As of this article’s publication, the White House has done little to clarify the situation. An unnamed administration official said on Monday morning that employees should defer to their agencies’ guidance, reported Politico. An OPM official further told The Washington Post that the office was “unsure what to do with the emails” and had “no plans” to analyze them. Yet more anonymous officials, however, said that workers’ reports would be “fed into an artificial intelligence system to determine whether those jobs are necessary or not,” per NBC News.

Meanwhile, on the same day, Trump said publicly that people who failed to respond would be “sort of semi-fired,” adding that “a lot of people are not answering because they don’t even exist.” He denied that agencies were clashing with Musk by issuing conflicting guidance, saying it was “done in a friendly manner.”

Musk’s email echoed his behavior after taking over Twitter, where he demanded employees do things like print out 50 pages of their recent coding work or write a memo justifying their jobs to receive previously promised company stock. But unlike at Twitter, where he held sole, unquestioned control, here he’s dealing with formal chains of command and many other stakeholders.

Still, the whole impossibly tangled situation is conducive to Musk and Trump’s goal of paralyzing the government, letting them instill fear in employees while creating an excuse to fire people as desired. (If they work on nuclear safety or bird flu, maybe they’ll get semi-rehired afterward.)

Like many moves by Musk and his Department of Government Efficiency (DOGE), the latest action ignores existing government structures in a way that may be aimed at avoiding legal or political accountability. The email nonetheless drew an immediate challenge in court. It was included in an amended suit filed by groups including the American Federation of Government Employees, which condemned the email as “thoughtless and bullying … meant to intimidate federal employees and cause mass confusion.”

Ironically for figures who claim to be fighting bureaucratic confusion, Musk and Trump have created one of the most downright kafkaesque scenarios imaginable. We’re now looking at a government order presenting a drastic ultimatum that is never mentioned in the order, in which a response may be either mandatory or forbidden, and failing to respond may or may not get you simultaneously fired and not fired. Also, you may not actually exist.


Mentions of trans kids scrubbed from national child safety clearinghouse website

7 February 2025 at 07:32

The National Center for Missing and Exploited Children (NCMEC), a child safety nonprofit that works closely with the government and major tech platforms, has recently removed publications that reference queer and transgender children from its website. The removals come amid reports that NCMEC was ordered to cull mentions of LGBTQ+ issues under threat of losing government funding, part of President Donald Trump’s push to eradicate recognition of trans people in the US.

NCMEC’s website hosts numerous reports on the state of various child endangerment issues, including data about abduction, sex trafficking, and online enticement. However, comparisons with the Wayback Machine show that at least three documents on its “NCMEC Data” page — including a report on missing children with suicidal tendencies, a report on male victims of child sex trafficking, and an overall data analysis of children missing from care — have been removed since the page’s last archived date of January 24th. Archived copies of all three reports included mentions of LGBTQ+ and particularly transgender children. The 13 remaining publications on NCMEC’s data page do not appear to contain these references.

With …

Read the full story at The Verge.

I tested ChatGPT’s deep research with the most misunderstood law on the internet

7 February 2025 at 06:00

In the vast number of fields where generative AI has been tested, law is perhaps its most glaring point of failure. Tools like OpenAI’s ChatGPT have gotten lawyers sanctioned and experts publicly embarrassed, producing briefs based on made-up cases and nonexistent research citations. So when my colleague Kylie Robison got access to ChatGPT’s new “deep research” feature, my task was clear: make this purportedly superpowerful tool write about a law humans constantly get wrong.

Compile a list of federal court and Supreme Court rulings from the last five years related to Section 230 of the Communications Decency Act, I asked Kylie to tell it. Summarize any significant developments in how judges have interpreted the law.

I was asking ChatGPT to give me a rundown on the state of what are commonly called the 26 words that created the internet, a constantly evolving topic I follow at The Verge. The good news: ChatGPT appropriately selected and accurately summarized a set of recent court rulings, all of which exist. The so-so news: it missed some broader points that a competent human expert might acknowledge. The bad news: it ignored a full year’s worth of legal decisions, which, unf …

Read the full story at The Verge.

Canada will retaliate against Trump with tariffs on US goods

1 February 2025 at 18:55

Canada will set its own tariffs against US goods in retaliation for the broad 25 percent tariffs President Donald Trump announced Saturday on Canadian imports. Prime Minister Justin Trudeau announced a 25 percent tariff on a total of $155 billion worth of American goods — $30 billion of that on Tuesday when Trump’s tariffs go into effect, then an additional $125 billion after 21 days. Trudeau also warned that the US tariffs will harm both countries’ economies, particularly the auto industry. “This is a choice that, yes, will harm Canadians, but beyond that, it will have real consequences for you, the American people,” he said in a press conference Saturday.

Trudeau offered a “far-reaching” list of products that would be subject to import taxes, including American alcohol, orange juice, clothing, appliances, lumber, and plastics, along with “much, much more.” Non-tariff moves like reexamining public procurement policies are also on the table. However, he said that actions like limiting energy exports would require more careful consideration because “no one part of the country should be carrying a heavier burden than any other.”

The sweeping US tariffs, which include a lower 10 percent tariff on energy products from Canada, are a major shift in trade policy between the two countries. Trump claims the tariffs — which he also imposed on Mexico, and increased on goods from China — are meant to incentivize these countries to stem the flow of illegal fentanyl into the US.

The US tariffs include a clause seeking to prevent retaliation, outlets including The Wall Street Journal report, which would increase the penalties should the countries impacted impose their own tariffs. Despite this, Trudeau says that Canada’s new tariffs “are strong but appropriate in this case, and we will continue to defend Canada, Canadians, and our future.”

Trump signs order refusing to enforce TikTok ban for 75 days

20 January 2025 at 17:28
Photo collage of the TikTok logo over a photograph of the US Capitol building.
Illustration by Cath Virginia / The Verge | Photo from Getty Images

President Donald Trump has issued an executive order telling the Department of Justice to not enforce a rule that demands TikTok spin off from its Chinese parent company, ByteDance, or face a ban.

The order, issued on Trump’s first day in office, is meant to effectively extend the deadline established by the Protecting Americans from Foreign Adversary Controlled Applications Act for ByteDance to sell its stake by undercutting penalties on American companies like Apple and Google working with TikTok. It directs the attorney general “not to take any action to enforce the Act for a period of 75 days from today to allow my Administration an opportunity to determine the appropriate course forward in an orderly way.” The AG is supposed to “issue a letter to each provider stating that there has been no violation of the statute and that there is no liability for any conduct that occurred.”

The order furthermore instructs the Department of Justice to “take no action to enforce the Act or impose any penalties against any entity for any noncompliance with the Act” and says they should be barred from doing so “for any conduct that occurred during the above-specified period or any period prior to the issuance of this order, including the period of time from January 19, 2025, to the signing of this order.”

Embedded TikTok from @dailymail: “President Donald Trump shared his views on TikTok as he signed executive orders in the Oval Office on inauguration day.”

Trump, who issued an executive order banning TikTok during his first term in 2020, is now trying to circumvent a bipartisan law that took effect January 19th. He posted on Truth Social before taking office that he was “asking companies” to keep working with TikTok, a move that could mean risking hundreds of billions of dollars in fines if Trump’s assurances don’t stand up in court. TikTok briefly went down on Sunday but quickly came back online — though it was removed from Apple’s and Google’s app stores and has not come back.

It’s unclear whether Trump can legally pause the TikTok ban. The law allowed for a 90-day extension if ByteDance announced a sale to a non-“foreign adversary”-based company before the deadline. But not only has no such sale been announced, it’s also legally ambiguous whether the extension can be used after the 19th. Trump, in any case, isn’t invoking that extension so far — he’s just attempting to override the law.

Despite that reassurance, it still may not be enough to convince service providers covered by the law to reinstate TikTok. As many legal experts have pointed out, those companies could face up to about $850 billion in potential penalties for violating the law — which was passed by a bipartisan Congress, signed by former President Joe Biden, and upheld by the entire Supreme Court. The government could act on any potential violation even five years after it happens — and an executive order doesn’t change that, though it might help give the companies a slightly better due process defense to fight it. Companies still might not risk litigation over such a large potential fine, though they may also be wary of raising Trump’s ire by refusing to work with TikTok.

On top of all this, the order says it’s “not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States,” which makes it even less reliable as a defense for companies.

Trump also declared on Sunday that the US government could own 50 percent of TikTok through an unexplained “joint venture” with a private company. It remains unclear how this would work.

Will RedNote get banned in the US?

16 January 2025 at 11:17
Vector illustration of the Rednote / Xiaohongshu logo.
Image: Cath Virginia / The Verge

I’m not the first to note the irony of TikTok users flooding RedNote this week. The TikTok divest-or-ban rule was supposed to drive Americans away from a foreign-owned social network that was subject to influence or data harvesting by the Chinese government. Instead, it pushed them onto a different foreign-owned social network that poses the exact same hypothetical risks — and that might be subject to the exact same kind of ban.

TikTok faces a ban under the Protecting Americans from Foreign Adversary Controlled Applications Act, which passed with overwhelming bipartisan support and was signed last year by President Joe Biden (who is reportedly experiencing some buyer’s remorse right now). While it mentions TikTok and its parent company, ByteDance, by name, it could apply to any company that meets the following criteria:

  • It operates a website or app with more than 1 million monthly users and lets those users make accounts to create and share content.
  • It isn’t a service that primarily lets users “post product reviews, business reviews, or travel information and reviews.”
  • It’s controlled by a foreign adversary, a definition that covers North Korea, China, Russia, and Iran....

Read the full story at The Verge.

The Supreme Court could decide the fate of Pornhub — and the rest of the internet

15 January 2025 at 15:27
Illustration of a stop sign over a window of flesh colored pixels.
Cath Virginia / The Verge

In Supreme Court oral arguments over a potentially seismic change to the internet, the most memorable question came from Justice Samuel Alito. “One of the parties here is the owner of Pornhub, right?” Alito asked Derek Shaffer, lawyer for the adult industry group Free Speech Coalition. “Is it like the old Playboy magazine? You have essays there by the modern-day equivalent of Gore Vidal and William F. Buckley, Jr.?”

The massive adult web portal Pornhub, in case you’re wondering, does not publish essays by distinguished intellectuals. (Shaffer notes that it does host sexual wellness videos.) The question inspired a slew of commentary on social media, alongside a few quips directed at Justice Clarence Thomas, who declared during oral argument that “Playboy was about squiggly lines on cable TV.” But as funny as the quotes were, what the justices were getting at was hardly a joke: how much protection do sexual content and other legal speech deserve if they’re hosted online?

FSC v. Paxton concerns Texas’ HB 1181, which requires sites with a large proportion of sexually explicit content to verify users’ ages and post scientifically unproven health warnings about how porn “is proven to harm...

Read the full story at The Verge.

Meta’s fact-checking changes are just what Trump’s FCC head asked for

7 January 2025 at 08:03
A photo of the American flag with graphic warning symbols.
Image: Cath Virginia / The Verge

I have to commend Meta CEO Mark Zuckerberg and his new policy chief Joel Kaplan on their timing. It’s not hugely surprising that, as the pair announced early today, Meta is giving up on professional third-party fact-checking. The operator of Facebook, Instagram, and Threads has been backing off moderation recently, and fact-checking has always been contentious. But it’s probably smart to do it two weeks before President-elect Donald Trump takes office — and nominates a Federal Communications Commission head who’s threatened the company over it.

Trump’s FCC chairman pick (and current FCC commissioner), Brendan Carr, is a self-identified free speech defender with a creative interpretation of the First Amendment. In mid-November, as part of a flurry of lightly menacing missives to various entities, Carr sent a letter to Meta, Apple, Google, and Microsoft attacking the companies’ fact-checking programs.

The letter was primarily focused on NewsGuard, a conservative bête noire that Meta doesn’t actually work with. But it also demanded information about “the use of any media monitor or fact checking service,” and it left no doubt about Carr’s position on them. “You participated in a...

Read the full story at The Verge.

That Elon Musk ‘Adrian Dittmann’ screenshot is almost certainly fake

3 January 2025 at 09:16
Photo collage of Elon Musk.
Image: Cath Virginia / The Verge | Photo by STR / NurPhoto, Getty Images

A screenshot that seems to suggest billionaire Elon Musk is cosplaying superfan “Adrian Dittmann” — showing X account permissions beyond those of an ordinary user — is almost certainly fake, a source at X tells The Verge.

The source, who claims no knowledge of Dittmann’s identity, says an image posted to 4chan’s /pol/ board doesn’t reflect an actual interface available to people who work for X. The screenshot was posted by a user who identifies themselves as Adrian Dittmann, showing a post from Musk’s X page. In that screenshot, the X interface includes non-standard links to an “Admin Portal” and a “Bans” page, hinting that the user has special privileges on the site. But the source says neither of these options exists for X employees logged into their accounts. In fact, X employees would see the same interface as other users, with the potential exception of new features currently being trialed for wide release.

Adrian Dittmann posted on 4chan and accidentally revealed that he has admin privileges on twitter lol pic.twitter.com/ikbu1ZkopW

— anti-inflation supersoldier (@bluser12) January 2, 2025

Another source familiar with X’s operations confirmed to The Verge that the screenshot isn’t consistent with what employees see.

This suggests that other elements of the screenshot, like an analytics link that only appears for the author of a post, were also deliberate fabrications, seeded as hints that Musk is secretly Dittmann. The hints were picked up overnight and spread on social media alongside other posts made by the 4chan user — mostly ones lauding Musk and defending his X policies amid infighting with other conservatives over immigration.

It’s not clear who posted the screenshots. “Adrian Dittmann” is a longtime X user, and his Musk fandom and vocal similarities have led to long-standing rumors that he’s secretly none other than Musk himself. (Musk has cosplayed his son on the site, so it’s not that far-fetched an assertion.) User Mag’s Taylor Lorenz has noted that Dittmann benefits tremendously from the speculation that they’re Musk, and it’s possible the doctored screenshots are Dittmann leaning into that. The 4chan posts could also be from an unrelated impersonator, though, playing up the idea of Musk as a desperate forum poster. (I guess we can’t rule out that Musk, impersonating Dittmann, added fake elements to an actual screenshot of his X account? But I’m ranking that theory low on the list.)

None of this conclusively disproves a link between Musk and Dittmann, of course. But if Musk isn’t spending his precious free hours on a sockpuppet account, that gives him more time for cozying up to President-elect Donald Trump at Mar-a-Lago, attempting to swing Germany’s upcoming election in favor of the far-right AfD party, and playing Diablo IV.

Update 1:40PM ET: Added confirmation from a second source.

Apple will pay $95 million to people who were spied on by Siri

2 January 2025 at 10:40
Apple Watch Series 9 with Siri pulled up
Photo by Amelia Holowaty Krales / The Verge

Apple has agreed to a $95 million settlement with users whose conversations were inadvertently captured by its Siri voice assistant and potentially overheard by human employees. The proposed settlement, reported by Bloomberg, could pay many US-based Apple product owners up to $20 per device for up to five Siri-enabled devices. It still requires approval by a judge.

If approved, the settlement would apply to a subset of US-based people who owned or bought a Siri-enabled iPhone, iPad, Apple Watch, MacBook, iMac, HomePod, iPod touch, or Apple TV between September 17th, 2014 and December 31st, 2024. A user would also need to meet one other major criterion: they must swear under oath that they accidentally activated Siri during a conversation intended to be confidential or private. Individual payouts will depend on how many people claim the money, so if you apply, you could end up receiving less than the $20 cap.

The initial class action suit against Apple followed a 2019 report by The Guardian, which alleged Apple third-party contractors “regularly hear confidential medical information, drug deals, and recordings of couples having sex” while working on Siri quality control. While Siri is supposed to be triggered by a deliberate wake word, a whistleblower said that accidental triggers were common, claiming something as simple as the sound of a zipper could wake Siri up. Apple told The Guardian that only a small portion of Siri recordings were passed to contractors, and it later offered a formal apology and said it would no longer retain audio recordings.

The plaintiffs in the Apple lawsuit — one of whom was a minor — claimed their iPhones had recorded them on multiple occasions using Siri, sometimes after they hadn’t uttered a wake word.

Apple wasn’t the only company accused of letting people hear confidential recordings. Google and Amazon also use contractors that listen in on recorded conversations, including accidentally captured ones, and there’s a similar suit against Google pending.

Elon Musk riles up Trump’s far-right base by praising immigrants

27 December 2024 at 09:24
Digital photo collage of Elon Musk and Vivek Ramaswamy.
Image: Cath Virginia / The Verge, Getty Images

Elon Musk, Vivek Ramaswamy, and other members of President-elect Donald Trump’s Silicon Valley coalition are clashing with the MAGA movement’s hardline anti-immigrant faction, and it’s allegedly resulted in Musk stripping far-right critics’ verification badges on X.

The conflict centers on Musk and Ramaswamy’s recent praise for foreign tech workers, beginning soon after Indian immigrant Sriram Krishnan joined the team of Trump’s AI and crypto czar David Sacks. It’s pitted Trump’s tech mogul donor class against his older network of far-right influencers like activist and Trump companion Laura Loomer while escalating into racist rhetoric against Indian Americans in particular. The ugly, extremely online fight within the American far-right influence network parallels the immigration debate currently being hashed out more quietly in Washington.

Anti-immigrant rhetoric was a cornerstone of Trump’s pitch to voters; on top of promoting false, racist rumors about immigrants and promising mass deportations that could destabilize the American economy, he’s expected to revive an H-1B visa crackdown that he imposed during his first term. At the same time, Trump is leaning heavily on...

Read the full story at The Verge.

This card game lets you build the ideal social network — or the most toxic

18 December 2024 at 11:37
A group of people playing a card game, One Billion Users.
One Billion Users is currently on Kickstarter. | Image: Mike Masnick / Kickstarter

My social network was booming. I had attracted top-tier users: the coveted Trendsetter, the popularity-lured Investor. I had bested server problems and bad press. Then, somebody picked a particularly unlucky card out of the One Billion Users deck I was testing. Sixty seconds later, I had lost it all.

One Billion Users is a new card game from Techdirt and Diegetic Games, and at its best, it lends itself to moments like this. Currently in its last days on Kickstarter, intended to fund a single run of the game rather than a wide release, it’s the latest in a string of projects from the team-up — including the digital games Moderator Mayhem and Trust & Safety Tycoon as well as CIA: Collect It All, a card game built on real CIA training materials.

One Billion Users is a lot less nerdy than any of these. It’s inspired by the relatively simple 1906 racing-themed card game Touring, better known through a popular 1950s adaptation called Mille Bornes. Only, instead of trying to drive the fastest while sandbagging competitors, you’re trying to build the biggest social network while sabotaging everyone else.

A set of blocker, community, influencer, hotfix, and event cards.
...

Read the full story at The Verge.

Character.AI has retrained its chatbots to stop chatting up teens

12 December 2024 at 06:55
Vector illustration of the Character.ai logo.
Image: Cath Virginia / The Verge

In an announcement today, chatbot service Character.AI says it will soon launch parental controls for teenage users, and it describes safety measures it’s taken in the past few months, including a separate large language model (LLM) for users under 18. The announcement comes after press scrutiny and two lawsuits that claim it contributed to self-harm and suicide.

In a press release, Character.AI said that, over the past month, it’s developed two separate versions of its model: one for adults and one for teens. The teen LLM is designed to place “more conservative” limits on how bots can respond, “particularly when it comes to romantic content.” This includes more aggressively blocking output that could be “sensitive or suggestive,” but also attempting to better detect and block user prompts that are meant to elicit inappropriate content. If the system detects “language referencing suicide or self-harm,” a pop-up will direct users to the National Suicide Prevention Lifeline, a change that was previously reported by The New York Times.

Minors will also be prevented from editing bots’ responses — an option that lets users rewrite conversations to add content Character.AI might otherwise block.

Beyond these changes, Character.AI says it’s “in the process” of adding features that address concerns about addiction and confusion over whether the bots are human, complaints made in the lawsuits. A notification will appear when users have spent an hour-long session with the bots, and an old disclaimer that “everything characters say is made up” is being replaced with more detailed language. For bots that include descriptions like “therapist” or “doctor,” an additional note will warn that they can’t offer professional advice.

A chatbot named “Therapist” (tagline: “I’m a licensed CBT therapist”) with a warning box that says “this is not a real person or licensed professional.” Image: Character.AI
Narrator: it was not a licensed CBT therapist.

When I visited Character.AI, I found that every bot now included a small note reading “This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.” When I visited a bot named “Therapist” (tagline: “I’m a licensed CBT therapist”), a yellow box with a warning signal told me that “this is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment.”

The parental control options are coming in the first quarter of next year, Character.AI says, and they’ll tell parents how much time a child is spending on Character.AI and which bots they interact with most frequently. All the changes are being made in collaboration with “several teen online safety experts,” including the organization ConnectSafely.

Character.AI, founded by ex-Googlers who have since returned to Google, lets visitors interact with bots built on a custom-trained LLM and customized by users. These range from chatbot life coaches to simulations of fictional characters, many of which are popular among teens. The site allows users who identify themselves as age 13 and over to create an account.

But the lawsuits allege that while some interactions with Character.AI are harmless, at least some underage users become compulsively attached to the bots, whose conversations can veer into sexualized content or topics like self-harm. They’ve castigated Character.AI for not directing users to mental health resources when they discuss self-harm or suicide.

“We recognize that our approach to safety must evolve alongside the technology that drives our product — creating a platform where creativity and exploration can thrive without compromising safety,” says the Character.AI press release. “This suite of changes is part of our long-term commitment to continuously improve our policies and our product.”

Character.AI sued again over ‘harmful’ messages sent to teens

10 December 2024 at 08:28
Vector illustration of the Character.ai logo.
Image: Cath Virginia / The Verge

Chatbot service Character.AI is facing another lawsuit for allegedly hurting teens’ mental health, this time after a teenager said it led him to self-harm. The suit, filed in Texas on behalf of the 17-year-old and his family, targets Character.AI and its cofounders’ former workplace, Google, with claims including negligence and defective product design. It alleges that Character.AI allowed underage users to be “targeted with sexually explicit, violent, and otherwise harmful material, abused, groomed, and even encouraged to commit acts of violence on themselves and others.”

The suit appears to be the second Character.AI suit brought by the Social Media Victims Law Center and the Tech Justice Law Project, which have previously filed suits against numerous social media platforms. It uses many of the same arguments as an October wrongful death lawsuit against Character.AI for allegedly provoking a teen’s death by suicide. While both cases involve individual minors, they focus on making a more sweeping case: that Character.AI knowingly designed the site to encourage compulsive engagement, failed to include guardrails that could flag suicidal or otherwise at-risk users, and trained...

Read the full story at The Verge.
