Trump’s FCC Head Warns EU Content Rules ‘Incompatible’ With Free Speech

It's the latest warning from the Trump administration that it wants Big Tech left alone.
Content warning: this article contains graphic descriptions of violence against people and animals.
An “error” in Instagram Reels caused its algorithm to show some users video after video of horrific violence, animal abuse, murders, dead bodies, and other gore, Meta told 404 Media. The company said “we apologize for the mistake.”
Sometime in the last few days, this error caused people’s Reels algorithms to suddenly change. A 404 Media reader who has a biking-related Instagram account reached out to me and said that his feed, which is “typically dogs and bikes,” had become videos of people getting killed: “I had never seen someone being eaten by a shark, followed by someone getting killed in a car crash, followed by someone getting shot,” he told 404 Media.
To test this, the person let me log in to his Instagram account, and I scrolled Reels for about 15 minutes. There were a couple of videos about dogs and a couple of videos about bikes, but the vast majority of videos were hidden behind a “sensitive content” warning. I will describe videos I saw when I clicked through the warnings, many of which had thousands of likes and hundreds of comments:
Meta’s Oversight Board, the company’s independent group created to help with sensitive policy decisions, is preparing to weigh in on CEO Mark Zuckerberg’s recent changes to how Facebook, Instagram, and Threads handle hate speech, Engadget reported. Zuckerberg announced an overhaul of the company’s content moderation policies in January, shortly before the inauguration of U.S. President Donald […]
Artemiy Pavlov, the founder of a small but mighty music software brand called Sinevibes, spent more than 15 years building a YouTube channel with all original content to promote his business' products. Over all those years, he never had any issues with YouTube's automated content removal system—until Monday, when YouTube, without issuing a single warning, abruptly deleted his entire channel.
"What a 'nice' way to start a week!" Pavlov posted on Bluesky. "Our channel on YouTube has been deleted due to 'spam and deceptive policies.' Which is the biggest WTF moment in our brand's history on social platforms. We have only posted demos of our own original products, never anything else...."
Officially, YouTube told Pavlov that his channel violated YouTube's "spam, deceptive practices, and scam policy," but Pavlov could think of no videos that might be labeled as violative.
Meta CEO Mark Zuckerberg defended his decision to scale back Meta’s content moderation policies in a Friday appearance on Joe Rogan’s podcast. Zuckerberg faced widespread criticism for the decision, including from employees inside his own company. “Probably depends on who you ask,” said Zuckerberg when asked how Meta’s updates have been received. The key updates […]
Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.
According to an internal memo viewed by Axios and verified by Ars, Meta's vice president of human resources, Janelle Gale, told Meta employees that the shift came because the "legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing."
It's another move by Meta that some view as part of the company's larger effort to align with the incoming Trump administration's politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.
Google searches for how to cancel and delete Facebook, Instagram, and Threads accounts have seen explosive rises in the U.S. since Meta CEO Mark Zuckerberg announced that the company will end its third-party fact-checking system, loosen content moderation policies, and roll back previous limits to the amount of political content in user feeds. Critics see […]
Telegram launched a new page touting its moderation efforts, which have spiked since its founder's arrest.
For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse materials (CSAM) to stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM has remained a harder problem for platforms, leaving new victims unprotected. Now, AI may be ready to change that.
Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an API expanding access to an AI model designed to flag unknown CSAM. It's the earliest use of AI technology aimed at exposing unreported CSAM at scale.
An expansion of Thorn's CSAM detection tool, Safer, the AI feature uses "advanced machine learning (ML) classification models" to "detect new or previously unreported CSAM," generating a "risk score to make human decisions easier and faster."
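Taken together, the passage describes a two-stage design: hash matching catches known material, and an ML classifier scores everything else so humans can triage unknown content by risk. The following is a minimal sketch of that flow, not Safer's actual API; the classify_image stub, the KNOWN_HASHES set, and the RISK_THRESHOLD value are all hypothetical, and real systems use perceptual hashes (which also match near-duplicates) rather than the cryptographic SHA-256 used here to keep the sketch self-contained.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical set of hashes of known flagged material. Production systems
# use perceptual hashing so visually similar copies still match; SHA-256 is
# a simplification for this sketch.
KNOWN_HASHES: set[str] = set()

# Assumed triage cutoff -- not a value published by Thorn or Hive.
RISK_THRESHOLD = 0.8


@dataclass
class ModerationResult:
    known_match: bool
    risk_score: float
    needs_human_review: bool


def classify_image(data: bytes) -> float:
    """Stand-in for a hosted ML classifier returning a 0..1 risk score.

    A real deployment would call the vendor's classification API here.
    """
    return 0.0  # placeholder score


def moderate(data: bytes) -> ModerationResult:
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_HASHES:
        # Stage 1: known material is matched deterministically by hash.
        return ModerationResult(True, 1.0, True)
    # Stage 2: unknown content gets a classifier risk score; high scores
    # are routed to human reviewers rather than auto-actioned.
    score = classify_image(data)
    return ModerationResult(False, score, score >= RISK_THRESHOLD)
```

Note that in this sketch a high score only routes the item to a reviewer; as the announcement puts it, the risk score exists "to make human decisions easier and faster," not to replace them.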