Today — 3 March 2025

Before yesterday

Instagram 'Error' Turned Reels Into Neverending Scroll of Murder, Gore, and Violence

27 February 2025 at 07:16

Content warning: this article contains graphic descriptions of violence against people and animals.

An “error” in Instagram Reels caused its algorithm to show some users video after video of horrific violence, animal abuse, murders, dead bodies, and other gore, Meta told 404 Media. The company said “we apologize for the mistake.”

Sometime in the last few days, this error caused people’s Reels algorithms to suddenly change. A 404 Media reader who has a biking-related Instagram account reached out to me and said that his feed, which is “typically dogs and bikes,” had become videos of people getting killed: “I had never seen someone being eaten by a shark, followed by someone getting killed by a car crash, followed by someone getting shot,” he told 404 Media. 

💡
Do you know more about why this happened? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at [email protected].

To test this, the person let me log in to his Instagram account, and I scrolled Reels for about 15 minutes. There were a couple of videos about dogs and a couple about bikes, but the vast majority of videos were hidden behind a “sensitive content” warning. Here is what I saw when I clicked through the warnings; many of the videos had thousands of likes and hundreds of comments: 

  • An elephant repeatedly stepping on and flattening a man
  • A man attacking a pig with a wrench
  • A close-up video of someone who had just been shot in the head
  • A woman crying while lying on top of a loved one who had just been shot to death
  • A man on a motorcycle stopping next to a pedestrian and shooting them in the head with a pistol
  • A pile of dead bodies in what looked to be a war-type situation
  • A small plane crash in front of a crowd of people
  • A group of people beating a crocodile to death
  • A few videos by an account called “PeopleDeadDaily” 
  • A man being lit on fire
  • A man shooting a cashier at point blank range

Meta’s Oversight Board is reviewing the company’s new hate speech policies

26 February 2025 at 15:53

Meta’s Oversight Board, the independent group created to help the company with sensitive policy decisions, is preparing to weigh in on CEO Mark Zuckerberg’s recent changes to how Facebook, Instagram, and Threads handle hate speech, Engadget reported. Zuckerberg announced an overhaul of the company’s content moderation policies in January, shortly before the inauguration of U.S. President Donald […]

© 2024 TechCrunch. All rights reserved. For personal use only.

“Zero warnings”: Longtime YouTuber rails against unexplained channel removal

Artemiy Pavlov, the founder of a small but mighty music software brand called Sinevibes, spent more than 15 years building a YouTube channel with all original content to promote his business' products. Over all those years, he never had any issues with YouTube's automated content removal system—until Monday, when YouTube, without issuing a single warning, abruptly deleted his entire channel.

"What a 'nice' way to start a week!" Pavlov posted on Bluesky. "Our channel on YouTube has been deleted due to 'spam and deceptive policies.' Which is the biggest WTF moment in our brand's history on social platforms. We have only posted demos of our own original products, never anything else...."

Officially, YouTube told Pavlov that his channel violated YouTube's "spam, deceptive practices, and scam policy," but Pavlov could think of no videos that might be labeled as violative.


Mark Zuckerberg defends Meta’s latest pivot in three-hour Joe Rogan interview

10 January 2025 at 15:22

Meta CEO Mark Zuckerberg defended his decision to scale back Meta’s content moderation policies in a Friday appearance on Joe Rogan’s podcast. Zuckerberg faced widespread criticism for the decision, including from employees inside his own company. “Probably depends on who you ask,” said Zuckerberg when asked how Meta’s updates have been received. The key updates […]


Meta kills diversity programs, claiming DEI has become “too charged”

Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.

According to an internal memo viewed by Axios and verified by Ars, Meta's vice president of human resources, Janelle Gale, told Meta employees that the shift came because the "legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing."

It's another move by Meta that some view as part of the company's larger effort to align with the incoming Trump administration's politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.


Google searches for deleting Facebook, Instagram explode after Meta ends fact-checking

9 January 2025 at 08:28

Google searches for how to cancel and delete Facebook, Instagram, and Threads accounts have seen explosive rises in the U.S. since Meta CEO Mark Zuckerberg announced that the company will end its third-party fact-checking system, loosen content moderation policies, and roll back previous limits to the amount of political content in user feeds.  Critics see […]


AI helps Telegram remove 15 million suspect groups and channels in 2024

13 December 2024 at 15:51

Telegram launched a new page touting its moderation efforts, which have spiked since its founder's arrest.


Child safety org flags new CSAM with AI trained on real child sex abuse images

For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse material (CSAM) and stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM has remained a bigger challenge for platforms, because newly created material matches nothing in existing hash databases. Now, AI may be ready to change that.
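The known-content hash matching described above can be sketched in a few lines. This is an illustrative toy only: production systems use perceptual hashes (such as Microsoft's PhotoDNA) rather than plain cryptographic digests so that re-encoded near-duplicates still match, and the hash set and function name here are hypothetical.

```python
import hashlib

# Hypothetical set of digests of already-identified material.
# Real deployments use perceptual hashes, not MD5, so that
# resized or re-encoded copies of known content still match.
KNOWN_HASHES = {"d3b07384d113edec49eaa6238ad5ff00"}

def is_known_content(data: bytes) -> bool:
    """Return True if this exact content appears in the known-hash set."""
    return hashlib.md5(data).hexdigest() in KNOWN_HASHES
```

The limitation is visible in the sketch: an exact-match lookup can only flag content that is already in the database, which is precisely why brand-new material goes undetected and why a classifier-based approach is needed.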

Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an API expanding access to an AI model designed to flag unknown CSAM. It is the earliest use of AI aimed at exposing unreported CSAM at scale.

An expansion of Thorn's CSAM detection tool, Safer, the AI feature uses "advanced machine learning (ML) classification models" to "detect new or previously unreported CSAM," generating a "risk score to make human decisions easier and faster."
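Thorn and Hive have not published the classifier's internals, but the "risk score to make human decisions easier and faster" pattern generally means routing content by score. A minimal sketch of that routing, with entirely hypothetical function names and thresholds:

```python
def route_by_risk(score: float,
                  review_threshold: float = 0.5,
                  escalate_threshold: float = 0.9) -> str:
    """Map a classifier's risk score (0.0-1.0) to a moderation action.

    Thresholds are illustrative; real systems tune them against
    precision/recall targets for the human review queue.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("risk score must be in [0.0, 1.0]")
    if score >= escalate_threshold:
        return "escalate"      # high confidence: prioritized review/reporting
    if score >= review_threshold:
        return "human_review"  # uncertain: queue for a moderator
    return "no_action"
```

The point of the score, as the article notes, is not to replace human judgment but to order the queue so reviewers see the highest-risk material first.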

