
AI helps Telegram remove 15 million suspect groups and channels in 2024

13 December 2024 at 15:51

Telegram launched a new page touting its moderation efforts, which have spiked since its founder's arrest.

Bluesky at a crossroads as users petition to ban Jesse Singal over anti-trans views, harassment

13 December 2024 at 06:24

Now with 25 million users, Bluesky is facing a test that will determine whether its platform will still be seen as a safe space and a place of refuge from the toxicity of X. In recent days, a large number of users on Bluesky have been urging the company to ban one newcomer for […]

Meta eases up on issuing 'first strikes' for Facebook users and Instagram creators

5 December 2024 at 09:07

Days after Meta admitted that it has been over-moderating its content, with mistakes affecting creators, the company announced an expansion of a new policy that will help keep creators from being penalized for their first violation of Meta's Community Standards. On Thursday, Meta said that the policy, which launched in August for Facebook creators, will […]

Child safety org flags new CSAM with AI trained on real child sex abuse images

For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse material (CSAM) and stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM has remained a harder problem for platforms as abusers continue to victimize new children. Now, AI may be ready to change that.
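The article doesn't detail how hash-based detection works, but the general technique is straightforward: compare an upload's digest against a list of hashes of previously verified material. The sketch below is purely illustrative, with a placeholder hash list and a hypothetical is_known_match function; real deployments use perceptual hashes such as PhotoDNA, which survive resizing and re-encoding, rather than a cryptographic hash like SHA-256.

```python
import hashlib

# Hypothetical list of digests of known, previously verified material,
# of the kind clearinghouses distribute to platforms. The entry below
# is a placeholder, not a real digest.
KNOWN_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_match(file_bytes: bytes) -> bool:
    """Check an upload's digest against the known-content hash list.

    SHA-256 only catches byte-identical copies, which keeps this sketch
    simple; production systems use perceptual hashing so that altered
    copies of the same image still match.
    """
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES
```

This is exactly why new or unknown material slips through: a hash lookup can only flag content that has already been seen and catalogued, which is the gap the classifier-based approach described below is meant to close.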

Today, a prominent child safety organization, Thorn, in partnership with Hive, a leading cloud-based AI solutions provider, announced the release of an API expanding access to an AI model designed to flag unknown CSAM. It's the earliest use of AI technology aimed at exposing unreported CSAM at scale.

An expansion of Thorn's CSAM detection tool, Safer, the AI feature uses "advanced machine learning (ML) classification models" to "detect new or previously unreported CSAM," generating a "risk score to make human decisions easier and faster."
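Thorn hasn't published the shape of the Safer API, so the following is a hypothetical sketch of the workflow the article describes: a classifier emits a risk score, and thresholds decide whether an upload is quarantined or queued for a human moderator. ClassifierResult, route, and both threshold values are illustrative assumptions, not Thorn's actual interface.

```python
from dataclasses import dataclass

@dataclass
class ClassifierResult:
    """Assumed shape of a classifier response: a risk score in [0, 1]."""
    risk_score: float

# Illustrative thresholds; real values would be tuned per platform.
REVIEW_THRESHOLD = 0.70     # queue for human moderators
AUTO_HOLD_THRESHOLD = 0.95  # quarantine immediately pending review

def route(result: ClassifierResult) -> str:
    """Map a model's risk score to a moderation action.

    The score doesn't replace human judgment; it prioritizes which
    uploads a trust-and-safety team should review first.
    """
    if result.risk_score >= AUTO_HOLD_THRESHOLD:
        return "quarantine_and_review"
    if result.risk_score >= REVIEW_THRESHOLD:
        return "queue_for_human_review"
    return "no_action"
```

The two-threshold design reflects the quoted goal of making "human decisions easier and faster": borderline scores get prioritized for review rather than triggering automatic removal, keeping a person in the loop on every enforcement decision.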

© Aurich Lawson | Getty Images
