
A tsunami of AI deepfakes was expected this election year. Here's why it didn't happen.

18 December 2024 at 02:00
Oren Etzioni
Oren Etzioni, founder of TrueMedia.org.

  • Generative AI tools have made it easier to create fake images, videos, and audio.
  • That sparked concern that this busy election year would be disrupted by realistic disinformation.
  • The barrage of AI deepfakes didn't happen. An AI researcher explains why and what's to come.

Oren Etzioni has studied artificial intelligence and worked on the technology for well over a decade, so when he saw the huge election cycle of 2024 coming, he got ready.

India, Indonesia, and the US were just some of the populous nations sending citizens to the ballot box. Generative AI had been unleashed upon the world about a year earlier, and there were major concerns about a potential wave of AI-powered disinformation disrupting the democratic process.

"We're going into the jungle without bug spray," Etzioni recalled thinking at the time.

He responded by starting TrueMedia.org, a nonprofit that uses AI-detection technologies to help people determine whether online videos, images, and audio are real or fake.

The group launched an early beta version of its service in April so that it would be ready for a barrage of realistic AI deepfakes and other misleading online content.

In the end, the barrage never came.

"It really wasn't nearly as bad as we thought," Etzioni said. "That was good news, period."

He's still slightly mystified by this, although he has theories.

First, you don't need AI to lie during elections.

"Out-and-out lies and conspiracy theories were prevalent, but they weren't always accompanied by synthetic media," Etzioni said.

Second, he suspects that generative AI technology is not quite there yet, particularly when it comes to deepfake videos.

"Some of the most egregious videos that are truly realistic – those are still pretty hard to create," Etzioni said. "There's another lap to go before people can generate what they want easily and have it look the way they want. Awareness of how to do this may not have penetrated the dark corners of the internet yet."

One thing he's sure of: High-end AI video-generation capabilities will come. This might happen during the next major election cycle or the one after that, but it's coming.

With that in mind, Etzioni shared lessons from TrueMedia's first go-round this year:

  • Democracies are still not prepared for the worst-case scenario when it comes to AI deepfakes.
  • There's no purely technical solution to this looming problem; AI will need regulation.
  • Social media has an important role to play.
  • TrueMedia achieves roughly 90% accuracy, though users asked for more. Being 100% accurate is impossible, so there's room for human analysts.
  • Having humans check every decision doesn't scale, so humans get involved only in edge cases, such as when users question a decision made by TrueMedia's technology.
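The workflow those last two points describe, where automated detection handles the bulk of cases and humans are pulled in only for contested or borderline calls, is a common human-in-the-loop pattern. The sketch below is a generic illustration of that pattern, not TrueMedia's actual system; the names, thresholds, and routing rules are all hypothetical:

```python
# Hypothetical sketch of human-in-the-loop triage for a deepfake detector:
# the model's confidence decides whether a verdict ships automatically or
# is routed to a human analyst. Not TrueMedia's real code.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str          # "real" or "fake"
    confidence: float   # model confidence in the label, 0.0 to 1.0
    needs_review: bool  # True if a human analyst should check it

def triage(fake_score: float, user_disputed: bool = False,
           threshold: float = 0.5, review_band: float = 0.15) -> Verdict:
    """Route a detector score: auto-decide when the score is far from the
    decision boundary, escalate to a human when it is close or disputed."""
    label = "fake" if fake_score >= threshold else "real"
    confidence = fake_score if label == "fake" else 1.0 - fake_score
    # Edge cases: scores near the boundary, or decisions users push back on.
    near_boundary = abs(fake_score - threshold) < review_band
    return Verdict(label, confidence, near_boundary or user_disputed)

print(triage(0.97))                      # clear call: decided automatically
print(triage(0.55))                      # near the boundary: human review
print(triage(0.10, user_disputed=True))  # user disputes it: human review
```

With a scheme like this, analyst workload scales with the number of ambiguous or disputed items rather than with total upload volume, which is what makes the human check affordable.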

The group plans to publish research on its AI deepfake detection efforts, and it's working on potential licensing deals.

"There's a lot of interest in our AI models that have been tuned based on the flurry of uploads and deepfakes," Etzioni said. "We hope to license those to entities that are mission-oriented."

Read the original article on Business Insider

Meta debuts a tool for watermarking AI-generated videos

12 December 2024 at 16:00

Meta is open-sourcing Meta Video Seal, a method for watermarking AI-generated videos that the company says is robust against compression and edits.

© 2024 TechCrunch. All rights reserved. For personal use only.

Your AI clone could target your family, but there's a simple defense

On Tuesday, the US Federal Bureau of Investigation advised Americans to share a secret word or phrase with their family members to protect against AI-powered voice-cloning scams, as criminals increasingly use voice synthesis to impersonate loved ones in crisis.

"Create a secret word or phrase with your family to verify their identity," wrote the FBI in an official public service announcement (I-120324-PSA).

For example, you could tell your parents, children, or spouse to ask for a word or phrase to verify your identity if something seems suspicious, such as "The sparrow flies at midnight," "Greg is the king of burritos," or simply "flibbertigibbet." (As fun as these sound, your phrase should stay secret and not be one of these published examples.)


PlayAI clones voices on command

25 November 2024 at 04:00

Back in 2016, Hammad Syed and Mahmoud Felfel, an ex-WhatsApp engineer, thought it'd be neat to build a text-to-speech Chrome extension for Medium articles. The extension, which could read any Medium story aloud, was featured on Product Hunt. A year later, it spawned an entire business. "We saw a bigger opportunity in helping individuals and […]

