
A tsunami of AI deepfakes was expected this election year. Here's why it didn't happen.

Oren Etzioni, founder of TrueMedia.org.

  • Generative AI tools have made it easier to create fake images, videos, and audio.
  • That sparked concern that this busy election year would be disrupted by realistic disinformation.
  • The barrage of AI deepfakes didn't happen. An AI researcher explains why and what's to come.

Oren Etzioni has studied artificial intelligence and worked on the technology for well over a decade, so when he saw the huge election cycle of 2024 coming, he got ready.

India, Indonesia, and the US were just some of the populous nations sending citizens to the ballot box. Generative AI had been unleashed upon the world about a year earlier, and there were major concerns about a potential wave of AI-powered disinformation disrupting the democratic process.

"We're going into the jungle without bug spray," Etzioni recalled thinking at the time.

He responded by starting TrueMedia.org, a nonprofit that uses AI-detection technologies to help people determine whether online videos, images, and audio are real or fake.
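TrueMedia hasn't published its detection pipeline, but at its core a service like this scores each uploaded item with a classifier and maps the score to a verdict. The following is a minimal illustrative sketch in Python; the detector model, its predict method, and the thresholds are all hypothetical stand-ins, not TrueMedia's actual system.

```python
# Purely illustrative sketch of a deepfake-detection verdict.
# The detector model, its predict() method, and the thresholds are
# hypothetical; TrueMedia.org has not published its pipeline.

from dataclasses import dataclass

@dataclass
class Verdict:
    media_url: str
    fake_probability: float  # model's estimate that the media is synthetic
    label: str               # "likely fake", "likely real", or "uncertain"

def classify(media_url: str, model, low: float = 0.35, high: float = 0.65) -> Verdict:
    """Score one piece of media and map the score to a human-readable label."""
    score = model.predict(media_url)  # assumed to return a probability in [0, 1]
    if score >= high:
        label = "likely fake"
    elif score <= low:
        label = "likely real"
    else:
        label = "uncertain"           # gray zone: a candidate for human review
    return Verdict(media_url, score, label)
```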

The group launched an early beta version of its service in April, so it was ready for a barrage of realistic AI deepfakes and other misleading online content.

In the end, the barrage never came.

"It really wasn't nearly as bad as we thought," Etzioni said. "That was good news, period."

He's still slightly mystified by this, although he has theories.

First, you don't need AI to lie during elections.

"Out-and-out lies and conspiracy theories were prevalent, but they weren't always accompanied by synthetic media," Etzioni said.

Second, he suspects that generative AI technology is not quite there yet, particularly when it comes to deepfake videos.

"Some of the most egregious videos that are truly realistic β€” those are still pretty hard to create," Etzioni said. "There's another lap to go before people can generate what they want easily and have it look the way they want. Awareness of how to do this may not have penetrated the dark corners of the internet yet."

One thing he's sure of: High-end AI video-generation capabilities will come. This might happen during the next major election cycle or the one after that, but it's coming.

With that in mind, Etzioni shared lessons from TrueMedia's first go-round this year:

  • Democracies are still not prepared for the worst-case scenario when it comes to AI deepfakes.
  • There's no purely technical solution for this looming problem, and AI will need regulation.
  • Social media has an important role to play.
  • TrueMedia's detection is roughly 90% accurate, though users asked for more. Being 100% accurate is impossible, so there's still room for human analysts.
  • Having humans check every decision doesn't scale, so they get involved only in edge cases, such as when users dispute a call made by TrueMedia's technology. A rough sketch of that routing follows this list.
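TrueMedia's internal workflow isn't public, but the escalation pattern the last bullet describes, where automated verdicts stand unless the model is uncertain or a user pushes back, might look roughly like this (again in Python, with hypothetical names; the Verdict type mirrors the earlier sketch):

```python
# Illustrative human-in-the-loop routing: automated verdicts stand unless
# the model was uncertain or a user disputes the call. Names are hypothetical.

from dataclasses import dataclass
from queue import Queue

@dataclass
class Verdict:
    media_url: str
    fake_probability: float
    label: str  # "likely fake", "likely real", or "uncertain"

review_queue: Queue = Queue()  # edge cases awaiting a human analyst

def route(verdict: Verdict, user_disputed: bool = False) -> str:
    """Pass automated verdicts through; escalate only the edge cases."""
    if verdict.label == "uncertain" or user_disputed:
        review_queue.put(verdict)  # a human analyst makes the final call
        return "pending human review"
    return verdict.label           # most traffic never touches a human
```

The design point is the one Etzioni makes: humans are too expensive to put on every decision, so they are reserved for the small fraction of cases the model can't settle.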

The group plans to publish research on its AI deepfake detection efforts, and it's working on potential licensing deals.

"There's a lot of interest in our AI models that have been tuned based on the flurry of uploads and deepfakes," Etzioni said. "We hope to license those to entities that are mission-oriented."


People will share misinformation that sparks "moral outrage"

Rob Bauer, the chair of a NATO military committee, reportedly said, "It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first." These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.

But many people missed a key detail about the quote: Bauer never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.

Why do stories like this get so many views and shares? "The vast majority of misinformation studies assume people want to be accurate, but certain things distract them," says William J. Brady, a researcher at Northwestern University. "Maybe it's the social media environment. Maybe they're not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article." Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what's got us outraged is even real.


A food-industry nutritionist shares 3 tricks companies use to 'science-wash' products, and how to spot them

It's important to look into the research behind a company's health claims, Emily Prpa said.

  • Science-washing is when a company markets itself as science-backed without proper research.
  • Some companies attach doctors to a product to give it credibility or cherry-pick research.
  • A nutritionist who works in the industry explained how to notice these tricks.

A nutritionist who works in the food industry broke down the marketing tricks companies use to make their products seem more grounded in science than they are.

The global wellness industry is now estimated at $6.3 trillion, giving companies lots of incentives to draw in consumers even if the science behind their products isn't solid, a process called science-washing.

Emily Prpa, a nutrition lecturer at King's College London who also works in the food industry, described these tricks in a call with Business Insider.

Trick 1: Skipping proper research

Prpa said the first thing you should do when deciding if a product is worth buying is check if the company is doing its own research.

Firsthand research is expensive, and companies often cobble together existing studies to back up their products.

For example, they could cite the benefits of individual ingredients without actually assessing whether the combination in their product is effective.

This is especially prevalent with greens powders, which contain lots of different vitamins, minerals, and probiotics.

"A lot of the time, I see that they have vitamins and minerals that compete for the same absorption channel, so actually you're not going to be getting the exact dose that they say you will on the packet," she said.

Companies can also cherry-pick which studies they include, she said, ignoring unfavorable research.

If a company has funded independent clinical trials to test its product, it's likely to make that clear on its website, Prpa said. "They'll often be very proud to state that because it's giving them that distance and showing that this isn't biased."

Trick 2: Flashy endorsements

Sometimes a company will attach a doctor or other professional to a product. They might bring them on as a scientific or medical advisor or publicize that they endorse the product.

This can give consumers the impression that the company is credible, but that's not always the case, Prpa said. "Does that product really hold up to the claims on the website?"

It isn't necessarily a red flag if a doctor is the face of a product, Prpa said, but that alone does not mean a product has health benefits.

"You are not buying the medic as your private consultant, you are buying a product," she said.

Trick 3: Promises that sound too good to be true

It sounds simple: if a product is marketed as a fix for stacks of health problems, it's probably overpromising.

"I don't know a product or an app on the market that can simultaneously lower blood sugars, improve sleep, improve cognition, and focus," Prpa said. "If it is sounding too good, it probably is."

