
It's remarkably easy to inject new medical misinformation into LLMs

It's pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, the far larger volume of accurate information would overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical misinformation can be included in a large language model (LLM) training set before it starts spitting out inaccurate answers. While the study doesn't identify a lower bound, it does show that by the time misinformation accounts for just 0.001 percent of the training data, the resulting LLM is compromised.
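
To put that 0.001 percent figure in perspective, here is a rough back-of-envelope sketch in Python. The corpus size and per-document token count below are hypothetical stand-ins, not numbers from the NYU study; the point is only that a 0.001 percent poisoning rate amounts to a vanishingly small slice of a web-scale training set.

```python
# Back-of-envelope: how much text is 0.001% of a web-scale training corpus?
# All figures below are illustrative assumptions, not values from the study.

corpus_tokens = 1_000_000_000_000   # assume a 1-trillion-token training set
poison_fraction = 0.001 / 100       # 0.001 percent, as reported in the article
tokens_per_document = 1_000         # assume roughly 1,000 tokens per injected document

poison_tokens = corpus_tokens * poison_fraction
poison_documents = poison_tokens / tokens_per_document

print(f"Poisoned tokens:    {poison_tokens:,.0f}")     # 10,000,000
print(f"Poisoned documents: {poison_documents:,.0f}")  # about 10,000 short documents
```

Under those assumptions, roughly ten thousand short documents, a rounding error next to the rest of the corpus, would be enough to reach the threshold the study describes.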

While the paper is focused on the intentional "poisoning" of an LLM during training, it also has implications for the body of misinformation that's already online and part of the training set for existing LLMs, as well as the persistence of out-of-date information in validated medical databases.


© Just_Super

Meta drops fact-checking, loosens its content moderation rules

7 January 2025 at 04:35

Meta, the parent of Facebook, Instagram and WhatsApp, today announced a major overhaul of its content moderation policies, removing some guardrails it had put in place over several years in response to criticism that it had helped spread political and health misinformation. In a blog post called "More speech, fewer mistakes", Meta's new […]

© 2024 TechCrunch. All rights reserved. For personal use only.

A doctor who calls out diet misinformation shared 3 red flags that could harm your health

27 December 2024 at 23:32
A woman watches a video of a doctor speaking on her smart phone.
It's impossible to be knowledgeable about all health claims, so Dr. Idrees Mughal recommends looking out for common tricks.

SDI Productions/Getty Images

  • Bogus health advice is widespread online, and often follows a few strategies.
  • Misinformation can harm a person's physical and mental health.
  • Look out for big claims and conspiratorial tones, Dr. Idrees Mughal advised.

A doctor who challenges nutrition misinformation online, and who has written a book about its common tricks, listed the red flags to look out for.

Health misinformation can cause real damage to physical and mental health, so it's crucial to learn how to spot it, said Dr. Idrees Mughal, a medical doctor with a master's degree in nutritional research.

Mughal was speaking last month at an online nutrition misinformation conference hosted by the Royal College of Medicine and the nutrition organization Nutritank.

It's impossible to be knowledgeable about every health claim, but recognizing common patterns can help you spot fakery, Mughal said.

Here are his three red flags:

Absolute language

Words like "most," "top," "worst," "best," "always," and "never" do not belong in health advice because they don't consider individual differences, Mughal said. "No one who is truly evidence-based would use terms like these."

People have different needs and goals, and no one ingredient or diet can be the top way to eat for the whole population, he said.

Take nuts, for example: They are a good source of fiber, protein, and healthy fats, and some studies suggest that eating them regularly is linked to longevity. But nut allergies are widespread and can be fatal, so the advice won't work for everyone.

A quick fix

"The promise of a quick fix is always a massive red flag," Mughal said.

People are much more receptive to things that can be done fast, so silver-bullet-type content tends to garner more engagement, clicks, and likes, he said.

But chronic diseases that can be impacted by our lifestyle choices, such as obesity, type 2 diabetes, and cardiovascular disease, require a long-term lifestyle management treatment plan. "If you didn't develop them overnight, you're not going to fix them overnight," he said.

Creating an 'us versus them' mentality

Health misinformation can undermine public health and lead to mistrust in medical professionals, Mughal said.

Some wellness influencers leverage this mistrust to market themselves and create an "us versus them" mentality, he said.

Rather than providing evidence-based information, they might say things like, "The healthcare industry doesn't want you to know this. I'm about to let you in on a huge secret," which frames them as experts with hidden knowledge, he said. At the same time, it encourages you to distrust the more established authorities.

"It's a very kind of predatory wellness marketing tactic," Mughal said.


A tsunami of AI deepfakes was expected this election year. Here's why it didn't happen.

18 December 2024 at 02:00
Oren Etzioni, founder of TrueMedia.org.

Oren Etzioni

  • Generative AI tools have made it easier to create fake images, videos, and audio.
  • That sparked concern that this busy election year would be disrupted by realistic disinformation.
  • The barrage of AI deepfakes didn't happen. An AI researcher explains why and what's to come.

Oren Etzioni has studied artificial intelligence and worked on the technology for well over a decade, so when he saw the huge election cycle of 2024 coming, he got ready.

India, Indonesia, and the US were just some of the populous nations sending citizens to the ballot box. Generative AI had been unleashed upon the world about a year earlier, and there were major concerns about a potential wave of AI-powered disinformation disrupting the democratic process.

"We're going into the jungle without bug spray," Etzioni recalled thinking at the time.

He responded by starting TrueMedia.org, a nonprofit that uses AI-detection technologies to help people determine whether online videos, images, and audio are real or fake.

The group launched an early beta version of its service in April, so it was ready for a barrage of realistic AI deepfakes and other misleading online content.

In the end, the barrage never came.

"It really wasn't nearly as bad as we thought," Etzioni said. "That was good news, period."

He's still slightly mystified by this, although he has theories.

First, you don't need AI to lie during elections.

"Out-and-out lies and conspiracy theories were prevalent, but they weren't always accompanied by synthetic media," Etzioni said.

Second, he suspects that generative AI technology is not quite there yet, particularly when it comes to deepfake videos.

"Some of the most egregious videos that are truly realistic β€” those are still pretty hard to create," Etzioni said. "There's another lap to go before people can generate what they want easily and have it look the way they want. Awareness of how to do this may not have penetrated the dark corners of the internet yet."

One thing he's sure of: High-end AI video-generation capabilities will come. This might happen during the next major election cycle or the one after that, but it's coming.

With that in mind, Etzioni shared learnings from TrueMedia's first go-round this year:

  • Democracies are still not prepared for the worst-case scenario when it comes to AI deepfakes.
  • There's no purely technical solution for this looming problem, and AI will need regulation.
  • Social media has an important role to play.
  • TrueMedia achieves roughly 90% accuracy, although people asked for more. It will be impossible to be 100% accurate, so there's room for human analysts.
  • It's not always scalable to have humans checking every decision, so humans only get involved in edge cases, such as when users question a decision made by TrueMedia's technology (a rough sketch of that kind of routing follows this list).
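
The article doesn't describe TrueMedia's internals, but those last two points suggest a familiar human-in-the-loop pattern: let the detector decide automatically when it is confident, and send low-confidence results and user appeals to an analyst. A minimal sketch of that routing logic, with hypothetical names and thresholds (DetectionResult, route, and the 0.10/0.90 cutoffs are all invented for illustration), might look like this:

```python
# Hypothetical sketch of confidence-threshold routing for a deepfake detector.
# The class, function, and thresholds are illustrative, not TrueMedia's actual code.

from dataclasses import dataclass

@dataclass
class DetectionResult:
    item_id: str
    fake_probability: float    # model's estimate that the media is synthetic
    user_disputed: bool = False

def route(result: DetectionResult, low: float = 0.10, high: float = 0.90) -> str:
    """Return an automatic verdict when the model is confident;
    otherwise escalate to a human analyst."""
    if result.user_disputed:
        return "human_review"       # a user questioning a verdict is an edge case
    if result.fake_probability >= high:
        return "auto_flag_fake"
    if result.fake_probability <= low:
        return "auto_pass_real"
    return "human_review"           # ambiguous scores go to an analyst

# Example: confident scores are handled automatically; borderline or disputed items are escalated.
print(route(DetectionResult("clip-1", 0.97)))                      # auto_flag_fake
print(route(DetectionResult("clip-2", 0.55)))                      # human_review
print(route(DetectionResult("clip-3", 0.02, user_disputed=True)))  # human_review
```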

The group plans to publish research on its AI deepfake detection efforts, and it's working on potential licensing deals.

"There's a lot of interest in our AI models that have been tuned based on the flurry of uploads and deepfakes," Etzioni said. "We hope to license those to entities that are mission-oriented."


People will share misinformation that sparks "moral outrage"

Rob Bauer, the chair of a NATO military committee, reportedly said, "It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first." These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.

But lots of people also missed one key detail about the quote: Bauer never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.

Why do stories like this get so many views and shares? "The vast majority of misinformation studies assume people want to be accurate, but certain things distract them," says William J. Brady, a researcher at Northwestern University. "Maybe it's the social media environment. Maybe they're not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article." Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what's got us outraged is even real.


© Ricardo Mendoza Garbayo

A food-industry nutritionist shares 3 tricks companies use to 'science-wash' products, and how to spot them

2 December 2024 at 04:06
Emily Prpa sits at a table holding a ramekin of seeds.
It's important to look into the research behind a company's health claims, Emily Prpa said.

Jade Alana

  • Science-washing is when a company markets itself as science-backed without proper research.
  • Some companies attach doctors to a product to give it credibility or cherry-pick research.
  • A nutritionist who works in the industry explained how to notice these tricks.

A nutritionist who works in the food industry broke down the marketing tricks companies use to make their products seem more grounded in science than they are.

The global wellness industry is now estimated at $6.3 trillion, giving companies lots of incentive to draw in consumers even if the science behind their products isn't solid, a practice called science-washing.

Emily Prpa, a nutrition lecturer at King's College London who also works in the food industry, described three of these tricks in a call with Business Insider.

Trick 1: Skipping proper research

Prpa said the first thing you should do when deciding whether a product is worth buying is to check whether the company has done its own research.

Firsthand research is expensive, and companies often cobble together existing studies to back up their products.

For example, they could cite the benefits of individual ingredients without actually assessing whether the combination in their product is effective.

This is especially prevalent with greens powders, which contain lots of different vitamins, minerals, and probiotics.

"A lot of the time, I see that they have vitamins and minerals that compete for the same absorption channel, so actually you're not going to be getting the exact dose that they say you will on the packet," she said.

Companies can also cherry-pick which studies they include, she said, ignoring unfavorable research.

If a company has funded independent clinical trials to test their product, they're likely to make that clear on their website, Prpa said. "They'll often be very proud to state that because it's giving them that distance and showing that this isn't biased."

Trick 2: Flashy endorsements

Sometimes a company will attach a doctor or other professional to a product. They might bring them on as a scientific or medical advisor or publicize that they endorse the product.

This can give consumers the impression that the company is credible, but that's not always the case, Prpa said. "Does that product really hold up to the claims on the website?"

It isn't necessarily a red flag if a doctor is the face of a product, Prpa said, but that alone does not mean a product has health benefits.

"You are not buying the medic as your private consultant, you are buying a product," she said.

Trick 3: Promises that sound too good to be true

It sounds simple: if a product is marketed as a fix for stacks of health problems, it's probably overpromising.

"I don't know a product or an app on the market that can simultaneously lower blood sugars, improve sleep, improve cognition, and focus," Prpa said. "If it is sounding too good, it probably is."

