Today — 10 January 2025

Meta kills diversity programs, claiming DEI has become “too charged”

Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.

According to an internal memo viewed by Axios and verified by Ars, Meta's vice president of human resources, Janelle Gale, told Meta employees that the shift came because the "legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing."

It's another move by Meta that some view as part of the company's larger effort to align with the incoming Trump administration's politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.



How to delete Facebook, Instagram, and Threads

10 January 2025 at 11:39

In the wake of Meta’s decision to remove its third-party fact-checking system and loosen content moderation policies, Google searches on how to delete Facebook, Instagram, and Threads have been on the rise. People who are angry with the decision accuse Meta CEO Mark Zuckerberg of cozying up to the incoming Trump administration at the expense […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Meta Deletes Trans and Nonbinary Messenger Themes

10 January 2025 at 09:08

Meta deleted nonbinary and trans themes for its Messenger app this week, around the same time that the company announced it would change its rules to allow users to declare that LGBTQ+ people are “mentally ill,” 404 Media has learned.

Meta’s Messenger app allows users to change the color scheme and design of their chat windows with different themes. For example, there is currently a “Squid Game” theme, a “Minecraft” theme, a “Basketball” theme, and a “Love” theme, among many others.

These themes rotate regularly, but for the last few years they have included a “trans” theme and a “nonbinary” theme, with color schemes matching the trans pride flag and the nonbinary pride flag. Meta did not respond to a request for comment about why the company removed these themes, but the change comes right as Mark Zuckerberg’s company is publicly and loudly shifting rightward to more closely align itself with the views of the incoming Donald Trump administration. 404 Media reported Thursday that many employees are protesting the anti-LGBTQ+ changes and that “it’s total chaos internally at Meta right now” because of them.

💡
Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.

The trans theme was announced for Pride Month in June 2021, and the nonbinary theme was announced in June 2022 in blog posts that highlighted Meta’s apparent support for trans and nonbinary people. Both of these posts are no longer online. Other blogs about updates to Messenger have been moved over from the old website they were originally published on to new URLs on the Meta newsroom, but these two blog posts have not.

“This June and beyond, we want people to #ConnectWithPride because when we show up as the most authentic version of ourselves, we can truly connect with people,” the post announcing the trans theme originally said. “Starting today, in support of the LGBTQ+ community and allies, Messenger is launching new expression features and celebrating the artists and creators who not only developed them, but inspire us each and every day.” 

Yesterday — 9 January 2025

‘It’s Total Chaos Internally at Meta Right Now’: Employees Protest Zuckerberg’s Anti LGBTQ Changes

9 January 2025 at 12:58

Meta employees are furious with the company’s newly announced content moderation changes that will allow users to say that LGBTQ+ people have “mental illness,” according to internal conversations obtained by 404 Media and interviews with five current employees. The changes were part of a larger shift Mark Zuckerberg announced Monday to do far less content moderation on Meta platforms. 

“I am LGBT and Mentally Ill,” one post by an employee on an internal Meta platform called Workplace reads. “Just to let you know that I’ll be taking time out to look after my mental health.” 

On Monday, Mark Zuckerberg announced that the company would be getting “back to our roots around free expression” to allow “more speech and fewer mistakes.” The company said “we’re getting rid of a number of restrictions on topics like immigration, gender identity, and gender that are the subject of frequent political discourse and debate.” A review of Meta’s official content moderation policies shows that one of the few substantive changes was made specifically to allow for “allegations of mental illness or abnormality when based on gender or sexual orientation.” It has long been established that being LGBTQ+ is not a sign of “mental illness,” and the false idea that sexuality or gender identity is a mental illness has long been used to stigmatize and discriminate against LGBTQ+ people.

Earlier this week, we reported that Meta was deleting internal dissent about Zuckerberg's appointment of UFC President Dana White to the Meta board of directors.


Google searches for deleting Facebook, Instagram explode after Meta ends fact-checking

9 January 2025 at 08:28

Google searches for how to cancel and delete Facebook, Instagram, and Threads accounts have seen explosive rises in the U.S. since Meta CEO Mark Zuckerberg announced that the company will end its third-party fact-checking system, loosen content moderation policies, and roll back previous limits to the amount of political content in user feeds.  Critics see […]


Before yesterday

Mark Zuckerberg's content-moderation changes come after a long line of nightmares

8 January 2025 at 11:58

  • Content moderation has always been a nightmare for Meta.
  • Its new content-moderation policy is a huge change — and it could be an improvement.
  • Mark Zuckerberg's "apology tour" from the past few years seems to be officially over.

Mark Zuckerberg's changes to Meta's content-moderation policies are potentially huge.

To fully understand their gravity, it's useful to look at how Meta got here. And to consider what these changes might actually mean for users: Are they a bow to an incoming Trump administration? Or an improvement to a system that's gotten Zuckerberg and Co. lots of heat before? Or a little of both?

Content moderation has always been a pit of despair for Meta. In its blog post announcing the changes on Tuesday, Meta's new head of policy, Joel Kaplan, talked about wanting to get back to Facebook's roots in "free speech." Still, those roots contain a series of moderation fires, headaches, and constant adjustments to the platform's policies.

Starting in 2016, moderation troubles just kept coming like a bad "We Didn't Start the Fire" cover.

Whatever your political alignment, it seems like Meta has been trapped in a vicious cycle of making a policy — or lacking a policy — then reversing itself to try to clean up a mess.

As Charlie Warzel pointed out in The Atlantic, Zuckerberg has sometimes blamed external forces when he's faced with situations like some of the ones above.

That may be changing now. As Zuckerberg posted on Threads on Wednesday, "Some people may leave our platforms for virtue signaling, but I think the vast majority and many new users will find that these changes make the products better."

Maybe the big changes were already brewing this past September when Zuckerberg appeared at a live event and said, "One of the things that I look back on and regret is I think we accepted other people's view of some of the things that they were asserting that we were doing wrong, or were responsible for, that I don't actually think we were."

In other words, as of this week, the apology tour seems to have ended.

What will Meta's changes mean for you and me, the users?

What will the changes mean? Who knows! I can make a few predictions:

The "community note" system might work pretty well — or at least not worse than the current human- and AI-led fact-checking system.

There might be more content in your feeds that you don't like — political speech that you find abhorrent, for example.

It's also possible that while certain content might exist on the platform, you won't actually come across it because it will have been downgraded. "Freedom of speech, not freedom of reach" has been X's mantra (though considering the flow of truly vile content that has proliferated in my feed there in the past year or so, I don't think that's been particularly effective).

One other piece of the announcement is that Meta will focus its AI-powered filtering efforts on the highest-risk content (terrorism, drugs, and child endangerment). For lesser violations, the company said, it will rely more on user reports. Meta hasn't given details on how exactly this will work, but I imagine it could have a negative effect on common issues like bullying and harassment.

A large but less glamorous part of content moderation is removing "ur ugly" comments on Instagram — and that's the kind of stuff that will rely on user reporting.

It's also quite possible that bad actors will take advantage of the opening. Facebook is nothing if not a place to buy used furniture while various new waves of pillagers attempt to test and game the algorithms for profit or menace — just consider the current wave of AI slop, some of which appears at least in part to be a profitable scam operation run from outside the US.

What do the changes mean for Meta?

If these changes had been rolled out slowly, one at a time, they might have seemed like reasonable measures just on their face. Community notes? Sure. Loosening rules on certain hot political topics? Well, not everyone will like it, but Meta can claim some logic there. Decreasing reliance on automatic filters and admitting that too many non-violations have been swept up in AI dragnets? People would celebrate that.

No one thought Meta's moderation before the announced changes was perfect. There were plenty of justified complaints about how it mistakenly banned too much content, which this new policy aims to fix.

And switching from third-party fact-checkers to a community-notes system isn't necessarily bad. The fact-checking system wasn't perfect, and community notes on X, the system Meta is modeling its own after, can be quite useful. Even acknowledging that, yes, X has sometimes become a cesspit for bad content, the root cause isn't the community notes.

Still, it's impossible to weigh the merits of each aspect of the new policy while keeping blinders on about the 800-pound political gorilla in the room.

There's one pretty obvious way of looking at Meta's announcement of sweeping changes to its moderation policy: It's a move to cater to an incoming Trump administration. It's a sign that Zuckerberg has shifted to the right, as he drapes himself in some of the cultural signifiers of the bro-y Zynternet (gold chain, $900,000 watch, longer hair, new style, front row at an MMA match).

Together, every piece of this loudly signals that Zuckerberg either (a) genuinely believed he'd been forced to cave on moderation issues in the past or (b) knows that making these changes will please Trump. I don't think the distinction between the two matters much anyway. (Meta declined to comment.)

This probably isn't the last of the changes

I try to avoid conflating "Meta" with "Mark Zuckerberg" too much. It's a big company! There are many smart people who care deeply about the lofty goals of social networking who create policy and carry out the daily work of trust and safety.

Part of me wonders how much Zuckerberg wishes this boring and ugly part of the job would fade away — there are so many more shiny new things to work on, like AI or mixed-reality smart glasses. Reworking the same decade-old policies so that people can insult each other 10% more is probably less fun than MMA fighting or talking to AI researchers.

Content moderation has always been a nightmare for Meta. Scaling it back, allowing more speech on controversial topics, and outsourcing fact-checking to the community seems like a short-term fix for having to deal with this unpleasant and thankless job. I can't help but imagine that another overhaul will come due sometime in the next four years.

Read the original article on Business Insider

Facebook Marketplace to display eBay listings to appease EU regulators

8 January 2025 at 06:59

Meta is set to start displaying eBay listings in its own Facebook Marketplace classifieds platform, in an effort to appease European regulators. Back in November, Meta was hit with a €798 million fine by the European Commission (EC) in Europe for breaching antitrust rules. The EC contended that Meta created “unfair trading conditions” by connecting […]


Ex-NFL reporter Michele Tafoya rips Mark Zuckerberg over damage done in wake of Meta's fact-checking programs

8 January 2025 at 02:00

Former NFL sideline reporter Michele Tafoya ripped Meta CEO Mark Zuckerberg on Tuesday after the billionaire announced he would get rid of Facebook’s fact-checking program.

The third-party fact-checking system will be replaced with community notes similar to X, Zuckerberg said in a video.


The company’s system was put into place after the 2016 election and was used to "manage content" and misinformation on its platforms, largely due to "political pressure," executives said, but admitted the system has "gone too far." Political bias from the fact-checkers appeared to be one of the main issues.

Tafoya appeared on OutKick’s "Don’t @ Me with Dan Dakich" to talk about Zuckerberg’s decision. Dakich asked her what gave Zuckerberg the right to do the about-face now.

"Absolutely nothing. This is not unique to Facebook. I had a guest on my podcast yesterday, Gad Saad, a professor out of Canada, so much has gone on up there under the Justin Trudeau administration that has been really similar," Tafoya said. "People being absolutely wiped out of their professions. We’re talking doctors, researchers, professors, medical experts because they either said something kind of cutesy that someone was uncomfortable with.


"This suppression of human thought, this suppression of human opinion, is completely antithetical to America and free speech. People don’t see it happening or they’re OK with it. This should be massive, flashing red light.

"Mark Zuckerberg knows what he did was wrong, and now he’s going to try and fix it and hope we just say, ‘Oh, good for you, you fixed it, Mark.’"

Meta’s chief global affairs officer, Joel Kaplan, told Fox News Digital earlier Tuesday that using community notes is a better option.

"Instead of going to some so-called expert, it instead relies on the community and the people on the platform to provide their own commentary to something that they’ve read," Kaplan explained, noting that if a note gets support from "the broadest cross-section of users," that note can be attached to the content for others to see.

"We think that’s a much better approach rather than relying on so-called experts who bring their own biases into the program," Kaplan said.

Fox News’ Brooke Singman contributed to this report.


Mark Zuckerberg says Meta's 'community notes' are inspired by Elon Musk's X. Here's how they work — and how they don't.

8 January 2025 at 01:31
Meta CEO Mark Zuckerberg said the company's platforms would prioritize speech and free expression.


  • Mark Zuckerberg's plan to replace fact checkers with "community notes" is a familiar one.
  • A similar system of community moderation is already in place on Elon Musk's X.
  • On X, community notes let users add context to posts. Meta has said it seems to work well.

Mark Zuckerberg says Meta will use "community notes" to moderate content on its platforms like Facebook and Instagram — but what exactly does that mean, and how has it worked on other platforms?

Meta said the feature would function much like it does on Elon Musk's platform, where certain contributors can add context to posts they think are misleading or need clarification. This type of user-generated moderation would largely replace Meta's human fact-checkers.

"We've seen this approach work on X — where they empower their community to decide when posts are potentially misleading and need more context and people across a diverse range of perspectives decide what sort of context is helpful for other users to see," Meta said in its announcement Tuesday.

Musk, who has a sometimes-tense relationship with Zuckerberg, appeared to approve of the move, posting "This is cool" on top of a news article about the changes at Meta.

So, will it be cool for Meta and its users? Here's a primer on "community notes" — how it came to be, and how it's been working so far on X:

How the 'community notes' feature was born

The idea of "community notes" first came about at Twitter in 2019, when a team of developers at the company, now called X, theorized that a crowdsourcing model could solve the main problems with content moderation. Keith Coleman, X's vice president of product who helped create the feature, told Asterisk magazine about its genesis in an interview this past November.

Coleman told the outlet that X's previous fact-checking procedures, run by human moderators, had three main problems: dedicated staff couldn't fact-check claims in users' posts fast enough, there were too many posts to monitor, and the general public didn't trust a Big Tech company to decide what was or wasn't misleading.


Coleman told Asterisk that his team developed a few prototypes and settled on one that allowed users to submit notes that could show up on a post.

"The idea was that if the notes were reasonable, people who saw the post would just read the notes and could come to their own conclusion," he said.

And in January 2021, the company launched a pilot program of the feature, then called "Birdwatch," just weeks after the January 6 Capitol riot. On its first day, the pilot program had 500 contributors.

Coleman told the outlet that for the first year or so of the pilot program — which showed community notes not directly on users' posts but on a separate "Birdwatch" website — the product was very basic, but over time, it evolved and performed much better than expected.

When Musk took over the platform in 2022, he expanded the program beyond the US, renamed it "community notes," and allowed more users to become contributors.

Around the same time, he disassembled Twitter's trust and safety team, undid many of the platform's safety policies, and lowered the guardrails on content moderation. Musk said in 2022 that the community notes tool had "incredible potential for improving information accuracy."

It's unclear how many users contribute to community notes, but the feature is one of the platform's main sources of content moderation. X didn't immediately respond to a request for comment from BI.

How the community notes feature works on X

The community notes feature is set to roll out on Meta's Instagram, Facebook, and Threads platforms over the next few months, the company said in a statement shared with BI. Meta said the feature on its platforms would be similar to X's.

On X, community notes act as a crowd-sourced way for users themselves to moderate content without the company directly overseeing that process.

A select group of users who sign up as "contributors" can write a note adding context to any post that could be misleading or contain misinformation.

Then, other contributors can rate that note as helpful or not. Once enough contributors from different points of view vote on the note as helpful, then a public note gets added underneath the post in question.
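The publication rule described above can be sketched as a toy model. To be clear, this is not X's actual open-source ranking algorithm (which infers rater viewpoints with matrix factorization rather than using explicit labels); the viewpoint clusters, threshold, and function names below are invented purely for illustration:

```python
# Toy model of cross-perspective note publication.
# NOT X's real algorithm; clusters and thresholds here are invented.

from dataclasses import dataclass, field

@dataclass
class Note:
    text: str
    # ratings: list of (viewpoint_cluster, rated_helpful) pairs
    ratings: list = field(default_factory=list)

def should_publish(note: Note, min_ratings: int = 5) -> bool:
    """Publish only if the note has enough ratings and raters from
    at least two distinct viewpoint clusters found it helpful."""
    if len(note.ratings) < min_ratings:
        return False
    helpful_clusters = {cluster for cluster, helpful in note.ratings if helpful}
    # require agreement across a range of perspectives
    return len(helpful_clusters) >= 2

note = Note("This claim omits key context; see the original report.")
note.ratings = [("left", True), ("left", True), ("right", True),
                ("right", False), ("center", True)]
print(should_publish(note))  # True: helpful raters span three clusters
```

The point of the cross-cluster requirement, in both the sketch and the real system, is that a note loved only by one side of the spectrum never ships, which is what "agreement between people with a range of perspectives" means in practice.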


X has made the complex ranking algorithm behind the feature transparent and open-source, and users can view it online and download the latest data.

X says that community notes "do not represent X's viewpoint and cannot be edited or modified by our teams," adding that a community-flagged post is only removed if it violates X's rules, terms of service, or privacy policies.

Similar to X, Meta said its community notes will be written and rated by contributing users. It said the company will not write notes or decide which ones show up. Also like X, Meta said that its community notes "will require agreement between people with a range of perspectives to help prevent biased ratings."

Facebook, Instagram, and Threads users can sign up now to be among the first contributors to the new tool.

"As we make the transition, we will get rid of our fact-checking control, stop demoting fact-checked content and, instead of overlaying full-screen interstitial warnings you have to click through before you can even see the post, we will use a much less obtrusive label indicating that there is additional information for those who want to see it," Joel Kaplan, Meta's chief global affairs officer, said in Tuesday's statement.

Potential pros and cons of community notes

One possible issue with the feature is that by the time a note gets added to a potentially misleading post, the post may have already been widely viewed — spreading misinformation before it can be tamped down.

Another issue is that for a note to be added, contributors from across the political spectrum need to agree that a post is problematic or misleading, and in today's polarized political environment, agreeing on facts has become increasingly difficult.

One possible advantage to the feature, though, is that the general public may be more likely to trust a consensus from their peers rather than an assessment handed down by a major corporation.

Maarten Schenk, cofounder and chief technology officer of Lead Stories, a fact-checking outlet, told the Poynter Institute that one benefit of X's community notes is that it doesn't use patronizing language.

"It avoids accusations or loaded language like 'This is false,'" Schenk told Poynter. "That feels very aggressive to a user."

And community notes can help combat misinformation in some ways. For example, researchers at the University of California, San Diego's Qualcomm Institute found in an April 2024 study that the X feature helped offset false health information in posts related to COVID-19. They also helped add accurate context.

In announcing the move, Zuckerberg said Meta's past content moderation practices have resulted in "too many mistakes" and "too much censorship." He said the new feature will prioritize free speech and help restore free expression on Meta's platforms.

Both President-elect Donald Trump and Musk have championed the cause of free speech online, railed against content moderation as politically biased censorship, and criticized Zuckerberg for his role overseeing the public square of social media.

One key person appeared pleased with the change: Trump said Tuesday that Zuckerberg had "probably" made the changes in response to previous threats issued by the president-elect.

Read the original article on Business Insider

Advertisers say Meta's content-moderation changes make them uneasy. They won't stop spending.

8 January 2025 at 01:25
Meta execs Joel Kaplan and Mark Zuckerberg have outlined a new, looser approach to content moderation.


  • Some advertisers are expressing concerns about Meta's commitment to brand safety.
  • Meta this week unveiled a new approach to content moderation, removing third-party fact-checkers.
  • Many ad industry insiders doubt it'll lead to major spending shifts, however.

Meta's new plan to shake up its content-moderation policies has some advertisers worried about the social giant's brand-safety standards. Despite that, ad insiders who spoke with Business Insider generally didn't expect the changes to hurt Meta's business.

"It's the final nail in the coffin for platform responsibility," an ad agency veteran told BI. They and some others interviewed asked for anonymity to protect business relationships; their identities are known to BI.

The industry reaction — or lack of it — reflects both advertisers' reliance on Meta and the shifting conversation around how brands should approach "brand safety" or "suitability," which refer to when marketers try to avoid funding or appearing next to content they deem unsuitable.

"A lot of brands have shied away from platforms that are too tied to news or controversy, mostly out of fear of cancel culture," said Toni Box, EVP of brand experience at the media agency Assembly. "But at some point, we have to ask: Are we missing opportunities to connect with people during meaningful moments because we don't trust audiences to tell the difference between a news story and an ad?"

The brand-safety tides are shifting

Meta used to bend over backward to address advertisers' brand-safety concerns. But brands weren't mentioned in Meta CEO Mark Zuckerberg's video announcing the changes or in policy chief Joel Kaplan's interview on Tuesday morning with Fox News' "Fox & Friends."

Instead, their pitch was about preventing the censorship of speech. Meta said it plans to replace third-party fact-checkers with a community-based fact-checking program, addressing criticism that the previous system was too partisan and was often overcorrective. The company also said it would loosen some content moderation restrictions on topics that are "part of mainstream discourse" and be more open to reintroducing political content to people's feeds.

Meta did give a very brief public nod to advertisers. A Meta spokesperson pointed BI to a LinkedIn post from Meta ads exec Nicola Mendelsohn that said the company continued to be focused on ensuring brand safety and suitability by offering a suite of tools for advertisers. In an email from Meta account reps to ad buyers, copies of which were viewed by BI, the company said it knew how important it was to continue giving advertisers transparency and control over their brand suitability. And in an interview with BI, Meta's chief marketing officer Alex Schultz said advertisers' primary brand safety concerns were around hate speech and adult nudity and that its tools would focus on "precision and not be taking down things we shouldn't be taking down."

Despite private grumbling from some advertisers about the changes, and how they appeared to be timed to appease incoming President Donald Trump, industry insiders said they don't expect much public blowback on Meta.

Advertiser boycotts and similar actions were once seen as a point of leverage for marketers. One high-profile example was the 2020 #StopHateForProfit movement, when hundreds of major brands protested Meta's policies on hate speech and misinformation.

But brand safety has recently become a political hot potato and a flash point for some influential, right-leaning figures.

Last year, the chairman of the House Judiciary Committee, Jim Jordan, began investigating whether advertisers had illegally colluded to demonetize conservative platforms and voices. Elon Musk's X went on to sue the Global Alliance for Responsible Media, the brand-safety initiative at the center of Jordan's investigation, and some of its advertiser members after they withdrew ad dollars from the platform. GARM discontinued activities days later. Jordan has continued to press advertisers about their involvement in GARM, and X's litigation against it and some of its members is ongoing.

A media agency employee told BI that they had clients who were now more cautious about criticizing platforms in public or saying they would pull spending.

Industry analysts also said that — politics aside — many marketers would likely continue to spend with Meta so long as it delivered them the audiences and ad performance they had come to expect. Meta commands about 21% of the US digital ad market, behind only Google, according to data firm EMARKETER.

"For us, after Google, Meta is the next-best performer as far as ROI is concerned," said Shamsul Chowdhury, VP of paid social at the digital ad agency Jellyfish, referring to the return on investment advertisers get from their campaigns.

Advertisers are split on whether the changes will improve Meta's platforms

Some advertisers who spoke with BI said they had outstanding questions about the new thresholds Meta would apply to removing posts, what's on the road map for monitoring trends around misinformation, and whether they would still be able to effectively apply their own third-party brand suitability software to content on Meta's apps.

Advertisers said they would pay close attention to how Meta's Community Notes-like feature would work in practice, especially as some hadn't been impressed with X's performance in this area with a similar feature.

"This is a major step back and likely going to result in serious issues where social platforms, not just Meta, are going to hide behind the notion that their users do the moderation and fact-checking for them and they are free speech platforms," said Ruben Schreurs, CEO of the marketing consultancy Ebiquity.

It's not entirely clear how effective X's Community Notes have been. A study published last year by researchers at the University of Luxembourg, University of Melbourne, and JLU Giessen concluded that X's "Community Notes might be too slow to effectively reduce engagement with misinformation in the early (and most viral) stage of diffusion." Still, a separate study from the Qualcomm Institute within UC San Diego found Community Notes helped counter false information about Covid vaccines.

Some advertising execs supported Meta's announcement. Two media agency reps said increasing the number of conversations people are having on the platform could benefit Meta and advertisers alike by boosting engagement.

"I think the best news is free speech and mitigation of harmful or dangerous content remains the primary focus of this maturing program, and Meta has taken a forward position here," said John Donahue, founder of the digital media consultancy Up and to the Right.

Mike Zaneis of the ad initiative the Trustworthy Accountability Group said Meta's announcement should be seen as an evolution of the platform's brand-safety standards and not a retreat from protecting users and marketers.

"The speed and accuracy of the Community Notes tool is impressive, and it's the increased transparency that makes a fundamental difference for users and marketers alike," Zaneis said of X's implementation of the concept so far. "If something seems to be working, we shouldn't discourage others from adopting the approach just because it hasn't been precisely tested."

Read the original article on Business Insider

Why Zuckerberg killed fact-checking as he keeps cozying up to Trump

8 January 2025 at 00:00

Mark Zuckerberg, who often bends with the political winds, is getting out of the fact-checking business.

And this is part of a broader effort by the Meta CEO to ingratiate himself with Donald Trump after a long and testy relationship.

After a previous outcry, Zuck made a great show of declaring that Facebook would hire fact-checkers to combat misinformation on the globally popular site. That was a clear sign that Facebook was becoming more of a journalistic organization than a passive poster of users’ opinions (and dog pictures).

But it didn’t work. In fact, it led to more info-suppression and censorship. Why should anyone believe a bunch of unknown fact-checkers working for one of the increasingly unpopular tech titans?

Now Zuckerberg is pulling the plug, announcing his decision in a video to underscore its big-deal nature:

"The problem with complex systems is they make mistakes. Even if they accidentally censor just 1 percent of posts. That’s millions of people. And we’ve reached a point where it’s just too many mistakes and too much censorship. The recent elections also feel like a cultural tipping point towards once again prioritizing speech."

Let me jump in here. Zuckerberg bluntly admits, with that line about "cultural tipping point," that he’s following the conventional wisdom–and, of course, the biggest tipping point is Trump’s election to a second term. And skeptics are portraying this as a bow to the president-elect and his team.

"So we’re gonna get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms…

"We’re going to get rid of fact checkers" and replace them with community notes, already used on X. "After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. 

"We tried in good faith to address those concerns without becoming the arbiters of truth. But the fact checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the U.S." 

It was Zuckerberg, along with the previous management at Twitter, that banned Trump after the Capitol riot. This led to plenty of Trumpian attacks on Facebook, and the president-elect told me he had flipped his position on banning TikTok because it would help Facebook, which he viewed as the greater danger.

Trump said last summer that Zuckerberg plotted against him in 2020 and would "spend the rest of his life in prison" if he did it again.

The president-elect boiled it down in a posting: "ZUCKERBUCKS, DON’T DO IT!"

Here’s a bit more from Z: "We’re going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse. What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas. And it’s gone too far." 

Indeed it has. And I agree with that. In 2020, social media, led by Twitter, suppressed the New York Post story on Hunter Biden’s laptop, dismissing it as Russian disinformation, though a year and a half later the establishment press suddenly declared hey, the laptop report was accurate.

Let’s face it: People like Zuckerberg and Elon Musk (now embroiled in a war of words with British Prime Minister Keir Starmer over an alleged coverup of gang rapes of young girls when Starmer was chief prosecutor) have immense clout. They are the new gatekeepers. With so-called legacy media less relevant–as we see with the mass exodus of top talent from Jeff Bezos’ Washington Post and the recent rise of podcasts–they control much of the public dialogue. And yes, they are private companies that can do what they want. 

At yesterday’s marathon news conference, a reporter asked Trump about Zuckerberg: "Do you think he’s directly responding to the threats that you have made to him in the past with promises?"

"Probably. Yeah, probably," Trump said, twisting the knife a bit.

Meanwhile, having made the obligatory trek to Mar-a-Lago for dinner, the CEO has taken a number of steps to join forces with the new administration. And it doesn’t hurt that Meta is kicking in a million bucks to the Trump inaugural.

Zuck named prominent Republican lawyer Joel Kaplan as chief of global affairs, replacing a former British deputy prime minister. On "Fox & Friends" yesterday, Kaplan said: 

"We’ve got a real opportunity now. We’ve got a new administration and a new president coming in who are big defenders of free expression, and that makes a difference. One of the things we’ve experienced is that when you have a U.S. president, an administration that’s pushing for censorship, it just makes it open season for other governments around the world that don’t even have the protections of the First Amendment to really put pressure on US companies. We’re going to work with President Trump to push back on that kind of thing around the world."

We’re going to work with President Trump. Got it?

What’s more, Zuckerberg is adding Dana White, chief executive officer of the Ultimate Fighting Championship, to the Meta board. White is a longtime Trump ally, so MAGA now has a voice inside the company.

In other words, get with the program.

Footnote: At his news conference, where Trump seemed angry about the latest court battles and plans to sentence him, the incoming president said–or "didn’t rule out," in journalistic parlance–"military coercion" against two of his latest targets.

"Well, we need Greenland for national security purposes," he said. And Americans lost many lives building the Panama Canal. "It might be that you’ll have to do something." 

He’s not going to use military force against either one. But his answer stirs the pot, as he knew it would.

Meta's done with fact-checking — and its CMO says Trump and changing 'vibes in America' are major reasons why

7 January 2025 at 17:11
Meta CMO Alex Schultz said the incoming Trump administration influenced Meta's content moderation changes.

  • Meta said Tuesday it plans to drop third-party fact-checkers in favor of a community notes feature.
  • Meta CMO Alex Schultz told Business Insider that the election of Donald Trump to the presidency influenced the shift.
  • He also said a change in how Americans view censorship and content moderation played a role.

After Meta announced it was ditching fact checkers, Alex Schultz, the company's chief marketing officer, said in an interview with Business Insider on Tuesday that the election of Donald Trump as president influenced the decision.

"Look, we're going to adjust to any administration and we always do and that, I think, is appropriate," Schultz said at CES 2025 in Las Vegas on Tuesday, adding, "We've worked with the Biden administration through its term. We'll work with the Trump administration through its term. Elections have consequences."

Earlier on Tuesday, Meta announced it would stop using third-party fact-checkers in favor of user-generated community notes.

The company also said it was moving some of its content moderation teams from California, which typically votes Democratic, to Texas, which typically votes Republican. Meta CEO Mark Zuckerberg said the move would "help remove the concern that biased employees are overly censoring content."

Schultz added that in addition to the incoming administration playing a role, the timing of the decision was also influenced by a shift in "the vibes in America."

Schultz said there's a change in how Americans broadly view censorship, free speech, and content moderation, which he said was signaled by the results of the election.

"It's a big, big shift," he said. "So I think, yeah, we're responding to that at this time because that's the logical time to do it."

Zuckerberg said the new community notes feature would be similar to the one used on Elon Musk's X, formerly Twitter, which allows users to add notes to posts that potentially contain misinformation or are missing context.

Schultz told BI the announced changes also bring Zuckerberg "back to the core of what he cares about."

"I think fundamentally he's been pushed into a place that was further than he wanted to be in terms of censorship and in content moderation," Schultz said, adding Zuckerberg was "taking advantage of the moment to do what he thinks is right."

Meta's content moderation policies have been scrutinized for years. Four years ago, Facebook banned President Donald Trump from the platform for policy violations, sparking the ire of Republicans, who have accused the site of silencing conservative views.

Schultz said he thought those complaints of bias were fair and that Meta could not find fact-checking organizations on the political right at the same rate as left-leaning ones. He said community notes on X have been more successful at getting people from across the political spectrum to contribute.

However, he said Meta will take a different approach than X when it comes to relations with the brands that advertise on its platforms.

"We're not going out there denigrating our advertisers and putting them in terrible positions," he said, alluding to critical comments Musk has made about some of X's advertisers. X sued a group of advertisers in August, accusing them of antitrust violations.

Schultz said Facebook would maintain its brand safety tools that allow companies some control over the kinds of content their ads appear next to.

He also said the primary concerns for their big advertisers are around hate speech and adult nudity, rather than content addressed by fact-checkers, and that the brand safety tools will remain focused on those areas.

"We're going to focus on precision and not be taking down things we shouldn't be taking down," he said.

Fact-checking firm staffed by CNN alums takes Meta axing hard: 'Surprised and disappointed'

7 January 2025 at 15:39

A prominent fact-checking organization used by Facebook to moderate political content reacted to news that Meta will revamp its fact-checking process with an article outlining its disappointment and disagreement with the move. 

"Lead Stories was surprised and disappointed to first learn through media reports and a press release about the end of the Meta Third-Party Fact-Checking Partnership of which Lead Stories has been a part since 2019," Lead Stories editor Maarten Schenk wrote on Tuesday in response to an announcement from Meta that it would be significantly altering its fact-checking process to "restore free expression."

Lead Stories, a Facebook fact checker employing several CNN alumni, including Alan Duke and Ed Payne, has become one of the more prominent fact checkers used by Facebook in recent years. 

Fox News Digital first reported on Tuesday that Meta is ending its fact-checking program and lifting restrictions on speech to "restore free expression" across Facebook, Instagram and Meta platforms, admitting its current content moderation practices have "gone too far." 

"After Trump first got elected in 2016 the legacy media wrote nonstop about how misinformation was a threat to democracy," Meta CEO Mark Zuckerberg said in a video message on Tuesday. "We tried in good faith to address these concerns without becoming the arbiters of truth. But fact-checkers have just been too politically biased and have destroyed more trust than they created, especially in the U.S."

"What political bias?" the article from Lead Stories asks before explaining that it is "disappointing to hear Mark Zuckerberg accuse the organizations in Meta's U.S. third-party fact checking program of being 'too politically biased.'"

"Especially since one of the requirements Meta imposed for being part of a partnership included being a verified signatory of the IFCN's Code of Principles, which explicitly requires a 'commitment to non-partisanship and fairness,'" the article states. "In all the years we have been part of the partnership, we or the IFCN never received any complaints from Meta about any political bias, so we were quite surprised by this statement."

Meta said in its announcement that it will move toward a system of moderation that is more in line with Community Notes at X, which Lead Stories seemed to take issue with. 

"However, In our experience and that of others, Community Notes on X are often slow to appear, sometimes downright inaccurate and unlikely to appear on controversial posts because of an inability to reach agrement [sic] or consensus among users," Lead Stories wrote. "Ultimately, the truth doesn't care about consensus or agreement: the shape of the Earth stays the same even if social media users can't agree on it."

Lead Stories added that Community Notes is "entirely non-transparent about its contributors: readers are left guessing about their bias, funding, allegiance, sources or expertise and there is no way for appeals or corrections" while "fact-checkers, on the other hand, are required by the IFCN to be fully transparent about who they are, who funds them and what methodology and sources they use to come to their conclusions."

Schenk added, "Fact-checking is about adding verified and sourced information so people can make up their mind about what to believe. It is an essential part of free speech."

In a statement to Fox News Digital, Duke said that Lead Stories plans to press on.

"Lead Stories will continue, although we have to reduce our output with no support from Meta," Duke said. "We are global, with most of our business now outside the USA. We publish in eight languages other than English, which is what will be affected."

Some conservatives took to social media to blast Lead Stories over their article lamenting the change at Meta after years of conservative pushback to Facebook’s fact checkers as a whole on key news stories, including the suppression of the bombshell reporting on Hunter Biden’s laptop.  

"Of all the fact-checking companies, Lead Stories is the worst," British American conservative writer Ian Haworth posted on X. "Couldn't be happier that they'll soon be circling the drain."

The executive director of Politifact, a fact checker also used by Facebook, issued a strong rebuke of Zuckerberg following Tuesday's announcement. 

"If Meta is upset it created a tool to censor, it should look in the mirror," Aaron Sharockman said in a statement he posted on X following Zuckerberg’s announcement.

Sharockman fumed, "The decision to remove independent journalists from Facebook’s content moderation program in the United States has nothing to do with free speech or censorship. Mark Zuckerberg’s decision could not be less subtle."

He threw back Zuckerberg’s accusation of political bias, stating that Meta’s platforms, not the fact-checkers, were the entities that actually censored posts.

"Let me be clear: the decision to remove or penalize a post or account is made by Meta and Facebook, not fact-checkers. They created the rules," Sharockman said.

At the conclusion of his Lead Stories post, Schenk wrote, "Even though we are obviously disappointed by this news, Lead Stories wishes to thank the many people at Meta we have worked with over the past years and we will continue our fact checking mission. To paraphrase the slogan on our main page: ‘Just because it's now trending without a fact-checking label still won't make it true.’"

Fox News Digital's Gabriel Hays and Brooke Singman contributed to this report.

'Blood on your hands': A look back at Mark Zuckerberg's tense moments in congressional hearings

7 January 2025 at 13:59

Meta CEO Mark Zuckerberg's newly unveiled freedom of speech policies signal a major shift in the Facebook social media platform's content moderation strategy, following years of congressional clashes over alleged "censorship" and the regulation of political information.

"We’re going to get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms," Zuckerberg said in a video posted Tuesday morning. "More specifically, we’re going to get rid of fact-checkers and replace them with Community Notes similar to X, starting in the U.S." 

META ENDS FACT-CHECKING PROGRAM AS ZUCKERBERG VOWS TO RESTORE FREE EXPRESSION ON FACEBOOK, INSTAGRAM

Zuckerberg's shift in content moderation comes amid a history of being grilled by politicians on both sides of the aisle on Capitol Hill. 

In January 2024, Sen. Josh Hawley, R-Mo., confronted Zuckerberg during a heated exchange about the harmful impact of social media on users, particularly young girls. The questioning followed revelations from internal Meta studies that indicated a significant number of teenage girls were exposed to harmful content, including unwanted nudity, sexual advances, and material promoting self-harm, within just one week.

"So, you didn’t take any action, you didn’t fire anybody, you haven’t compensated a single victim. Let me ask you this. There are families of victims here today. Have you apologized to the victims? Would you like to apologize now?" Hawley said, drawing applause from the audience.

In response, Zuckerberg rose from his seat and addressed the crowd directly, saying, "I’m sorry for everything you’ve all been through. No one should have to go through the things that your families suffered."

Zuckerberg added, "This is why we’ve invested so much… and will continue through industry-leading efforts to make sure that no [one has] to go through what your families have had to suffer."

In that same hearing, Sen. Lindsey Graham, R-S.C., the ranking member of the Senate Judiciary Committee, delivered a scathing rebuke of the tech giant CEO.

"Mr. Zuckerberg, you and the companies before us. I know you don't mean it to be so, but you have blood on your hands," Graham said. "You have a product that's killing people."

Graham's remark came in light of South Carolina state Rep. Brandon Guffey suing Instagram following the suicide of his 17-year-old son, Gavin. Gavin took his own life after falling victim to an extortion scheme run by a group operating through the Meta-owned app.

In 2018, then-House lawmakers grilled Zuckerberg over the site’s failure to protect the personal information of 87 million users. Zuckerberg, who co-founded Facebook in 2004 from his Harvard dorm room, said in a Facebook post at the time, "Looking back, it’s clear we were too slow identifying election interference in 2016, and we need to do better in future elections."

In November 2020, then-Twitter CEO Jack Dorsey and Zuckerberg both faced the Senate Judiciary Committee in a hearing titled "Breaking the News: Censorship, Suppression, and the 2020 Election." The session put the spotlight on the tech giants' controversial content moderation decisions, including the suppression of the New York Post story about Hunter Biden just weeks before the presidential election.

Testifying remotely, both CEOs acknowledged missteps and outlined how they'd handle similar challenges in the future. Zuckerberg highlighted Facebook's expansive voting initiatives, which he called "the largest voting information campaign in American history." According to his testimony, over 140 million users visited the Voting Information Center on Facebook and Instagram, with 33 million accessing it on Election Day alone. The campaign reportedly helped 4.5 million people register to vote.

To combat misinformation and voter suppression, Zuckerberg detailed measures like partnerships with election officials, the removal of false claims, and warnings applied to over 150 million pieces of content reviewed by independent fact-checkers. Facebook also implemented "policies prohibiting explicit or implicit misrepresentations about how or when to vote as well as attempts to use threats related to COVID-19 to scare people into not voting," according to Zuckerberg’s testimony.

Meta’s third-party fact-checking program was put in place after the 2016 election and had been used to "manage content" and misinformation on its platforms, largely due to "political pressure," executives said, but admitted the system has "gone too far." 

Last year, Zuckerberg sent a letter to the House Judiciary Committee, in which he admitted that he felt pressure from the Biden administration, particularly with regard to COVID-19 content, and even subjects like satire and humor. 

"The thing is, as American companies, when other governments around the world that don’t have our tradition or our First Amendment, when they see the United States government pressuring U.S. companies to take down content, it is just open season then for those governments to put more pressure [on their companies]," explained Meta’s chief global affairs officer, Joel Kaplan. "We do think it is a real opportunity to work with the Trump administration and to work on free expression at home."

In a statement to Fox News Digital, Liz Huston, a Trump-Vance transition spokesperson, said, "President Trump has always been a champion of free speech, and his landslide victory put an end to the Biden era of oppressive censorship."

"President Trump's return to the White House is a signal to Americans that their fundamental right to free speech is once again safe," she added.

Fox News Digital's Brooke Singman and Adonis Hoffman contributed to this report.

Meta fact-checkers call an emergency meeting after Mark Zuckerberg pulls the plug

7 January 2025 at 11:29
  • Meta is ending US fact-checking partnerships and shifting to crowdsourced moderation tools.
  • The International Fact-Checking Network called an emergency meeting after the announcement.
  • Meta's decision affects the financial sustainability of fact-checking organizations.

The International Fact-Checking Network has convened an emergency meeting of its members following Meta's announcement on Tuesday that it will end its third-party fact-checking partnerships in the US and replace them with a crowdsourced moderation tool similar to X's community notes.

In an exclusive interview with Business Insider, the IFCN's director, Angie Holan, confirmed that the meeting, scheduled for Wednesday, was organized in direct response to Meta's decision.

"We hold these meetings monthly, but we called this one specifically because of today's news," she said.

The meeting is expected to draw between 80 and 100 attendees from the IFCN's network of fact-checkers, which spans 170 organizations worldwide. Not all the expected attendees are Meta fact-checking partners, though many of them have a stake in the program's future and its global implications.

The IFCN has long played a crucial role in Meta's fact-checking ecosystem by accrediting organizations for Meta's third-party program, which began in 2016 after the US presidential election that year.

Certification from the IFCN signaled that a fact-checking organization met rigorous editorial and transparency standards. Meta's partnerships with these certified organizations became a cornerstone of its efforts to combat misinformation, focusing on flagging false claims, contextualizing misinformation, and curbing its spread.

'People are upset'

Holan described the mood among fact-checkers as somber and frustrated.

"This program has been a major part of the global fact-checking community's work for years," she said. "People are upset because they saw themselves as partners in good standing with Meta, doing important work to make the platform more accurate and reliable."

She noted that fact-checkers were not responsible for removing posts, only for labeling misleading content and limiting its virality.

"It was never about censorship but about adding context to prevent false claims from going viral," Holan said.

A last-minute heads-up

An employee at PolitiFact, one of the first news organizations to partner with Meta on its Third-Party Fact-Checking Program in December 2016, said the company received virtually no warning from Meta before the program was killed.

"The PolitiFact team found out this morning at the same time as everyone else," the employee told BI.

An IFCN employee who was granted anonymity told BI that the organization itself got a heads-up only "late yesterday" via email that something was coming. It asked for a 6 a.m. call — about an hour before Meta's blog post written by its new Republican policy head, Joel Kaplan, went live.

"I had a feeling it was bad news," this employee said.

Meta did not respond to a request for comment.

Financial fallout for fact-checkers

Meta's decision could have serious financial consequences for fact-checking organizations, especially those that relied heavily on funding from the platform.

According to a 2023 report published by the IFCN, income from Meta's Third-Party Fact-Checking Program and grants remain fact-checkers' predominant revenue streams.

"Fact-checking isn't going away, and many robust organizations existed before Meta's program and will continue after it," Holan said. "But some fact-checking initiatives were created because of Meta's support, and those will be vulnerable."

She also underscored the broader challenges facing the industry, saying that fact-checking organizations share the same financial pressures as newsrooms. "This is bad news for the financial sustainability of fact-checking journalism," she said.

Skepticism toward community notes

Meta plans to replace its partnerships with community notes, a crowd-based system modeled after X's approach.

Holan expressed doubt that this model could serve as an effective substitute for expert-led fact-checking.

"Community notes on X have only worked in cases where there's bipartisan agreement — and how often does that happen?" she said. "When two political sides disagree, there's no independent way to flag something as false."

It's not yet clear how Meta's implementation of community notes will work.

'We'll be here after' Meta's program

Despite the uncertainty, Holan remains steadfast in the IFCN's mission.

"The IFCN was here before Meta's program, and we'll be here after it," she said. "We may look different in size and scope, but we'll continue promoting the highest standards in fact-checking and connecting organizations that want to collaborate worldwide."

Holan said Wednesday's meeting would focus on supporting IFCN members as they navigate this transition.

"We're here to help them figure out the best way forward," she said.

