
Mark Zuckerberg defends Meta’s latest pivot in three-hour Joe Rogan interview

Meta CEO Mark Zuckerberg defended his decision to scale back Meta's content moderation policies in a Friday appearance on Joe Rogan's podcast. Zuckerberg faced widespread criticism for the decision, including from employees inside his own company. "Probably depends on who you ask," said Zuckerberg when asked how Meta's updates have been received. The key updates […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Meta's chief marketing officer warns 'too much censorship is actually harmful' for LGBTQ+ community in internal forum

Meta CMO Alex Schultz
Alex Schultz, Meta's chief marketing officer.

Courtesy of Business Insider

  • Meta's chief marketing officer Alex Schultz is concerned that "too much censorship" is harmful.
  • Schultz's comments come after Meta updated several policies, including content moderation.
  • The new guidelines change what is permissible to be said about LGBTQ+ people.

Meta's chief marketing officer warned that greater censorship on its platforms could "harm speech" from the LGBTQ+ community aiming to push back against hate.

Alex Schultz shared his views on Meta's decision earlier this week to change its policy on hateful conduct in a post on the company's internal forum.

"My perspective is we've done well as a community when the debate has happened and I was shocked with how far we've gone with censorship of the debate," Schultz wrote in the post, seen by Business Insider.

He added that his friends and family were shocked to see him receive abuse as a gay man in the past, but that it helped them to realize hatred exists.

"Most of our progress on rights happened during periods without mass censorship like this and pushing it underground, I think, has coincided with reversals," he said.

"Obviously, I don't like people saying things that I consider awful but I worry that the solution of censoring that doesn't work as well as you might hope. So I don't know the answer, this stuff is really complicated, but I am worried that too much censorship is actually harmful and that may have been where we ended up."

Earlier this week, the company adjusted its moderation guidelines to allow statements on its platforms claiming that LGBTQ+ people are "mentally ill" and removed trans and nonbinary-themed chat options from its Messenger app, features that had previously been showcased as part of the company's support for Pride Month.

Schultz also said that he does not think that censorship and cancel culture have helped the LGBTQ+ movement.

He wrote, "We don't enforce these things perfectly," and cited the mistaken takedown of images of two men kissing and the removal of a slur word toward gay people as errors rather than deliberate moves by a "bigoted person in operations."

Schultz added, "So the more rules we have, the more mistakes we make…Moderation is hard and we'll always get it wrong somewhat. The more rules, the more censorship, the more we'll harm speech from our own community pushing back on hatred."

The company's latest decision to roll back its DEI programs has sparked intense internal debate and public scrutiny. The announcement, delivered via an internal memo by VP of HR Janelle Gale, said that the company would dismantle its dedicated DEI team and eliminate diversity programs in its hiring process.

The company said Tuesday it will replace third-party fact-checkers on Facebook, Instagram, and Threads with a community notes system, mirroring the approach used on Elon Musk's platform, X.

Schultz told BI in an interview earlier this week that the election of Donald Trump and a broader shift in public sentiment around free speech played significant roles in these decisions.

He acknowledged that internal and external pressures had led Meta to adopt more restrictive policies in recent years but said the company is now taking steps to regain control over its approach to content moderation.

Meta's internal forum, Workplace, saw reactions ranging from anger and disappointment to cautious optimism about the company's direction.

One employee lamented the rollback as "another step backward" for Meta, while others raised concerns about the message it sends to marginalized communities that rely on Meta's platforms.

At Meta's offices in Silicon Valley, Texas, and New York, facilities managers were instructed to remove tampons from men's bathrooms, which the company had provided for nonbinary and transgender employees who use the men's room and may require sanitary products, The New York Times reported on Friday.

Meta didn't immediately respond to a request for comment from BI.

You can email Jyoti Mann at [email protected], send her a secure message on Signal @jyotimann.11 or DM her via X @jyoti_mann1

If you're a current or former Meta employee, contact this reporter from a nonwork device securely on Signal at +1-408-905-9124 or email him at [email protected].

Read the original article on Business Insider

Mark Zuckerberg tells Joe Rogan that he thinks Trump will protect American companies' 'strategic advantage'

Mark Zuckerberg attends the UFC 300 event at T-Mobile Arena on April 13, 2024 in Las Vegas, Nevada.
Mark Zuckerberg said that he thinks Trump will defend American tech companies abroad.

Jeff Bottari/Getty Images

  • Mark Zuckerberg told Joe Rogan he's "optimistic" about how Trump will impact American businesses.
  • On the nearly 3-hour podcast episode, Zuck said he thinks Trump will defend American tech abroad.
  • The conversation comes days after Meta got rid of third-party fact-checkers.

Mark Zuckerberg told Joe Rogan in a podcast episode on Friday that he thinks President-elect Donald Trump will help American businesses, calling technology companies in particular a "bright spot" in the economy.

"I think it's a strategic advantage for the United States that we have a lot of the strongest companies in the world, and I think it should be part of the US' strategy going forward to defend that," Zuckerberg said during the nearly three-hour episode of 'The Joe Rogan Experience.' "And it's one of the things that I'm optimistic about with President Trump is, I think he just wants America to win."

Zuckerberg told Rogan the government should defend America's tech industry abroad to ensure it remains strong, and that he is "optimistic" Trump will do so.

The episode dropped just days after Meta significantly altered its content moderation policies, replacing third-party fact checkers with a community-notes system similar to that on Elon Musk's X. Trump praised the change earlier this week and said it was "probably" a response to threats he's made against the Meta CEO.

Zuckerberg, clad in a black tee and gold necklace emblematic of his new style, told Rogan that the change reflects the nation's "cultural pulse" as it was reflected in the presidential election results. At the beginning of the episode, Zuckerberg bashed how President Joe Biden's administration handled content moderation, especially during the pandemic.

A representative for Biden didn't immediately respond to a request for comment from Business Insider.

The episode and Meta's flurry of changes are part of efforts from Zuckerberg to improve his relationship with Trump. Meta has confirmed to BI that it's donating $1 million to Trump's inaugural fund, along with other tech companies like Microsoft and Google.

Read the original article on Business Insider

How to delete Facebook, Instagram, and Threads

In the wake of Meta’s decision to remove its third-party fact-checking system and loosen content moderation policies, Google searches on how to delete Facebook, Instagram, and Threads have been on the rise. People who are angry with the decision accuse Meta CEO Mark Zuckerberg of cozying up to the incoming Trump administration at the expense […]


Meta employees react after the rollback of DEI programs — both for and against

Mark Zuckerberg attends Senate Judiciary Committee hearing in January 2024.
Meta CEO Mark Zuckerberg.

The Washington Post/The Washington Post via Getty Images

  • On Meta's internal forum, its employees criticized its decision to roll back DEI initiatives.
  • It follows changes to Meta's content-moderation policies, which got rid of third-party fact-checkers.
  • Meta's VP of HR said the term DEI had "become charged" and "suggests preferential treatment."

Meta employees spoke out on its internal forum against the tech giant's decision Friday to roll back its diversity, equity, and inclusion program.

Staffers criticized the move in comments on the post announcing the changes on the internal platform Workplace. More than 390 employees reacted with a teary-eyed emoji to the post, which was seen by Business Insider and written by the company's vice president of human resources, Janelle Gale.

Gale said Meta would "no longer have a team focused on DEI." Over 200 workers reacted with a shocked emoji and 195 with an angry emoji, while 139 liked the post and 57 used a heart emoji.

"This is unfortunate disheartening upsetting to read," an employee wrote in a comment that had more than 200 likes.

Another person wrote, "Wow, we really capitulated on a lot of our supposed values this week."

A different employee wrote, "What happened to the company I joined all those years ago."

Reactions were mixed, though. One employee wrote, "Treating everyone the same, no more, no less, sounds pretty reasonable to me." The comment had 45 likes and heart reactions.

The decision follows sweeping changes made to Meta's content-moderation policies, which Meta CEO Mark Zuckerberg announced Tuesday. The changes include eliminating third-party fact-checkers in favor of a community-notes model similar to that on Elon Musk's X.

As part of the changes to Meta's policy on hateful conduct, the company said it would allow users to say people in LGBTQ+ communities are mentally ill for being gay or transgender.

"We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like 'weird,'" Meta said in the updated guidelines.

One employee wrote in response to the DEI changes that, in addition to the updated hate-speech guidelines, "this is another step backward for Meta."

They added: "I am ashamed to work for a company which so readily drops its apparent morals because of the political landscape in the US."

In the post announcing the decision to drop many of its DEI initiatives, Gale said the term DEI had "become charged," partly because it's "understood by some as a practice that suggests preferential treatment of some groups over others."

"Having goals can create the impression that decisions are being made based on race or gender," she said, adding: "While this has never been our practice, we want to eliminate any impression of it."

One employee told BI the moves "go against what we as a company have tried to do to protect people who use our platforms, and I have found all of this really hard to read."

Meta did not respond to a request for comment by the time of publication.

Do you work at Meta? Contact the reporters from a nonwork email and device at [email protected], [email protected], and [email protected].

Read the original article on Business Insider

Meta eliminates DEI programs

Axios reports that Meta is eliminating its biggest DEI efforts, effective immediately, including ones that focused on hiring a diverse workforce, training, and sourcing supplies from diverse-owned companies. Its DEI department will also be eliminated. In a memo leaked to the outlet, Meta said it was making these changes because the "legal and policy landscape […]


Leaked memo: Meta rolls back its DEI programs

Mark Zuckerberg
Meta is scaling back its DEI programs.

Brendan Smialowski/AFP/Getty

  • Meta is dropping many of its DEI initiatives, BI confirmed.
  • The company sent a memo announcing the changes on Friday.
  • Meta's VP of human resources said the legal and policy landscape in the US was changing.

Meta is rolling back its DEI programs, Business Insider has learned.

The company's vice president of human resources, Janelle Gale, announced the move on Friday on Workplace, the company's internal communication platform, in a post seen by BI.

"We will no longer have a team focused on DEI," Gale wrote in the memo.

"The legal and policy landscape surrounding diversity, equity and inclusion efforts in the United States is changing," she wrote. "The Supreme Court of the United States has recently made decisions signaling a shift in how courts will approach DEI."

She added the term DEI has "become charged" partly because it is "understood by some as a practice that suggests preferential treatment of some groups over others."

Meta confirmed the changes when contacted by Business Insider.

On Monday, Meta said that it is also replacing fact-checkers with community notes on its platforms, including Facebook, Instagram, and Threads.

Meta is the latest company to back away from DEI in the wake of backlash, legal challenges, and the reelection of Donald Trump as president.

Read the full memo:

Hi all,

I wanted to share some changes we're making to our hiring, development and procurement practices. Before getting into the details, there is some important background to lay out:

The legal and policy landscape surrounding diversity, equity and inclusion efforts in the United States is changing. The Supreme Court of the United States has recently made decisions signaling a shift in how courts will approach DEI. It reaffirms longstanding principles that discrimination should not be tolerated or promoted on the basis of inherent characteristics. The term "DEI" has also become charged, in part because it is understood by some as a practice that suggests preferential treatment of some groups over others.

At Meta, we have a principle of serving everyone. This can be achieved through cognitively diverse teams, with differences in knowledge, skills, political views, backgrounds, perspectives, and experiences. Such teams are better at innovating, solving complex problems and identifying new opportunities which ultimately helps us deliver on our ambition to build products that serve everyone. On top of that, we've always believed that no-one should be given - or deprived of - opportunities because of protected characteristics, and that has not changed.

Given the shifting legal and policy landscape, we're making the following changes:

  • On hiring, we will continue to source candidates from different backgrounds, but we will stop using the Diverse Slate Approach. This practice has always been subject to public debate and is currently being challenged. We believe there are other ways to build an industry-leading workforce and leverage teams made up of world-class people from all types of backgrounds to build products that work for everyone.
  • We previously ended representation goals for women and ethnic minorities. Having goals can create the impression that decisions are being made based on race or gender. While this has never been our practice, we want to eliminate any impression of it.
  • We are sunsetting our supplier diversity efforts within our broader supplier strategy. This effort focused on sourcing from diverse-owned businesses; going forward, we will focus our efforts on supporting small and medium sized businesses that power much of our economy. Opportunities will continue to be available to all qualified suppliers, including those who were part of the supplier diversity program.
  • Instead of equity and inclusion training programs, we will build programs that focus on how to apply fair and consistent practices that mitigate bias for all, no matter your background.
  • We will no longer have a team focused on DEI. Maxine Williams is taking on a new role at Meta, focused on accessibility and engagement.

What remains the same are the principles we've used to guide our People practices:

  1. We serve everyone. We are committed to making our products accessible, beneficial and universally impactful for everyone.
  2. We build the best teams with the most talented people. This means sourcing people from a range of candidate pools, but never making hiring decisions based on protected characteristics (e.g. race, gender etc.). We will always evaluate people as individuals.
  3. We drive consistency in employment practices to ensure fairness and objectivity for all. We do not provide preferential treatment, extra opportunities or unjustified credit to anyone based on protected characteristics nor will we devalue impact based on these characteristics.
  4. We build connection and community. We support our employee communities, people who use our products, and those in the communities where we operate. Our employee community groups (MRGs) continue to be open to all.

Meta has the privilege to serve billions of people every day. It's important to us that our products are accessible to all, and are useful in promoting economic growth and opportunity around the world. We continue to be focused on serving everyone, and building a multi-talented, industry-leading workforce from all walks of life.

Do you work at Meta? Contact the reporters from a non-work email and device at [email protected]; [email protected]; and [email protected].

Read the original article on Business Insider

71 organizations and counting have signed a letter warning Mark Zuckerberg about ending fact-checking on Meta in the US

At the Meta Connect developer conference, Mark Zuckerberg, head of the Facebook group Meta, shows the prototype of computer glasses that can display digital objects in transparent lenses.
Fact-checking organizations are pushing back against a recent Meta decision for the US.

Andrej Sokolow/picture alliance via Getty Images

  • The International Fact-Checking Network warned of Meta's move to crowdsourced fact-checking.
  • A group of 71 fact checkers said the change is "a step backward" for accuracy.
  • The group proposed crowdsourcing in conjunction with professionals, a "new model."

The fact-checking group that has worked with Meta for years wrote Mark Zuckerberg an open letter on Thursday, warning him about the company's move toward crowdsourced moderation in the US.

"Fact-checking is essential to maintaining shared realities and evidence-based discussion, both in the United States and globally," wrote the International Fact-Checking Network, part of the nonprofit media organization Poynter Institute.

As of 11:30 p.m. ET on Thursday, 71 organizations from across the world had signed the letter. Poynter is updating its post as the list of organizations grows.

The group said Meta's decision, announced Tuesday, to replace third-party fact-checkers with crowdsourced moderation on Facebook, Instagram, and Threads in the US "is a step backward for those who want to see an internet that prioritizes accurate and trustworthy information."

Meta told the IFCN about the end of its partnership less than an hour before publishing the post about the switch, Business Insider reported. The change could have serious financial repercussions for the fact-checking organizations that rely on Meta for revenue.

The organization said Meta has fact-checking partnerships in more than 100 countries.

"If Meta decides to stop the program worldwide, it is almost certain to result in real-world harm in many places," IFCN wrote. Meta has not announced plans to end the fact-checking program globally.

Meta said it plans to build a crowdsourced moderation system this year similar to the community notes used by Elon Musk's X, where people can weigh in on posts ranging from the serious to the mundane. Musk laid off hundreds of X's trust and safety workers after he bought the company in 2022, and X has since been slow to act on some misinformation, BI previously reported.

IFCN wrote that community notes could be used in conjunction with professional fact-checkers, a "new model" for collaboration.

"The need for this is great: If people believe social media platforms are full of scams and hoaxes, they won't want to spend time there or do business on them," IFCN wrote.

Nearly 3.3 billion people used a Meta product every day in September, according to the company's most recent financials — more than 40% of the world's population.

Ad insiders who spoke to BI this week said they didn't expect the changes to hurt the company's business. Meta has more than a fifth of the US digital ad market — in second place behind Google, per data from BI's sister company EMARKETER.

A spokesperson for Meta declined to comment.

Read the original article on Business Insider

'It's Total Chaos Internally at Meta Right Now': Employees Protest Zuckerberg's Anti-LGBTQ Changes


Meta employees are furious with the company's newly announced content moderation changes that will allow users to say that LGBTQ+ people have "mental illness," according to internal conversations obtained by 404 Media and interviews with five current employees. The changes were part of a larger shift Mark Zuckerberg announced Monday to do far less content moderation on Meta platforms.

"I am LGBT and Mentally Ill," one post by an employee on an internal Meta platform called Workplace reads. "Just to let you know that I'll be taking time out to look after my mental health."

On Monday, Mark Zuckerberg announced that the company would be getting "back to our roots around free expression" to allow "more speech and fewer mistakes." The company said "we're getting rid of a number of restrictions on topics like immigration, gender identity, and gender that are the subject of frequent political discourse and debate." A review of Meta's official content moderation policies shows that one of the few substantive changes to the policy was made specifically to allow for "allegations of mental illness or abnormality when based on gender or sexual orientation." It has long been known that being LGBTQ+ is not a sign of "mental illness," and the false idea that sexuality or gender identification is a mental illness has long been used to stigmatize and discriminate against LGBTQ+ people.

Earlier this week, we reported that Meta was deleting internal dissent about Zuckerberg's appointment of UFC President Dana White to the Meta board of directors.

Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.

Google searches for deleting Facebook, Instagram explode after Meta ends fact-checking

Google searches for how to cancel and delete Facebook, Instagram, and Threads accounts have seen explosive rises in the U.S. since Meta CEO Mark Zuckerberg announced that the company will end its third-party fact-checking system, loosen content moderation policies, and roll back previous limits to the amount of political content in user feeds. Critics see […]


Meta's moderation shake-up highlights a political divide among influencers

Mark Zuckerberg
Meta CEO Mark Zuckerberg, pictured, debuted new content-moderation policies this week.

BRENDAN SMIALOWSKI/ Getty Images

  • The reaction among creators to Meta's content-moderation changes has largely fallen along political lines.
  • Some influencers worry the changes could cause harm to the LGBTQ+ community.
  • Others questioned Meta's decision to feature more political content.

Getting "Zucked" — a term for having your account suspended or content removed due to community violations — is a staple in the creator lexicon.

Despite that, creators who spoke with Business Insider had mixed reactions to Meta CEO Mark Zuckerberg's plans to reduce content moderation in the name of free speech.

On Tuesday, Meta unveiled new policies that included winding down fact-checking, loosening content moderation, and introducing X-style "Community Notes."

The creator community largely reacted along political lines, with some left-leaning influencers expressing disappointment.

"This is really about just pandering to the Trump administration in a way that feels extremely obvious," LGBTQ+ advocate and "Gay News" host Josh Helfgott told BI.

Left-leaning filmmaker Michael McWhorter also said he felt the changes were catering to Trump and his MAGA base.

"You're not trying to balance things out," McWhorter said of Meta. "We are shifting to the other side of things."

Elsewhere, some right-leaning creators cheered the changes.

Christopher Townsend, an Air Force vet and conservative rapper with over 300,000 Instagram followers, told BI he thought the policy overhaul was "a great step toward the decentralization of information and the end to the control legacy media has had on the prevailing narrative."

Instagram head Adam Mosseri posted a video on Wednesday outlining how the new policies would impact creators. He said the company would correct its "over-enforcement" of content moderation and begin recommending political content again.

"If you're a creator who likes to post about political content, this should mean that you feel comfortable doing so on any of our platforms," Mosseri said. "We will now show political recommendations."

Meta didn't respond to a request for comment.

Some are wary of Community Notes

As part of the policy overhaul, Meta is getting rid of fact-checkers in favor of Community Notes in the style of Elon Musk's X. Users will be able to volunteer to contribute to Community Notes, which will appear on content when people with a range of different perspectives agree a correction is in order.

"Like X, it gives the user community more authority over the platform instead of biased third-party administrators," Townsend said.

McWhorter said that while Community Notes were a "great equalizer," he felt they were not an adequate replacement for fact-checking. He said he wished Meta would rely on a combination of both systems.

A former Instagram staffer told BI that they felt placing the responsibility to moderate content on users and creators "on a platform with massive global reach and historical harmful content issues" was a step in the wrong direction. They asked for anonymity to protect business relationships; their identity is known to BI.

Concerns about anti-LGBTQ+ discourse

Helfgott expressed concern about Meta's plan to decrease moderation around certain political topics. The company's blog post specifically noted immigration and gender identity as areas of debate where it plans to decrease restrictions.

Helfgott said that while Meta's plans were described in the language of "political discourse," he felt the changes could lead to bullying of the LGBTQ+ community.

Alongside Tuesday's announcement, Meta updated its Hateful Conduct policy.

"We do allow allegations of mental illness or abnormality when based on gender or sexual orientation," the company wrote, "given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like 'weird.'"

"This is the most anti-LGBTQ announcement that a social-media platform has made in recent memory," Helfgott said.

While McWhorter told BI he felt his content had been Zucked — or unfairly suppressed — in the past, he said he'd prefer a stricter moderating system even if it had "flaws."

"I'd rather that I take the hit for a joke that it didn't understand than that stuff being allowed to be spread all over the platform," he said, referring to potentially harmful posts.

Meta's increased political emphasis marks an about-face

Some creators were flummoxed by Meta's about-face on the amount of political content it plans to recommend. The company had cut back significantly on promoting political content in feeds in recent years.

Malynda Hale, a creator and activist with 65,000 followers, said this change could benefit political creators but questioned the company's motives.

"I think the fact that Meta is going to be serving up more political content is actually positive for creators like myself, but I don't think it's with the intention to keep the community informed," she told BI.

She said she felt Meta wanted to boost engagement even at the cost of division and disagreement.

Despite some misgivings, the creators who spoke with BI said they weren't going anywhere.

"I'll work with the system as it's presented to me, and I'll find my way to work around it," McWhorter said. "I constantly have to do that on all different platforms."

Helfgott said he felt "handcuffed" by Meta because if he stopped posting on Instagram, he would lose out on millions of people seeing his content each month.

"Meta knows this," he said. "They know that creators may not like this, but we need the reach, and we will keep posting there."

Read the original article on Business Insider

Mark Zuckerberg says users may leave Meta after fact-checking shutdown for 'virtue signaling'

Meta logo on banner

Chesnot/Getty

  • Mark Zuckerberg dismissed concerns over users leaving after Meta ended U.S. fact-checking.
  • Meta plans to replace third-party fact-checking with a crowdsourced Community Notes system like X's.
  • Zuckerberg is confident Community Notes will improve user experience and attract new users.

Mark Zuckerberg dismissed concerns about users leaving Meta platforms in response to the company's decision to end its U.S. fact-checking program, saying any exits would be "virtue signaling."

In a reply on Threads to a user's post criticizing Meta's influence and suggesting that people feel trapped on the platform, Zuckerberg struck a defiant tone.

"No – I'm counting on these changes actually making our platforms better," he wrote.

I think Community Notes will be more effective than fact-checkers, reducing the number of people whose accounts get mistakenly banned is good, people want to be able to discuss civic topics and make arguments that are in the mainstream of political discourse, etc. Some people may leave our platforms for virtue signaling, but I think the vast majority and many new users will find that these changes make the products better.

Zuckerberg's response to the Threads user named Mary-Frances Makichen, who has 253 followers and is a "Spiritual Director" and author according to their bio, came just one day after Meta announced it would replace its third-party fact-checking partnerships with a crowdsourced Community Notes system similar to the one used by X.

Mass departures from social media platforms for symbolic reasons are not unprecedented.

On Election Day in the US, more than a quarter million X users deleted their accounts in protest against owner Elon Musk's deepening ties to the Trump administration.

Zuckerberg, however, appears unfazed, betting that Community Notes will enhance Meta's user experience and attract new audiences rather than drive them away.

If you're a current or former Meta employee, contact this reporter from a nonwork device securely on Signal at +1-408-905-9124 or email him at [email protected].

Read the original article on Business Insider

Mark Zuckerberg's content-moderation changes come after a long line of nightmares

Mark Zuckerberg

Credit: Anadolu/Getty, Irina Gutyryak/Getty, Tyler Le/BI

  • Content moderation has always been a nightmare for Meta.
  • Its new content-moderation policy is a huge change — and it could be an improvement.
  • Mark Zuckerberg's "apology tour" from the past few years seems to be officially over.

Mark Zuckerberg's changes to Meta's content-moderation policies are potentially huge.

To fully understand their gravity, it's useful to look at how Meta got here. And to consider what these changes might actually mean for users: Are they a bow to an incoming Trump administration? Or an improvement to a system that's gotten Zuckerberg and Co. lots of heat before? Or a little of both?

Content moderation has always been a pit of despair for Meta. In its blog post announcing the changes on Tuesday, Meta's new head of policy, Joel Kaplan, talked about wanting to get back to Facebook's roots in "free speech." Still, those roots contain a series of moderation fires, headaches, and constant adjustments to the platform's policies.

Starting in 2016, moderation troubles just kept coming like a bad "We Didn't Start the Fire" cover.

Whatever your political alignment, it seems like Meta has been trapped in a vicious cycle of making a policy β€” or lacking a policy β€” then reversing itself to try to clean up a mess.

As Charlie Warzel pointed out in The Atlantic, Zuckerberg has sometimes blamed external forces when he's faced with situations like some of the ones above.

That's maybe until now. As Zuckerberg posted on Threads on Wednesday, "Some people may leave our platforms for virtue signaling, but I think the vast majority and many new users will find that these changes make the products better."

Maybe the big changes were already brewing this past September when Zuckerberg appeared at a live event and said, "One of the things that I look back on and regret is I think we accepted other people's view of some of the things that they were asserting that we were doing wrong, or were responsible for, that I don't actually think we were."

In other words, as of this week, the apology tour seems to have ended.

What will Meta's changes mean for you and me, the users?

What will the changes mean? Who knows! I can make a few predictions:

The "community note" system might work pretty well β€” or at least not worse than the current human- and AI-led fact-checking system.

There might be more content in your feeds that you don't like — political speech that you find abhorrent, for example.

It's also possible that while certain content might exist on the platform, you won't actually come across it because it will have been downgraded. "Freedom of speech, not freedom of reach" has been X's mantra (though considering the flow of truly vile content that has proliferated in my feed there in the past year or so, I don't think that's been particularly effective).

One other piece of the announcement is that Meta will focus its AI-powered filtering efforts on the highest-risk content (terrorism, drugs, and child endangerment). For lesser violations, the company said, it will rely more on user reports. Meta hasn't given details on how exactly this will work, but I imagine it could have a negative effect on common issues like bullying and harassment.

A large but less glamorous part of content moderation is removing "ur ugly" comments on Instagram — and that's the kind of stuff that will rely on user reporting.

It's also quite possible that bad actors will take advantage of the opening. Facebook is nothing if not a place to buy used furniture while various new waves of pillagers attempt to test and game the algorithms for profit or menace — just consider the current wave of AI slop, some of which appears at least in part to be a profitable scam operation run from outside the US.

What do the changes mean for Meta?

If these changes had been rolled out slowly, one at a time, they might have seemed like reasonable measures just on their face. Community notes? Sure. Loosening rules on certain hot political topics? Well, not everyone will like it, but Meta can claim some logic there. Decreasing reliance on automatic filters and admitting that too many non-violations have been swept up in AI dragnets? People would celebrate that.

No one thought Meta's moderation before the announced changes was perfect. There were lots of complaints, many of them justified, about how it banned too much stuff by mistake — which this new policy is aiming to fix.

And switching from third-party fact-checkers to a community-notes system isn't necessarily bad. The fact-checking system wasn't perfect, and community notes on X, the system Meta is modeling its own after, can be quite useful. Even acknowledging that, yes, X has sometimes become a cesspit for bad content, the root cause isn't the community notes.

Still, it's impossible to weigh the merits of each aspect of the new policy and have blinders on when it comes to the 800-pound political gorilla in the room.

There's one pretty obvious way of looking at Meta's announcement of sweeping changes to its moderation policy: It's a move to cater to an incoming Trump administration. It's a sign that Zuckerberg has shifted to the right, as he drapes himself in some of the cultural signifiers of the bro-y Zynternet (gold chain, $900,000 watch, longer hair, new style, front row at an MMA match).

Together, every piece of this loudly signals that Zuckerberg either (a) genuinely believed he'd been forced to cave on moderation issues in the past, or (b) knows that making these changes will please Trump. I don't really think the distinction between the two matters much anyway. (Meta declined to comment.)

This probably isn't the last of the changes

I try to avoid conflating "Meta" with "Mark Zuckerberg" too much. It's a big company! There are many smart people who care deeply about the lofty goals of social networking who create policy and carry out the daily work of trust and safety.

Part of me wonders how much Zuckerberg wishes this boring and ugly part of the job would fade away — there are so many more shiny new things to work on, like AI or mixed-reality smart glasses. Reworking the same decade-old policies so that people can insult each other 10% more is probably less fun than MMA fighting or talking to AI researchers.

Content moderation has always been a nightmare for Meta. Scaling it back, allowing more speech on controversial topics, and outsourcing fact-checking to the community seems like a short-term fix for having to deal with this unpleasant and thankless job. I can't help but imagine that another overhaul will come due sometime in the next four years.


Meta ending 3rd-party fact checkers 'transformative,' but other legal issues remain, says expert

The decision by Meta CEO Mark Zuckerberg to end Facebook's work with third-party fact-checkers and ease some of its content restrictions is a potentially "transformative" moment for the platform, experts said, but one that is unlikely to shield the company from liability in ongoing court proceedings.

The updates were announced by Zuckerberg, who said in a video that the previous content restrictions used on Facebook and Instagram — which were put into place after the 2016 elections — had "gone too far" and allowed for too much political bias from outside fact-checkers.

Meta will now replace that system with a "Community Notes"-style program, similar to the approach taken by social media platform X, he said. X is owned by Elon Musk, the co-director of the planned Department of Government Efficiency.

"We’ve reached a point where it’s just too many mistakes and too much censorship," Zuckerberg said. "The recent elections also feel like a cultural tipping point toward once again prioritizing speech. So we are going to get back to our roots, focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms."


The news was praised by President-elect Donald Trump, who told Fox News Digital that he thought Meta's presentation "was excellent." "They have come a long way," Trump said.

Still, it is unlikely to ease the legal liability for Meta, which in recent months has been hit with the possibility of a multibillion-dollar class action lawsuit stemming from a privacy scandal involving the political consulting firm Cambridge Analytica.

The Supreme Court in November rejected Meta's effort to block the lawsuit, leaving in place an appellate court ruling that allowed the class action suit to move forward.

Meta has also been the target of multiple Republican-led investigations in Congress. Republicans on the House Subcommittee on the Weaponization of the Federal Government probed Meta's activity and communication with the federal government and the Biden administration last year as part of a broader investigation into alleged censorship.

The platform also came under scrutiny by the House Oversight Committee in August, as part of an investigation into claims that the platform suppressed information about the July 13 assassination attempt on Trump.


Combined, these factors make it unlikely that Meta will see its legal problems go away anytime soon, law professor and Fox News contributor Jonathan Turley told Fox News Digital in an interview.

"Facebook is now looking at a tough patch ahead," he said. "Not only do the Republicans carry both houses of Congress as well as the White House, but there is ongoing litigation in the social media case in Texas."

Additionally, the Supreme Court's conservative majority is unlikely to be sympathetic to the views of Meta in any case centered on First Amendment protections and rights to free speech.

The House investigations and litigation have both forced more of Meta's actions into public view — something Turley said he expects to come under further scrutiny in the discovery process in Missouri v. Biden, a case that centers on allegations of political censorship.

"That discovery is still revealing new details," Turley said. "So Meta understood that in the coming months, more details would be forthcoming on its censorship program."

Still, this "could be a transformative moment," Turley said.

"And an alliance of Zuckerberg with [Elon] Musk could turn the tide in this fight over free speech," Turley said. "And as one of Zuckerberg's most vocal critics Β I welcome him to this fight."

Mark Zuckerberg says Meta's 'community notes' are inspired by Elon Musk's X. Here's how they work — and how they don't.

Meta Mark Zuckerberg
Meta CEO Mark Zuckerberg said the company's platforms would prioritize speech and free expression.

Getty Images

  • Mark Zuckerberg's plan to replace fact checkers with "community notes" is a familiar one.
  • A similar system of community moderation is already in place on Elon Musk's X.
  • On X, community notes let users add context to posts. Meta has said it seems to work well.

Mark Zuckerberg says Meta will use "community notes" to moderate content on its platforms like Facebook and Instagram — but what exactly does that mean, and how has it worked on other platforms?

Meta said the feature would function much like it does on Elon Musk's platform, where certain contributors can add context to posts they think are misleading or need clarification. This type of user-generated moderation would largely replace Meta's human fact-checkers.

"We've seen this approach work on X β€” where they empower their community to decide when posts are potentially misleading and need more context and people across a diverse range of perspectives decide what sort of context is helpful for other users to see," Meta said in its announcement Tuesday.

Musk, who has a sometimes-tense relationship with Zuckerberg, appeared to approve of the move, posting "This is cool" on top of a news article about the changes at Meta.

So, will it be cool for Meta and its users? Here's a primer on "community notes" — how it came to be, and how it's been working so far on X:

How the 'community notes' feature was born

The idea of "community notes" first came about at Twitter in 2019, when a team of developers at the company, now called X, theorized that a crowdsourcing model could solve the main problems with content moderation. Keith Coleman, X's vice president of product who helped create the feature, told Asterisk magazine about its genesis in an interview this past November.

Coleman told the outlet that X's previous fact-checking procedures, run by human moderators, had three main problems: dedicated staff couldn't fact-check claims in users' posts fast enough, there were too many posts to monitor, and the general public didn't trust a Big Tech company to decide what was or wasn't misleading.


Coleman told Asterisk that his team developed a few prototypes and settled on one that allowed users to submit notes that could show up on a post.

"The idea was that if the notes were reasonable, people who saw the post would just read the notes and could come to their own conclusion," he said.

And in January 2021, the company launched a pilot program of the feature, then called "Birdwatch," just weeks after the January 6 Capitol riot. On its first day, the pilot program had 500 contributors.

Coleman told the outlet that for the first year or so of the pilot program — which showed community notes not directly on users' posts but on a separate "Birdwatch" website — the product was very basic, but over time, it evolved and performed much better than expected.

When Musk took over the platform in 2022, he expanded the program beyond the US, renamed it "community notes," and allowed more users to become contributors.

Around the same time, he disassembled Twitter's trust and safety team, undid many of the platform's safety policies, and lowered the guardrails on content moderation. Musk said in 2022 that the community notes tool had "incredible potential for improving information accuracy."

It's unclear how many users participate as community notes contributors, but the feature is one of the platform's main sources of content moderation. X didn't immediately respond to a request for comment from BI.

How the community notes feature works on X

The community notes feature is set to roll out on Meta's Instagram, Facebook, and Threads platforms over the next few months, the company said in a statement shared with BI. Meta said the feature on its platforms would be similar to X's.

On X, community notes act as a crowd-sourced way for users themselves to moderate content without the company directly overseeing that process.

A select group of users who sign up as "contributors" can write a note adding context to any post that could be misleading or contain misinformation.

Then, other contributors can rate that note as helpful or not. Once enough contributors from different points of view rate the note as helpful, a public note gets added underneath the post in question.
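That agreement requirement can be sketched in a few lines of code. This is an illustrative simplification, not X's or Meta's actual logic (X's open-source ranking algorithm uses a matrix-factorization model of raters and notes); the function name, group labels, and thresholds here are all hypothetical:

```python
# Illustrative sketch: a note is published only if raters from more than one
# perspective group independently find it helpful. Real community notes infer
# perspectives from rating history rather than using declared labels.
from collections import defaultdict

def note_is_public(ratings, min_raters_per_group=2, threshold=0.6):
    """ratings: list of (rater_group, is_helpful) tuples."""
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)
    if len(by_group) < 2:  # require agreement across at least two perspectives
        return False
    for votes in by_group.values():
        if len(votes) < min_raters_per_group:
            return False
        if sum(votes) / len(votes) < threshold:
            return False
    return True

# A note rated helpful across both groups is shown...
print(note_is_public([("left", True), ("left", True),
                      ("right", True), ("right", True)]))   # True
# ...but one that only one side finds helpful is not.
print(note_is_public([("left", True), ("left", True),
                      ("right", False), ("right", False)]))  # False
```

In X's published design, a rater's "perspective" is not a self-declared label like the ones above; viewpoint diversity is inferred from each contributor's past rating patterns.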

For instance, here's an example of a community note attached to a recent X post:

January moment pic.twitter.com/92nRy2eiW0

β€” Just Posting Ls (@MomsPostingLs) January 7, 2025

X has made the complex ranking algorithm behind the feature transparent and open-source, and users can view it online and download the latest data.

X says that community notes "do not represent X's viewpoint and cannot be edited or modified by our teams," adding that a community-flagged post is only removed if it violates X's rules, terms of service, or privacy policies.

Similar to X, Meta said its community notes will be written and rated by contributing users. It said the company will not write notes or decide which ones show up. Also like X, Meta said that its community notes "will require agreement between people with a range of perspectives to help prevent biased ratings."

Facebook, Instagram, and Threads users can sign up now to be among the first contributors to the new tool.

"As we make the transition, we will get rid of our fact-checking control, stop demoting fact-checked content and, instead of overlaying full-screen interstitial warnings you have to click through before you can even see the post, we will use a much less obtrusive label indicating that there is additional information for those who want to see it," Joel Kaplan, Meta's chief global affairs officer, said in Tuesday's statement.

Potential pros and cons of community notes

One possible issue with the feature is that by the time a note gets added to a potentially misleading post, the post may have already been widely viewed — spreading misinformation before it can be tamped down.

Another issue is that for a note to be added, contributors from across the political spectrum need to agree that a post is problematic or misleading, and in today's polarized political environment, concurring on facts has sometimes become increasingly difficult.

One possible advantage to the feature, though, is that the general public may be more likely to trust a consensus from their peers rather than an assessment handed down by a major corporation.

Maarten Schenk, cofounder and chief technology officer of Lead Stories, a fact-checking outlet, told the Poynter Institute that one benefit of X's community notes is that it doesn't use patronizing language.

"It avoids accusations or loaded language like 'This is false,'" Schenk told Poynter. "That feels very aggressive to a user."

And community notes can help combat misinformation in some ways. For example, researchers at the University of California, San Diego's Qualcomm Institute found in an April 2024 study that the X feature helped offset false health information in posts related to COVID-19. They also helped add accurate context.

In announcing the move, Zuckerberg said Meta's past content moderation practices have resulted in "too many mistakes" and "too much censorship." He said the new feature will prioritize free speech and help restore free expression on Meta's platforms.

Both President-elect Donald Trump and Musk have championed the cause of free speech online, railed against content moderation as politically biased censorship, and criticized Zuckerberg for his role overseeing the public square of social media.

One key person appeared pleased with the change: Trump said Tuesday that Zuckerberg had "probably" made the changes in response to previous threats issued by the president-elect.


Advertisers say Meta's content-moderation changes make them uneasy. They won't stop spending.

Joel Kaplan and Mark Zuckerberg
Meta execs Joel Kaplan and Mark Zuckerberg have outlined a new, looser approach to content moderation.

Getty Images

  • Some advertisers are expressing concerns about Meta's commitment to brand safety.
  • Meta this week unveiled a new approach to content moderation, removing third-party fact-checkers.
  • Many ad industry insiders doubt it'll lead to major spending shifts, however.

Meta's new plan to shake up its content-moderation policies has some advertisers worried about the social giant's brand-safety standards. Despite that, ad insiders who spoke with Business Insider generally didn't expect the changes to hurt Meta's business.

"It's the final nail in the coffin for platform responsibility," an ad agency veteran told BI. They and some others interviewed asked for anonymity to protect business relationships; their identities are known to BI.

The industry reaction — or lack of it — reflects both advertisers' reliance on Meta and the shifting conversation around how brands should approach "brand safety" or "suitability," the terms for marketers' efforts to avoid funding or appearing next to content they deem unsuitable.

"A lot of brands have shied away from platforms that are too tied to news or controversy, mostly out of fear of cancel culture," said Toni Box, EVP of brand experience at the media agency Assembly. "But at some point, we have to ask: Are we missing opportunities to connect with people during meaningful moments because we don't trust audiences to tell the difference between a news story and an ad?"

The brand-safety tides are shifting

Meta used to bend over backward to address advertisers' brand-safety concerns. But brands weren't mentioned in Meta CEO Mark Zuckerberg's video announcing the changes or in policy chief Joel Kaplan's interview on Tuesday morning with Fox News' "Fox and Friends."

Instead, their pitch was about preventing the censorship of speech. Meta said it plans to replace third-party fact-checkers with a community-based fact-checking program, addressing criticism that the previous system was too partisan and was often overcorrective. The company also said it would loosen some content moderation restrictions on topics that are "part of mainstream discourse" and be more open to reintroducing political content to people's feeds.

Meta did give a very brief public nod to advertisers. A Meta spokesperson pointed BI to a LinkedIn post from Meta ads exec Nicola Mendelsohn that said the company continued to be focused on ensuring brand safety and suitability by offering a suite of tools for advertisers. In an email from Meta account reps to ad buyers, copies of which were viewed by BI, the company said it knew how important it was to continue giving advertisers transparency and control over their brand suitability. And in an interview with BI, Meta's chief marketing officer Alex Schultz said advertisers' primary brand safety concerns were around hate speech and adult nudity and that its tools would focus on "precision and not be taking down things we shouldn't be taking down."

Despite private grumbling from some advertisers about the changes, and how they appeared to be timed to appease incoming President Donald Trump, industry insiders said they don't expect much public blowback on Meta.

Advertiser boycotts and similar actions were once seen as a point of leverage for marketers. One high-profile example was the 2020 #StopHateForProfit movement, when hundreds of major brands protested Meta's policies on hate speech and misinformation.

But brand safety has recently become a political hot potato and been a flash point for some influential, right-leaning figures.

Last year, the chairman of the House Judiciary Committee, Jim Jordan, began investigating whether advertisers had illegally colluded to demonetize conservative platforms and voices. Elon Musk's X went on to sue the Global Alliance for Responsible Media, the brand-safety initiative at the center of Jordan's investigation, and some of its advertiser members after they withdrew ad dollars from the platform. GARM discontinued activities days later. Jordan has continued to press advertisers about their involvement in GARM, and X's litigation against it and some of its members is ongoing.

A media agency employee told BI that they had clients who were now more cautious about criticizing platforms in public or saying they would pull spending.

Industry analysts also said that — politics aside — many marketers would likely continue to spend with Meta so long as it delivered them the audiences and ad performance they had come to expect. Meta commands about 21% of the US digital ad market, behind only Google, according to data firm EMARKETER.

"For us, after Google, Meta is the next-best performer as far as ROI is concerned," said Shamsul Chowdhury, VP of paid social at the digital ad agency Jellyfish, referring to the return on investment advertisers get from their campaigns.

Advertisers are split on whether the changes will improve Meta's platforms

Some advertisers who spoke with BI said they had outstanding questions about the new thresholds Meta would apply to removing posts, what's on the road map for monitoring trends around misinformation, and whether they would still be able to effectively apply their own third-party brand suitability software to content on Meta's apps.

Advertisers said they would pay close attention to how Meta's Community Notes-like feature would work in practice, especially as some hadn't been impressed with X's performance in this area with a similar feature.

"This is a major step back and likely going to result in serious issues where social platforms, not just Meta, are going to hide behind the notion that their users do the moderation and fact-checking for them and they are free speech platforms," said Ruben Schreurs, CEO of the marketing consultancy Ebiquity.

It's not entirely clear how effective X's Community Notes have been. A study published last year by researchers at the University of Luxembourg, University of Melbourne, and JLU Giessen concluded that X's "Community Notes might be too slow to effectively reduce engagement with misinformation in the early (and most viral) stage of diffusion." Still, a separate study from the Qualcomm Institute within UC San Diego found Community Notes helped counter false information about Covid vaccines.

Some advertising execs supported Meta's announcement. Two media agency reps said increasing the number of conversations people are having on the platform could benefit Meta and advertisers alike by boosting engagement.

"I think the best news is free speech and mitigation of harmful or dangerous content remains the primary focus of this maturing program, and Meta has taken a forward position here," said John Donahue, founder of the digital media consultancy Up and to the Right.

Mike Zaneis of the ad initiative the Trustworthy Accountability Group said Meta's announcement should be seen as an evolution of the platform's brand-safety standards and not a retreat from protecting users and marketers.

"The speed and accuracy of the Community Notes tool is impressive and it's the increased transparency that makes a fundamental difference for users and marketers alike," Zeneis said of X's implementation of the concept so far. "If something seems to be working, we shouldn't discourage others from adopting the approach just because it hasn't been precisely tested."


Mark Zuckerberg unveils his latest persona: Elon Musk

Zuck morphs into Musk.

Toby Melville/Pool Photo via AP; BRENDAN SMIALOWSKI/AFP via Getty Images; Chelsea Jia Feng/BI

While Mark Zuckerberg and Elon Musk never did face off in that cage match, "Uncle Elon" has bested Zuck in the political arena, becoming one of the most powerful unelected figures in modern US history. Now, in hopes of forging a friendlier relationship with the Trump administration a second time around, Zuckerberg seems to be following a new mantra: If you can't beat Elon, be him.

On Tuesday, Meta announced it would end third-party fact-checking and replace it with a more hands-off content-moderation policy in which users police one another through community notes — just like Musk's X. In a video announcing the changes, Zuckerberg said that "governments and legacy media" had pushed for more censorship in recent years, and that Meta had decided its "complex systems" had "too many mistakes and too much censorship." "The recent elections," the Meta CEO added, "also feel like a cultural tipping point towards, once again, prioritizing speech." His language would have sounded natural coming out of the mouth of Musk, who shared Zuckerberg's video on X and dubbed Meta's move "cool."

Community notes is only the latest page Zuckerberg has taken from his billionaire rival's playbook. Whether conducting mass layoffs or removing the guardrails to social media or joining forces with Musk against their shared competitor OpenAI or spending time at Mar-a-Lago, Zuckerberg has been following Musk's lead more often.

This isn't the first time Zuckerberg, who has helmed Facebook since he was 19, has reinvented himself. From the brash, hoodie-wearing Harvard dropout in Facebook's early days to the suit-wearing, meat-smoking, Silicon Valley nice guy in the years after the company went public to the hardened, martial-arts-practicing "wartime"-mode Zuck who emerged in the wake of the most turbulent period in company history, Zuckerberg has fashioned several personas that approximate what his company most needs him to be at the time. In 2025, don't let his longer hair, oversize T-shirts, and statement jewelry fool you. The persona Mark Zuckerberg has taken on to ensure Meta's success as his historical adversary Donald Trump returns to the White House acts a lot like Donald Trump's right-hand man, Elon Musk.


When Musk bought Twitter in 2022, he shaved content moderation to bare bones in the name of free speech and cut more than 80% of its staff, sending shockwaves through the tech world. Many speculated that Twitter would crack under the pressure and die. When, despite some hiccups, the platform continued to function largely as normal, Zuckerberg, like several other tech CEOs, applauded Musk for making Twitter "leaner" (doing so on the Musk superfan Lex Fridman's podcast). Meta also laid off 11,000 workers days after Musk took over Twitter, and Zuckerberg then dubbed 2023 "the year of efficiency" at Meta, cutting another 10,000 people. Zuckerberg now also plans to move trust-and-safety workers from California to Texas, falling in step with Musk, who has relocated X from San Francisco to Texas, where he has also located Starlink and The Boring Company.

And as Zuckerberg stayed quieter throughout the 2024 presidential election after Meta took heat for misinformation in 2016 and 2020, Musk did the opposite. The world's richest man appeared onstage alongside Trump, backed Trump with more than $250 million, and posted to X incessantly in support of the now president-elect. Musk has again come out on top, as he now sits at the pinnacle of political influence and is poised to radically reshape government spending as he and Vivek Ramaswamy spearhead the Department of Government Efficiency. Big Tech's other power players who want a favorable relationship with Trump are left to follow in his path.

Since November, Apple's Tim Cook, OpenAI's Sam Altman, and Amazon have each donated to Trump's inaugural fund. Zuckerberg has done that and more, including visiting Mar-a-Lago to have dinner with the president-elect; naming UFC CEO Dana White, a close Trump ally, to Meta's board; and promoting Joel Kaplan, a longtime Republican lobbyist, to chief global affairs officer. (On Tuesday, Kaplan gave an interview on "Fox & Friends" to promote the company's content-moderation changes.)

All of this is meant to quell a once adversarial relationship between Zuckerberg and Trump, who had threatened to imprison Zuckerberg if his social sites interfered with the 2024 election and years ago accused Facebook of being "anti-Trump" and colluding against him (Zuckerberg pushed back against such claims).

As my colleague Peter Kafka wrote of the community notes news: "There's no way to see Zuckerberg's moves as anything other than a straightforward attempt to please Trump and the incoming president's conservative allies, who have often complained that Zuckerberg's properties were biased against them." Even Trump said Tuesday that Meta was "probably" responding to his own past threats against Zuck by pivoting.

The very fact-checkers who will soon be dismissed by Meta began with a program in December 2016, after Facebook faced harsh criticism for its role in spreading misinformation in Trump's first election a month prior. Meta actively worked to downplay political content following the January 6, 2021, insurrection, and it suspended Trump from Facebook and Instagram (Meta lifted the suspension in early 2023, saying the public should be able to access what politicians are saying; the move came shortly after Musk allowed Trump back on Twitter). When Meta launched Threads, its own Twitter competitor, in 2023, the Instagram head, Adam Mosseri, said the new app would not encourage breaking news and politics posts. But Musk, who has rebuilt X in his own image to favor conservative and far-right accounts, has found that a social-media site can win when embracing the president. On Tuesday, Meta also said it would reverse course and stop downgrading political content, and start phasing politics back into users' feeds. (A Meta spokesperson referred to past public statements but did not provide new comment for this story.)

Early research on X's community notes shows the move has led to mixed results when it comes to combating misinformation. But changes at X have certainly proved a mammoth victory for Musk, whose wealth has grown by an estimated $200 billion since the election. As he's molded himself more in Uncle Elon's image, Nephew Zuck may also find himself in Trump's favor.


Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.

