Meta CEO Mark Zuckerberg defended his decision to scale back Meta’s content moderation policies in a Friday appearance on Joe Rogan’s podcast. Zuckerberg faced widespread criticism for the decision, including from employees inside his own company. “Probably depends on who you ask,” said Zuckerberg when asked how Meta’s updates have been received. The key updates […]
Former DEI lead Maxine Williams tried to cushion the blow of Meta's plan to roll back DEI programs.
The company has several employee-resource groups, known as MRGs.
After more than a decade as Meta's Chief Diversity Officer, Williams is taking on a new role.
Meta Chief Diversity Officer Maxine Williams told staff in a memo on Friday that the company's decision to back off DEI efforts won't impact employee-resource groups, according to an internal post viewed by Business Insider.
Employee-resource groups, or ERGs, are worker-led communities that create a sense of belonging at a company. Meta has several of these groups: MRGs are Meta employee-resource groups, and BRGs are Black employee-resource groups.
In a post to an internal forum, Williams tried to cushion the blow of Meta's decision on Friday to roll back its diversity, equity, and inclusion program. Some staff criticized the move, while at least one worker called it "pretty reasonable."
"I wanted to take a moment to acknowledge that these changes may be difficult to understand and process since they represent a significant shift in our strategies for achieving the cognitive diversity we value," Williams wrote.
She stressed that the changes won't impact Meta's support for MRGs and BRGs.
"You play a critical role in creating a place for community and connection — among us and with the company," she added.
"I have watched you show support, share resources, and bond through learning, understanding, and appreciating our various backgrounds. Our Global Communities contribute to the richness of our experiences as we learn from each other and leverage our different backgrounds, working together to build products for the world."
Williams has been Meta's chief diversity officer for more than a decade. On Friday, she told staff that she's taking on a new role focused on Accessibility and Engagement.
"But I, and my team, will continue to support you as contribute to our global community at Meta," she wrote.
"It’s time to get back to our roots around free expression on Facebook and Instagram," Zuckerberg said when he announced the controversial changes to Meta's content moderation policy
Meta's chief marketing officer Alex Schultz is concerned that "too much censorship" is harmful.
Schultz's comments come after Meta updated several policies, including content moderation.
The new guidelines change what users are allowed to say about LGBTQ+ people.
Meta's chief marketing officer warned that greater censorship on its platforms could "harm speech" from the LGBTQ+ community aiming to push back against hate.
"My perspective is we've done well as a community when the debate has happened and I was shocked with how far we've gone with censorship of the debate," Schultz wrote in the post, seen by Business Insider.
He added that his friends and family were shocked to see him receive abuse as a gay man in the past, but that it helped them to realize hatred exists.
"Most of our progress on rights happened during periods without mass censorship like this and pushing it underground, I think, has coincided with reversals," he said.
"Obviously, I don't like people saying things that I consider awful but I worry that the solution of censoring that doesn't work as well as you might hope. So I don't know the answer, this stuff is really complicated, but I am worried that too much censorship is actually harmful and that's may have been where we ended up."
Earlier this week, the company adjusted its moderation guidelines to allow statements on its platforms claiming that LGBTQ+ people are "mentally ill" and removed trans and nonbinary-themed chat options from its Messenger app, features that had previously been showcased as part of the company's support for Pride Month.
Schultz also said that he does not think that censorship and cancel culture have helped the LGBTQ+ movement.
He wrote, "We don't enforce these things perfectly," and cited an example of a mistake of taking down images of two men kissing and removing a slur word toward gay people rather than a deliberate move by a "bigoted person in operations."
Schultz added, "So the more rules we have, the more mistakes we make…Moderation is hard and we'll always get it wrong somewhat. The more rules, the more censorship, the more we'll harm speech from our own community pushing back on hatred."
The company's latest decision to roll back its DEI programs has sparked intense internal debate and public scrutiny. The announcement, delivered via an internal memo by VP of HR Janelle Gale, said that the company would dismantle its dedicated DEI team and eliminate diversity programs in its hiring process.
Schultz told BI in an interview earlier this week that the election of Donald Trump and a broader shift in public sentiment around free speech played significant roles in these decisions.
He acknowledged that internal and external pressures had led Meta to adopt more restrictive policies in recent years but said the company is now taking steps to regain control over its approach to content moderation.
One employee lamented the rollback as "another step backward" for Meta, while others raised concerns about the message it sends to marginalized communities that rely on Meta's platforms.
At Meta's offices in Silicon Valley, Texas, and New York, facilities managers were instructed to remove tampons from men's bathrooms, which the company had provided for nonbinary and transgender employees who use the men's room and may require sanitary products, The New York Times reported on Friday.
Meta didn't immediately respond to a request for comment from BI.
You can email Jyoti Mann at [email protected], send her a secure message on Signal @jyotimann.11, or DM her via X @jyoti_mann1.
If you're a current or former Meta employee, contact this reporter from a nonwork device securely on Signal at +1-408-905-9124 or email him at [email protected].
Mark Zuckerberg told Joe Rogan he's "optimistic" about how Trump will impact American businesses.
On the nearly three-hour podcast episode, Zuckerberg said he thinks Trump will defend American tech abroad.
The conversation comes days after Meta got rid of third-party fact-checkers.
Mark Zuckerberg told Joe Rogan in a podcast episode on Friday that he thinks President-elect Donald Trump will help American businesses, calling technology companies in particular a "bright spot" in the economy.
"I think it's a strategic advantage for the United States that we have a lot of the strongest companies in the world, and I think it should be part of the US' strategy going forward to defend that," Zuckerberg said during the nearly three-hour episode of 'The Joe Rogan Experience.' "And it's one of the things that I'm optimistic about with President Trump is, I think he just wants America to win."
Zuckerberg told Rogan the government should defend America's tech industry abroad to ensure it remains strong, and that he is "optimistic" Trump will do so.
The episode dropped just days after Meta significantly altered its content moderation policies, replacing third-party fact checkers with a community-notes system similar to that on Elon Musk's X. Trump praised the change earlier this week and said it was "probably" a response to threats he's made against the Meta CEO.
Zuckerberg, clad in a black tee and gold necklace emblematic of his new style, told Rogan that the change reflects the nation's "cultural pulse" as expressed in the presidential election results. At the beginning of the episode, Zuckerberg bashed how President Joe Biden's administration handled content moderation, especially during the pandemic.
A representative for Biden didn't immediately respond to a request for comment from Business Insider.
In the wake of Meta’s decision to remove its third-party fact-checking system and loosen content moderation policies, Google searches on how to delete Facebook, Instagram, and Threads have been on the rise. People who are angry with the decision accuse Meta CEO Mark Zuckerberg of cozying up to the incoming Trump administration at the expense […]
Staffers criticized the move in comments on the post announcing the changes on the internal platform Workplace. More than 390 employees reacted with a teary-eyed emoji to the post, which was seen by Business Insider and written by the company's vice president of human resources, Janelle Gale.
Gale said Meta would "no longer have a team focused on DEI." Over 200 workers reacted with a shocked emoji, 195 with an angry emoji, while 139 people liked the post, and 57 people used a heart emoji.
"This is unfortunate disheartening upsetting to read," an employee wrote in a comment that had more than 200 likes.
Another person wrote, "Wow, we really capitulated on a lot of our supposed values this week."
A different employee wrote, "What happened to the company I joined all those years ago."
Reactions were mixed, though. One employee wrote, "Treating everyone the same, no more, no less, sounds pretty reasonable to me." The comment had 45 likes and heart reactions.
The decision follows sweeping changes made to Meta's content-moderation policies, which Meta CEO Mark Zuckerberg announced Tuesday. The changes include eliminating third-party fact-checkers in favor of a community-notes model similar to that on Elon Musk's X.
As part of the changes to Meta's policy on hateful conduct, the company said it would allow users to say people in LGBTQ+ communities are mentally ill for being gay or transgender.
"We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like 'weird,'" Meta said in the updated guidelines.
One employee wrote in response to the DEI changes that, in addition to the updated hate-speech guidelines, "this is another step backward for Meta."
They added: "I am ashamed to work for a company which so readily drops its apparent morals because of the political landscape in the US."
In the post announcing the decision to drop many of its DEI initiatives, Gale said the term DEI had "become charged," partly because it's "understood by some as a practice that suggests preferential treatment of some groups over others."
"Having goals can create the impression that decisions are being made based on race or gender," she said, adding: "While this has never been our practice, we want to eliminate any impression of it."
One employee told BI the moves "go against what we as a company have tried to do to protect people who use our platforms, and I have found all of this really hard to read."
Meta did not respond to a request for comment by the time of publication.
Axios reports that Meta is eliminating its biggest DEI efforts, effective immediately, including ones that focused on hiring a diverse workforce, training, and sourcing supplies from diverse-owned companies. Its DEI department will also be eliminated. In a memo leaked to the outlet, Meta said it was making these changes because the “legal and policy landscape […]
Meta is dropping many of its DEI initiatives, BI confirmed.
The company sent a memo announcing the changes on Friday.
Meta's VP of human resources said the legal and policy landscape in the US was changing.
Meta is rolling back its DEI programs, Business Insider has learned.
The company's vice president of human resources, Janelle Gale, announced the move on Friday in a post on its internal communication platform, Workplace, which was seen by BI.
"We will no longer have a team focused on DEI," Gale wrote in the memo.
"The legal and policy landscape surrounding diversity, equity and inclusion efforts in the United States is changing," she wrote. "The Supreme Court of the United States has recently made decisions signaling a shift in how courts will approach DEI."
She added the term DEI has "become charged" partly because it is "understood by some as a practice that suggests preferential treatment of some groups over others."
Meta confirmed the changes when contacted by Business Insider.
Meta is the latest company to back away from DEI in the wake of backlash, legal challenges, and the reelection of Donald Trump as president.
Read the full memo:
Hi all,
I wanted to share some changes we're making to our hiring, development and procurement practices. Before getting into the details, there is some important background to lay out:
The legal and policy landscape surrounding diversity, equity and inclusion efforts in the United States is changing. The Supreme Court of the United States has recently made decisions signaling a shift in how courts will approach DEI. It reaffirms longstanding principles that discrimination should not be tolerated or promoted on the basis of inherent characteristics. The term "DEI" has also become charged, in part because it is understood by some as a practice that suggests preferential treatment of some groups over others.
At Meta, we have a principle of serving everyone. This can be achieved through cognitively diverse teams, with differences in knowledge, skills, political views, backgrounds, perspectives, and experiences. Such teams are better at innovating, solving complex problems and identifying new opportunities which ultimately helps us deliver on our ambition to build products that serve everyone. On top of that, we've always believed that no one should be given — or deprived of — opportunities because of protected characteristics, and that has not changed.
Given the shifting legal and policy landscape, we're making the following changes:
On hiring, we will continue to source candidates from different backgrounds, but we will stop using the Diverse Slate Approach. This practice has always been subject to public debate and is currently being challenged. We believe there are other ways to build an industry-leading workforce and leverage teams made up of world-class people from all types of backgrounds to build products that work for everyone.
We previously ended representation goals for women and ethnic minorities. Having goals can create the impression that decisions are being made based on race or gender. While this has never been our practice, we want to eliminate any impression of it.
We are sunsetting our supplier diversity efforts within our broader supplier strategy. This effort focused on sourcing from diverse-owned businesses; going forward, we will focus our efforts on supporting small and medium sized businesses that power much of our economy. Opportunities will continue to be available to all qualified suppliers, including those who were part of the supplier diversity program.
Instead of equity and inclusion training programs, we will build programs that focus on how to apply fair and consistent practices that mitigate bias for all, no matter your background.
We will no longer have a team focused on DEI. Maxine Williams is taking on a new role at Meta, focused on accessibility and engagement.
What remains the same are the principles we've used to guide our People practices:
We serve everyone. We are committed to making our products accessible, beneficial and universally impactful for everyone.
We build the best teams with the most talented people. This means sourcing people from a range of candidate pools, but never making hiring decisions based on protected characteristics (e.g. race, gender etc.). We will always evaluate people as individuals.
We drive consistency in employment practices to ensure fairness and objectivity for all. We do not provide preferential treatment, extra opportunities or unjustified credit to anyone based on protected characteristics nor will we devalue impact based on these characteristics.
We build connection and community. We support our employee communities, people who use our products, and those in the communities where we operate. Our employee community groups (MRGs) continue to be open to all.
Meta has the privilege to serve billions of people every day. It's important to us that our products are accessible to all, and are useful in promoting economic growth and opportunity around the world. We continue to be focused on serving everyone, and building a multi-talented, industry-leading workforce from all walks of life.
The International Fact-Checking Network warned of Meta's move to crowdsourced fact-checking.
A group of 71 fact checkers said the change is "a step backward" for accuracy.
The group proposed crowdsourcing in conjunction with professionals, a "new model."
The fact-checking group that has worked with Meta for years wrote Mark Zuckerberg an open letter on Thursday, warning him about the company's move toward crowdsourced moderation in the US.
"Fact-checking is essential to maintaining shared realities and evidence-based discussion, both in the United States and globally," wrote the International Fact-Checking Network, part of the nonprofit media organization Poynter Institute.
As of 11:30 p.m. ET on Thursday, 71 organizations from across the world had signed the letter. Poynter is updating its post as the list of organizations grows.
The group said Meta's decision, announced Tuesday, to replace third-party fact-checkers with crowdsourced moderation on Facebook, Instagram, and Threads in the US "is a step backward for those who want to see an internet that prioritizes accurate and trustworthy information."
Meta told the IFCN about the end of its partnership less than an hour before it published the post announcing the switch, Business Insider reported. The change could have serious financial repercussions for the fact-checking organizations that rely on Meta for revenue.
The organization said Meta has fact-checking partnerships in more than 100 countries.
"If Meta decides to stop the program worldwide, it is almost certain to result in real-world harm in many places," IFCN wrote. Meta has not announced plans to end the fact-checking program globally.
Meta said it plans to build a crowdsourced moderation system this year similar to the community notes used by Elon Musk's X, where people can weigh in on posts ranging from the serious to the mundane. Musk laid off hundreds of X's trust and safety workers after he bought the company in 2022, and X has since been slow to act on some misinformation, BI previously reported.
IFCN wrote that community notes could be used in conjunction with professional fact-checkers, a "new model" for collaboration.
"The need for this is great: If people believe social media platforms are full of scams and hoaxes, they won't want to spend time there or do business on them," IFCN wrote.
Nearly 3.3 billion people used a Meta product every day in September, according to the company's most recent financials — more than 40% of the world's population.
Ad insiders who spoke to BI this week said they didn't expect the changes to hurt the company's business. Meta has more than a fifth of the US digital ad market — in second place behind Google, per data from BI's sister company EMARKETER.
Meta employees are furious with the company’s newly announced content moderation changes that will allow users to say that LGBTQ+ people have “mental illness,” according to internal conversations obtained by 404 Media and interviews with five current employees. The changes were part of a larger shift Mark Zuckerberg announced Tuesday to do far less content moderation on Meta platforms.
“I am LGBT and Mentally Ill,” one post by an employee on an internal Meta platform called Workplace reads. “Just to let you know that I’ll be taking time out to look after my mental health.”
On Tuesday, Mark Zuckerberg announced that the company would be getting “back to our roots around free expression” to allow “more speech and fewer mistakes.” The company said “we’re getting rid of a number of restrictions on topics like immigration, gender identity, and gender that are the subject of frequent political discourse and debate.” A review of Meta’s official content moderation policies shows that some of the only substantive changes to the policy were made specifically to allow for “allegations of mental illness or abnormality when based on gender or sexual orientation.” It has long been known that being LGBTQ+ is not a sign of “mental illness,” and the false idea that sexuality or gender identification is a mental illness has long been used to stigmatize and discriminate against LGBTQ+ people.
Google searches for how to cancel and delete Facebook, Instagram, and Threads accounts have seen explosive rises in the U.S. since Meta CEO Mark Zuckerberg announced that the company will end its third-party fact-checking system, loosen content moderation policies, and roll back previous limits to the amount of political content in user feeds. Critics see […]
The reaction among creators to Meta's content-moderation changes has largely fallen along political lines.
Some influencers worry the changes could cause harm to the LGBTQ+ community.
Others questioned Meta's decision to feature more political content.
Getting "Zucked" — a term for having your account suspended or content removed due to community violations — is a staple in the creator lexicon.
Despite that, creators who spoke with Business Insider had mixed reactions to Meta CEO Mark Zuckerberg's plans to reduce content moderation in the name of free speech.
On Tuesday, Meta unveiled new policies that included winding down fact-checking, loosening content moderation, and introducing X-style "Community Notes."
The creator community largely reacted along political lines, with some left-leaning influencers expressing disappointment.
"This is really about just pandering to the Trump administration in a way that feels extremely obvious," LGBTQ+ advocate and "Gay News" host Josh Helfgott told BI.
Left-leaning filmmaker Michael McWhorter also said he felt the changes were catering to Trump and his MAGA base.
"You're not trying to balance things out," McWhorter said of Meta. "We are shifting to the other side of things."
Elsewhere, some right-leaning creators cheered the changes.
Christopher Townsend, an Air Force vet and conservative rapper with over 300,000 Instagram followers, told BI he thought the policy overhaul was "a great step toward the decentralization of information and the end to the control legacy media has had on the prevailing narrative."
Instagram head Adam Mosseri posted a video on Wednesday outlining how the new policies would impact creators. He said the company would correct its "over-enforcement" of content moderation and begin recommending political content again.
"If you're a creator who likes to post about political content, this should mean that you feel comfortable doing so on any of our platforms," Mosseri said. "We will now show political recommendations."
Meta didn't respond to a request for comment.
Some are wary of Community Notes
As part of the policy overhaul, Meta is getting rid of fact-checkers in favor of Community Notes in the style of Elon Musk's X. Users will be able to volunteer to contribute to Community Notes, which will appear on content when people with a range of different perspectives agree a correction is in order.
"Like X, it gives the user community more authority over the platform instead of biased third-party administrators," Townsend said.
McWhorter said that while Community Notes were a "great equalizer," he felt they were not an adequate replacement for fact-checking. He said he wished Meta would rely on a combination of both systems.
A former Instagram staffer told BI that they felt placing the responsibility to moderate content on users and creators "on a platform with massive global reach and historical harmful content issues" was a step in the wrong direction. They asked for anonymity to protect business relationships; their identity is known to BI.
Concerns about anti-LGBTQ+ discourse
Helfgott expressed concern about Meta's plan to decrease moderation around certain political topics. The company's blog post specifically noted immigration and gender identity as areas of debate where it plans to decrease restrictions.
Helfgott said that while Meta's plans were described in the language of "political discourse," he felt the changes could lead to bullying of the LGBTQ+ community.
"We do allow allegations of mental illness or abnormality when based on gender or sexual orientation," the company wrote, "given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like 'weird.'"
"This is the most anti-LGBTQ announcement that a social-media platform has made in recent memory," Helfgott said.
While McWhorter told BI he felt his content had been Zucked — or unfairly suppressed — in the past, he said he'd prefer a stricter moderating system even if it had "flaws."
"I'd rather that I take the hit for a joke that it didn't understand than that stuff being allowed to be spread all over the platform," he said, referring to potentially harmful posts.
Meta's increased political emphasis marks an about-face
Some creators were flummoxed by Meta's about-face on the amount of political content it plans to recommend, given that the company had significantly cut back on promoting political content in feeds in recent years.
Malynda Hale, a creator and activist with 65,000 followers, said this change could benefit political creators but questioned the company's motives.
"I think the fact that Meta is going to be serving up more political content is actually positive for creators like myself, but I don't think it's with the intention to keep the community informed," she told BI.
She said she felt Meta wanted to boost engagement even at the cost of division and disagreement.
Despite some misgivings, the creators who spoke with BI said they weren't going anywhere.
"I'll work with the system as it's presented to me, and I'll find my way to work around it," McWhorter said. "I constantly have to do that on all different platforms."
Helfgott said he felt "handcuffed" by Meta because if he stopped posting on Instagram, he would lose out on millions of people seeing his content each month.
"Meta knows this," he said. "They know that creators may not like this, but we need the reach, and we will keep posting there."
In a reply on Threads to a user's post criticizing Meta's influence and suggesting that people feel trapped on the platform, Zuckerberg struck a defiant tone.
"No – I'm counting on these changes actually making our platforms better," he wrote.
He continued: "I think Community Notes will be more effective than fact-checkers, reducing the number of people whose accounts get mistakenly banned is good, people want to be able to discuss civic topics and make arguments that are in the mainstream of political discourse, etc. Some people may leave our platforms for virtue signaling, but I think the vast majority and many new users will find that these changes make the products better."
Zuckerberg's response to the Threads user named Mary-Frances Makichen, who has 253 followers and is a "Spiritual Director" and author according to their bio, came just one day after Meta announced it would replace its third-party fact-checking partnerships with a crowdsourced Community Notes system similar to the one used by X.
Mass departures from social media platforms for symbolic reasons are not unprecedented.
On Election Day in the US, more than a quarter million X users deleted their accounts in protest against owner Elon Musk's deepening ties to the Trump administration.
Zuckerberg, however, appears unfazed, betting that Community Notes will enhance Meta's user experience and attract new audiences rather than drive them away.
Content moderation has always been a nightmare for Meta.
Its new content-moderation policy is a huge change — and it could be an improvement.
Mark Zuckerberg's "apology tour" from the past few years seems to be officially over.
Mark Zuckerberg's changes to Meta's content-moderation policies are potentially huge.
To fully understand their gravity, it's useful to look at how Meta got here. And to consider what these changes might actually mean for users: Are they a bow to an incoming Trump administration? Or an improvement to a system that's gotten Zuckerberg and Co. lots of heat before? Or a little of both?
Content moderation has always been a pit of despair for Meta. In its blog post announcing the changes on Tuesday, Meta's new head of policy, Joel Kaplan, talked about wanting to get back to Facebook's roots in "free speech." Still, those roots contain a series of moderation fires, headaches, and constant adjustments to the platform's policies.
Starting in 2016, moderation troubles just kept coming like a bad "We Didn't Start the Fire" cover, among them the Facebook Files and whistleblower leaks about Instagram and teen mental health.
Whatever your political alignment, it seems like Meta has been trapped in a vicious cycle of making a policy — or lacking a policy — then reversing itself to try to clean up a mess.
That's maybe until now. As Zuckerberg posted on Threads on Wednesday, "Some people may leave our platforms for virtue signaling, but I think the vast majority and many new users will find that these changes make the products better."
Maybe the big changes were already brewing this past September when Zuckerberg appeared at a live event and said, "One of the things that I look back on and regret is I think we accepted other people's view of some of the things that they were asserting that we were doing wrong, or were responsible for, that I don't actually think we were."
In other words, as of this week, the apology tour seems to have ended.
What will Meta's changes mean for you and me, the users?
What will the changes mean? Who knows! I can make a few predictions:
The "community note" system might work pretty well — or at least not worse than the current human- and AI-led fact-checking system.
It's also possible that while certain content might exist on the platform, you won't actually come across it because it will have been downgraded. "Freedom of speech, not freedom of reach" has been X's mantra (though considering the flow of truly vile content that has proliferated in my feed there in the past year or so, I don't think that's been particularly effective).
One other piece of the announcement is that Meta will focus its AI-powered filtering efforts on the highest-risk content (terrorism, drugs, and child endangerment). For lesser violations, the company said, it will rely more on user reports. Meta hasn't given details on how exactly this will work, but I imagine it could have a negative effect on common issues like bullying and harassment.
A large but less glamorous part of content moderation is removing "ur ugly" comments on Instagram — and that's the kind of stuff that will rely on user reporting.
It's also quite possible that bad actors will take advantage of the opening. Facebook is nothing if not a place to buy used furniture while various new waves of pillagers attempt to test and game the algorithms for profit or menace — just consider the current wave of AI slop, some of which appears at least in part to be a profitable scam operation run from outside the US.
What do the changes mean for Meta?
If these changes had been rolled out slowly, one at a time, they might have seemed like reasonable measures just on their face. Community notes? Sure. Loosening rules on certain hot political topics? Well, not everyone will like it, but Meta can claim some logic there. Decreasing reliance on automatic filters and admitting that too many non-violations have been swept up in AI dragnets? People would celebrate that.
No one thought Meta's moderation before the announced changes was perfect. There were lots of justified complaints about how it banned too much stuff by mistake — which this new policy is aiming to fix.
Still, it's impossible to weigh the merits of each aspect of the new policy and have blinders on when it comes to the 800-pound political gorilla in the room.
There's one pretty obvious way of looking at Meta's announcement of sweeping changes to its moderation policy: It's a move to cater to an incoming Trump administration. It's a sign that Zuckerberg has shifted to the right, as he drapes himself in some of the cultural signifiers of the bro-y Zynternet (gold chain, $900,000 watch, longer hair, new style, front row at an MMA match).
Together, every piece of this loudly signals that Zuckerberg either A., genuinely believed he'd been forced to cave on moderation issues in the past, or B., knows that making these changes will please Trump. I don't really think the distinction between A and B matters too much anyway. (Meta declined to comment.)
This probably isn't the last of the changes
I try to avoid conflating "Meta" with "Mark Zuckerberg" too much. It's a big company! There are many smart people who care deeply about the lofty goals of social networking who create policy and carry out the daily work of trust and safety.
Part of me wonders how much Zuckerberg wishes this boring and ugly part of the job would fade away — there are so many more shiny new things to work on, like AI or mixed-reality smart glasses. Reworking the same decade-old policies so that people can insult each other 10% more is probably less fun than MMA fighting or talking to AI researchers.
Content moderation has always been a nightmare for Meta. Scaling it back, allowing more speech on controversial topics, and outsourcing fact-checking to the community seems like a short-term fix for having to deal with this unpleasant and thankless job. I can't help but imagine that another overhaul will come due sometime in the next four years.
The decision by Meta CEO Mark Zuckerberg to end Facebook's work with third-party fact-checkers and ease some of its content restrictions is a potentially "transformative" moment for the platform, experts said, but one that is unlikely to shield the company from liability in ongoing court proceedings.
The updates were announced by Zuckerberg, who said in a video that the previous content restrictions used on Facebook and Instagram — which were put into place after the 2016 elections — had "gone too far" and allowed for too much political bias from outside fact-checkers.
Meta will now replace that system with a "Community Notes"-style program, similar to the approach taken by social media platform X, he said. X is owned by Elon Musk, the co-director of the planned Department of Government Efficiency.
"We’ve reached a point where it’s just too many mistakes and too much censorship," Zuckerberg said. "The recent elections also feel like a cultural tipping point toward once again prioritizing speech. So we are going to get back to our roots, focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms."
The news was praised by President-elect Donald Trump, who told Fox News Digital that he thought Meta's presentation "was excellent." "They have come a long way," Trump said.
Still, it is unlikely to ease the legal liability for Meta, which in recent months has been hit with the possibility of a multibillion-dollar class action lawsuit stemming from a privacy scandal involving the political consulting firm Cambridge Analytica.
The Supreme Court in November rejected Meta's effort to block the lawsuit, leaving in place an appellate court ruling that allowed the class action suit to move forward.
Meta has also been the target of multiple Republican-led investigations in Congress. Republicans on the House Subcommittee on the Weaponization of the Federal Government probed Meta's activity and communication with the federal government and the Biden administration last year as part of a broader investigation into alleged censorship.
The platform also came under scrutiny by the House Oversight Committee in August, as part of an investigation into claims that it suppressed information about the July 13 assassination attempt on Trump.
Combined, these factors make it unlikely that Meta will see its legal problems go away anytime soon, law professor and Fox News contributor Jonathan Turley told Fox News Digital in an interview.
"Facebook is now looking at a tough patch ahead," he said. "Not only do the Republicans carry both houses of Congress as well as the White House, but there is ongoing litigation in the social media case in Texas."
Additionally, the Supreme Court's conservative majority is unlikely to be sympathetic to Meta's views in any case centered on First Amendment protections and rights to free speech.
The House investigations and litigation have both forced more of Meta's actions into public view — something Turley said he expects to come under further scrutiny in the discovery process in Missouri v. Biden, a case that centers on allegations of political censorship.
"That discovery is still revealing new details," Turley said. "So Meta understood that in the coming months, more details would be forthcoming on its censorship program."
Still, this "could be a transformative moment," Turley said.
"And an alliance of Zuckerberg with [Elon] Musk could turn the tide in this fight over free speech," Turley said. "And as one of Zuckerberg's most vocal critics I welcome him to this fight."
Mark Zuckerberg's plan to replace fact checkers with "community notes" is a familiar one.
A similar system of community moderation is already in place on Elon Musk's X.
On X, community notes let users add context to posts. Meta has said it seems to work well.
Mark Zuckerberg says Meta will use "community notes" to moderate content on its platforms like Facebook and Instagram — but what exactly does that mean, and how has it worked on other platforms?
Meta said the feature would function much like it does on Elon Musk's platform, where certain contributors can add context to posts they think are misleading or need clarification. This type of user-generated moderation would largely replace Meta's human fact-checkers.
"We've seen this approach work on X — where they empower their community to decide when posts are potentially misleading and need more context and people across a diverse range of perspectives decide what sort of context is helpful for other users to see," Meta said in its announcement Tuesday.
So, how will it work for Meta and its users? Here's a primer on "community notes" — how it came to be, and how it's been working so far on X:
How the 'community notes' feature was born
The idea of "community notes" first came about at Twitter in 2019, when a team of developers at the company, now called X, theorized that a crowdsourcing model could solve the main problems with content moderation. Keith Coleman, X's vice president of product who helped create the feature, told Asterisk magazine about its genesis in an interview this past November.
Coleman told the outlet that X's previous fact-checking procedures, run by human moderators, had three main problems: dedicated staff couldn't fact-check claims in users' posts fast enough, there were too many posts to monitor, and the general public didn't trust a Big Tech company to decide what was or wasn't misleading.
Coleman told Asterisk that his team developed a few prototypes and settled on one that allowed users to submit notes that could show up on a post.
"The idea was that if the notes were reasonable, people who saw the post would just read the notes and could come to their own conclusion," he said.
And in January 2021, the company launched a pilot program of the feature, then called "Birdwatch," just weeks after the January 6 Capitol riot. On its first day, the pilot program had 500 contributors.
Coleman told the outlet that for the first year or so of the pilot program — which showed community notes not directly on users' posts but on a separate "Birdwatch" website — the product was very basic, but over time, it evolved and performed much better than expected.
When Musk took over the platform in 2022, he expanded the program beyond the US, renamed it "community notes," and allowed more users to become contributors.
It's unclear how many users contribute to community notes, but the feature is one of the platform's main sources of content moderation. X didn't immediately respond to a request for comment from BI.
How the community notes feature works on X
The community notes feature is set to roll out on Meta's Instagram, Facebook, and Threads platforms over the next few months, the company said in a statement shared with BI. Meta said the feature on its platforms would be similar to X's.
On X, community notes act as a crowd-sourced way for users themselves to moderate content without the company directly overseeing that process.
A select group of users who sign up as "contributors" can write a note adding context to any post that could be misleading or contain misinformation.
Then, other contributors can rate that note as helpful or not. Once enough contributors from different points of view rate the note as helpful, a public note gets added underneath the post in question.
X has made the complex ranking algorithm behind the feature transparent and open-source, and users can view it online and download the latest data.
X says that community notes "do not represent X's viewpoint and cannot be edited or modified by our teams," adding that a community-flagged post is only removed if it violates X's rules, terms of service, or privacy policies.
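To make that cross-perspective requirement concrete, here is a minimal sketch in Python of the bridging idea behind community notes. It is an illustration only, not X's or Meta's production algorithm: X's open-source system infers each rater's viewpoint from their rating history using matrix factorization, whereas this toy version assumes viewpoint clusters are already known, and the function name and thresholds are invented for the example.

```python
from collections import defaultdict

# Toy sketch of bridging-based note scoring. NOT X's production algorithm:
# the real open-source system infers rater viewpoints via matrix
# factorization; here we assume each rater's viewpoint cluster is known.

def should_publish(ratings, min_per_cluster=3, threshold=0.7):
    """ratings: list of (viewpoint_cluster, rated_helpful) pairs.

    Publish only if raters in every represented cluster found the note
    helpful, so approval from one side alone is never enough.
    """
    votes = defaultdict(list)
    for cluster, helpful in ratings:
        votes[cluster].append(helpful)

    if len(votes) < 2:  # require a range of perspectives, not one bloc
        return False
    for cluster_votes in votes.values():
        if len(cluster_votes) < min_per_cluster:
            return False  # too little signal from this cluster so far
        if sum(cluster_votes) / len(cluster_votes) < threshold:
            return False  # this cluster doesn't agree the note helps
    return True

# A note endorsed by only one cluster stays hidden; agreement across
# clusters gets it shown publicly.
one_sided = [("left", True)] * 6
bridged = [("left", True)] * 3 + [("right", True)] * 3
print(should_publish(one_sided))  # False
print(should_publish(bridged))    # True
```

In X's real system, the clusters aren't labels anyone supplies; they emerge from how users have rated past notes, which is part of what makes the approach harder to game with a single bloc of coordinated accounts.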
Similar to X, Meta said its community notes will be written and rated by contributing users. It said the company will not write notes or decide which ones show up. Also like X, Meta said that its community notes "will require agreement between people with a range of perspectives to help prevent biased ratings."
Facebook, Instagram, and Threads users can sign up now to be among the first contributors to the new tool.
"As we make the transition, we will get rid of our fact-checking control, stop demoting fact-checked content and, instead of overlaying full-screen interstitial warnings you have to click through before you can even see the post, we will use a much less obtrusive label indicating that there is additional information for those who want to see it," Joel Kaplan, Meta's chief global affairs officer, said in Tuesday's statement.
Potential pros and cons of community notes
One possible issue with the feature is that by the time a note gets added to a potentially misleading post, the post may have already been widely viewed — spreading misinformation before it can be tamped down.
Another issue is that for a note to be added, contributors from across the political spectrum need to agree that a post is problematic or misleading, and in today's polarized political environment, that kind of consensus on facts has become increasingly difficult.
One possible advantage to the feature, though, is that the general public may be more likely to trust a consensus from their peers rather than an assessment handed down by a major corporation.
Maarten Schenk, cofounder and chief technology officer of Lead Stories, a fact-checking outlet, told the Poynter Institute that one benefit of X's community notes is that they don't use patronizing language.
"It avoids accusations or loaded language like 'This is false,'" Schenk told Poynter. "That feels very aggressive to a user."
And community notes can help combat misinformation in some ways. For example, researchers at the University of California, San Diego's Qualcomm Institute found in an April 2024 study that the X feature helped offset false health information in posts related to COVID-19 and added accurate context.
In announcing the move, Zuckerberg said Meta's past content moderation practices have resulted in "too many mistakes" and "too much censorship." He said the new feature will prioritize free speech and help restore free expression on Meta's platforms.