Mark Zuckerberg says Meta's 'community notes' are inspired by Elon Musk's X. Here's how they work — and how they don't.
- Mark Zuckerberg's plan to replace fact checkers with "community notes" is a familiar one.
- A similar system of community moderation is already in place on Elon Musk's X.
- On X, community notes let users add context to posts. Meta has said it seems to work well.
Mark Zuckerberg says Meta will use "community notes" to moderate content on its platforms like Facebook and Instagram. But what exactly does that mean, and how has it worked on other platforms?
Meta said the feature would function much like it does on Elon Musk's platform, where certain contributors can add context to posts they think are misleading or need clarification. This type of user-generated moderation would largely replace Meta's human fact-checkers.
"We've seen this approach work on X β where they empower their community to decide when posts are potentially misleading and need more context and people across a diverse range of perspectives decide what sort of context is helpful for other users to see," Meta said in its announcement Tuesday.
Musk, who has a sometimes-tense relationship with Zuckerberg, appeared to approve of the move, posting "This is cool" on top of a news article about the changes at Meta.
So, will it be cool for Meta and its users? Here's a primer on "community notes": how it came to be, and how it's been working so far on X.
How the 'community notes' feature was born
The idea of "community notes" first came about at Twitter in 2019, when a team of developers at the company, now called X, theorized that a crowdsourcing model could solve the main problems with content moderation. Keith Coleman, X's vice president of product who helped create the feature, told Asterisk magazine about its genesis in an interview this past November.
Coleman told the outlet that X's previous fact-checking procedures, run by human moderators, had three main problems: dedicated staff couldn't fact-check claims in users' posts fast enough, there were too many posts to monitor, and the general public didn't trust a Big Tech company to decide what was or wasn't misleading.
Coleman told Asterisk that his team developed a few prototypes and settled on one that allowed users to submit notes that could show up on a post.
"The idea was that if the notes were reasonable, people who saw the post would just read the notes and could come to their own conclusion," he said.
And in January 2021, the company launched a pilot program of the feature, then called "Birdwatch," just weeks after the January 6 Capitol riot. On its first day, the pilot program had 500 contributors.
Coleman told the outlet that for the first year or so of the pilot program (which showed community notes not directly on users' posts but on a separate "Birdwatch" website) the product was very basic, but over time it evolved and performed much better than expected.
When Musk took over the platform in 2022, he expanded the program beyond the US, renamed it "community notes," and allowed more users to become contributors.
Around the same time, he disassembled Twitter's trust and safety team, undid many of the platform's safety policies, and lowered the guardrails on content moderation. Musk said in 2022 that the community notes tool had "incredible potential for improving information accuracy."
It's unclear how many users contribute to community notes, which is now one of the platform's main sources of content moderation. X didn't immediately respond to a request for comment from BI.
How the community notes feature works on X
The community notes feature is set to roll out on Meta's Instagram, Facebook, and Threads platforms over the next few months, the company said in a statement shared with BI. Meta said the feature on its platforms would be similar to X's.
On X, community notes act as a crowd-sourced way for users themselves to moderate content without the company directly overseeing that process.
A select group of users who sign up as "contributors" can write a note adding context to any post that could be misleading or contain misinformation.
Then, other contributors can rate that note as helpful or not. Once enough contributors from different points of view rate the note as helpful, a public note appears underneath the post in question.
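The "diverse agreement" rule described above can be illustrated with a toy model. This is only a sketch of the general idea, not X's actual open-source algorithm (which scores notes with matrix factorization over the full rating matrix); the cluster labels, thresholds, and function name here are all hypothetical.

```python
# Toy sketch of "diverse agreement": a note is published only when it
# collects enough helpful ratings AND those ratings come from
# contributors in more than one viewpoint cluster.
from collections import defaultdict


def note_is_published(ratings, min_helpful=5, min_clusters=2):
    """ratings: list of (contributor_cluster, is_helpful) tuples.

    Cluster labels stand in for the 'range of perspectives' that the
    real system infers from contributors' past rating behavior.
    """
    helpful_by_cluster = defaultdict(int)
    for cluster, is_helpful in ratings:
        if is_helpful:
            helpful_by_cluster[cluster] += 1

    total_helpful = sum(helpful_by_cluster.values())
    return total_helpful >= min_helpful and len(helpful_by_cluster) >= min_clusters


# Six helpful ratings from one cluster alone are not enough:
print(note_is_published([("left", True)] * 6))  # False
# Helpful ratings spread across two clusters cross both thresholds:
print(note_is_published([("left", True)] * 3 + [("right", True)] * 3))  # True
```

The point of the second check is the one the article makes: raw popularity with a single bloc of raters is not sufficient; agreement has to bridge perspectives.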
For instance, here's an example of a community note attached to a recent X post:
[Embedded X post from @MomsPostingLs, January 7, 2025, with a community note attached]
X has made the complex ranking algorithm behind the feature transparent and open-source, and users can view it online and download the latest data.
X says that community notes "do not represent X's viewpoint and cannot be edited or modified by our teams," adding that a community-flagged post is only removed if it violates X's rules, terms of service, or privacy policies.
Similar to X, Meta said its community notes will be written and rated by contributing users. It said the company will not write notes or decide which ones show up. Also like X, Meta said that its community notes "will require agreement between people with a range of perspectives to help prevent biased ratings."
Facebook, Instagram, and Threads users can sign up now to be among the first contributors to the new tool.
"As we make the transition, we will get rid of our fact-checking control, stop demoting fact-checked content and, instead of overlaying full-screen interstitial warnings you have to click through before you can even see the post, we will use a much less obtrusive label indicating that there is additional information for those who want to see it," Joel Kaplan, Meta's chief global affairs officer, said in Tuesday's statement.
Potential pros and cons of community notes
One possible issue with the feature is that by the time a note gets added to a potentially misleading post, the post may have already been widely viewed, spreading misinformation before it can be tamped down.
Another issue is that for a note to be added, contributors from across the political spectrum need to agree that a post is problematic or misleading. In today's polarized political environment, agreeing on facts has become increasingly difficult.
One possible advantage to the feature, though, is that the general public may be more likely to trust a consensus from their peers rather than an assessment handed down by a major corporation.
Maarten Schenk, cofounder and chief technology officer of Lead Stories, a fact-checking outlet, told the Poynter Institute that one benefit of X's community notes is that it doesn't use patronizing language.
"It avoids accusations or loaded language like 'This is false,'" Schenk told Poynter. "That feels very aggressive to a user."
And community notes can help combat misinformation in some ways. For example, researchers at the University of California, San Diego's Qualcomm Institute found in an April 2024 study that the X feature helped offset false health information in posts related to COVID-19 and helped add accurate context.
In announcing the move, Zuckerberg said Meta's past content moderation practices have resulted in "too many mistakes" and "too much censorship." He said the new feature will prioritize free speech and help restore free expression on Meta's platforms.
Both President-elect Donald Trump and Musk have championed the cause of free speech online, railed against content moderation as politically biased censorship, and criticized Zuckerberg for his role overseeing the public square of social media.
One key person appeared pleased with the change: Trump said Tuesday that Zuckerberg had "probably" made the changes in response to previous threats issued by the president-elect.