In an attempt to justify Facebook’s decision to shut down programs monitoring the veracity of users’ posts on the site, Mark Zuckerberg has equated fact-checking with censorship. Ilya Ber, founder of the fact-checking publication Provereno (lit. “Checked”) and a participant in Facebook's fact-checking program through a contract with Estonia’s Delfi publication, argues that the real issues lie with the platform's algorithms, not the fact-checkers. Ber warns that removing the fact-checking mechanism will make the internet's largest platform even more polluted with misinformation, leading to new scandals and crises involving deepfakes, made-up quotes, and conspiracy theories.
Facebook’s independent fact-checking initiative began in 2016 under public pressure on the platform's leadership following the unexpected victory of Donald Trump in that year’s U.S. presidential election. At the time, widespread speculation suggested that a disinformation campaign on social media, allegedly orchestrated by Russia, had played a decisive role in the election’s outcome. As a result, Facebook decided to involve a third party — independent fact-checking organizations certified by the International Fact-Checking Network (IFCN), essentially a guild of fact-checkers — to verify the veracity of posts on the platform. These experts assessed user-generated content for accuracy and flagged any identified misinformation with a special label.
On Jan. 7, 2025, Facebook founder Mark Zuckerberg announced the end of the fact-checking program. Initially, the decision will affect only the U.S. division, terminating contracts with 10 organizations out of the 90+ participants in the program, which verifies posts in over 60 languages. However, it is possible that Meta will expand the shutdown to other regions.
A screenshot of Meta founder Mark Zuckerberg's statement on the cancellation of fact-checking on Facebook and Instagram.
“It’s time to get back to our roots of free expression,” Zuckerberg declared, referring to fact-checkers as “politically biased” censors who undermine trust in American society.
The decision appears to be linked to Donald Trump’s re-election, as his administration, along with a team advocating for “freedom of speech,” is now pressuring Meta in the opposite direction. Once again, Zuckerberg has yielded to external influence, bringing the issue full circle.
Zuckerberg outlined several key arguments in his address.
“Fact-checking is censorship and fact-checkers are censors”
Censorship involves a system of state oversight regulating works of art, media, or personal correspondence. Formally, the First Amendment to the U.S. Constitution states that “Congress shall make no law ... abridging the freedom of speech, or of the press,” meaning that private media outlets and platforms retain the legal right to regulate content as they see fit. However, in today’s world, moderation on social media — including deleting posts and banning users — is interpreted by millions of ordinary citizens as an attack on their constitutional right to free speech. This perception is unsurprising given the enormous influence of social media on people’s lives and the near-monopolistic dominance of its major platforms.
Historically, the primary task of censors has been to suppress information deemed undesirable by the authorities, often by removing it altogether.
Fact-checkers, however, do something entirely different. Their work doesn’t remove information, but adds “speech to public debates, it provides context and facts for every citizen to make up their own mind,” as stated by the European Fact-Checking Standards Network (EFCSN) in response to Zuckerberg’s decision. Moreover, fact-checkers often preserve misinformation for future analysis, archiving web pages, taking screenshots of social media posts, downloading viral videos, and quoting misleading texts.
In the world of information, fact-checkers play a role akin to doctors in the physical world: they investigate (diagnose) and provide a verdict (prescribe treatment).
Unlike doctors, fact-checkers never ask for one’s blind trust. Fact-checking methodology requires every claim to be supported with references to credible sources. Moreover, fact-checkers don’t engage in “treatment” — they don’t delete flagged posts or impose restrictions on user accounts. The fate of flagged posts is determined by social media moderators and algorithms, which are governed by platform management.
For years, Facebook relied on manual content moderation, supplemented by an increasing number of automated tools. The logic was simple: there would never be enough human moderators to manage the enormous user base, which now exceeds 3 billion monthly active users.
Meta’s website reads:
“Our technology proactively detects and removes the vast majority of violating content before anyone reports it. We remove millions of violating posts and accounts every day on Facebook and Instagram. Most of this happens automatically, with technology working behind the scenes to remove violating content – often before anyone sees it. Other times, our technology will detect potentially violating content but send it to review teams to check and take action on it. This work is never finished. People will keep trying to evade our technology, so we need to keep improving.”
However, the more decisions were left to algorithms, the poorer the results became. False positives increased, user dissatisfaction grew, and the platform’s reputation as a space for free and safe expression suffered.
Facebook rarely banned users for systematically spreading misinformation. Exceptions were few and highly publicized, such as the banning of certain Trump supporters — and of Trump himself for inciting the Capitol riot. Still, most bans concerned violations related to:
- nudity or sexually explicit content
- hate speech, real threats, or direct attacks on individuals or groups
- self-harm or excessively graphic violence
- fake profiles
- spam
Meanwhile, Facebook still hosts a significant amount of content — including ads — that blatantly violates these rules. Regular users, particularly in the Russian-speaking segment of the platform, likely come across these posts frequently.
The system is far from flawless and will continue to be imperfect — especially now that fact-checkers have been removed from the U.S. segment of the site. Automatic moderation, unrelated to fact-checking, is also expected to be scaled back. This means there will be fewer algorithmic flags overall — both accurate and mistaken ones.
It seems likely that Mark Zuckerberg, who presumably understands how his social network functions, deliberately made fact-checkers the scapegoats.
“Fact-checkers are too politically biased”
This claim is unsubstantiated, lacking reliable data. The only notable figures come from a study by the Harvard Kennedy School Misinformation Review, which surveyed 150 fact-checking specialists worldwide. Appendix A includes data on respondents’ political leanings:
“Experts leaned strongly toward the left of the political spectrum: very right-wing (0), fairly right-wing (0), slightly right-of-center (7), center (15), slightly left-of-center (43), fairly left-wing (62), very left-wing (21).”
The study does not specify how this data was collected, meaning it likely relied on the self-identification of its respondents.
These data alone are not enough to draw definitive conclusions — yet that is precisely what Zuckerberg did. However, I must admit that my personal experience and observations suggest that a “liberal-left bias” among fact-checkers likely exists.
What truly matters is that personal political views are inherent to everyone, regardless of their profession. If a fact-checker strives for accuracy and impartiality while adhering to the fact-checkers’ Code of Principles, their views should not affect their ability to distinguish falsehoods from factual information. If someone believes a fact-checking professional is biased, that has to be verified and proven by analyzing specific cases from the fact-checker’s work, or from the broader work of their organization.
Fact-checking typically focuses on the most viral false claims and misleading statements. For instance, if misinformation about Donald Trump becomes widespread, but fact-checkers ignore it — even after being informed — and instead address less viral misinformation, such as stories about Kamala Harris, that could lead to a scandal and complaints filed with Meta or the International Fact-Checking Network (IFCN). However, no concrete accusations of such cases have been reported — only broad, generalized criticism. The process should work the other way around: facts and research first, followed by conclusions.
“Fact-checking will be replaced by Community Notes similar to X”
Zuckerberg has announced the transition, but the new tool has not yet been implemented, leaving its final design and functionality unclear. In the meantime, it is worth examining how “Community Notes” currently work on X.
On X, fact-checking responsibilities are handed over to the platform’s users, who can attach notes to controversial posts, providing context and clarifying potential inaccuracies. These notes undergo a review by other community members, and only after reaching a required approval threshold are they displayed under a post. The system's quality and effectiveness, however, vary significantly.
Community Notes on the same post can appear and disappear based on the consensus view among participants in the program. However, there is no oversight of these participants — their qualifications and potential biases go unchecked. This leaves the system open to manipulation by seemingly random users who could, for instance, be paid to influence outcomes for political or commercial purposes. Such incidents have already been documented online.
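As a rough mental model only — X’s actual ranking system is considerably more elaborate, and reportedly rewards agreement between raters who usually disagree — the appear-and-disappear behavior described above can be sketched as a simple approval-threshold check. The `Note` class, vote counts, and thresholds below are hypothetical, chosen purely for illustration:

```python
# A deliberately simplified illustration of threshold-based note display —
# NOT X's actual Community Notes scoring algorithm.
from dataclasses import dataclass


@dataclass
class Note:
    text: str
    helpful_votes: int = 0
    not_helpful_votes: int = 0

    def is_displayed(self, min_votes: int = 5, approval_threshold: float = 0.7) -> bool:
        """Show the note only once enough raters have weighed in and a
        sufficient share of them found it helpful; if later votes push the
        ratio back below the threshold, the note disappears again."""
        total = self.helpful_votes + self.not_helpful_votes
        if total < min_votes:
            return False
        return self.helpful_votes / total >= approval_threshold


note = Note("The photo is authentic; see the original source.",
            helpful_votes=8, not_helpful_votes=2)
print(note.is_displayed())  # True — 8 of 10 raters (80%) found it helpful
```

The point of the sketch is simply that a note’s visibility is a moving target: it depends entirely on who happens to vote, which is exactly what makes the mechanism vulnerable to coordinated or paid voting.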
Occasional errors with significant consequences have also been reported. For instance, in October 2023, the Israeli Prime Minister was among the X users who shared a photo of the remains of a child killed by Hamas militants. A different user then doctored the image, putting a puppy in the place of the child’s charred corpse, leading to widespread claims that the photo of the child had been fabricated using the “original” image of the dog. A Community Note was even added to political commentator Ben Shapiro's post featuring the photo of the child. But it was later revealed that the confusion stemmed from a mistake in the AI tool “AI or Not,” which users had relied on to assess the image’s authenticity. The photograph of the child killed by Hamas militants was ultimately confirmed to be genuine — it was the photo of the puppy that was the fake.
A screenshot of a false “Community Note” attached to a post by political commentator Ben Shapiro on X.
Beyond the English-speaking segment of X, “Community Notes” may not work at all. This is hardly surprising. If taking part in the process is driven solely by altruism, most people will not engage. Let’s assume one in 10,000 users contributes to “Community Notes” at least once. With approximately 100 million active English-speaking accounts on X, that means about 10,000 people might participate at least once — and if only one in ten of them keeps at it, about 1,000 would do so regularly.
In theory, such a program could be effective. However, the number of Russian-speaking users on the platform is no more than 2 million. Applying the same ratio of regular contributors, only about 20 people would actively create these notes — and they’d likely do so in their spare time. As a result, the program’s effectiveness would be close to zero.
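For illustration only, here is a minimal sketch of the back-of-envelope estimate above, using the article’s own assumed ratios (one in 10,000 users contributing at least once, roughly one in 100,000 contributing regularly) and its rough audience figures — none of these numbers are official platform data:

```python
# Back-of-envelope estimate of potential Community Notes contributors,
# based on the illustrative ratios and audience sizes assumed in this article.

def estimate_contributors(audience: int,
                          one_off_rate: float = 1 / 10_000,
                          regular_rate: float = 1 / 100_000) -> tuple[int, int]:
    """Return (one-off contributors, regular contributors) for a given audience."""
    return round(audience * one_off_rate), round(audience * regular_rate)


english_one_off, english_regular = estimate_contributors(100_000_000)
russian_one_off, russian_regular = estimate_contributors(2_000_000)

print(english_one_off, english_regular)  # 10000, 1000
print(russian_one_off, russian_regular)  # 200, 20
```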
Here’s one example: the fact-checking team at Provereno proved that both the 2018 news about Trump being added to the Ukrainian “Myrotvorets” website’s database and the claim that he was removed after winning the November 2024 election were false. Yet posts about this — in both Russian and English — remain up on X without any Community Notes attached.
In May 2024, X reported that around 500,000 users worldwide had signed up to participate in “Community Notes.” However, the platform has not disclosed how many of these users are genuinely active contributors. The earlier-mentioned ratio can be considered an expert estimate. For comparison, one can look at how many registered Wikipedia users actively participate in creating new articles and editing existing ones — as of July 2024, nearly 325,000 people were registered as editors of the English-language Wikipedia, but only about 38,000 were actively contributing.
Wikipedia is often cited as an example of successful crowdsourcing, supposedly real-world proof that Community Notes could work well. While this is a reasonable argument, it has a significant flaw. Wikipedia is a secondary source, meaning all statements in its articles must be based on authoritative primary sources. Dedicated editors gather and summarize pre-verified information. In contrast, when dealing with fresh misinformation, people have to conduct investigations themselves. Currently, users can reference fact-checkers’ analyses, but if fact-checkers are eliminated, there will be no such sources to cite — leaving users to verify information independently.
Fact-checking requires specific skills and often relies on specialized, paid software. In complex cases, it can also be highly time-consuming. Expecting qualified individuals to perform this work regularly — for free, and without a drop in quality — seems overly optimistic.
“Fact-checking is ineffective”
Community Notes may be a controversial topic, but what about fact-checkers? After all, Zuckerberg himself said they “destroyed more trust than they've created.” What counterarguments are there to this claim?
Back in 2024, Meta pointed to the effectiveness of its labeling system prior to the EU Parliament elections: “Between July and December 2023, over 68 million pieces of content viewed in the EU on Facebook and Instagram had fact checking labels. When a fact-checked label is placed on a post, 95% of people don’t click through to view it.”
What does the removal of fact-checking mean for ordinary Facebook users? On the one hand, the overall quantity of fake news is unlikely to increase — it’s not as if users feel particularly constrained now for fear of bans. However, the impact of that fake news could grow, as questionable posts will appear in feeds more frequently. This raises the likelihood of major scandals and crises connected with deepfakes, invented quotes, and conspiracy theories.
One way or another, Zuckerberg has now legitimized and effectively mainstreamed the populist narrative that fact-checkers are censors. He has implied that getting rid of them would usher in a “golden age” on his platform. It won’t.