BY: Ibraheem Muhammad Mustapha
Meta’s recent decision to dismantle its third-party fact-checking program marks a dramatic departure from its previous stance on misinformation and disinformation. For a platform with over three billion users, this shift represents a profound gamble on the power of community-driven moderation—a gamble we believe could backfire catastrophically. While the social media giant insists that this shift is motivated by a desire to enhance free speech, the reality is that it threatens to undo years of progress in combating misinformation.
From Accountability to “Community Notes”
In 2016, as concerns about the integrity of online information reached a boiling point, Meta (then Facebook) launched its independent fact-checking initiative in response to global outcry over social media’s role in swaying elections and spreading conspiracy theories. The company partnered with established organizations such as Reuters Fact Check, Agence France-Presse, and PolitiFact to independently assess the accuracy of content circulating on its platforms, which would come to include Facebook, Instagram, and Threads. When content was flagged as inaccurate or misleading, warning labels were added, helping users identify problematic posts. The program, though not without its limitations, was an essential part of the ecosystem of accountability Meta built to protect public discourse.
Fast forward to 2025, and Meta has announced plans to phase out this program, beginning in the United States, in favor of a new system called “Community Notes.” The premise is simple: vetted users write notes adding context to potentially misleading posts, a diverse set of peers rates those notes for helpfulness, and notes deemed helpful are shown publicly. Zuckerberg’s rationale? Fact-checking led to “too much censorship” and stifled “free expression,” particularly following the recent U.S. presidential election. This decision, he claims, represents a return to the platform’s roots: prioritizing speech above all. In practice, it means that instead of relying on professional, independent fact-checkers, Meta will crowdsource to ordinary users the task of annotating content they deem false.
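Meta has not published the technical details of its system, but X, whose model Meta is borrowing, has open-sourced its Community Notes algorithm, and its core idea is “bridging”: a note is shown only when raters who typically disagree find it helpful. The sketch below illustrates that idea in deliberately simplified form; the function, data shapes, and thresholds are our own illustrative assumptions, and the real system replaces such fixed cutoffs with a matrix-factorization model fitted over the full rating history.

```python
# A minimal sketch of the "bridging" idea behind community-note rating, loosely
# modeled on the open-sourced Community Notes algorithm from X. Everything here
# (names, data shapes, thresholds) is an illustrative assumption, not Meta's or
# X's actual implementation: the real system fits a matrix-factorization model
# over the full rating history instead of using fixed cutoffs.

from dataclasses import dataclass

@dataclass
class Rating:
    perspective: str  # crude proxy for a rater's viewpoint cluster
    helpful: bool     # did this rater find the note helpful?

def note_status(ratings: list[Rating],
                min_ratings: int = 5,
                min_helpful_share: float = 0.6) -> str:
    """Show a note publicly only if raters across perspectives agree it helps."""
    if len(ratings) < min_ratings:
        return "needs more ratings"

    # Group helpfulness votes by rater perspective.
    groups: dict[str, list[bool]] = {}
    for r in ratings:
        groups.setdefault(r.perspective, []).append(r.helpful)

    # One perspective agreeing with itself is not enough; the note must
    # "bridge" at least two distinct viewpoint clusters.
    if len(groups) < 2:
        return "needs ratings from more perspectives"

    # Every represented group, not just an overall majority, must find the
    # note helpful for it to be shown.
    for votes in groups.values():
        if sum(votes) / len(votes) < min_helpful_share:
            return "not shown"
    return "shown publicly"

ratings = [
    Rating("left", True), Rating("left", True), Rating("left", False),
    Rating("right", True), Rating("right", True), Rating("right", True),
]
print(note_status(ratings))  # -> "shown publicly" (2/3 and 3/3 both clear 0.6)
```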
At FactCheckAfrica, we believe this is a grave mistake—a dangerous concession to the political pressures surrounding Meta and a significant step backward in the fight for truth online.
The Illusion of “Free Speech”
Zuckerberg’s framing of the decision as a restoration of “free expression” rings hollow when viewed through the lens of digital integrity. Free speech is a cornerstone of democratic societies, but Meta’s shift comes at the expense of responsibility. And the claim that this decision returns Meta to its “roots” ignores the evolution of digital platforms into de facto public utilities. With great reach comes great responsibility. Platforms like Facebook and Instagram are not mere conduits of information; they shape perceptions, influence decisions, and, increasingly, define reality for billions of people.
The fact-checking system Meta has now abandoned wasn’t about silencing dissent; it was about giving users the information they need to differentiate truth from falsehood. As Angie Drobnic Holan, head of the International Fact-Checking Network (IFCN), noted in her statement in response to Meta’s move, “Fact-checking journalism has never censored or removed posts; it’s added information and context to controversial claims, and it’s debunked hoax content and conspiracy theories.” These fact-checkers followed the IFCN’s Code of Principles, ensuring their work was nonpartisan, transparent, and rooted in journalistic ethics.
Meta’s decision, by contrast, elevates the power of unverified user-driven annotations while removing the expert oversight that gave fact-checking its authority. Users, often without the requisite training or access to reliable sources, will become the new arbiters of truth. But will they truly be capable of tackling the complexity of modern disinformation? Will they be able to discern between opinion and fact, or will they simply reinforce their own biases? The evidence from the Community Notes feature on X (formerly Twitter), the very system Meta is now borrowing, suggests the latter.
In 2023, reports from The Washington Post and the Center for Countering Digital Hate found that X’s Community Notes feature failed to stem the tide of lies on the platform. Despite being touted as a solution to misinformation, the system struggled with high-profile falsehoods and often allowed misleading content to persist for long periods. This raises a crucial question: if X, where the model has had years to mature, has been unable to rein in misinformation, what makes Meta think it will succeed at far greater scale?
Meta’s decision does not exist in a vacuum. As one of the largest tech companies in the world, its choices set a precedent for other platforms. While Meta positions the change as a move to restore free speech, the reality is that it will likely enable the unchecked spread of lies and conspiracy theories. In Australia alone, Meta placed warning labels on millions of pieces of misleading content across Facebook and Instagram in 2023, and numerous studies showed that such efforts helped slow the spread of harmful information, particularly around COVID-19, political extremism, and public health. With Meta stepping back from this responsibility, there is little reason to believe that user-generated notes will be as effective at curbing dangerous disinformation.
What’s at Stake?
Replacing independent oversight with crowdsourced moderation risks eroding the already fragile trust in digital platforms. Users are more likely to question the credibility of annotations from anonymous peers than from impartial experts. This erosion of trust creates fertile ground for misinformation to thrive, particularly in polarized environments where truth is contested.
One of the most immediate consequences of Meta’s decision is the financial strain it will place on independent fact-checking organizations. Meta’s fact-checking initiative provided essential funding to as many as 90 accredited organizations worldwide, many of which have relied on these partnerships to sustain their work. The ripple effects are likely to weaken the global fight against misinformation: the shift could destabilize many fact-checking organizations, forcing them to pivot to other sources of funding or face diminished operational capacity.
Independent fact-checkers have also historically adhered to a strict Code of Principles, ensuring transparency, nonpartisanship, and accuracy in their work. These organizations have become trusted sources of reliable information in a world increasingly plagued by misinformation. Meta’s departure from these rigorous, external verification processes risks diluting trust in the information circulating on its platforms.
The potential for Community Notes to be weaponized for political gain is another critical issue. With users taking the lead in flagging content as misleading, there is an inherent risk of manipulation by interest groups or individuals with political agendas. Unlike professional fact-checkers, who are trained and guided by ethical standards, users may be more prone to bias, whether intentional or not. Without external oversight, the Community Notes system could devolve into a tool for stifling opposing viewpoints or promoting partisan narratives.
The timing of Meta’s decision also raises uncomfortable questions. For a company that once suspended Trump’s accounts after the January 6 Capitol riot, this shift feels less like a principled stand for free expression and more like capitulation to political pressure. That concern is sharpened by the aftermath of the U.S. elections, in which political actors, including President-elect Trump and his conservative allies, expressed satisfaction with Meta’s shift. It is not difficult to imagine how such groups could exploit a system like Community Notes to amplify their own messages while dismissing opposing viewpoints as “misleading” or “false.” At a time when misinformation has become a powerful political tool, this kind of open-ended, user-driven moderation could fuel divisiveness rather than combat it.
A Wake-Up Call for Accountability
Meta’s abandonment of independent fact-checking is not just a corporate misstep; it is a threat to the very fabric of informed discourse in the digital age. The move may look like a short-term win for free-speech advocates, but it poses a long-term risk to the integrity of the digital information landscape.
As Meta pivots toward a model that crowdsources truth, the onus is on users, journalists, and policymakers to step up and demand more. At FactCheckAfrica, we continue to believe in the power of independent journalism and fact-checking organizations to hold platforms accountable. But we cannot do this alone. We need stronger regulatory frameworks, more transparent business practices, and, above all, a commitment to the truth that transcends political and financial pressures.
In the absence of corporate responsibility, we must all become guardians of truth. The stakes are too high for us to remain passive in the face of Meta’s dangerous shift. As global citizens, we must demand more from our platforms—because, in the battle for truth, silence is complicity.
Edited by Habeeb Adisa