I worked as a Meta fact-checker: Community Notes is not an adequate replacement
Third-party fact-checkers are just one of many voices in the conversation, but they are a crucial one.
I have a confession that might get me refused entry to the US during President Trump’s Golden Age: I have worked as a third-party fact-checker for Meta.
I would also argue, from my experience, that we are not the Orwellian overlords that critics make us out to be. On the contrary, we are only one of many voices in the conversation, and by scrapping independent fact-checkers entirely, Meta will lose an important perspective.
Independent fact-checkers have less power than many assume. During my time at a Meta partner organisation from May 2022 until late 2023, I could rate a post as false, altered, partly false, missing context, satire or true. But even then, the prerogative to take down a post or an account rests only with Meta: a post may be deprioritised by the algorithm and carry a warning that it is potentially misleading, but fact-checkers can’t do much else. If someone wants to share it despite the warnings, they can.
In my role, I tackled subjects ranging from low-quality copypasta videos, photos and text peddling vaccine alarmism (“Vaccines are killing off celebrities”, “They want to insert microchips in children!”) to rambling essays, many of them pro-Russian, filled with less obviously false or out-of-context information designed to manipulate readers.
For example, five months after Putin’s full-scale invasion of Ukraine, Romanian Facebook blew up with millions of impressions on posts from users claiming to have received a government letter calling on them to mobilise at their nearest military centre. To fact-check this, I called the Ministry of Defence spokesperson, who clarified that such letters are sent routinely once a year to keep track of reservists. I wrote my review, quoting this explanation and flagging posts of this nature as misleading. My own review was scrutinised for clarity, objectivity and accuracy before being published.
Once fact-checkers review a post, Meta blurs it if it is an image or a video and attaches a label explaining that the post may be false, partly false, altered or lacking context, linking to the review we have written debunking it. Meta also reduces the post’s distribution, though the extent to which it does so depends on the degree of falsehood assigned. For repeat offenders, Meta may also reduce their ability to advertise or to register as a “news” page.
It was certainly a challenging job. Fact-checking viral misinformation often felt akin to cutting off a dragon’s head, only for two to grow in its place: no sooner would we flag a post with alarmist claims that the Covid vaccine causes heart attacks and makes you infertile than dozens more would pop up. Covid was a fan favourite (back in 2021), but other subjects the misinformation machine frequently churned out played on panic around the war in Ukraine, climate change denial, antisemitism, and claims of an EU or NATO dictatorship. These posts would get millions of views and hundreds of thousands of interactions before being flagged.
Something else that may surprise readers (I was certainly dismayed to learn it) is that fact-checkers can’t review “opinions and speech from politicians”. This is very clearly stated in Meta’s “not eligible to fact-check” terms and conditions.
In general, fact-checkers can’t flag opinions, so the only way the service intervenes in online debate is when opinions voiced online very clearly contain false information.
Community Notes, X’s community fact-checking tool and Zuckerberg’s favoured alternative, has no such exemptions. It was devised by Twitter before Elon Musk acquired the platform and renamed it X. Any user can register as a contributor and write or rate notes purporting to offer context or debunk misleading posts. Notes appear alongside a post rather than obscuring it, as labels do on Meta’s platforms.
According to X’s rules, for notes to be published, they require positive ratings from contributors who have sometimes disagreed in their past ratings. The purported intention here is that “only notes rated helpful by people from diverse perspectives appear on posts”.
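X has open-sourced the scoring model behind this rule; the real system uses matrix factorisation over the full rating history, which is well beyond the scope of this piece. Purely as an illustration, though, a toy version of the “diverse perspectives” criterion might look like the sketch below, in which every name, the sample data and the 0.5 threshold are invented:

```python
# Toy sketch only: NOT X's actual algorithm. It illustrates the stated
# rule that a note surfaces when contributors who usually disagree
# with each other both rate it helpful.
from itertools import combinations

def note_is_shown(helpful_raters, past_agreement):
    """helpful_raters: ids of contributors who rated the note helpful.
    past_agreement: share of past notes (0.0-1.0) on which a given
    pair of contributors rated the same way, keyed by the pair."""
    for a, b in combinations(helpful_raters, 2):
        # A pair that mostly disagreed before counts as "diverse
        # perspectives"; 0.5 is an arbitrary illustrative threshold.
        if past_agreement.get(frozenset((a, b)), 1.0) < 0.5:
            return True
    return False

history = {frozenset(("ana", "bob")): 0.9,    # usually agree
           frozenset(("ana", "carol")): 0.2}  # usually disagree
print(note_is_shown({"ana", "bob"}, history))    # False: one-sided support
print(note_is_shown({"ana", "carol"}, history))  # True: cross-perspective support
```

The effect of a bridging rule like this is that sheer volume of ratings can’t publish a note; agreement has to cut across the usual dividing lines.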
While a tool to prevent one-sided ratings sounds sensible, the Center for Countering Digital Hate (CCDH) reviewed 283 misleading posts about the 2024 US election with 2.9bn total views, and found that 74% of accurate proposed Community Notes were not being shown to users. In total, posts claiming that Democrats are importing illegal voters, that the 2020 election was stolen and that voting systems are unreliable, along with misleading posts about Donald Trump, gained 2.2bn views.
CCDH also discovered that, on the posts in its sample that did display Community Notes, the accompanying context was seen 13 times less often than the original post, owing to the delay in drafting, rating and publishing a note. (Admittedly, neither system is ideal in this respect. On Facebook, fact-checks took a while to be published too.)
Academic research has found that an author on Twitter is 80% more likely to delete a post if a Community Note has been attached to it, and that other users are 60% less likely to share it. That shows there is trust in the notes, but what good does that trust do if incendiary claims that immigrants in Ohio eat pets go unchallenged?
We know the damage that online misinformation can cause in the real world. By scrapping its independent fact-checkers to promote free speech, Meta is embracing chaos.
Given its record, it’s concerning that Community Notes is being regarded as a standalone weapon against misinformation, rather than an aid to the existing fact-checking system.
No system is perfect, but Meta’s third-party fact-checking service met online safeguarding needs while upholding journalistic ethical standards. If this is lost, as Maria Ressa, the Filipino Nobel laureate and longtime fighter against online misinformation, warns, we will step into a “world without facts.”