Over four months, I’ve drafted 65 notes debunking conspiracy theories on topics ranging from airplane crashes to Ben & Jerry’s ice cream. I’ve tried to flag fake AI-generated video clips, viral hoax security threats and false reports of an ICE partnership with DoorDash.
Only three of them got published, all related to July’s Texas floods. That’s an overall success rate of less than 5 percent. My proposed notes were on topics that other news outlets — including Snopes, NewsGuard and Bloomberg News — had decided merited fact checks of their own.
Mark Zuckerberg fired Meta’s professional fact-checkers, leaving users to fight falsehoods with community notes. As the main line of defense against hoaxes and deliberate liars exploiting our attention, community notes appear — so far — nowhere near up to the task.
Feeds filled with inaccurate information are no small matter for the 54 percent of American adults who, according to the Pew Research Center, get news from social media.
Zuckerberg’s decision to fire fact-checkers was widely criticized as a craven attempt to appeal to President Donald Trump. He said Meta was adopting the crowdsourced community notes system used by Elon Musk’s X because users would be more trustworthy and less biased than fact-checkers. Before notes get published to posts, enough users have to agree they’re helpful. But agreement turns out to be more complicated than it sounds.