“Meta just flipped the switch that prevents misinformation from spreading in the United States”

Platformer:

Last week, Meta announced a series of changes to its content moderation policies and enforcement strategies designed to curry favor with the incoming Trump administration. The company ended its fact-checking program in the United States, stopped scanning new posts for most policy violations, and created carve-outs in its community standards to allow dehumanizing speech about transgender people and immigrants. The company also killed its diversity, equity and inclusion program.

Behind the scenes, the company was also quietly dismantling a system to prevent the spread of misinformation. When Meta announced on Jan. 7 that it would end its fact-checking partnerships, it also instructed the teams responsible for ranking content in its apps to stop penalizing misinformation, according to sources and an internal document obtained by Platformer.

The result is that the sort of viral hoaxes that ran roughshod over the platform during the 2016 US presidential election — “Pope Francis endorses Trump,” Pizzagate, and all the rest — are now just as eligible for free amplification on Facebook, Instagram, and Threads as true stories.

In 2016, of course, Meta hadn’t yet invested huge sums in machine-learning classifiers that can spot when a piece of viral content is likely a hoax. But nine years later, after the company’s own analyses found that these classifiers could reduce the reach of these hoaxes by more than 90 percent, Meta is shutting them off. 

Meta declined to comment on the changes. Instead, it pointed me to a letter and a blog post in which it had hinted that this change was coming. 

The letter was sent in August by Zuckerberg to Rep. Jim Jordan, the chairman of the House Judiciary Committee. In it, Zuckerberg expressed his discomfort with the Biden administration’s efforts to pressure the company to remove certain posts about COVID-19. Zuckerberg also expressed regret that the company had temporarily reduced the distribution of stories about Hunter Biden’s laptop, which Meta and Twitter had both done out of fear that the materials were the product of a Russian hack-and-leak operation. The few hours during which the story’s distribution was limited would go on to become a Republican cause célèbre.

As a kind of retroactive apology for bowing to censorship requests in the past, and for the company’s own actions in the Hunter Biden case, Zuckerberg said that going forward, the company would no longer reduce the reach of posts that had been sent to fact checkers but not yet evaluated. Once posts had been evaluated, Meta would continue to reduce the reach of those designated as false.

In hindsight, this turned out to be the first step toward killing off Meta’s misinformation efforts: granting hoaxes a temporary window of expanded reach while they awaited fact-checking.

That brings us to the blog post: Joel Kaplan’s “More speech, fewer mistakes,” which was published last Tuesday and, among other things, announced the end of the company’s US fact-checking partnerships. Buried toward the bottom were these two sentences:

We also demote too much content that our systems predict might violate our standards. We are in the process of getting rid of most of these demotions and requiring greater confidence that the content violates for the rest.

At the time, Kaplan did not elaborate on which of these demotions the company planned to get rid of. Platformer can now confirm that misinformation-related demotions have been eliminated at the company….
