This is a good survey article by Brendan Nyhan about what we know (or wrongly think we know) about misinformation regarding factual matters, including on policy and politics: the sources of misinformation, the distribution of public beliefs about it, and an assessment of which interventions work or fail. An excerpt:
In general, interventions targeting the public face difficult challenges in reaching the individuals who hold misperceptions, creating durable changes in beliefs, and scaling in a cost-effective manner across the population. It is therefore important to also consider alternate approaches that seek to limit misperceptions by reducing the supply of misinformation and its spread. One approach is to change the incentives or practices of political elites and publishers. In one field experiment testing the effects of these incentives, a random subset of state legislators from nine states was sent messages before the 2012 election about the political costs of having false claims identified by fact-checkers.
Those who were sent the messages were less likely to have the accuracy of their statements questioned publicly, suggesting that the reminder discouraged false claims (Nyhan and Reifler 2015). Facebook has also announced that it would reduce the reach of groups that repeatedly post false claims and of publishers who try to game Facebook’s algorithms but have limited reach online, which may not only reduce the prevalence of misinformation but also discourage publishers from using such tactics (Dreyfuss and Lapowsky 2019). In addition, online platforms can warn people about false claims and limit their reach once they have been identified by third-party fact-checkers, overcoming the scale and targeting problems that fact-checkers otherwise face. Facebook has made the most extensive efforts in this regard and has seemingly succeeded in reducing the prevalence of false content in the News Feed. Guess et al. (2018) estimate that the share of Americans visiting untrustworthy websites declined from 27 percent in fall 2016 to 7 percent in fall 2018. Allcott, Gentzkow, and Yu (2019) likewise find a differential decline in fake news stories during this period on Facebook relative to Twitter, which employs less aggressive content moderation practices, suggesting the same conclusion. …
Even exposure to the ill-defined term “fake news” and claims about its prevalence can be harmful. In an experimental study among respondents from Mechanical Turk, Van Duyn and Collier (2019) find that when people are exposed to tweets containing the term “fake news,” they become less able to discern real from fraudulent news stories. Similarly, Clayton et al. (2019) find that participants from Mechanical Turk who are exposed to a general warning about the prevalence of misleading information on social media then tend to rate headlines from both legitimate and untrustworthy news sources as less accurate, suggesting that the warning causes an indiscriminate form of skepticism. Any evidence-based response to the problem of misperceptions must thus begin with an effort to counter misinformation about the problem itself. Only then can we design interventions that are proportional to the severity of the problem and consistent with the values of a democratic society.