I had missed this Tom Edsall column.
Pro-Trump commentators’ hopes of developing major followings on right-leaning websites after they left Facebook and Twitter have run up against a harsh reality: Their audiences on those sites have stagnated.
A Washington Post analysis of audience data for 47 prominent right-wing influencers who flocked last year to alternative social networks Gab and Gettr, the video-streaming site Rumble and the chat service Telegram found that their followings surged immediately after President Donald Trump was banned on the mainstream sites.
But those audiences have barely grown in the year since. In some cases, they even declined.
The influencers previously had seen steady growth on Twitter and other big platforms that distributed their messages to a broad audience. But after their jump to the niche sites, the analysis indicates, they largely failed to continue attracting new followers who weren’t already engaged fans….
The data helps strengthen the case made by supporters of “deplatforming,” who argue that banning the accounts of people known for distributing lies can have a powerful impact on their ability to win mainstream attention or political influence.
It also calls into question whether this new and polarized online ecosystem — possibly to be joined soon by Trump’s long-promised social network, Truth Social — can build a sustainable business solely by catering to a radicalized right.
New Reuters Institute report:
Terms like echo chambers, filter bubbles, and polarisation are widely used in public and political debate but not in ways that are always aligned with, or based on, scientific work. And even among academic researchers, there is not always a clear consensus on exact definitions of these concepts.
In this literature review we examine, specifically, social science work presenting evidence concerning the existence, causes, and effect of online echo chambers and consider what related research can tell us about scientific discussions online and how they might shape public understanding of science and the role of science in society.
Echo chambers, filter bubbles, and the relationship between news and media use and various forms of polarisation have to be understood in the context of increasingly digital, mobile, and platform-dominated media environments, where most people spend a limited amount of time with news and many internet users do not regularly actively seek out online news, leading to significant inequalities in news use.
When an echo chamber is defined as a bounded, enclosed media space that has the potential both to magnify the messages delivered within it and to insulate them from rebuttal, studies in the UK estimate that between six and eight percent of the public inhabit politically partisan online news echo chambers.
More generally, studies both in the UK and several other countries, including the highly polarised US, have found that most people have relatively diverse media diets, that those who rely on only one source typically converge on widely used sources with politically diverse audiences (such as commercial or public service broadcasters) and that only small minorities, often only a few percent, exclusively get news from partisan sources.
Studies in the UK and several other countries show that the forms of algorithmic selection offered by search engines, social media, and other digital platforms generally lead to slightly more diverse news use – the opposite of what the “filter bubble” hypothesis posits – but that self-selection, primarily among a small minority of highly partisan individuals, can lead people to opt in to echo chambers, even as the vast majority do not.
Research on polarisation offers a complex picture both in terms of overall developments and the main drivers and there is in many cases limited empirical work done outside the United States. Overall, ideological polarisation has, in the long run, declined in many countries but affective polarisation has in some, but not all, cases increased. News audience polarisation is much lower in most European countries, including the United Kingdom. Much depends on the specifics of individual countries and what point in time one measures change from and there are no universal patterns.
There is limited research outside the United States systematically examining the possible role of news and media use in contributing to various kinds of polarisation and the work done does not always find the same patterns as those identified in the US. In the specific context of the United States where there is more research, it seems that exposure to like-minded political content can potentially polarise people or strengthen the attitudes of people with existing partisan attitudes and that cross-cutting exposure can potentially do the same for political partisans.
Public discussions around science online may exhibit some of the same dynamics as those observed around politics and in news and media use broadly, but fundamentally there is at this stage limited empirical research on the possible existence, size, and drivers of echo chambers in public discussions around science. More broadly, existing research on science communication, mainly from the United States, documents the important role of self-selection, elite cues, and small, highly active communities with strong views in shaping these debates and highlights the role especially political elites play in shaping both news coverage and public opinion on these issues.
In summary, the work reviewed here suggests echo chambers are much less widespread than is commonly assumed, finds no support for the filter bubble hypothesis and offers a very mixed picture on polarisation and the role of news and media use in contributing to polarisation.
Katie Harbath joined Facebook more than a decade ago as the first Republican employee in the company’s Washington, D.C., office, pushing skeptical members of Congress on the virtues of the young social network for healthy elections.
Now she is pitching a different message. After rising to become Facebook’s public-policy director for global elections, Ms. Harbath left the company last year and teamed with a group now advising lawmakers in Washington and Europe on legislation advocating more guardrails around social media.
In her role at Facebook, now Meta Platforms Inc., Ms. Harbath had been the face of the company on many political issues and a liaison with governments and parties around the world. She says that when she resigned in March, she had come to believe that unless there is urgent intervention from governments and tech platforms, social media will likely incubate future political violence like that of the Capitol riot on Jan. 6, 2021.
“I still believe social media has done more good than harm in politics, but it’s close,” she says. “Maybe it’s 52-48—and trending south.”
Ms. Harbath, 41 years old, is the highest-ranking former Facebook executive now working with the Integrity Institute, a startup nonprofit founded by former employees who had worked on identifying and mitigating potential societal harms caused by the company’s products. The institute is now advising lawmakers and think tanks around the world on these issues.
Ms. Harbath, now also a fellow at several Washington think tanks focused on election issues, joins a growing number of former Facebook executives who have gone public with their criticisms of the company. She says she no longer thinks her former company, including Chief Executive Mark Zuckerberg, has the will to address its core problems in the way she believes is necessary.
“I’m disappointed in leadership, and I hate the fact that I’m disappointed in leadership,” she said of the company.
Really looking forward to moderating this (free registration required):
Fair Elections and Free Speech Center | What Can (and Should) Journalists Do to Prevent Election Subversion and Another January 6?
Thursday, January 20 at 12:00pm to 1:00pm Virtual Event
This virtual event hosted by the Fair Elections and Free Speech Center at UCI Law brings together three leading journalists to discuss what role journalists can and should play in supporting free and fair elections in the United States, given the risk of election subversion highlighted by events surrounding the 2020 election.
The Atlantic’s Barton Gellman, Votebeat’s Jessica Huseman, and The Washington Post’s Margaret Sullivan join in conversation and dialogue with UCI Law professor and Center co-director Richard L. Hasen.
Barton Gellman, a staff writer at The Atlantic, is the author most recently of Dark Mirror: Edward Snowden and the American Surveillance State and the bestselling Angler: The Cheney Vice Presidency. He has held positions as senior fellow at The Century Foundation, Lecturer at Princeton’s Woodrow Wilson School and visiting research collaborator at Princeton’s Center for Information Technology Policy.
Jessica Huseman is the editorial director of Votebeat. She was previously the lead elections reporter for ProPublica and helped manage the Electionland project for three federal election cycles, sharing information and tips with hundreds of newsrooms across the United States. She is the owner of The Friendly State News, which offers low and no cost training to local newsrooms.
Margaret Sullivan is The Washington Post’s media columnist and the author of “Ghosting the News: Local Journalism and the Crisis of American Democracy.” She was the chief editor of The Buffalo News and the longest-serving public editor of The New York Times.
Very much looking forward to participating in this:
Our annual symposium is only a few weeks away!
It will be held virtually on January 21, 2022, from 9:00 AM – 3:45 PM. You can register for the symposium here. The link to the event will be provided upon registration.
Keynote Address by
FEC Commissioner, Shana M. Broussard, J.D.
Moderated by Professor Mary-Rose Papandrea, J.D.
UNC School of Law, Samuel Ashe Distinguished Professor of Constitutional Law
Professor Helen Norton, J.D.
Colorado University School of Law, University Distinguished Professor and Rothgerber Chair in Constitutional Law
Professor Martin Redish, J.D.
Northwestern University School of Law, Louis and Harriet Ancel Professor of Law and Public Policy
Evan Ringle, J.D., Ph.D. Candidate
UNC Hussman School of Journalism & Media
UNC Center for Information Technology and Public Life – Research Lead, Regulation of Election-Related Speech
Professor William Marshall, J.D.
UNC School of Law, William Rand Kenan Jr. Distinguished Professor of Law
Moderated by Professor Michael Gerhardt, M.S., J.D.
UNC School of Law, Burton Craige Distinguished Professor of Jurisprudence
Professor Richard Hasen
UC Irvine School of Law, Chancellor’s Professor of Law and Political Science
Co-Director, Fair Elections and Free Speech Center
Professor Ciara Torres-Spelliscy, J.D.
Stetson University, Professor of Law
Professor Leslie Kendrick, M.Phil., D.Phil., J.D.
University of Virginia School of Law, White Burkett Miller Professor of Law and Public Affairs
Director, Center for the First Amendment
Moderated by Professor David Ardia, M.S., J.D., LL.M.
UNC School of Law, Reef C. Ivey II Excellence Fund Term Professor of Law
Co-Director of the Center for Media Law and Policy
Professor Jasmine McNealy, Ph.D.
University of Florida, Associate Professor, Department of Media Production, Management, and Technology
Associate Director, Marion B. Brechner First Amendment Project
Professor Brenda Reddix-Smalls, J.D., LL.M.
North Carolina Central University School of Law, Professor of Law
Professor Robert Yablon, M.A., J.D.
University of Wisconsin School of Law, Associate Professor
Co-Director, State Democracy Research Initiative
Professor Neema Guliani, J.D.
Legislative Counsel, American Civil Liberties Union
Weeks before the 2020 presidential election, the conservative broadcaster Glenn Beck outlined his prediction for how Election Day would unfold: President Donald J. Trump would be winning that night, but his lead would erode as dubious mail-in ballots arrived, giving Joseph R. Biden Jr. an unlikely edge.
“No one will believe the outcome because they’ve changed the way we’re electing a president this time,” he said.
None of the predictions of widespread voter fraud came true. But podcasters frequently advanced the false belief that the election was illegitimate, first as a trickle before the election and then as a tsunami in the weeks leading up to the violent attack at the Capitol on Jan. 6, 2021, according to new research.
Researchers at the Brookings Institution reviewed transcripts of nearly 1,500 episodes from 20 of the most popular political podcasts. Among episodes released between the election and the Jan. 6 riot, about half contained election misinformation, according to the analysis.
In some weeks, 60 percent of episodes mentioned the election fraud conspiracy theories tracked by Brookings. Those included false claims that software glitches interfered with the count, that fake ballots were used, and that voting machines run by Dominion Voting Systems were rigged to help Democrats. Those kinds of theories gained currency in Republican circles and would later be leveraged to justify additional election audits across the country.
The new research underscores the extent to which podcasts have spread misinformation using platforms operated by Apple, Google, Spotify and others, often with little content moderation. While social media companies have been widely criticized for their role in spreading misinformation about the election and Covid-19 vaccines, they have cracked down on both in the last year. Podcasts and the companies distributing them have been spared similar scrutiny, researchers say, in large part because podcasts are harder to analyze and review.
Facebook groups swelled with at least 650,000 posts attacking the legitimacy of Joe Biden’s victory between Election Day and the Jan. 6 siege of the U.S. Capitol, with many calling for executions or other political violence, an investigation by ProPublica and The Washington Post has found.
The barrage — averaging at least 10,000 posts a day, a scale not reported previously — turned the groups into incubators for the baseless claims supporters of President Donald Trump voiced as they stormed the Capitol, demanding he get a second term. Many posts portrayed Biden’s election as the result of widespread fraud that required extraordinary action — including the use of force — to prevent the nation from falling into the hands of traitors.
“LOOKS LIKE CIVIL WAR is BECOMING INEVITABLE !!!” read a post a month before the Capitol assault. “WE CANNOT ALLOW FRAUDULENT ELECTIONS TO STAND ! SILENT NO MORE MAJORITY MUST RISE UP NOW AND DEMAND BATTLEGROUND STATES NOT TO CERTIFY FRAUDULENT ELECTIONS NOW !”
Another post, made 10 days after the 2020 election, bore an avatar of a smiling woman with her arms raised in apparent triumph and read, “WE ARE AMERICANS!!! WE FOUGHT AND DIED TO START OUR COUNTRY! WE ARE GOING TO FIGHT… FIGHT LIKE HELL. WE WILL SAVE HER❤ THEN WERE GOING TO SHOOT THE TRAITORS!!!!!!!!!!!”
One post showed a Civil War-era picture of a gallows with more than two dozen nooses and hooded figures waiting to be hanged. Other posts called for arrests and executions of specific public figures — both Democrats and Republicans — depicted as betraying the nation by denying Trump a second term.
“BILL BARR WE WILL BE COMING FOR YOU,” wrote a group member after Barr announced that the Justice Department had found little evidence to support Trump’s claims of widespread vote-rigging. “WE WILL HAVE CIVIL WAR IN THE STREETS BEFORE BIDEN WILL BE PRES.”
Facebook executives have played down the company’s role in the Jan. 6 attack and have resisted calls, including from its own Oversight Board, for a comprehensive internal investigation. The company also has yet to turn over all the information requested by the congressional committee studying the Jan. 6 attack, though it says it is negotiating with the committee.
But the ProPublica-Post investigation, which analyzed millions of posts between Election Day and Jan. 6 and drew on internal company documents and interviews with former employees, provides the clearest evidence yet that Facebook played a critical role in the spread of false narratives that fomented the violence of Jan. 6.
Its efforts to police such content, the investigation also found, were ineffective and started too late to quell the surge of angry, hateful misinformation coursing through Facebook groups — some of it explicitly calling for violent confrontation with government officials, a theme that foreshadowed the storming of the Capitol that day amid clashes that left five people dead.
Drew Pusateri, a spokesman for Meta, Facebook’s newly renamed parent company, said that the platform was not responsible for the violence on Jan. 6. He pointed instead to Trump and others who voiced the lies that sparked the attack on the Capitol.
“The notion that the January 6 insurrection would not have happened but for Facebook is absurd,” Pusateri said in a statement. “The former President of the United States pushed a narrative that the election was stolen, including in-person a short distance from the Capitol building that day. The responsibility for the violence that occurred on January 6 lies with those who attacked our Capitol and those who encouraged them.”
To determine the extent of posts attacking Biden’s victory, The Post and ProPublica obtained a unique dataset of 100,000 groups and their posts, along with metadata and images, compiled by CounterAction, a firm that studies online disinformation. The Post and ProPublica used machine learning to narrow that list to 27,000 public groups that showed clear markers of focusing on U.S. politics. Out of the more than 18 million posts in those groups between Election Day and Jan. 6, the analysis searched for words and phrases to identify attacks on the election’s integrity.
The more than 650,000 posts attacking the election — and the 10,000-a-day average — is almost certainly an undercount. The ProPublica-Washington Post analysis examined posts in only a portion of all public groups, and did not include comments, posts in private groups or posts on individuals’ profiles. Only Facebook has access to all the data to calculate the true total — and it hasn’t done so publicly.
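The phrase-search step described in the excerpt can be sketched very roughly in code. This is only an illustrative toy, not the actual CounterAction dataset or the Post/ProPublica keyword list; the phrases and sample posts below are hypothetical placeholders.

```python
# Toy sketch of counting posts that contain any phrase from a watchlist,
# in the spirit of the analysis described above. All phrases and sample
# data are illustrative placeholders, not the real study's inputs.
import re

FLAGGED_PHRASES = [
    "stolen election",
    "stop the steal",
    "fraudulent election",
]

# One case-insensitive pattern matching any flagged phrase.
pattern = re.compile(
    "|".join(re.escape(p) for p in FLAGGED_PHRASES), re.IGNORECASE
)

def count_flagged(posts):
    """Return how many posts contain at least one flagged phrase."""
    return sum(1 for text in posts if pattern.search(text))

sample_posts = [
    "Great game last night!",
    "They can't certify a FRAUDULENT ELECTION",
    "Stop the Steal rally tomorrow",
]
print(count_flagged(sample_posts))  # 2 of the 3 sample posts match
```

The real analysis, of course, involved machine-learning classification of groups and far more elaborate phrase matching over 18 million posts; this sketch shows only the basic shape of keyword-based counting.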
Nate Persily and Joshua Tucker have issued a new Report, How to fix social media? Start with independent research. A key issue for regulatory responses, they argue, is just “how little outsiders know about what is happening inside these companies.”
“As the earlier utopian prediction for social media turned decidedly pessimistic, research on these new technologies developed into a field of its own. However, because the platforms tightly controlled the data necessary to study these phenomena, academic researchers were limited in their efforts to get a handle on the scale, character, and causes of the various phenomena attributed to the rise of social media.
. . . . [While the 2021 Haugen revelations seem to have focused legislative attention in the United States and around the world. . . , there remains a real risk that legislation, particularly as it relates to content moderation, will be based on the snippets of data and research found in the recent document disclosures. To fill the void, Congress should mandate an unprecedented corporate data-sharing program to enable outside, independent researchers to conduct the kinds of analysis on social media platforms that firm insiders routinely perform.”
The Report advocates for federal legislation along the lines of the Platform Transparency and Accountability Act.
New Ben Smith NYT column.
Iranian hackers last year infiltrated the computer systems of Lee Enterprises Inc., a major American media company that publishes dozens of daily newspapers across the U.S., as part of a broader effort to spread disinformation about the 2020 presidential election, according to people familiar with the matter.
On Thursday, the Justice Department said the alleged hackers broke in to the digital systems of an unnamed media company in fall 2020 and tested how to create false news content. People familiar with the matter on Friday identified the company as Lee Enterprises, a publicly traded company headquartered in Davenport, Iowa, and one of the largest newspaper chains in the U.S.
The Federal Bureau of Investigation warned the unnamed company about the intrusion, prosecutors said. The day after the November presidential election, the hackers tried to get back into the media company’s system but failed, prosecutors said. The federal charging document in the case doesn’t indicate the hackers successfully published fake information under the unnamed media company’s news brands.
A spokesman and executives at Lee Enterprises didn’t respond to requests for comment. The Justice Department declined to comment.
On Thursday, U.S. authorities charged two Iranian nationals, Seyyed Mohammad Hosein Musa Kazemi and Sajjad Kashian, with cyber-related crimes, allegedly carried out to engage in voter intimidation and election interference ahead of last year’s U.S. presidential election between Joe Biden and then-President Donald Trump. The Treasury Department also sanctioned the pair along with four other Iranian nationals, describing them as “state-sponsored actors” involved in a disinformation campaign.
Officials said both defendants were presumed to be residing in Iran and aren’t in custody. Neither defendant could be reached for comment.
Cybersecurity experts and senior U.S. officials have long worried that media organizations could be hacked or otherwise manipulated to spread disinformation around an election result—a concern that grew more pronounced after Russia’s interference in the 2016 election. State election officials in recent years have urged voters to turn to official vote tabulations rather than media reports or projections, citing concerns ranging from human error and inaccurate projections to possible tampering.
Last year, researchers at Facebook showed executives an example of the kind of hate speech circulating on the social network: an actual post featuring an image of four female Democratic lawmakers known collectively as “The Squad.”
The poster, whose name was scrubbed out for privacy, referred to the women, two of whom are Muslim, as “swami rag heads.” A comment from another person used even more vulgar language, referring to the four women of color as “black c—s,” according to internal company documents exclusively obtained by The Washington Post.
The post represented the “worst of the worst” language on Facebook — the majority of it directed at minority groups, according to a two-year effort by a large team working across the company, the document said. The researchers urged executives to adopt an aggressive overhaul of its software system that would primarily remove only those hateful posts before any Facebook users could see them.
But Facebook’s leaders balked at the plan. According to two people familiar with the internal debate, top executives including Vice President for Global Public Policy Joel Kaplan feared the new system would tilt the scales by protecting some vulnerable groups over others. A policy executive prepared a document for Kaplan that raised the potential for backlash from “conservative partners,” according to the document. The people spoke to The Post on the condition of anonymity to discuss sensitive internal matters.
The previously unreported debate is an example of how Facebook’s decisions in the name of being neutral and race-blind in fact come at the expense of minorities and particularly people of color. Far from protecting Black and other minority users, Facebook executives wound up instituting half-measures after the “worst of the worst” project that left minorities more likely to encounter derogatory and racist language on the site, the people said.
“Even though [Facebook executives] don’t have any animus toward people of color, their actions are on the side of racists,” said Tatenda Musapatike, a former Facebook manager who worked on political ads and is now CEO of the Voter Formation Project, a nonpartisan nonprofit organization that uses digital communication to increase participation in local, state, and national elections. “You are saying that the health and safety of women of color on the platform is not as important as pleasing your rich White man friends.”
The Black audience on Facebook is in decline, according to data from a study Facebook conducted earlier this year that was revealed in documents obtained by whistleblower Frances Haugen. According to the February report, the number of Black monthly users fell 2.7 percent in one month to 17.3 million adults. It also shows that usage by Black people peaked in September 2020. Haugen’s legal counsel provided redacted versions of the documents to Congress, which were viewed by a consortium of news organizations including The Post.
A new article by Karen Hao in the Technology Review explores “how Facebook and Google fund global misinformation.” It argues that “the tech giants are paying millions of dollars to the operators of clickbait pages,” thereby “bankrolling the deterioration of information ecosystems around the world.”
From the Article:
“In 2015, six of the 10 websites in Myanmar getting the most engagement on Facebook were from legitimate media, according to data from CrowdTangle, a Facebook-run tool. A year later, Facebook (which recently rebranded to Meta) offered global access to Instant Articles, a program publishers could use to monetize their content.
One year after that rollout, legitimate publishers accounted for only two of the top 10 publishers on Facebook in Myanmar. By 2018, they accounted for zero. All the engagement had instead gone to fake news and clickbait websites. . . .
It was during this rapid degradation of Myanmar’s digital environment that a militant group of Rohingya—a predominantly Muslim ethnic minority—attacked and killed a dozen members of the security forces in August 2017.”
From Nick Corasaniti at the NY Times:
“On Tuesday, Meta, the social media company formerly known as Facebook, announced changes that, on the surface, would appear to reduce such targeting. But it remains entirely possible for campaigns to get around these limitations. . . . . Indeed, the changes announced by Meta on Tuesday — which arrived amid a growing outcry over the damage social platforms have done to the political and social fabric — will most likely just force political campaigns to switch methods.”