In the fall of 2018, Jonah Peretti, chief executive of online publisher BuzzFeed, emailed a top official at Facebook Inc. The most divisive content that publishers produced was going viral on the platform, he said, creating an incentive to produce more of it.
He pointed to the success of a BuzzFeed post titled “21 Things That Almost All White People are Guilty of Saying,” which received 13,000 shares and 16,000 comments on Facebook, many from people criticizing BuzzFeed for writing it, and arguing with each other about race. Other content the company produced, from news videos to articles on self-care and animals, had trouble breaking through, he said.
Mr. Peretti blamed a major overhaul Facebook had given to its News Feed algorithm earlier that year to boost “meaningful social interactions,” or MSI, between friends and family, according to internal Facebook documents reviewed by The Wall Street Journal that quote the email.
BuzzFeed built its business on making content that would go viral on Facebook and other social media, so it had a vested interest in any algorithm changes that hurt its distribution. Still, Mr. Peretti’s email touched a nerve.
Facebook’s chief executive, Mark Zuckerberg, said the aim of the algorithm change was to strengthen bonds between users and to improve their well-being. Facebook would encourage people to interact more with friends and family and spend less time passively consuming professionally produced content, which research suggested was harmful to their mental health.
Within the company, though, staffers warned the change was having the opposite effect, the documents show. It was making Facebook’s platform an angrier place.
Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.
“Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” wrote a team of data scientists, flagging Mr. Peretti’s complaints, in a memo reviewed by the Journal. “This is an increasing liability,” one of them wrote in a later memo.
They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.
Some political parties in Europe told Facebook the algorithm had made them shift their policy positions so they resonated more on the platform, according to the documents.
“Many parties, including those that have shifted to the negative, worry about the long term effects on democracy,” read one internal Facebook report, which didn’t name specific parties.
Mark Zuckerberg has publicly said Facebook Inc. allows its more than three billion users to speak on equal footing with the elites of politics, culture and journalism, and that its standards of behavior apply to everyone, no matter their status or fame.
In private, the company has built a system that has exempted high-profile users from some or all of its rules, according to company documents reviewed by The Wall Street Journal.
The program, known as “cross check” or “XCheck,” was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists. Today, it shields millions of VIP users from the company’s normal enforcement process, the documents show. Some users are “whitelisted”—rendered immune from enforcement actions—while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.
At times, the documents show, XCheck has protected public figures whose posts contain harassment or incitement to violence, violations that would typically lead to sanctions for regular users. In 2019, it allowed international soccer star Neymar to show nude photos of a woman, who had accused him of rape, to tens of millions of his fans before the content was removed by Facebook. Whitelisted accounts shared inflammatory claims that Facebook’s fact checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up “pedophile rings,” and that then-President Donald Trump had called all refugees seeking asylum “animals,” according to the documents.
A 2019 internal review of Facebook’s whitelisting practices, marked attorney-client privileged, found favoritism to those users to be both widespread and “not publicly defensible.”
“We are not actually doing what we say we do publicly,” said the confidential review. It called the company’s actions “a breach of trust” and added: “Unlike the rest of our community, these people can violate our standards without any consequences.”
Despite attempts to rein it in, XCheck grew to include at least 5.8 million users in 2020, documents show. In its struggle to accurately moderate a torrent of content and avoid negative attention, Facebook created invisible elite tiers within the social network.
In describing the system, Facebook has misled the public and its own Oversight Board, a body that Facebook created to ensure the accountability of the company’s enforcement systems….
In June 2020, a Trump post came up during a discussion about XCheck’s hidden rules that took place on the company’s internal communications platform, called Facebook Workplace. The previous month, Mr. Trump said in a post: “When the looting starts, the shooting starts.”
A Facebook manager noted that an automated system, designed by the company to detect whether a post violates its rules, had scored Mr. Trump’s post 90 out of 100, indicating a high likelihood it violated the platform’s rules.
For a normal user post, such a score would result in the content being removed as soon as a single person reported it to Facebook. Instead, as Mr. Zuckerberg publicly acknowledged last year, he personally made the call to leave the post up. “Making a manual decision like this seems less defensible than algorithmic scoring and actioning,” the manager wrote.
Mr. Trump’s account was covered by XCheck before his two-year suspension from Facebook in June. So too are those belonging to members of his family, Congress and the European Union parliament, along with mayors, civic activists and dissidents.
While the program included most government officials, it didn’t include all candidates for public office, at times effectively granting incumbents in elections an advantage over challengers. The discrepancy was most prevalent in state and local races, the documents show, and employees worried Facebook could be subject to accusations of favoritism.
Facebook spokesman Andy Stone acknowledged the concern but said the company had worked to address it. “We made multiple efforts to ensure that both in federal and nonfederal races, challengers as well as incumbents were included in the program,” he said.
New report from NYU’s Paul M. Barrett, Justin Hendrix, and J. Grant Sims.
Some critics of the social media industry contend that widespread use of Facebook, Twitter, and YouTube has contributed to increased political polarization in the United States. But Facebook, the largest social media platform, has disputed this contention, saying that it is unsupported by social science research. Determining whether social media plays a role in worsening partisan animosity is important because political polarization has pernicious consequences. We conclude that social media platforms are not the main cause of rising partisan hatred, but use of these platforms intensifies divisiveness and thus contributes to its corrosive effects.
The select committee investigating the Jan. 6 insurrection is seeking a massive tranche of records from social media companies, on whose platforms many defendants charged in the Capitol attack planned and coordinated their actions.
In a series of letters dated Aug. 26, the Democratic-controlled panel asked the companies, which include Facebook, Google, Twitter, Parler, 4chan, Twitch and TikTok, for all records and documents since April 1, 2020, relating to misinformation around the 2020 election, efforts to overturn the 2020 election, domestic violent extremists associated with efforts to overturn the election and foreign influence in the 2020 election.
One of Hong Kong’s most prominent singers has been charged with corruption for a performance he gave at a 2018 rally to support a pro-democracy candidate, the latest in a string of allegations brought by the city’s antigraft watchdog against pro-democracy figures.
Anthony Wong Yiu-ming, 59 years old, an outspoken critic of the city’s government, was arrested Monday morning and later released on bail.
The Independent Commission Against Corruption said on Monday that Mr. Wong sang two songs at a rally for pro-democracy candidate Au Nok-shin, who was running for a seat on the Legislative Council, or LegCo, the city’s top lawmaking body. Mr. Wong engaged in corrupt conduct by providing entertainment to induce another person to vote for Mr. Au, who has also been charged, the agency said.
Hong Kong’s elections ordinance bans providing refreshments or entertainment to favor a candidate, but in the past charges have been rare. If convicted, a person could face up to seven years’ imprisonment in addition to a fine of as much as $64,000, the ordinance says.
Now that I have finished a draft of a new Article, Political Conduct and the First Amendment, I am eager to join the conversation on the ELB. I couldn’t be more thankful to Rick for including me as part of the team. I am a devout reader of the blog and look forward to broadening the ongoing discussion in the election law community about how to improve both democratic governance and faith in democratic institutions.
In the meanwhile, like many of us, I have been wrestling with how to make sense of the Roberts Court’s indifference to voters and democracy. Political Conduct and the First Amendment is my take on the bigger picture:
Preview: The First Amendment’s primary constitutional role is to defend our nation’s commitment to the collective project of self-governance. Its provisions protect both speech and political conduct toward the end of securing vital channels for influencing public policymaking, demanding responsiveness, and ensuring accountability. Over time, however, the Supreme Court and scholars alike have gravitated to the speech clause, driven by the misconception that democracy is a product of political discussion, rather than political participation. The Court has thus reduced a multifaceted amendment protecting the political process writ large into a singular protection for free expression. The Article explains not only why this is a mistake, but how it negatively impacts our democracy. It proceeds to offer a more nuanced account of the First Amendment’s relationship to self-governance—one that vindicates a construction of the amendment that actually protects democracy in all its facets. The three main pillars of this new account are: protection for political conduct; recognition of a strong anti-entrenchment norm; and a better appreciation of the significance of drawing a distinction between the domain of governance and the domain of politics in First Amendment jurisprudence.
The Department of Justice asked Congress on Wednesday to adopt a new law that would hold Facebook, Google and Twitter legally accountable for the way they moderate content on the Web, as the Trump administration ratchets up its attacks on social-media sites as the 2020 election approaches.
The new request from the Justice Department came in the form of a rare, legislative proposal that specifically seeks to whittle down Section 230, a decades-old provision of federal law that spares websites from being held liable for content posted by their users — and immunizes some of their own decisions about what posts, photos and videos to leave up or take down.
“For too long Section 230 has provided a shield for online platforms to operate with impunity,” said Attorney General William P. Barr in a statement. “Ensuring that the internet is a safe, but also vibrant, open and competitive environment is vitally important to America.”
The proposal also seeks to ensure social-media companies moderate their sites and services in a clear and consistent way. For years, President Trump and other top Republicans have attacked tech giants including Facebook, Google and Twitter for censoring conservatives online, something the U.S. government now may have the ability to police if the Justice Department’s proposal were to become law.
As part of my own book project, I read the edited volume “Social Media and Democracy” the day it came out. This is a must-read for researchers who care about the extent to which social media has changed and may further change campaigns and elections. It’s really three books in one: the first six chapters are reviews of the literature on various topics (like misinformation and echo chambers); the next six chapters discuss potential reforms; and the final chapter by Persily and Tucker explains how much more there is to learn about how social media is changing democracy if researchers could get fuller access to social media platforms. Highly recommended.
In conjunction with the book release, Stanford’s Cyber Policy Center is putting together this event on Sept. 8 with a terrific lineup. Registration required.
Can’t wait to dig into this volume edited by Nate Persily and Joshua Tucker, with free access from Cambridge. Thanks Cambridge!
Fact checkers were unanimous in their assessments when President Trump began claiming in June that Democrat Joe Biden wanted to “defund” police forces. Politifact called the allegations “false,” as did CheckYourFact. The Associated Press detailed “distortions” in Trump’s claims. FactCheck.org called an ad airing them “deceptive.” Another site, The Dispatch, said there is “nothing currently to support” Trump’s claims.
But these judgments, made by five fact-checking organizations that are part of Facebook’s independent network for policing falsehoods on the platform, were not shared with Facebook’s users. That’s because the company specifically exempts politicians from its rules against deception. Ads containing the falsehoods continue to run freely on the platform, without any kind of warning or label.
Enabled by Facebook’s rules, Trump’s reelection campaign has shown versions of the false claim on Facebook at least 22.5 million times, in more than 1,400 ads costing between $350,000 and $553,000, a Washington Post analysis found based on data from Facebook’s Ad Library. The ads, bought by the campaign directly or in a partnership with the Republican National Committee, were targeted at Facebook users mainly in swing states such as Ohio, Georgia, North Carolina, Florida, and Pennsylvania.
On April 3, Terrence K. Williams, a politically conservative actor and comedian who’s been praised by President Donald Trump, assured his nearly 3 million followers on Facebook that Democrats would light ballots on fire or throw them away. Wearing a red “Keep America Great” hat, Williams declared, “If you mail in your vote, your vote will be in Barack Obama’s fireplace.” The video has been viewed more than 350,000 times.
On May 8, Peggy Hubbard, a Navy veteran and police officer who this year sought the Republican nomination for a U.S. Senate seat from Illinois, warned on Facebook that the country was heading toward civil war. “Your democracy, your freedom is being stripped away from you, and if you allow that then everything this country stood for, fought for, bled for is all in vain.” The cause? California’s recent expansion of voting by mail: “The only way you will be able to vote in the upcoming election in November is by mail only,” Hubbard said. The video has attracted more than 209,000 views.
On June 27, Pamela Geller, an anti-Muslim activist with nearly 1.3 million followers, weighed in. “Mail-in ballots guarantee that the Democrats will commit voter fraud,” she said on Facebook.
There’s no evidence for any of these statements. While California will mail absentee ballots to all registered voters, polling places will also be available. Voter fraud is exceedingly rare, including with mail-in ballots. A recent Washington Post analysis of three states with all-mail elections — Colorado, Oregon and Washington — found just 372 potential irregularities among 14.6 million votes, or 0.0025%.
Facebook’s community standards ban “misrepresentation of who can vote, qualifications for voting, whether a vote will be counted, and what information and/or materials must be provided in order to vote.” But an analysis by ProPublica and First Draft, a global nonprofit that researches misinformation, shows that Facebook is rife with false or misleading claims about voting, particularly regarding voting by mail, which is the safest way of casting a ballot during the pandemic. Many of these falsehoods appear to violate Facebook’s standards yet have not been taken down or labeled as inaccurate. Some of them, generalizing from one or two cases, portrayed people of color as the face of voter fraud.
The false claims, including conspiracy theories about stolen elections or outright misrepresentations about voting by mail by Trump and prominent conservative outlets, are often among the most popular posts about voting on Facebook, according to a review of engagement data from CrowdTangle, a Facebook-owned analytics tool.
On Facebook, interactions — the number of comments, likes, reactions and shares that a post attracts — are a proxy for popularity. Of the top 50 posts, ranked by total interactions, that mentioned voting by mail since April 1, 22 contained false or substantially misleading claims about voting, particularly about mail-in ballots.
Videos peddling false claims about voter fraud and Covid-19 cures draw millions of views on YouTube. Partisan activist groups pretending to be online news sites set up shop on Facebook. Foreign trolls masquerade as U.S. activists on Instagram to sow divisions around the Black Lives Matter protests.
Four years after an election in which Russia and some far-right groups unleashed a wave of false, misleading and divisive online messages, Silicon Valley is losing the battle to eliminate online misinformation that could sway the vote in November.
Social media companies are struggling with an onslaught of deceptive and divisive messaging from political parties, foreign governments and hate groups as the months tick down to this year’s presidential election, according to more than two dozen national security policymakers, misinformation experts, hate speech researchers, fact-checking groups and tech executives, as well as a review of thousands of social media posts by POLITICO.
The tactics, many aimed at deepening divisions among Americans already traumatized by a deadly pandemic and record job losses, echo the Russian government’s years-long efforts to stoke confusion before the 2016 U.S. presidential election, according to experts who study the spread of harmful content. But the attacks this time around are far more insidious and sophisticated — with harder-to-detect fakes, more countries pushing covert agendas and a flood of American groups copying their methods….
At the same time, social media companies are being squeezed by partisan scrutiny in Washington that makes their judgment calls about what to leave up or remove even more politically fraught: Trump and other Republicans accuse the companies of systematically censoring conservatives, while Democrats lambast them for allowing too many falsehoods to circulate.
Researchers say it’s impossible to know how comprehensive the companies have been in removing bogus content because the platforms often put conditions on access to their data. Academics have had to sign non-disclosure agreements promising not to criticize the companies to gain access to that information, according to people who signed the documents and others who refused to do so.
Experts and policymakers warn the tactics will likely become even more advanced over the next few months, including the possible use of so-called deepfakes, or false videos created through artificial intelligence, to create realistic-looking footage that undermines the opposing side.
“As more data is accumulated, people are going to get better at manipulating communication to voters,” said Robby Mook, campaign manager for Hillary Clinton’s 2016 presidential bid and now a fellow at the Harvard Kennedy School.
Facebook’s fact-checkers on Sunday labeled as “partly false” a video that it said was manipulated to make it appear as if House Speaker Nancy Pelosi was drunk or drugged. The video had been circulating on Facebook since Thursday and by Sunday night had been viewed more than 2 million times.
A similarly false video of Pelosi went viral on Facebook in May 2019. At the time, Pelosi blasted Facebook for not removing the video. Facebook had instead applied a fact-check label to it. Facebook did not remove the new video on Sunday either, meaning it can still be viewed on the platform, but a warning label has been placed on it. Videos marked false are also promoted less by Facebook’s algorithms, the company says. Facebook said it will also send a notification to people who shared the video to flag the fact check.
That the video was viewed so many times will likely prompt renewed scrutiny of policies on misinformation. The earlier manipulated Pelosi video prompted similar scrutiny.
More from the “Reliable Sources” newsletter:
It’s worth noting that, once again, Facebook did not take action to even apply a label to the doctored video until it had gone viral on the platform. As Donie O’Sullivan noted in his story, the video had been circulating since Thursday, but Facebook took action Sunday night, by which point it had already been viewed more than 2 million times. Stone reiterated to me that fact-checked content has its distribution reduced and that people who shared it are notified a fact-check was applied.
But O’Sullivan also noted the video is still continuing to rack up hundreds of thousands of views. My two cents: The fact that Facebook so often acts only after disinfo has gone viral on its platform indicates that the company’s current approach to stopping bad content from being widely shared isn’t very effective.
Misinformation and disinformation can be used to disenfranchise voters and erode public confidence in the legitimacy of our elections. As we observed in the United States in 2016, and in numerous other countries since, the spread of viral misleading content can diminish trust in the results of electoral contests, and in the integrity of democratic processes and leadership transitions overall. An increasing percentage of voters this November will look for real-time election information on social media, and election misinformation and disinformation on social media is a significant threat to ensuring the integrity of the upcoming presidential election. In the United States, over 10,000 individual jurisdictions are responsible for election administration: presently, there is no centralized support to aid this front line in identifying and responding to emerging election-related disinformation.
The Election Integrity Partnership is a coalition of research entities focused on supporting real-time information exchange between the research community, election officials, government agencies, civil society organizations, and social media platforms. Our objective is to detect and mitigate the impact of attempts to prevent or deter people from voting or to delegitimize election results. This is not a fact-checking partnership to debunk misinformation more generally: our objective explicitly excludes addressing comments that may be made about candidates’ character or actions and is focused narrowly on content intended to suppress voting, reduce participation, confuse voters as to election processes, or delegitimize election results without evidence.
The foundational Partnership consists of four of the nation’s leading institutions focused on analysis of mis- and disinformation in the social media landscape: the Stanford Internet Observatory and Program on Democracy and the Internet, Graphika, the Atlantic Council’s Digital Forensic Research Lab, and the University of Washington’s Center for an Informed Public. We will be working with stakeholders in civil society as well as election officials to find instances of election-related misinformation, analyze reports from public sector and NGO partners, and route our findings to the appropriate parties to mitigate the impact. Tips on potential disinformation will come from multiple sources, including from local election officials via existing coordination channels. We will do so transparently and in a nonpartisan manner, sharing up-to-the-minute findings and rapid analysis through a web portal and official social media channels.
Our hope is that this Partnership will provide actionable support for election officials and other partners who are on the front lines of providing accurate information to the electorate, as well as increased transparency for the general public into real threats of election-related misinformation and disinformation this election.
We would like to thank the Knight Foundation and Craig Newmark Philanthropies for their support of this effort.
Public officials and voter-protection organizations can reach the Partnership at email@example.com