Category Archives: social media and social protests

Read the Position Papers for Friday’s Stanford/UCLA Conference, “Should Donald Trump Be Returned to Social Media?”

Very diverse group of papers from a set of very smart people for this Friday’s conference (not too late to register for the virtual webinar):

Chinmayi Arun

Guy Charles

Evelyn Douek

Katie Fallow

Katie Harbath

Rick Hasen

David Kaye

Genevieve Lakier

Eugene Volokh

Jonathan Zittrain

“Disinformation Has Become Another Untouchable Problem in Washington”

NYT:

The memo that reached the top of the Department of Homeland Security in September could not have been clearer about its plan to create a board to monitor national security threats caused by the spread of dangerous disinformation.

The department, it said, “should not attempt to be an all-purpose arbiter of truth in the public arena.”

Yet when Secretary Alejandro N. Mayorkas announced the disinformation board in April, Republican lawmakers and conservative commentators denounced it as exactly that, calling it an Orwellian attempt to stifle dissenting views. So did some critics from the left, who questioned the powers that such an office might wield in the hands of future Republican administrations.

Within weeks, the new board was dismantled — put on “pause,” officially — undone in part by forces it was meant to combat, including distortions of the board’s intent and powers.

There is wide agreement across the federal government that coordinated disinformation campaigns threaten to exacerbate public health emergencies, stoke ethnic and racial divisions and even undermine democracy itself. The board’s fate, however, has underscored how deeply partisan the issue has become in Washington, making it nearly impossible to consider addressing the threat.

Breaking and Analysis: Supreme Court on 5-4 Vote Reinstates District Court Order Temporarily Barring Enforcement of Texas Social Media Law; Good News for the First Amendment and Bad News for Those Seeking Law to Replatform Trump

In an unusual 5-4 vote, the Supreme Court has vacated a so-far-unexplained order from the 5th Circuit that stayed a Texas district court order barring Texas from enforcing its new social media law. Among other things, this Texas law, if enforceable, could well require large social media companies such as Twitter and Facebook to re-platform Donald Trump after he was deplatformed for encouraging the January 6 insurrection at the United States Capitol. The district court held the statute likely violated the First Amendment, and a Fifth Circuit panel, offering no reason thus far, stayed that order. That stay would have allowed Texas to enforce its law pending the appeal of the case. As it stands now, Texas cannot enforce its law. But the 5th Circuit will eventually issue an opinion in the case, and if it allows Texas to enforce its law, the issue will almost certainly be back before the Supreme Court. This is especially true given last week’s contrary 11th Circuit opinion, which struck down a similar Florida law as violating the First Amendment rights of the private platforms to decide what content should be included or excluded.

The majority (C.J. Roberts and Justices Barrett, Breyer, Kavanaugh, and Sotomayor) did not give a reason for vacating the 5th Circuit stay. Justice Kagan dissented, probably not on the merits but because of her views on whether the Supreme Court should be getting involved in these major pending cases on the shadow docket rather than letting them work their way through the courts.

But Justice Alito wrote an opinion for himself, Justice Thomas, and Justice Gorsuch. In the opinion, Alito does not say that the law is in fact constitutional. He argues that the matter is uncertain, buying into the arguments advanced in the past by Justice Thomas, Eugene Volokh, and others that social media companies can be regulated like “common carriers” (such as the phone company) and forced to carry speech that they do not like.

The argument is audacious and shocking coming from those (like Justice Thomas, less so a Justice like Alito) who have taken near-absolutist positions on First Amendment rights in the past, especially on issues such as campaign finance laws. I write about this in great detail in my Cheap Speech book, and explained the point briefly in this Slate piece:

It would be bad enough if the Supreme Court simply applied outmoded libertarian thinking to today’s information cesspool, believing that the truth will inevitably rise to the top and give voters the tools they need for informed decisionmaking. But the court’s inconsistent thinking on the First Amendment could make things far worse.

Consider the decision of Facebook and Twitter to “deplatform” Trump after he helped inspire the violent insurrection at the U.S. Capitol on January 6, 2021. Meta, which owns Facebook, and Twitter are private companies that make decisions all the time about what content to include, exclude, promote, and demote. The First Amendment does not limit these private companies and they can regulate speech in ways the government could not do. These companies remove hate speech, pornography, and other objectionable content from their platforms all the time.

But Justice Clarence Thomas—yes, the same Justice Thomas who believes that virtually all campaign finance laws violate the First Amendment—recently went out of his way in a case not presenting the issue to raise support for new laws, such as one passed last year in Florida, that would require social media companies to carry the content of politicians they do not like, even if those politicians support election violence or undermine voter confidence in the integrity of the electoral process. Justice Thomas has suggested that social media platforms are like telephone companies that could be subject to “must carry” provisions and cannot discriminate among customers based upon their political views.

But social media companies are much closer to newspapers and TV stations than telephone companies. The former but not the latter curate content all the time, and they can decide who appears on the platform and how. Justice Thomas appears to believe in the freedom of FOX News or the Atlantic to create a coherent brand with a message, but not Twitter or Facebook.

It is hard not to conclude that Justice Thomas was drawn to this anti-libertarian position, which would require private companies to carry speech they would rather not include on their websites, because doing so would favor Donald Trump and those like him.

The good news from today’s opinion is that it looks like there are 5 or 6 votes at least to reject the Texas law and to hold that just like newspapers can decide what content to include or exclude, social media companies can do so too. Whether Section 230 of the Communications Decency Act recognizes it or not, social media companies exercise editorial discretion all the time. They should not be forced as private actors to carry dangerous and anti-democratic speech. People who want such speech can easily find it on Trump’s “Truth Social” platform or elsewhere.

My New Washington Post Piece Connected to My Cheap Speech Book: “Facebook and Twitter could let Trump back online. But he’s still a danger.”

I have written this piece for the Washington Post. It begins:

In the Menlo Park, Calif., offices of Meta, discussions probably have already begun to consider what will happen Jan. 7, 2023, when former president Donald Trump’s ban from Facebook for encouraging the violent insurrection at the U.S. Capitol on Jan. 6, 2021, is set to potentially expire. Judging by how large social media companies have responded lately to the aftermath of the 2020 election and the looming 2022 election in which Republicans may take back control of Congress, there’s ample reason to worry Meta will restore the former president’s ability to post on Facebook — allowing him to continue to spread the false and dangerous claim that the 2020 election was stolen from him. Social media networks and other online platforms such as Google’s YouTube and Spotify can, instead, step up their support for reasonable measures to assure both vibrant political debate and protection of American election integrity and legitimacy. That would include keeping Trump off Facebook….

Social media and other new communications technologies are not solely to blame for the metastasizing election lies, but they play a big part. As the 2020 election season geared up and as Trump began spreading his false claims in the midst of the coronavirus pandemic that the election would be stolen or rigged, Facebook and Twitter reacted meekly. Rather than blocking Trump, they slapped labels on his posts saying his claims were disputed or directing voters to more information. Evidence indicates these labels may have backfired, amplifying Trump’s falsehoods and perhaps even suggesting to voters that they were correct. Things were even worse on other platforms: YouTube allowed videos with false accusations about the election to flourish, and its algorithm directed viewers to ever more extreme content. And those who distribute podcasts, such as on Spotify or Apple, appeared to do little policing of incendiary and dangerous election claims.

It took the actual violence of Jan. 6 for Facebook and Twitter to take action. Both chose to remove Trump from their platforms. Twitter made its ban permanent. Facebook initially did, too, but the Oversight Board it created to give it guidance on content told Meta that while deplatforming Trump was justified because he “created an environment where a serious risk of violence was possible,” the company needed criteria for removing politicians and conditions for determining the length of such bans. In response, the company announced that Trump would be booted for two years, followed by an evaluation as to whether he remained a “threat to public safety.” The company explained: “At the end of this period, we will look to experts to assess whether the risk to public safety has receded. We will evaluate external factors, including instances of violence, restrictions on peaceful assembly and other markers of civil unrest. If we determine that there is still a serious risk to public safety, we will extend the restriction for a set period of time and continue to reevaluate until that risk has receded.”…

Meta may soon face great political pressure from the right to show that it is being “fair” to Trump, especially with Republicans likely to take control of one or both houses of Congress after the 2022 elections and consider laws reining in tech platforms the GOP considers unfriendly. It could also have other incentives to let Trump back onto the site: Recent reporting by Judd Legum, for example, suggests that Facebook has not followed its own policies to prevent the viral spread of false political information, allowing fake groups to manipulate its rules to build up millions of followers to further spread election misinformation. Posts containing such misinformation are often among the most shared items on the platform.

It’s not just Facebook. To little fanfare, Twitter confirmed a few weeks ago that it will no longer police false claims about the 2020 election, apparently because it believes such claims are no longer a threat to election integrity. Twitter told CNN that its civic integrity “policy is designed to be used ‘during the duration’ of an election or other civic event,” and that “the 2020 U.S. election is not only certified, but President Biden has been in office for more than a year.” The staying power of the “big lie” and the rising threat of election subversion built on that lie show how wrong that calculation is…

Companies such as Meta, Twitter and Google are private corporations, which have the right to decide what content to include, exclude, promote or demote on their platforms. They already do that with hate speech, pornography and violence. They need to continue to do that with speech threatening the integrity of American elections. Silencing a political leader should be the last resort, given our commitment to free speech and vibrant election contests. But Trump clearly crossed the line well before the Jan. 6 insurrection.
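Facebook’s two-year ban with periodic reassessment, quoted above, amounts to a simple review loop. Here is a minimal sketch of that cycle; the signal names, the extension period, and the function itself are my own illustrative assumptions, not Facebook’s actual criteria or code:

```python
# Toy sketch of the suspension-review cycle Facebook describes:
# serve a fixed ban, then at each scheduled review check external
# risk signals and extend until the risk has receded. Signal names
# and the extension period are illustrative assumptions.

RISK_SIGNALS = ("instances_of_violence", "assembly_restrictions", "civil_unrest")

def review_suspension(assessment: dict[str, bool], extension_months: int = 6) -> str:
    """One scheduled review: extend the ban if any risk signal persists."""
    if any(assessment.get(signal, False) for signal in RISK_SIGNALS):
        return f"extend {extension_months} months and schedule another review"
    return "lift suspension"

print(review_suspension({"instances_of_violence": True}))  # extend and re-review
print(review_suspension({}))                               # lift suspension
```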

“Echo chambers, filter bubbles, and polarisation: a literature review”

New Reuters Institute report:

Terms like echo chambers, filter bubbles, and polarisation are widely used in public and political debate but not in ways that are always aligned with, or based on, scientific work. And even among academic researchers, there is not always a clear consensus on exact definitions of these concepts.

In this literature review we examine, specifically, social science work presenting evidence concerning the existence, causes, and effect of online echo chambers and consider what related research can tell us about scientific discussions online and how they might shape public understanding of science and the role of science in society.

Echo chambers, filter bubbles, and the relationship between news and media use and various forms of polarisation have to be understood in the context of increasingly digital, mobile, and platform-dominated media environments where most people spend a limited amount of time with news and many internet users do not regularly actively seek out online news, leading to significant inequalities in news use.

When an echo chamber is defined as a bounded, enclosed media space that has the potential both to magnify the messages delivered within it and to insulate them from rebuttal, studies in the UK estimate that between six and eight percent of the public inhabit politically partisan online news echo chambers.

More generally, studies both in the UK and several other countries, including the highly polarised US, have found that most people have relatively diverse media diets, that those who rely on only one source typically converge on widely used sources with politically diverse audiences (such as commercial or public service broadcasters) and that only small minorities, often only a few percent, exclusively get news from partisan sources.

Studies in the UK and several other countries show that the forms of algorithmic selection offered by search engines, social media, and other digital platforms generally lead to slightly more diverse news use – the opposite of what the “filter bubble” hypothesis posits – but that self-selection, primarily among a small minority of highly partisan individuals, can lead people to opt in to echo chambers, even as the vast majority do not.

Research on polarisation offers a complex picture both in terms of overall developments and the main drivers and there is in many cases limited empirical work done outside the United States. Overall, ideological polarisation has, in the long run, declined in many countries but affective polarisation has in some, but not all, cases increased. News audience polarisation is much lower in most European countries, including the United Kingdom. Much depends on the specifics of individual countries and what point in time one measures change from and there are no universal patterns.

There is limited research outside the United States systematically examining the possible role of news and media use in contributing to various kinds of polarisation and the work done does not always find the same patterns as those identified in the US. In the specific context of the United States where there is more research, it seems that exposure to like-minded political content can potentially polarise people or strengthen the attitudes of people with existing partisan attitudes and that cross-cutting exposure can potentially do the same for political partisans.

Public discussions around science online may exhibit some of the same dynamics as those observed around politics and in news and media use broadly, but fundamentally there is at this stage limited empirical research on the possible existence, size, and drivers of echo chambers in public discussions around science. More broadly, existing research on science communication, mainly from the United States, documents the important role of self-selection, elite cues, and small, highly active communities with strong views in shaping these debates and highlights the role especially political elites play in shaping both news coverage and public opinion on these issues.

In summary, the work reviewed here suggests echo chambers are much less widespread than is commonly assumed, finds no support for the filter bubble hypothesis and offers a very mixed picture on polarisation and the role of news and media use in contributing to polarisation.

Aspen Institute’s Commission on Information Disorder Issues Its Final Report

The Commission was created to explore the implications of our “crisis of trust and truth.” A chain reaction of harm to our democracy has emerged as “bad information has become as prevalent, persuasive, and persistent as good information.” The Final Report issued today promises “a viable framework for action” and “makes 15 recommendations for how government, private industry, and civil society can help to increase transparency and understanding, build trust, and reduce harms.”

“Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead.”

WSJ:

In the fall of 2018, Jonah Peretti, chief executive of online publisher BuzzFeed, emailed a top official at Facebook Inc. The most divisive content that publishers produced was going viral on the platform, he said, creating an incentive to produce more of it.

He pointed to the success of a BuzzFeed post titled “21 Things That Almost All White People are Guilty of Saying,” which received 13,000 shares and 16,000 comments on Facebook, many from people criticizing BuzzFeed for writing it, and arguing with each other about race. Other content the company produced, from news videos to articles on self-care and animals, had trouble breaking through, he said.

Mr. Peretti blamed a major overhaul Facebook had given to its News Feed algorithm earlier that year to boost “meaningful social interactions,” or MSI, between friends and family, according to internal Facebook documents reviewed by The Wall Street Journal that quote the email.

BuzzFeed built its business on making content that would go viral on Facebook and other social media, so it had a vested interest in any algorithm changes that hurt its distribution. Still, Mr. Peretti’s email touched a nerve.

Facebook’s chief executive, Mark Zuckerberg, said the aim of the algorithm change was to strengthen bonds between users and to improve their well-being. Facebook would encourage people to interact more with friends and family and spend less time passively consuming professionally produced content, which research suggested was harmful to their mental health.

Within the company, though, staffers warned the change was having the opposite effect, the documents show. It was making Facebook’s platform an angrier place.

Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.

“Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” wrote a team of data scientists, flagging Mr. Peretti’s complaints, in a memo reviewed by the Journal. “This is an increasing liability,” one of them wrote in a later memo.

They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.

Some political parties in Europe told Facebook the algorithm had made them shift their policy positions so they resonated more on the platform, according to the documents.

“Many parties, including those that have shifted to the negative, worry about the long term effects on democracy,” read one internal Facebook report, which didn’t name specific parties.
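To see mechanically how the MSI change the Journal describes can reward divisive posts, consider a toy ranking function. Everything below is a hypothetical sketch: the weights, the scoring function, and the example posts are my assumptions, not Facebook’s actual values or code; the only grounded premise is that comments, reactions, and reshares counted for far more than passive consumption:

```python
# Toy sketch of engagement-weighted feed ranking, loosely modeled on
# the "meaningful social interactions" (MSI) change described above.
# Weights and posts are hypothetical, not Facebook's actual values.

MSI_WEIGHTS = {
    "like": 1,      # passive engagement counts least
    "reaction": 5,  # anger/love/etc. weighted more heavily
    "comment": 15,  # comments signal "meaningful" interaction
    "reshare": 30,  # reshares push a post into new feeds
}

def msi_score(post: dict) -> int:
    """Score a post by summing its weighted engagement counts."""
    return sum(weight * post.get(kind, 0) for kind, weight in MSI_WEIGHTS.items())

posts = [
    {"title": "calm explainer", "like": 900, "comment": 40, "reshare": 20},
    {"title": "outrage bait", "like": 300, "reaction": 500, "comment": 700, "reshare": 400},
]

# The divisive post wins the ranking despite far fewer likes, because
# comments and reshares dominate the weighted score.
for post in sorted(posts, key=msi_score, reverse=True):
    print(post["title"], msi_score(post))
```

Under any weighting of this shape, a post that provokes arguments and reshares outranks one that is merely read and liked, which is the dynamic the internal researchers flagged.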

“Facebook Says Its Rules Apply to All. Company Documents Reveal a Secret Elite That’s Exempt.”

WSJ:

Mark Zuckerberg has publicly said Facebook Inc. allows its more than three billion users to speak on equal footing with the elites of politics, culture and journalism, and that its standards of behavior apply to everyone, no matter their status or fame.

In private, the company has built a system that has exempted high-profile users from some or all of its rules, according to company documents reviewed by The Wall Street Journal.

The program, known as “cross check” or “XCheck,” was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists. Today, it shields millions of VIP users from the company’s normal enforcement process, the documents show. Some users are “whitelisted”—rendered immune from enforcement actions—while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.

At times, the documents show, XCheck has protected public figures whose posts contain harassment or incitement to violence, violations that would typically lead to sanctions for regular users. In 2019, it allowed international soccer star Neymar to show nude photos of a woman, who had accused him of rape, to tens of millions of his fans before the content was removed by Facebook. Whitelisted accounts shared inflammatory claims that Facebook’s fact checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up “pedophile rings,” and that then-President Donald Trump had called all refugees seeking asylum “animals,” according to the documents.

A 2019 internal review of Facebook’s whitelisting practices, marked attorney-client privileged, found favoritism to those users to be both widespread and “not publicly defensible.”

“We are not actually doing what we say we do publicly,” said the confidential review. It called the company’s actions “a breach of trust” and added: “Unlike the rest of our community, these people can violate our standards without any consequences.”

Despite attempts to rein it in, XCheck grew to include at least 5.8 million users in 2020, documents show. In its struggle to accurately moderate a torrent of content and avoid negative attention, Facebook created invisible elite tiers within the social network.

In describing the system, Facebook has misled the public and its own Oversight Board, a body that Facebook created to ensure the accountability of the company’s enforcement systems….

In June 2020, a Trump post came up during a discussion about XCheck’s hidden rules that took place on the company’s internal communications platform, called Facebook Workplace. The previous month, Mr. Trump said in a post: “When the looting starts, the shooting starts.”

A Facebook manager noted that an automated system, designed by the company to detect whether a post violates its rules, had scored Mr. Trump’s post 90 out of 100, indicating a high likelihood it violated the platform’s rules.

For a normal user post, such a score would result in the content being removed as soon as a single person reported it to Facebook. Instead, as Mr. Zuckerberg publicly acknowledged last year, he personally made the call to leave the post up. “Making a manual decision like this seems less defensible than algorithmic scoring and actioning,” the manager wrote.

Mr. Trump’s account was covered by XCheck before his two-year suspension from Facebook in June. So too are those belonging to members of his family, Congress and the European Union parliament, along with mayors, civic activists and dissidents.

While the program included most government officials, it didn’t include all candidates for public office, at times effectively granting incumbents in elections an advantage over challengers. The discrepancy was most prevalent in state and local races, the documents show, and employees worried Facebook could be subject to accusations of favoritism.

Mr. Stone, a Facebook spokesman, acknowledged the concern but said the company had worked to address it. “We made multiple efforts to ensure that both in federal and nonfederal races, challengers as well as incumbents were included in the program,” he said.
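The two-track enforcement flow the Journal reports (automated scoring, removal on a single report for ordinary accounts, and deferral to a review queue for cross-checked ones) can be sketched roughly as follows. The 90-point score matches the figure cited above, but the names, threshold logic, and data are illustrative assumptions, not Facebook’s actual code:

```python
# Minimal sketch of the two-track enforcement described above:
# ordinary accounts face automatic removal once a high-scoring post
# is reported, while XCheck accounts are routed to a human review
# queue that may never act. Names and data are illustrative.

VIOLATION_THRESHOLD = 90  # classifier score out of 100, per the reporting

xcheck_accounts = {"vip_politician", "celebrity_123"}  # hypothetical VIP tier
review_queue: list[tuple[str, int]] = []

def enforce(author: str, score: int, reported: bool) -> str:
    """Decide what happens to a post given its classifier score."""
    if score < VIOLATION_THRESHOLD or not reported:
        return "leave up"
    if author in xcheck_accounts:
        # Shielded tier: queue for employee review; the post stays up.
        review_queue.append((author, score))
        return "queued for review (stays up meanwhile)"
    # Normal tier: a single report plus a high score triggers removal.
    return "removed"

print(enforce("ordinary_user", 90, reported=True))   # removed
print(enforce("vip_politician", 90, reported=True))  # queued for review
```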

“Fueling the Fire: How Social Media Intensifies U.S. Political Polarization–And What Can Be Done About It”

New report from NYU’s Paul M. Barrett, Justin Hendrix, and J. Grant Sims.

Some critics of the social media industry contend that widespread use of Facebook, Twitter, and YouTube has contributed to increased political polarization in the United States. But Facebook, the largest social media platform, has disputed this contention, saying that it is unsupported by social science research. Determining whether social media plays a role in worsening partisan animosity is important because political polarization has pernicious consequences. We conclude that social media platforms are not the main cause of rising partisan hatred, but use of these platforms intensifies divisiveness and thus contributes to its corrosive effects.

“Jan. 6 investigators demand records from social media companies”

Politico:

The select committee investigating the Jan. 6 insurrection is seeking a massive tranche of records from social media companies, on whose platforms many defendants charged in the Capitol attack planned and coordinated their actions.

In a series of letters dated Aug. 26, the Democratic-controlled panel asked the companies, which include Facebook, Google, Twitter, Parler, 4chan, Twitch and TikTok, for all records and documents since April 1, 2020, relating to misinformation around the 2020 election, efforts to overturn the 2020 election, domestic violent extremists associated with efforts to overturn the election and foreign influence in the 2020 election.

“Hong Kong Pop Singer Anthony Wong Yiu-ming Arrested for Singing at a 2018 Election Rally”

Wall Street Journal:

One of Hong Kong’s most prominent singers has been charged with corruption for a performance he gave at a 2018 rally to support a pro-democracy candidate, the latest in a string of allegations brought by the city’s antigraft watchdog against pro-democracy figures.

Anthony Wong Yiu-ming, 59 years old, an outspoken critic of the city’s government, was arrested Monday morning and later released on bail.

The Independent Commission Against Corruption said on Monday that Mr. Wong sang two songs at a rally for pro-democracy candidate Au Nok-shin, who was running for a seat on the Legislative Council, or LegCo, the city’s top lawmaking body. Mr. Wong engaged in corrupt conduct by providing entertainment to induce another person to vote for Mr. Au, who has also been charged, the agency said.

Hong Kong’s elections ordinance bans providing refreshments or entertainment to favor a candidate, but in the past such charges have been rare. If convicted, a person could face imprisonment of up to seven years on top of a fine of as much as $64,000, the ordinance says.

Political Conduct and the First Amendment

Now that I have finished a draft of a new Article, Political Conduct and the First Amendment, I am eager to join the conversation on the ELB. I couldn’t be more thankful to Rick for including me as part of the team. I am a devout reader of the blog and look forward to broadening the ongoing discussion in the election law community about how to improve both democratic governance and faith in democratic institutions.

In the meanwhile, like many of us, I have been wrestling with how to make sense of the Roberts Court’s indifference to voters and democracy. Political Conduct and the First Amendment is my take on the bigger picture:

Preview: The First Amendment’s primary constitutional role is to defend our nation’s commitment to the collective project of self-governance. Its provisions protect both speech and political conduct toward the end of securing vital channels for influencing public policymaking, demanding responsiveness, and ensuring accountability. Over time, however, the Supreme Court and scholars alike have gravitated to the speech clause, driven by the misconception that democracy is a product of political discussion, rather than political participation. The Court has thus reduced a multifaceted amendment protecting the political process writ large into a singular protection for free expression. The Article explains not only why this is a mistake, but how it negatively impacts our democracy. It proceeds to offer a more nuanced account of the First Amendment’s relationship to self-governance—one that vindicates a construction of the amendment that actually protects democracy in all its facets. The three main pillars of this new account are: protection for political conduct; recognition of a strong anti-entrenchment norm; and a better appreciation of the significance of drawing a distinction between the domain of governance and the domain of politics in First Amendment jurisprudence.

“Justice Department asks Congress to weaken social media companies’ liability protection”

WaPo:

The Department of Justice asked Congress on Wednesday to adopt a new law that would hold Facebook, Google and Twitter legally accountable for the way they moderate content on the Web, as the Trump administration ratchets up its attacks on social-media sites ahead of the 2020 election.

The new request from the Justice Department came in the form of a rare legislative proposal that specifically seeks to whittle down Section 230, a decades-old provision of federal law that spares websites from being held liable for content posted by their users — and immunizes some of their own decisions about what posts, photos and videos to leave up or take down.

“For too long Section 230 has provided a shield for online platforms to operate with impunity,” said Attorney General William P. Barr in a statement. “Ensuring that the internet is a safe, but also vibrant, open and competitive environment is vitally important to America.”

The proposal also seeks to ensure social-media companies moderate their sites and services in a clear and consistent way. For years, President Trump and other top Republicans have attacked tech giants including Facebook, Google and Twitter for censoring conservatives online, something the U.S. government now may have the ability to police if the Justice Department’s proposal were to become law.
