Category Archives: social media and social protests

Florida’s Lawyer Having Hard Time at Beginning of Oral Argument in Social Media Cases, Suggesting Law Could Well Be Struck Down [Corrected]

Arguments are just beginning, and at some point I’ll have to leave for class.

At this early point, it appears that Roberts, Sotomayor, Kavanaugh, and Kagan have all expressed great skepticism of these rules.

Justice Jackson distinguished between things Facebook does that qualify as speech and things that don’t. Her questions suggest that at least some of what Facebook does is protected speech.

Kavanaugh asked whether the antidistortion language in Buckley (saying the government cannot equalize speech) and the precedent of Tornillo as to newspapers’ editorial discretion seem to doom this case.

Justice Thomas suggested that this should not have been a facial challenge, which would be a way to duck deciding the merits in this case. So far, no other takers among the justices.

Justice Gorsuch asked whether Section 230 preemption could dispose of parts of this case.

Justice Kagan made the same point I did in my recent Slate piece and in our brief about how Musk’s takeover changed the nature of the site, showing that content moderation is expressive:

It should be no surprise that after Elon Musk took over Twitter and changed its moderation policies to make the platform’s content less trustworthy and more incendiary, users and advertisers reevaluated the platform’s strengths and weaknesses, with many choosing to leave. Content moderation policies shape how the public perceives a platform’s messages. Content moderation decisions—including Musk’s, whether wise or not—are the exercise of editorial discretion. The public then decides which platforms to patronize, value, or devalue.

Justice Barrett, who had been quiet, suggested that platforms exercise editorial control like newspapers do. More bad news for the Florida law.

[This post has been updated and corrected. It originally referenced Texas law.]


New Amicus Brief Filed in Supreme Court in NetChoice Social Media Cases (on Behalf of Brendan Nyhan, Amy Wilentz, and Me) on How Texas and Florida’s Social Media Laws Raise the Risk of Election Subversion

Here’s the introduction and summary of argument from this just-filed amicus brief in the NetChoice cases on behalf of political scientist Brendan Nyhan, journalism professor Amy Wilentz, and me, written by me and Nat Bach (a former UCLA student), Marina Shvarts, and Tom Worger (a former UCI student) at Manatt. (Below the fold I include some first amendment arguments responding to Eugene Volokh on common carriers, as well as an argument that Texas and Florida’s laws rest on an “antidistortion” interest that the Supreme Court has already rejected in the campaign finance cases.)

SUMMARY OF THE ARGUMENT

Social media has greatly amplified the ability of average individuals to share and receive information, helping to further the kind of robust, wide-open debate that promotes First Amendment values of free speech and association. Gone are the days of speech scarcity when a few gatekeepers such as newspapers and television networks controlled the bulk of political speech. But the rise of “cheap speech”[1] also has had negative consequences, such as when social media platforms are used to harass,[2] spread obscene or violent images,[3] or commit financial fraud.[4] In response to dangers like these, platforms have engaged in content moderation, making decisions as private actors participating in the marketplace of ideas to remove or demote speech that, in their judgment, is objectionable or dangerous.[5]

Social media companies engaged in just such content moderation decisions in the leadup to, and in the aftermath of, the 2020 U.S. presidential election.[6] During that election, President Donald Trump, then a candidate for reelection running against Joe Biden, relentlessly used his social media account on Twitter (now known as “X”[7]) to spread false claims that the election would be or was “rigged” or stolen through fraud, and to advocate for “wild” protests that inspired the January 6, 2021 violent attack on the United States Capitol as Congress was counting the Electoral College votes.

During the campaign and post-election period, these platforms labeled and fact-checked many of Trump’s false and incendiary statements, and limited the sharing of some of his content; but after Trump failed to condemn (and even praised) the January 6 rioters, many major platforms, fearing additional violence fomented by the President, decided to remove or suspend Trump’s social media accounts.

The platforms made voluntary decisions about labeling, fact-checking, demoting, and deplatforming content that undermined election integrity, stoked violence, and raised the risk of election subversion. In so doing, the platforms participated in the open marketplace of ideas by exercising their sound editorial judgment in a socially responsible way to protect democracy. Even if certain moderation decisions were imperfect in hindsight, the platforms’ efforts were vastly preferable to an alternative in which government fiat deprives platforms of the power to remove even dangerous speech.

These 2020 election-related content moderation decisions were not compelled by law—and some other platforms continued to permit and post incendiary election-related content even after January 6[8]—but they were laudable. Without such content moderation decisions, the post-election violence could have been far worse and U.S. democracy imperiled.

The platforms’ editorial choices are fully protected by the First Amendment. Just as The Wall Street Journal newspaper has the First Amendment right to exercise editorial discretion and could not be compelled by law to share or remove a politician’s op-ed, platforms have a First Amendment right to include, exclude, label, promote, or demote posts made on their services.

Florida’s and Texas’s social media laws, if allowed to stand, would thwart the ability of platforms to moderate social media posts that risk undermining U.S. democracy and fomenting violence. Texas compels platforms to disseminate speech the platforms might find objectionable or dangerous, prohibiting them from “censor[ing]” an expression of any viewpoint by means of “block[ing], ban[ning], remov[ing], deplatform[ing], demonetiz[ing], de-boost[ing], restrict[ing], deny[ing] equal access or visibility to, or otherwise discriminat[ing].”[9] Florida’s convoluted law prohibits platforms from “deplatforming” known political candidates and “journalistic enterprises,” and from using algorithms to “shadow ban[]” users who post “about” a candidate.[10]

Even where platforms are permitted to take editorial actions, such as engaging in fact-checking, Florida mandates that such actions must be based on previously disclosed standards with “detailed definitions” that may not be updated more than once every 30 days.[11] Any such action must be followed up with individualized notice to the affected user, including a “thorough rationale” for the action and a “precise and thorough explanation of how the social media platform became aware” of the content that triggered its decision.[12] Under these sweepingly vague laws, broad swaths of dangerous election-related speech would be actually or effectively immune from moderation. And these burdensome laws inevitably will have a chilling effect.

Both Florida’s and Texas’s laws contain certain exceptions from their bar on content moderation, but those exceptions seemingly would not reach much of the speech that could foment election violence and set the stage for election subversion. As to the content arguably covered by these exceptions, neither Florida nor Texas can show that the exceptions are clear, workable in the real-time social media environment, and consistent with the protections of the First Amendment. For example, Florida’s limited exception for “obscene” speech would not permit moderation of dangerous and violent election-related speech, including speech that is unlawful under the standard of Brandenburg v. Ohio, 395 U.S. 444 (1969). And Texas’s allowance for moderation to prevent incitement of “criminal activity” or “specific threats” is limited to threats made “against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge,” and does not even include threats against election officials or administrators.

Ultimately, NetChoice and the Computer & Communications Industry Association (“CCIA”) are correct that Florida’s and Texas’s laws violate the First Amendment rights of platforms to exercise appropriate editorial judgment and act as responsible corporations.[13] In a free market, consumers need not read or subscribe to social media platforms whose content moderation decisions they do not like; they can turn to other platforms with policies and views more amenable to them. Platforms are not common carriers because they, like newspapers, produce coherent, public-facing speech products (unlike a telephone call or private telegram). And even common carriers cannot be barred from recommending some speech over others without violating their First Amendment rights.

Further, Florida’s and Texas’s laws have an impermissible “anti-distortion” purpose under this Court’s First Amendment precedents. This Court should not allow states to hijack the platforms, forcing them to equalize speech to include messages that could foment electoral violence and undermine democracy, simply because the states have objected to the platforms’ exercise of editorial discretion.


[1] See Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805, 1819–33 (1995); Richard L. Hasen, Cheap Speech: How Disinformation Poisons Our Politics—and How to Cure It 19–22 (2022) (hereinafter Hasen, Cheap Speech).

[2] Cyberbullying and Online Harms: Preventions and Interventions from Community to Campus 3–4 (Helen Cowie & Carrie Anne Myers eds., 2023).

[3] Danielle K. Citron & Mary Anne Franks, Criminalizing Revenge Porn, 49 Wake Forest L. Rev. 345, 347 (2014).

[4] “More than 95,000 people reported about $770 million in losses to fraud initiated on social media platforms in 2021.” Emma Fletcher, Social Media is a Gold Mine for Scammers in 2021, Federal Trade Commission, Data Spotlight (Jan. 25, 2022), https://www.ftc.gov/news-events/data-visualizations/data-spotlight/2022/01/social-media-gold-mine-scammers-2021 [https://perma.cc/5UCK-QJP3].

[5] When the government pressures private entities such as platforms to speak or not to speak, this “jawboning” raises a different set of issues about the government violating the First Amendment. This Court will consider such issues in the recently granted case, Murthy v. Missouri, No. 23-411.

[6] For details on the facts discussed in the next three paragraphs, see Part A, infra.

[7] We refer to the company as “Twitter” and the posts as “tweets” throughout this brief, as those were the names when the activities described in Part A occurred.

[8] For example, in the aftermath of January 6 and the deplatforming of Trump by Facebook and Twitter, Trump supporters continued to share messages on platforms including Gab and Parler. Kate Conger, Mike Isaac, & Sheera Frenkel, Twitter and Facebook Lock Trump’s Accounts after Violence on Capitol Hill, N.Y. Times, Jan. 6, 2021 (updated Feb. 14, 2023), https://www.nytimes.com/2021/01/06/technology/capitol-twitter-facebook-trump.html.

[9] Tex. Civ. Prac. & Remedies Code §§ 143A.001(1), 143A.002.

[10] Fla. Stat. §§ 106.072(2), 501.2041(1)(c), (2)(h), (2)(j).

[11] Id. §§ 501.2041(2)(a), (c).

[12] Id. § 501.2041(3). Texas too has individualized disclosure requirements. Tex. Bus. & Com. Code §§ 120.101-104; id. §§ 120.051(a), 120.053(a)(7). We focus in our brief on the Florida disclosure rules, but the Texas disclosure rules raise similar concerns.

[13] Br. for Resp’ts in No. 22-277, 18-52 (Nov. 30, 2023); Br. for Pet’rs in No. 22-555, 18-53 (Nov. 30, 2023).


“A year later, Musk’s X is tilting right. And sinking.”

WaPo:

One year after billionaire Elon Musk bought Twitter for $44 billion, aiming to rid it of a “woke mind virus” that he believed was suppressing free speech, the site’s business outlook appears dire.

The number of people actively tweeting has dropped by more than 30 percent, according to previously unreported data obtained by The Washington Post, and the company — which the entrepreneur behind Tesla and SpaceX has renamed X — is hemorrhaging advertisers and revenue, interviews show.

But in at least one respect, Musk has delivered on his original promise: Twitter has become far less “woke.”

Through dramatic product changes, sudden policy shifts, and his own outsize presence on the platform, Musk has rapidly re-engineered who has a voice on a service that used to be the hub of real-time news and global debate. A site that fueled social movements such as the Arab Spring, Black Lives Matter and #MeToo has veered noticeably rightward under Musk, especially in the United States, say organizers from across the political spectrum.

NYT:

Now rebranded as X, the site has experienced a surge in racist, antisemitic and other hateful speech. Under Mr. Musk’s watch, millions of people have been exposed to misinformation about climate change. Foreign governments and operatives — from Russia to China to Hamas — have spread divisive propaganda with little or no interference.

Mr. Musk and his team have repeatedly asserted that such concerns are overblown, sometimes pushing back aggressively against people who voice them. Yet dozens of studies from multiple organizations have shown otherwise, demonstrating on issue after issue a similar trend: an increase in harmful content on X during Mr. Musk’s tenure.


Breaking: Supreme Court to Hear Major Pair of Cases Involving Regulation of Social Media Platforms’ Curation or Exclusion of Politician Speech

From today’s order list:

22-277 MOODY, ATT’Y GEN. OF FL, ET AL. V. NETCHOICE, LLC, ET AL.
22-555 NETCHOICE, LLC, ET AL. V. PAXTON, ATT’Y GEN. OF TX
The petitions for writs of certiorari are granted limited to Questions 1 and 2 presented by the Solicitor General in her brief for the United States as amicus curiae.

From the SG brief:

These cases concern laws enacted by Florida and Texas to regulate major social media platforms like Facebook, YouTube, and X (formerly known as Twitter). The two laws differ in some respects, but both restrict platforms’ ability to engage in content moderation by removing, editing, or arranging user-generated content; require platforms to provide individualized explanations for certain forms of content moderation; and require general disclosures about platforms’ content-moderation practices. The questions presented are:

  1. Whether the laws’ content-moderation restrictions comply with the First Amendment.
  2. Whether the laws’ individualized-explanation requirements comply with the First Amendment.
  3. Whether the laws’ general-disclosure provisions comply with the First Amendment.
  4. Whether the laws violate the First Amendment because they were motivated by viewpoint discrimination.

Again, only Questions 1 and 2 will be heard by the Court.

This is going to be a major test of whether social media companies may have their content regulated as though they were government actors, or whether they have first amendment rights to include or exclude politicians’ content as they see fit, much like newspapers, news websites, and cable and television stations.

It has major implications for efforts to limit election subversion, as I will write about later on.

More from SCOTUSBlog.


“Twitter Fires Election Integrity Team Ahead of 2024 Elections”

Rolling Stone:

Next year will see dozens of elections around the globe, but X (formerly Twitter) has seemingly abdicated responsibility for protecting users from misinformation during these democratic processes.

Several European staffers working on a threat disruption team for the social platform, including senior manager Aaron Rodericks, have been fired this week, according to a report in the tech publication The Information that cited anonymous sources familiar with the matter. Site owner Elon Musk confirmed the termination of the team members on Wednesday.

Last month, Rodericks, a Canadian senior manager based in Ireland, posted on LinkedIn that he was looking to hire eight staffers ahead of more than 70 elections worldwide in 2024, to work on combating material that could undermine democracy. “If you have a passion for protecting the integrity of elections and civic events, X is certainly at the centre of the conversation!” he wrote. The new employees would work at one of several U.S. offices, or in Toronto, Dublin, or Singapore. X’s Safety team likewise announced that they were “expanding our safety and elections teams to focus on combating manipulation, surfacing inauthentic accounts and closely monitoring the platform for emerging threats.”

But Rodericks — whose law firm did not respond to multiple requests for comment on his apparent termination by X — never got the chance to build his team. The hiring notice attracted the notice of right-wing influencers including Chaya Raichik (a.k.a. Libs of TikTok) and Mike Benz, a former State Department official who at one point was angling for access to the so-called “Twitter Files,” internal communications that conservatives believe demonstrated collusion between the company and the U.S. government to censor conservative views and media.


“Misinformation research is buckling under GOP legal attacks”

Must-read WaPo:

Academics, universities and government agencies are overhauling or ending research programs designed to counter the spread of online misinformation amid a legal campaign from conservative politicians and activists who accuse them of colluding with tech companies to censor right-wing views.

The escalating campaign — led by Rep. Jim Jordan (R-Ohio) and other Republicans in Congress and state government — has cast a pall over programs that study not just political falsehoods but also the quality of medical information online.

Facing litigation, Stanford University officials are discussing how they can continue tracking election-related misinformation through the Election Integrity Partnership (EIP), a prominent consortium that flagged social media conspiracies about voting in 2020 and 2022, several participants told The Washington Post. The coalition of disinformation researchers may shrink and also may stop communicating with X and Facebook about their findings….

Led by the Stanford Internet Observatory and the University of Washington’s Center for an Informed Public, the coalition of researchers was formed in the middle of the 2020 presidential campaign to alert tech companies in real time about viral election-related conspiracies on their platforms. The posts, for example, falsely claimed Dominion Voting Systems’ software switched votes in favor of President Biden, an allegation that also was at the center of a defamation case that Fox News settled for $787 million.

In March 2021, the group released a nearly 300-page report documenting how false election fraud claims rippled across the internet, coalescing into the #StopTheSteal movement that fomented the Jan. 6, 2021, attack at the U.S. Capitol. In its final report, the coalition noted that Meta, X (formerly Twitter), TikTok and YouTube labeled, removed or suppressed just over a third of the posts the researchers flagged.

But by 2022, the partnership was engulfed in controversy. Right-wing media outlets, advocacy groups and influencers such as the Foundation for Freedom Online, Just the News and far-right provocateur Jack Posobiec argued that the Election Integrity Partnership was part of a coalition with government and industry working to censor Americans’ speech online. (Posobiec didn’t respond to a request for comment, but after this story was published online he posted the request on X with the comment: “Every one of these programs will be penniless and powerless by the time I am done.”)

Jordan has sent several legal demands to see the coalition’s internal communications with the government and social media platforms and hauled them into Congress to testify about their work.

Louis-Charles, the Judiciary Committee spokeswoman, said in a statement that the universities involved with EIP “played a unique role in the censorship industrial complex given their extensive, direct contacts with federal government agencies.”

The probe prompted members of the Election Integrity Partnership to reevaluate their participation in the coalition altogether. Stanford Internet Observatory founder Alex Stamos, whose group helps lead the coalition, told Jordan’s staff earlier this year that he would have to talk with Stanford’s leadership about the university’s continued involvement, according to a partial transcript filed in court.

“Since this investigation has cost the university now approaching seven [figure] legal fees, it’s been pretty successful, I think, in discouraging us from making it worthwhile for us to do a study in 2024,” Stamos said.

Kate Starbird, co-founder of the University of Washington Center for an Informed Public, declined to elaborate on specific plans to monitor the upcoming presidential race but said her group aims to put together a “similar coalition … to rapidly address harmful false rumors about the 2024 election.”

She added, “It’s clear to me that researchers and their institutions won’t be deterred by conspiracy theorists and those seeking to smear and silence this line of research for entirely political reasons.”…


“Special counsel warned Trump could ‘precipitate violence’ if told of Twitter search warrant”

The Hill:

Newly unsealed court records indicate special counsel Jack Smith’s team warned that former President Trump could “precipitate violence” unless the court shielded its efforts to obtain information on his Twitter account.

The records show Smith’s office obtained a total of 32 direct messages from Trump’s account as part of its investigation, with a copy of the warrant also unsealed Friday showing the breadth of the information prosecutors sought.

The 71-page filing from prosecutors, submitted to the court in April but unsealed Friday, offers new details about why Smith’s team feared alerting Trump to the matter….

While prosecutors reiterated prior arguments that Trump could jeopardize the case if the warrant was disclosed, the filing cites Trump’s past behavior as underscoring the need to do so.

“These are not hypothetical considerations in this case. Following his defeat in the 2020 presidential election, the former President propagated false claims of fraud (including swearing to false allegations in a federal court filing), pressured state and federal officials to violate their legal duties, and retaliated against those who did not comply with his demands, culminating in violence at the U.S. Capitol on January 6,” prosecutors wrote.


“Biden asks justices to block limits on collaboration with social media companies”

Amy Howe for SCOTUSBlog:

The Biden administration on Thursday afternoon asked the Supreme Court to temporarily block a lower court’s order that would limit its ability to communicate with social media companies over content moderation policies. U.S. Solicitor General Elizabeth Prelogar told the justices that if the “unprecedented” order is allowed to stand, it would put a Louisiana district judge in charge of overseeing the executive branch’s communications with social media companies.

Shortly after receiving the government’s request, Justice Samuel Alito — who handles emergency requests from the U.S. Court of Appeals for the 5th Circuit — put the lower court’s order on hold until the end of the day on Friday, Sept. 22, to give the justices time to rule on the request. Alito also directed the plaintiffs to file a response to the government’s application by 4 p.m. on Wednesday, Sept. 20.

The dispute arises from the federal government’s efforts to combat the spread of misinformation on social media by flagging content for social media platforms and urging them to remove that content. The lawsuit was filed by Republican attorneys general in Missouri and Louisiana, as well as four individual plaintiffs whose social media posts on controversial topics such as the COVID-19 lab-leak theory and vaccine side effects were removed or downgraded. They argued that the government “coerced, threatened, and pressured social-media platforms to censor” them, which violated the First Amendment.

The federal government countered that it had only sought to “mitigate the hazards of online misinformation” by flagging content that violated the platforms’ own policies.


“Protecting Democracy Online in 2024 and Beyond”

New report from CAP:

This new report specifically anticipates risks to and from the major social media platforms in the 2024 elections, continuing CAP’s work to promote election integrity online and ensure free and fair elections globally. The report’s recommendations incorporate learnings from past elections and introduce new ideas to encourage technology platforms to safeguard democratic processes and mitigate election threats. In a world without standardized global social media regulation, ensuring elections are safe, accessible, and protected online and offline will require key actions to be taken ahead of any votes being cast—both in 2024 and beyond.


“5th Circuit finds Biden White House, CDC likely violated First Amendment”

WaPo:

The U.S. Court of Appeals for the 5th Circuit on Friday ruled that the Biden White House, top government health officials and the FBI likely violated the First Amendment by improperly influencing tech companies’ decisions to remove or suppress posts on the coronavirus and elections.

The decision, written unanimously by three judges nominated by Republican presidents, was likely to be seen as a victory for conservatives who have long argued that social media platforms’ content moderation efforts restrict their free speech rights. But some advocates also said the ruling was an improvement over a temporary injunction U.S. District Judge Terry A. Doughty issued July 4….

Doughty’s decision had affected a wide range of government departments and agencies, and imposed 10 specific prohibitions on government officials. The appeals court threw out nine of those and modified the 10th to limit it to efforts to “coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech.”…

The judges also zeroed in on the FBI’s communications with tech platforms in the run-up to the 2020 elections, which included regular meetings with the tech companies. The judges wrote that the FBI’s activities were “not limited to purely foreign threats,” citing instances where the law enforcement agency “targeted” posts that originated inside the United States, including some that stated incorrect poll hours or mail-in voting procedures.

The judges said in their rulings that the platforms changed their policies based on the FBI briefings, citing updates to their terms of service about handling of hacked materials, following warnings of state-sponsored “hack and dump” operations.
