Here’s the introduction and summary of argument from this just-filed amicus brief in the NetChoice cases on behalf of political scientist Brendan Nyhan, journalism professor Amy Wilentz, and me, written by me and Nat Bach (a former UCLA student), Marina Shvarts, and Tom Worger (a former UCI student) at Manatt. (Below the fold I include some First Amendment arguments responding to Eugene Volokh on common carriers, as well as to an argument that Texas’s and Florida’s laws are justified by an “antidistortion” interest that the Supreme Court has already rejected in the campaign finance cases.)
Social media has greatly amplified the ability of average individuals to share and receive information, helping to further the kind of robust, wide-open debate that promotes First Amendment values of free speech and association. Gone are the days of speech scarcity when a few gatekeepers such as newspapers and television networks controlled the bulk of political speech. But the rise of “cheap speech”[1] also has had negative consequences, such as when social media platforms are used to harass,[2] spread obscene or violent images,[3] or commit financial fraud.[4] In response to dangers like these, platforms have engaged in content moderation, making decisions as private actors participating in the marketplace of ideas to remove or demote speech that, in their judgment, is objectionable or dangerous.[5]
Social media companies engaged in just such content moderation decisions in the leadup to, and in the aftermath of, the 2020 U.S. presidential election.[6] During that election, President Donald Trump, then a candidate for reelection running against Joe Biden, relentlessly used his social media account on Twitter (now known as “X”[7]) to spread false claims that the election would be or was “rigged” or stolen through fraud, and to advocate for “wild” protests that inspired the January 6, 2021 violent attack on the United States Capitol as Congress was counting the Electoral College votes.
During the campaign and post-election period, these platforms labeled and fact-checked many of Trump’s false and incendiary statements, and limited the sharing of some of his content; but after Trump failed to condemn (and even praised) the January 6 rioters, many major platforms, fearing additional violence fomented by the President, decided to remove or suspend Trump’s social media accounts.
The platforms made voluntary decisions about labeling, fact-checking, demoting, and deplatforming content that undermined election integrity, stoked violence, and raised the risk of election subversion. In so doing, the platforms participated in the open marketplace of ideas by exercising their sound editorial judgment in a socially responsible way to protect democracy. Even if certain moderation decisions were imperfect in hindsight, the platforms’ efforts were vastly preferable to an alternative in which government fiat deprives platforms of the power to remove even dangerous speech.
These 2020 election-related content moderation decisions were not compelled by law (indeed, some other platforms continued to permit the posting of incendiary election-related content even after January 6[8]), but they were laudable. Without such content moderation decisions, the post-election violence could have been far worse and U.S. democracy imperiled.
The platforms’ editorial choices are fully protected by the First Amendment. Just as The Wall Street Journal newspaper has the First Amendment right to exercise editorial discretion and could not be compelled by law to share or remove a politician’s op-ed, platforms have a First Amendment right to include, exclude, label, promote, or demote posts made on their services.
Florida’s and Texas’s social media laws, if allowed to stand, would thwart the ability of platforms to moderate social media posts that risk undermining U.S. democracy and fomenting violence. Texas compels platforms to disseminate speech the platforms might find objectionable or dangerous, prohibiting them from “censor[ing]” an expression of any viewpoint by means of “block[ing], ban[ning], remov[ing], deplatform[ing], demonetiz[ing], de-boost[ing], restrict[ing], deny[ing] equal access or visibility to, or otherwise discriminat[ing].”[9] Florida’s convoluted law prohibits platforms from “deplatforming” known political candidates and “journalistic enterprises,” and from using algorithms to “shadow ban[]” users who post “about” a candidate.[10]
Even where platforms are permitted to take editorial actions, such as engaging in fact-checking, Florida mandates that such actions be based on previously disclosed standards with “detailed definitions” that may not be updated more than once every 30 days.[11] Any such action must be followed by individualized notice to the affected user, including a “thorough rationale” for the action and a “precise and thorough explanation of how the social media platform became aware” of the content that triggered its decision.[12] Under these sweeping and vague laws, broad swaths of dangerous election-related speech would be actually or effectively immune from moderation. And these burdensome laws inevitably will have a chilling effect on platforms’ exercise of editorial judgment.
Both Florida’s and Texas’s laws contain certain exceptions to their bars on content moderation, but those exceptions seemingly would not reach much of the speech that could foment election violence and set the stage for election subversion. As to the content arguably covered by these exceptions, neither Florida nor Texas can show that the exceptions are clear, workable in the real-time social media environment, and consistent with the protections of the First Amendment. For example, Florida’s limited exception for “obscene” speech would not permit moderation of dangerous and violent election-related speech, including speech that is unlawful under the standard of Brandenburg v. Ohio, 395 U.S. 444 (1969). And Texas’s allowance for moderation to prevent incitement of “criminal activity” or “specific threats” is limited to threats made “against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge,” and does not even include threats against election officials or administrators.
Ultimately, NetChoice and the Computer & Communications Industry Association (“CCIA”) are correct that Florida’s and Texas’s laws violate the First Amendment rights of platforms to exercise appropriate editorial judgment and act as responsible corporations.[13] In a free market, consumers need not read or subscribe to social media platforms whose content moderation decisions they do not like; they can turn to other platforms with policies and views more amenable to them. Platforms are not common carriers because they, like newspapers, produce coherent, public-facing speech products (unlike a telephone call or private telegram). And even common carriers cannot be barred from recommending some speech over other speech without violating their First Amendment rights.
Further, Florida’s and Texas’s laws have an impermissible “antidistortion” purpose under this Court’s First Amendment precedents. This Court should not allow states to hijack the platforms, forcing them to equalize speech to include messages that could foment electoral violence and undermine democracy, simply because the states have objected to the platforms’ exercise of editorial discretion.
[1] See Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805, 1819–33 (1995); Richard L. Hasen, Cheap Speech: How Disinformation Poisons Our Politics—and How to Cure It 19–22 (2022) (hereinafter Hasen, Cheap Speech).
[2] Cyberbullying and Online Harms: Preventions and Interventions from Community to Campus 3–4 (Helen Cowie & Carrie Anne Myers eds., 2023).
[3] Danielle K. Citron & Mary Anne Franks, Criminalizing Revenge Porn, 49 Wake Forest L. Rev. 345, 347 (2014).
[4] “More than 95,000 people reported about $770 million in losses to fraud initiated on social media platforms in 2021.” Emma Fletcher, Social Media is a Gold Mine for Scammers in 2021, Federal Trade Commission, Data Spotlight (Jan. 25, 2022), https://www.ftc.gov/news-events/data-visualizations/data-spotlight/2022/01/social-media-gold-mine-scammers-2021 [https://perma.cc/5UCK-QJP3].
[5] When the government pressures private entities such as platforms to speak or not to speak, this “jawboning” raises a different set of issues about the government violating the First Amendment. This Court will consider such issues in the recently granted case, Murthy v. Missouri, No. 23-411.
[6] For details on the facts discussed in the next three paragraphs, see Part A, infra.
[7] We refer to the company as “Twitter” and the posts as “tweets” throughout this brief, as those were the names when the activities described in Part A occurred.
[8] For example, in the aftermath of January 6 and the deplatforming of Trump by Facebook and Twitter, Trump supporters continued to share messages on platforms including Gab and Parler. Kate Conger, Mike Isaac & Sheera Frenkel, Twitter and Facebook Lock Trump’s Accounts After Violence on Capitol Hill, N.Y. Times (Jan. 6, 2021; updated Feb. 14, 2023), https://www.nytimes.com/2021/01/06/technology/capitol-twitter-facebook-trump.html.
[9] Tex. Civ. Prac. & Rem. Code §§ 143A.001(1), 143A.002.
[10] Fla. Stat. §§ 106.072(2), 501.2041(1)(c), (2)(h), (2)(j).
[11] Id. §§ 501.2041(2)(a), (c).
[12] Id. § 501.2041(3). Texas, too, has individualized disclosure requirements. Tex. Bus. & Com. Code §§ 120.101–.104; id. §§ 120.051(a), 120.053(a)(7). We focus in our brief on the Florida disclosure rules, but the Texas disclosure rules raise similar concerns.
[13] Br. for Resp’ts in No. 22-277, at 18–52 (Nov. 30, 2023); Br. for Pet’rs in No. 22-555, at 18–53 (Nov. 30, 2023).