New Amicus Brief Filed in Supreme Court in NetChoice Social Media Cases (on Behalf of Brendan Nyhan, Amy Wilentz, and Me) on How Texas and Florida’s Social Media Laws Raise the Risk of Election Subversion

Here’s the introduction and summary of argument from this just-filed amicus brief in the NetChoice cases on behalf of political scientist Brendan Nyhan, journalism professor Amy Wilentz, and me, written by me and Nat Bach (a former UCLA student), Marina Shvarts, and Tom Worger (a former UCI student) at Manatt. (Below the fold I am putting some First Amendment arguments responding to Eugene Volokh on common carriers, as well as an argument that Texas and Florida’s laws are justified by an “antidistortion” interest that the Supreme Court has already rejected in the campaign finance cases.)


Social media has greatly amplified the ability of average individuals to share and receive information, helping to further the kind of robust, wide-open debate that promotes First Amendment values of free speech and association. Gone are the days of speech scarcity when a few gatekeepers such as newspapers and television networks controlled the bulk of political speech. But the rise of “cheap speech”[1] also has had negative consequences, such as when social media platforms are used to harass,[2] spread obscene or violent images,[3] or commit financial fraud.[4] In response to dangers like these, platforms have engaged in content moderation, making decisions as private actors participating in the marketplace of ideas to remove or demote speech that, in their judgment, is objectionable or dangerous.[5]

Social media companies engaged in just such content moderation decisions in the leadup to, and in the aftermath of, the 2020 U.S. presidential election.[6] During that election, President Donald Trump, then a candidate for reelection running against Joe Biden, relentlessly used his social media account on Twitter (now known as “X”[7]) to spread false claims that the election would be or was “rigged” or stolen through fraud, and to advocate for “wild” protests that inspired the January 6, 2021 violent attack on the United States Capitol as Congress was counting the Electoral College votes.

During the campaign and post-election period, these platforms labeled and fact-checked many of Trump’s false and incendiary statements, and limited the sharing of some of his content; but after Trump failed to condemn (and even praised) the January 6 rioters, many major platforms, fearing additional violence fomented by the President, decided to remove or suspend Trump’s social media accounts.

The platforms made voluntary decisions about labeling, fact-checking, demoting, and deplatforming content that undermined election integrity, stoked violence, and raised the risk of election subversion. In so doing, the platforms participated in the open marketplace of ideas by exercising their sound editorial judgment in a socially responsible way to protect democracy. Even if certain moderation decisions were imperfect in hindsight, the platforms’ efforts were vastly preferable to an alternative in which government fiat deprives platforms of the power to remove even dangerous speech.

These 2020 election-related content moderation decisions were not compelled by law—and some other platforms continued to permit and post incendiary election-related content even after January 6[8]—but they were laudable. Without such content moderation decisions, the post-election violence could have been far worse and U.S. democracy imperiled.

The platforms’ editorial choices are fully protected by the First Amendment. Just as The Wall Street Journal newspaper has the First Amendment right to exercise editorial discretion and could not be compelled by law to share or remove a politician’s op-ed, platforms have a First Amendment right to include, exclude, label, promote, or demote posts made on their services.

Florida’s and Texas’s social media laws, if allowed to stand, would thwart the ability of platforms to moderate social media posts that risk undermining U.S. democracy and fomenting violence. Texas compels platforms to disseminate speech the platforms might find objectionable or dangerous, prohibiting them from “censor[ing]” an expression of any viewpoint by means of “block[ing], ban[ning], remov[ing], deplatform[ing], demonetiz[ing], de-boost[ing], restrict[ing], deny[ing] equal access or visibility to, or otherwise discriminat[ing].”[9] Florida’s convoluted law prohibits platforms from “deplatforming” known political candidates and “journalistic enterprises,” and from using algorithms to “shadow ban[]” users who post “about” a candidate.[10]

Even where platforms are permitted to take editorial actions, such as engaging in fact-checking, Florida mandates that such actions must be based on previously disclosed standards with “detailed definitions” that may not be updated more than once every 30 days.[11] Any such action must be followed up with individualized notice to the affected user, including a “thorough rationale” for the action and a “precise and thorough explanation of how the social media platform became aware” of the content that triggered its decision.[12] Under these sweepingly vague laws, broad swaths of dangerous election-related speech would be actually or effectively immune from moderation. And these burdensome laws inevitably will have a chilling effect.

Both Florida’s and Texas’s laws contain certain exceptions from their bar on content moderation, but those exceptions seemingly would not reach much of the speech that could foment election violence and set the stage for election subversion. As to the content arguably covered by these exceptions, neither Florida nor Texas can show that the exceptions are clear, workable in the real-time social media environment, and consistent with the protections of the First Amendment. For example, Florida’s limited exception for “obscene” speech would not permit moderation of dangerous and violent election-related speech, including speech that is unlawful under the standard of Brandenburg v. Ohio, 395 U.S. 444 (1969). And Texas’s allowance for moderation to prevent incitement of “criminal activity” or “specific threats” is limited to threats made “against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge,” and does not even include threats against election officials or administrators.

Ultimately, NetChoice and the Computer & Communications Industry Association (“CCIA”) are correct that Florida’s and Texas’s laws violate the First Amendment rights of platforms to exercise appropriate editorial judgment and act as responsible corporations.[13] In a free market, consumers need not read or subscribe to social media platforms whose content moderation decisions they do not like; they can turn to other platforms with policies and views more amenable to them. Platforms are not common carriers because they, like newspapers, produce coherent speech products and produce public-facing speech (unlike a telephone call or private telegram). And even common carriers cannot be barred from recommending some speech over others without violating their First Amendment rights.

Further, Florida’s and Texas’s laws have an impermissible “anti-distortion” purpose under this Court’s First Amendment precedents. This Court should not allow states to hijack the platforms, forcing them to equalize speech to include messages that could foment electoral violence and undermine democracy, simply because the states have objected to the platforms’ exercise of editorial discretion.

[1] See Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805, 1819–33 (1995); Richard L. Hasen, Cheap Speech: How Disinformation Poisons Our Politics—and How to Cure It 19–22 (2022) (hereinafter Hasen, Cheap Speech).

[2] Cyberbullying and Online Harms: Preventions and Interventions from Community to Campus 3–4 (Helen Cowie & Carrie Anne Myers eds., 2023).

[3] Danielle K. Citron & Mary Anne Franks, Criminalizing Revenge Porn, 49 Wake Forest L. Rev. 345, 347 (2014).

[4] “More than 95,000 people reported about $770 million in losses to fraud initiated on social media platforms in 2021.” Emma Fletcher, Social Media is a Gold Mine for Scammers in 2021, Federal Trade Commission, Data Spotlight (Jan. 25, 2022), [].

[5] When the government pressures private entities such as platforms to speak or not to speak, this “jawboning” raises a different set of issues about the government violating the First Amendment. This Court will consider such issues in the recently granted case, Murthy v. Missouri, No. 23-411.

[6] For details on the facts discussed in the next three paragraphs, see Part A, infra.

[7] We refer to the company as “Twitter” and the posts as “tweets” throughout this brief, as those were the names when the activities described in Part A occurred.

[8] For example, in the aftermath of January 6 and the deplatforming of Trump by Facebook and Twitter, Trump supporters continued to share messages on platforms including Gab and Parler. Kate Conger, Mike Isaac & Sheera Frenkel, Twitter and Facebook Lock Trump’s Accounts After Violence on Capitol Hill, N.Y. Times, Jan. 6, 2021 (updated Feb. 14, 2023).

[9] Tex. Civ. Prac. & Remedies Code §§ 143A.001(1), 143A.002.

[10] Fla. Stat. §§ 106.072(2), 501.2041(1)(c), (2)(h), (2)(j).

[11] Id. §§ 501.2041(2)(a), (c).

[12] Id. § 501.2041(3). Texas too has individualized disclosure requirements. Tex. Bus. & Com. Code §§ 120.101-104; id. §§ 120.051(a), 120.053(a)(7). We focus in our brief on the Florida disclosure rules, but the Texas disclosure rules raise similar concerns.

[13] Br. for Resp’ts in No. 22-277, 18-52 (Nov. 30, 2023); Br. for Pet’rs in No. 22-555, 18-53 (Nov. 30, 2023).

D. The First Amendment Forbids Eliminating Platforms’ Editorial Discretion and Forcing Them to Have Their Property Used to Undermine Free Elections in the Name of Equalizing Speech.

No one could seriously contest the unconstitutionality of a law that would compel The Wall Street Journal to print on the front page of its newspaper each day Donald Trump’s 400 post-election tweets calling the 2020 election results into question or his January 6 messages advocating for election subversion. The conclusion that such a law would violate the First Amendment flows easily from Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), and its progeny. The Journal, in exercising its speech rights, decides what content is worthy of inclusion, how the content is organized, and the general editorial direction of the newspaper. Those who do not like such editorial decisions are free to read other newspapers.

The same principles apply to the content moderation decisions of the platforms, for reasons fully described in NetChoice and CCIA’s Briefs on the Merits[1] and in the Solicitor General’s Brief for the United States as Amicus Curiae at the cert. stage, at pages 13-18. The state laws at issue here would have required Trump’s content to be displayed, prominently and unmediated, by the platforms, even after the attack on the Capitol. They would deprive the platforms of the same speech rights to which the Journal is entitled.

Neither newspapers nor platforms (nor for that matter bookstores, television stations, or movie theaters) should be compelled by states to give up their editorial discretion to those who would promote election subversion or support election-related violence. Instead, the corporations who run these entities have the right to edit and curate their content consistent with their discretion and with sound corporate responsibility.

Rather than repeat the correct First Amendment arguments of NetChoice, CCIA, and the U.S., we briefly emphasize two points.

First, it is absurd to argue that the platforms are more like common carriers such as telephone companies subject to viewpoint antidiscrimination provisions than like The Wall Street Journal. As Professor Eugene Volokh, one of the originators of the common carrier analogy, explains, what separates entities such as newspapers from entities such as phone companies is whether they produce a “coherent speech product.”[2] Those who do are entitled under the First Amendment to exercise editorial discretion.

Platforms surely do produce such coherent products, despite what Professor Volokh suggests.[3] Of course the public reasonably associates a controversial politician’s speech with a platform’s editorial message. People may be attracted to or repulsed by Trump’s speech on a platform, but they will perceive that speech as part of the platform’s overall message. (In contrast, no one perceives private text messages sent over AT&T’s network as AT&T’s speech.) People know that Truth Social, where Trump commonly posts, is different from a platform where people rarely, if ever, see posts from Trump or a platform marketed to Democrats organized around criticizing Trump.[4]

It should be no surprise that after Elon Musk took over Twitter and changed its moderation policies to make the platform’s content less trustworthy and more incendiary, users and advertisers reevaluated the platform’s strengths and weaknesses, with many choosing to leave.[5] Content moderation policies shape how the public perceives a platform’s messages. Content moderation decisions—including Mr. Musk’s, whether wise or not—are the exercise of editorial discretion. The public then decides which platforms to patronize, value, or devalue.

Even if the law treated platforms as common carriers for some purposes by requiring them to carry certain content, Professor Volokh writes that “platforms retain the First Amendment right to choose what to include in . . . recommendations and what to exclude from them.”[6] For reasons explained above, certain decisions to recommend some content over others would violate both Florida’s and Texas’s laws, rendering such laws unconstitutional even as applied to common carriers.

Second, Florida’s and Texas’s laws seek to equalize political speech in violation of this Court’s First Amendment jurisprudence. In his amicus brief supporting cert. in the Florida litigation, Trump approvingly quoted Professor Volokh on an equalization rationale for treating platforms like common carriers: “Recent experience has fostered a widespread and growing concern that behemoth social media platforms . . . have ‘seriously leverage[d their] economic power into a means of affecting the community’s political life.’” Br. Donald J. Trump as Amicus Curiae in Support of Petitioners in No. 22-277 at 2 (quoting Professor Volokh).

But this Court has repeatedly rejected the “anti-distortion” rationale that government may limit the voice of some to enhance the relative voice of others. See, e.g., Citizens United v. FEC, 558 U.S. 310, 349-56 (2010); Buckley v. Valeo, 424 U.S. 1, 48-49 (1976) (“restrict[ing] the speech of some elements of our society in order to enhance the relative voice of others is wholly foreign to the First Amendment”).

This Court also has held the government is powerless to prevent those with greater economic power from leveraging that power through political speech: “It is irrelevant for purposes of the First Amendment that corporate funds may ‘have little or no correlation to the public’s support for the corporation’s political ideas.’ [Citation.] All speakers, including individuals and the media, use money amassed from the economic marketplace to fund their speech. The First Amendment protects the resulting speech, even if it was enabled by economic transactions with persons or entities who disagree with the speaker’s ideas.” Citizens United, 558 U.S. at 351.

So long as this Court is going to continue to read the First Amendment in this fashion in the campaign finance context, it would be squarely inconsistent to uphold Florida’s and Texas’s speech equalization mandates in the social media context.

[1] Br. for Resp’ts in No. 22-277, 18-52 (Nov. 30, 2023); Br. for Pet’rs in No. 22-555, 18-53 (Nov. 30, 2023).

[2] Eugene Volokh, Treating Social Media Platforms Like Common Carriers?, 1 J. Free Speech L. 377, 404-05 (2021).

[3] See Hasen, Cheap Speech, at 126, 220–21 n.85.

[4] See Br. of Amici Curiae Electronic Frontier Foundation and Protect Democracy, 6-9, NetChoice, L.L.C. v. Att’y Gen., No. 21-12355 (11th Cir. Nov. 14, 2021) (discussing various ideologically focused social media platforms and their terms of use that disclose such leanings to users).

[5] Will Oremus et al., A Year Later, Musk’s X Is Tilting Right. And Sinking, Wash. Post (Oct. 27, 2023); Steven Lee Myers, Stuart A. Thompson & Tiffany Hsu, The Consequences of Elon Musk’s Ownership of X, N.Y. Times (Oct. 27, 2023) (“Now rebranded as X, the site has experienced a surge in racist, antisemitic and other hateful speech. Under Mr. Musk’s watch, millions of people have been exposed to misinformation about climate change. Foreign governments and operatives — from Russia to China to Hamas — have spread divisive propaganda with little or no interference.”).

[6] Volokh, supra n. 2, at 382. “Recommendations” includes newsfeeds. Id. at 409.
