In early 2022, a young couple from Canada, Lauren Chen and Liam Donovan, registered a new company in Tennessee that went on to create a social media outlet called Tenet Media.
By November 2023, they had assembled a lineup of major conservative social media stars, including Benny Johnson, Tim Pool and Dave Rubin, to post original content on Tenet’s platform. The site then began posting hundreds of videos — trafficking in pointed political commentary as well as conspiracy theories about election fraud, Covid-19, immigrants and Russia’s war with Ukraine — that were then promoted across the spectrum of social media, from YouTube to TikTok, X, Facebook, Instagram and Rumble.
It was all, federal prosecutors now say, a covert Russian influence operation. On Wednesday, the Justice Department accused two Russians of helping orchestrate $10 million in payments to Tenet in a scheme to use those stars to spread Kremlin-friendly messages.
The disclosures reflect the growing sophistication of the Kremlin’s longstanding efforts to shape American public opinion and advance Russia’s geopolitical goals, which include, according to American intelligence assessments, the election of former President Donald J. Trump in November.
In 2016 and 2020, Russia employed armies of internet trolls, fake accounts and bot farms to try to reach American audiences, with debatable success. The operation that prosecutors described this week shows a pivot to exploiting already established social media influencers, who, in this case, generated as many as 16 million views on Tenet’s YouTube channel alone.
Most viewers were presumably unaware, as the influencers themselves said they were, that Russia was paying for it all.
“Influencers already have a level of trust with their audience,” said Jo Lukito, a professor at the University of Texas at Austin’s journalism school who studies Russian disinformation. “So, if a piece of information can come through the mouth of an existing influencer, it comes across as more authentic.”
The indictment — which landed like a bombshell in the country’s conservative media ecosystem — also underscored the growing ideological convergence between President Vladimir V. Putin’s Russia and a significant portion of the Republican Party since Mr. Trump’s rise to political power….
Martin J. Riedl, a journalism professor at the University of Tennessee, Knoxville, who studies the spread of misinformation on social media, said the case of Tenet spotlighted gaping regulatory holes when it came to the American political system.
While the Federal Election Commission has strict disclosure rules for television and radio advertisements, it has no such restrictions for paid social media influencers.
The result is an enormous loophole — one that the Russians appeared to exploit.
“Influencers have been around for a while,” Mr. Riedl said, “but there are few rules around their communication, and political speech is not regulated at all.”
“DOJ alleges Russia funded US media company linked to right-wing social media stars”
The unnamed Tennessee-based company that the Justice Department alleges was funded by Russian operatives, as part of a Kremlin-orchestrated influence operation targeting the 2024 US election, is Tenet Media, according to a US official briefed on the matter. The company is linked to right-wing commentators with millions of subscribers on YouTube and other social media platforms.
The indictment unsealed in New York’s Southern District accused two employees of RT, the Kremlin’s media arm, of funneling nearly $10 million to an unidentified company, described only as “Company 1” in court documents.
CNN has independently confirmed that “Company 1” is Tenet Media, which is a platform for independent content creators. It is self-described as a “network of heterodox commentators that focus on Western political and cultural issues,” according to its website, which matches language contained in the newly unsealed indictment.
The goal of the operation, according to prosecutors, was to fuel pro-Russian narratives, in part by pushing content and news articles favoring Republican presidential nominee Donald Trump and others whom the Kremlin deemed friendlier to its interests.
The indictment also says that Company 1’s website identifies six commentators.
Among the commentators listed on Tenet Media’s website are right-wing personalities Benny Johnson and Tim Pool. Both have millions of subscribers on YouTube and other social media platforms. Pool interviewed Trump on his podcast in May.
In separate statements released Wednesday, Johnson and Pool said they were victims of the alleged scheme and said they maintained editorial control of the content they had created.
Announcement: UCLA Law’s Safeguarding Democracy Project Fall Calendar of Events
As we prepare for another fall semester, we’re excited to bring you a robust series of events on the 2024 Elections, Election Law, and the risks facing democracy in the U.S.
This semester, we present a mix of live, online, and hybrid events. Please see below or click the link for details. We hope you can join us!
Sept. 12: From Here to There: How States Can and Should Certify the Results of the 2024 Elections (Webinar)
Thursday, September 12, 12:15pm-1:15pm PT, Webinar (Recording to Follow). Webinar Registration. Ben Berwick, Head of Election Law & Litigation Team & Counsel (Protect Democracy); Lauren Miller Karalunas (Brennan Center for Justice); and Michael Morley (Florida State University College of Law). Moderated by Rick Hasen.
Sept. 17: Democracy and Risks to the 2024 Elections (in person at UCLA Hammer Museum)
Tuesday, September 17, 7:30pm PT at the UCLA Hammer Museum (Recording to Follow). Co-presented with the Hammer Forum and the David J. Epstein Program in Public Interest Law & Policy, UCLA Law. Can the United States conduct a free and fair election in November in which the public will have confidence? Are concerns about foreign interference, deep fakes, and disinformation serious or overblown? Is participation equally open to minority voters? What are the risks to U.S. democracy if significant portions of the public do not accept the election results as legitimate? Moderated by Rick Hasen, UCLA Law. Panelists: Leah Aden, NAACP Legal Defense and Education Fund; John Fortier, American Enterprise Institute; Justin Levitt, Loyola Law School, Los Angeles; Yoel Roth, The Match Group. More information here.
Sept. 24: One Person, One Vote? (in person at UCLA Hammer Museum)
Tuesday, September 24, 7:30pm PT at the UCLA Hammer Museum, live in person only. Co-presented with the Hammer Forum. Documentary film screening. At a time when many Americans question democratic institutions, One Person, One Vote? unveils the complexities of the Electoral College, the uniquely American and often misunderstood mechanism for electing a president. The documentary follows four presidential electors representing different parties in Colorado during the intense 2020 election. 2024. dir. Maximina Juson. Color. 78 minutes. More information here.
Oct. 8: The United States Electoral College and Fair Elections (in person at UCLA Hammer Museum)
Tuesday, October 8, 7:30pm PT at the UCLA Hammer Museum (Recording to Follow). Co-presented with the Hammer Forum. Why does the United States use the Electoral College for choosing the President? Is the Electoral College a fair way to choose a President? What specific risks does the method for choosing electors pose for free and fair elections? How likely is the United States to adopt a national popular vote instead of the Electoral College? Moderated by Rick Hasen, UCLA Law. Panelists: Joey Fishkin, UCLA Law; Amanda Hollis-Brusky, Pomona College; Derek Muller, University of Notre Dame. More information here.
Oct. 9: Finding Common Ground in Election Law (in person and online)
Wednesday, October 9, 12:15pm-1:15pm PT (Recording to Follow). Lunch will be provided. In person at UCLA Law School Room 1430 and online. In person registration. Webinar Registration. Co-sponsored by the Office of the Dean, UCLA Law. Lisa Manheim (University of Washington School of Law), Derek T. Muller (Notre Dame Law School), and Richard L. Hasen (Director, Safeguarding Democracy Project, moderator).
Oct. 15: Are We Ready for a Fair and Legitimate Election? (in person at UCLA Hammer Museum)
Tuesday, October 15, 7:30pm PT at the UCLA Hammer Museum (Recording to Follow). Co-presented with the Hammer Forum. Are election administrators up to the task of holding elections and fairly counting votes when they are subject to unprecedented public scrutiny and face possible harassment? Will delays in reporting vote totals undermine the public’s confidence in election results, regardless of how well the election is administered? What are the risks to acceptance of election results and peaceful transitions of power between election day and January 6, 2025, when Congress counts the states’ Electoral College votes? Moderated by Rick Hasen, UCLA Law. Panelists: Larry Diamond, Stanford University; Ben Ginsberg, Stanford University; Franita Tolson, USC Law. More information here.
Oct. 21: A.I., Social Media, the Information Environment and the 2024 Elections (Webinar)
Monday, October 21, 12:15pm-1:15pm PT, Webinar (Recording to Follow). Co-sponsored by the Institute for Technology, Law & Policy, UCLA Law. Danielle Citron (University of Virginia Law School), Brendan Nyhan (Dartmouth), Nate Persily (Stanford Law School). Moderated by Rick Hasen. Webinar Registration.
“Social platform X edits AI chatbot after election officials warn that it spreads misinformation”
The social media platform X has made a change to its AI chatbot after five secretaries of state warned it was spreading election misinformation.
Top election officials from Michigan, Minnesota, New Mexico, Pennsylvania and Washington sent a letter this month to Elon Musk complaining that the platform’s AI chatbot, Grok, produced false information about state ballot deadlines shortly after President Joe Biden dropped out of the 2024 presidential race.
The secretaries of state requested that the chatbot instead direct users who ask election-related questions to CanIVote.org, a voting information website run by the National Association of Secretaries of State.
Before listing responses to election-related questions, the chatbot now says, “For accurate and up-to-date information about the 2024 U.S. Elections, please visit Vote.gov.”…
Zuckerberg on 2020 Election Administration Spending, Hunter Biden Laptop
Meta Platforms Chief Executive Mark Zuckerberg said it was improper for the Biden administration to have pressured Facebook to censor content in 2021 related to the coronavirus pandemic, vowing that the social-media giant would reject any such future efforts.
Zuckerberg also said he didn’t plan to repeat efforts to fund nonprofits to assist in state election efforts, a Covid-era push that had drawn Republican criticism and prompted many Republican-leaning states to ban the practice.
In a letter to House Judiciary Committee Chairman Jim Jordan (R., Ohio) that touched on a series of controversies, Zuckerberg wrote that senior Biden administration officials, including from the White House, had “repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire, and expressed a lot of frustration with our teams when we didn’t agree.”…
Zuckerberg also made clear he didn’t plan to repeat heavy spending on election access. The billionaire Facebook founder and his wife, Priscilla Chan, donated more than $400 million to nonprofits to help conduct elections during the 2020 coronavirus pandemic.
While many localities said the money was a lifeline helping them register voters, set up socially-distanced voting booths and provide equipment to sort mail-in ballots, among other uses, Republicans said that the money, which they dubbed “Zuckerbucks,” unfairly benefited Democratic areas. More than two dozen mostly Republican-leaning states have now banned, limited or regulated the use of private funds to manage elections, according to the National Conference of State Legislatures.
“Despite the analysis I’ve seen showing otherwise, I know that some people believe this work benefited one party over the other,” Zuckerberg wrote. “My goal is to be neutral and not play a role one way or another—or to even appear to be playing a role. So I don’t plan on making a similar contribution this cycle.”….
In his letter to Jordan, Zuckerberg said that Meta “shouldn’t have demoted” a New York Post story about President Biden’s son Hunter Biden ahead of the 2020 election. The Post said at the time that its reporting was based on email exchanges between the two Bidens that were provided by allies of President Donald Trump, who in turn said they received them from a computer-repair person who found them on a laptop. At the time, dozens of former intelligence officials signed a letter that then-candidate Biden cited in a presidential debate saying that the release of the emails had “all the classic earmarks of a Russian information operation.”
“It’s since been made clear that the reporting was not Russian disinformation, and in retrospect, we shouldn’t have demoted the story,” Zuckerberg wrote.
“Elon Musk’s Hard Turn to Politics, in 300,000 of His Own Words”
When Elon Musk endorsed former President Donald Trump’s campaign in July, X was his megaphone to reach his almost 200 million followers. The endorsement not only made Musk one of Trump’s most influential supporters, but also represented a remarkable shift in his eagerness to weigh in on political debates compared with just a few years ago.
Musk posted about 13,000 times this year through the end of July—almost as much as in all of 2023. That’s about 61 posts a day, compared with nine in 2019.
If you were to read all his exchanges on X from the past 5½ years—including the posts he replied to—that would add up to about 1.5 million words. That’s roughly twice as long as the King James Bible. The words in Musk’s posts alone added up to more than 300,000—not counting emojis.
Musk and his representatives didn’t respond to questions from The Wall Street Journal about his posting patterns on X, formerly called Twitter.
To understand the political evolution of one of the world’s richest men, the Journal captured nearly 42,000 of Musk’s exchanges on X between 2019 and the end of July. (That’s nearly all his conversations during that period, with a small number of exceptions, such as posts he deleted. Read here for more on methodology.)
Musk’s exchanges included roughly 76,000 posts—his tweets as well as his retweets, tweets to which he replied and any quoted tweets. The Journal mapped them using the same technology that powers artificial intelligence tools like ChatGPT….
My Forthcoming Yale Law Journal Feature: “The Stagnation, Retrogression, and Potential Pro-Voter Transformation of U.S. Election Law”
I have written this draft, forthcoming this spring in Volume 134 of the Yale Law Journal. I consider it my most important law review article (or at least the most important that I’ve written in some time). It offers a 30,000-foot view of the state of election law doctrine, politics, and theory. The piece is still in progress, so comments are welcome. Here is the abstract:
American election law is in something of a funk. This Feature explains why, what it means, and how to move forward.
Part I of this Feature describes election law’s stagnation. After a few decades of protecting voting rights, courts (and especially the Supreme Court), acting along ideological—and now partisan—lines, have pulled back on voter protections in most areas of election law and deprived other actors, including Congress, election administrators, and state courts, of the ability to more fully protect voters’ rights. Politically, pro-voter election reform has stalled out in a polarized and gridlocked Congress, and the voting wars in the states mean that ease of access to the ballot depends in part on where in the United States one lives. Election law scholarship too has stagnated, failing to generate meaningful theoretical advances about the key purposes of election law.
Part II considers the retrogression of election law doctrine, politics, and theory to a focus on the very basics of democracy: the requirement of fair vote counts, peaceful transitions of power, and voter access to reliable information. Courts on a bipartisan basis in the aftermath of the 2020 election rejected illegitimate attempts to overturn Joe Biden’s presidential election victory. Yet the courts’ ability to thwart attempted election subversion remains a question mark in light of the Supreme Court’s recent decisions in Trump v. Anderson and Trump v. United States. Politically, Congress came together at the end of 2022 to pass the Electoral Count Reform Act to deter future attempts to manipulate electoral college rules in order to subvert election results, but future bipartisan action to prevent retrogression seems less likely. Further, because of the collapse of local journalism and the rise of cheap speech, voters face a decreased ability to obtain reliable information to make voting decisions consistent with their interests and preferences. Meanwhile, parties have become potential paths for subversion. Party-centered election law theory and the First Amendment “marketplace of ideas” theory have not yet incorporated these emerging challenges.
Part III considers the potential to transform election law doctrine, politics, and theory in a pro-voter direction despite high current levels of polarization, the misperceived partisan consequences of pro-voter election reforms, and new, serious technological and political challenges to democratic governance. Election law alone is not up to the task of saving American democracy. But it can help counter stagnation and thwart retrogression. The first order of business must be to assure continued free and fair elections and peaceful transitions of power. But the new election law must be more ambitiously and unambiguously pro-voter. The pro-voter approach to election law is one grounded in political equality and based on four principles: all eligible voters should have the ability to easily register and vote in a fair election with the capacity for reasoned decisionmaking; each voter’s vote is entitled to equal weight; the winners of fair elections are recognized and able to take office peacefully; and political power is fairly distributed across groups in society, with particular protection for those groups who have faced historical discrimination in voting and representation.
In New Supreme Court Social Media Case, Echoes of Citizens United on “AntiDistortion” and the Foreign Campaign Spending Ban, with Implications for Shutting Down Tik-Tok
I want to pick up a point first flagged by Eugene Volokh from yesterday’s decision in Moody v. NetChoice that could have relevance to new legislation, currently being challenged in court, that could ban Tik-Tok as being foreign owned. Some lines in Justice Barrett’s concurrence make it more likely the Court would uphold a Tik-Tok ban, if the issue makes it to the Supreme Court.
I need to give a bit of wonky background to set the stage (and I’m writing about this more extensively in a larger piece that will post in a few weeks).
The Supreme Court has long rejected the idea in the campaign finance context that one could limit the speech of some to enhance the relative voice of others. The Court made such a statement first in the 1976 case, Buckley v. Valeo, and it played a major role in the Supreme Court’s 2010 Citizens United case. The idea of equalizing campaign spending to prevent distortion of the political marketplace became known as the “antidistortion” rationale, and it figured heavily in the 1990 Austin v. Michigan Chamber of Commerce case upholding a requirement that corporations use PACs for their political spending and not their general treasury funds. Citizens United emphatically rejected this antidistortion rationale, overturning Austin. It held corporations have a First Amendment right to spend unlimited sums independently to support or oppose candidates for office.
In part of his dissent in Citizens United, Justice Stevens raised the issue of spending by foreign individuals, governments, and entities. Federal law bars such spending, but Stevens suggested Citizens United raised the question whether the ban on spending by foreign corporations violated the First Amendment too. Justice Kennedy’s majority opinion in Citizens United explicitly said it was not reaching the issue.
Just a year later, a three-judge court, in an opinion in Bluman v. FEC by then-judge Brett Kavanaugh, upheld the foreign spending ban, saying it was justified by the government’s compelling interest in “democratic self-government.” The Supreme Court summarily affirmed, without any opinion and with no dissents. I’ve long criticized the Supreme Court for not explaining why a ban on corporate spending is forbidden but the foreign spending ban is just fine.
Fast forward to yesterday’s Moody decision. There was this particularly notable line from the majority opinion of Justice Kagan (who, as solicitor general, argued Citizens United on behalf of the government but did not endorse the antidistortion rationale), citing Buckley and affirming the rejection of the antidistortion rationale:
But a State may not interfere with private actors’ speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. That is, indeed, a fundamental aim of the First Amendment. But the way the First Amendment achieves that goal is by preventing the government from “tilt[ing] public debate in a preferred direction.” Sorrell v. IMS Health Inc., 564 U.S. 552, 578–579, 131 S.Ct. 2653, 180 L.Ed.2d 544 (2011). It is not by licensing the government to stop private actors from speaking as they wish and preferring some views over others. And that is so even when those actors possess “enviable vehicle[s]” for expression. Hurley, 515 U.S. at 577, 115 S.Ct. 2338. In a better world, there would be fewer inequities in speech opportunities; and the government can take many steps to bring that world closer. But it cannot prohibit speech to improve or better balance the speech market. On the spectrum of dangers to free expression, there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana. That is why we have said in so many contexts that the government may not “restrict the speech of some elements of our society in order to enhance the relative voice of others.” Buckley v. Valeo, 424 U.S. 1, 48–49, 96 S.Ct. 612, 46 L.Ed.2d 659 (1976) (per curiam). That unadorned interest is not “unrelated to the suppression of free expression,” and the government may not pursue it consistent with the First Amendment.
Justice Amy Coney Barrett joined the majority opinion that opined on how a state ban on content moderation applied to social media platforms likely violated the First Amendment. But she added some caveats and issues for future cases, including this observation, citing Citizens United:
There can be other complexities too. For example, the corporate structure and ownership of some platforms may be relevant to the constitutional analysis. A speaker’s right to “decide ‘what not to say’ ” is “enjoyed by business corporations generally.” Hurley, 515 U.S. at 573–574, 115 S.Ct. 2338 (quoting Pacific Gas & Elec. Co. v. Public Util. Comm’n of Cal., 475 U.S. 1, 16, 106 S.Ct. 903, 89 L.Ed.2d 1 (1986)). Corporations, which are composed of human beings with First Amendment rights, possess First Amendment rights themselves. See Citizens United v. Federal Election Comm’n, 558 U.S. 310, 365, 130 S.Ct. 876, 175 L.Ed.2d 753 (2010); cf. Burwell v. Hobby Lobby Stores, Inc., 573 U.S. 682, 706–707, 134 S.Ct. 2751, 189 L.Ed.2d 675 (2014). But foreign persons and corporations located abroad do not. Agency for Int’l Development v. Alliance for Open Society Int’l, Inc., 591 U.S. 430, 433–436, 140 S.Ct. 2082, 207 L.Ed.2d 654 (2020). So a social-media platform’s foreign ownership and control over its content-moderation decisions might affect whether laws overriding those decisions trigger First Amendment scrutiny. What if the platform’s corporate leadership abroad makes the policy decisions about the viewpoints and content the platform will disseminate? Would it matter that the corporation employs Americans to develop and implement content-moderation algorithms if they do so at the direction of foreign executives? Courts may need to confront such questions when applying the First Amendment to certain platforms.
So we see here the same parallel move as in Citizens United: reject the antidistortion rationale as applied to domestic corporations, but note that the rules might be different for foreign corporations, and that limits on certain foreign corporations may not violate the First Amendment as they would for domestic ones.
Surely this will play a role in the Tik-Tok litigation.
My New Slate Piece on Today’s NetChoice Social Media Cases: “The First Amendment Just Dodged an Enormous Bullet at the Supreme Court”
I have written this piece for Slate. It begins:
At Supreme Court oral argument in the Texas social media case back in February, Justice Samuel Alito asked the question: “Let’s say YouTube were a newspaper, how much would it weigh?” In Monday’s Supreme Court opinion in Moody v. NetChoice, a five-justice majority, over Alito’s objection, did not directly answer that absurd question, but it did say that under the First Amendment, Facebook should get about the same amount of editorial discretion as the Miami Herald. And that’s some good news from an otherwise bleak end of the Supreme Court term.…

That’s where the agreement among the justices ended. Speaking for herself, Chief Justice John Roberts, and Justices Amy Coney Barrett, Brett Kavanaugh, and Sonia Sotomayor, Kagan gave guidance on where the 5th Circuit went wrong in its First Amendment analysis in considering the constitutionality of the Texas law’s content moderation provisions. None of this was necessary for the decision (in legal parlance, it was “dicta”), but the court addressed the issue because “[i]f we said nothing about those views, the court presumably would repeat them when it next considers NetChoice’s challenge.” The other justices would not have reached the First Amendment merits, although Alito expressed some serious reservations about the analysis.
Kagan’s guidance relied heavily on a 1974 case, Miami Herald v. Tornillo, in which the court held unconstitutional a Florida law that required newspapers to print the reply of someone who had been criticized in the newspaper. The court held that private actors like newspapers have every right under the First Amendment to include or exclude content as they see fit.
To Kagan, social media companies moderating content were just like newspapers. She said that curating content is expressive activity protected by the First Amendment, that this includes the decision to exclude content, and that the principle holds even if most content is allowed and just a little bit is excluded. Further, when it comes to laws regulating speech, “the government cannot get its way just by asserting an interest in improving, or better balancing, the marketplace of ideas.” Were the rule otherwise, Kagan asserted, the platforms could be forced by Texas law to carry bad content including posts that “support Nazi ideology; advocate for terrorism; espouse racism, Islamophobia, or anti-Semitism; glorify rape or other gender-based violence; encourage teenage suicide and self-injury; discourage the use of vaccines; advise phony treatments for diseases; [and] advance false claims of election fraud.”
Moody might seem like an unremarkable decision, consistent with long-standing First Amendment principles. And indeed, in an amicus brief in the cases that I filed with political scientist Brendan Nyhan and journalism professor Amy Wilentz and co-authored with Nat Bach and his team at Manatt Phelps, we argued that Tornillo is the right analogy.
But in endorsing this view of the First Amendment, the majority brushed aside a major argument made by Justice Clarence Thomas in earlier cases and by First Amendment scholar Eugene Volokh that social media companies should be treated differently because they function like “common carriers,” such as the phone company. Just as Verizon cannot deny you a phone line because of what you might say using it, the argument goes, Facebook had to be open to everyone’s views.
The court gives the argument the back of its hand, never even addressing it directly; Alito says the majority “brushes aside the argument without adequate consideration.” Thomas says the argument should still be pursued in the lower courts, but it’s squarely inconsistent with what the Kagan majority says in its dicta. Volokh too sees many unanswered questions and thinks there is still a chance for some parts of these laws to be upheld when the cases get back to the lower court….
“The Government Needs to Act Fast to Protect the Election”
Gowri Ramachandran and Lawrence Norden post-Murthy in The Atlantic:
With Murthy now dismissed and limited time before November 5, the federal government can and should immediately resume its regular briefings with social-media companies about foreign interference in our elections. Although there are encouraging signs that the federal government is slowly resuming these efforts, they appear limited compared with what was done in prior elections. The government should also, as it has in the past, help connect state and local election officials with appropriate contacts at social-media companies. That way local officials and social-media companies can keep each other apprised of any changes in disinformation they are seeing regarding how, when, and where to vote. And the federal government should drastically increase efforts to inform the American public about foreign adversaries’ operations intended to decrease confidence in elections. The government must also make clear that threatening election officials—and their families and children—will not be tolerated.
Supreme Court on 6-3 Vote Rejects Social Media Government “Jawboning” Claim on Standing Grounds, But Strongly Suggests Claims of Jawboning were False
You can find the majority opinion in Murthy v. Missouri of Justice Barrett, along with the dissent of Justice Alito (joined by Justices Gorsuch and Thomas) at this link.
The claim was that government agencies pressured or coerced social media platforms, including Facebook and Twitter, to remove content (related to the election, Covid, etc.). This is what the term “jawboning” refers to.
The Court did not opine on what would have to be proven in a jawboning case involving social media companies, because it held that none of the plaintiffs had standing: they did not show enough of a connection between the government‘s actions and plaintiffs’ injuries. As the majority opinion states: “the platforms moderated similar content long before any of the Government defendants engaged in the challenged conduct. In fact, the platforms, acting independently, had strengthened their pre-existing content moderation policies before the Government defendants got involved.”
Given that the majority said it would not reach the merits of the jawboning question, its inclusion of footnote 4, casting aspersions on the ridiculous factfinding of the district court, was a notable slam. This is arguably the most important part of the opinion:
The Fifth Circuit relied on the District Court’s factual findings, many of which unfortunately appear to be clearly erroneous. The District Court found that the defendants and the platforms had an “efficient report-and-censor relationship.” Missouri v. Biden, 680 F. Supp. 3d 630, 715 (WD La. 2023). But much of its evidence is inapposite. For instance, the court says that Twitter set up a “streamlined process for censorship requests” after the White House “bombarded” it with such requests. Ibid., n. 662 (internal quotation marks omitted). The record it cites says nothing about “censorship requests.” See App. 639–642. Rather, in response to a White House official asking Twitter to remove an impersonation account of President Biden’s granddaughter, Twitter told the official about a portal that he could use to flag similar issues. Ibid. This has nothing to do with COVID–19 misinformation. The court also found that “[a] drastic increase in censorship . . . directly coincided with Defendants’ public calls for censorship and private demands for censorship.” 680 F. Supp. 3d, at 715. As to the “calls for censorship,” the court’s proof included statements from Members of Congress, who are not parties to this suit. Ibid., and n. 658. Some of the evidence of the “increase in censorship” reveals that Facebook worked with the CDC to update its list of removable false claims, but these examples do not suggest that the agency “demand[ed]” that it do so. Ibid. Finally, the court, echoing the plaintiffs’ proposed statement of facts, erroneously stated that Facebook agreed to censor content that did not violate its policies. Id., at 714, n. 655. Instead, on several occasions, Facebook explained that certain content did not qualify for removal under its policies but did qualify for other forms of moderation.
Justice Alito, in contrast, found enough evidence of jawboning to find standing (and then a likely violation of the law by the government). He relied in part on a report from Jim Jordan’s “weaponization of government” committee in the House, a source that is itself quite unreliable.
“It Depends Who’s Doing the Jawboning”
I’ve got a new post up at Lawfare about a crucial piece missing from the discussion around Murthy v. Missouri, the SCOTUS case about jawboning the social media platforms. Plenty of the Justices have welcome real-world executive experience that came through in last Monday’s argument — but they didn’t recognize that their experiences were also different in ways that should matter. The governing philosophy and structure of different Administrations are distinct, and that context is really important in assessing the potential for coercion.
Or, if you prefer:
Happy Administrations are all alike; unhappy Administrations are each unhappy with social media platforms in their own way.
“The Shortlist: Seven Ways Platforms Can Prepare for the U.S. 2024 Election”
New from Protect Democracy.
Florida’s Lawyer Having Hard Time at Beginning of Oral Argument in Social Media Cases, Suggesting Law Could Well Be Struck Down [Corrected]
Arguments are just beginning, and at some point I’ll have to leave for class.
At this early point, it appears that Roberts, Sotomayor, Kavanaugh, and Kagan have all expressed great skepticism of these rules.
Justice Jackson pointed to some of the things that Facebook does that qualify as speech and some that don’t. The questions suggest that at least some of the things that Facebook does are protected speech.
Kavanaugh asked if the antidistortion language in Buckley (saying government cannot equalize speech) and the precedent of Tornillo as to newspapers’ editorial discretion seem to doom this case.
Justice Thomas suggested that this should not have been a facial challenge, which would be a way to duck deciding the merits in this case. So far, no other takers among the justices.
Justice Gorsuch asked whether Section 230 preemption could dispose of parts of this case.
Justice Kagan made the same point I did in my recent Slate piece and in our brief about how when Musk took over, it changed the nature of the site. This shows content moderation is expressive:
It should be no surprise that after Elon Musk took over Twitter and changed its moderation policies to make the platform’s content less trustworthy and more incendiary, users and advertisers reevaluated the platform’s strengths and weaknesses, with many choosing to leave. Content moderation policies shape how the public perceives a platform’s messages. Content moderation decisions—including Musk’s, whether wise or not—are the exercise of editorial discretion. The public then decides which platforms to patronize, value, or devalue.
Justice Barrett, who had been quiet, suggested that platforms exercise editorial control like newspapers. More bad news for the Florida law.
[This post has been updated and corrected. It originally referenced Texas law.]