Tag Archives: content moderation

SCOTUS, Social Media Removal of Hate is Not “Discrimination”

This is Orwellian.

Texas and Florida passed state laws that effectively prevent social media platforms from removing hate speech, white supremacist content, election denialism, and similar material. The states are now before the U.S. Supreme Court in the NetChoice cases defending those laws. As Daphne Keller explains:

Yet now, in their briefs, Texas and Florida are also arguing their laws prohibit discrimination, just as civil rights laws do. On that logic, ‘must-carry’ laws that may compel platforms to carry racist diatribes and hate speech are justified for the same reasons as laws that prohibit businesses from discriminating based on race or gender.

This should be obvious, but Facebook or YouTube deciding to remove a racial slur, Nazi propaganda, or white nationalist attempts to mainstream “replacement theory” is not the same as Woolworth’s deciding to remove African American college students sitting at a lunch counter and attempting to order food.

Daphne has more here.


Election Officials file amicus in Murthy v. Missouri 

The current and former officials include Seth Bluestein, Kathy Boockvar, Edgardo Cortes, Lisa Deeley, Mark Earley, Neal Kelley, Trey Grayson, and DeForest B. Soaries; they are represented by the Brennan Center.

The brief is here, and a summary of the argument is below. 

. . . Social media platforms rely on communicating with election officials to supply accurate information for the platforms’ voluntary public education efforts, to correct false and misleading content, and to identify threatening content that violates the platforms’ moderation policies. The integrity of American elections depends on those open lines of communication to ensure that platforms provide accurate information to the voting public.

The First Amendment permits private social media companies to decide what content to host on their platforms. In making those decisions, platforms are free to consult with government officials and, if they choose, to take those officials’ suggestions. Such communications by government officials—even emphatic ones—are an exercise of the government’s prerogative to voice its own views and are consistent with the First Amendment as long as the ultimate decision regarding content rests with the platforms themselves. The Fifth Circuit’s expansive state action test incorrectly classifies benign, non-coercive governmental communication as “entanglement” that renders platforms’ content moderation decisions to be attributable to the government itself. This Court should preserve its robust state action requirement and clarify that government officials responsible for protecting the integrity of American elections remain free to communicate with social media platforms, both regarding the platforms’ efforts to curate content and apply their content moderation policies, and to advocate for the government’s view on responsible moderation policies and practices.


Lawyers’ Committee files amicus in Murthy v. Missouri

From the Lawyers’ Committee for Civil Rights Under Law:

Washington, DC – Today, the Lawyers’ Committee for Civil Rights Under Law filed an amicus brief in Murthy v. Missouri in support of social media platforms’ ability to remove harmful disinformation. The brief follows the Lawyers’ Committee’s similar filing last week in NetChoice, LLC v. Paxton and Moody v. NetChoice, LLC, the Supreme Court cases concerning Texas House Bill 20 and Florida Senate Bill 7072, laws that would vitiate the ability of online businesses to remove content spewing hate and disinformation on their platforms.

In the brief, the Lawyers’ Committee argues that social media companies have an obligation to safeguard elections from disinformation, misinformation, and other threats, underscoring the importance of collaboration between platforms and election officials to develop and enforce policies that protect the integrity of the electoral process.
