Tag Archives: content moderation

“How Samuel Alito got canceled from the Supreme Court social media majority”

CNN:

The hardline approach Supreme Court Justice Samuel Alito takes usually gets him what he wants.

This year it backfired.

Behind the scenes, the conservative justice sought to put a thumb on the scale for states trying to restrict how social media companies filter content. His tactics could have led to a major change in how platforms operate.

CNN has learned, however, that Alito went too far for two justices – Amy Coney Barrett and Ketanji Brown Jackson – who abandoned the precarious 5-4 majority and left Alito on the losing side.

“False rumors about Vance, Musk’s X show misinfo cuts both ways”

Washington Post:

Billionaire Elon Musk pitches X as both a haven for free speech and a superior alternative to the mainstream media for keeping up with news and politics. Under his ownership, the social media platform formerly known as Twitter has pulled back on policing misinformation, relying instead on the wisdom of the crowd to debunk falsehoods.

Musk’s critics say that approach benefits the political right, with which Musk increasingly identifies. . .

This week a pair of falsehoods that originated and gained traction on X mostly among left-leaning users provided a reminder that online misinformation can come from anywhere in the political spectrum — and tested Musk’s commitment to letting users decide the truth for themselves.

“A Volatile Election Is Intensifying Conspiracy Theories Online”

NYT:

….The emergence of Vice President Kamala Harris as the new Democratic front-runner touched off new paroxysms of disinformation and explicitly hateful comments. More than one in 10 posts mentioning her on X on Sunday included racist or sexist attacks, according to PeakMetrics, which tracks activity online. They included false claims about her race and whether she was ineligible to run for the presidency because she was not a citizen. (She is a citizen, and she is eligible to run.)…

Most social media platforms profit when outrage and indignation results in more engagement, and ultimately, more advertising revenue. Companies have little incentive to alter the algorithms that allow toxic content to spread, despite calls from political leaders appealing to society’s better angels.

That dynamic appears all but certain to define this year’s presidential election, as it did in 2016 and 2020.

“FBI should clean up its interactions with online platforms, DOJ watchdog says”

Washington Post:

Weeks after the Supreme Court rejected a conservative-led push to block contacts between the U.S. government and social media companies, a new report from the Justice Department’s inspector general found that intelligence agencies’ communications with the companies have sometimes been undisciplined.

The 53-page report, published Tuesday by Justice Department Inspector General Michael E. Horowitz, affirmed that U.S. law enforcement agencies need to communicate with tech firms about foreign influence operations, such as Russia’s campaign to interfere in the 2016 presidential election. But it warned that officials need to be more systematic and careful about the nature of those communications to ensure they don’t cross the line into government censorship.

SCOTUS: Social Media Removal of Hate Is Not “Discrimination”

This is Orwellian.

Texas and Florida passed state laws that effectively hinder social media platforms from removing hate speech, white supremacist content, election denialism, and similar material. The states are currently defending those laws before the U.S. Supreme Court in the NetChoice cases. As Daphne Keller explains:

Yet now, in their briefs, Texas and Florida are also arguing their laws prohibit discrimination, just as civil rights laws do. On that logic, ‘must-carry’ laws that may compel platforms to carry racist diatribes and hate speech are justified for the same reasons as laws that prohibit businesses from discriminating based on race or gender.

This should be obvious, but Facebook or YouTube deciding to remove a racial slur, Nazi propaganda, or white nationalist attempts to mainstream “replacement theory” is not the same as Woolworth’s deciding to remove African American college students sitting at a lunch counter and attempting to order food.

Daphne has more here.

Election Officials file amicus in Murthy v. Missouri

The current and former officials include Seth Bluestein, Kathy Boockvar, Edgardo Cortes, Lisa Deeley, Mark Earley, Neal Kelley, Trey Grayson, and DeForest B. Soaries; they are represented by the Brennan Center.

The brief is here, and a summary of the argument is below.

. . . Social media platforms rely on communicating with election officials to supply accurate information for the platforms’ voluntary public education efforts, to correct false and misleading content, and to identify threatening content that violates the platforms’ moderation policies. The integrity of American elections depends on those open lines of communication to ensure that platforms provide accurate information to the voting public.

The First Amendment permits private social media companies to decide what content to host on their platforms. In making those decisions, platforms are free to consult with government officials and, if they choose, to take those officials’ suggestions. Such communications by government officials—even emphatic ones—are an exercise of the government’s prerogative to voice its own views and are consistent with the First Amendment as long as the ultimate decision regarding content rests with the platforms themselves. The Fifth Circuit’s expansive state action test incorrectly classifies benign, non-coercive governmental communication as “entanglement” that renders platforms’ content moderation decisions to be attributable to the government itself. This Court should preserve its robust state action requirement and clarify that government officials responsible for protecting the integrity of American elections remain free to communicate with social media platforms, both regarding the platforms’ efforts to curate content and apply their content moderation policies, and to advocate for the government’s view on responsible moderation policies and practices.

Lawyers’ Committee files amicus in Murthy v. Missouri

From the Lawyers’ Committee for Civil Rights Under Law:

Washington, DC – Today, the Lawyers’ Committee for Civil Rights Under Law filed an amicus brief in Murthy v. Missouri in support of social media platforms’ ability to censor harmful disinformation. The brief follows the Lawyers’ Committee’s similar filing last week in Netchoice, LLC v. Paxton and Moody v. NetChoice, LLC, the Supreme Court cases concerning Texas House Bill 20 and Florida Senate Bill 7072, laws that would vitiate the ability of online businesses to remove content spewing hate and disinformation on their platforms.

In the brief, the Lawyers’ Committee argues that social media companies have an obligation to safeguard elections from disinformation, misinformation and other threats, underscoring the importance of collaboration between these entities to develop and enforce policies that protect the integrity of the electoral process.
