Former U.S. President Donald Trump on Wednesday filed lawsuits against Twitter Inc (TWTR.N), Facebook Inc (FB.O), and Alphabet Inc’s Google (GOOGL.O), as well as their chief executives, alleging they unlawfully silence conservative viewpoints.
The lawsuits, filed in U.S. District Court in Miami, allege the California-based social media platforms violated the right to freedom of speech guaranteed by the First Amendment of the U.S. Constitution…
The lawsuits ask a judge to invalidate Section 230 of the Communications Decency Act, a law that has been called the backbone of the internet because it provides websites with protections from liability over content posted by users. Trump and others who have attacked Section 230 say it has given big internet companies too much legal protection and allowed them to escape responsibility for their actions.
A federal judge on Wednesday blocked a Florida law that would penalize social media companies for blocking a politician’s posts, a blow to conservatives’ efforts to respond to Facebook and other websites’ suspension of former president Donald Trump.
The law was due to go into effect Thursday, but in issuing a preliminary injunction, U.S. District Judge Robert Hinkle of the Northern District of Florida suggested that the law would be found unconstitutional.
“The plaintiffs are likely to prevail on the merits of their claim that these statutes violate the First Amendment,” Hinkle wrote. “There is nothing that could be severed and survive.”
The law laid out fines for tech companies that suspended political candidates in the run-up to an election.
Florida legislators approved the law after Facebook, Twitter and YouTube suspended Trump’s accounts for violating their policies following the Jan. 6 attack on the U.S. Capitol. Florida Gov. Ron DeSantis (R), a potential 2024 presidential candidate and key Trump ally, touted the law as a stand against alleged censorship of conservatives when he signed it in May….
The judge wrote a blistering criticism of the Florida law, saying that it “compels providers to host speech that violates their standards.”
“Like prior First Amendment restrictions, this is an instance of burning the house to roast a pig,” he wrote.
He also said that remarks from the governor and other lawmakers made clear that the law was “viewpoint-based,” adding that there was “substantial factual support” showing the law was motivated by hostility toward the perceived liberal bias of large tech firms.
Hinkle also referred to the law as “riddled with imprecision and ambiguity” and said it “does not survive strict scrutiny.”
You can read the opinion here.
The New York Times examined Mr. Trump’s nearly 1,600 social media posts from Sept. 1 to Jan. 8, the day Mr. Trump was banned from the platforms. We then tracked the social media engagement with the dozens of written statements he made on his personal website, campaign fund-raising site and in email blasts from Jan. 9 until May 5, which was the day that the Facebook Oversight Board, which reviews some content decisions by the company, said that the company acted appropriately in kicking him off the service.
Before the ban, the social media post with the median engagement generated 272,000 likes and shares. After the ban, that dropped to 36,000 likes and shares. Yet 11 of his 89 statements after the ban attracted as many likes or shares as the median post before the ban, if not more.
How does that happen?
Mr. Trump had long been his own best promoter on social media. The vast majority of people on Twitter and Facebook interacted directly with Mr. Trump’s posts, either liking or sharing them, The Times analysis found.
But after the ban, other popular social media accounts often picked up his messages and posted them themselves. (Last week, Mr. Trump shut down his blog, one of the places he made statements.)…
One topic from Mr. Trump that has not spread far: claims of widespread election fraud.
The Times analysis looked at the 10 most popular posts with election misinformation — judged by likes and shares — from Mr. Trump before the social media bans, and compared them with his 10 most popular written statements containing election misinformation after the ban. All the posts included falsehoods about the election — that the process had been “rigged,” for instance, or that there had been extensive voter fraud.
Before the ban, Mr. Trump’s posts garnered 22.1 million likes and shares; after the ban, his posts earned 1.3 million likes and shares across Twitter and Facebook.
Disinformation researchers say the difference points to the enormous power the social media companies have in curbing political misinformation, if they choose to wield it. Facebook and Twitter have curbed the spread of false statements about the November election, though Twitter has loosened its enforcement since March to dedicate more resources to fact-checking in other parts of the world.
Facebook’s Investigation of its Role Leading Up to Jan. 6
One of the most consequential recommendations the FOB made was for Facebook to “undertake a comprehensive review of its potential contribution to the narrative of electoral fraud and the exacerbated tensions that culminated in the violence in the United States on January 6, 2021.” BuzzFeed has reported that even before the FOB’s decision, Facebook had created such a report internally, but the FOB’s recommendation was specifically for an “open reflection.” Interestingly, the FOB’s recommendation became a focal point for public pressure: for example, Bob Bauer, who advised Biden’s presidential campaign and served as White House Counsel during the Obama administration, called on Facebook CEO Mark Zuckerberg to make “an unequivocal commitment to the complete and public review suggested by the Oversight Board.”
Unfortunately, neither Bauer’s plea nor the FOB’s recommendation worked to any substantial extent. Facebook did not commit itself to any further public reflection on the role it played in the election fraud narrative that sparked violence in the United States on January 6, 2021. It pointed to its existing partnerships with independent researchers and did expand the amount of data it will provide them. Facebook also highlighted its previous enforcement actions against groups like QAnon. But it said “the responsibility for January 6, 2021, lies with the insurrectionists and those who encouraged them,” and its only further commitment is to “continue to cooperate with law enforcement and US government investigations related to the events on January 6.” This is extremely disappointing. Of course the blame for Jan. 6 does not lie entirely, or perhaps even primarily, with Facebook. Other institutions also desperately need to hold themselves accountable. But the dramatic failure of other institutions does not mean that Facebook should not have seized this opportunity to do better and to add to the public record about what enabled the insurrection to happen.
Facebook keeps touting its labels as a proactive response to misinformation spread on the platform, even though internal and external data shows the labels are ineffective and the platform’s application of them is inconsistent at best. In fact, Media Matters found that the average number of interactions per post on former President Donald Trump’s labeled posts is more than double that of his posts overall, and posts containing his misinformation are still spreading on the platform even though he is suspended from it for now.
In our latest study, Media Matters analyzed former President Trump’s 6,081 posts that he created between January 1, 2020, and January 6, 2021. Key findings include:
- Facebook labeled at least 506 Trump posts between January 1, 2020, and January 6, 2021. These posts earned over 205.8 million interactions, or an average of roughly 407,000 interactions per post. Comparatively, all of Trump’s posts during this time earned over 927 million interactions, or an average of roughly 152,000 interactions per post.
- Facebook labeled 147 of Trump’s 868 posts that cited right-wing media outlets. These 147 posts earned over 42 million interactions, or an average of roughly 291,000 interactions per post.
- Notably, 127 — or over 86% — of Trump’s labeled posts citing right-wing media were related to election integrity, five specifically mentioned “Stop the Steal,” and two were related to COVID-19. These posts earned more average interactions per post than Trump’s posts overall and Trump’s posts citing right-wing media.
- Even as Trump is suspended from Facebook, the platform is failing to consistently label his election misinformation. Facebook labeled at least two posts that promoted Trump’s May 13 statement, originally posted to his blog, as “false,” but dozens of other posts with images or text from the statement remain on Facebook.
A nonprofit advocacy group with close ties to President Joe Biden on Wednesday joined calls for Facebook to review whether its actions contributed to the spread of unfounded election fraud claims leading up to the Jan. 6 siege on the Capitol.
Building Back Together, an outside coalition formed by top Biden allies and campaign advisers, urged Facebook in a letter reviewed by POLITICO to commit to an internal probe of the matter, something the company’s oversight board recommended last month.
Requirements vs. suggestions: The panel, which recently upheld Facebook’s decision to suspend former President Donald Trump, also called on the company to carry out “a comprehensive review of Facebook’s potential contribution to the narrative of electoral fraud and the exacerbated tensions that culminated in the violence in the United States on January 6.”
While the ruling on Trump’s suspension is binding, the board’s recommendations for changes to Facebook’s policies and for follow-up actions, such as the review, are not. Facebook is required to respond to the suggestions by Friday, though, and Biden’s allies are pressuring the tech giant to make good on the guidance ahead of the deadline.
A sprawling online network tied to Chinese businessman Guo Wengui has become a potent platform for disinformation in the United States, attacking the safety of coronavirus vaccines, promoting false election-fraud claims and spreading baseless QAnon conspiracies, according to research published Monday by the network analysis company Graphika.
The report, provided in advance to The Washington Post, details a network that Graphika says amplifies the views of Guo, a Chinese real estate developer whose association with former Trump White House adviser Stephen K. Bannon became a focus of news coverage last year after Bannon was arrested aboard Guo’s yacht on federal fraud charges.
Graphika said the network includes media websites such as GTV, for which Guo last year publicly said he was raising funds, along with thousands of social media accounts that Graphika said amplify content in a coordinated fashion. The network also includes more than a dozen local-action groups over which Guo has publicly claimed an oversight role, Graphika found.
President Joe Biden’s nominee to head the Justice Department’s civil rights division, already facing a tough confirmation fight, has received a civil investigative demand from Indiana’s Republican attorney general as part of his probe into Big Tech content moderation practices.
Kristen Clarke, whose nomination deadlocked in the Senate Judiciary Committee Thursday, received the request from Indiana Attorney General Todd Rokita, who also sent demands to NAACP President Derrick Johnson, Rev. Al Sharpton, and several other activists.
Rokita is investigating whether Facebook Inc., Twitter Inc., Alphabet Inc.’s Google, Apple Inc. or Amazon.com Inc.’s content management practices violate state consumer protection laws by “censoring” conservative content online. The activists drew Rokita’s attention following a media report about a meeting Rokita’s office believes they attended with Facebook CEO Mark Zuckerberg and other tech company executives.
Watch here, via Yale’s Abrams Institute.
Nate Persily at The Monkey Cage:
The most important aspect of the board’s adjudication of the Trump takedown was its reliance on international human rights law to guide its decision. The board’s earlier decisions also referred to applicable United Nations conventions and treaties, as does the charter that established the board in the first place. In many respects, the Oversight Board is a first step toward realizing human rights defenders’ long-standing dream of a world court with transnational jurisdiction.
However, it’s hard for Facebook to implement human rights principles that were designed to bind governments, rather than to guide a private company trying to moderate content. Facebook is not a government; its news feed, which uses algorithms to deliver personalized content to billions of people, is not the public square. Facebook is in the business of what constitutional lawyers call “prior restraints” — that is, filtering speech before it reaches its audience. Its rules on hate speech, obscenity, self-harm and disinformation, to name a few, would all be unconstitutional under the First Amendment if passed by the U.S. government.
Facebook’s Oversight Board has issued its long-anticipated decision on whether Facebook was correct in removing Trump from the platform given his statements supporting the Capitol insurrection on January 6, 2021. The Board determined that Facebook was correct: “In maintaining an unfounded narrative of electoral fraud and persistent calls to action, Mr. Trump created an environment where a serious risk of violence was possible.” But the Board also found that Facebook’s “indefinite” suspension of Trump was not supported by Facebook’s own rules. It requires Facebook, within six months, to explain what its rules are for indefinite suspension and to apply them to Trump. It also suggests, more broadly, a set of criteria that would both protect freedom of expression and require the platform to take action against threats of political violence by political leaders.
The approach that the Oversight Board took is broadly consistent with the approach I and a group of scholars advocated in this letter we submitted to the Board in the case. The Board properly recognizes that Trump’s statements increased the danger of violence and democratic instability, which overcomes the usual heavy thumb on the scale in favor of free expression on political issues. And, although not addressed in the letter, the Board is surely right that Facebook needs to have transparent and consistently applied standards for when content from influential leaders is to be removed. And it should apply that standard to Trump.
Where the Board fell short is in opining on what those standards should be and when someone suspended from the platform for “creating an environment where a serious risk of violence was possible” should be reinstated. The majority refused to opine on such a standard, but a minority of the Board did. “Facebook should, for example, be satisfied that Mr. Trump has ceased making unfounded claims about election fraud in the manner that justified suspension on January 6.” From the summary of the decision: “A minority of the Board emphasized that Facebook should take steps to prevent the repetition of adverse human rights impacts and ensure that users who seek reinstatement after suspension recognize their wrongdoing and commit to observing the rules in the future.”
This, at a minimum, should be the standard that Facebook applies in the future. Facebook is a private company that can include or exclude content as it sees fit. As a responsible corporate citizen, Facebook, like Twitter, can decide it does not need to give a platform to someone who encouraged violence and who continues to insist, against all reliable evidence, that the election was stolen. Until Trump backs off such claims (and he never will), he should not be reinstated. As we explained in the letter to the Oversight Board:
Under these standards, President Trump’s statements and course of conduct culminating on January 6, 2021 justified his deplatforming from social media. Before January 6 the President had made over 400 comments falsely calling the election into question. He encouraged his supporters to come to the Capitol on January 6 for “wild” protests. He gave a speech shared on social media that encouraged his supporters to march to the Capitol and interfere with the vote counting, and in the post that led to his deplatforming, he praised those engaged in insurrection with “love” and repeated false claims of a “fraudulent” and “stolen” election as the violence in the Capitol was ongoing.
Anyone who doubts the risks of such speech need only look at the events of January 6, 2021 in the U.S. Capitol. Not only did such speech lead to the deaths of five people and injuries to countless others, including police officers guarding the Vice President of the United States and Members of Congress; those political leaders came within moments of being kidnapped or killed but for the bravery of law enforcement. Without social media spreading Trump’s statements, it seems extremely unlikely these events would have occurred. The eventual deplatforming of Trump’s accounts helped defuse a dangerous and antidemocratic situation.
There no doubt will be close calls under a policy that allows the deplatforming of political leaders in extreme circumstances. This was not one of them.
Let’s be perfectly clear about this: if the Board required Trump’s reinstatement, he’d be writing TODAY about how the fake Arizona “audit” will prove the election was stolen, further undermining confidence in the American electoral process. (Indeed, here’s what Trump just released; it would be on Facebook if allowed.)
This looks to be a very important event (registration required):
The Facebook Oversight Board will release its decision concerning the takedown of President Donald Trump’s account this Wednesday. On Thursday, May 6, from 2:00 to 3:15 PM Pacific, members of the Oversight Board will be joined by the leaders of the Stanford Cyber Policy Center to discuss the Board’s decision. Two members of the Oversight Board, Michael McConnell and Julie Owono, will be joined by Nate Persily, Renee DiResta, Daphne Keller, Marietje Schaake and Alex Stamos to discuss the decision and its implications for Facebook’s handling of similar controversies around the world.
Sue Halpern for The New Yorker.
I’m really looking forward to participating in this event:
As speech on the Internet increasingly dominates public discourse, the decision-making within Facebook, Google and the like about what to carry and what not to carry is of ever-increasing import. The impact of Facebook and its competitors on our elections, the nature of their content moderation policies, and the impact of the Internet on First Amendment values and law are the topics of this conversation led by Floyd Abrams between evelyn douek, a leading scholar of content moderation and platform governance, Professor Noah Feldman, whose concept of an internal equivalent of a Supreme Court within Facebook has been adopted by that company, and Professor Richard Hasen, the nation’s leading legal expert on election law.
Please use this link to make your reservation now, and join us on May 4: https://www.eventbrite.com/e/the-internet-elections-and-the-first-amendment-tickets-151219807515.
Abrams Institute Conversations are made possible through the generous support of the Stanton Foundation.
evelyn douek is a Lecturer on Law and S.J.D. candidate at Harvard Law School, Associate Research Scholar at the Knight First Amendment Institute at Columbia University, and Affiliate at the Berkman Klein Center for Internet & Society. She studies online speech regulation and platform governance. Before coming to Harvard to complete a Master of Laws, evelyn clerked for the Chief Justice of the High Court of Australia, the Hon. Justice Susan Kiefel, and worked as a corporate litigator. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
Noah Feldman is the Felix Frankfurter Professor of Law at Harvard Law School and Director of the Julis-Rabinowitz Program on Jewish and Israeli Law. He specializes in constitutional studies, with a particular emphasis on the relationship between law and religion, free speech, constitutional design, and the history of legal theory. He is also the Chairman of the Society of Fellows at Harvard. In 2003 he served as senior constitutional advisor to the Coalition Provisional Authority in Iraq, and subsequently advised members of the Iraqi Governing Council on the drafting of the Transitional Administrative Law, or interim constitution. Noah Feldman proposed what became the Facebook Oversight Board, helped design it, and advises Facebook and other social media clients on free expression issues.
Richard L. Hasen is Chancellor’s Professor of Law and Political Science at the University of California, Irvine. Hasen is a nationally recognized expert in election law and campaign finance regulation, writing as well in the areas of legislation and statutory interpretation, remedies, and torts. He is co-author of leading casebooks in election law and remedies. He served in 2020 as a CNN Election Law Analyst. From 2001-2010, he served (with Dan Lowenstein) as founding co-editor of the quarterly peer-reviewed publication, Election Law Journal. He is the author of over 100 articles on election law issues.
Floyd Abrams is senior counsel at Cahill Gordon & Reindel LLP, a Visiting Lecturer at Yale Law School and a Lecturer in Law at Columbia Law School. He is the author of three books about the First Amendment, of which the most recent is “The Soul of the First Amendment” (2017). Mr. Abrams has argued numerous cases involving the First Amendment in the Supreme Court and lower courts. Among others, he was co-counsel to the New York Times in the Pentagon Papers case, counsel to the Brooklyn Museum in its litigation against New York City Mayor Rudolph Giuliani, and counsel to Senator Mitch McConnell in the Citizens United case. Former Yale Law School Dean Robert Post has observed that “no lawyer has exercised a greater influence on the development of First Amendment jurisprudence in the last four decades.”
The Floyd Abrams Institute for Freedom of Expression at Yale Law School promotes freedom of speech, freedom of the press, access to information and government transparency. The Institute’s activities are grounded in the belief that collaboration between the academy and the bar will enrich both scholarship and practice.
Twitch comes with a bonus: The service makes it easy for streamers to make money, providing a financial lifeline just as their access to the largest online platforms has narrowed. The site is one of the avenues, along with apps like Google Podcasts, where far-right influencers have scattered as their options for spreading falsehoods have dwindled.
Twitch became a multibillion-dollar business thanks to video gamers broadcasting their play of games like Fortnite and Call of Duty. Fans, many of whom are young men, pay the gamers by subscribing to their channels or donating money. Streamers earn even more by sending their fans to outside sites to either buy merchandise or donate money.
Now Twitch has also become a place where right-wing personalities spread election and vaccine conspiracy theories, often without playing any video games. It is part of a shift at the platform, where streamers have branched out from games into fitness, cooking, fishing and other lifestyle topics in recent years.
But unlike fringe livestreaming sites like Dlive and Trovo, which have also offered far-right personalities moneymaking opportunities, Twitch attracts far larger audiences. On average, 30 million people visit the site each day, the platform said.