Mark Zuckerberg, Facebook’s chief executive, made securing the 2020 U.S. election a top priority. He met regularly with an election team, which included more than 300 people from across his company, to prevent misinformation from spreading on the social network. He asked civil rights leaders for advice on upholding voter rights.
The core election team at Facebook, which was renamed Meta last year, has since been dispersed. Roughly 60 people are now focused primarily on elections, while others split their time between elections and other projects. They meet with another executive, not Mr. Zuckerberg. And the chief executive has not talked recently with civil rights groups, even as some have asked him to pay more attention to the midterm elections in November.
Safeguarding elections is no longer Mr. Zuckerberg’s top concern, said four Meta employees with knowledge of the situation. Instead, he is focused on transforming his company into a provider of the immersive world of the metaverse, which he sees as the next frontier of growth, said the people, who were not authorized to speak publicly.
The shift in emphasis at Meta, which also owns Instagram and WhatsApp, could have far-reaching consequences as faith in the U.S. electoral system reaches a brittle point. The hearings on the Jan. 6 Capitol riots have underlined how precarious elections can be. And dozens of political candidates are running this November on the false premise that former President Donald J. Trump was robbed of the 2020 election, with social media platforms continuing to be a key way to reach American voters.
Election misinformation remains rampant online. This month, “2000 Mules,” a film that falsely claims the 2020 election was stolen from Mr. Trump, was widely shared on Facebook and Instagram, garnering more than 430,000 interactions, according to an analysis by The New York Times. In posts about the film, commenters said they expected election fraud this year and warned against using mail-in voting and electronic voting machines.
Other social media companies have also pulled back some of their focus on elections. Twitter, which stopped labeling and removing election misinformation in March 2021, has been preoccupied with its $44 billion sale to Elon Musk, three employees with knowledge of the situation said. Mr. Musk has suggested he wants fewer rules about what can and cannot be posted on the service….
Mr. Zuckerberg no longer meets weekly with those focused on election security, said the four employees, though he receives their reports. Instead, they meet with Nick Clegg, Meta’s president of global affairs.
Several civil rights groups said they had noticed Meta’s shift in priorities. Mr. Zuckerberg isn’t involved in discussions with them as he once was, nor are other top Meta executives, they said.
“I’m concerned,” said Derrick Johnson, president of the N.A.A.C.P., who talked with Mr. Zuckerberg and Sheryl Sandberg, Meta’s chief operating officer, ahead of the 2020 election. “It appears to be out of sight, out of mind.” (Ms. Sandberg has announced she will leave Meta this fall.)
Homeland and national security officials are worried about how Russia could significantly exploit US divisions over the November midterms, considering scenarios like Russia staging smaller hacks of local election authorities — done with the deliberate purpose of being noticed — and then using that to seed more conspiracies about the integrity of American elections.
These efforts, the officials said, would be designed to dovetail with the false doubts about the 2020 presidential election spread by former President Donald Trump and many of his allies. The five current and former US officials who spoke to CNN stressed that such a scenario remains hypothetical.
Although US elections have become more secure in recent years, officials say that an atmosphere of distrust in America’s elections, coupled with the sheer number of local election systems, means there’s no way to truly be ready for such a convergence of Russian asymmetric warfare techniques.
You can read the review here.
Richard Stengel for the NY Times:
The rise in disinformation aided by automatic bots, false personas and troll farms is leading some thinkers to conclude that the marketplace of ideas — the foundation of modern First Amendment law — is experiencing a market failure. In the traditional marketplace model, the assumption is that truth ultimately drives out falsehood. That, suggests Hasen in “Cheap Speech,” is hopelessly naïve. Hasen, a law professor at the University of California, Irvine, posits that the increase in dis- and misinformation is a result of what he calls “cheap speech,” a term coined by Eugene Volokh, a law professor at U.C.L.A. The idea is that social media has created a class of speech that is sensational and inexpensive to produce, with little or no social value.
In the pre-internet era, disinformation was as difficult and expensive to produce as truthful information. You still had to pay someone to do it — you still had to buy ink and paper and distribute it. Now, the distribution cost of bad information is essentially free, with none of the liability of traditional media. In the age of cheap speech, the classic libertarian line that the cure for bad speech is more speech seems dangerously outdated.
Hasen puts forth a number of solid recommendations on how to combat disinformation — more content moderation, more liability for the platforms, more transparency of algorithms — but adds a very specific one: a narrow ban on verifiably false election speech. The idea is that elections are so vital to democracy that even though political speech has a higher standard of First Amendment protection, false information about voting should be removed from the big platforms.
I had a great conversation on the “In House-Warrior” podcast with Richard Levick. Listen here.
In an unusual 5-4 vote, the Supreme Court has vacated a so-far-unexplained order from the 5th Circuit that stayed enforcement of a Texas district court order barring Texas from enforcing its new social media law. Among other things, this Texas law, if enforceable, could well require large social media companies such as Twitter and Facebook to re-platform Donald Trump after he was deplatformed for encouraging the January 6 insurrection at the United States Capitol. The district court held the statute likely violated the First Amendment and a Fifth Circuit panel, offering no reason thus far, stayed that order. That stay would have allowed Texas to enforce its law pending the appeal of the case. As it stands now, Texas cannot enforce its law. But the 5th Circuit will eventually issue an opinion and allow Texas to enforce its law, and the issue will almost certainly be back before the Supreme Court. This is especially true because of last week’s contrary 11th Circuit opinion, striking down a similar Florida law as violating the First Amendment rights of the private platforms to decide what content should be included or excluded.
The majority (C.J. Roberts, and Justices Barrett, Breyer, Kavanaugh, and Sotomayor) did not give a reason for vacating the 5th Circuit stay. Justice Kagan dissented, probably not on the merits but because of her views on whether the Supreme Court should be getting involved in these major pending cases on the shadow docket rather than letting them work their way through the courts.
But Justice Alito wrote an opinion for himself, Justice Thomas, and Justice Gorsuch. In the opinion, Alito does not say that the law is in fact unconstitutional. He argues that the matter is uncertain, buying into the arguments advanced in the past by Justice Thomas, Eugene Volokh, and others, that social media companies can be regulated like “common carriers” (such as the phone company) and forced to carry speech that they do not like.
The argument is audacious and shocking coming from those (like Justice Thomas, less so for a Justice like Alito) who have taken near absolutist positions on First Amendment rights in the past, especially on issues such as campaign finance laws. I write about this in great detail in my Cheap Speech book, and explained the point briefly in this Slate piece:
It would be bad enough if the Supreme Court simply applied outmoded libertarian thinking to today’s information cesspool, believing that the truth will inevitably rise to the top and give voters the tools they need for informed decisionmaking. But the court’s inconsistent thinking on the First Amendment could make things far worse.
Consider the decision of Facebook and Twitter to “deplatform” Trump after he helped inspire the violent insurrection at the U.S. Capitol on January 6, 2021. Meta, which owns Facebook, and Twitter are private companies that make decisions all the time about what content to include, exclude, promote, and demote. The First Amendment does not limit these private companies and they can regulate speech in ways the government could not do. These companies remove hate speech, pornography, and other objectionable content from their platforms all the time.
But Justice Clarence Thomas—yes, the same Justice Thomas who believes that virtually all campaign finance laws violate the First Amendment—recently went out of his way in a case not presenting the issue to raise support for new laws, such as one passed last year in Florida, that would require social media companies to carry the content of politicians they do not like, even if those politicians support election violence or undermine voter confidence in the integrity of the electoral process. Justice Thomas has suggested that social media platforms are like telephone companies that could be subject to “must carry” provisions and cannot discriminate among customers based upon their political views.
But social media companies are much closer to newspapers and TV stations than telephone companies. The former but not the latter curate content all the time, and they can decide who appears on the platform and how. Justice Thomas appears to believe in the freedom of FOX News or the Atlantic to create a coherent brand with a message, but not Twitter or Facebook.
It is hard not to conclude that Justice Thomas was motivated toward this anti-libertarian position requiring private companies to carry speech they would rather not include on their websites because doing so would favor Donald Trump and those like Trump.
The good news from today’s opinion is that it looks like there are 5 or 6 votes at least to reject the Texas law and to hold that just like newspapers can decide what content to include or exclude, social media companies can do so too. Whether Section 230 of the Communications Decency Act recognizes it or not, social media companies exercise editorial discretion all the time. They should not be forced as private actors to carry dangerous and anti-democratic speech. People who want such speech can easily find it on Trump’s “Truth Social” platform or elsewhere.
You can listen here.
A must-read story from the Washington Post on how Australia combats election disinformation:
In a Canberra office covered in computer screens, the alerts began pouring in.
“This needs a #FactCheck,” one person tweeted.
“Is this not illegal?” another asked.
Tagged in the torrent of tweets was the Australian Electoral Commission (AEC). Within minutes, the federal agency responded, calling the video “false” and “disappointing.” The agency’s actions quickly led Twitter to label the cartoon as “misleading,” and Facebook and TikTok took it down completely.
The incident last month reflects the rising tide of misinformation Australia faces as it prepares to go to the polls on Saturday. But it also shows the benefit of a single agency overseeing a country’s electoral process….
“There are a myriad of major and minor differences in how electoral laws and regulations are administered across America,” said Pippa Norris, a professor at Harvard’s Kennedy School of Government. “This violates basic principles of equality and consistency in electoral processes and voting rights, leads to excessively partisan considerations gaming the system, and encourages numerous malpractices.”
Australia’s electoral system, in contrast, is praised by analysts around the world.
Steven J. Mulroy, a professor at the University of Memphis and the author of a book on American election law, called it the “gold standard in election administration.”…
As the challenges have changed, so, too, has the AEC.
When Ekin-Smyth joined in 2011, the AEC didn’t even have a Twitter account. A decade later, half a dozen people now help him tweet at a blistering pace: up to two dozen times per hour. It also has accounts on Facebook, Instagram, LinkedIn and YouTube, has partnered with TikTok on an election guide, and has held an “Ask me Anything” on Reddit….
“We’re not blind to the fact that social media moves incredibly swiftly,” Ekin-Smyth said. “And the action that social media organizations can take is brilliant. But the action we can take even quicker by responding on our channels is perhaps going to be even more effective.”…
“A party or candidate talking about another party, their policies, their history — we cannot be the regulators of truth for that,” Ekin-Smyth said. “We don’t have legislation that allows it. But also there would be some practical problems and some perception problems if we were making decisions on those things.”…
With social media stoking tribalism, the AEC requires all its employees — including its 100,000 temporary election workers — to sign a declaration of political neutrality.
“There is a lot of responsibility to it,” Ekin-Smyth said, “because a failed election — real or perceived — as we’ve seen in other jurisdictions, is potentially devastating.”
You can watch the full interview here:
Jeff Kosseff reviews my book Cheap Speech at Lawfare:
To say that Volokh’s article was prophetic would be an understatement. More than a quarter-century later, the cheap speech that Volokh predicted has upended commerce, art, politics, news and community. Many volumes can and should be written about the effects of the rapid evolution of cheap speech on discrete areas of American life.
Fortunately, Rick Hasen has done just that. In “Cheap Speech: How Disinformation Poisons Our Politics—and How to Cure It,” Hasen takes on the lofty task of examining the impact of cheap speech on American elections, politics and democracy. Hasen has written an extraordinary, thorough and fair examination of the impact of misinformation on democracy. He examines the costs and benefits of cheap speech and presents carefully crafted proposals that attempt to address the harms without straying from core First Amendment values or from falling into a moral panic about misinformation….
More important than Hasen’s evaluation of the problem is the second half of the book, in which Hasen considers potential solutions to mitigate some harms of cheap speech. Too often, discussions about misinformation end in solutions that casually dismiss First Amendment principles. Other times, they simply conclude with despair and do not even try to address the problems. Hasen—one of the nation’s most knowledgeable and respected election law scholars—could have gotten away with half-baked proposals to lop off large chunks of the First Amendment for the sake of saving democracy from cheap speech. …
But Hasen has not joined the calls for substantial abrogations to free speech. As Hasen recognizes, solving the problems created by cheap speech with sweeping new laws that limit speech “would undermine some fundamental American values and a key part of our democracy: the benefits of robust and uninhibited political debate.” Hasen also properly questions “who, in a society animated by distrust, would do the regulating and how they would do it.”
Rather than traveling down the censorial road that many others have traveled, Hasen relies on his deep knowledge of election and campaign finance law to suggest ways to mitigate some of the worst political misinformation harms.
Some of Hasen’s suggestions—such as ensuring that state and local governments competently administer elections—do not raise First Amendment problems. The proposals that do implicate potential free speech concerns valiantly attempt to stay within the strictures of the First Amendment. For instance, when Hasen suggests that Congress amend campaign finance disclosure laws to address online advertisements, he attempts to adhere to the Supreme Court precedent that has approved of some campaign finance disclosure requirements. Yet Hasen also recognizes that some justices who supported campaign finance disclosure laws no longer sit on the court, so how the current court would react to new requirements is uncertain.
Hasen also recognizes that it is hard to predict whether the Supreme Court would approve his proposal to require large online platforms to place labels on synthetically altered videos and images of politicians, addressing the concerns about deep fakes. Yet he presents a reasonable argument for such a proposal to survive even the most rigorous constitutional scrutiny and contrasts it with more constitutionally problematic bans on deep fakes.
Hasen argues that the government should have the power to ban “false speech about the mechanics of voting,” such as lying about when an election will occur or how people can vote. The Supreme Court has suggested that a state’s ban on speech that is “intended to mislead voters about voting requirements and procedures” would survive a First Amendment challenge. Hasen is appropriately careful to exclude from his proposed ban generalized claims that an election will be “rigged” or “stolen,” as well as postelection claims about stolen or rigged elections. The narrowness of this proposal means that it would not address much of the “big lie” that fueled the Jan. 6 storming of the Capitol, but it is far more likely to survive a First Amendment challenge than a more sweeping election misinformation proposal.
Hasen rightly resists the temptation to attempt to address misinformation through amendments to Section 230, the 1996 law that shields online platforms from many claims arising from third-party content. A wide swath of misinformation is constitutionally protected, and amending Section 230 could not eliminate that protection. He also correctly recognizes that repealing Section 230 would not address the claims that platforms are biased against conservatives, as the increased legal risk likely would cause platforms to take down more content. That is not to say that Hasen dismisses concerns about platform power; rather, he suggests addressing them via required disclosures about algorithmic tweaking of content, antitrust law and privacy laws.
I have written this piece for Slate. It begins:
Elon Musk’s apparent decision to restore former President Donald Trump’s privileges to post on Twitter if his purchase of the company closes is a dangerous one for American democracy. And there’s one group that has by far the best chance to prevent it from happening and that must organize and act: Twitter’s employees.
Musk told an interviewer Tuesday at an automobile conference that it was a mistake to ban Trump after his comments egging on rioters on Jan. 6, while he was working to overturn the 2020 election from the White House. “I do think that it was not correct to ban Donald Trump. I think that was a mistake because it alienated a large part of the country and did not ultimately result in Donald Trump not having a voice,” Musk said, articulating his reasons to undo a Trump ban. “He is now going to be on Truth Social, as will a large part of the right, in the United States, and so I think this could end up being, frankly, worse than having a single forum where everyone could debate. I guess the answer is that I would reverse the permanent ban.” When pressed on whether it was wrong to ban Trump after Trump encouraged the violence of January 6, Musk said: “I think if there are tweets that are wrong and bad, those should be either deleted or made invisible, and a suspension, a temporary suspension, is appropriate, not a permanent ban.”…
To begin with, the First Amendment properly understood prevents laws that require either the platforming or deplatforming of politicians on social media platforms. Meta and Twitter are private companies that are not constrained by the First Amendment. The government should not have the right to tell them which content to include or exclude any more than the government can tell Slate or Fox News what content to include or exclude. (For this reason, laws like those passed recently in Florida and Texas that would seek to force Trump be restored to social media platforms should be held unconstitutional. Lower courts have held these laws likely unconstitutional, and the Fifth Circuit held arguments on Texas’s law earlier this week.) So there is no legal obligation for Musk to make the decision he’s reached.
What could alter his course? Public pressure is a possibility. A decision to restore Trump to Twitter could lead to people leaving Twitter in protest. But the network effects of social media platforms make leaving professionally or personally difficult. If most journalists and scholars writing about elections remain on Twitter, it would be hard for me to leave in protest. I expect I would leave if Trump begins posting again, but many others may not follow suit. So while public pressure is possible and desirable, it might not be enough to make a difference if too many people view the cost of leaving as too high.
But Twitter employee action could make a real difference. Engineers and others who work at tech companies are in high demand. There’s lots of competition among the leading companies to bring in and retain the best talent. Employees can organize and seek to pressure Twitter’s likely new owner to do the right thing; they can threaten to leave if he doesn’t….
What would action by Twitter employees look like? Those in high enough positions can threaten to quit, though that could come at a potentially large personal cost. To make the threats credible, other social media companies with pro-democracy policies can invite these employees to apply for jobs with open arms. All employees could try to unionize to have greater power to push against anti-democratic moves of the company. Individual employees can leak information about what Twitter knows about the relationship of Trump’s tweets to threats to American democracy. At the very least, employees can make their views known within the company and seek to put pressure on managers to bring concerns to Musk.
You can listen here.