Lawrence Norden, Mekela Panditharatne, and David Harris at Just Security.
Protect Democracy and the Yale Media Freedom and Information Access Clinic filed this Second Circuit amicus brief (with me as client and co-counsel) in United States v. Mackey. Mackey was convicted under 18 U.S.C. § 241 for conspiring “to use Twitter to trick American citizens into thinking they could vote by text and stay at home on Election Day—thereby suppressing and injuring those citizens’ right to vote.” Gov’t Br. 2. Mackey has argued that section 241 does not cover such a scheme and that the law is facially unconstitutional under the First Amendment because it punishes too much protected speech.
In our brief, we explain both that the statute, properly construed, bars lies about when, where, or how people vote that are intended to deprive people of their right to vote, and that limiting section 241 to such empirically verifiable false speech ensures that the law does not violate the First Amendment. The Supreme Court has already stated that the government “may prohibit messages intended to mislead voters about voting requirements and procedures” consistent with the First Amendment. Minn. Voters All. v. Mansky, 138 S. Ct. 1876, 1889 n.4 (2018). Further, as explained in Protect Democracy’s blog post on the filing:
The primary question before the Second Circuit in Mackey’s appeal is whether the federal civil rights statute he was convicted under – which bans conspiring to “injure” any person in their exercise of federal rights – actually bars conspiracies to circulate false information about voting mechanisms and procedures. Professor Hasen’s amicus brief explains why intentionally false statements about voting mechanisms and procedures violate federal law, and why such speech can be punished without running afoul of the First Amendment’s protections.
In particular, to establish the applicability of Reconstruction-era civil rights protections to internet memes, the brief traces the history of legal actions protecting the right to vote back to England in 1703. That history shows, among other things, a three-century-long recognition among judges that an intentional deprivation of the right to vote constitutes an “injury” for which the law provides a remedy. As a result, the brief argues, Mackey’s conduct clearly constituted a conspiracy to “injure” under long-recognized legal principles, even if the Reconstruction Congress could not have imagined what an internet meme is.
Below the fold, you can find the introduction to our brief, which relies heavily on common law tort principles protecting the right to vote and their explanation in the Restatement (Second) of Torts section 865.
Meta announced on Friday it would stop proactively recommending political content on Instagram or its upstart text-based app Threads, alarming news and politics-focused creators and journalists gearing up for a crucial election year.
While users will still be allowed to follow accounts that post about political and social issues, accounts posting such content will not be recommended, and content posted by nonpolitical accounts that is political in nature or includes social commentary also won’t be recommended, Meta said.
The company said it also won’t show users posts focused on laws, elections or social issues from accounts those users don’t follow.
“This announcement expands on years of work on how we approach and treat political content based on what people have told us they wanted,” said Meta spokesperson Dani Lever.
Meta said users will still be able to see politics-related posts in their main feeds from accounts they follow. But the new approach means users are less likely to see politics-oriented content or accounts on Instagram’s “Explore” page, its short-form video product known as Reels, and the suggested-users-to-follow box. Meta also won’t be recommending politics to users’ feeds on Threads. Meta said it plans to develop tools to allow users to opt in to seeing more political content, but those tools are not yet available.
Days before a pivotal election in Slovakia to determine who would lead the country, a damning audio recording spread online in which one of the top candidates seemingly boasted about how he’d rigged the election.
And if that wasn’t bad enough, his voice could be heard on another recording talking about raising the cost of beer.
The recordings immediately went viral on social media, and the candidate, who is pro-NATO and aligned with Western interests, was defeated in September by an opponent who supported closer ties to Moscow and Russian President Vladimir Putin.
While the number of votes swayed by the leaked audio remains uncertain, two things are now abundantly clear: The recordings were fake, created using artificial intelligence; and US officials see the episode in Europe as a frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election.
“As a nation, we are woefully underprepared,” said V.S. Subrahmanian, a Northwestern University professor who focuses on the intersection of AI and security.
Senior national security officials in the US have been gearing up for “deepfakes” to inject confusion among voters in a way not previously seen, a senior US official familiar with the issue told CNN. That preparation has involved contingency planning for a foreign government potentially using AI to interfere in the election.
In the spring of 2020, when President Donald J. Trump wrote messages on Twitter warning that increased reliance on mail-in ballots would lead to a “rigged election,” the platform ran a corrective, debunking his claims.
“Get the facts about mail-in voting,” a content label read. “Experts say mail-in ballots are very rarely linked to voter fraud,” the hyperlinked article declared.
This month, Elon Musk, who has since bought Twitter and rebranded it X, echoed several of Mr. Trump’s claims about the American voting system, putting forth distorted and false notions that American elections were wide open for fraud and illegal voting by noncitizens.
This time, there were no fact checks. And the X algorithm — under Mr. Musk’s direct control — helped the posts reach large audiences, in some cases drawing many millions of views.
Since taking control of the site, Mr. Musk has dismantled the platform’s system for flagging false election content, arguing it amounted to election interference.
Now, his early election-year attacks on a tried-and-true voting method are raising alarms among civil rights lawyers, election administrators and Democrats. They worry that his control over the large social media platform gives him an outsize ability to reignite the doubts about the American election system that were so prevalent in the lead-up to the riot at the Capitol on Jan. 6, 2021.
As Mr. Trump’s victory in New Hampshire moved the race closer to a general election footing, the Biden campaign for the first time criticized Mr. Musk directly for his handling of election content on X: “It is profoundly irresponsible to spread false information and sow distrust about how our elections operate,” the Biden campaign manager, Julie Chávez Rodríguez, said this week in a statement to The New York Times.
“It’s even more dangerous coming from the owner of a social media platform,” she added.
What is angering the Biden campaign is delighting pro-Trump Republicans and others who depict the old Twitter as part of a government-controlled censorship regime that aided Mr. Biden in 2020. Under a system now in dispute at the Supreme Court, government officials alerted platforms to posts they deemed dangerous, though it was up to the companies to act or not.
“Oh, boo hoo,” Harmeet K. Dhillon, a lawyer whose firm represents Mr. Trump, said of the Democrats’ complaints. Ms. Dhillon has sued the company for suspending an election-denying client’s account after receiving a notice from California election officials — the sort of government interplay Mr. Musk has repudiated. She noted the platform was now “a much better place for conservatives,” and said of Mr. Musk, “he’s great.”…
Check this out. Great lineup.
LaToi Storr, a 42-year-old content creator and lifestyle blogger based in Philadelphia, normally posts Instagram and TikTok videos of local restaurants and skincare tips, mingled with some community-focused material on Black mental health care.
Last fall, she started posting a new kind of message on her feeds.
In an Instagram reel in October, she urged her 16,500 followers to register for a Pennsylvania election for state judges and district attorneys. She posted the same video on TikTok. Then, she posted another reel reminding people to get out to vote.
For her political posts, she was paid by Priorities USA, a super PAC supporting President Joe Biden’s reelection.
The influential Democratic PAC is spending $1 million for its first-ever “creator” program, enlisting Storr and 150 other influencers to post on social media in the 2024 election cycle, according to details first shared with POLITICO.
The effort is part of a larger Democratic strategy to lure young voters in battleground states, who polls show are increasingly critical of Biden, whether over his age or issues like his stance towards Israel. Biden’s reelection campaign itself is amping up its work with social media influencers in 2024, though those partnerships are currently unpaid, Daniel Wessel, a Biden campaign spokesperson, told POLITICO. The White House team separately is also flexing its creator game, throwing its first-ever influencer Christmas party last December.
A prominent New Hampshire Democrat plans to file a complaint with the state attorney general over an apparent robocall that appears to encourage supporters of President Joe Biden not to vote in Tuesday’s presidential primary.
The voice in the message is familiar — even presidential — as it’s an apparent imitation or digital manipulation of Biden’s voice.
“What a bunch of malarkey,” the voice message begins, echoing a favorite term Biden has uttered before.
The message says that “it’s important that you save your vote for the November election.”
“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday,” it says.
The message concludes with a phone number belonging to Kathy Sullivan, a former New Hampshire Democratic Party chair who is now running a super PAC supporting the campaign to urge New Hampshire Democrats to write in Biden’s name in the primary.
Billions of people will vote in major elections this year — around half of the global population, by some estimates — in one of the largest and most consequential democratic exercises in living memory. The results will affect how the world is run for decades to come.
At the same time, false narratives and conspiracy theories have evolved into an increasingly global menace.
Baseless claims of election fraud have battered trust in democracy. Foreign influence campaigns regularly target polarizing domestic challenges. Artificial intelligence has supercharged disinformation efforts and distorted perceptions of reality. All while major social media companies have scaled back their safeguards and downsized election teams.
“Almost every democracy is under stress, independent of technology,” said Darrell M. West, a senior fellow at the Brookings Institution think tank. “When you add disinformation on top of that, it just creates many opportunities for mischief.”
Weeks before the 2020 presidential election, infamous political operative Roger Stone sat across from his associate Sal Greco at a restaurant in Florida.
At the time, Greco was an NYPD cop working security for Stone on the side. Their conversation, at Caffe Europa in Fort Lauderdale, focused on two House Democrats for whom Stone harbors particular animosity, Jerry Nadler and Eric Swalwell.
In audio of the conversation obtained exclusively by Mediaite, Stone made threatening comments about the two lawmakers.
“It’s time to do it,” Stone told Greco. “Let’s go find Swalwell. It’s time to do it. Then we’ll see how brave the rest of them are. It’s time to do it. It’s either Nadler or Swalwell has to die before the election. They need to get the message. Let’s go find Swalwell and get this over with. I’m just not putting up with this shit anymore.”
A source familiar with the discussion told Mediaite they believed Stone’s remarks were serious. “It was definitely concerning that he was constantly planning violence with an NYPD officer and other militia groups,” the source said….
Stone denied making those comments, claiming they were generated by AI. He has previously claimed videos of his comments are actually “deep fakes.” In response to a request for comment on the remarks aimed at Swalwell and Nadler, Stone said, “Total nonsense. I’ve never said anything of the kind more AI manipulation. You asked me to respond to audios that you don’t let me hear and you don’t identify a source for. Absurd.”
On the liar’s dividend (from my Cheap Speech book):
Artificial intelligence allowed Pakistan’s former prime minister Imran Khan to campaign from behind bars on Monday, with a voice clone of the opposition leader giving an impassioned speech on his behalf.
Khan has been locked up since August and is being tried for leaking classified documents, allegations he says have been trumped up to stop him contesting general elections due in February.
His Pakistan Tehreek-e-Insaf (PTI) party used artificial intelligence to make a four-minute message from the 71-year-old, headlining a “virtual rally” hosted on social media overnight on Sunday into Monday despite internet disruptions that the monitoring group NetBlocks said were consistent with previous attempts to censor Khan.
The Guardian reports, with the subhead: “Kate Starbird says attacks have made research difficult, and claims of bias arise because of prevalence of lies from the right.”
Jennifer Szalai’s “Critic’s Notebook” review essay for the NYT.