Category Archives: cheap speech

“Trump and Musk Attack Journalists by Name in Social Media Posts”

NYT:

President Trump has made clear his animus toward mainstream media organizations. Now he’s getting more personal.

Mr. Trump and his key lieutenant, Elon Musk, who has been empowered to run what they call the Department of Government Efficiency as a “special government employee,” have attacked journalists by name in recent days on the social media platforms they own: Truth Social and X….

Mr. Musk took aim at a Wall Street Journal investigative reporter, Katherine Long. Ms. Long was the first to reveal, in a report in The Journal on Thursday, that Marko Elez, one of Mr. Musk’s lieutenants in the Department of Government Efficiency, was linked to a since-deleted racist social media account that had posted statements like, “You could not pay me to marry outside of my ethnicity.”

Mr. Elez resigned after The Journal approached the White House for comment, according to the article. It was Ms. Long’s first article in her new job at The Journal.

Mr. Musk said in separate replies on X on Friday that Ms. Long was “a disgusting and cruel person” and should be “fired immediately.”…

Vice President JD Vance also weighed in on X on Friday, saying that he disagreed with some of Mr. Elez’s posts but that they shouldn’t “ruin a kid’s life.” (Mr. Elez is 25 years old.)

“We shouldn’t reward journalists who try to destroy people. Ever,” Mr. Vance wrote.

The Wall Street Journal did not immediately respond to a request for comment.

“Journalists have a job to do and should never be attacked by high-ranking government officials for doing it,” Timothy Richardson, the journalism and disinformation program director at PEN America, a free-expression nonprofit, said in a statement.

He added, “Musk’s call for this journalist’s firing contradicts his self-proclaimed free speech advocacy and reveals his hypocrisy.”…


“Trump continues federal purge, gutting cyber workers who combat disinformation”

Politico:

The Trump administration has moved to push out a swathe of federal workers previously involved in combating election-related disinformation, according to three people familiar with the matter, amid allegations from congressional Republicans that their work unfairly targeted conservative speech online.

Roughly half a dozen employees from the Cybersecurity and Infrastructure Security Agency who once worked in its Election Security and Resilience division were notified Thursday night they were being put on administrative leave, said the three people, who were granted anonymity to discuss sensitive personnel matters.

The move comes shortly after the installment of new DHS Secretary Kristi Noem, a close Trump ally. The former South Dakota governor told congressional Republicans in her confirmation hearing last month she shared their view that CISA should no longer be involved in efforts to combat the scores of online hoaxes peddled by the likes of Russia, China and Iran.

“As Secretary Noem stated during her confirmation hearing, CISA needs to refocus on its mission, and we are starting with election security,” Tricia McLaughlin, assistant secretary for Public Affairs at CISA, said in a statement.

McLaughlin added that the agency is “undertaking an evaluation” of how it handles election security, and “personnel who worked on mis-, dis-, and malinformation, as well as foreign influence operations and disinformation, have been placed on administrative leave.”

The ousters are the latest example of how the administration is targeting career government officials with prior connections, however tenuous, to efforts it disagrees with or that interfere with Trump’s agenda….


“Altered image of Wisconsin Supreme Court candidate in new ad raises ethics concerns”

AP:

A new television attack ad in Wisconsin’s hotly contested Supreme Court race features a doctored image of the liberal candidate, a move that her campaign claims could be a violation of a recently enacted state law.

The image in question is of Susan Crawford, a Dane County circuit court judge. It appeared in a new TV ad paid for by the campaign of her opponent Brad Schimel, a Waukesha County circuit court judge.

The winner of the high-stakes race on April 1 will determine whether the Wisconsin Supreme Court remains under a liberal majority or flips to conservative control.

The Schimel campaign ad begins and ends with a black-and-white image of Crawford with her lips closed together. A nearly identical color image from her 2018 run for Dane County Circuit Court shows Crawford with a wide smile on her face.

Crawford’s campaign accused Schimel of manipulating the image, potentially in violation of a state law enacted last year. The law, passed with bipartisan support in the Legislature and signed by Democratic Gov. Tony Evers, requires disclosure if political ads use audio or video content created by generative artificial intelligence. Failure to disclose the use of AI as required can result in a $1,000 fine….

Schimel’s campaign spokesperson Jacob Fischer said the image was “edited” but not created by AI.

Peter Loge, the director of the Project on Ethics in Political Communication at George Washington University, said images should never be changed to give a false impression.

“That said, as these things go, it’s not that egregious,” Loge said of the Schimel ad….


“The Liar’s Dividend: Can Politicians Claim Misinformation to Evade Accountability?”

Kaylyn Schiff, Daniel Schiff, and Natalia Bueno have written this article for APSR. Here is the abstract:

This study addresses the phenomenon of misinformation about misinformation, or politicians “crying wolf” over fake news. Strategic and false claims that stories are fake news or deepfakes may benefit politicians by helping them maintain support after a scandal. We posit that this benefit, known as the “liar’s dividend,” may be achieved through two politician strategies: by invoking informational uncertainty or by encouraging oppositional rallying of core supporters. We administer five survey experiments to over 15,000 American adults detailing hypothetical politician responses to stories describing real politician scandals. We find that claims of misinformation representing both strategies raise politician support across partisan subgroups. These strategies are effective against text-based reports of scandals, but are largely ineffective against video evidence and do not reduce general trust in media. Finally, these false claims produce greater dividends for politicians than alternative responses to scandal, such as remaining silent or apologizing.


“The Power of Trump’s Big Lie: Identity Fusion, Internalizing Misinformation, and Support for Trump”

Philip Moniz & William B. Swann in PS: Political Science & Politics. Abstract:

Former president Trump has maintained broad support despite falsely contending that he was the victim of electoral fraud, also known as the “big lie.” We consider both the antecedents of this phenomenon and its consequences. We propose that Trump supporters’ already established deep personal alignment—identity fusion—with their leader predisposed them to believe the lie. Accepting it then set the foundation for other identity-protecting beliefs and attitudes. Using a three-wave panel of Trump supporters, we found that the more fused they were before the 2020 election, the stronger their belief in the big lie grew between 2021 and 2024. Accepting the big lie helped solidify fusion with Trump and had consequences for related attitudes. Belief in the big lie predicted downplaying the criminal charges against Trump and supporting his antidemocratic policy agenda. Fueled by and fueling further fusion, belief in the big lie is a primary component of a larger narrative that emboldens Trump and justifies antidemocratic behavior.


“Trump Signs Agreement Calling for Meta to Pay $25 Million to Settle Suit”

WSJ:

President Trump has signed settlement papers that are expected to require Meta Platforms to pay roughly $25 million to resolve a 2021 lawsuit Trump brought after the company suspended his accounts following the attacks on the U.S. Capitol that year, according to people familiar with the agreement.

Of that, $22 million will go toward a fund for Trump’s presidential library, with the rest going to legal fees and the other plaintiffs who signed onto the case. Meta won’t admit wrongdoing, the people said. Trump signed the settlement agreement Wednesday in the Oval Office. 

Meta didn’t immediately respond to a request for comment.

Serious talks about the suit, which had seen little activity since the fall of 2023, began after Meta Chief Executive Mark Zuckerberg flew to Trump’s Mar-a-Lago club in Florida to dine with him in November, according to the people familiar with the discussions. The dinner was one of several efforts by Zuckerberg and Meta to soften the relationship with Trump and the incoming administration. Meta also donated $1 million to Trump’s inaugural fund. Last year, Trump warned that Zuckerberg could go to prison if he tried to rig the election against him. 

Toward the end of the November dinner, Trump raised the matter of the lawsuit, the people said. The president signaled that the litigation had to be resolved before Zuckerberg could be “brought into the tent,” one of the people said. 

Weeks later, in early January, Zuckerberg returned to Mar-a-Lago for a full day of mediation. Trump was present for part of the session, though he stepped out at one point to be sentenced—appearing virtually—for covering up hush money paid to a porn star, one of the people said. He also golfed, reappearing in golf clothes and talking about the round he had just played, the person said….

The Meta lawsuit was one of a series of legal actions that Trump, freshly voted out of office, brought in July 2021 against social-media companies that suspended his accounts. He also sued Twitter, now renamed X, and YouTube, along with their corporate leaders. A federal judge dismissed the Twitter suit, and the Google suit was administratively closed in 2023 but could be reopened….

Trump’s Facebook and Instagram accounts were suspended in 2021 because of posts he made around Jan. 6, 2021, when a mob stormed the Capitol building. In the days leading up to the attacks and on Jan. 6, he repeatedly used the platforms to make false claims that he won the 2020 election and alleged widespread election fraud that was denied by the administration’s top election-security experts and attorneys. 

Zuckerberg, at the time, said the risks of the president using the social-media platforms during that period “are simply too great” and then paused the president’s accounts for two weeks. The pause was subsequently lengthened. …


“Trump Barely Won the Popular Vote. Why Doesn’t It Feel That Way?”

Ezra Klein NYT column:

In 2024, Donald Trump won the popular vote by 1.5 points. Trump and Democrats alike treated this result as an overwhelming repudiation of the left and a broad mandate for the MAGA movement. But by any historical measure, it was a squeaker….

In July of 2024, Tyler Cowen, the economist and cultural commentator, wrote a blog post that proved to be among the election’s most prescient. It was titled “The change in vibes — why did they happen?” Cowen’s argument was that mass culture was moving in a Trumpian direction. Among the tributaries flowing into the general shift: the Trumpist right’s deeper embrace of social media, the backlash to the “feminization” of society, exhaustion with the politics of wokeness, an era of negativity that Trump captured but Democrats resisted, a pervasive sense of disorder at the border and abroad and the breakup between Democrats and “Big Tech.”

I was skeptical of Cowen’s post when I first read it, as it described a shift much larger than anything I saw reflected in the polls. I may have been right about the polls. But Cowen was right about the culture.

Reading Cowen’s list with the benefit of hindsight, four factors converged to turn Trump’s narrow victory in votes into an overwhelming victory in vibes. The first is the very different relationship (most) Democrats and Republicans have to social media. To Democrats, mastering social media means having a good team of social media content producers; Kamala Harris’s capably snarky team was just hired more or less en masse by the D.N.C.

To the Trumpian right, mastering social media — and attention, generally — means being, yourself, a dominant and relentless presence on social media and YouTube and podcasts, as Trump and JD Vance and Elon Musk all are. It’s the politician-as-influencer, not the politician-as-press-shop. There are Democrats who do this too, like A.O.C., but they are rare.

Biden has no authentic relationship with social media, nor does Harris. They treat it cautiously, preferring to make fewer mistakes, even if that means commanding less attention. Since the election, I have heard no end of Democrats lament their “media problem,” and I’ve found the language telling. Democrats won voters who consume heavy amounts of political news, but they lost voters who don’t follow the news at all. What Democrats have is an attention problem, not a media problem, and it stems partly from the fact that they still treat attention as something the media controls rather than as something they have to fight for themselves.


“Trump Says He Will Sign Executive Order to Stall TikTok Ban”

NYT:

President-elect Donald J. Trump said on Sunday that he would issue an executive order to stall a federal ban of TikTok, just hours after major app stores removed the popular social media site and it stopped operating for U.S. users.

“I’m asking companies not to let TikTok stay dark,” Mr. Trump said in a post on Truth Social. “I will issue an executive order on Monday to extend the period of time before the law’s prohibitions take effect, so that we can make a deal to protect our national security.”

The ban stems from a 2024 law that requires app stores and cloud computing providers to stop distributing or hosting TikTok unless it is sold by its Chinese parent company, ByteDance. Lawmakers passed the law over concerns that the Chinese government could use the app, which claims roughly 170 million United States users, to gather information about Americans or spread propaganda.

App stores and cloud computing providers that do not comply with the law face potentially significant financial penalties. Mr. Trump said in his post on Sunday that he would “confirm that there will be no liability for any company that helped keep TikTok from going dark before my order.”…

An executive order would mark a new phase in the fight over the future of the app, which has reshaped the social media landscape and popular culture, and created a living for millions of influencers and small businesses that rely on the platform.

In issuing an order, Mr. Trump would raise questions about the rule of law in the United States. His action would constitute an attempt to temporarily neuter a law that passed with broad bipartisan support in Congress and that the Supreme Court unanimously upheld last week.

It is unclear whether Mr. Trump’s efforts will be successful. His executive order could face a legal challenge, including over whether he has the power to stop enforcement of a federal law. Companies subject to the law may determine that the order does not provide enough assurance that they will not be punished for violations…

In his post on Sunday, Mr. Trump floated the idea that he “would like the United States to have a 50% ownership position in a joint venture,” without providing further details….


Supreme Court, Applying Intermediate Scrutiny, Upholds TikTok Ban Against First Amendment Challenge on Grounds That It Allows a Foreign Adversary to Spy on Americans

The result was unanimous, in a per curiam opinion and two concurrences (by Sotomayor and Gorsuch).

The nub, as Justice Gorsuch put it at the end of his concurrence, was this: “Speaking with and in favor of a foreign adversary is one thing. Allowing a foreign adversary to spy on Americans is another.”

TikTok’s survival now depends on political action by Congress and the President.


ELB Podcast 6:4: Katie Harbath: The Present and Future of Social Media, Politics, and Elections

Season 6, Episode 4 of the ELB Podcast:

How did social media’s treatment of election content change in the 2024 elections?

What do Meta’s new announcements mean for politics and society going forward?

How might AI change everything?

On Season 6, Episode 4 of the ELB Podcast, we speak with social media and politics expert Katie Harbath.

You can subscribe on Soundcloud, Apple Podcasts, and Spotify.


“Meta just flipped the switch that prevents misinformation from spreading in the United States”

Platformer:

Last week, Meta announced a series of changes to its content moderation policies and enforcement strategies designed to curry favor with the incoming Trump administration. The company ended its fact-checking program in the United States, stopped scanning new posts for most policy violations, and created carve-outs in its community standards to allow dehumanizing speech about transgender people and immigrants. The company also killed its diversity, equity and inclusion program.

Behind the scenes, the company was also quietly dismantling a system to prevent the spread of misinformation. When the company announced on Jan. 7 that it would end its fact-checking partnerships, the company also instructed teams responsible for ranking content in the company’s apps to stop penalizing misinformation, according to sources and an internal document obtained by Platformer.

The result is that the sort of viral hoaxes that ran roughshod over the platform during the 2016 US presidential election — “Pope Francis endorses Trump,” Pizzagate, and all the rest — are now just as eligible for free amplification on Facebook, Instagram, and Threads as true stories.

In 2016, of course, Meta hadn’t yet invested huge sums in machine-learning classifiers that can spot when a piece of viral content is likely a hoax. But nine years later, after the company’s own analyses found that these classifiers could reduce the reach of these hoaxes by more than 90 percent, Meta is shutting them off. 

Meta declined to comment on the changes. Instead, it pointed me to a letter and a blog post in which it had hinted that this change was coming. 

The letter was sent in August by Zuckerberg to Rep. Jim Jordan, the chairman of the House Judiciary Committee. In it, Zuckerberg expressed his discomfort with the Biden Administration’s efforts to pressure the company to remove certain posts about COVID-19. Zuckerberg also expressed regret that the company had temporarily reduced the distribution of stories about Hunter Biden’s laptop, which Meta and Twitter had both done out of fear that they had been the result of a Russian hack-and-leak operation. The few hours that the story’s distribution was limited would go on to become a Republican cause célèbre.

As a kind of retroactive apology for bowing to censorship requests in the past, and for the company’s own actions in the Hunter Biden case, Zuckerberg said that going forward, the company would no longer reduce the reach of posts that had been sent to fact checkers but not yet evaluated. Once they had been evaluated, Meta would continue to reduce the reach of posts that had been designated as false.

In hindsight, this turned out to be the first step toward killing off Meta’s misinformation efforts: granting hoaxes a temporary window for expanded reach while they awaited fact checking.

That brings us to the blog post: Joel Kaplan’s “More speech, fewer mistakes,” which was published last Tuesday and among other things announced the end of the company’s US fact-checking partnerships. Buried toward the bottom were these two sentences: 

We also demote too much content that our systems predict might violate our standards. We are in the process of getting rid of most of these demotions and requiring greater confidence that the content violates for the rest.

At the time, Kaplan did not elaborate on which of these demotions the company planned to get rid of. Platformer can now confirm that misinformation-related demotions have been eliminated at the company….


Announcing the Winter/Spring Lineup of Safeguarding Democracy Project Events

We’ve got a great lineup of in person, online, and hybrid events!

Tuesday, January 28
Fair Elections and Voting Rights: What’s Ahead in the Next Four Years?

Register for the webinar here. In-person registration here. Lunch will be provided.
Tuesday, January 28, 12:15pm-1:15pm PT, Room 1327 at UCLA Law and online
Amy Gardner, The Washington Post, Pamela Karlan, Stanford Law School, and Stephen Richer, former Recorder of Maricopa County, Arizona. Moderated by Richard L. Hasen (Director, Safeguarding Democracy Project)

Thursday, February 13
Finding Common Ground on Modernizing Voter Registration

Register for the webinar here.
Thursday, February 13, 12:15pm-1:15pm PT, Webinar
Christina Adkins, Director of Elections, Texas Secretary of State’s Office, Judd Choate, Director of Elections in Colorado, and Charles H. Stewart III, MIT. Richard L. Hasen, moderator (Director, Safeguarding Democracy Project, UCLA)

Tuesday, March 4
What do Documentary Proof of Citizenship Requirements for Voter Registration Accomplish?

Register for the webinar here. In-person registration here. Lunch will be provided.
Tuesday, March 4, 12:15pm-1:15pm PT, Room 1327 at UCLA Law School and online
Adrian Fontes, Arizona Secretary of State, Walter Olson, Senior Fellow at the Cato Institute, and Nina Perales, Vice President of Litigation, MALDEF (Mexican American Legal Defense and Educational Fund). Richard L. Hasen, moderator (Director, Safeguarding Democracy Project, UCLA)

Monday, March 31
Combatting False Election Information: Lessons from 2024 and a Look to the Future

Register for the webinar here.
Monday, March 31, 12:15pm-1:15pm PT, Webinar
Alice Marwick, Director of Research, Data & Society, UNC Chapel Hill, Kate Starbird, University of Washington, and Joshua Tucker, NYU. Richard L. Hasen, moderator (Director, Safeguarding Democracy Project, UCLA)

Thursday, April 10
Partisan Primaries, Polarization, and the Risks of Extremism

Register for the webinar here.
Thursday, April 10, 12:15pm-1:15pm PT, Webinar
Julia Azari, Marquette University, Ned Foley, Ohio State University, Moritz College of Law, Seth Masket, Denver University, and Rick Pildes, NYU Law School. Richard L. Hasen, moderator (Director, Safeguarding Democracy Project, UCLA)

“Inside Mark Zuckerberg’s Sprint to Remake Meta for the Trump Era”

NYT:

Mark Zuckerberg kept the circle of people who knew his thinking small.

Last month, Mr. Zuckerberg, the chief executive of Meta, tapped a handful of top policy and communications executives and others to discuss the company’s approach to online speech. He had decided to make sweeping changes after visiting President-elect Donald J. Trump at Mar-a-Lago over Thanksgiving. Now he needed his employees to turn those changes into policy.

Over the next few weeks, Mr. Zuckerberg and his handpicked team discussed how to do that in Zoom meetings, conference calls and late-night group chats. Some subordinates stole away from family dinners and holiday gatherings to work, while Mr. Zuckerberg weighed in between trips to his homes in the San Francisco Bay Area and the island of Kauai.

By New Year’s Day, Mr. Zuckerberg was ready to go public with the changes, according to four current and former Meta employees and advisers with knowledge of the events, who were not authorized to speak publicly about the confidential discussions.

The entire process was highly unusual. Meta typically alters policies that govern its apps — which include Facebook, Instagram, WhatsApp and Threads — by inviting employees, civic leaders and others to weigh in. Any shifts generally take months. But Mr. Zuckerberg turned this latest effort into a closely held six-week sprint, blindsiding even employees on his policy and integrity teams.

On Tuesday, most of Meta’s 72,000 employees learned of Mr. Zuckerberg’s plans along with the rest of the world. The Silicon Valley giant said it was overhauling speech on its apps by loosening restrictions on how people can talk about contentious social issues such as immigration, gender and sexuality. It killed its fact-checking program that had been aimed at curbing misinformation and said it would instead rely on users to police falsehoods. And it said it would insert more political content into people’s feeds after previously de-emphasizing that very material.

In the days since, the moves — which have sweeping implications for what people will see online — have drawn applause from Mr. Trump and conservatives, derision from fact-checking groups and misinformation researchers and concerns from L.G.B.T.Q. advocacy groups that fear the changes will lead to more people getting harassed online and offline….

Mr. Zuckerberg decided to promote Mr. Kaplan to Meta’s head of global public policy to carry out the changes and deepen Meta’s ties to the incoming Trump administration, replacing Nick Clegg, a former deputy prime minister of Britain who had handled policy and regulatory issues globally for Meta since 2018. The night before Meta’s announcement, Mr. Kaplan held individual calls with top conservative social media influencers, two people said.

On Tuesday, Mr. Zuckerberg made the new speech policies public in his Instagram video. Mr. Kaplan appeared on “Fox & Friends,” a mainstay of Mr. Trump’s media diet, saying Meta’s fact-checking partners “had too much political bias.”

(Fact-checking groups that worked with Meta have said they had no role in deciding what the company did with the content that was fact-checked.)

Among its changes, Meta loosened rules so people could post statements saying they hated people of certain races, religions or sexual orientations, including permitting “allegations of mental illness or abnormality when based on gender or sexual orientation.” The company cited political discourse about transgender rights for the change. It also removed a rule that forbade users from saying people of certain races were responsible for spreading the coronavirus.

Some training materials that Meta created for the new policies were confusing and contradictory, two employees who reviewed the documents said. Some of the text said that saying “white people have mental illness” would be prohibited on Facebook, but saying “gay people have mental illness” was allowed, they said….
