Category Archives: cheap speech

Must-read NYT Deep Dive that Helps Explain SCOTUS Argument Monday in Murthy v. Missouri: “How Trump’s Allies Are Winning the War Over Disinformation”

It’s a complex story because concern about government jawboning is real but the mendacious attack on those who fought disinformation in the 2020 election is having major reverberations for 2024. Jim Rutenberg and Steven Lee Myers lay it all out:

In the wake of the riot on Capitol Hill on Jan. 6, 2021, a groundswell built in Washington to rein in the onslaught of lies that had fueled the assault on the peaceful transfer of power.

Social media companies suspended Donald J. Trump, then the president, and many of his allies from the platforms they had used to spread misinformation about his defeat and whip up the attempt to overturn it. The Biden administration, Democrats in Congress and even some Republicans sought to do more to hold the companies accountable. Academic researchers wrestled with how to strengthen efforts to monitor false posts.

Mr. Trump and his allies embarked instead on a counteroffensive, a coordinated effort to block what they viewed as a dangerous effort to censor conservatives.

They have unquestionably prevailed.

Waged in the courts, in Congress and in the seething precincts of the internet, that effort has eviscerated attempts to shield elections from disinformation in the social media era. It tapped into — and then, critics say, twisted — the fierce debate over free speech and the government’s role in policing content.

Projects that were once bipartisan, including one started by the Trump administration, have been recast as deep-state conspiracies to rig elections. Facing legal and political blowback, the Biden administration has largely abandoned moves that might be construed as stifling political speech.

While little noticed by most Americans, the effort has helped cut a path for Mr. Trump’s attempt to recapture the presidency. Disinformation about elections is once again coursing through news feeds, aiding Mr. Trump as he fuels his comeback with falsehoods about the 2020 election.

“The censorship cartel must be dismantled and destroyed, and it must happen immediately,” he thundered at the start of his 2024 campaign.

The counteroffensive was led by former Trump aides and allies who had also pushed to overturn the 2020 election. They include Stephen Miller, the White House policy adviser; the attorneys general of Missouri and Louisiana, both Republicans; and lawmakers in Congress like Representative Jim Jordan, Republican of Ohio, who since last year has led a House subcommittee to investigate what it calls “the weaponization of government.”

Those involved draw financial support from conservative donors who have backed groups that promoted lies about voting in 2020. They have worked alongside an eclectic cast of characters, including Elon Musk, the billionaire who bought Twitter and vowed to make it a bastion of free speech, and Mike Benz, a former Trump administration official who previously produced content for a social media account that trafficked in posts about “white ethnic displacement.” (More recently, Mr. Benz originated the false assertion that Taylor Swift was a “psychological operation” asset for the Pentagon.)

Three years after Mr. Trump’s posts about rigged voting machines and stuffed ballot boxes went viral, he and his allies have achieved a stunning reversal of online fortune. Social media platforms now provide fewer checks against the intentional spread of lies about elections.

“The people that benefit from the spread of disinformation have effectively silenced many of the people that would try to call them out,” said Kate Starbird, a professor at the University of Washington whose research on disinformation made her a target of the effort.

It took aim at a patchwork of systems, started in Mr. Trump’s administration, that were intended to protect U.S. democracy from foreign interference. As those systems evolved to address domestic sources of misinformation, federal officials and private researchers began urging social media companies to do more to enforce their policies against harmful content.

That work has led to some of the most important First Amendment cases of the internet age, including one to be argued on Monday at the Supreme Court. That lawsuit, filed by the attorneys general of Missouri and Louisiana, accuses federal officials of colluding with or coercing the platforms to censor content critical of the government. The court’s decision, expected by June, could curtail the government’s latitude in monitoring content online.

The arguments strike at the heart of an unsettled question in modern American political life: In a world of unlimited online communications, in which anyone can reach huge numbers of people with unverified and false information, where is the line between protecting democracy and trampling on the right to free speech?…

See also “Supreme Court Case Could Be Disastrous for Detecting Election Misinformation,” a piece published yesterday by Lawrence Norden and Gowri Ramachandran of the Brennan Center, and listen to Gowri on the Amicus podcast.


“The Chinese government is using TikTok to meddle in elections, ODNI says”

Politico:

The Chinese government is using TikTok to expand its global influence operations to promote pro-China narratives and undermine U.S. democracy, according to a report released today from the Office of the Director of National Intelligence.

The annual assessment from ODNI outlines national security threats facing the U.S. in the coming year, and warns that China may attempt to influence this year’s elections through online influence and disinformation campaigns.

ODNI alleges that “TikTok accounts run by a PRC propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022,” and that “China is demonstrating a higher degree of sophistication in its influence activity, including experimenting with generative AI.”

The report’s release comes as lawmakers are increasingly concerned about national security threats that TikTok poses. House lawmakers are expected to vote Wednesday on a bill that would force Beijing-based ByteDance to sell TikTok — or it would face a ban from U.S. app stores.


“America’s election chiefs are worried AI is coming for them”

Zach Montellaro for Politico:

A false call from a secretary of state telling poll workers they aren’t needed on Election Day. A fake video of a state election director shredding ballots before they’re counted. An email sent to a county election official trying to phish logins to its voter database.

Election officials worry that the rise of generative AI makes this kind of attack on the democratic process even easier ahead of the November election — and they’re looking for ways to combat it.

Election workers are uniquely vulnerable targets: They’re obscure enough that nobody knows who they really are, so unlike a fake of a more prominent figure — like Joe Biden or Donald Trump — people may not be on the lookout for something that seems off. At the same time, they’re important enough to fake and just public enough that it’d be easy to do.

Combine that with the fact that election officials are still broadly trusted by most Americans — but don’t have a way to effectively reach their voters — and a well-executed fake of them could be highly dangerous but hard to counter.

“I 100 percent expect it to happen this cycle,” New Mexico Secretary of State Maggie Toulouse Oliver said of deepfake videos or other disinformation being spread about elections. “It is going to be prevalent in election communications this year.”

Secretaries of state gathered at the National Association of Secretaries of State winter meeting last month told POLITICO they have already begun working AI scenarios into their trainings with local officials, and that the potential dangers of AI-fueled misinformation will be featured in communication plans with voters.


“More action needed to tackle disinformation and enhance transparency of online platforms: OECD”

Release:

As roughly half the world’s population prepares to vote in elections, a new OECD report offers the first baseline assessment of how OECD countries are upgrading their governance measures to support an environment where reliable information can thrive, prioritising freedom of expression and human rights, and sets out a policy framework for countries to address the global challenge of disinformation.


Facts not fakes: Tackling disinformation, strengthening information integrity emphasises the need for democracies to champion diverse, high-quality information spaces that support freedom of opinion and expression, along with policies that may be utilised to increase the degree of accountability and transparency of online platforms.


The report details specific risks, including the spread of disinformation during electoral periods, foreign information manipulation and interference campaigns, and the implications of generative artificial intelligence. Based in part on a survey of 23 OECD countries, the report includes case studies and provides recommendations on how governments can play a positive but not intrusive role in this area. It reveals that national strategies for tackling disinformation remain the exception rather than the rule….


As a key pillar of the OECD’s Reinforcing Democracy Initiative, the report presents a policy framework to strengthen information integrity that encourages action across societies, in three areas:

  • Enhance the transparency, accountability, and plurality of information sources, including through a diverse and independent media sector as well as better functioning online platforms.
  • Strengthen media literacy and critical thinking skills to enable citizens to recognise, combat and limit the spread of disinformation.
  • Bolster strategic co-ordination, training, and technological infrastructure in government, as well as peer-learning and co-operation among governments to combat disinformation….

“The US is bracing for complex, fast-moving threats to elections this year, FBI director warns”

AP:

The United States expects to face fast-moving threats to American elections this year as artificial intelligence and other technological advances have made interference and meddling easier than before, FBI Director Christopher Wray said Thursday.

“The U.S. has confronted foreign malign influence threats in the past,” Wray told a national security conference. “But this election cycle, the U.S. will face more adversaries, moving at a faster pace, and enabled by new technology.”

Wray singled out advances in generative AI, which he said had made it “easier for both more and less-sophisticated foreign adversaries to engage in malign influence.”

The remarks underscored escalating U.S. government concerns over sometimes hard-to-detect influence operations that are designed to shape public opinion. Though officials have not cited successful efforts by foreign governments to directly alter election results, they have sounded the alarms over the past decade about foreign influence campaigns….


“Law School clinic files brief to combat intentionally false statements about voting”

Yale Daily News:

Yale Law School’s Media Freedom and Information Access Clinic filed an amicus brief on Feb. 12 in United States v. Mackey, a case currently at the Second Circuit Court of Appeals. The case involves an influential social media user convicted of attempting to convince voters to believe they could cast their votes through a false voting mechanic. 

The case centers on claims that Douglass Mackey, a social media influencer on X, formerly known as Twitter, made during the 2016 presidential election campaign. Mackey, who was known to his 58,800 followers as Ricky Vaughn, repeatedly tweeted false claims to supporters of former Secretary of State Hillary Clinton LAW ’73 that they could cast their ballots via text message in the weeks leading up to the election.

Mackey was convicted by a New York jury in March 2023 of violating Section 241, which prohibits conspiring to “injure” individuals’ federal rights or privileges, including the right to vote, and was ordered to pay a $15,000 fine. He was sentenced to seven months in prison and appealed his conviction to the Second Circuit.

The YLS Media Freedom and Information Access Clinic filed its amicus brief in collaboration with Protect Democracy, a nonpartisan anti-authoritarian organization, on behalf of election law expert and UCLA School of Law professor Richard Hasen. The brief argues that a Reconstruction-era civil rights law can be utilized to prosecute deliberate misinformation regarding voting procedures, while still upholding the First Amendment’s right to freedom of speech….

“I really appreciated the opportunity to work on this case because I think combating election disinformation is going to be key to preserving our democracy, this year and beyond,” Victoria Maras LAW ’25, an MFIA clinic member who worked on the brief, told the News. “As a former Field Organizer, I know how important it is to get the right information out to voters, and, by the same token, how harmful it can be when misinformation spreads.”

Maras said she was grateful that this brief can show how people who conspire to spread false election information can be held accountable without threatening First Amendment free speech rights.

Another MFIA clinic member, Ben Menke LAW ’25, told the News that delving into the history of Section 241, which was passed in 1870, led him to examine transcripts of debates in Congress during that time. Through this research, Menke said that he uncovered the motivations of the legislators who first enacted the law, as well as the legal opinions of the judges who applied Section 241 at the time.

“Our brief offers clarity on the proper way to construe Section 241, and we show that the law is consistent with the First Amendment,” Menke told the News. “Bad actors are finding it easier to spread knowingly false information to interfere with the right of the people to vote. Enforcing Section 241 is one way the federal government can respond to this threat.”

In a statement to the News, James Lawrence, Mackey’s attorney, said that their core argument in defense of Mackey is that he did not have fair notice, required by the Fifth Amendment, that his conduct violated “clearly established” law.

Lawrence claimed that the amicus brief uses a Supreme Court case about a different law to argue that a rarely used legal concept, not accepted in many state courts and never applied in New York, should be turned into a federal crime for misleading election information.

“If a team of federal prosecutors never came up with this convoluted argument after pursuing this case for more than three years … how could Douglass Mackey be expected to know his conduct violated Section 241 in 2016?” Lawrence wrote in the statement. 

The MFIA clinic declined to comment on Lawrence’s statement….


In Florida Social Media Case, Florida’s Lawyer Defends, Without Recognizing It, an Equalizing Interest in Requiring Platforms to Carry Speech They Don’t Want

I thought this exchange in the transcript of the Moody case was particularly telling (I’ve bolded the response that I think shows an interest in imposing an equality floor):

JUSTICE KAVANAUGH: Can I — can I ask you about a different precedent, about what we said in Buckley? And this picks up on the Chief Justice’s earlier comment about government intervention because of the power of the social media companies. And it seems like, in Buckley, in 1976, in a really important sentence in our First Amendment jurisprudence, we said that “the concept that the government may restrict the speech of some elements of our society in order to enhance the relative voice of others is wholly foreign to the First Amendment.” And that seems to be what you responded with to the Chief Justice.


And then, in Tornillo, the Court went on at great length as well about the power of then newspapers, and the Court said they recognized the argument about vast changes that place in a few hands the power to inform the American people and shape public opinion and that that had led to abuses of bias and manipulation. The Court accepted all that but still said that wasn’t good enough to allow some kind of government-mandated fairness, right of reply or anything. So how do you deal with those two principles?


MR. WHITAKER: Sure, Justice Kavanaugh. First of all, if — if you agree with me with our front-line position that what is being regulated here is conduct, not speech, I don’t think you get into interests and scrutiny and all that. I do think that the law advances the — the First Amendment interests that I mentioned, but I think the — the — the — that interest, the interest that our law is serving, if you did get to a point in the analysis that required consideration of those interests, our interests —

JUSTICE KAVANAUGH: Do you agree then, if speech is involved, that those cases mean that you lose?


MR. WHITAKER: No, I don’t agree with that, and — and the reason I don’t agree with that is because the interests that our law serve are — are legitimate, and it’s — it’s hard because different parts of the law serve different interests. But I think the one that — that sounds in the — in your concern that is most directly implicated would be the hosting requirement applicable to journalistic enterprises.

So one provision of the law says that the platforms cannot censor, shadow ban, or deplatform journalistic enterprises based on the content of their publication or broadcast. And that serves an interest very similar to the interest that this Court recognized as legitimate in Turner when Congress imposed on cable operators a must-carry obligation for broadcasters.


And — and just as a broadcaster — and what the Court said was there was not just a legitimate interest in promoting the free dissemination of ideas through broadcasting, but it was indeed a — a compelling interest, a highly compelling interest. And so I think the journalistic enterprise provision serves a — that very similar issue….

[20 pages later….]

JUSTICE KAVANAUGH: Well, in the Turner case, the intervention was, the Court emphasized, unrelated to the suppression of speech, the antitrust-type intervention there. So I’m not sure when it’s related to ensuring relative voices are balanced out or there’s fairness in the speech or balance in the speech, that that is covered by Turner. Do you agree with that?


MR. WHITAKER: No, I don’t agree with that, Your Honor. Our — our — our interest and our law —


JUSTICE KAVANAUGH: What did Turner mean by “unrelated to” the suppression of speech?


MR. WHITAKER: Well, we don’t view our law as advancing interests that are related to the suppression of speech. We think that the interests, for example, in protecting journalistic enterprises from being censured, from — from MSNBC being censured because an Internet platform doesn’t like a broadcast it showed on its station the other day, that is just an interest in preventing from being silenced. **It’s not an equalizing interest. It’s giving them a chance.**

The idea of giving everyone a chance to speak on a social media platform is an equalizing interest.

(Note that although the transcript uses “censured” here, I believe Whitaker said “censored.”)


“Supreme Court Seems Wary of State Laws Regulating Social Media Platforms”

Adam Liptak for the NYT:

The Supreme Court seemed skeptical on Monday of laws in Florida and Texas that bar major social media companies from making editorial judgments about which messages to allow.

The laws were enacted in an effort to shield conservative voices on the sites, but a decision by the court, expected by June, will almost certainly be its most important statement on the scope of the First Amendment in the internet era, with broad political and economic implications.

A ruling that tech platforms have no editorial discretion to decide which posts to allow would expose users to a greater variety of viewpoints but almost certainly amplify the ugliest aspects of the digital age, including hate speech and disinformation.

Though a ruling in favor of big platforms like Facebook and YouTube appeared likely, the court also seemed poised to return the cases to the lower courts to answer questions about how the laws apply to sites that do not seem to moderate their users’ speech in the same way, like Gmail, Venmo, Uber and Etsy.

The justices, over almost four hours of arguments, differed about whether the laws, which have been blocked for now, should go into effect in the meantime. But a majority seemed inclined to keep them on hold while the litigation moves forward. Several justices said that the states violated the First Amendment by telling a handful of major platforms that they could not moderate their users’ posts, drawing distinctions between government censorship prohibited by the First Amendment and actions by private companies to determine what speech to include on their sites.

“I have a problem with laws that are so broad that they stifle speech just on their face,” Justice Sonia Sotomayor said.

Justice Brett M. Kavanaugh read a sentence from a 1976 campaign finance decision that has long been a touchstone for him. “The concept that government may restrict the speech of some elements of our society in order to enhance the relative voice of others is wholly foreign to the First Amendment,” he said, indicating that he rejected the states’ argument that they may regulate the fairness of public debate in private settings….


“Supreme Court to Decide How the First Amendment Applies to Social Media”

Adam Liptak for the NYT:

The most important First Amendment cases of the internet era, to be heard by the Supreme Court on Monday, may turn on a single question: Do platforms like Facebook, YouTube, TikTok and X most closely resemble newspapers or shopping centers or phone companies?

The two cases arrive at the court garbed in politics, as they concern laws in Florida and Texas aimed at protecting conservative speech by forbidding leading social media sites from removing posts based on the views they express.

But the outsize question the cases present transcends ideology. It is whether tech platforms have free speech rights to make editorial judgments. Picking the apt analogy from the court’s precedents could decide the matter, but none of the available ones is a perfect fit.

If the platforms are like newspapers, they may publish what they want without government interference. If they are like private shopping centers open to the public, they may be required to let visitors say what they like. And if they are like phone companies, they must transmit everyone’s speech.

“It is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies,” Justice Samuel A. Alito Jr. wrote in a 2022 dissent when one of the cases briefly reached the Supreme Court.

Supporters of the state laws say they foster free speech, giving the public access to all points of view. Opponents say the laws trample on the platforms’ own First Amendment rights and would turn them into cesspools of filth, hate and lies. One contrarian brief, from liberal professors, urged the justices to uphold the key provision of the Texas law despite the harm they said it would cause….

Supporting briefs mostly divided along the predictable lines. But there was one notable exception. To the surprise of many, some prominent liberal professors filed a brief urging the justices to uphold a key provision of the Texas law.

“There are serious, legitimate public policy concerns with the law at issue in this case,” wrote the professors, including Lawrence Lessig of Harvard, Tim Wu of Columbia and Zephyr Teachout of Fordham. “They could lead to many forms of amplified hateful speech and harmful content.”

But they added that “bad laws can make bad precedent” and urged the justices to reject the platforms’ plea to be treated as news outlets.

“To put a fine point on it: Facebook, Twitter, Instagram and TikTok are not newspapers,” the professors wrote. “They are not space-limited publications dependent on editorial discretion in choosing what topics or issues to highlight. Rather, they are platforms for widespread public expression and discourse. They are their own beast, but they are far closer to a public shopping center or a railroad than to The Manchester Union Leader.”

In an interview, Professor Teachout linked the Texas case to the Citizens United decision, which struck down a campaign finance law regulating corporate spending on First Amendment grounds.

“This case threatens to be another expansion of corporate speech rights,” she said. “It may end up in fact being a Trojan horse, because the sponsors of the legislation are so distasteful. We should be really wary of expanding corporate speech rights just because we don’t like particular laws.”

Other professors, including Richard L. Hasen of the University of California, Los Angeles, warned the justices in a brief supporting the challengers that prohibiting the platforms from deleting political posts could have grave consequences.

“Florida’s and Texas’ social media laws, if allowed to stand,” the brief said, “would thwart the ability of platforms to moderate social media posts that risk undermining U.S. democracy and fomenting violence.”


My New One at Slate: “The Biggest Supreme Court Case That Nobody Seems to Be Talking About”

I have written this piece for Slate. It begins:

On Monday, the Supreme Court will hear arguments in a pair of cases out of Texas and Florida that could force major social media platforms to carry posts from Donald Trump or others who lie about elections being stolen or obliquely encourage election-related violence. A ruling in favor of these states would turn the First Amendment upside down and create the conditions for undermining American democracy. If there wasn’t so much else swirling around our elections and democracy right now, this case would be commanding everyone’s attention.

Moody v. NetChoice LLC and NetChoice LLC v. Paxton arise out of the actions that Facebook, Twitter (now X), and other social media companies took in removing Trump from their platforms after the attack on the U.S. Capitol on Jan. 6, 2021. Trump had been relentlessly calling the 2020 election results into question despite having no reliable evidence of widespread fraud or irregularities. In an infamous tweet in December 2020, he encouraged his supporters to come to Washington for “wild” protests on Jan. 6, the day that Congress would be counting the states’ Electoral College votes to confirm Joe Biden as the election victor. After Trump and his supporters gave speeches on the Ellipse on Jan. 6, a crowd stormed the Capitol. The violent incident left 140 law enforcement officers injured (four later died by suicide) and four protesters dead. After Trump failed to immediately condemn the violence and call for the siege to end, the platforms had enough, determining that Trump had violated their terms of service and needed to be removed.

In response to the removal of Trump and concern over what they call “censorship” of conservatives, Florida and Texas each passed laws that make content moderation difficult if not impossible for major social media companies. The laws differ in some particulars, but both would make it illegal to remove the kinds of content we saw from Trump in 2020 before he was deplatformed. A coalition representing the platforms sued, arguing that the laws violated the platforms’ First Amendment rights to decide what content to include or exclude on their platforms. The coalition won their primary arguments in the Florida case but lost in the Texas case, and the Supreme Court is hearing both of them on Monday….

There’s also a huge irony in seeing people like Volokh or Justice Clarence Thomas express support for the common carrier theory and requiring private companies to carry speech they may disagree with or even find dangerous. In his amicus brief supporting Florida’s appeal, Trump approvingly quoted Volokh: “Recent experience has fostered a widespread and growing concern that behemoth social media platforms … have ‘seriously leverage[d their] economic power into a means of affecting the community’s political life.’ ”

That kind of equalization rationale has been rejected by the libertarians on the court in cases like Citizens United, the case that freed corporations to spend unlimited sums in support of candidates for election to office. There, the court wrote (quoting a 1976 case, Buckley v. Valeo) that it is “wholly foreign” to the First Amendment to seek to equalize speech, and that the First Amendment can’t do anything to stop those with economic power from translating it into political power.

Now that it is conservatives yelling “censorship” rather than liberals complaining about big corporations seeking to have an outsize influence on who is elected and on public policy, is the court really going to change its position on whether the government can mandate speech equalization depending on whose ox is being gored?…


Our Amicus Brief in United States v. Mackey: Lying About When, Where or How People Vote Violates Federal Law (18 USC 241) and Prosecution is Consistent with the First Amendment

Protect Democracy and the Yale Media Freedom and Information Access Clinic filed this Second Circuit amicus brief (with me as client and co-counsel) in United States v. Mackey. Mackey was convicted under 18 U.S.C. § 241 for conspiring “to use Twitter to trick American citizens into thinking they could vote by text and stay at home on Election Day—thereby suppressing and injuring those citizens’ right to vote.” Gov’t Br. 2. Mackey has argued that section 241 does not cover such a scheme and that the law is facially unconstitutional under the First Amendment because it punishes too much protected speech.

In our brief, we explain that the statute, properly construed, bars lies about when, where, or how people vote that are intended to deprive people of their right to vote, and that limiting section 241 to such empirically verifiable false speech ensures that the law does not violate the First Amendment. The Supreme Court has already stated that the government “may prohibit messages intended to mislead voters about voting requirements and procedures” consistent with the First Amendment. Minn. Voters All. v. Mansky, 138 S. Ct. 1876, 1889 n.4 (2018). Further, as explained in Protect Democracy’s blog post on the filing:

The primary question before the Second Circuit in Mackey’s appeal is whether the federal civil rights statute he was convicted under – which bans conspiring to “injure” any person in their exercise of federal rights – actually bars conspiracies to circulate false information about voting mechanisms and procedures. Professor Hasen’s amicus brief explains why intentionally false statements about voting mechanisms and procedures violate federal law, and why such speech can be punished without running afoul of the First Amendment’s protections.

In particular, to establish the applicability of Reconstruction-era civil rights protection to internet memes, the brief tracks the history of legal actions protecting the right to vote back to England in 1703. That history shows, among other things, a three-century-long recognition among judges that an intentional deprivation of the right to vote constitutes an “injury” for which the law provides a remedy. As a result, the brief argues, Mackey’s conduct clearly constituted a conspiracy to “injure” under long-recognized legal principles, even if the Reconstruction Congress would have had no idea what an internet meme is.

You can find the introduction to our brief below the fold, which relies heavily on common law tort principles protecting the right to vote, as explained in the Restatement (Second) of Torts section 865.

Continue reading Our Amicus Brief in United States v. Mackey: Lying About When, Where or How People Vote Violates Federal Law (18 USC 241) and Prosecution is Consistent with the First Amendment

“Meta turns its back on politics again, angering some news creators”

WaPo:

Meta announced on Friday it would stop proactively recommending political content on Instagram or its upstart text-based app Threads, alarming news and politics-focused creators and journalists gearing up for a crucial election year.

While users will still be allowed to follow accounts that post about political and social issues, accounts posting such content will not be recommended, and content posted by nonpolitical accounts that is political in nature or includes social commentary also won’t be recommended, Meta said.

The company said it also won’t show users posts focused on laws, elections or social issues from accounts those users don’t follow.

“This announcement expands on years of work on how we approach and treat political content based on what people have told us they wanted,” said Meta spokesperson Dani Lever.

Meta said users will still be able to see politics-related posts in their main feeds from accounts they follow. But the new approach means users are less likely to see politics-oriented content or accounts on Instagram’s “Explore” page, its short-form video product known as Reels, and the suggested-users-to-follow box. Meta also won’t be recommending politics to users’ feeds on Threads. Meta said it plans to develop tools to allow users to opt in to seeing more political content, but those tools are not available.
