Category Archives: cheap speech

“YouTube’s stronger election misinformation policies had a spillover effect on Twitter and Facebook, researchers say.”

NYT:

YouTube’s stricter policies against election misinformation were followed by sharp drops in the prevalence of false and misleading videos on Facebook and Twitter, according to new research released on Thursday, underscoring the video service’s power across social media.

Researchers at the Center for Social Media and Politics at New York University found a significant rise in election fraud YouTube videos shared on Twitter immediately after the Nov. 3 election. In November, those videos consistently accounted for about one-third of all election-related video shares on Twitter. The top YouTube channels about election fraud that were shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.

But the proportion of election fraud claims shared on Twitter dropped sharply after Dec. 8. That was the day YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the proportion of election fraud content from YouTube that was shared on Twitter had dropped below 20 percent for the first time since the election.

The proportion fell further after Jan. 7, when YouTube announced that any channels that violated its election misinformation policy would receive a “strike,” and that channels that received three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.

“Drawing the Line Between False Election Speech and False Campaign Speech”

I have written this blog post for the Knight First Amendment Institute in connection with this event on lies and elections that I’ll be participating in on Wednesday. A snippet:

In a book to be released in March 2022, Cheap Speech: How Disinformation Poisons Our Politics—and How to Cure It, I explain how technological change has allowed the spread of disinformation, and particularly disinformation about how elections are run, to threaten the integrity of the election system. And I offer a host of both legal and norm-based solutions to lessen the risk that disinformation will bring down American democracy, as was threatened on Jan. 6.

I cannot offer the full argument in this brief blog post, but I do think it is useful to address a point I will make in the book about a distinction between false election speech and false campaign speech. I believe that Congress or states may constitutionally ban the former but not the latter, and government may require social media companies to remove false election speech from their platforms.

“Lawmakers’ latest idea to fix Facebook: Regulate the algorithm”

Will Oremus for WaPo:

On Facebook, you decide who to befriend, which pages to follow, which groups to join. But once you’ve done that, it’s Facebook that decides which of their posts you see each time you open your feed — and which you don’t.

The software that makes those decisions for each user, based on a secret ranking formula devised by Facebook that includes more than 10,000 factors, is commonly referred to as “the news feed algorithm,” or sometimes just “the algorithm.” On a social network with nearly 3 billion users, that algorithm arguably has more influence over what people read, watch and share online than any government or media mogul.

It’s the invisible hand that helps to make sure you see your close friend’s wedding photos at the top of your feed, rather than a forgotten high school classmate’s post about what they had for lunch today. But because Facebook’s primary goal is to grab and hold your attention, critics say, it’s also prone to feed you that high school classmate’s post of a meme that demonizes people you disagree with, rather than, say, a balanced news story — or an engrossing conspiracy theory rather than a dry, scientific debunking….

Forcing tech companies to be more careful about what they amplify might sound straightforward. But it poses a challenge to tech companies because the ranking algorithms themselves, while sophisticated, generally aren’t smart enough yet to fully grasp the message of every post. So the threat of being sued for even a couple of narrow types of illegal content could force platforms to adjust their systems on a more fundamental level. For instance, they might find it prudent to build in human oversight of what gets amplified, or perhaps move away from automatically personalized feeds altogether.

To some critics, that would be a win. Roddy Lindsay, a former Facebook data scientist who worked on the company’s algorithms, argued in a New York Times op-ed this week that Section 230 reform should go further. He proposes eliminating the liability shield for any content that social platforms amplify via personalized recommendation software. The idea echoes Haugen’s own suggestion. Both Lindsay and Haugen say companies such as Facebook would respond by abandoning their recommendation algorithms and reverting to feeds that simply show users every post from the people they follow.

Nick Clegg, Facebook’s vice president for global affairs and communications, argued against that idea Sunday on ABC’s “This Week.”

Daphne Keller, who directs the Program on Platform Regulation at Stanford University’s Cyber Policy Center, has thrown cold water on the idea of regulating what types of speech platforms can amplify, arguing that bills such as Eshoo and Malinowski’s would probably violate the First Amendment.

“Every time a court has looked at an attempt to limit the distribution of particular kinds of speech, they’ve said, ‘This is exactly the same as if we had banned that speech outright. We recognize no distinction,’” Keller said.

Proposals to limit algorithmic amplification altogether, such as Lindsay’s, might fare better than those that target specific categories of content, Keller added, but then social media companies might argue that their algorithms are protected under their First Amendment right to set editorial policy.

“It’s Not Misinformation. It’s Amplified Propaganda.”

Renee DiResta for The Atlantic:

If Buttar were a Russian troll, the #PelosiMustGo triumph might have earned him a promotion: Americans were yet again feuding on social media. But Buttar is very much an American, and so were the overwhelming majority of the online activists whom he exhorted to join his campaign. Although it is tempting to believe that foreign bogeymen are sowing discord, the reality is far simpler and more tragic: Outrage generates engagement, which algorithmically begets more engagement, and even those who don’t want to shred the fabric of American society are nonetheless encouraged to play by these rules in their effort to call attention to their cause. When I asked Buttar about the hashtag campaign recently, he told me that he’d chosen #PelosiMustGo because it had the potential to attract attention from a variety of communities. “Foundationally, the challenge is that I talk about all kinds of things—most of what I talk about are solutions to problems—but those posts don’t go viral,” Buttar said. His campaign had built direct-messaging groups of supporters “who were enthusiastic about coordinating across the broader movement,” he recalled, “and I thought of that network and its messaging and capacity as a sort of counterpropaganda, a way to help break through to the public because so many stories never get covered.”

Some ampliganda takes off because an influential user gets an ideologically aligned crowd of followers to spread it; in other cases, an idea spontaneously emerges from somewhere in the online crowd, fellow travelers give it an initial boost, and the influencer sees the emergent action and amplifies it, precipitating a cascade of action from adjacent factions. Most Twitter users never knew that #PelosiMustGo began because someone gave marching orders in a private Discord channel. They saw only the hashtag. They likely assumed that somewhere, some sizable portion of Americans were spontaneously tweeting against the speaker of the House. And they were right—sort of.

“Facebook Whistleblower’s Testimony Builds Momentum for Tougher Tech Laws”

WSJ:

Facebook Inc. whistleblower Frances Haugen testified to Congress Tuesday on internal documents showing harms from the company’s products—from teenagers’ mental-health problems to poisoned political debate—adding fuel to efforts to pass tougher regulations on Big Tech.

The documents gathered by Ms. Haugen, which provided the foundation for The Wall Street Journal’s Facebook Files series, show how the company’s moderation rules favor elites; how its algorithms foster discord; and how drug cartels and human traffickers use its services openly.

“I saw Facebook repeatedly encounter conflicts between its own profit and our safety. Facebook consistently resolved these conflicts in favor of its own profits,” Ms. Haugen told a Senate consumer protection subcommittee. “As long as Facebook is operating in the shadows, hiding its research from public scrutiny, it is unaccountable. Until the incentives change, Facebook will not change.”

Ms. Haugen singled out Facebook founder Mark Zuckerberg for criticism, citing his control over the company. Mr. Zuckerberg controls about 58% of Facebook’s voting shares, according to an April regulatory filing.

“There is no one currently holding Mark accountable but himself,” she said. Facebook under Mr. Zuckerberg makes decisions based on how they will affect measurements of user engagement, rather than their potential downsides for the public, she said.

“Mark has built an organization that is very metrics-driven,” she said. “The metrics make the decision. Unfortunately that itself is a decision.”

“Facebook hides data showing it harms users. Outside scholars need access.”

Nate Persily in WaPo opinion:

The disclosures made by whistleblower Frances Haugen about Facebook — first to the Wall Street Journal and then to “60 Minutes” — ought to be the stuff of shareholders’ nightmares: When she left Facebook, she took with her documents showing, for example, that Facebook knew Instagram was making girls’ body-image issues worse, that internal investigators knew a Mexican drug cartel was using the platform to recruit hit men and that the company misled its own oversight board about having a separate content appeals process for a large number of influential users. (Haugen is scheduled to appear before a Congressional panel on Tuesday.)

Facebook, however, may be too big for the revelations to hurt its market position — a sign that it may be long past time for the government to step in and regulate the social media company. But in order for policymakers to effectively regulate Facebook — as well as Google, Twitter, TikTok and other Internet companies — they need to understand what is actually happening on the platforms.

Whether the problem is disinformation, hate speech, teenagers’ depression or content that encourages violent insurrection, governments cannot institute sound policies if they do not know the character and scale of these problems. Unfortunately, only the platforms have access to the relevant data, and as the newest revelations suggest, they have strong incentives not to make their internal research available to the public. Independent research on how people use social media platforms is clearly essential.

After years of frustration — frustration also felt by many Facebook employees trying to do the right thing — I resigned last year as co-chair of an outside effort to try to get the company to share more data with researchers. Facebook’s claims of privacy dangers and fears about another Cambridge Analytica scandal significantly hindered our efforts. (A researcher at data firm Cambridge Analytica violated users’ privacy, prompting an investigation by the federal government into Facebook’s data-protection practices that led to a $5 billion fine.)

When Facebook did finally give researchers access to data, it ended up having significant errors — a problem that was discovered only after researchers had spent hundreds of hours analyzing it, and in some cases publishing their findings (about, for example, how disinformation spreads).

So we are now at a standstill, where the public does not trust Facebook on research and data that it releases, and Facebook says existing law (including the Cambridge Analytica settlement) prevents it from sharing useful data with outside researchers. Congress has the ability to solve this problem by passing a law granting scholars from outside the social media companies access to the information held by them — while protecting user privacy. (I have drafted text for a law along these lines, which I call the “Platform Transparency and Accountability Act.”)…

October 13 Knight Columbia Virtual Event: “Lies and Elections: How exceptional should we consider the electoral context when it comes to the regulation of lies?”

Looking forward to participating in this event at the Knight First Amendment Institute at Columbia Law (free registration required):

How should the government regulate election-related speech? Trump’s “Big Lie” raises the question of whether lies about election results should be regulated by the social media platforms, as well as the government. But of course, these kinds of lies are not the only kinds of election-related lies that raise thorny free speech questions. Can or should foreign actors be able to intervene in electoral speech in the run-up to elections? How much should campaign finance law be used to patrol misinformation and disinformation about election donations and spending? Can there be stricter regulation of election-related speech without that justifying vote-suppressing laws targeting virtually nonexistent election fraud? More fundamentally, how exceptional should we consider the electoral context when it comes to the regulation of lies? And how do race, nationality, and gender play into both election-related disinformation and its regulation? 

Schedule

Wednesday, October 13 | Online | 1:00PM – 2:30PM EDT

Featuring
Richard Hasen, University of California, Irvine School of Law
Janell Byrd-Chichester, Thurgood Marshall Institute of the NAACP Legal Defense and Educational Fund, Inc.
Atiba Ellis, Marquette University Law School
Matt Perault, Center on Science & Technology Policy at Duke University

Moderated by
Genevieve Lakier, Knight First Amendment Institute

“Whistle-Blower to Accuse Facebook of Contributing to Jan. 6 Riot, Memo Says”

NYT:

Facebook, which has been under fire from a former employee who has revealed that the social network knew of many of the harms it was causing, was bracing for new accusations over the weekend from the whistle-blower and said in a memo that it was preparing to mount a vigorous defense.

The whistle-blower, whose identity has not been publicly disclosed, planned to accuse the company of relaxing its security safeguards for the 2020 election too soon after Election Day, which then led it to be used in the storming of the U.S. Capitol on Jan. 6, according to the internal memo obtained by The New York Times. The whistle-blower planned to discuss the allegations on “60 Minutes” on Sunday, the memo said, and was also set to say that Facebook had contributed to political polarization in the United States.

The 1,500-word memo, written by Nick Clegg, Facebook’s vice president of policy and global affairs, was sent on Friday to employees to pre-empt the whistle-blower’s interview. Mr. Clegg pushed back strongly on what he said were the coming accusations, calling them “misleading.” “60 Minutes” published a teaser of the interview in advance of its segment on Sunday.

“Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out,” he wrote. “But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.” (See below for the full memo.)

At the WSJ:

The former Facebook Inc. employee who gathered documents that formed the foundation of The Wall Street Journal’s Facebook Files series said she acted to help prompt change at the social-media giant, not to stir anger toward it.

Frances Haugen, a former product manager hired to help protect against election interference on Facebook, said she had grown frustrated by what she saw as the company’s lack of openness about its platforms’ potential for harm and unwillingness to address its flaws. She is scheduled to testify before Congress on Tuesday. She has also sought federal whistleblower protection with the Securities and Exchange Commission.

In a series of interviews, Ms. Haugen, who left the company in May after nearly two years, said that she had come into the job with high hopes of helping Facebook fix its weaknesses. She soon grew skeptical that her team could make an impact, she said. Her team had few resources, she said, and she felt the company put growth and user engagement ahead of what it knew through its own research about its platforms’ ill effects.

Toward the end of her time at Facebook, Ms. Haugen said, she came to believe that people outside the company—including lawmakers and regulators—should know what she had discovered.

“If people just hate Facebook more because of what I’ve done, then I’ve failed,” she said. “I believe in truth and reconciliation—we need to admit reality. The first step of that is documentation.”

“Trump asks judge to force Twitter to restore his account”

Politico:

Former President Donald Trump has asked a federal judge in Florida to force Twitter to restore his account, which the company suspended in January following the deadly storming of the U.S. Capitol.

Trump’s attorneys on Friday filed a motion in U.S. District Court in Miami seeking a preliminary injunction against Twitter and its CEO, Jack Dorsey. They argue that Twitter is censoring Trump in violation of his First Amendment rights, according to the motion.

Oct. 6 Virtual Lunch Talk: Disinformation in American Elections Part I: Election Officials (with Michigan SOS Jocelyn Benson and Orange County Registrar Neal Kelley)

Join us for this October 6 event, which is the first in a three-part lunch series on Disinformation in American Elections put on by UCI Law’s Fair Elections and Free Speech Center (free registration required):

Fair Elections and Free Speech Center | Disinformation in American Elections Part I: Election Officials

Wednesday, October 6, 12:15pm to 1:15pm | Virtual Event

Countering the Risk of Disinformation in American Elections: How Big a Problem Is It and What Should Be Done?

This three-part online lunch series hosted by the Fair Elections and Free Speech Center at UCI Law explores the risk of disinformation in American elections, spread through social media and otherwise, and how to counter it.

This session, Part I of the series, offers the perspective of state and local election officials from both major political parties.

Speakers include:

Michigan Secretary of State Jocelyn Benson
Orange County Registrar Neal Kelley

Moderated by Tammy Patrick of the Democracy Fund

Coming up:

Part II, October 27 (Legal scholars Danielle Citron, Nate Persily, and Spencer Overton, moderated by Rick Hasen)

Part III, November 10 (Social scientists Joan Donovan, Brendan Nyhan, and Renee DiResta, moderated by Pam Fessler)

“Election fraud, QAnon, Jan. 6: Far-right extremists in Germany read from a pro-Trump script”

WaPo:

One message advocated “occupying election offices.”

Another warned of “coronavirus tyranny.”

And a third extolled former president Donald Trump and Q, the shadowy oracle of the extremist ideology QAnon, for inspiring a new social movement prepared to take back power from the state. “America is waking up and ready to fight,” it vowed.

The calls to action came not in anticipation of the Jan. 6 assault on the U.S. Capitol. Rather, they emerged this month in Germany, within a far-right group on the messaging app Telegram, where neo-Nazis and doomsday preppers foresee what’s known as “Day X” — the collapse of the German state and assassination of high-ranking officials.

Such apocalyptic messages — posted in the run-up to German elections on Sunday — import conspiratorial, anti-government rhetoric broadcast in the U.S., according to screenshots of the since-deleted chatroom reviewed by The Washington Post.

“Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead.”

WSJ:

In the fall of 2018, Jonah Peretti, chief executive of online publisher BuzzFeed, emailed a top official at Facebook Inc. The most divisive content that publishers produced was going viral on the platform, he said, creating an incentive to produce more of it.

He pointed to the success of a BuzzFeed post titled “21 Things That Almost All White People are Guilty of Saying,” which received 13,000 shares and 16,000 comments on Facebook, many from people criticizing BuzzFeed for writing it, and arguing with each other about race. Other content the company produced, from news videos to articles on self-care and animals, had trouble breaking through, he said.

Mr. Peretti blamed a major overhaul Facebook had given to its News Feed algorithm earlier that year to boost “meaningful social interactions,” or MSI, between friends and family, according to internal Facebook documents reviewed by The Wall Street Journal that quote the email.

BuzzFeed built its business on making content that would go viral on Facebook and other social media, so it had a vested interest in any algorithm changes that hurt its distribution. Still, Mr. Peretti’s email touched a nerve.

Facebook’s chief executive, Mark Zuckerberg, said the aim of the algorithm change was to strengthen bonds between users and to improve their well-being. Facebook would encourage people to interact more with friends and family and spend less time passively consuming professionally produced content, which research suggested was harmful to their mental health.

Within the company, though, staffers warned the change was having the opposite effect, the documents show. It was making Facebook’s platform an angrier place.

Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.

“Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” wrote a team of data scientists, flagging Mr. Peretti’s complaints, in a memo reviewed by the Journal. “This is an increasing liability,” one of them wrote in a later memo.

They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.

Some political parties in Europe told Facebook the algorithm had made them shift their policy positions so they resonated more on the platform, according to the documents.

“Many parties, including those that have shifted to the negative, worry about the long term effects on democracy,” read one internal Facebook report, which didn’t name specific parties.

Republican FEC Commissioners: Twitter Entitled to Press Exemption for Excluding NY Post Content on Hunter Biden

Dickerson/Trainor statement:

Moreover, the decision as to precisely which news to distribute is, in many ways, the sine qua non of “the business of producing…news stories, commentary, and/or editorials.” The New York Times famously emblazons its masthead with the slogan “All The News That’s Fit To Print,” suggesting the paper’s published materials were carefully selected and contextualized to fit the Times’s subjective view of “news” that is “fit to print.” That is precisely what Twitter did here: it made the editorial judgment that links to the New York Post articles were not “fit to print”—or, restated, “fit to share.”

Under FECA, then, Twitter is likely a press entity. Even so, under the Act press entities only get the media exemption’s protections when they act in their “legitimate press function,” which we have historically viewed under a two-part analysis: “(1) whether the entity’s materials are available to the general public, and (2) whether they are comparable in form to those ordinarily issued by the entity.” Twitter’s platform is available to any American willing to access it via an app or web browser. And when Twitter chooses to limit the sharing of a news story, it does not fundamentally change the appearance or underlying function of the platform itself. Indeed, Twitter argues that its content moderation policies are central to its users’ experience and a core part of its overall commercial product.

Accordingly, Twitter’s activities fall within our press exemption. But this regulatory safe harbor operates as a floor, not a ceiling. As the Citizens United Court noted, the judicial branch has “consistently rejected the proposition that the institutional press has any constitutional privilege beyond that of other speakers.” So even if Twitter’s decision to limit distribution of the New York Post’s articles were not protected by the Act’s press exemption, it would likely be protected by the Constitution itself.

See also the Cooksey statement.

(h/t Shane Goldmacher)
