Category Archives: social media and social protests

Continuing “Twitter Files” Tussle

General background: there’s an explainer here about the “Twitter Files” and the controversies surrounding them.

One of the controversies concerns Matt Taibbi’s reporting on the Election Integrity Partnership (EIP), an academic research coalition studying misinformation related to elections. (Here’s their primary report explaining the work relating to the 2020 election; here are explainers that the participating academic centers have offered.)

Taibbi claimed that the EIP “succeeded in getting nearly 22 million tweets labeled in the runup to the 2020 vote,” and made a related claim that the EIP partnered with a government entity seeking the “elimination” of millions of tweets, which is a different claim (and also false). (The tweets are still up.) These claims appear to rest on a misreading of the EIP’s own summary of its activities. The EIP says that it notified Twitter about ~2,980 tweets (I don’t believe all of those were before the election), some of which Twitter decided to label. The 22 million figure, instead, appears to come from the EIP’s estimate of the response on the platform associated with the alleged misinformation in question, including both pre- and post-election engagement — some of which repeated false claims and some of which confronted them.

On a program aired Thursday, Mehdi Hasan confronted Taibbi with some of the errors in Taibbi’s reporting (including this one). Taibbi acknowledged some of the errors but later returned to Twitter to press other aspects of his reporting, including the claim about labeling millions of tweets. Alex Stamos, who works with the EIP, offered further debunking.

(Update: there’s a fuller explanation of the debunking here, at Techdirt.)

My New One at Slate: “Meta is Bringing Trump Back to Facebook. It Should Keep Him on a Short Leash to Protect Our Democracy.”

I have written this piece for Slate. A snippet:

Meta’s decision Wednesday to replatform former president Donald J. Trump on Facebook and Instagram is lamentable and ill-considered. The company’s own standards required his continued exclusion from the social media platform so long as he remains a “serious risk to public safety”—and he remains one. After all, it was Trump’s continuing election-denialist rhetoric that apparently led a MAGA-supporting New Mexico candidate last month to mastermind shootings into the homes of Democratic legislators. Millions of Americans continue to believe Trump’s false claim of a stolen 2020 election, and some have taken violent actions and made threats against election workers and others involved in the election process.

The replatforming decision was the latest misstep by a company that “did not even try” to grapple with the risks of election delegitimization in 2020, according to a leaked draft report from the House Select Committee investigating the Jan. 6 attack on the United States Capitol. But rather than wring our hands over mistakes Meta has made, we should focus instead on how the company can minimize the ongoing risk that Trump poses. The key thing that Meta can do now is escalate sanctions against him, such as demoting his content and blocking his expected campaign ads, if he continues to undermine the integrity of American elections….

While Meta has said that Trump will face “heightened penalties” if he breaks the platform’s rules such as by creating a risk of civil unrest, it should go further. Mark Zuckerberg should call Trump directly and warn him that he risks having his posts demoted or removed and his advertising limited if he glorifies or encourages violence, especially election-related violence. We know from the draft report that Zuckerberg has called Trump about specific posts before. Zuckerberg should be firm that sanctions will come if Trump posts anything that could be interpreted as even an implicit threat of violence, given that Trump likes to use innuendo to make his threats.

For example, Trump recently took to posting on the Truth Social platform (in which he has a partial ownership interest) to once again attack Georgia election worker Ruby Freeman. His earlier false claims against Freeman and her mother led to threats of violence against them and a climate of fear for election workers. Meta should not tolerate anything like these posts on Facebook or Instagram.

Second, Meta should demote posts from Trump that engage in election denialism. While Meta has said that it may demote content “that delegitimizes an upcoming election or is related to QAnon,” it does not appear to be willing to take action on what will likely be a core part of what he posts: delegitimation of the last election that will cause continuing harm to faith in our democracy and democratic institutions.

This is a key failing on Meta’s part. Rolling Stone reports that Trump is planning to make his return to major social media platforms with posts about “rigged elections.” Demotion means that the material would remain visible to people searching for it, but the company’s algorithms would be less likely to push the posts into people’s feeds. As a private company, Meta has the right under the First Amendment to promote or demote content as it sees fit. And just as Musk, as owner of Twitter, can decide to replatform white supremacists and neo-Nazis (as he recently did), Meta can be a more responsible corporate citizen and decline to amplify election lies that threaten violence and undermine democratic institutions.

Third, Meta can renew its commitment to protecting free and fair elections in the United States and around the world. It can begin by beefing up the election integrity team that it partially dismantled after the 2020 elections. The draft report of the Jan. 6 committee describes the weakening of election protections that the company had in place in the past.

There is an urgent need for the restoration of strong election integrity measures. Whether Zuckerberg likes it or not, social media platforms are one of the main ways people communicate about politics and elections around the world. And that means they also become major vectors of election disinformation. That was true not only of the Jan. 6 attack on the Capitol, but also of the recent attack on government buildings in Brasilia following the defeat of Trumpian candidate Jair Bolsonaro. As the Times’ Jack Nicas recently reported, the rioting was the result of social-media-fueled “mass delusion” focused on election denialism: “Mr. Bolsonaro’s supporters have been repeating the claims for months, and then built on them with new conspiracy theories passed along in group chats on WhatsApp and Telegram, many focused on the idea that the electronic voting machines’ software was manipulated to steal the election.”

“Former top Twitter official forced to leave home due to threats amid ‘Twitter Files’ release”

CNN:

Twitter’s former head of trust and safety has fled his home due to an escalation in threats resulting from Elon Musk’s campaign of criticism against him, a person familiar with the matter told CNN on Monday.

Yoel Roth, who resigned from the social media company in November, has in recent weeks faced a storm of attacks and threats of violence following the release of the so-called “Twitter Files” — internal Twitter communications that new owner Musk has released through journalists including Matt Taibbi and Bari Weiss.

Roth’s position involved working on sensitive issues, including the suspension of then-President Donald Trump’s account in 2021. On Monday, Weiss posted a series of screenshots purporting to show internal Twitter documents in which Roth and others discussed whether to ban Trump’s account, with some employees questioning if the former president’s tweets violated the platform’s policies.

While Musk had initially been publicly supportive of Roth, that soon changed after Roth left the company.

Roth has since been the subject of criticism and threats following the release of the Twitter Files. However, things took a dark turn over the weekend when Musk appeared to endorse a tweet that baselessly accused Roth of being sympathetic to pedophilia — a common trope used by conspiracy theorists to attack people online.

A person familiar with Roth’s situation told CNN threats made against the former Twitter employee escalated exponentially after Musk engaged in the pedophilia conspiracy theory.

Read the Position Papers for Friday’s Stanford/UCLA Conference, “Should Donald Trump Be Returned to Social Media?”

A very diverse group of papers from a set of very smart people for this Friday’s conference (it’s not too late to register for the virtual webinar):

Chinmayi Arun

Guy Charles

Evelyn Douek

Katie Fallow

Katie Harbath

Rick Hasen

David Kaye

Genevieve Lakier

Eugene Volokh

Jonathan Zittrain

“Disinformation Has Become Another Untouchable Problem in Washington”

NYT:

The memo that reached the top of the Department of Homeland Security in September could not have been clearer about its plan to create a board to monitor national security threats caused by the spread of dangerous disinformation.

The department, it said, “should not attempt to be an all-purpose arbiter of truth in the public arena.”

Yet when Secretary Alejandro N. Mayorkas announced the disinformation board in April, Republican lawmakers and conservative commentators denounced it as exactly that, calling it an Orwellian attempt to stifle dissenting views. So did some critics from the left, who questioned the powers that such an office might wield in the hands of future Republican administrations.

Within weeks, the new board was dismantled — put on “pause,” officially — undone in part by forces it was meant to combat, including distortions of the board’s intent and powers.

There is wide agreement across the federal government that coordinated disinformation campaigns threaten to exacerbate public health emergencies, stoke ethnic and racial divisions and even undermine democracy itself. The board’s fate, however, has underscored how deeply partisan the issue has become in Washington, making it nearly impossible to consider addressing the threat.

Breaking and Analysis: Supreme Court on 5-4 Vote Reinstates District Court Order Temporarily Barring Enforcement of Texas Social Media Law; Good News for the First Amendment and Bad News for Those Seeking Law to Replatform Trump

In an unusual 5-4 vote, the Supreme Court has vacated a so-far-unexplained 5th Circuit order that had stayed a Texas district court order barring Texas from enforcing its new social media law. Among other things, this Texas law, if enforceable, could well require large social media companies such as Twitter and Facebook to re-platform Donald Trump after he was deplatformed for encouraging the January 6 insurrection at the United States Capitol. The district court held the statute likely violated the First Amendment, and a 5th Circuit panel, offering no reasons thus far, stayed that order. The stay would have allowed Texas to enforce its law pending the appeal of the case. As it stands now, Texas cannot enforce its law. But the 5th Circuit will eventually issue an opinion, and if it allows Texas to enforce its law, the issue will almost certainly be back before the Supreme Court. That is especially true because of last week’s contrary 11th Circuit opinion striking down a similar Florida law as violating the First Amendment rights of the private platforms to decide what content should be included or excluded.

The majority (C.J. Roberts and Justices Barrett, Breyer, Kavanaugh, and Sotomayor) did not give a reason for vacating the 5th Circuit stay. Justice Kagan dissented, probably not on the merits but because of her views on whether the Supreme Court should be getting involved in these major pending cases on the shadow docket rather than letting them work their way through the courts.

But Justice Alito wrote an opinion for himself, Justice Thomas, and Justice Gorsuch. In the opinion, Alito does not say that the law is in fact constitutional. He argues that the matter is uncertain, buying into the arguments advanced in the past by Justice Thomas, Eugene Volokh, and others that social media companies can be regulated like “common carriers” (such as the phone company) and forced to carry speech that they do not like.

The argument is audacious and shocking coming from those (like Justice Thomas, though less so a Justice like Alito) who have taken near-absolutist positions on First Amendment rights in the past, especially on issues such as campaign finance laws. I write about this in great detail in my Cheap Speech book, and explained the point briefly in this Slate piece:

It would be bad enough if the Supreme Court simply applied outmoded libertarian thinking to today’s information cesspool, believing that the truth will inevitably rise to the top and give voters the tools they need for informed decisionmaking. But the court’s inconsistent thinking on the First Amendment could make things far worse.

Consider the decision of Facebook and Twitter to “deplatform” Trump after he helped inspire the violent insurrection at the U.S. Capitol on January 6, 2021. Meta, which owns Facebook, and Twitter are private companies that make decisions all the time about what content to include, exclude, promote, and demote. The First Amendment does not limit these private companies and they can regulate speech in ways the government could not do. These companies remove hate speech, pornography, and other objectionable content from their platforms all the time.

But Justice Clarence Thomas—yes, the same Justice Thomas who believes that virtually all campaign finance laws violate the First Amendment—recently went out of his way in a case not presenting the issue to express support for new laws, such as one passed last year in Florida, that would require social media companies to carry the content of politicians they do not like, even if those politicians support election violence or undermine voter confidence in the integrity of the electoral process. Justice Thomas has suggested that social media platforms are like telephone companies that could be subject to “must carry” provisions and cannot discriminate among customers based upon their political views.

But social media companies are much closer to newspapers and TV stations than telephone companies. The former but not the latter curate content all the time, and they can decide who appears on the platform and how. Justice Thomas appears to believe in the freedom of FOX News or the Atlantic to create a coherent brand with a message, but not Twitter or Facebook.

It is hard not to conclude that Justice Thomas was motivated toward this anti-libertarian position requiring private companies to carry speech they would rather not include on their websites because doing so would favor Donald Trump and those like Trump.

The good news from today’s opinion is that it looks like there are 5 or 6 votes at least to reject the Texas law and to hold that just like newspapers can decide what content to include or exclude, social media companies can do so too. Whether Section 230 of the Communications Decency Act recognizes it or not, social media companies exercise editorial discretion all the time. They should not be forced as private actors to carry dangerous and anti-democratic speech. People who want such speech can easily find it on Trump’s “Truth Social” platform or elsewhere.

My New Washington Post Piece Connected to My Cheap Speech Book: “Facebook and Twitter could let Trump back online. But he’s still a danger.”

I have written this piece for the Washington Post. It begins:

In the Menlo Park, Calif., offices of Meta, discussions probably have already begun to consider what will happen Jan. 7, 2023, when former president Donald Trump’s ban from Facebook for encouraging the violent insurrection at the U.S. Capitol on Jan. 6, 2021, is set to potentially expire. Judging by how large social media companies have responded lately to the aftermath of the 2020 election and the looming 2022 election in which Republicans may take back control of Congress, there’s ample reason to worry Meta will restore the former president’s ability to post on Facebook — allowing him to continue to spread the false and dangerous claim that the 2020 election was stolen from him. Social media networks and other online platforms such as Google’s YouTube and Spotify can, instead, step up their support for reasonable measures to assure both vibrant political debate and protection of American election integrity and legitimacy. That would include keeping Trump off Facebook….

Social media and other new communications technologies are not solely to blame for the metastasizing election lies, but they play a big part. As the 2020 election season geared up, and as Trump, in the midst of the coronavirus pandemic, began spreading his false claims that the election would be stolen or rigged, Facebook and Twitter reacted meekly. Rather than blocking Trump, they slapped labels on his posts saying his claims were disputed or directing voters to more information. Evidence indicates these labels may have backfired, amplifying Trump’s falsehoods and perhaps even suggesting to voters that they were correct. Things were even worse on other platforms: YouTube allowed videos with false accusations about the election to flourish, and its algorithm directed viewers to ever more extreme content. And podcast distributors such as Spotify and Apple appeared to do little policing of incendiary and dangerous election claims.

It took the actual violence of Jan. 6 for Facebook and Twitter to take action. Both chose to remove Trump from their platforms. Twitter made its ban permanent. Facebook initially did, too, but the Oversight Board it created to give it guidance on content told Meta that while deplatforming Trump was justified because he “created an environment where a serious risk of violence was possible,” the company needed criteria for removing politicians and conditions for determining the length of such bans. In response, the company announced that Trump would be booted for two years, followed by an evaluation as to whether he remained a “threat to public safety.” The company explained: “At the end of this period, we will look to experts to assess whether the risk to public safety has receded. We will evaluate external factors, including instances of violence, restrictions on peaceful assembly and other markers of civil unrest. If we determine that there is still a serious risk to public safety, we will extend the restriction for a set period of time and continue to reevaluate until that risk has receded.”…

Meta may soon face great political pressure from the right to show that it is being “fair” to Trump, especially with Republicans likely to take control of one or both houses of Congress after the 2022 elections and to consider laws reining in tech platforms the GOP views as unfriendly. It could also have other incentives to let Trump back onto the site: recent reporting by Judd Legum, for example, suggests that Facebook has not followed its own policies to prevent the viral spread of false political information, allowing fake groups to manipulate its rules to build up millions of followers and further spread election misinformation. Posts containing such misinformation are often among the most shared items on the platform.

It’s not just Facebook. To little fanfare, Twitter confirmed a few weeks ago that it will no longer police false claims about the 2020 election, apparently because it believes such claims are no longer a threat to election integrity. Twitter told CNN that its civic integrity “policy is designed to be used ‘during the duration’ of an election or other civic event,” and that “the 2020 U.S. election is not only certified, but President Biden has been in office for more than a year.” The staying power of the “big lie” and the rising threat of election subversion built on that lie show how wrong that calculation is…

Companies such as Meta, Twitter and Google are private corporations, which have the right to decide what content to include, exclude, promote or demote on their platforms. They already do that with hate speech, pornography and violence. They need to continue to do that with speech threatening the integrity of American elections. Silencing a political leader should be the last resort, given our commitment to free speech and vibrant election contests. But Trump clearly crossed the line well before the Jan. 6 insurrection.

“Echo chambers, filter bubbles, and polarisation: a literature review”

New Reuters Institute report:

Terms like echo chambers, filter bubbles, and polarisation are widely used in public and political debate but not in ways that are always aligned with, or based on, scientific work. And even among academic researchers, there is not always a clear consensus on exact definitions of these concepts.

In this literature review we examine, specifically, social science work presenting evidence concerning the existence, causes, and effect of online echo chambers and consider what related research can tell us about scientific discussions online and how they might shape public understanding of science and the role of science in society.

Echo chambers, filter bubbles, and the relationship between news and media use and various forms of polarisation have to be understood in the context of increasingly digital, mobile, and platform-dominated media environments where most people spend a limited amount of time with news and many internet users do not regularly actively seek out online news, leading to significant inequalities in news use.

When an echo chamber is defined as a bounded, enclosed media space that has the potential both to magnify the messages delivered within it and to insulate them from rebuttal, studies in the UK estimate that between six and eight percent of the public inhabit politically partisan online news echo chambers.

More generally, studies both in the UK and several other countries, including the highly polarised US, have found that most people have relatively diverse media diets, that those who rely on only one source typically converge on widely used sources with politically diverse audiences (such as commercial or public service broadcasters) and that only small minorities, often only a few percent, exclusively get news from partisan sources.

Studies in the UK and several other countries show that the forms of algorithmic selection offered by search engines, social media, and other digital platforms generally lead to slightly more diverse news use – the opposite of what the “filter bubble” hypothesis posits – but that self-selection, primarily among a small minority of highly partisan individuals, can lead people to opt in to echo chambers, even as the vast majority do not.

Research on polarisation offers a complex picture, both in terms of overall developments and the main drivers, and in many cases there is limited empirical work done outside the United States. Overall, ideological polarisation has, in the long run, declined in many countries, but affective polarisation has in some, but not all, cases increased. News audience polarisation is much lower in most European countries, including the United Kingdom. Much depends on the specifics of individual countries and on what point in time one measures change from, and there are no universal patterns.

There is limited research outside the United States systematically examining the possible role of news and media use in contributing to various kinds of polarisation, and the work done does not always find the same patterns as those identified in the US. In the specific context of the United States, where there is more research, it seems that exposure to like-minded political content can potentially polarise people or strengthen the attitudes of people with existing partisan attitudes, and that cross-cutting exposure can potentially do the same for political partisans.

Public discussions around science online may exhibit some of the same dynamics as those observed around politics and in news and media use broadly, but fundamentally there is at this stage limited empirical research on the possible existence, size, and drivers of echo chambers in public discussions around science. More broadly, existing research on science communication, mainly from the United States, documents the important role of self-selection, elite cues, and small, highly active communities with strong views in shaping these debates and highlights the role especially political elites play in shaping both news coverage and public opinion on these issues.

In summary, the work reviewed here suggests echo chambers are much less widespread than is commonly assumed, finds no support for the filter bubble hypothesis and offers a very mixed picture on polarisation and the role of news and media use in contributing to polarisation.

Aspen Institute’s Commission on Information Disorder Issues Its Final Report

The Commission was created to explore the implications of our “crisis of trust and truth.” A chain reaction of harm to our democracy has emerged as “bad information has become as prevalent, persuasive, and persistent as good information.” The Final Report issued today promises “a viable framework for action” and “makes 15 recommendations for how government, private industry, and civil society can help to increase transparency and understanding, build trust, and reduce harms.”

“Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead.”

WSJ:

In the fall of 2018, Jonah Peretti, chief executive of online publisher BuzzFeed, emailed a top official at Facebook Inc. The most divisive content that publishers produced was going viral on the platform, he said, creating an incentive to produce more of it.

He pointed to the success of a BuzzFeed post titled “21 Things That Almost All White People are Guilty of Saying,” which received 13,000 shares and 16,000 comments on Facebook, many from people criticizing BuzzFeed for writing it, and arguing with each other about race. Other content the company produced, from news videos to articles on self-care and animals, had trouble breaking through, he said.

Mr. Peretti blamed a major overhaul Facebook had given to its News Feed algorithm earlier that year to boost “meaningful social interactions,” or MSI, between friends and family, according to internal Facebook documents reviewed by The Wall Street Journal that quote the email.

BuzzFeed built its business on making content that would go viral on Facebook and other social media, so it had a vested interest in any algorithm changes that hurt its distribution. Still, Mr. Peretti’s email touched a nerve.

Facebook’s chief executive, Mark Zuckerberg, said the aim of the algorithm change was to strengthen bonds between users and to improve their well-being. Facebook would encourage people to interact more with friends and family and spend less time passively consuming professionally produced content, which research suggested was harmful to their mental health.

Within the company, though, staffers warned the change was having the opposite effect, the documents show. It was making Facebook’s platform an angrier place.

Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.

“Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” wrote a team of data scientists, flagging Mr. Peretti’s complaints, in a memo reviewed by the Journal. “This is an increasing liability,” one of them wrote in a later memo.

They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.

Some political parties in Europe told Facebook the algorithm had made them shift their policy positions so they resonated more on the platform, according to the documents.

“Many parties, including those that have shifted to the negative, worry about the long term effects on democracy,” read one internal Facebook report, which didn’t name specific parties.

“Facebook Says Its Rules Apply to All. Company Documents Reveal a Secret Elite That’s Exempt.”

WSJ:

Mark Zuckerberg has publicly said Facebook Inc. allows its more than three billion users to speak on equal footing with the elites of politics, culture and journalism, and that its standards of behavior apply to everyone, no matter their status or fame.

In private, the company has built a system that has exempted high-profile users from some or all of its rules, according to company documents reviewed by The Wall Street Journal.

The program, known as “cross check” or “XCheck,” was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists. Today, it shields millions of VIP users from the company’s normal enforcement process, the documents show. Some users are “whitelisted”—rendered immune from enforcement actions—while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.

At times, the documents show, XCheck has protected public figures whose posts contain harassment or incitement to violence, violations that would typically lead to sanctions for regular users. In 2019, it allowed international soccer star Neymar to show nude photos of a woman, who had accused him of rape, to tens of millions of his fans before the content was removed by Facebook. Whitelisted accounts shared inflammatory claims that Facebook’s fact checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up “pedophile rings,” and that then-President Donald Trump had called all refugees seeking asylum “animals,” according to the documents.

A 2019 internal review of Facebook’s whitelisting practices, marked attorney-client privileged, found favoritism to those users to be both widespread and “not publicly defensible.”

“We are not actually doing what we say we do publicly,” said the confidential review. It called the company’s actions “a breach of trust” and added: “Unlike the rest of our community, these people can violate our standards without any consequences.”

Despite attempts to rein it in, XCheck grew to include at least 5.8 million users in 2020, documents show. In its struggle to accurately moderate a torrent of content and avoid negative attention, Facebook created invisible elite tiers within the social network.

In describing the system, Facebook has misled the public and its own Oversight Board, a body that Facebook created to ensure the accountability of the company’s enforcement systems….

In June 2020, a Trump post came up during a discussion about XCheck’s hidden rules that took place on the company’s internal communications platform, called Facebook Workplace. The previous month, Mr. Trump said in a post: “When the looting starts, the shooting starts.”

A Facebook manager noted that an automated system, designed by the company to detect whether a post violates its rules, had scored Mr. Trump’s post 90 out of 100, indicating a high likelihood it violated the platform’s rules.

For a normal user post, such a score would result in the content being removed as soon as a single person reported it to Facebook. Instead, as Mr. Zuckerberg publicly acknowledged last year, he personally made the call to leave the post up. “Making a manual decision like this seems less defensible than algorithmic scoring and actioning,” the manager wrote.

Mr. Trump’s account was covered by XCheck before his two-year suspension from Facebook in June. So too are those belonging to members of his family, Congress and the European Union parliament, along with mayors, civic activists and dissidents.

While the program included most government officials, it didn’t include all candidates for public office, at times effectively granting incumbents in elections an advantage over challengers. The discrepancy was most prevalent in state and local races, the documents show, and employees worried Facebook could be subject to accusations of favoritism.

Mr. Stone, a Facebook spokesman, acknowledged the concern but said the company had worked to address it. “We made multiple efforts to ensure that both in federal and nonfederal races, challengers as well as incumbents were included in the program,” he said.

“Fueling the Fire: How Social Media Intensifies U.S. Political Polarization–And What Can Be Done About It”

New report from NYU’s Paul M. Barrett, Justin Hendrix, and J. Grant Sims.

Some critics of the social media industry contend that widespread use of Facebook, Twitter, and YouTube has contributed to increased political polarization in the United States. But Facebook, the largest social media platform, has disputed this contention, saying that it is unsupported by social science research. Determining whether social media plays a role in worsening partisan animosity is important because political polarization has pernicious consequences. We conclude that social media platforms are not the main cause of rising partisan hatred, but use of these platforms intensifies divisiveness and thus contributes to its corrosive effects.

“Jan. 6 investigators demand records from social media companies”

Politico:

The select committee investigating the Jan. 6 insurrection is seeking a massive tranche of records from social media companies, on whose platforms many defendants charged in the Capitol attack planned and coordinated their actions.

In a series of letters dated Aug. 26, the Democratic-controlled panel asked the companies, which include Facebook, Google, Twitter, Parler, 4chan, Twitch and TikTok, for all records and documents since April 1, 2020, relating to misinformation around the 2020 election, efforts to overturn the 2020 election, domestic violent extremists associated with efforts to overturn the election and foreign influence in the 2020 election.
