Supporters of QAnon on former President Donald Trump’s social media platform have celebrated what they see as his renewed embrace of the conspiracy theory over the past week after he shared a meme that was viewed as one of his most brazen nods to QAnon yet.
The meme Trump shared on Truth Social included an illustration of him wearing a “Q” on his lapel and two QAnon slogans – “The storm is coming” and “WWG1WGA” (Where we go one, we go all). A few days later, he held a rally in Youngstown, Ohio, where he delivered some of his speech to music that sounded almost exactly like a song associated with QAnon. As he did that, a group of his supporters in the crowd began pointing in unison toward the sky.
“Once we saw that, we realized we might have a problem,” a Trump aide told CNN. The former President’s team spent hours online after the rally trying to understand what the salute meant and where it might have come from, sources said.
The 2020 election and its turbulent aftermath fueled a powerful generation of online influencers, a Washington Post data analysis has found, producing sky-high follower counts for an array of conservatives who echoed Trump’s false claims of election fraud, known as the “big lie.” Some doubled or tripled their audiences on Twitter, while others saw even larger gains — catapulting, like Becker, from relative obscurity to online fame.
These accounts amassed followers despite vows by Big Tech companies to police election disinformation, The Post found. And they have gone on to use their powerful megaphones to shape the national debate on other subjects, injecting fresh waves of distortion into such culture-war topics as transgender rights and critical race theory.
“Once they’ve gained a level of influence, they can continue to leverage that influence going forward,” said Kate Starbird, a leading expert on disinformation at the University of Washington. “Manipulation becomes embedded in the network.”
To conduct its analysis, The Post identified 77 of the biggest spreaders of disinformation about the 2020 election, tracked how they built large audiences online and then analyzed how they used their new power to fuel debate on other divisive topics.
The list of 77 was drawn from research by disinformation experts at Stanford, Harvard and Cornell universities, as well as the University of Washington. While the details of their methodologies differed, the researchers all culled Twitter for posts that spread misperceptions about the election and then determined which accounts had racked up the most retweets, spreading the “big lie” most widely.
Conflicting lower court rulings about removing controversial material from social media platforms point toward a landmark Supreme Court decision on whether the First Amendment protects Big Tech’s editorial discretion or forbids its censorship of unpopular views.
The stakes are high not just for government and the companies, but because of the increasingly dominant role platforms such as Twitter and Facebook play in American democracy and elections. Social media posts have the potential to amplify disinformation or hateful speech, but removal of controversial viewpoints can stifle public discourse about important political issues.
Governments that say conservative voices are the ones most often eliminated by the decisions of tech companies scored a major victory Friday, when a divided panel of the U.S. Court of Appeals for the 5th Circuit upheld a Texas law barring companies from removing posts based on political ideology.
“Big Tech’s reign of endless censorship and their suppression of conservative viewpoints is coming to an end,” Texas Attorney General Ken Paxton (R) said after the decision. “These massive corporate entities cannot continue to go unchecked as they silence the voices of millions of Americans.”
But a unanimous panel of the U.S. Court of Appeals for the 11th Circuit went the other way earlier this year, saying that a similar Florida law violated constitutional protections for tech companies that do not want to host views on their platforms that they find hateful, divisive or false.
Judge Kevin Newsom criticized a depiction of social media platforms as “dumb pipes … reflexively transmitting data from point A to point B.” Instead, he wrote, their “content-moderation decisions constitute the same sort of editorial judgments” entitled to First Amendment protections when made by a newspaper.
All of the appeals court judges considering the Florida and Texas laws have noted the difficulty of applying some Supreme Court precedents regarding legacy media. And all weighing in so far were nominated by Republican presidents, with Newsom and Judge Andrew Oldham, who wrote the conflicting opinion in the Texas case, both nominated by President Donald Trump, who was kicked off Twitter in the aftermath of the U.S. Capitol riot on Jan. 6, 2021.
A federal appeals court on Friday reversed a lower court’s order blocking a Texas law that stops large social media platforms from removing political posts, a blow for tech companies that say their content moderation decisions are protected by the Constitution.
“Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say,” Judge Andrew S. Oldham of the U.S. Court of Appeals for the Fifth Circuit, which is known to be conservative, said in the court’s ruling. One member of the three-judge panel dissented from portions of the ruling.
The law makes it possible for individuals or the Texas attorney general’s office to sue social media platforms with more than 50 million monthly users in the United States for taking down political viewpoints. The legislation is the product of conservative anger over posts that were taken down largely because they had violated the social media platforms’ rules.
It comes as platforms like Facebook, YouTube and Twitter face immense political pressure over their decisions to take down content they deem misinformation, or view as hateful or violent. Republicans have generally called for the platforms to leave up more posts, while Democrats have urged them to be more aggressive in removing some content.
Mike Masnick at TechDirt:
5th Circuit Rewrites A Century Of 1st Amendment Law To Argue Internet Companies Have No Right To Moderate
As far as I can tell, in the area the 5th Circuit appeals court has jurisdiction, websites no longer have any 1st Amendment editorial rights. That’s the result of what appears to me to be the single dumbest court ruling I’ve seen in a long, long time, and I know we’ve seen some crazy rulings of late. However, thanks to judge Andy Oldham, internet companies no longer have 1st Amendment rights regarding their editorial decision making….
It is difficult to state how completely disconnected from reality this ruling is, and how dangerously incoherent it is. It effectively says that companies no longer have a 1st Amendment right to their own editorial policies. Under this ruling, any state in the 5th Circuit could, in theory, mandate that news organizations must cover certain politicians or certain other content. It could, in theory, allow a state to mandate that any news organization must publish opinion pieces by politicians. It completely flies in the face of the 1st Amendment’s association rights and the right to editorial discretion.
There’s going to be plenty to say about this ruling, which will go down in the annals of history as a complete embarrassment to the judiciary, but let’s hit the lowest points. The crux of the ruling, written by Judge Andy Oldham, is as follows:
“Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say. Because the district court held otherwise, we reverse its injunction and remand for further proceedings.”
Considering just how long Republicans (and Oldham was a Republican political operative before being appointed to the bench) have spent insisting that corporations have 1st Amendment rights, this is a major turnaround, and (as noted) an incomprehensible one. Frankly, Oldham’s arguments sound much more like the arguments made by ignorant trolls in our comments than anyone with any knowledge or experience with 1st Amendment law.
With two months to go until the midterms, tech companies are getting ready: rolling out fact checks, labeling misleading claims and setting up voting guides.
The election playbooks being used by Facebook, Twitter, Google-owned YouTube and TikTok are largely in line with those they used in 2020, when they warned that both foreign and domestic actors were seeking to undermine confidence in the results.
That’s left experts who study social media wondering what lessons tech companies have learned from 2020 — and whether they are doing enough this year.
The host of election-related announcements in recent weeks add up to a “business as usual” approach, said Katie Harbath, a former elections policy director at Facebook who’s now a fellow at the Bipartisan Policy Center.
Former president Donald Trump has failed to win another vote — this time, by the shareholders of an investment ally his social network Truth Social had been counting on for cash.
Digital World Acquisition Corp., a special-purpose acquisition company, said Thursday it had not yet gained enough shareholder votes to extend its deadline for merging with Trump’s start-up — a necessary step to unlock $1.3 billion in raised funds.
The company was scheduled for liquidation Thursday unless its investors approved the extension or the company’s sponsor paid to push back the deadline itself.
But Digital World’s chief, Patrick Orlando, said late Thursday that the company would instead postpone a long-awaited meeting until Oct. 10 without offering further detail, indicating the company was still scrambling to garner enough shareholder support.
The company has said in filings that Arc Capital, the Shanghai-based investment advisory firm that funds and sponsors Digital World, could pay $2.8 million to give the company another three months to seal the deal without shareholder approval.
Even that might not be enough time. Ongoing investigations by the Securities and Exchange Commission and federal prosecutors have frozen the merger indefinitely. In an SEC filing Wednesday, Digital World reprinted an item from a pro-Trump blog post urging the SEC to “wrap up its probe.”
Mr. Trump has spent more than a decade on social media attacking enemies, cozying up to far-right ideas and sharing false information. He used Twitter to perpetuate the lie that President Barack Obama was not born in the United States and later deemed one investigation after another partisan witch hunts.
But, as his legal exposure intensifies over his handling of government documents, the former president this week crossed over to a more direct embrace of claims batted around the dark corners on the internet. His winks and nods to the far right became enthusiastic endorsements, and his flirtations with convoluted conspiratorial ideas became more overt.
He shared a flurry of 61 posts written by Truth Social users, many of whom had ties to QAnon, an online conspiracy movement aligned with the former president. One post included “the storm,” which QAnon followers use to describe the day when the movement’s enemies will be violently punished.
The strategy partly mirrors Mr. Trump’s chaotic approach during moments of crisis, searching for a message to ignite supporters while shifting attention away from his controversies. But the posts this week appeared especially haphazard, opening a door to the former president’s thought process even as his legal team tries to craft a cogent defense against the Justice Department’s investigation.
The “Big Lie” that the 2020 election was “stolen” from former President Donald Trump is a persistent alternate reality for a sizable number of U.S. voters. As a result, violent threats against election workers are a substantial problem, according to a recent U.S. House Oversight Committee report. False claims about the 2020 election have been used to justify state laws that seek to limit voter participation, and the election of even one of the many election-denying candidates for key positions overseeing the vote in battlegrounds such as Arizona, Pennsylvania or Wisconsin could help throw the next election cycle into chaos.
Mis- and disinformation on social media are not the primary cause of belief in the Big Lie, but given the scale of social networks, any marginal effect that contributes to the propagation of false election claims could have substantial impact.
It is in this context that, in a blog post this week, YouTube announced its plans to “limit the spread of harmful misinformation” in the 2022 U.S. midterm elections. The company says it intends to recommend “authoritative national and local news sources like PBS NewsHour, The Wall Street Journal, Univision and local ABC, CBS and NBC affiliates,” and to add “a variety of information panels in English and Spanish from authoritative sources underneath videos and in search results about the midterms.” YouTube also promises to take action on election denial, including videos that claim “widespread fraud, errors, or glitches occurred in the 2020 U.S. presidential election, or alleging the election was stolen or rigged.”
But given the prevalence of such false claims, how might YouTube’s algorithms, designed to recommend content that users want, contribute to their propagation, particularly to users already inclined to accept them?
A snapshot of data from the 2020 election suggests cause for concern. On the same day that YouTube released its plans for the midterms, the Journal of Online Trust and Safety published the results of a study by researchers at NYU’s Center for Social Media and Politics (CSMaP) that found “a systematic association between skepticism about the legitimacy of the election and exposure to election fraud-related content on YouTube.”
Former President Donald Trump spent Tuesday morning posting inflammatory messages on social media, including many explicitly promoting the QAnon conspiracy theory.
While Trump has in the past promoted QAnon-inspired accounts and theories, the posts on his Truth Social account were his most explicit, unobscured, QAnon-promoting and QAnon-baiting posts to date.
In one, he reposted the QAnon slogan — “Where We Go One We Go All.” In another, he re-posted a 2017 message from “Q” that’s critical of the intelligence community. The QAnon conspiracy theory was built around Q, an anonymous account that posts periodically on 8kun, often in vague or symbolic language that followers then interpret. The account claims to document a secret battle being waged by Trump against the Democratic Party, which followers of the theory contend is controlled by satanic, child-eating cannibals: celebrities and political elites who operate a pedophile ring and have covertly run the United States government for decades. None of the posts’ concrete predictions have come to fruition.
Users of QAnon forums rejoiced at Trump’s apparent endorsement of the conspiracy theory and its mythology. The top response on the most visited QAnon forum to one of Trump’s posts about the conspiracy theory read simply, “Wipe them out sir.” Others pleaded with Trump to “nuke them from orbit” and to “sir, please finish them off,” referring to QAnon enemies such as Hillary Clinton and President Joe Biden.
In addition to the QAnon-adjacent posts, Trump shared several conspiracy theories Tuesday on his Truth Social site, and he re-posted a picture of Biden, Vice President Kamala Harris and House Speaker Nancy Pelosi with the words “Your enemy is not in Russia” written in black bars over their eyes.
The posting spree comes one day after Trump posted a message that he should be reinstated as president — “Declare the rightful winner, or hold a new Election, NOW!” — and as he’s come under increased scrutiny from federal investigators who executed a search warrant at his Florida resort earlier this month and recovered troves of classified documents.
Dozens of QAnon-boosting accounts decamped to Truth Social this year after they were banned by other social networks and have found support from the platform’s creator, former President Donald J. Trump, according to a report released on Monday.
NewsGuard, a media watchdog that analyzes the credibility of news outlets, found 88 users promoting the QAnon conspiracy theory on Truth Social, each to more than 10,000 followers. Of those accounts, 32 were previously banned by Twitter.
Twitter barred Mr. Trump over fears that he might incite violence after the riot at the U.S. Capitol on Jan. 6, 2021. He started Truth Social as an alternative in February 2022. He has amplified content from 30 of the QAnon accounts to his more than 3.9 million Truth Social followers, reposting their messages 65 times since he became active on the platform in April, according to the report.
Former president Donald Trump’s Truth Social website is facing financial challenges as its traffic remains puny and the company that is scheduled to acquire it expresses fear that his legal troubles could lead to a decline in his popularity.
Six months after its high-profile launch, the site — a clone of Twitter, which banned Trump after Jan. 6, 2021 — still has no guaranteed source of revenue and a questionable path to growth, according to Securities and Exchange Commission filings from Digital World Acquisition, the company planning to take Trump’s start-up, the Trump Media & Technology Group, public….
Digital World also has said in filings that Trump’s social network will need millions of people to “regularly use” it for the site to achieve commercial success.
But Trump, the site’s most popular user, has fewer than 4 million followers, and the site’s most active trending topics, including #DefundTheFBI, have shown only a few thousand people posting to them in recent days, data from the site shows. For comparison, Twitter says it has about 37 million people in the U.S. actively using the site every day….
But in the days since the FBI search of Mar-a-Lago, Truth Social’s viewership has slowed, according to traffic estimates from Similarweb, an online analytics firm. Its U.S. audience has tumbled to about 300,000 views per day, down from nearly 1.5 million on the day of its launch.
Facebook gave politicians 13 exemptions to its content-moderation rules over a one-year period because their offending posts were determined to be newsworthy, the company revealed Thursday in a series of quarterly reports on its moderation practices.
The company also said it had applied the newsworthiness exception to 55 other cases between June 2021 and June 2022.
The disclosures offer new details about the company’s treatment of politicians who violate its rules, an issue that was brought to the fore by the platform’s suspension of former president Donald Trump’s account after the Jan. 6, 2021, riot at the U.S. Capitol. For years, Facebook has been criticized for giving too much deference to politicians such as Trump who broke the platform’s rules, while conservatives have said Facebook’s suspension of Trump was overly punitive of a world leader.
Last fall, the company’s Oversight Board criticized Facebook for failing to be transparent about the exemptions it grants high-profile users who break the platform’s rules. The board’s criticism came after a Wall Street Journal report detailed the platform’s “cross check” program, which shields selected users from the company’s regular content-moderation system, though the newsworthy exemption operates separately.
Facebook said that it was releasing the data in response to the Oversight Board’s criticism and that it would update the numbers in future reports.
Under its current rules, Facebook may determine a post is newsworthy — and therefore exempt from its community standards — if it raises awareness of an imminent threat to public health or safety or adds to a public debate in politics.
Facebook spokeswoman Jen Ridings confirmed none of the 13 politicians granted exemptions are American. Of the 55 other exemptions, one was a post from the United States, Ridings said. The company provided few other details about the exempted posts, but on its website, it gave three examples.