Bruce Schneier and Nathan Sanders expand the frame in The Conversation.
Tag Archives: Artificial Intelligence
“States Legislating Against Digital Deception: A Comparative Study of Laws to Mitigate Deepfake Risks in American Political Advertisements”
Hayden Goldberg’s new piece on SSRN:
Abstract
Scholars have long debated whether new laws are needed to address new technologies and their risks, or whether existing laws are sufficient. This thesis examines one of the newest technologies – deepfakes – in the context of an essential area of governance – election law – and asks two questions. First, “are recently passed bills prohibiting unlabeled deepfakes in campaign advertisements necessary?” To answer this question, I ask “what risks do legislators envision new and old laws addressing?” For new laws, I transcribe legislators’ statements in committee hearings and conduct a qualitative thematic analysis to develop risk models. For old laws, I draw on hearings and case law. I find that legislators envision new laws addressing risks regarding election security, their own reputations, and information. Informational risks’ constituent components are the right to access true information, the right to know something has been manipulated, the risk of false or deceptive information, and an obligation for campaigns to be truthful. In contrast, I find that the risks older laws address concern voter suppression or intimidation, the undermining of civil rights, or fraudulent fundraising tactics. I argue these laws are inadequate for addressing new risks because they are intended to cover vastly different actions. Meanwhile, existing state laws prohibiting candidates from representing themselves in certain ways partially overlap with the risk models for new laws but fail to cover all types of misrepresentation, demonstrating the need for new laws. Therefore, I conclude that the risks deepfakes present are a difference in kind, not a difference in degree. This limits the utility of existing laws, demonstrating the necessity of new ones. By systematically demonstrating the shortcomings of existing laws, I provide insight into law and technology, election law, and the study of deepfake risks.
“Brief – Election Integrity Recommendations for Generative AI Developers”
New report from Tim Harper at CDT:
With over 80 countries and more than half of the world’s population going to the polls this year, 2024 represents the largest single year of global elections since the advent of the internet. It has also been dubbed the ‘First AI Election’, in light of the boom in widely accessible generative AI tools that have the potential to accelerate cybersecurity and information integrity challenges to global elections this year. . . .
Although we are halfway through this election year, it remains imperative for AI developers to quickly develop election integrity programs employing a variety of levers including policy, product, and enforcement to protect democratic elections this year and beyond.
“Indian election was awash in deepfakes – but AI was a net positive for democracy”
Not sure I agree that the conclusion follows from the examples, but it’s an intriguing argument in The Conversation.
“AI can make our elections safer—if we use it correctly”
Chris McIsaac offers a valuable in-depth look at AI in elections over at R Street. I’m really looking forward to digging in.
From the executive summary:
Artificial intelligence (AI) is already having an impact on upcoming U.S. elections and other political races around the globe. Much of the public dialogue focuses on AI’s ability to generate and distribute false information, and government officials are responding by proposing rules and regulations aimed at limiting the technology’s potentially negative effects. However, questions remain regarding the constitutionality of these laws, their effectiveness at limiting the impact of election disinformation, and the opportunities the use of AI presents, such as bolstering cybersecurity and improving the efficiency of election administration. While Americans are largely in favor of the government taking action around AI, there is no guarantee that restrictions will curb potential threats.
This paper explores AI’s impacts on the election information environment, cybersecurity, and election administration to define and assess risks and opportunities. It also evaluates the government’s AI-oriented policy responses to date and assesses the effectiveness of primarily focusing on regulating the use of AI in campaign communications through prohibitions or disclosures. It concludes by offering alternatives to increased government-imposed limits that would empower local election officials to strengthen cyber defenses, build trust with the public as a credible source of election information, and educate voters on the risks of AI-generated disinformation and how to recognize it.
“Overcoming Racial Harms to Democracy from Artificial Intelligence”
I posted this paper, which will be published in the Iowa Law Review, to SSRN. The abstract is below:
While the United States is becoming more racially diverse, generative artificial intelligence and related technologies threaten to undermine truly representative democracy. Left unchecked, AI will exacerbate already substantial existing challenges, such as racial polarization, cultural anxiety, antidemocratic attitudes, racial vote dilution, and voter suppression. Synthetic video and audio (“deepfakes”) receive the bulk of popular attention—but are just the tip of the iceberg. Microtargeting of racially tailored disinformation, racial bias in automated election administration, discriminatory voting restrictions, racially targeted cyberattacks, and AI-powered surveillance that chills racial justice claims are just a few examples of how AI is threatening democracy. Unfortunately, existing laws—including the Voting Rights Act—are unlikely to address the challenges. These problems, however, are not insurmountable if policymakers, activists, and technology companies act now. This Article asserts that AI should be regulated to facilitate a racially inclusive democracy, proposes novel principles that provide a framework to regulate AI, and offers specific policy interventions to illustrate the implementation of the principles. Even though race is the most significant demographic factor that shapes voting patterns in the United States, this is the first article to comprehensively identify the racial harms to democracy posed by AI and offer a way forward.