Tag Archives: deepfakes

“States Legislating Against Digital Deception: A Comparative Study of Laws to Mitigate Deepfake Risks in American Political Advertisements”

Hayden Goldberg’s new piece on SSRN:

Abstract

Scholars have long debated whether new laws are needed to address new technologies and their risks, or whether existing laws are sufficient. This thesis examines one of the newest technologies – deepfakes – in the context of an essential area of governance – election law – and asks two questions. First, are recently passed bills prohibiting unlabeled deepfakes in campaign advertisements necessary? To answer this, I ask a second question: what risks do legislators envision new and old laws addressing? For new laws, I transcribe legislators’ statements in committee hearings and conduct a qualitative thematic analysis to develop risk models. For old laws, I draw on hearings and case law. I find that legislators envision new laws addressing risks concerning election security, their own reputations, and information. The constituent components of informational risks are the right to access true information, the right to know that something has been manipulated, the risk of false or deceptive information, and an obligation for campaigns to be truthful. In contrast, I find that the risks older laws address concern voter suppression or intimidation, the undermining of civil rights, or fraudulent fundraising tactics. I argue these laws are inadequate for addressing the new risks because they are intended to cover vastly different actions. Existing state laws prohibiting candidates from representing themselves in certain ways overlap partially with the risk models for new laws, but they fail to cover all types of misrepresentation, demonstrating the need for new laws. I therefore conclude that the risks deepfakes present differ in kind, not merely in degree, from those existing laws address. This limits the utility of existing laws and demonstrates the necessity of new ones. By systematically demonstrating the shortcomings of existing laws, I provide insight into law and technology, election law, and the study of deepfake risks.

“Brief – Election Integrity Recommendations for Generative AI Developers”

New report from Tim Harper at CDT:

With over 80 countries and more than half of the world’s population going to the polls this year, 2024 represents the largest single year of global elections since the advent of the internet. It has also been dubbed the ‘First AI Election’, in light of the boom in widely accessible generative AI tools that have the potential to accelerate cybersecurity and information integrity challenges to global elections this year. . . .

Although we are halfway through this election year, it remains imperative for AI developers to quickly develop election integrity programs employing a variety of levers including policy, product, and enforcement to protect democratic elections this year and beyond.

“Elon Musk Shares Manipulated Harris Video, in Seeming Violation of X’s Policies”

NYT:

Elon Musk, the world’s richest man, has waded into one of the thorniest issues facing U.S. politics: deepfake videos.

On Friday night, Mr. Musk, the billionaire owner of the social media platform X, reposted an edited campaign video for Vice President Kamala Harris that appears to have been digitally manipulated to change the spot’s voice-over in a deceptive manner.

The video mimics Ms. Harris’s voice, but instead of using her words from the original ad, it has the vice president saying that President Biden is senile, that she does not “know the first thing about running the country” and that, as a woman and a person of color, she is the “ultimate diversity hire.”

“FCC pursues new rules for AI in political ads, but changes may not take effect before the election”

ABC News:

NEW YORK — The Federal Communications Commission has advanced a proposal that would require political advertisers to disclose their use of artificial intelligence in broadcast television and radio ads, though it is unclear whether new regulations will be in place before the November presidential election.

The proposed rules announced Thursday could add a layer of transparency in political campaigning that some tech watchdogs have called for to help inform voters about lifelike and misleading AI-generated media in ads….

But the FCC’s action is part of a federal turf war over the regulation of AI in politics. The move has faced pushback from the chairman of the Federal Election Commission, who previously accused the FCC of stepping on his own agency’s authority and has warned of a possible legal challenge.
