“Adaptation and Innovation: The Civic Space Response to AI-Infused Elections”

New report from CDT. From the Introduction:

AI avatars delivered independent news about Venezuela’s contested election, allowing journalists to protect their identity and avoid politically motivated arrest. Voters in the United Kingdom could cast their ballots for an AI avatar to hold a seat in Parliament. A deepfake video showed United States President Joe Biden threatening to impose sanctions on South Africa if the incumbent African National Congress won.

These are a few of the hundreds of ways generative AI was used during elections in 2024, a year touted as “the year of elections” and described as the moment in which newly widespread AI tools could do lasting damage to human rights and democracy worldwide. Though technology and security experts have described deepfakes as a threat to elections since at least the mid-to-late 2010s, the concentrated attention in 2024 was a reaction to the AI boom of the preceding year. At the beginning of 2023, OpenAI’s ChatGPT set a record as the “fastest-growing consumer application in history.” That September, a leading parliamentary candidate in Slovakia lost after a faked audio recording smearing him was released two days before the election, prompting speculation that the deepfake had changed the outcome.

Though 2024 ended with debates over whether the risks AI posed to elections had been overstated, in at least one respect the consequences were clear: the technology changed the way stakeholders around the world did their work. Governments from Brazil to the Philippines passed new laws and regulations to govern the use of generative AI in elections. The European Commission published guidelines for how large companies should protect the information environment ahead of the June 2024 elections, including by labeling AI-generated content. US election administrators adopted new communication tactics tailored to an AI-infused information environment.

Political campaigns and candidates adopted AI tools to create advertisements and help with voter outreach. Candidates in Indonesia paid for a service that used ChatGPT to write speeches and develop campaign strategies. In India, candidates used deepfake audio and video of themselves to enable more personalized outreach to voters. Germany’s far-right AfD party ran anti-immigrant ads on Meta platforms, some of which incorporated AI-altered images.

Social media platforms and AI developers implemented some election integrity programs, despite recent cuts to trust and safety teams. Twenty-seven technology companies signed the AI Elections Accord, a one-year commitment to addressing “deceptive AI election content” through improved detection, content provenance, and other efforts. Google restricted the Gemini chatbot’s responses to election-related queries, and OpenAI announced that ChatGPT would redirect users to external sources when they asked about voting ahead of certain elections. Google and Jigsaw worked with media, civil society, and government partners on public media literacy ahead of the European Union elections, including on generative AI.

In anticipation of AI tools accelerating or increasing threats to the information environment, civic space actors changed their work, too. This report looks at their contributions to a resilient information environment during the 2024 electoral periods through three case studies: (I) fact-checking collectives in Mexico, (II) decentralization and coordination among civil society in Taiwan, and (III) AI incident tracking projects by media, academics, and civil society organizations. 

The case studies highlight a range of approaches to building resilient information environments. They show the ways artificial intelligence complicates that work, as well as how it can be used to support resilience-building efforts. The approaches, from fact-checking bots on WhatsApp to catalogs of hundreds of deepfakes, tap into information resilience from different angles.

The Mexico case study focuses on the development and tactics of fact-checking collectives, especially in the context of a hostile media environment. The case study also considers the role of AI-generated content in the 2024 election and how WhatsApp and AI are used in fact-checking work.

The Taiwan case study also examines a collaborative but decentralized civil society model. Unlike Mexico, however, Taiwan is targeted by prolific Chinese government-linked disinformation campaigns. The case study considers the roles of research into influence operations, fact-checking and information literacy programming, and government policies to counter misinformation.

The third case study looks at how civil society organizations, journalists, and academics tracked the use of generative AI in elections throughout 2024, both in the US and globally. Their work was an important contribution to current public understanding of how AI was used, and it offers lessons for future research and policy, including on the challenges of data collection and on how to conduct well-balanced research into such a high-profile subject. The interviews CDT conducted for this case study also give a snapshot of expert thinking on the impact that generative AI had on elections in 2024.

Though the case studies span different political contexts and types of interventions, common themes emerged. Organizations benefited from complementary or collaborative work with peer groups. They also used AI to bolster their own work. Civic space actors contended with funding and capacity constraints, insufficient access to information from companies, difficulty detecting and verifying AI-generated content, and the politicization of media resilience work, including fact-checking.

Finally, the case studies emphasize that the issue of AI in elections is not temporary. Civic space actors have been addressing the risks and exploring the opportunities AI presents for years — long before the media and policy attention of 2024. These groups will continue to be invaluable resources and partners for public and private actors in 2025 and beyond. 

And their work will be urgently needed. The end of companies’ commitments under the AI Elections Accord, combined with a global political environment increasingly hostile to work on elections, fact-checking, and disinformation research, has left a vacuum of leadership on the most pressing threats to information resilience. To help fill that gap, the report concludes with recommendations for how companies and civic space actors can continue to support information resilience by fostering collaboration, developing company policies, and strengthening transparency and data access.