Tag Archives: AI

“Overcoming Racial Harms to Democracy from Artificial Intelligence”

I posted this paper, which will be published in the Iowa Law Review, to SSRN. The abstract is below:

While the United States is becoming more racially diverse, generative artificial intelligence and related technologies threaten to undermine truly representative democracy. Left unchecked, AI will exacerbate already substantial existing challenges, such as racial polarization, cultural anxiety, antidemocratic attitudes, racial vote dilution, and voter suppression. Synthetic video and audio (“deepfakes”) receive the bulk of popular attention—but are just the tip of the iceberg. Microtargeting of racially tailored disinformation, racial bias in automated election administration, discriminatory voting restrictions, racially targeted cyberattacks, and AI-powered surveillance that chills racial justice claims are just a few examples of how AI is threatening democracy. Unfortunately, existing laws—including the Voting Rights Act—are unlikely to address the challenges. These problems, however, are not insurmountable if policymakers, activists, and technology companies act now. This Article asserts that AI should be regulated to facilitate a racially inclusive democracy, proposes novel principles that provide a framework to regulate AI, and offers specific policy interventions to illustrate the implementation of the principles. Even though race is the most significant demographic factor that shapes voting patterns in the United States, this is the first article to comprehensively identify the racial harms to democracy posed by AI and offer a way forward.

“As social media guardrails fade and AI deepfakes go mainstream, experts warn of impact on elections”

PBS NewsHour:

Nearly three years after rioters stormed the U.S. Capitol, the false election conspiracy theories that drove the violent attack remain prevalent on social media and cable news: suitcases filled with ballots, late-night ballot dumps, dead people voting.

Experts warn it will likely be worse in the coming presidential election contest. The safeguards that attempted to counter the bogus claims the last time are eroding, while the tools and systems that create and spread them are only getting stronger.

“Arizona creates own deep-fake election hoaxes to prepare for 2024”

Politico:

Arizona’s top elections official has a novel plan to prevent artificial intelligence from supercharging election hoaxes in 2024: Test the technology on himself first.

After his key swing state became a magnet for election fraud conspiracy theories in the 2020 presidential election, Secretary of State Adrian Fontes is leading a series of exercises to prepare the Grand Canyon State for a range of likely threats to next year’s vote, foremost among them the use of open access AI tools to amplify disinformation.

Arizona held the first such simulation last weekend, a two-day exercise involving roughly 200 stakeholders from across the state and a handful from the federal government. In it, Fontes tried to fool participants by presenting them with AI-generated audio and video of key officials — including Fontes himself — spinning falsehoods.

“Elon Musk promised an anti-‘woke’ chatbot. It’s not going as planned.”

WaPo:

Decrying what he saw as the liberal bias of ChatGPT, Elon Musk earlier this year announced plans to create an artificial intelligence chatbot of his own. In contrast to AI tools built by OpenAI, Microsoft and Google, which are trained to tread lightly around controversial topics, Musk’s would be edgy, unfiltered and anti-“woke,” meaning it wouldn’t hesitate to give politically incorrect responses.

That’s turning out to be trickier than he thought.

Two weeks after the Dec. 8 launch of Grok to paid subscribers of X, formerly Twitter, Musk is fielding complaints from the political right that the chatbot gives liberal responses to questions about diversity programs, transgender rights and inequality.
