Eliot Higgins, the founder of the open-source investigative outlet Bellingcat, was reading this week about the expected indictment of Donald Trump when he decided he wanted to visualize it.
He turned to an AI art generator, giving the technology simple prompts, such as, “Donald Trump falling down while being arrested.” He shared the results — images of the former president surrounded by officers, their badges blurry and indistinct — on Twitter. “Making pictures of Trump getting arrested while waiting for Trump’s arrest,” he wrote.
“I was just mucking about,” Higgins said in an interview. “I thought maybe five people would retweet it.”
Two days later, his posts depicting an event that never happened had been viewed nearly 5 million times, creating a case study in the increasing sophistication of AI-generated images, the ease with which they can be deployed and their potential to sow confusion in volatile news environments. The episode also makes evident the absence of corporate standards or government regulation addressing the use of AI to create and spread falsehoods.
“Policymakers have been warning for years about the potential misuse of synthetic media to spread disinformation and more generally to sow confusion and discord,” said Sen. Mark R. Warner (D-Va.), the chairman of the Senate Intelligence Committee. “While it took a few years for the capabilities to catch up, we’re now at a point where these tools are widely available and incredibly capable.”
Warner said developers “should already be on notice: if your product directly enables harms that are reasonably foreseeable, you can be held potentially liable.” But he said policymakers also have work to do, calling for new obligations to ensure firms are addressing the dangers of artificial intelligence.
I wrote about the problem of deepfakes affecting elections, and what might be done about it consistent with the First Amendment, in my book Cheap Speech.