“The Deepfake Dangers Ahead; AI-generated disinformation, especially from hostile foreign powers, is a growing threat to democracies based on the free flow of ideas”

From a Daniel Byman, Chris Meserole, and V.S. Subrahmanian essay in the WSJ:

Domestically, deepfakes risk leading people to view all information as suspicious. Soldiers might not trust actual orders, and the public may think that genuine scandals and outrages aren’t real. A climate of pervasive suspicion will allow politicians and their supporters to dismiss anything negative that is reported about them as fake or exaggerated….

The options for democracies are complicated and will have to blend technical, regulatory and social approaches. Intel has already begun work on the technical side. Last November, the company’s researchers proposed a system called FakeCatcher that claimed 96% accuracy in identifying deepfakes. That number is impressive, but given the sheer volume of synthetic material that can be churned out, even a 99% accurate detector would miss an unacceptable volume of disinformation. Moreover, governments will have the services of highly skilled programmers, which means that their deepfakes are likely to be among the least detectable. Even the most ingenious detectors will have their limits, because breakthroughs in detection will almost certainly be used to improve the next generation of deepfake algorithms….
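The essay's point about accuracy versus scale is easy to check with back-of-the-envelope arithmetic. The sketch below assumes a hypothetical daily volume of synthetic items (the essay gives no figure) and shows how many deepfakes would still slip through at the accuracy levels mentioned:

```python
# Back-of-the-envelope sketch: even a highly accurate detector misses a
# large absolute number of deepfakes once volume is high enough.
# The daily volume below is a hypothetical assumption, not from the essay.

def missed_deepfakes(daily_volume: int, accuracy: float) -> int:
    """Deepfakes that slip past a detector each day at a given accuracy."""
    return round(daily_volume * (1 - accuracy))

daily_volume = 1_000_000  # hypothetical: one million synthetic items per day

for accuracy in (0.96, 0.99):
    missed = missed_deepfakes(daily_volume, accuracy)
    print(f"A {accuracy:.0%} accurate detector misses {missed:,} items/day")
```

At these assumed volumes, a 96% detector passes 40,000 fakes a day and even a 99% detector passes 10,000, which is the essay's argument in miniature: detection accuracy alone cannot keep pace with generation at scale.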

The U.S. government and other democracies can’t tell their people what is or isn’t true, but they can insist that companies that produce and distribute synthetic media at scale make their algorithms more transparent. The public should know what a platform’s policies are and how these rules are enforced. Platforms that disseminate deepfakes can even be required to allow independent, third-party researchers to study the effects of this media and monitor whether the platforms’ algorithms are behaving in accordance with their policies.

Deepfakes are going to change the way a lot of institutions in democracies do business. The military will need very secure systems for verifying orders and making sure that automated systems can’t be triggered by potential deepfakes. Political leaders responding to crises will have to build in delays so that they can make sure the information before them isn’t false or even partially manipulated by an adversary. Journalists and editors will have to be leery of shocking news stories, doubling down on the standard of verifying facts with multiple sources. Where there is doubt, an outlet might mark some news with bright “this information not verified” warnings.
