In one video, a news anchor with perfectly combed dark hair and a stubbly beard outlined what he saw as the United States’ shameful lack of action against gun violence.
In another video, a female news anchor heralded China’s role in geopolitical relations at an international summit meeting.
But something was off. Their voices were stilted and failed to sync with the movements of their mouths. Their faces had a pixelated, video-game quality, and their hair appeared unnaturally plastered to their heads. The captions were riddled with grammatical mistakes.
The two broadcasters, purportedly anchors for a news outlet called Wolf News, are not real people. They are computer-generated avatars created by artificial intelligence software. And late last year, videos of them were distributed by pro-China bot accounts on Facebook and Twitter, in the first known instance of “deepfake” video technology being used to create fictitious people as part of a state-aligned information campaign.
“This is the first time we’ve seen this in the wild,” said Jack Stubbs, the vice president of intelligence at Graphika, a research firm that studies disinformation. Graphika discovered the pro-China campaign, which appeared intended to promote the interests of the Chinese Communist Party and undercut the United States for English-speaking viewers.
“Deepfake” technology, which has progressed steadily for nearly a decade, has the ability to create talking digital puppets. The A.I. software is sometimes used to distort public figures, like a video that circulated on social media last year falsely showing Volodymyr Zelensky, the president of Ukraine, announcing a surrender. But the software can also create characters out of whole cloth, going beyond traditional editing software and expensive special effects tools used by Hollywood, blurring the line between fact and fiction to an extraordinary degree.