The newscaster exudes an eerie air as he delivers partisan and derogatory messages in Mandarin: Taiwan's outgoing president, Tsai Ing-wen, is as effective as limp water spinach, her tenure plagued by poor economic performance, social problems and protests.
“Water spinach looks at water spinach. It turns out spinach is more than just a name,” the presenter says, in an extended metaphor likening Tsai to the vegetable, whose Mandarin name, “hollow-heart vegetable,” doubles as a pun on “hollow Tsai.”
This is not traditional broadcast journalism, even if the lack of impartiality is no longer shocking. The anchor is generated by an artificial intelligence program, and the video is an attempt, however clumsy, to influence Taiwan's presidential election.
The source and creator of the video are unknown, but the clip is intended to make voters question politicians who want Taiwan to remain at arm's length from China, which claims the island as part of its territory. It is the latest example of deepfake news anchors and TV presenters, a growing subgenre of AI-generated disinformation.
Such avatars proliferate on social networks, spreading state-backed propaganda. Experts say this type of video will continue to be popular as the technology becomes more widely available.
“It doesn't have to be perfect,” said Tyler Williams, director of investigations at disinformation research firm Graphika. “If a user is just scrolling through X or TikTok, they're not picking up the small nuances on a small screen.”
The Chinese government was an early experimenter with AI-generated news anchors. In 2018, the state-run Xinhua news agency unveiled Qiu Hao, a digital news presenter who promised to bring viewers the news “24 hours a day, 365 days a year.” Although the Chinese public is broadly receptive to digital avatars in the media, Qiu Hao failed to catch on more widely.
China is at the forefront of the disinformation element of this trend. Last year, pro-China bot accounts on Facebook and X distributed AI-generated deepfake videos of newscasters representing a fictitious station called Wolf News. One video accused the US government of failing to address gun violence, while another played up China's role at an international summit.
Microsoft said in a report released in April that Chinese state-backed cyber groups had targeted Taiwan's election with AI-generated disinformation, including the use of fake news anchors and TV-style presenters. In one clip cited by Microsoft, an AI-generated anchor made unsubstantiated claims about the private life of the ultimately successful pro-sovereignty candidate, Lai Ching-te, alleging that he had fathered a child out of wedlock.
Microsoft said the news anchor was created using CapCut, a video editing tool developed by ByteDance, the Chinese company that owns TikTok.
Clint Watts, general manager of Microsoft's Threat Analysis Center, noted that China officially uses synthetic news anchors in its domestic media market, which has allowed the country to hone the format. So far the format has had little noticeable impact on domestic audiences, but it has now become a tool for disinformation.
“The Chinese are much further along in incorporating AI into their systems, whether for propaganda or disinformation, and they moved there very quickly. They're trying everything. None of it is particularly effective, yet,” Watts said.
Third-party tools such as CapCut offer the news anchor format as a template, making it easy to adapt and churn out in bulk.
Some clips feature avatars that look like a cross between a professional TV presenter and an influencer speaking directly to the camera. One video produced by the Chinese state-backed group Storm-1376 (also known as Spamouflage) features an AI-generated blonde female presenter who claims the US and India are secretly selling weapons to Myanmar's military.
The overall effect is anything but convincing. Despite the realistic-looking presenter, the video is marred by stiff, obviously computer-generated audio. In other examples unearthed by NewsGuard, an organization that monitors misinformation and disinformation, TikTok accounts linked to Spamouflage used AI-generated avatars to comment on US news stories such as food and gas prices. In one video, an avatar with a computer-generated voice discusses Walmart prices under the caption “Is Walmart lying to you about the weight of their meat?”
NewsGuard said the avatar videos were part of a pro-China network that was “expanding” ahead of the US presidential election, pointing to 167 accounts created since last year that were linked to Spamouflage.
Other countries are also experimenting with deepfake newscasters. Iranian state-backed hackers recently interrupted a television streaming service in the United Arab Emirates to broadcast a deepfake newscaster reporting on the war in Gaza. The Washington Post reported on Friday that the Islamic State terrorist group is using AI-generated newscasters, wearing helmets and military fatigues, to broadcast propaganda.
And one European country is experimenting openly with AI-generated presenters. Ukraine's Ministry of Foreign Affairs has launched an AI spokesperson, Victoria Shi, modeled on the likeness of Rosalie Nombre, a Ukrainian singer and media personality who gave permission for her image to be used. The result is impressive, at least at first glance.
Last year, the Chinese government issued guidelines on content tagging, stating that images and videos generated using AI must be clearly watermarked. But Jeffrey Ding, an assistant professor at George Washington University who specializes in technology, said it was an “open question” how tagging requirements would be enforced in practice, especially when it comes to state propaganda.
The Chinese guidelines also call for minimizing “false” information in AI-generated content, but the priority for Chinese regulators is “controlling information flows and making sure the content produced is not politically sensitive and doesn't cause social disorder,” Ding said. In other words, when it comes to fake news, “what the Chinese government interprets as disinformation on the Taiwan front could be very different from a more adequate or truer interpretation of disinformation.”
Experts do not yet view computer-generated newscasters as an effective deception. Despite the avatars' best efforts, Tsai's pro-sovereignty party won Taiwan's election. Macrina Wang, deputy news verification editor at NewsGuard, said the avatar content she had seen was “quite crude” but increasing in volume. To a trained eye these videos are obviously fake, she said, with stilted movements and an absence of shifting light and shadow across the avatars' faces among the giveaways. Nonetheless, some of the comments under the TikTok videos suggest people have fallen for them.
“There is a danger of the average person thinking this [avatar] is a real person,” she said, adding that AI is making video content “more compelling, clickable, and viral.”
Microsoft's Watts said the likely evolution of the newscaster tactic is manipulated footage of real newscasters, rather than fully AI-generated figures. Watts said he could see “anchors at major news outlets being manipulated into saying things they didn't say.” That is “much more likely” than a fully synthetic effort.
In a report last month, Microsoft researchers said there were few examples of AI-generated content having an impact in the offline world.
“Rarely have nation-states' uses of generative AI-enabled content achieved much reach across social media, and in only a few cases have we seen any genuine audience deception from such content,” the report states.
Instead, audiences are gravitating toward simpler fabrications, such as fake text stories emblazoned with spoofed media logos.
Watts said it is possible that a fully AI-generated video could influence an election, but the tools capable of producing such a clip are not yet publicly available. The most effective AI video messenger may not be a newscaster at all, but the format underscores how important video is to states seeking to confuse voters.
Threat actors are also waiting for an example of an AI-generated video that captures viewers' attention, so they can replicate it. OpenAI and Google have demonstrated AI video generators in recent months, but neither has released its tool to the public.
“The first effective use of synthetic personas in videos that people actually watch will happen in the commercial space. And then you'll see threat actors move there,” Watts said.
Additional research by Chi Hui Lin