Beijing has "doubled down" on its targets and increased the sophistication of its influence operations, Microsoft Threat Analysis Center general manager Clint Watts said in a report released late Thursday.
"China is using fake social media accounts to poll voters on what divides them most to sow division and possibly influence the outcome of the US presidential election in its favor," Watts said in the report.
"China has also increased its use of AI-generated content to further its goals around the world" as well as in the US.
Chinese influence operations continue to "opportunistically jump" on events such as a train derailment in Kentucky and wildfires in Maui to promote mistrust of the US government, according to the report.
The polling about controversial US domestic issues "indicates a deliberate effort to understand better which US voter demographic supports what issue or position and which topics are the most divisive, ahead of the main phase of the US presidential election," Watts wrote.
The report concluded there is little evidence that the influence operations have succeeded in swaying opinions thus far.
The threat center reported late last year that social media accounts "affiliated" with the Chinese government had impersonated US voters in an effort to influence the 2022 midterm elections.
"This activity has continued and these accounts nearly exclusively post about divisive US domestic issues such as global warming, US border policies, drug use, immigration, and racial tensions," Watts wrote.
"They use original videos, memes, and infographics as well as recycled content from other high-profile political accounts."
Microsoft saw a surge in AI-generated content used to augment China-linked online influence operations aimed at the presidential election in Taiwan in January, according to Watts.
"With major elections taking place around the world this year, particularly in India, South Korea and the United States, we assess that China will, at a minimum, create and amplify AI-generated content to benefit its interests," Watts wrote.
Microsoft's report also noted that North Korea has begun to use AI to steal cryptocurrency, attack supply chains, and gather military intelligence more effectively.
Meta to start labeling AI-generated content in May
Washington (AFP) April 5, 2024 -
Facebook and Instagram giant Meta said Friday it will start labeling AI-generated media in May, as it tries to reassure users and governments over the risks of deepfakes.
The social media juggernaut added that it will no longer remove manipulated images and audio that don't otherwise break its rules, relying instead on labeling and contextualization, so as not to infringe on freedom of speech.
The changes come as a response to criticism from the tech giant's oversight board, which independently reviews Meta's content moderation decisions.
The board in February requested that Meta urgently overhaul its approach to manipulated media given the huge advances in AI and the ease of manipulating media into highly convincing deepfakes.
The board's warning came amid fears of rampant misuse of artificial intelligence-powered applications for disinformation on platforms in a pivotal election year not only in the United States but worldwide.
Meta's new "Made with AI" labels will identify content created or altered with AI, including video, audio, and images. Additionally, a more prominent label will be used for content deemed at high risk of misleading the public.
"We agree that providing transparency and additional context is now the better way to address this content," Monika Bickert, Meta's Vice President of Content Policy, said in a blog post.
"The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling," she added.
These new labeling techniques are linked to an agreement made in February among major tech giants and AI players to cooperate on ways to crack down on manipulated content intended to deceive voters.
Meta, Google and OpenAI had already agreed to use a common watermarking standard that would invisibly tag images generated by their AI applications.
Identifying AI content "is better than nothing, but there are bound to be holes," Nicolas Gaudemet, AI Director at Onepoint, told AFP.
He cited the example of open-source software, which doesn't always use the type of watermarking adopted by AI's big players.
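Neither company spells out the mechanics, but one widely used provenance signal is a machine-readable marker embedded in an image's metadata. As a rough sketch (not Meta's actual detection pipeline), the Python check below scans a file for the IPTC "trainedAlgorithmicMedia" source-type URI that some generators write into a JPEG's XMP packet; the function name and the naive byte scan are illustrative assumptions, and this approach catches only metadata tags, not pixel-level invisible watermarks.

```python
# Illustrative sketch only: checks for an IPTC provenance marker in a
# file's embedded metadata. Real pipelines layer invisible pixel-level
# watermarks on top, which a byte scan like this cannot detect.

AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file carries the IPTC 'trained algorithmic
    media' source-type marker in its embedded XMP metadata."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP is stored as plain UTF-8 XML inside the file, so a substring
    # search suffices for this sketch; stripped metadata defeats it.
    return AI_MARKER in data

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        label = "AI marker found" if looks_ai_generated(image_path) else "no marker"
        print(f"{image_path}: {label}")
```

Stripping metadata defeats a check like this, which is Gaudemet's point: tags and watermarks help only when the tools that made the image cooperate.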
- Biden deepfakes -
Meta said its rollout will occur in two phases: labeling of AI-generated content will begin in May 2024, while removal of manipulated media based solely on the old policy will cease in July.
Under the new policy, content manipulated with AI will remain on the platform unless it violates other rules, such as those prohibiting hate speech or voter interference.
Recent examples of convincing AI deepfakes have only heightened worries about the easily accessible technology.
The board's list of requests was part of its review of Meta's decision to leave a manipulated video of US President Joe Biden online last year.
The video showed Biden voting with his adult granddaughter, but was manipulated to falsely appear that he inappropriately touched her chest.
In a separate incident not linked to Meta, a robocall impersonating Biden that was pushed out to tens of thousands of voters urged people not to cast ballots in the New Hampshire primary.
In Pakistan, the party of former prime minister Imran Khan has used AI to generate speeches from its jailed leader.