China’s influence operations have been using artificial intelligence to spread disinformation and sow discord globally. A new report from Microsoft Threat Intelligence details how these actors have used AI to fake political endorsements, amplify outrage over divisive issues, and spread conspiracy theories, experimenting with new media formats and refining AI-generated or AI-enhanced content along the way.
One notable operation described in the report involved AI-generated content featuring Taiwanese political figures ahead of the country’s crucial January elections. An AI-generated audio clip falsely depicted Foxconn founder Terry Gou endorsing another candidate, when in fact he had endorsed no one. AI-generated news anchors and memes were also deployed to mislead audiences and sway the vote. According to Microsoft Threat Intelligence, this is the first time it has observed a nation-state actor using AI-generated content in an attempt to influence a foreign election.
The report also highlights memes designed to stoke public anger over Japan’s disposal of treated nuclear wastewater, showing how AI-assisted content is being used to manipulate public opinion and deepen divisions. It warns that as voters in India, South Korea, and the United States head to the polls, similar AI-driven tactics are likely to be deployed against those elections as well.
Beyond faked endorsements and manufactured outrage, Chinese influence operations have also used AI-generated content to push conspiracy theories. The report cites instances in which Chinese actors spread claims that the U.S. government was behind natural disasters in Hawaii and Kentucky, further underscoring how AI is being used to sow disinformation and confusion among the public.
The report concludes that Chinese influence actors are doubling down on familiar targets while adopting more sophisticated techniques, such as AI-generated content, to pursue their goals. As AI’s role in influence operations continues to evolve, governments and tech companies will need to remain vigilant in detecting and countering disinformation campaigns. The findings are a reminder of the growing threat posed by state-backed actors using AI to manipulate public opinion and interfere in elections worldwide.