ChatGPT could be used by hackers to target 2024 elections

The rise of generative AI tools like ChatGPT has expanded the range of attackers capable of targeting elections around the world in 2024, according to a new report from cybersecurity firm CrowdStrike. Both state-linked hackers and allied “hacktivists” are increasingly using AI tools to carry out cyberattacks and scams, with groups linked to Russia, China, North Korea, and Iran testing new ways to use these technologies against various countries. With roughly half the world’s population living in countries that hold elections in 2024, the use of generative AI to target those contests could have a significant impact.

Adam Meyers, head of counter-adversary operations at CrowdStrike, warns that the use of generative AI in cyberattacks is expected to worsen over the course of the year. Analysts have detected the use of these models by spotting comments in attack scripts that appear to have been inserted by tools like ChatGPT. If state-linked actors continue to refine their use of AI, it could democratize high-quality disinformation campaigns and accelerate the pace of cyberattacks.
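
The report does not spell out how analysts identify those telltale comments, but the general idea can be conveyed with a simple heuristic scan. The sketch below is an illustrative assumption in Python: the comment patterns are invented for demonstration and are not indicators published by CrowdStrike, whose analysts rely on far richer signals.

```python
# Illustrative sketch only: a heuristic scanner that flags scripts containing
# comment phrasing sometimes left behind by LLM coding assistants.
# The patterns are hypothetical examples, not real threat-intel indicators.
import re
import sys
from pathlib import Path

# Hypothetical telltale phrases chosen for demonstration purposes.
SUSPECT_COMMENT_PATTERNS = [
    re.compile(r"#\s*Replace this with your actual", re.IGNORECASE),
    re.compile(r"#\s*This (function|script) (will|is designed to)", re.IGNORECASE),
    re.compile(r"#\s*Note that you may need to", re.IGNORECASE),
]

def flag_script(path: Path) -> list[str]:
    """Return the comment lines in `path` that match any suspect pattern."""
    hits = []
    for line in path.read_text(errors="ignore").splitlines():
        if any(p.search(line) for p in SUSPECT_COMMENT_PATTERNS):
            hits.append(line.strip())
    return hits

if __name__ == "__main__":
    # Usage: python scan_comments.py script1.py script2.py ...
    for arg in sys.argv[1:]:
        matches = flag_script(Path(arg))
        if matches:
            print(f"{arg}: {len(matches)} suspect comment(s)")
            for m in matches:
                print(f"  {m}")
```

In practice, real detection combines this kind of stylistic signal with behavioral analysis and infrastructure tracking; a comment heuristic alone would produce many false positives.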

The CrowdStrike report predicts that adversaries will likely use AI tools to generate deceptive but convincing narratives for information operations against elections in 2024. Politically active partisans within countries holding elections may also use generative AI to create and spread disinformation within their own communities. In both cases, the concern is that AI makes it easier to manipulate public opinion and influence election outcomes.

The report also notes that the digital battleground has expanded beyond active conflict zones like Ukraine and Gaza, with groups linked to Yemen, Pakistan, Indonesia, and other regions also turning to generative AI tools for cyberattacks. The widespread adoption of AI for malicious purposes complicates detection and mitigation, underscoring the need for stronger cybersecurity measures.

In conclusion, the use of generative AI tools like ChatGPT in cyberattacks targeting elections is a growing concern, as state-linked hackers and hacktivists increasingly experiment with the technology. By lowering the barrier to high-quality disinformation campaigns, these tools threaten election integrity and public trust. As the digital battleground expands globally, cybersecurity experts will need to remain vigilant in detecting and countering AI-assisted attacks to safeguard democratic processes and national security.
