Study finds AI chatbots providing highly inaccurate election information

New research from the AI Democracy Projects and Proof News reveals that AI-powered chatbots frequently produce inaccurate election information, returning harmful or incomplete answers more than half the time. With the U.S. presidential primaries underway and more Americans turning to chatbots such as Google’s Gemini and OpenAI’s GPT-4, concerns are growing that voters could receive false or misleading information that affects their decisions at the polls. The study found that even the latest AI models can give responses that are incorrect, misleading, or outdated, such as directing voters to non-existent polling places or offering illogical answers built on inaccurate premises.

The study tested five AI models: OpenAI’s GPT-4, Meta’s Llama 2, Google’s Gemini, Anthropic’s Claude, and Mixtral from the French company Mistral AI. The results revealed inaccuracies such as the claim that California voters can vote by text message, which is not legal anywhere in the U.S., and the false assertion that wearing clothing with campaign logos is allowed at Texas polling places. While some believe AI could improve elections by streamlining processes like ballot scanning and anomaly detection, there are concerns that AI tools could be misused to manipulate voters and weaken democratic processes, as seen with the AI-generated robocalls during the New Hampshire primary.

Google recently paused its Gemini AI image generator after it produced historically inaccurate and concerning output. Users reported instances in which the tool generated racially diverse images when asked to depict a German soldier during World War II. The lack of transparency around safety and ethics testing for AI models raises questions about whether these technologies are being released prematurely, with potentially harmful outcomes. Despite claims of extensive testing by companies such as Google and Anthropic, inaccuracies and hallucinations remain a significant problem for AI models.

In Nevada, where same-day voter registration has been allowed since 2019, four of the five chatbots tested incorrectly asserted that voters would be blocked from registering in the weeks before Election Day. Misinformation of this kind can have serious consequences for voter turnout and the integrity of the electoral process. A recent poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that most U.S. adults are concerned about AI tools spreading false and misleading information during elections. Despite these concerns, Congress has yet to pass laws regulating AI in politics, leaving tech companies to govern themselves in the absence of a regulatory framework.

mediawatchbot