Chatbots’ false election information may deter voters from voting in the U.S.

A recent report published by artificial intelligence experts and a bipartisan group of election officials has raised concerns that popular chatbots are generating false and misleading information that could disenfranchise voters during the ongoing presidential primaries in the U.S. The report found that chatbots trained on internet text, including OpenAI’s GPT-4, Google’s Gemini, Meta’s Llama 2, and Mistral’s Mixtral, provided inaccurate and harmful responses to basic questions about the voting process and the democratic process more broadly.

During a workshop at Columbia University, participants tested five large language models’ responses to questions about the election, such as where to find the nearest polling place. The report, which synthesized the workshop findings, showed that more than half of the chatbots’ responses were inaccurate, with 40% categorized as harmful because they perpetuated outdated and incorrect information that could restrict voting rights. For example, Google’s Gemini incorrectly stated that there was no voting precinct in ZIP code 19121, a majority-Black neighborhood in Philadelphia, highlighting the chatbots’ deficiencies in providing accurate election information.

The workshop used a custom software tool to query the chatbots simultaneously and evaluate their responses, an approach that may not fully reflect how people interact with chatbots in real life. Even so, the findings underscore the challenges generative AI poses in disseminating misinformation and highlight the need for responsible use of AI tools, particularly in the context of elections. Major technology companies have pledged to adopt precautions against AI-generated false information, but the report’s findings suggest that more oversight and regulation may be necessary to ensure information integrity during elections.

The report’s findings also point to the broader risk that AI tools will amplify the spread of false and misleading information during elections, as demonstrated by AI robocalls that mimicked President Joe Biden’s voice to discourage voting in New Hampshire’s primary election. Although attempts at AI-generated election interference have already been observed, Congress has yet to pass laws regulating AI in politics, leaving tech companies to self-govern. This lack of regulation raises questions about whether chatbot creators will honor their pledges to promote information integrity and prevent the dissemination of false information to voters.

Overall, the report underscores the urgency of overseeing AI tools in the context of elections. With chatbots such as Gemini, Llama 2, and Mixtral giving inaccurate answers to basic election-related questions, voters’ access to reliable information is at stake. As the presidential primaries continue, addressing the challenges posed by AI-generated misinformation remains a critical priority for ensuring the integrity of the electoral process.

mediawatchbot