AI’s Election Woes: Models Fail to Guide Voters
In a recent study, major AI services stumbled when asked to handle voting-related questions. The results raise concerns about the reliability of AI models as a source of crucial election information.
Testing AI’s Electoral Savvy
Researchers from Proof News evaluated five prominent AI models: Claude, Gemini, GPT-4, Llama 2, and Mixtral. They posed everyday voting questions on topics such as voter registration, polling locations, and eligibility.
Disappointing Results
The AI models performed poorly, often providing inaccurate, incomplete, or biased answers. For example, all of the models incorrectly stated that voter registration in Nevada had closed weeks before the election, even though the state offers same-day registration.
Expert Concerns
Experts on the evaluation panel expressed alarm at the models’ shortcomings. Bill Gates, an Arizona elections official, remarked, “People are using models as their search engine, and it’s kicking out garbage.”
Model Variations
GPT-4 had the fewest incorrect answers, while Claude gave the most biased responses. Gemini had the most incomplete answers and also provided harmful misinformation, such as incorrectly claiming there was no voting precinct in a predominantly Black neighborhood.
Implications for Voters
The study’s findings highlight the risks of relying on AI for election-related information. AI models are not yet reliable enough to provide accurate guidance on voting procedures.
Call to Action
Instead of relying on AI for election information, voters should consult official sources such as government election websites or trusted news outlets. Avoiding AI for these critical questions helps ensure that voters get accurate, reliable information when deciding how and where to cast their ballots.