AI Chatbots Not Ready for Election Prime Time, Study Shows

The Risks of AI Chatbots in Disseminating False Information to Voters

(Bloomberg) — In a year when more than 50 countries are holding national elections, a new study shows the risks that artificial intelligence chatbots pose by disseminating false, misleading or harmful information to voters.

The Study

The AI Democracy Projects, which brought together more than 40 experts, including US state and local election officials, journalists (among them one from Bloomberg News), and AI researchers, built a software portal to query five leading AI large language models: OpenAI's GPT-4, Alphabet Inc.'s Gemini, Anthropic's Claude, Meta Platforms Inc.'s Llama 2, and Mistral AI's Mixtral. The group developed questions that voters might ask about election-related topics and rated 130 responses for bias, inaccuracy, incompleteness, and harm.
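As a rough illustration of that workflow, the sketch below models only the rating step: one record per model response, with a boolean flag for each criterion the study scored. This is a hypothetical stand-in assumed for illustration, not the AI Democracy Projects' actual portal code; the Rating class, the flag_rate helper, and the sample data are all invented.

```python
from dataclasses import dataclass, field

# Model names come from the article; everything else is an assumption.
MODELS = ["GPT-4", "Gemini", "Claude", "Llama 2", "Mixtral"]

@dataclass
class Rating:
    """One expert review of one model's answer to one voter question."""
    model: str
    question: str
    # Boolean flag per criterion, e.g. {"inaccurate": True, "biased": False}.
    flags: dict = field(default_factory=dict)

def flag_rate(ratings, criterion):
    """Share of each model's rated answers flagged for a given criterion."""
    rates = {}
    for m in MODELS:
        rows = [r for r in ratings if r.model == m]
        if rows:
            flagged = sum(1 for r in rows if r.flags.get(criterion))
            rates[m] = flagged / len(rows)
    return rates

# Two mock ratings, then the per-model inaccuracy rate.
ratings = [
    Rating("Gemini", "Where do I vote?", {"inaccurate": True, "incomplete": True}),
    Rating("GPT-4", "Where do I vote?", {"inaccurate": False}),
]
print(flag_rate(ratings, "inaccurate"))  # {'GPT-4': 0.0, 'Gemini': 1.0}
```

Aggregating expert flags per model and per criterion in this way is how headline figures such as "more than 60% inaccurate" would emerge from individual ratings.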

All of the models performed poorly. Just over half of the answers they gave were inaccurate, and 40% were harmful. Gemini, Llama 2, and Mixtral had the highest rates of inaccurate answers, each above 60%. Gemini also returned the highest rate of incomplete answers, at 62%, while Claude produced the most biased answers, at 19%.

OpenAI's GPT-4 stood out with a lower rate of inaccurate or biased responses, but that still meant 1 in 5 of its answers was inaccurate, according to the study.

Concerns and Recommendations

“The chatbots are not ready for primetime when it comes to giving important nuanced information about elections,” said Seth Bluestein, a Republican city commissioner in Philadelphia, in a statement issued by the AI Democracy Projects.

With so many elections around the world in 2024, the stakes have never been higher. While disinformation has been a challenge for voters and candidates for years, it has been turbocharged by the rise of generative AI tools that can create convincing fake images, text, and audio.

The big tech companies and newer AI startups alike are working to establish safeguards for election integrity. Anthropic, for example, recently said it is redirecting voting-related prompts away from its service. Alphabet's Google said last year that it would restrict the types of election-related queries for which its AI would return responses. And OpenAI, Amazon.com Inc., Google, and 17 other major players in AI technology have formed a consortium to try to prevent AI from being used to deceive voters in upcoming global elections.

But more guardrails are needed before the AI models are safe for voters to use, according to the report.

The experts noted several instances in which the AI chatbots provided inaccurate or incomplete information that could disenfranchise voters. The study stressed the importance of seeking critical election-related information from authoritative sources, such as local election websites, rather than relying on AI chatbots.

Conclusion

As the technology continues to advance, it is crucial to address the risks and limitations of AI chatbots in disseminating information, especially around high-stakes events such as national elections. Ensuring the accuracy and reliability of the information these models provide is essential to safeguarding the integrity of the democratic process.