Microsoft’s and Google’s chatbots’ inability to provide accurate responses about election results comes amid the most significant global election year in modern history, just five months before the 2024 US election. Thirty percent of Americans still believe the baseless claim that the 2020 election was stolen, a narrative continually pushed by Trump and his supporters. (Source: Image by RR)

Six Months Before U.S. Presidential Election, AI Tools Struggle with Basic Election Queries

By contrast, AI chatbots such as OpenAI’s ChatGPT-4, Meta’s Llama, and Anthropic’s Claude affirm Biden’s 2020 victory and offer detailed information about both historical and contemporary election results. This disparity, as reported by wired.com, highlights a significant gap between Microsoft’s and Google’s AI offerings and those of their competitors, especially in handling sensitive, factual information about elections. The refusal of Microsoft’s and Google’s chatbots to address election results comes at a critical time: a pivotal US election is just months away, and unfounded beliefs in widespread voter fraud remain prevalent. This limitation could undermine trust in these AI tools at a moment when accurate information is essential to public discourse.

Google and Microsoft have acknowledged these restrictions, citing caution and the need for further development to meet their expectations for the 2024 elections. Microsoft stated that some election-related prompts might be redirected to search while improvements are made to its tools. In December, WIRED reported that Microsoft’s AI chatbot had previously responded to political queries with conspiracies, misinformation, and outdated information, indicating systemic problems with its accuracy. These included referencing in-person voting in irrelevant contexts and suggesting extremist content channels when asked for election information, raising significant concerns about the chatbot’s reliability.

Both companies are taking steps to refine their AI tools to ensure they provide accurate and reliable information while also safeguarding against misinformation. Research shared by AIForensics and AlgorithmWatch claimed that Copilot’s election misinformation was systemic, noting inaccuracies in reporting polling numbers, incorrect election dates, and made-up controversies. These issues underscore the importance of ongoing development and rigorous testing to prevent the dissemination of false information, especially in the context of elections.

Microsoft and Google have emphasized their commitment to enhancing voter protection and ensuring their AI tools meet high standards of performance for the upcoming elections. A Microsoft spokesperson stated the company is dedicated to addressing these issues and preparing its tools to meet expectations for the 2024 elections. This involves not only correcting inaccuracies but also building systems that can handle sensitive topics with the necessary nuance and factual accuracy, ultimately aiming to protect voters, candidates, campaigns, and election authorities from misinformation and its potential impacts.

read more at wired.com