Cornell-led studies suggest conversational AI can meaningfully persuade voters—often by rapidly generating dense ‘factual’ arguments—while also increasing the risk of inaccuracies as persuasion is optimized. (Source: Image by RR)

Studies Highlight Growing Need for Disclosure, Auditing, and Content Controls

New research from Cornell University suggests that conversational AI chatbots can significantly influence political attitudes after only brief interactions, raising fresh concerns about how AI-generated messaging could shape elections. As noted in an article on theaiinsider.tech, after conducting experiments in multiple countries, researchers found that these systems can shift voters' preferences on candidates and policies by amounts exceeding the measured impact of traditional political advertising in past election cycles.

In a set of studies reported in Nature and led by David Rand and Gordon Pennycook, chatbots were tasked with persuading voters in contexts including the 2024 U.S. presidential election, the 2025 Canadian federal election, and the 2025 Polish presidential race. The results indicated measurable movement in voter opinion in the U.S. and stronger effects in Canada and Poland, where opposition voters reportedly shifted their attitudes and intentions by roughly 10 percentage points, especially when the chatbot was framed as polite and fact-based.

A key mechanism behind the persuasion effect appears to be volume: the chatbots' ability to produce dense streams of "factual" claims in support of a position. When researchers instructed the models to avoid making factual claims, their persuasive power largely collapsed, suggesting that evidence-heavy argumentation (even when incomplete) is central to their influence. However, the researchers also flagged a major hazard: while many claims were accurate, some were not, and persuasion-optimized behavior increased the risk of errors and misleading omissions.

A companion Science study, conducted with the UK AI Security Institute, extended the analysis to a much larger sample, reporting that persuasion strength increased with larger models and with instructions or training that pushed the systems to provide more "facts." The papers collectively argue that as chat-style AI becomes embedded in everyday tools, the probability of voters encountering persuasive AI interactions rises, making safeguards, transparency, and resilience strategies increasingly important ahead of real campaign deployment.

Read more at theaiinsider.tech