AI chatbots can sway voters better than political advertisements
“One conversation with an LLM has a very beneficial effect on salient electoral choices,” says Gordon Pennycook, a psychologist at Cornell University who co-authored the study. LLMs can persuade people more effectively than political ads, he says, because they generate far more information in real time and deploy it strategically over the course of a conversation.
In the paper, published in Nature, the researchers recruited more than 2,300 participants to converse with a chatbot two months before the 2024 US presidential election. The chatbot, which was instructed to advocate for one of the front-runners, was relatively persuasive, especially when discussing the candidates’ political platforms on issues such as the economy and health care. Donald Trump supporters who spoke to the AI model that favored Kamala Harris became slightly more likely to support Harris, moving toward her by 3.9 points on a 100-point scale. This was nearly four times the measured impact of political ads during the 2016 and 2020 elections. The AI model favoring Trump moved Harris supporters 2.3 points toward Trump.
In similar experiments run in the lead-up to the 2025 Canadian federal election and the 2025 Polish presidential election, the team found an even larger effect: chatbots shifted opposition voters’ positions by about 10 points.
Older theories of politically motivated reasoning hold that partisan voters are unmoved by facts and evidence that conflict with their beliefs. But the researchers found that the chatbots, which were built on a range of models including GPT and DeepSeek variants, were more persuasive when asked to use facts and evidence than when asked not to. “People update based on the facts and information the model gives them,” says Thomas Costello, a psychologist at American University who worked on the project.
The problem is that some of the “facts” and “evidence” provided by the chatbots were incorrect. In all three countries, chatbots defending right-leaning candidates made more inaccurate claims than did those defending left-leaning candidates. The underlying models are trained on vast amounts of human-written text, meaning they reproduce real-world patterns, including “political communication that comes from the right, which tends to be less accurate” according to studies of partisan social-media posts, Costello says.
In another study, published this week in Science, an overlapping team of researchers investigated what makes these chatbots so compelling. They deployed 19 LLMs to converse with nearly 77,000 participants from the UK on more than 700 policy issues, varying factors such as computational power, training techniques, and rhetorical strategies.
The most effective way to make the models persuasive was to prompt them to back their arguments with facts and evidence, and then to train them further on examples of persuasive conversations. The most convincing model shifted participants who initially disagreed with a political statement by 26.1 points toward agreement. “These are really big treatment effects,” says Kobi Hackenberg, a research scientist at the AI Security Institute in the UK, who worked on the project.