Chatbots are surprisingly effective at debunking conspiracy theories
But facts, it turns out, are not dead. Our findings on conspiracy theories are the latest — and perhaps the most extreme — in an emerging body of research demonstrating the persuasive power of facts and evidence. For example, while it was once thought that correcting falsehoods consistent with one’s politics would prompt people to dig in and believe them even more, the idea of a “backfire effect” has itself been debunked: studies consistently find that corrections and warning labels reduce belief in and sharing of falsehoods, even among people who distrust the fact-checkers issuing the corrections. Likewise, evidence-based arguments can change partisans’ opinions on political issues, even when they are explicitly reminded that the argument conflicts with their party leader’s position. And simply prompting people to consider whether content is accurate before sharing it can significantly reduce the spread of misinformation.
If facts are not dead, there is hope for democracy, although realizing that hope requires an agreed-upon set of facts within which competing factions can operate. There is already widespread partisan disagreement over basic facts, and a worrying level of belief in conspiracy theories. But this does not necessarily mean that our minds are irreparably distorted by our politics and identities. When people are confronted with evidence — even uncomfortable or inconvenient evidence — many of them do change their thinking in response. Thus, if accurate information can be disseminated widely enough, perhaps with the help of artificial intelligence, we may be able to re-establish the shared factual ground missing from society today.
You can try our debunking bot for yourself at debunkbot.com.
Thomas Costello is an assistant professor of social and decision sciences at Carnegie Mellon University. His research combines psychology, political science, and human-computer interaction to examine where our views come from, how they differ from person to person, and why they change—as well as the sweeping effects of artificial intelligence on these processes.
Gordon Pennycook is the Dorothy and Ariz Mehta Faculty Leadership Fellow and associate professor of psychology at Cornell University. His research examines the causes and consequences of analytic thinking, exploring how intuitive versus deliberative thinking shapes decision-making in order to understand the errors behind issues such as climate inaction, health behaviors, and political polarization.
David Rand is a professor of information science, marketing, management communication, and psychology at Cornell University. He uses approaches from computational social science and cognitive science to explore how human-AI dialogue can correct inaccurate beliefs, why people share falsehoods, and how to reduce political polarization and promote cooperation.
2025-10-30 10:00:00



