The era of AI persuasion in elections is about to begin
All of this means that actors, whether well-resourced organizations or grassroots collectives, have a clear path to deploying politically persuasive AI at scale. Early demonstrations have already occurred elsewhere in the world. In India’s 2024 general election, tens of millions of dollars were spent on AI to segment voters, identify swing voters, and deliver personalized messages through automated calls and chatbots. In Taiwan, officials and researchers have documented China-linked operations using generative AI to produce more convincing disinformation, from deepfakes to language model outputs biased toward messaging approved by the Chinese Communist Party.
It’s only a matter of time before this technology reaches the US election, if it hasn’t already. Foreign adversaries are well positioned to move first. China, Russia, Iran, and other countries already maintain networks of troll farms, bot accounts, and covert influence operators. Combined with open-source language models that generate fluent, localized political content, these operations can be dramatically amplified. There is no longer any need for human operators who understand the language or the local context: with the right prompt, a model can impersonate a neighborhood organizer, a union representative, or a disgruntled parent without anyone ever setting foot in the country. Political campaigns themselves are unlikely to lag far behind. Every major operation already segments voters, tests messages, and refines delivery; AI collapses the cost of doing all of it. Instead of testing a slogan in a poll, a campaign can generate hundreds of arguments, deliver them one-on-one, and monitor in real time which ones change minds.
The basic truth is simple: persuasion has become cheap and effective. Campaigns, political action committees, foreign actors, advocacy groups, and opportunists are all playing on the same field, and there are very few rules.
A policy vacuum
Most policymakers have not caught up. Over the past several years, US lawmakers have focused on deepfakes while largely ignoring the broader persuasive threat.
Other governments have begun to take the problem more seriously. The EU’s 2024 Artificial Intelligence Act classifies AI systems intended to influence elections as a “high-risk” use case, so any system designed to influence voting behavior is now subject to strict requirements. Purely administrative tools, such as AI systems used to plan campaign events or optimize logistics, are exempt; tools intended to shape political beliefs or voting decisions are not.
By contrast, the United States has so far declined to draw any meaningful lines. There are no binding rules on what constitutes a political influence operation, no enforceable standards to guide implementation, and no common infrastructure for tracking AI-generated persuasion across platforms. Federal and state governments have shown some interest in regulation: the Federal Election Commission is enforcing outdated fraud provisions, the Federal Communications Commission has proposed narrow disclosure rules for radio ads, and a number of states have passed deepfake laws. But these efforts are fragmented and leave most digital campaigning untouched.
2025-12-05 10:00:00