With Election Day so close, you might be thinking about using AI in your ads to save you time and money in these final weeks. Before you jump in, be sure you know where AI can be used, which states require disclaimers, and how those disclaimers could impact your message.
Federal Updates
New FCC Rules on AI: What They Mean for Your Texting Strategy
Texting is key for reaching voters and donors, and AI makes it more efficient and personal. But the FCC’s newly proposed rules could complicate this by requiring disclaimers on AI-generated text messages and robocalls. Those disclaimers could turn off voters before they hear your message, including GOTV outreach, and smaller campaigns might struggle with the added costs. The AAPC is advocating for more targeted rules that protect political free speech and ensure campaigns can still reach voters effectively while addressing transparency concerns.
State Updates
New Delaware Law Targets Deepfakes in Political Campaigns
Governor John Carney signed HB 316 into law, making it illegal to distribute altered media—like manipulated images, audio, or video—within 90 days of an election if the goal is to deceive voters or harm a candidate. There’s an important exception: if the content includes a clear disclosure that it’s been altered or artificially generated, or if the candidate has consented to it, it’s allowed. Satire, parody, and genuine news outlets are also protected, as long as they include the label: “This audio/video/image has been altered or artificially generated.”
The new law is a direct response to concerns about synthetic media—content that digitally alters someone’s appearance, speech, or conduct. While the AAPC condemns the use of AI for deceptive purposes, it believes AI can be an appropriate and valuable tool when used ethically, enhancing campaign effectiveness without misleading voters.
Latest Research
AI in Political Ads: Boosting Efficiency or Hurting Trust?
With over $10 billion expected to be spent on ads this cycle, it’s hard to ignore the potential of AI to save time and money without sacrificing quality. Two new studies show that AI can be an effective tool for creating persuasive political ads, but there’s a catch: labeling requirements might limit the impact.
Higher Ground Labs (HGL) explored whether AI-generated ads can be as persuasive as those made by humans while also cutting costs. Their research found that they can, if done right. Teams that fully embraced AI and had the expertise to use it effectively produced ads that were just as persuasive as those created solely by human teams. But that’s only part of the story. New research from NYU’s Center on Technology Policy highlights a potential “backfire effect.” Director Scott Babwah Brennan explains that ads labeled as AI-generated were seen as less trustworthy, with voters more skeptical of the candidate behind the ad. You can hear more from Dr. Brennan on POLITICO’s Tech podcast, where he discusses the broader impact of AI in political messaging.
The AAPC Foundation and AAPC Advocacy are keeping a close eye on innovations in this space, especially as states introduce new labeling requirements. Do you have an AI success story? Share it with us and our partners at the NYU Center on Technology Policy and NYU Center on Social Media and Policy at the AIPoliticalArchive.org.