FOR IMMEDIATE RELEASE

September 19, 2024

 

AAPC CONDEMNS THE USE OF DEEPFAKES IN POLITICAL ADVERTISEMENTS 

Notice of Proposed Rulemaking on Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements

 

American Association of Political Consultants (AAPC)
1750 Tysons Boulevard, Suite 1500
McLean, VA 22102 

To: 

Chairwoman Jessica Rosenworcel
Federal Communications Commission
45 L Street NE
Washington, DC 20554

Dear Chairwoman Rosenworcel, 

We are writing on behalf of the American Association of Political Consultants (“AAPC”) to oppose the FCC’s proposed rule on the disclosure of AI-generated content in political advertisements. The AAPC believes that the proposed rule is overly broad, arbitrary and inconsistent, likely to harm protected political speech, and beyond the FCC’s statutory authority. Above all, it fails to address the true problem of deceptive AI-generated content.

Founded in 1969, the AAPC is a bipartisan organization of political and public affairs professionals dedicated to improving democracy. With more than 1,700 members worldwide, it is the largest association of political and public affairs professionals in the world. Its Board of Directors comprises 32 members, evenly divided between Republicans and Democrats.

As a preliminary matter, the AAPC highlights its longstanding position that it staunchly opposes the use of deepfakes in political advertisements. As early as May 2023, the AAPC issued a public statement that its bipartisan Board of Directors unanimously agreed to condemn the use of generative AI “deepfake” content in political campaigns.1 In that press release, the AAPC Board “unanimously agreed that the use of ‘deepfake’ generative Artificial Intelligence (AI) content is [a] dramatically different and dangerous threat to democracy.”2 The AAPC’s position on AI-generated deepfakes has not changed: it remains staunchly opposed to the use of such deception in political advertisements.

Nevertheless, the AAPC opposes the FCC’s proposed rule for four key reasons, detailed below. At bottom, the AAPC’s opposition is grounded in the fact that the proposed rule fails to address the true problem of deceptive and fake advertisements in politics. AI is neither a necessary nor a sufficient condition for the creation of deceptive or fake media; the FCC’s broad proposed rule therefore unfairly and improperly regulates AI use while failing to combat the underlying problem of deceptive and fake advertisements.

  1. The Proposed Rule is Overly Broad

First, the AAPC believes that the FCC’s proposed rule is overly broad. By requiring the disclosure of all AI-generated content, the FCC casts a net so wide that it fails to target the deceptive AI-generated content the rule purports to address. The proposed rule defines AI-generated content as follows:

an image, audio, or video that has been generated using computational technology or other machine-based system that depicts an individual’s appearance, speech, or conduct, or an event, circumstance, or situation, including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors. 

This definition of AI-generated content is so broad that it would sweep in many routine and harmless uses of AI in photo editing, color retouching, basic video editing, and voice narration. By requiring disclosure of every use of AI-generated content, the rule unfairly penalizes political campaigns for using AI tools to perform tasks that could be completed, less efficiently, without them. For example, suppose two political campaigns run advertisements using photos in which their candidates’ hair color has been edited. If the first campaign edited the hair color with an AI tool, it would be required to disclose that innocuous AI use in the advertisement. If the second campaign made the same edit manually, without an AI tool, it would face no disclosure requirement. Either way, the images have been edited; yet the proposed rule draws an arbitrary distinction and simply rewards the inefficient nonuse of common, widely available technologies.

The arbitrary distinction becomes even more problematic in extreme cases. Suppose two opposing political campaigns run advertisements back-to-back on television. The first campaign runs a positive ad and uses an AI tool to edit its candidate’s hair color; as a result, the ad requires an AI disclosure. The second campaign then runs a deceptive attack ad against the first campaign’s candidate, using misleading stock photos and deceptive editing, but no AI. The result: the first, non-deceptive ad carries an AI disclosure, which could undermine a voter’s perception of its authenticity and veracity, while the truly deceptive ad from the second campaign carries no disclosure at all.

Moreover, the definition of AI-generated content is also overbroad because it reaches any content generated “using computational technology or other machine-based system.” Today, essentially all online content is generated using some form of computational technology or machine-based system. As such, virtually every political advertisement could be subject to the FCC’s AI disclosure requirement, making truly deceptive AI-generated ads indistinguishable from ads that simply use an AI tool to change a candidate’s hair color.

  2. The Proposed Rule is Arbitrary and Inconsistent

Second, the AAPC believes that the FCC’s proposed rule is arbitrary and would actually exacerbate the problem it purportedly aims to solve. The proposed rule creates an imbalanced regulatory system by imposing disclosure requirements exclusively on broadcasters and cable companies while exempting digital platforms. This selective approach not only undermines the rule’s overall objectives but could also worsen the problem of deceptive AI-generated ads. While viewers of traditional media outlets, like news stations or cable networks, would be informed when content is AI-generated, those consuming content on digital platforms such as YouTube, TikTok, or social media would be left uninformed. By regulating traditional media but leaving digital platforms unregulated, the FCC risks fostering a false sense of security about content authenticity: audiences accustomed to seeing disclosures on TV or cable may mistakenly assume that undisclosed content on digital platforms is authentic simply because no disclosure appears.

For example, if an AI-generated political ad airs on a cable channel with a disclaimer, but the same ad appears undisclosed on a social media platform, viewers might question the cable version more than the digital one, despite both being AI-generated. This inconsistency not only confuses viewers but could also encourage the spread of misinformation on unregulated digital platforms, ultimately undermining the rule’s stated purpose of promoting transparency and accountability in media. Declining viewership of traditional media outlets and growing viewership on digital platforms will only exacerbate this phenomenon. Instead of building trust, the proposed rule’s selective application may further blur the line between legitimate and misleading content across different media sources.

  3. The Proposed Rule Will Harm Protected Political Speech

Third, the FCC’s proposed rule is also likely to cause significant harm to political communications, which are protected by the First Amendment as central to the functioning of our free society and the democratic process. Requiring on-air AI disclosures consumes valuable airtime. These disclosures, which run approximately four seconds each, crowd out important messaging opportunities. With both the stand-by-your-ad disclaimer and the AI disclaimer, nearly eight seconds of a typical thirty-second spot would be devoted to disclaimers, just two seconds shy of one-third of the entire ad. This not only eats into the time needed for key messaging but also risks “poisoning the well”: an AI disclosure can negatively color the persuasive or educational effect of the advertisement and ultimately weaken its overall impact on the audience.

  4. The Proposed Rule Exceeds the FCC’s Statutory Authority

Fourth, the FCC exceeded its statutory authority in promulgating the proposed rule. Section 315 of the Communications Act, 47 U.S.C. § 315, limits the FCC’s regulation of political advertisements to keeping public records, requiring sponsor identification, and ensuring equal broadcast access. And while the Bipartisan Campaign Reform Act assigns limited duties to the FCC, it grants the agency no independent regulatory power to require AI-content disclosures in political advertisements. The FCC’s generalized justification that the proposed rule serves the “public interest” is baseless and unsupported by law.

The FCC’s contravention of its mandate is further highlighted by FEC Chairman Sean Cooksey’s recent statement that the Bipartisan Campaign Reform Act does not grant the FCC the power to require political advertisement disclosures.3 In fact, Chairman Cooksey explicitly stated that he was “concerned that parts of [the FCC’s] proposal would fall within the exclusive jurisdiction of the Federal Election Commission (“FEC”), directly conflict with existing law and regulations, and sow chaos among political campaigns for the upcoming election.”4

For these reasons, the AAPC strongly opposes the FCC’s proposed rule on the disclosure of AI-generated content in political advertisements. 

Respectfully,

Julie Sweet
Director of Advocacy and Industry Relations
AAPC Executive Committee

Alana Joyce
Executive Director
AAPC Executive Committee

###