Connecting with Gen Z: Authenticity Above All Else

Listen Up: AAPC’s Podcast Debut


AAPC’s Future of Political Consulting podcast will explore the evolving world of political strategy. In the first episode (out today, September 27th!), Kelly Gibson (Stronger Than Communications) and Ryan Davis (People First) dive into how social media influencers are making waves in political campaigns and the best ways to collaborate with them for real results.

Check out the episode and be sure to subscribe!

FEC Issues No New Rulemaking on the Use of AI

September 20, 2024


FEC issues no new rulemaking to regulate or prohibit the use of AI by political campaigns; clarifies that the fraudulent use of AI is already regulated by existing statutes. 


Jason Torchinsky and Oliver Roberts

On September 19, 2024, the FEC held an open meeting to address guidance and regulations related to the use of artificial intelligence (AI) in political campaigns. At the meeting, the FEC discussed whether to issue new rulemaking to ban AI “deepfakes” and also discussed two draft documents that it had released on September 10, 2024. By a vote of 3 to 1 (with 2 abstentions), the FEC declined to issue new rulemaking to regulate AI “deepfakes.” The FEC then voted to approve the “Draft Notice of Disposition” and the “Draft Interpretive Rule” by a 5-1 vote. The FEC Chairman clarified that the “Draft Interpretive Rule” does not introduce any new regulation or prohibition on the use of AI in political campaigns—rather, it is non-binding guidance clarifying that the fraudulent use of AI is already regulated by existing statutes. The FEC Chairman specifically warned the press against misinterpreting this rule.

Specifically, the “Draft Interpretive Rule” provides “guidance on the scope of 52 U.S.C. 30124, which bars the fraudulent misrepresentation of campaign authority.” The Draft Interpretive Rule clarified that 52 U.S.C. 30124 and 11 CFR 110.16 “apply irrespective of the technology used to conduct fraudulent misrepresentation.” Chairman Cooksey emphasized that “it does not matter whether a regulated person uses any particular form of technology, including AI” because the key legal question is whether the regulated individual acted fraudulently. Importantly, as a Draft Interpretive Rule, this document outlines the “general course of action that the Commission intends to follow,” meaning it does not constitute an official agency action or carry the force of law. The FEC approved the Draft Interpretive Rule by a 5-1 vote.

The “Draft Notice of Disposition” indicates that the FEC disposed of a prior Petition for Rulemaking filed by Public Citizen on July 13, 2023. The Petition requested that the FEC issue rulemaking “to clarify that the law against ‘fraudulent misrepresentation’ (52 U.S.C. 30124) applies to deliberately deceptive AI produced content in campaign communications.” On August 16, 2023, the FEC had published a Notice of Availability seeking public comment on the Petition. After the public comment period, the FEC decided not to pursue rulemaking in response to the Petition because “[t]he statute . . . is technology neutral and applies on its face to all means of accomplishing the specified fraud, including AI-assisted media.” The FEC’s guidance on this statute was later clarified in the “Draft Interpretive Rule,” discussed above. The FEC approved the Draft Notice of Disposition by a 5-1 vote.

Jason Torchinsky is a partner at Holtzman Vogel Baran Torchinsky Josefiak PLLC (“HVBTJ”) and specializes in campaign finance and election law. Oliver Roberts is an attorney at HVBTJ and specializes in regulatory and artificial intelligence matters.

AAPC Files Comments Opposing FCC’s Proposed Rule on Generative AI in Political Ads

Subject: Notice of Proposed Rulemaking on Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements

Dear Chairwoman Rosenworcel,

We are writing on behalf of the American Association of Political Consultants (“AAPC”) to oppose the FCC’s proposed rule on the disclosure of AI-generated content in political advertisements. The AAPC believes that this proposed rule is overly broad and arbitrary, exceeds the authority of the FCC, threatens to harm protected speech, and fails to address the true problem of deceptive AI-generated content.

Founded in 1969, the AAPC is a bipartisan organization of political and public affairs professionals dedicated to improving democracy. The AAPC has more than 1,700 members worldwide, and its Board of Directors comprises 32 members, evenly divided between Republicans and Democrats. It is the largest association of political and public affairs professionals in the world.

As a preliminary matter, the AAPC highlights its longstanding position that it staunchly opposes the use of deepfakes in political advertisements. As early as May 2023, the AAPC issued a public statement that its bipartisan Board of Directors unanimously agreed to condemn the use of generative AI “deepfake” content in political campaigns.1 In a press release, the AAPC Board “unanimously agreed that the use of ‘deepfake’ generative Artificial Intelligence (AI) content is [a] dramatically different and dangerous threat to democracy.”2 The AAPC’s position on AI-generated deepfakes has not changed—it remains staunchly opposed to the use of such deception in political advertisements.

Yet the AAPC opposes the FCC’s proposed rule for four key reasons, which are detailed further below. At bottom, the AAPC’s opposition is grounded in the fact that the FCC’s proposed rule fails to address the true problem of deceptive and fake advertisements in politics. It is axiomatic that AI is neither a necessary nor a sufficient condition for the creation of deceptive or fake media; thus, the FCC’s broad proposed rule unfairly and improperly regulates AI use while failing to combat the true underlying issue of deceptive and fake advertisements.

  1. The Proposed Rule is Overly Broad

First, the AAPC believes that the FCC’s proposed rule is overly broad. By requiring the disclosure of all AI-generated content, the FCC casts an overbroad net that fails to appropriately regulate deceptive AI-generated content. The proposed rule defines AI-generated content as follows: 

an image, audio, or video that has been generated using computational technology or other machine-based system that depicts an individual’s appearance, speech, or conduct, or an event, circumstance, or situation, including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors. 

This definition of AI-generated content is so broad that it would sweep in many routine and harmless uses of AI in photo editing, color retouching, basic video editing, and voice narration. By requiring disclosure of every use of AI-generated content, the rule unfairly penalizes political campaigns for using AI tools to perform tasks that could be completed (less efficiently) without them. For example, two political campaigns could run advertisements using photos in which their candidates’ hair color has been edited. If the first campaign edited the hair color with an AI tool, it would be required to disclose that innocuous AI use in the advertisement. But if the second campaign edited the hair color manually (without an AI tool), it would not be required to include an AI disclosure. Either way, the images have been edited—yet the proposed rule creates an arbitrary distinction and simply promotes the inefficient use (and nonuse) of available and common technologies.

The arbitrary distinction becomes even more problematic in extreme cases. Suppose, for example, that two opposing political campaigns run advertisements back-to-back on television. The first campaign runs a positive ad and uses an AI tool to edit the candidate’s hair color; as a result, this ad requires an AI disclosure. The second campaign then runs a deceptive attack ad against the first campaign’s candidate using deceptive stock photos and deceptive editing, but without using AI. As a result, the first, non-deceptive ad requires an AI disclosure, which could undermine a voter’s perception of its authenticity and veracity, while the truly deceptive ad by the second campaign requires no AI disclosure at all.

Moreover, the definition of AI-generated content is also overbroad due to the inclusion of any content generated “using computational technology or other machine-based system.” Today, essentially all online content is generated using some form of computational technology or machine-based system. As such, virtually all political advertisements could be subject to the FCC’s AI disclosure requirement, making truly deceptive AI-generated ads indistinguishable from ads that simply use an AI tool to change a candidate’s hair color.

  2. The Proposed Rule is Arbitrary and Inconsistent

Second, the AAPC believes that the FCC’s proposed rule is arbitrary and actually exacerbates the problem it purportedly aims to solve. The proposed rule creates an imbalanced regulatory system by imposing disclosure requirements exclusively on broadcasters and cable companies while exempting digital platforms. This selective approach not only undermines the proposed rule’s overall objectives but could also worsen the problem of deceptive AI-generated ads. For instance, while viewers of traditional media outlets, like news stations or cable networks, would be informed if content is AI-generated, those consuming content on digital platforms such as YouTube, TikTok, or social media would be left uninformed. By regulating traditional media but leaving digital platforms unregulated, the FCC risks fostering a false sense of security regarding content authenticity. Audiences used to seeing disclosures on TV or cable may mistakenly assume that AI-generated content on unregulated digital platforms is equally authentic or valid simply because no disclosure is provided. 

For example, if an AI-generated political ad aired on a cable channel comes with a disclaimer, but the same ad appears undisclosed on a social media platform, viewers might question the cable content more than the digital one, despite both being AI-generated. This inconsistency not only confuses viewers but could also encourage the spread of misinformation on unregulated digital platforms, ultimately undermining the rule’s original purpose to promote transparency and accountability in media. Declining viewership in traditional media outlets and growing viewership on digital platforms will only exacerbate this phenomenon. Instead of creating trust, the proposed rule’s selective application might further blur the lines between legitimate and misleading content across different media sources. 

  3. The Proposed Rule Will Harm Protected Political Speech

Third, the FCC’s proposed rule is also likely to cause significant harm to political communications, which are protected by the First Amendment as central to the functioning of our free society and the democratic process. Requiring on-air AI disclosures consumes valuable airtime: timed at around four seconds, these disclosures impede important messaging opportunities. Additionally, with both the stand-by-your-ad disclaimer and the AI disclaimer, nearly eight seconds of a typical thirty-second spot would be dedicated to disclaimers—just two seconds shy of one-third of the entire ad. This not only eats into the time needed for key messaging but also risks “poisoning the well,” negatively impacting the persuasive or educational effect of the advertisement and ultimately weakening its overall impact on the audience.
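As a quick check of that timing math (assuming, as the letter implies but does not state outright, that the stand-by-your-ad disclaimer also runs roughly four seconds):

\[ 4\,\mathrm{s} + 4\,\mathrm{s} = 8\,\mathrm{s}, \qquad \tfrac{1}{3} \times 30\,\mathrm{s} = 10\,\mathrm{s}, \qquad 10\,\mathrm{s} - 8\,\mathrm{s} = 2\,\mathrm{s}. \]

That is, disclaimers would consume roughly 27 percent of a thirty-second spot, two seconds shy of the one-third mark.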

  4. The Proposed Rule Exceeds the FCC’s Statutory Authority

Fourth, the FCC exceeded its statutory authority in promulgating this proposed rule. The Communications Act, 47 U.S.C. § 315, limits the FCC’s regulation of political advertisements to keeping public records, requiring sponsor identification, and ensuring equal broadcast access. While the Bipartisan Campaign Reform Act assigns limited duties to the FCC, it does not grant the FCC any independent regulatory power to require AI-content disclosures in political advertisements. The FCC’s generalized justification that this proposed rule is in the “public interest” is baseless and unsupported by law.

The FCC’s contravention of its mandate is further highlighted by FEC Chairman Sean Cooksey’s recent statement that the Bipartisan Campaign Reform Act does not grant the FCC the power to require political advertisement disclosures.3 In fact, Chairman Cooksey explicitly stated that he was “concerned that parts of [the FCC’s] proposal would fall within the exclusive jurisdiction of the Federal Election Commission (“FEC”), directly conflict with existing law and regulations, and sow chaos among political campaigns for the upcoming election.”4

For these reasons, the AAPC strongly opposes the FCC’s proposed rule on the disclosure of AI-generated content in political advertisements. 

________________________

1 AAPC, AAPC Condemns Use of Deceptive Generative AI Content in Political Campaigns, May 3, 2023, https://theaapc.org/american-association-of-political-consultants-aapc-condemns-use-of-deceptive-generative-ai-content-in-political-campaigns-2/ (last accessed September 8, 2024).
2 Id.

AAPC Insider: Your Community Connection