Meaningful Disclosure Framework
A professional standard for when and how political communicators should disclose the use of synthetic and AI-generated content.
The Three-Question Test
Before distributing content produced or altered with any significant tool, run through these three questions. The issue is never which tool you used—it's whether the content crosses from making an argument into manufacturing evidence.
Does this content imply a factual claim?
Not an opinion, emotional appeal, or policy argument—but a claim that something happened, someone said something, or someone was somewhere. If the content is purely illustrative, argumentative, or clearly stylized, it lives in the domain of persuasion.
Do you have independent evidence for that claim?
A citation, source, public record, testimony, or reporting that supports the factual assertion. When content illustrates something independently verifiable—like visualizing a documented event or translating a real speech—the claim is provable on its own terms. The content is a production choice, not a source of proof.
Is the content itself doing the work of evidence?
This is the decisive question. Is the content creating the impression of documentation—footage, audio, a photograph—for something that cannot be independently proven? Is it standing in as proof rather than illustrating an argument?
What Meaningful Disclosure Looks Like
When disclosure is warranted, the goal is to help the audience distinguish what's captured from what's created. Generic labels like “AI-generated content” don't do this—they tell voters nothing useful and, per AAPC Foundation research, erode trust even when the content is truthful.
The labels in the Meaningful Labels table below are examples of the kinds that work, not a fixed list. The right label is the one that tells your audience what they need to know about what's real.
Full Framework
Need the complete policy language? The full framework text is below.
Purpose
Political communications is undergoing its fastest production shift in a generation. Artificial intelligence now enters nearly every step of the creative workflow, from research and writing to editing, translation, voice, image, and video. This is not a crisis to be managed; it is a transition to be handled with professional judgment, responsibility, and accountability.
At the same time, legislators, journalists, and voters are rightly asking where the line sits between persuasive communication and deceptive fabrication. Generic responses, especially mandates to disclose the mere use of AI, do not answer that question. They misidentify the harm, penalize honest practitioners for routine production choices, and, according to AAPC Foundation research, erode audience trust in the messenger even when the underlying content is truthful.
This Framework offers a different answer. It focuses on what matters: whether content is being used to make an argument the audience can evaluate, or to manufacture evidence the audience cannot. It applies to any method of production or alteration, whether AI, traditional editing tools, or technologies not yet invented.
This Framework is a statement of professional practice. It is not a legal safe harbor, and it does not displace any applicable federal, state, or platform requirement. Where legal obligations are stricter, those obligations govern.
The Core Principle
The question is never: What tool was used?
The question is: Is this content doing the work of evidence for a claim you cannot prove?
Content is not inherently deceptive because of the tool that produced it. The method of creation is irrelevant. What matters is whether the output crosses the line from argument to fabricated evidence.
The Distinction: Argument vs. Evidence
Political communications is persuasive by nature. It makes arguments. It appeals to emotion. It dramatizes consequences, illustrates stakes, and moves people to action. None of this is deceptive, and none of it requires disclosure of the tools used to produce it.
The line is crossed when content stops making an argument and starts manufacturing proof. When the content itself is the evidence—presenting itself as a record of something that happened, a statement someone made, or a scene someone was present for—and that thing did not happen, it has crossed from persuasion to fabrication.
Argument (Persuasion) vs. Evidence (Fabrication)
| Argument (Persuasion) | Evidence (Fabrication) |
|---|---|
| Makes a case for a position. | Presents itself as documentation of something real. |
| Illustrates consequences or stakes. | Creates the impression of a record: footage, audio, a photograph, a transcript. |
| Appeals to values or emotion. | Asks the audience to believe something happened that did not. |
| The audience understands this as advocacy. | Technology is being used to manufacture proof. This is fabrication regardless of the tool. |
| The tool is a production choice, no different from Photoshop, a recording studio, or a printing press. | |
The Test (Detailed)
Before distributing content that was produced or altered with any significant tool or technology, ask three questions in sequence.
Question 1—Does This Content Imply a Factual Claim?
Not an opinion, not an emotional appeal, not a policy argument—but a claim that something happened, someone said something, or someone was somewhere.
If the content is purely illustrative, argumentative, or clearly stylized, the communication lives in the domain of persuasion. No further analysis needed.
If no: This is argument. No disclosure needed. Stop here.
If yes or uncertain: Continue to Question 2.
Question 2—Do You Have Independent Evidence for That Claim?
Is there a citation, a source, a public record, testimony, or reporting that supports the factual assertion the content is making?
When content illustrates something independently verifiable—such as visualizing a documented event, translating a real speech, or re-creating a public statement—the claim is provable on its own terms and the content is a production choice, not a source of proof.
If yes: The content illustrates a provable claim. Best practice is to cite your source. Meaningful disclosure is optional. Stop here.
If no: Continue to Question 3.
Question 3—Is the Content Itself Doing the Work of Evidence?
This is the decisive question. Is the content creating the impression of documentation—such as footage, audio, or a photograph—for something that cannot be independently proven?
Is it being asked to stand in as proof of the factual claim, rather than merely illustrating an argument?
If yes: Do not distribute. This is fabricated evidence. This is the line AAPC draws. No disclosure regime fixes this. Fabricated proof is deceptive whether or not it is labeled.
If uncertain: Disclose meaningfully. If you cannot confidently say the content is on the right side of the line, tell the audience what is captured versus what is created so they can make their own judgment.
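For teams that build the Test into review checklists or internal tooling, the sequence can be expressed as a simple decision function. The sketch below is illustrative only: the function and outcome names are invented for this example, the inputs stand in for judgments that only a practitioner can make, and any answer to Question 3 other than a clear "yes" is routed to the disclosure outcome, erring on the side of telling the audience what is captured versus created.

```python
from enum import Enum
from typing import Optional


class Outcome(Enum):
    NO_DISCLOSURE = "Argument. No disclosure needed."
    CITE_SOURCE = "Illustrates a provable claim. Cite the source; meaningful disclosure optional."
    DO_NOT_DISTRIBUTE = "Fabricated evidence. Do not distribute."
    DISCLOSE = "Uncertain. Disclose meaningfully: say what is captured versus what is created."


def three_question_test(implies_factual_claim: Optional[bool],
                        has_independent_evidence: bool,
                        acts_as_evidence: Optional[bool]) -> Outcome:
    """Schematic walk-through of the Framework's three questions.

    Each argument is a practitioner judgment; None means "uncertain".
    """
    # Question 1: Does the content imply a factual claim?
    if implies_factual_claim is False:
        return Outcome.NO_DISCLOSURE       # pure persuasion; stop here
    # Question 2 (reached on "yes" or "uncertain"): Is there independent evidence?
    if has_independent_evidence:
        return Outcome.CITE_SOURCE         # content illustrates a provable claim
    # Question 3: Is the content itself doing the work of evidence?
    if acts_as_evidence is True:
        return Outcome.DO_NOT_DISTRIBUTE   # fabricated proof; no label fixes this
    return Outcome.DISCLOSE                # uncertain: label captured vs. created


# Example: content implies a claim, has no independent support, and the
# practitioner cannot say with confidence that it is not acting as evidence.
print(three_question_test(True, False, None).value)
```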
Effective Disclosure
When disclosure is warranted, the goal is to help the audience distinguish opinion from fact and captured from created. Generic labels like “AI-generated content” fail this test. They tell voters nothing about what is real and what is not. They penalize honest practitioners who use automation for routine production. And AAPC Foundation research shows they erode trust in the messenger regardless of what the content actually depicts.
Meaningful Labels
| Label | When to Use |
|---|---|
| Dramatization | Scene is constructed to illustrate a point. Events depicted did not occur as shown. |
| Simulated | Visual or audio is generated to approximate what something might look or sound like. Not a recording. |
| Re-enactment | Depicting a documented event using generated visuals or audio. The event happened; this specific footage did not. |
| Synthetic voice | Audio generated or cloned by automated tools. The words may be the speaker's own; the recording is not. |
| Translated | Original content in another language, delivered via automated translation. Content and meaning are the speaker's; language delivery is automated. |
| Image enhanced / Image altered | “Enhanced” for quality improvements; “altered” for substantive changes to appearance or context. |
Technical Standards
Disclosures should be visible, prominent, durable (not removable by resharing), and proximate to the content. Where a viewer-facing label is used, it should be placed within the same frame or channel as the content rather than in a separate location a viewer is unlikely to consult.
AAPC recommends the routine application of content provenance metadata using the C2PA standard or equivalent where available. Metadata serves the professional record and supports downstream verification by platforms, journalists, and fact-checkers even when a viewer-facing label is not required under this Framework. Metadata is complementary to, not a substitute for, meaningful viewer-facing disclosure where this Framework calls for one.
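The provenance recommendation can be met with standard tooling. The sketch below shows, in outline, what a manifest declaring AI-assisted creation might look like before it is signed and embedded with a C2PA implementation such as the open-source c2patool or an SDK. Field names follow published C2PA manifest examples but should be checked against the current specification; the generator string, asset title, and file name are placeholders.

```python
import json

# Illustrative C2PA-style manifest declaring that an asset was created with
# AI assistance. Field names follow published C2PA manifest examples; verify
# against the current specification before use. All values are placeholders.
manifest = {
    "claim_generator": "ExampleCampaignTools/1.0",   # placeholder generator string
    "title": "district-dramatization.mp4",           # placeholder asset title
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type indicating synthetic media
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

# Write the manifest definition; signing and embedding into the asset are then
# handled by a C2PA implementation rather than by this script.
with open("manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```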
Consent: A Separate Analysis
Consent matters, but not for the reasons disclosure frameworks typically invoke. Using someone's likeness or voice without permission raises right of publicity, defamation, and potentially criminal liability concerns under existing law. These are serious and enforceable.
But consent is separate from deception: there can be fully consented content that is deceptive, and non-consented content that is completely honest. This Framework addresses deception. Existing law addresses consent. They are separate analyses, and conflating them produces bad answers for both.
Relationship to Law
This Framework is a statement of professional practice. It is not a legal opinion and does not constitute legal advice. Where federal, state, or platform requirements apply, those requirements govern.
Several states now impose disclosure obligations keyed to the use of AI or synthetic media rather than to deception. A practitioner may therefore find that this Framework calls for no disclosure where state law requires one, or that state law is silent where this Framework calls for meaningful disclosure. In every case, the stricter standard controls. Members should consult counsel on jurisdiction-specific obligations and should refer to the AAPC Deepfake Compliance Guide for state-by-state requirements.
This Framework sets the professional standard AAPC members hold themselves to, above whatever legal floor may apply in a given jurisdiction.
Professional Accountability
Gray areas will exist. The Test above provides structure, but political communication is creative work, and the line between illustration and evidence will not always be bright. Where it is not, the call is one of professional judgment, and the professional community ultimately decides whether a practitioner got it right.
A useful internal check: if the practitioner is reaching to explain why the content is not fabricated evidence, the content is probably on the wrong side of the line. When in doubt, disclose meaningfully and let voters make their own assessment.
AAPC members are expected to apply this Framework in good faith and to use meaningful disclosure when the Test calls for it. The Framework is intended to guide practitioners, inform clients, and set the professional expectation that the industry's standard is deception-focused, not tool-focused.
Key Definitions
- Argument
- Content that makes a case, illustrates a point, appeals to emotion or values, or advocates for a position. The audience understands it is being persuaded. The method of creation is a production choice, not a source of proof.
- Evidence
- Content that presents itself as documentation of something real: a recording, a photograph, a transcript, or a factual record of an event, statement, or interaction.
- Fabricated Evidence
- Content that creates the impression of documentary evidence for a factual claim the creator cannot independently prove. The content itself is the proof, and the proof is manufactured.
- Material Fabrication
- Creating or altering information, including tone, context, attributes, or events, in a way that changes the substantive meaning a reasonable viewer would take from the content. Routine production enhancements (color correction, audio cleanup, translation, layout) do not constitute material fabrication.
- Reasonable Person
- A typical member of the content's intended audience, applying ordinary attention and skepticism. Not the most credulous possible viewer, but also not a trained media analyst.
- Meaningful Disclosure
- A label that tells the audience what is captured versus what is created, enabling them to distinguish opinion from fact. “AI-generated content” is not meaningful disclosure. “Dramatization,” “Simulated,” and “Re-enactment” are.
- Deepfake
- Fabricated or altered media presented as real or authentic.
- Synthetic Media
- Audio, video, or image content fabricated using AI or other digital technology.
