AI Deepfakes, Disinformation, and the 2024 Campaign: New Challenges for PACs

Political Action Committees (PACs) are entering uncharted territory in the wake of the 2024 U.S. elections. Advances in generative AI have made it easier than ever to create “deepfakes” – hyper-realistic but fake audio, video, and images – blurring the line between fact and fiction in political campaigning. From AI-generated attack ads to bogus robocalls cloning candidates’ voices, these new tools offer enticing creative possibilities and serious risks. In this post, we take a journalistic yet analytical look at how PACs are grappling with AI-driven disinformation. We’ll explore the legal and ethical challenges of deepfakes in political ads, the emerging regulations around AI content transparency, and the growing misinformation liability facing PACs. Finally, we’ll outline best practices for PACs to navigate this volatile landscape responsibly.

Deepfakes and Manipulated Media: A New Wild Card in Campaign Ads

Not long ago, the idea of a forged video or cloned voice in a political ad seemed like science fiction. Now it’s a reality that campaigns must contend with. In mid-2023, Florida Governor Ron DeSantis’s team posted a video attacking Donald Trump using AI-generated images – including fake photos of Trump hugging Dr. Anthony Fauci – intermingled with real images to make them more convincing. It was “the latest example of how rapidly evolving AI tools are supercharging political attacks by allowing politicians to blur the line between fact and fiction,” NPR noted. Soon after President Biden announced his re-election bid, the Republican National Committee released a dystopian attack ad built entirely with AI-generated imagery, depicting hypothetical future crises if Biden won. The RNC proudly noted it was its first 100% AI-produced ad, highlighting how mainstream this technology has become in campaign messaging.

Ethical dilemmas. The use of AI-manipulated media in politics raises thorny ethical questions. On one hand, generative AI can save costs and enable creative visualizations (e.g. illustrating an imagined future or parody). On the other, realistic fake media can easily mislead voters and undermine trust in what they see and hear. A deepfake video could falsely depict a candidate taking a bribe or saying a slur – a digital smear that spreads before it can be debunked. PACs face a moral choice: using such techniques might score short-term points, but at the cost of eroding democratic norms and public confidence. Even seemingly benign uses of AI (like the RNC’s “what if” scenario ad) have drawn criticism for potentially confusing viewers about what’s real. Many observers warn that 2024 was just the “beginning of the deepfake era” in elections and that, without restraint, we risk “widespread confusion” among voters.

Legal red lines. Beyond ethics, PACs must consider the legal minefields around deepfakes. A patchwork of new state laws makes certain deceptive uses of AI in campaign ads illegal. For example, Minnesota now prohibits circulating any unauthorized deepfake of a candidate intended to harm their reputation or influence an election. Texas made it a crime to distribute a “deep fake video” of a candidate within 30 days of an election if intended to deceive voters. California, meanwhile, bans materially deceptive audio or visuals of candidates within 60 days of an election unless they carry a clear disclaimer that they are altered or fake. Many of these laws give candidates the right to sue the sponsor of a deepfake ad for damages or get an injunction to stop the ad. In other words, a PAC that releases a malicious deepfake could face swift civil litigation. Even without specific deepfake laws, there’s still defamation law: in 2022 a federal jury famously hit a Democratic super PAC with an $8.2 million verdict for a TV ad that defamed Roy Moore with misleadingly presented allegations. A savvy opponent could argue that a harmful deepfake constitutes knowing falsification made with “actual malice,” potentially meeting the high bar for defaming a public figure. The bottom line is that weaponizing AI deception in ads carries real legal peril, and PACs treading that path do so at their own risk.

Regulatory Proposals: Pushing for Transparency in AI-Driven Campaigns

Lawmakers and regulators have awakened to the threat of AI-driven disinformation in elections. While there’s not yet a blanket federal law, a flurry of regulatory proposals aim to inject transparency into political messaging and rein in the worst abuses of AI.

[Map: state laws on AI-generated content in political advertising, late 2024. States such as California, Texas, and Minnesota have adopted laws – mostly requiring clear disclosure of AI-manipulated media, and in some cases banning deceptive deepfakes outright during election periods – while a further group of states have bills under consideration, highlighting the rapidly evolving patchwork of rules across the country.]

State-level action. Facing little immediate action from Washington, state legislatures rushed to address AI in campaigns throughout 2023-24. By August 2024, 16 states had adopted laws governing AI-generated content in political ads, with another 16 states debating bills. Most of these state laws stop short of banning AI in ads; instead, they require a disclosure if any content (image, video, or audio) has been created or altered by AI. A typical mandate is to include language like “This video/audio/image has been generated by artificial intelligence” in a clear, conspicuous manner. Some laws only kick in close to elections (e.g. within 60 or 90 days of Election Day). A few states have gone further: Washington State now requires labeling of any AI-generated content in political ads by law, and states like Texas and California impose penalties for deepfakes lacking disclosure in the pre-election period. California recently beefed up its regime with a trio of 2024 laws – extending the timeframe during which deceptive AI campaign materials are banned, empowering officials to seek court orders against violators, and requiring any campaign ad with AI-altered content to carry a disclosure on the face of the ad. This flurry of state activity means PACs operating nationally must navigate a complex mosaic of requirements. An ad that’s legal in State A might incur fines or lawsuits in State B if not properly labeled. The lack of a uniform standard is frustrating for campaigns and has prompted calls for a federal solution.
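For PACs planning multistate ad buys, one practical way to manage this mosaic is to encode each state’s requirements in a simple lookup that flags when a planned placement needs an AI disclaimer. The sketch below (Python) is purely illustrative: the state entries, window lengths, and disclaimer wording are hypothetical placeholders rather than legal data, and any real compliance tool would need to be populated and reviewed by counsel.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class StateRule:
    """Illustrative (hypothetical) state rule for AI content in political ads."""
    requires_disclaimer: bool   # must AI-generated content be labeled?
    window_days: Optional[int]  # applies only within N days of the election (None = always)
    disclaimer_text: str        # sample wording; actual statutory wording varies

# Placeholder entries -- NOT actual legal requirements.
RULES = {
    "CA": StateRule(True, 60, "This ad contains content altered or generated by AI."),
    "TX": StateRule(True, 30, "This video has been manipulated."),
    "WA": StateRule(True, None, "This ad contains AI-generated content."),
}

def required_disclaimer(state: str, election_day: date, run_date: date,
                        uses_ai: bool) -> Optional[str]:
    """Return the disclaimer an AI-assisted ad would need in `state`, or None."""
    if not uses_ai:
        return None
    rule = RULES.get(state)
    if rule is None or not rule.requires_disclaimer:
        return None
    if rule.window_days is not None:
        days_out = (election_day - run_date).days
        if days_out > rule.window_days:
            return None  # outside the regulated pre-election window
    return rule.disclaimer_text

# Example: an AI-assisted spot airing 45 days before a hypothetical election day
print(required_disclaimer("CA", date(2026, 11, 3), date(2026, 9, 19), uses_ai=True))
```

Even a toy check like this makes the underlying point: whether an ad needs a label depends on the state, the calendar, and the content, so the rules belong in one place your media buyers can query rather than in scattered institutional memory.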

FCC and federal regulators. At the federal level, regulators are starting to weigh in. In July 2024, the Federal Communications Commission (FCC) proposed a new rule that would require TV and radio stations to disclose when any political ad they air contains AI-generated content. This came on the heels of a disturbing incident before the New Hampshire primary, in which voters received robocalls with an AI-cloned voice of President Biden urging them not to vote – a blatant attempt at voter suppression. (The operative behind those deepfake calls now faces a $6 million FCC fine and criminal charges for voter suppression.) In response, the FCC not only cracked down on AI-driven robocalls under existing law (clarifying that calls made with AI-generated voices fall under the existing restrictions on artificial-voice robocalls), but also moved to mandate on-air disclaimers for AI content. Under the proposal, broadcasters would have to ask ad sponsors if their material uses AI and, if so, include a standardized disclosure (e.g. “This ad contains AI-generated content”) whenever it runs. FCC Chair Jessica Rosenworcel stressed that the goal is to inform viewers, not to ban AI outright or referee truthfulness. However, given the slow pace of rulemaking, the rule was not finalized in time for the 2024 election. The Federal Election Commission (FEC) also explored action: it considered a petition to expand an existing rule against candidates “fraudulently misrepresenting” others to explicitly cover deliberately deceptive AI ads. After extensive public comment, the FEC’s commissioners deadlocked along party lines and ultimately tabled the proposal, with opponents arguing the Commission lacked clear authority and should wait for Congress. In the meantime, the onus falls on Congress.

Legislation in Congress. Lawmakers from both parties have introduced bills to bring some order to AI in political messaging. One bipartisan effort is the proposed AI Transparency in Elections Act, which would require that political ads disclose AI-generated content when it is used in a “significant” or “substantial” way. Under this bill (sponsored by Sen. Amy Klobuchar and Sen. Lisa Murkowski), the FEC would define the criteria for what counts as “substantially” AI-generated and set rules for the disclaimer wording. The idea is simple: if a campaign or PAC uses AI to create realistic images, video, or audio in an ad, the audience should be tipped off. “Something I think we can all agree we’d like to know,” as Sen. Murkowski put it. Another proposal, the REAL Political Advertisements Act, was introduced earlier by Klobuchar and Rep. Yvette Clarke to similarly mandate disclaimers on any election ad with synthetic images or video. (So far that one has gained only Democratic support.) Members of Congress have floated more aggressive measures too: the Protect Elections from Deceptive AI Act would outright prohibit distributing materially deceptive AI-generated audio or visuals of candidates (akin to some state bans), and the DEEPFAKES Accountability Act seeks to create criminal offenses for malicious deepfakes while carving out exceptions for parody or satire. As of 2025, none of these bills has passed, but the legislative momentum is notable. It reflects a growing bipartisan consensus that AI-generated political content must at minimum be transparent to voters.

Tech platforms’ policies. Even before any laws kick in, major online platforms have imposed their own rules on AI in political ads – effectively setting industry standards that PACs must heed. In late 2023, Google became the first tech giant to require explicit disclosure of AI content in election ads. Starting November 2023, Google’s policy mandates that any AI-generated imagery or sounds in political ads be accompanied by a “clear and conspicuous” label on the ad itself (for example: “This video content was synthetically generated”). The rule applies across Google’s platforms, including YouTube, and covers synthetic depictions of real people or events. Google said it acted because deepfake-style deceptions “threaten to blur the lines between fact and fiction, making it difficult for voters to distinguish the real from the fake”. Meta (Facebook) has taken a similar stance – as of early 2024, Meta requires advertisers to disclose when they use digitally altered or AI-created images in political or issue ads, and it will label known fake videos as such. In fact, a coalition of major tech firms (including Meta, Google, TikTok, and others) voluntarily signed on to a “Tech Accord to Combat Deceptive AI in Elections” in February 2024, pledging steps to identify, label, or remove AI-driven disinformation. What this means for PACs is that even if the law doesn’t yet compel transparency, the platforms carrying your message might. A PAC that tries to run a deepfake ad without disclosure could find its ads rejected or labeled by the host platform, with reputational fallout.

The regulatory writing is on the wall: transparency and accountability are the new expectations. PACs should be preparing now for a future where hiding an AI-generated manipulation in an ad is not only unethical but explicitly illegal or technically blocked. As a leading political law firm summed it up, “rules of the road are emerging to help mitigate AI risks” in campaigns – so savvy organizations will adapt early.

Misinformation Liability: New Risks and Expectations for PACs

Beyond compliance with specific AI laws, PACs must confront a broader question: what responsibility do they bear for misinformation they spread, even if they did not create it themselves? In the age of AI, the traditional attitude of “anything goes” in negative ads is colliding with new legal and public relations risks.

Liability for false or defamatory claims. American politics has a long history of rough-and-tumble ads, including distortions or out-of-context claims. Often such speech is protected by the First Amendment, especially against public figures. But the landscape may be shifting. The Roy Moore case in Alabama sent shockwaves when a jury held a super PAC liable for defamation over a misleading ad, demonstrating that PACs are not immune if they propagate provable falsehoods. Moore’s victory, unusual as it was, has emboldened candidates to push back on slanderous ads. In the 2022 Utah Senate race, independent candidate Evan McMullin sued a conservative super PAC for running ads that doctored his words to falsely make it sound like he disparaged Republicans as racist. Meanwhile, then-President Trump sued a Wisconsin TV station for airing a PAC attack ad that edited his statements about COVID-19 (that case ended without a verdict, but only after costly litigation). These cases underscore a point: if a PAC disseminates manipulated or outright false content – whether generated by AI or by old-fashioned editing – it can face legal challenges and financial penalties. With deepfakes, the likelihood of egregious false depictions rises, and so does the chance of a successful lawsuit if harm can be shown. Even if lawsuits don’t ultimately succeed, they can tie up a PAC’s resources and damage its credibility with the public and donors.

Third-party content and curation. What about misinformation that PACs spread without creating it? For instance, sharing a viral doctored video produced by someone else, or relying on a dubious “news” source for claims in an ad. Here too, PACs are increasingly expected to perform due diligence. Legally, a PAC cannot hide behind the fact that content came from a third party – once it republishes or amplifies that content, it assumes responsibility for its veracity. (Notably, the legal immunity that online platforms enjoy under Section 230 of the Communications Decency Act does not protect PACs that create or sponsor content.) If a PAC-run Facebook page reposts a fake quote attributed to an opponent, for example, the PAC could be accused of knowingly spreading false information. In an era when misinformation can spark real-world consequences, there is a higher public expectation that political groups fact-check claims before blasting them out. Failing to do so can not only blow back politically; it might even attract regulators’ attention under general fraud or truth-in-advertising laws. The Federal Trade Commission (FTC) has hinted it could use its consumer protection authority against clearly deceptive campaign fundraising practices or advertising, which would open a new front in liability.

Media gatekeepers and platform policies. Another factor incentivizing honesty: broadcasters and digital platforms are showing less tolerance for provably false political ads, especially from non-candidate groups. Unlike candidate committees (whose ads TV stations generally must air uncensored under federal law), independent PAC ads can be refused or taken down if they’re blatantly false or legally questionable. Broadcasters have been sued for continuing to run third-party ads after being warned of inaccuracies, which gives them a strong motive to reject deceptive PAC ads to avoid liability themselves. The result: if a PAC puts out a deepfake video of an opponent, TV stations and even social media companies might preemptively block or label it, especially if the victim complains. In 2024, we saw Facebook and TikTok move to remove certain election disinformation posts and even reject misleading political ads during critical periods. Twitter (now X) has a policy to label synthetic or manipulated media and remove it if it could cause harm. All of this creates an “expectation of responsibility” – PACs that play fast and loose with facts risk not just moral condemnation, but having their megaphone taken away.

Finally, there’s the court of public opinion. In a hyper-connected world, a dramatic deepfake or false claim can go viral – but so can its debunking. PACs found peddling misinformation may face intense media scrutiny and public backlash. Donors may shy away from being associated with a group notorious for deception. Voters are increasingly aware of deepfakes and AI trickery, and a savvy electorate may punish campaigns that appear to rely on dirty tricks. In short, the incentives are slowly realigning in favor of truthfulness (or at least plausible deniability). The 2024 cycle may be remembered as a turning point when disinformation in campaigning started carrying a heavier price.

In this fraught environment, PAC professionals need a game plan to harness AI’s creative power without falling victim to its pitfalls. Here are some recommendations and best practices for navigating AI and disinformation challenges in political campaigns:

  • Label and disclose AI content. Embrace transparency as a policy, not just a legal duty. If your ad uses any AI-generated imagery, voice, or video, clearly disclose it to viewers – even in jurisdictions where it’s not yet required. Proactively adding a line like “This video uses AI-generated imagery for dramatization” not only keeps you compliant with emerging laws, it also builds trust with the audience. Savvy voters appreciate honesty, and being upfront can preempt accusations of trying to mislead. Importantly, disclosure might shield you from some legal liability (several state laws offer safe harbors or reduced penalties for AI ads that include proper disclaimers). It’s a simple step that turns a potential disinformation negative into a transparency positive.

  • Strengthen fact-checking and vetting processes. In the rush of a campaign, it’s easy to grab a juicy clip or meme online and blast it out – but stop and verify before amplifying third-party material. Institute a strict review workflow for any external content: verify the original source, check reputable fact-checkers, and scrutinize for signs of manipulation (for example, unnatural visual artifacts or audio inconsistencies that might indicate a deepfake). Consider using AI tools that can detect deepfakes or track the provenance of images; a minimal example of one such check appears in the sketch after this list. In short, treat every sensational piece of media with skepticism. It may feel like overkill, but as one elections expert warned, the precision of AI fakery is improving rapidly and can create “widespread confusion” if we’re not vigilant. Your PAC doesn’t want to be the one caught pushing a blatant fake because no one checked twice.

  • Draw ethical red lines – and stick to them. Even if the law allows certain tactics, define ethical standards for your organization that align with your values and public expectations. For instance, the American Association of Political Consultants (AAPC) updated its Code of Ethics in 2023 to explicitly prohibit members from using deceptive “deep fake” content in campaigns. Adopting similar internal rules can guide staff and consultants. Decide ahead of time that your PAC will not impersonate an opponent’s voice or fabricate statements, for example. By setting these guardrails, you reduce the temptation in the heat of battle to cross into dubious territory. It also gives you moral high ground to contrast with opponents who may use dirty tricks. In an era of eroding trust, a reputation for integrity can be a strategic asset.

  • Stay updated on laws and regulations. The legislative and regulatory terrain around AI and disinformation is evolving monthly. Assign someone on your team (or outside counsel) to track new state laws, FEC rulings, FCC regulations, and platform policy changes that may affect your advertising plans. For example, if you’re planning a nationwide ad buy that uses AI-modified images, you need to know which states require a disclaimer on the ad itself or even prohibit that content outright close to Election Day – so you can adjust your creative or media plan accordingly. Keep an eye on Congress for any federal law that might kick in by 2026. And note changes in platform policies (Google, Meta, etc.) to avoid getting your ads banned. By staying ahead of the rules, you can innovate safely and avoid nasty surprises like a cease-and-desist letter from a state attorney general in the middle of your campaign.

  • Prepare for deepfake attacks (not just deployment). PACs should not only consider how to use AI, but also how to defend against its malicious use by others. Develop a crisis response plan for potential deepfake incidents targeting your favored candidates or your organization. This might include monitoring social media and forums for viral fake videos, having forensic experts on call who can quickly analyze suspicious media, and coordinating with platforms and press to flag and debunk disinformation before it spreads widely. The faster you can expose a deepfake as fake, the less damage it can do. In 2024, authorities moved swiftly when fake Biden robocalls emerged, and the FCC Chair noted how easily people can be fooled when they think they hear a familiar voice. Speed and clarity of response are critical. A well-prepared PAC can even turn the tables: using the incident to highlight the opponent’s dishonesty (if they’re behind it) or to call for higher standards. In short, make resilience against AI-fueled lies part of your strategy.

  • Leverage AI’s benefits carefully. Finally, remember that AI is not just a threat – it’s also a tool that, when used responsibly, can enhance your campaign. AI can help with efficient ad targeting, voter outreach, data analysis, and even generating benign content (like quick transcriptions or background visuals). Some PACs are exploring AI for things like optimizing fundraising appeals or simulating voter responses. These are low-risk uses. If you do venture into generative content, use AI to augment reality, not distort it. For example, AI can recreate historical scenarios or visualize policies (with disclaimers), which can educate voters without deceiving. Always pair creative AI content with a human judgment filter: Does this cross a line? Would we be comfortable seeing this done to our candidate? If something gives you pause, err on the side of caution. As one legal advisor put it, “identify, understand, and mitigate the risks” of AI, and weigh them against the potential benefits. In many cases, a clever concept without deception will serve you better in the long run than a calamitous deepfake that backfires.
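As promised in the fact-checking recommendation above, here is a minimal sketch of one automated vetting step: comparing a suspect image against a library of verified originals using perceptual hashing. It assumes the third-party Pillow and imagehash Python packages and uses made-up file paths, and it is only a coarse first-pass screen – a close match suggests the suspect file is a lightly edited derivative of a known original, while anything genuinely suspicious (especially a possible deepfake) should go to a human analyst or a dedicated forensics service.

```python
# First-pass screen for manipulated images: compare a suspect file against
# verified originals using perceptual hashes.
# Assumes: pip install pillow imagehash
from pathlib import Path

import imagehash
from PIL import Image

def load_reference_hashes(reference_dir: str) -> dict:
    """Hash every verified original image in a reference folder."""
    return {path.name: imagehash.phash(Image.open(path))
            for path in Path(reference_dir).glob("*.jpg")}

def screen_image(suspect_path: str, references: dict, threshold: int = 10) -> list:
    """Return (name, distance) pairs for references within `threshold` of the suspect.

    A small Hamming distance means the suspect closely resembles a known
    original, which can indicate cropping, recoloring, or splicing of real
    footage. No match proves nothing on its own.
    """
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    matches = [(name, suspect_hash - ref_hash)
               for name, ref_hash in references.items()
               if suspect_hash - ref_hash <= threshold]
    return sorted(matches, key=lambda m: m[1])

# Hypothetical usage with placeholder paths:
# refs = load_reference_hashes("verified_campaign_photos/")
# print(screen_image("viral_clip_frame.jpg", refs))
```

The same idea extends to video by hashing sampled frames; for audio, or for content with no known original to compare against, dedicated deepfake-detection tools and provenance metadata such as C2PA Content Credentials (where present) are better starting points.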

Conclusion

The 2024 election cycle forced PACs and campaigns to confront the reality that AI-driven disinformation is no longer a theoretical threat – it’s here and now. Deepfake videos, AI-generated voices, and algorithmically turbocharged misinformation are challenging the foundations of political communication. In the U.S., regulators are scrambling to catch up – patching holes with disclosure laws and debating how to hold bad actors accountable. But smart PAC professionals won’t wait to be told what to do by authorities. They recognize that with great new power (of AI) comes great new responsibility.

At its heart, political advocacy is about persuasion and trust. Misleading voters with AI-manipulated “evidence” might win a news cycle, but it can just as easily backfire legally and reputationally. The path forward for PACs lies in harnessing AI ethically – finding creative ways to engage the electorate without undermining the very democracy we’re trying to influence. By committing to transparency, accuracy, and accountability, PACs can still be innovative in their tactics while upholding credibility. And in an age of skepticism, demonstrating that commitment may be the key to breaking through to audiences bombarded with noise.

The 2025 landscape promises both exciting AI-driven campaign innovations and new battles over disinformation. PACs that prepare now – by updating their playbooks and adhering to best practices – will be better positioned to navigate whatever surprises the next election throws their way. In the Wild West of AI in politics, those who set their own higher standard will lead by example, helping ensure that technology is used to empower voters, not fool them. The stakes – public trust in our elections – couldn’t be higher, and everyone from lawmakers to campaign operatives to voters has a role in meeting this moment.

Sources:

  1. Davis+Gilbert LLP – AI in Political Advertising: State and Federal Regulations in Focus

  2. Davis+Gilbert LLP – Ibid. (FCC proposed rule and FEC petition)

  3. NPR – DeSantis Campaign Shares Apparent AI-Generated Images of Trump

  4. Axios – RNC’s First AI-Generated Attack Ad

  5. NPR – Political Consultant Fined for Biden Deepfake Robocall

  6. Reuters – Google to Require Disclosure of AI in Election Ads

  7. FedScoop – Bipartisan Bill on AI Transparency in Elections

  8. Broadcast Law Blog – AI in Political Ads – Media Companies Beware

  9. NPR / AP – Roy Moore Defamation Suit against Super PAC

  10. Brennan Center – Regulating AI Deepfakes in Political Arena

  11. Al Jazeera – “Wild West”: AI in US Elections

  12. Wiley Rein LLP – 7 Tips for Managing AI Risks in Campaigns