The Rise of Deceptive Generative AI-Based Political Ads
There is growing worry about an increase in deceptive generative AI-based political ads, which will sow more confusion as the political season heats up. That, in turn, will make lies easier to sell.
The rising temperature of the contest between former President Donald Trump and Florida Gov. Ron DeSantis for the 2024 GOP nomination makes this real November 2019 photo look out of place today. Could it be a generative AI image? No, it is a genuine photograph.
The FEC recently voted to open public comment on how to regulate generative AI in political ads. News organizations, meanwhile, need to learn how to report on these ads. This is their opportunity.
A recent example is an attack ad on frontrunner Donald Trump from a PAC supporting DeSantis. The ad features the former president criticizing Iowa's Republican Gov. Kim Reynolds, yet Trump never delivered those words in any speech: the PAC took the text of Trump's social media posts and used generative AI to synthesize audio in his voice. In July, WESH 2 News, a TV outlet in Central Florida, aired an effective debunk.
That is a great start, and news outlets can do more. The key question is: How does generative AI change the already vexing problem of political ads, deception, and lies? Is this just more of the same, produced with greater sophistication, or does it raise fundamentally new questions, and new responsibilities, for news organizations to think about?
The power distinctive to generative AI in political ads is this: it destroys the authenticity we currently associate with the whole person. Any element of a person's likeness, especially a public figure's, can now be generated separately: voice, image, or video, independent of generated text, and in any combination.
Journalists and editors now need to think about the authenticity of likeness in the era of generative AI. Generative AI can disassemble your likeness as a whole person: voice, visual presence, gestures, eye movements, smile, and more. Individual elements of any public figure's likeness can then be synthesized and recombined arbitrarily.
Until recently, the focus has been on the content of the claims in an ad. There are also standardization efforts underway to encode the origin and authenticity of a branded video and prevent tampering. The new issue that generative AI brings is that the authenticity of a person's likeness can itself be disassembled and reassembled for the express purpose of propaganda and deception in a political ad.
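To make those provenance efforts concrete, here is a minimal sketch of what checking an ad asset for embedded credentials can look like. It assumes the asset is a JPEG carrying a C2PA-style manifest (C2PA is the open standard behind Content Credentials, one such standardization effort, which embeds its manifest in JPEG APP11 segments). The `has_c2pa_manifest` helper and its byte-level heuristic are illustrative only; detecting a manifest is not the same as cryptographically verifying it, which requires a full C2PA SDK.

```python
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Heuristically check a JPEG for an embedded C2PA provenance manifest.

    C2PA-style Content Credentials are stored in JPEG APP11 (0xFFEB)
    segments as JUMBF boxes labeled "c2pa". This detects presence only;
    validating the manifest's signatures needs an official C2PA library.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:                 # start of scan: no more headers
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:   # APP11 with C2PA label
            return True
        i += 2 + length                    # skip to the next segment
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

A newsroom tool built on this idea would treat a missing or invalid manifest as a prompt for further reporting, not as proof of fakery, since most authentic media today still carries no credentials at all.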
I convene the Markkula Center’s Journalism and Media Ethics Council. Several members of the council addressed this topic during one of our meetings.
Questions reporters could ask as they consider how to report on such an ad:
- Who released the ad? Is it about a candidate the sponsor supports, or does it carry messages attacking an opponent?
- Is it satire?
- Where was generative AI used: video, audio, text, headline, or some mix of these? Did the producer disclose that use?
- Dig around and ascertain whether the public figure in the ad actually said those things in a speech or elsewhere. Details matter.
- Is the ad simply using generative AI to create the likeness of a person to relay an otherwise true or factual message?
- Likewise, is the ad using generative AI to create the likeness of a person to relay a rhetorical claim, one too general to be fact-checked? Such claims are protected political speech anyway.
Depending on the answers to these questions, reporters and editors will be better positioned to decide whether and how they want to report on the claims.
There is a real opportunity to go beyond the content of the ads. News outlets could use generative AI ads to educate the public about how generative AI works and how it comes into play.
Beyond quick debunks, journalists could write explainer articles on generative AI ads that let the public see how the sausage is made.
This would build generative AI literacy at a time when it is much needed, as the 2024 election cycle moves into full gear. The more background knowledge we have as viewers of political ads, the more likely an ad is to trigger our curiosity about how generative AI was used in it. We may then look for explanations, which slows us down before we are pulled into believing the claims or quickly sharing the ad online. That rapid, uncritical behavior is exactly what deceptive and manipulative campaigns want.
There are no magic bullets, no one-size-fits-all answers. Debunk, explain, educate.