DeSantis, Trump Jr. Leverage AI In Political Campaigning: Experts Fear Deepfake Misinformation In 2024 Elections

Zinger Key Points
  • There are no laws in place requiring the use of a disclaimer when AI is used to mimic a real-life event that never happened.
  • AI can be leveraged by political campaigns beyond deepfakes, to write personalized emails or analyze large data sets.

Political experts are beginning to worry about the dangerous ways in which the artificial intelligence revolution could affect political campaigning during the 2024 elections.

As surprising developments in generative AI technology continue to arrive almost daily, political misinformation is quickly becoming one of the biggest concerns ahead of the presidential election.

It is now easier than ever to fabricate life-like audio and images that can realistically imitate human speech or portray public figures doing or saying things they never did.

This isn't a dystopian prediction: the use of deepfakes as a political weapon has already begun. Late last month, Donald Trump Jr. posted a parody video in which Florida Gov. Ron DeSantis' face and voice were inserted into a scene from "The Office." DeSantis is currently Donald Trump's most powerful challenger for the Republican presidential nomination.

DeSantis went one step further and used AI-generated images as part of his official campaign. Earlier this month, the official DeSantis War Room Twitter account posted photorealistic images of Trump hugging former Chief Medical Advisor to the White House Anthony Fauci, which were later revealed to have been made using generative AI software.


In April, the Republican National Committee released a video built from AI-generated images of President Joe Biden, depicting a what-if scenario in which a major crisis unfolds after his hypothetical reelection.

There are currently no laws in place requiring a disclaimer when AI is used to mimic a real-life event that never happened. A bill introduced in early June by Rep. Ritchie Torres (D-NY) could soon change that. The "AI Disclosure Act of 2023" would require "videos, photos, text, audio, and/or any other AI generated material" to carry an accompanying disclaimer reading "this output has been generated by artificial intelligence," according to Torres' office. A similar bill introduced last week in the Senate would impose the same requirement on materials produced by the U.S. government.

According to Politico, more than 70 Democratic strategists met last week via Zoom to discuss the rapid introduction of AI technology in the 2024 presidential campaign, with members saying they don't expect regulation to be in place fast enough to prevent the proliferation of this type of messaging in the campaigns.

Officials in the meeting highlighted the need for campaign staff to learn how to deal with AI-generated political content.

Pat Dennis, president of the research group American Bridge 21st Century, said that bad actors will have "an exponentially easier time writing more stuff," and "flooding the zone." 

A recent article published in The Conversation, and reviewed by Scientific American, proposes that AI platforms could be used to produce effective micro-targeting campaigns meant to influence voting behavior, which could far exceed the opinion-changing capabilities of previous campaigns.

The need to educate voters on how to identify AI-generated misinformation and disinformation was also described as crucial at the strategists' meeting. Yet attendees also acknowledged that AI can assist campaign efforts, allowing campaigns with fewer resources to produce graphics and copy or to analyze large datasets.

One obvious use of widely available AI text-generation tools is to write hundreds of campaign emails at once, replacing fleets of interns. The New York Times reported earlier this year that the Democratic Party has already begun experimenting with ChatGPT to write fund-raising messages.

On the other hand, the unofficial use of generative AI could have severely negative consequences for the outcome of the election. For instance, the voices of candidates and other influential figures can be easily mimicked in automated calls that deliver intentionally confusing messaging, such as telling voters to go to the polls on the wrong date.

A recent review of the issue by the progressive think tank Brennan Center for Justice proposed that the executive branch designate "a lead agency to coordinate governance of AI issues in elections."


Image made using the AI software Midjourney.
