Ahead Of Trump Vs. Harris Faceoff, ChatGPT Parent OpenAI Uncovers Election Interference Misuse, But Sees No 'Meaningful Breakthrough'

ChatGPT-parent OpenAI has disclosed that its platform is being misused by malicious entities to meddle with democratic elections across the globe.

What Happened: According to a 54-page report published on Wednesday, OpenAI has disrupted more than 20 operations and deceptive networks around the world that sought to misuse its models. The threats ranged from AI-generated website articles to social media posts by fake accounts.

The report also notes that election-related uses of AI ranged from simple content-generation requests to complex, multi-stage efforts to analyze and respond to social media posts.

“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the AI startup stated.

Most of the social media content related to elections in the U.S. and Rwanda and, to a lesser extent, elections in India and the European Union.

Despite these attempts, OpenAI stated that none of the election-related operations were able to attract viral engagement or build sustained audiences using its tools.

OpenAI also said that a suspected China-based threat actor, “SweetSpecter,” unsuccessfully attempted to spear-phish its employees’ personal and corporate email accounts.

Why It Matters: OpenAI’s report comes less than a month before the U.S. presidential election. Kamala Harris holds a slight edge over Donald Trump, according to a recent Reuters/Ipsos poll, which shows her leading 46% to 43%, a closer contest than the six-point gap Trump faced previously.

Earlier this year, AI image-creation tools from OpenAI and Microsoft Corp. were reportedly used to spread election-related disinformation.

Previously, networks associated with Russia, China, Iran, and Israel were found exploiting OpenAI’s tools for global disinformation campaigns.

In February 2024, AI chatbots like GPT-4 and Google's Gemini were found to be spreading false and misleading information about the U.S. presidential primaries.

Following this, Google took preemptive measures to prevent its AI chatbot, Gemini, from becoming a source of misinformation.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
