OpenAI Blocks Iranian Group's ChatGPT Accounts For Targeting US Election, Says Covert Operation Did Not Achieve 'Meaningful Audience Engagement'

OpenAI has uncovered and dismantled a covert Iranian influence operation that leveraged ChatGPT, the second time since May that the company has disclosed adversarial use of its AI models.

What Happened: According to a post by Microsoft Corp.-backed OpenAI, the operation, named Storm-2035, used ChatGPT accounts to generate content aimed at manipulating public opinion during the 2024 elections. The accounts involved have been banned from using OpenAI’s services.

The operation produced long-form articles and short social media comments on various topics, including U.S. politics, global events, and the U.S. presidential election. The content was shared via social media and websites posing as news outlets.

Despite the operation’s efforts, it did not achieve significant audience engagement. Most social media posts received minimal interaction, and web articles were not widely shared, the AI startup said. OpenAI’s investigation was aided by information from Microsoft.

"The operation generated content about several topics: mainly, the conflict in Gaza, Israel's presence at the Olympic Games, and the U.S. presidential election," the company noted.

OpenAI has shared threat intelligence with government, campaign, and industry stakeholders to support the wider community in disrupting such activities.

Why It Matters: This incident is part of a broader trend of using AI tools for disinformation campaigns. In May, OpenAI revealed that its AI models were being exploited by networks associated with Russia, China, Iran, and Israel to spread disinformation globally.

The company disclosed that five covert influence operations had utilized its AI models to generate misleading text and images.

In June, OpenAI announced plans to restrict access to its tools in China amid rising tensions and pressure from the U.S. government to curb China’s access to advanced AI technology. Despite these restrictions, developers in China have been using OpenAI’s tools via virtual private networks and other means.

Concerns about election security have been heightened following a cyberattack on Donald Trump’s presidential campaign. Former White House officials have warned of further cyberattacks, suggesting that adversaries may be rerunning the 2016 election-interference playbook.

On Thursday, Alphabet Inc.'s Google confirmed that Iranian hackers linked to the Revolutionary Guard targeted the personal email accounts of individuals associated with the U.S. presidential campaigns of President Joe Biden and former President Trump.

These attacks, which began in May, have been aimed at current and former government officials, as well as campaign affiliates.



Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.

