From Ukraine To Gaza: OpenAI Reveals Its AI Models Exploited For Global Disinformation By Russia, China, Iran And Israel

OpenAI has revealed that its AI tools are being exploited by networks associated with Russia, China, Iran, and Israel to spread disinformation.

What Happened: OpenAI has disclosed that networks linked to Russia, China, Iran, and Israel are exploiting its AI tools to disseminate disinformation, according to a report by the Financial Times on Thursday.

The San Francisco-based company, known for its ChatGPT chatbot, revealed that five covert influence operations have utilized its AI models to generate misleading text and images.

These operations have focused on topics such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, European and U.S. politics, and criticisms of the Chinese government.

OpenAI’s policies explicitly prohibit using its models to deceive or mislead. However, these networks have managed to enhance their productivity by using AI for tasks like debugging code and researching public social media activity.

Social media giants like Meta and Google's YouTube have been working to curb disinformation since the 2016 U.S. presidential election.

The rapid advancements in AI technology have made it easier and cheaper for disinformation perpetrators to create realistic deepfakes and manipulate media.

See Also: Trump In Trouble After Memorial Day Post Defaming Judge, E. Jean Carroll? Author’s Attorney Says ‘All Options Are On The Table’

OpenAI’s principal investigator, Ben Nimmo, stated that while these campaigns have not significantly boosted engagement, the threat remains. OpenAI is committed to uncovering such disinformation campaigns and is developing its own AI-powered tools for better detection and analysis.

Why It Matters: The misuse of AI tools for disinformation is not a new concern. Earlier this year, it was reported that AI-generated content from OpenAI and Microsoft Corp MSFT image tools was fueling election misinformation scandals. This raised significant alarm among stakeholders, especially in a busy election year worldwide.

In addition, OpenAI has been working on new AI image detection tools to identify whether photos were created by its DALL-E model. This effort is part of a broader initiative to combat the spread of false information.

Concerns about AI’s role in spreading misinformation were also highlighted when AI chatbots provided incorrect and harmful answers about U.S. elections. This incident underscored that chatbots might not be ready for primetime in critical areas like elections.

Moreover, prominent figures like Hillary Clinton have voiced their worries about AI-generated disinformation ahead of the 2024 elections, emphasizing the need for vigilance.

Read Next: Paul Krugman Schools Americans ‘Righteously Angry’ About Soaring Costs: Biden’s Price Levels Mirror Reagan’s Inflation Triumph

Image Via Shutterstock

This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote
