OpenAI Is Working On New AI Image Detection Tools To Tell If Photos Were Made By DALL-E In A Busy Worldwide Election Year

OpenAI has unveiled new AI tools that can detect whether an image was created using its DALL-E image generator. The company has also introduced watermarking techniques to better identify the content it generates.

What Happened: Microsoft Corp. (NASDAQ: MSFT)-backed OpenAI is developing methods to trace and authenticate AI-generated content. These include an image detection classifier that identifies AI-generated photos and a watermarking system that discreetly tags audio content, the company said in a blog post.

OpenAI also introduced Model Spec, a framework outlining expected behaviors for AI tools, meant to guide future responses from models like GPT-4.

The classifier can determine whether an image was generated by DALL-E 3, and OpenAI claims it remains accurate even if the image undergoes cropping, compression, or changes in saturation.

However, its ability to identify content from other AI models is limited: it flags only around 5% to 10% of images from other image generators, such as Midjourney.
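OpenAI has not published the classifier itself or a public API, but the robustness claim is straightforward to probe. The sketch below is a minimal illustration in Python using Pillow; the `detect_dalle3` scoring function is a hypothetical stand-in for the gated classifier, which is only reachable through OpenAI's research access program. It re-scores perturbed copies of an image (cropped, recompressed, resaturated) to check whether the verdict survives the kinds of edits OpenAI says it tolerates.

```python
import io

from PIL import Image, ImageEnhance


def detect_dalle3(image: Image.Image) -> float:
    """Hypothetical stand-in for OpenAI's classifier: P(image came from DALL-E 3)."""
    raise NotImplementedError("requires access via OpenAI's research program")


def perturbations(image: Image.Image):
    # 1. Center-crop to roughly 80% of the original width and height.
    w, h = image.size
    dx, dy = int(w * 0.1), int(h * 0.1)
    yield "crop", image.crop((dx, dy, w - dx, h - dy))

    # 2. Round-trip through aggressive JPEG compression.
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=30)
    buf.seek(0)
    yield "jpeg_q30", Image.open(buf)

    # 3. Boost color saturation by 50%.
    yield "saturation_1.5x", ImageEnhance.Color(image).enhance(1.5)


def robustness_report(path: str) -> None:
    # Score the original, then each perturbed copy, and report the drift.
    original = Image.open(path)
    base = detect_dalle3(original)
    print(f"original         score={base:.3f}")
    for name, variant in perturbations(original):
        score = detect_dalle3(variant)
        print(f"{name:<16} score={score:.3f} (delta {score - base:+.3f})")
```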

OpenAI has previously added content credentials from the Coalition for Content Provenance and Authenticity (C2PA) to image metadata, and this month it joined the C2PA steering committee. The AI startup has also started adding watermarks to clips from Voice Engine, its text-to-speech platform currently in limited preview.
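Content credentials of this kind are machine-readable, so third parties can check images for them. Here is a minimal sketch that assumes the open-source `c2patool` CLI from the Content Authenticity Initiative is installed and on PATH (it prints an embedded manifest as JSON); the file name `photo.jpg` is illustrative.

```python
import json
import subprocess


def read_content_credentials(path: str):
    """Return the C2PA manifest store embedded in `path` as a dict, or None."""
    result = subprocess.run(
        ["c2patool", path],  # c2patool reports the embedded manifest as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest present, or the file could not be read
    return json.loads(result.stdout)


manifest = read_content_credentials("photo.jpg")
if manifest:
    print("Content credentials found:")
    print(json.dumps(manifest, indent=2)[:500])  # peek at the manifest store
else:
    print("No C2PA metadata embedded in this image.")
```

Note that stripping metadata defeats this check, which is why OpenAI pairs content credentials with the detection classifier above.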

Both the image classifier and the audio watermarking signal are still being refined. Researchers and nonprofit journalism groups can apply to test the image detection classifier through OpenAI's research access platform.

This comes as a record number of countries worldwide are holding national elections or will hold them later in 2024. The U.S., India, and the United Kingdom are all set to vote within the next six months.

See Also: ChatGPT Is Not A ‘Long-Term’ Engagement Model, OpenAI’s Top Executive Says: ‘Today’s Systems Are Laughably Bad’

Why It Matters: OpenAI’s new AI-generated image detection tools come at a time when concerns over misinformation spread via AI-generated content are on the rise.

In March, it was reported that AI image creation tools from OpenAI and Microsoft were being used to fabricate images that could contribute to election-related disinformation. This raised concerns about the potential for AI tools to be misused for malicious purposes.

AI and misinformation have been a hot topic in the lead-up to the 2024 U.S. election, with more than half of Americans expressing concern about AI's potential to spread misinformation.

Read Next: OpenAI CEO Sam Altman Once Called GPT-2 ‘Very Bad’ But Now Confesses He Has A ‘Soft Spot’ For The Version: Here’s ChatGPT’s Evolution Story


This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.
