OpenAI's Mira Murati Lists Out The Real 'Issues' Delaying Sora AI Release: 'This Is The Reason Why We're Not Actually Deploying'

Zinger Key Points
  • OpenAI announced the Sora AI model in February, but it has not been released to the public yet.
  • Sora AI can generate videos based on simple text prompts, which impressed many people.
  • OpenAI CTO Mira Murati revealed the “actual reason” behind the delay in Sora AI’s release.

OpenAI CTO Mira Murati has spilled the beans on why the startup has not yet released the Sora AI model despite its impressive February debut.

What Happened: In an interview with The Wall Street Journal's Joanna Stern, Murati revealed that Microsoft Corp. MSFT-backed OpenAI is still "red teaming" the Sora AI model to find and fix possible flaws before it is released to the public.

But the "actual reason" is something that a lot of people have been worried about with respect to AI – that it can be used for spreading misinformation.

One challenge of generative AI is that it makes it difficult to distinguish what is real from what isn't.

See Also: Elon Musk’s Brother Kimbal Shares The ‘Holy S***’ Moment When They Knew Their First Company’s Product Was Destined For ‘Everyone Forever’

While the ability to edit photos and videos has been around for a while, generative AI powered by supercomputers and advanced algorithms makes it far easier to pass off something fake as real.

Murati does not have a definitive answer just yet, but she stresses that the company is "doing research" into the problem.

"We're watermarking the videos, but really figuring out the content provenance and how do you trust what is real content, versus something that happened in reality," she said, describing a question her team is still trying to answer.

But that research is not OpenAI's biggest problem with Sora, nor the primary reason it has not been released to the public nearly a month after a demonstration that left many people impressed.

"Content created for misinformation… is the reason why we're not actually deploying the systems yet."

"We need to figure out these issues before we can confidently deploy them broadly."

For the time being, Murati says that Sora AI's safety guardrails will be along the lines of those for OpenAI's text-to-image generator DALL-E. This includes the policy of not generating images of public figures.

"Right now, we're in the discovery mode and we haven't figured out exactly where all the limitations are."

That said, Murati says she is "not sure" if Sora AI will be allowed to generate videos involving nudity, but she notes that OpenAI is "working with artists" to understand "what's useful and what level of flexibility the tool should provide."

Why It Matters: The Sora AI model's announcement in February left a lot of people impressed, with CEO Sam Altman calling it a "remarkable moment."

The AI video generator can not only create videos from text prompts but also enhance existing videos by adding or removing frames, based on its understanding of how a given scene plays out in the physical world.

However, its potential for misuse presents a serious problem at a time when AI-generated deepfake images are wreaking havoc, with victims ranging from minor girls to pop star Taylor Swift. Microsoft CEO Satya Nadella called the Swift deepfakes "alarming and terrible," promising "quick action" against the culprits.

The White House has called for legislative action, but the specifics are still unclear.

Check out more of Benzinga's Consumer Tech coverage by following this link.

Read Next: OpenAI Could Let Users Send Two Times Longer Prompts In GPT-4’s Next Big Update, Leak Suggests

Photos courtesy: Shutterstock and Flickr
