'Not That Interested In Killer Robots:' Instead, OpenAI CEO Sam Altman Is Worried 'Subtle Misalignments' Could Make AI Dangerous

Zinger Key Points
  • OpenAI CEO Sam Altman says neither his company nor any other in the AI industry should have a role in forming regulations.
  • He also stressed the need to set up a regulatory body akin to the International Atomic Energy Agency, tailored to the AI industry.
  • Altman thinks we’re “subtle societal misalignments” away from AI becoming dangerous and wreaking havoc.

In a recent address at the World Government Summit in Dubai, OpenAI CEO Sam Altman expressed apprehensions about the potential hazards of artificial intelligence (AI), which could become dangerous and "chaotic" due to "very subtle societal misalignments."

What Happened: Altman emphasized the need for a regulatory body akin to the International Atomic Energy Agency (IAEA) to oversee the rapid advancement of AI.

Altman voiced his concerns about the societal implications of unregulated AI systems.

He stressed that the AI industry should not be self-regulating and advocated for a global action plan, echoing sentiments expressed in October last year by Satya Nadella, CEO of OpenAI's biggest backer, Microsoft Corp. MSFT.

"There's some things in there that are easy to imagine where things really go wrong," Altman said, adding that he's not interested in killer robots roaming the streets.

"I'm much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong."

See Also: Nvidia’s ‘Chat With RTX’ Lets You Make Your Own ChatGPT And Run It On Your PC For Free

Despite being the face of one of the world's leading AI players, Altman maintained that regulations should not be driven by OpenAI or, for that matter, any other AI company.

"I think we're still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world."

AI’s potential risks have also sparked debates among industry experts. In November, Geoffrey Hinton and Yann LeCun, two of the three “Godfathers of AI,” had a heated argument about the risks of AI leading to humanity’s extinction.

Why It Matters: Altman’s concerns echo those of other industry leaders. Nadella expressed similar worries about AI’s potential dangers, sought guidance from moral philosophers and urged developers to remain cautious.

Also last year, Altman and xAI founder Elon Musk agreed with Quora CEO Adam D'Angelo that the creation of artificial general intelligence (AGI) would be a defining moment in world history.

These concerns led to the establishment of the United States AI Safety Institute in 2023, tasked with developing guidelines and best practices for examining potentially harmful AI systems.

Check out more of Benzinga’s Consumer Tech coverage by following this link.

Read Next: Nvidia CEO Jensen Huang Urges Nations To Embrace ‘Sovereign AI,’ Predicts Data Center Spending To Hit $2 Trillion By 2029

Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.

Photo courtesy: Shutterstock
