What Is OpenAI's Q*: A Revolutionary AI Tool Or Global Menace?

Zinger Key Points
  • Q* is reportedly able to solve math problems it has not seen before.
  • OpenAI board members deny Sam Altman's sacking was over safety issues.

Before his sacking as CEO of OpenAI last week — and reinstatement this week — could Sam Altman have been working on an artificial intelligence system so powerful it might threaten the safety of humankind?

It sounds like a horror story worthy of Skynet, the AI system in the Terminator films that seized control of the planet and condemned humanity to death and slavery. And with Microsoft Corp MSFT as your chief backer, you’d have access to a vast share of the world’s computers.

Much of the media speculation about Altman’s sacking focused on “safety concerns.” Tech news website The Information suggested Altman had been working on a model called Q* (Q-Star) that was developing at such a pace it alarmed safety researchers.

‘No Disagreement On Safety’

This was denied last week by several of OpenAI’s board members, and Emmett Shear, interim CEO during Altman’s brief removal, wrote this week that the board “did not remove Sam over any specific disagreement on safety.”

Nevertheless, for many, where there’s smoke there’s fire. So what is Q*, and what triggered the speculation about these safety issues?

Q* was reportedly able to solve basic math problems it had never seen before. If true, that would be a major leap forward in AI capabilities, though still short of the much-debated artificial general intelligence (AGI) that could perform tasks at, or above, human levels of ability. A Skynet moment, perhaps?
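OpenAI has never publicly explained the name, but a common guess among researchers at the time was that it nods to Q*, the standard symbol for the optimal action-value function in reinforcement learning (and perhaps to the A* search algorithm). For the curious, below is a minimal sketch of textbook tabular Q-learning on a made-up five-state toy problem; it illustrates the classic algorithm only, and says nothing about OpenAI’s actual system.

import random

N_STATES = 5          # states 0..4; reaching state 4 ends an episode
ACTIONS = [-1, +1]    # step left or right along a chain
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: Q[state][action_index], initialized to zero.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Move along the chain; reward 1.0 only on reaching the last state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy exploration.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # Core update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward if done else reward + GAMMA * max(Q[next_state])
        Q[state][a] += ALPHA * (target - Q[state][a])
        state = next_state

for s, row in enumerate(Q):
    print(f"state {s}: left={row[0]:.2f} right={row[1]:.2f}")

After a few hundred episodes, the learned values favor moving right in every state, which is exactly the optimal policy the update rule is designed to converge to.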

On Thursday last week, Altman appeared at a conference, saying that in the previous couple of weeks he had been in the room when OpenAI “pushed the veil of ignorance back and the frontier of discovery forward.” He was sacked the next day.

Also Read: OpenAI Engineers Flexed The Power Of Their Rare Skillset In Sam Altman Reinstatement

OpenAI’s AGI Mission

OpenAI’s mission statement on its website reads: “We believe our research will eventually lead to artificial general intelligence, a system that can solve human-level problems. Building safe and beneficial AGI is our mission.”

AGI’s critics were less concerned about existential threats to humankind than about the solving of “human-level problems,” and the impact that might have on issues such as jobs, privacy and cybersecurity.

Writing in Forbes, Nisha Talagala, an entrepreneur and technologist specializing in AI and AI literacy, said: “If nothing else, this week's drama at OpenAI shows how little we know about the technology development that is so fundamental to humanity's future — and how unstructured our global conversation on the topic is.”

Now Read: Key AI Debates For 2024: Nvidia Competitors Threaten Dominance With Custom Chips
