ChatGPT creator OpenAI has ramped up efforts to prevent the emergence of rogue AI, acknowledging the critical importance of averting the potential dangers posed by superintelligent machines.
What Happened: Concerned that superintelligent AI could disempower or even wipe out humanity, OpenAI is investing significant resources in the problem and has formed a specialized research group called the Superalignment team.
See Also: How To Use ChatGPT With Your Voice
According to a blog post published on Wednesday, the OpenAI team aims to build a "human-level" AI alignment researcher that can address the risks posed by superintelligent AI. The company intends to dedicate 20% of its computing power to this mission over the next four years.
"The vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction," said OpenAI, adding, "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."
OpenAI's co-founder and chief scientist, Ilya Sutskever, will lead the new team dedicated to aligning superintelligent AI with human values. The company plans to share its findings widely and actively engage with interdisciplinary experts to address broader societal concerns.
Why It's Important: In March, more than 1,000 tech experts, including Elon Musk, signed an open letter calling for an immediate pause on AI development "more powerful" than OpenAI's GPT-4 until safety protocols for such designs are created, implemented, and audited by independent experts.
Previously, Alphabet Inc.'s GOOG GOOGL CEO Sundar Pichai acknowledged that current AI systems suffer from "hallucination problems" that are not yet fully understood.
At its launch, Google Bard drew massive backlash for providing incorrect information, which sparked a debate over the chatbot's accuracy.
OpenAI's ChatGPT, along with Microsoft Corporation's Bing AI, which uses the same underlying OpenAI technology, has also faced criticism for being "woke" and going rogue, respectively.
OpenAI's latest announcement on preventing rogue AI comes as discussion of stringent regulation of the technology's growth reaches its peak. In May, OpenAI CEO Sam Altman testified before Congress and made bold statements about AI regulation. "I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that," he said.
Read Next: ChatGPT’s Website Sees A Dip In Visitors: Is The World Already Tired Of Chatting With AI?