ChatGPT-parent OpenAI has announced an independent board oversight committee focused on safety and security, established amid ongoing controversy over the company's safety and security processes.
What Happened: In a blog post on Monday, OpenAI revealed that the Safety and Security Committee, which was formed in May, will now function as an independent board oversight committee.
The committee will be led by Zico Kolter, the director of the machine learning department at Carnegie Mellon University's School of Computer Science.
Other members include Adam D'Angelo, an OpenAI board member and co-founder of Quora; Paul Nakasone, former NSA chief and OpenAI board member; and Nicole Seligman, former executive vice president at Sony.
The committee’s mandate is to supervise “the safety and security processes guiding OpenAI's model deployment and development.”
It has completed a 90-day review of OpenAI's processes and safeguards, making recommendations to the board.
The recommendations include establishing independent governance for safety and security, enhancing security measures, increasing transparency about OpenAI's work, collaborating with external organizations, and unifying the company's safety frameworks.
The committee will also oversee model launches, including the authority to delay a release until safety concerns are addressed.
Why It Matters: OpenAI has faced controversy and high-level employee departures, with concerns that the company is growing too quickly to operate safely.
Currently, OpenAI is also pursuing a funding round that could value the company at over $150 billion. Thrive Capital is leading the round with a planned investment of $1 billion, with Tiger Global also expected to join. Microsoft, Nvidia, and Apple are reportedly in talks to invest.
The AI startup has also launched a new model, called “o1,” which is focused on reasoning and “solving hard problems.”
Earlier this month, OpenAI acknowledged that o1 could potentially be misused for creating biological weapons, rating it as having a "medium risk" for issues related to chemical, biological, radiological, and nuclear (CBRN) weapons.
Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.