ChatGPT parent OpenAI has formed a new team to assess and probe AI models to protect against risk.
Aleksander Madry, the director of MIT’s Center for Deployable Machine Learning, will lead the team called Preparedness.
Preparedness’ chief responsibilities will be tracking, forecasting, and protecting against the dangers of future artificial intelligence systems, including their potential to power phishing attacks.
OpenAI CEO Sam Altman has flagged the risks of AI, including the possibility that it could contribute to human extinction.
The company is also open to studying “less obvious,” more grounded areas of AI risk, TechCrunch reports.
The Microsoft Corp MSFT-backed AI company is soliciting ideas for risk studies from the community, with a $25,000 prize and a job at Preparedness on the line for the top ten submissions.
OpenAI says that the Preparedness team will also formulate a “risk-informed development policy,” which will detail OpenAI’s approach to building AI model evaluations and monitoring tooling, the company’s risk-mitigating actions, and its governance structure for oversight across the model development process.
Recently, an AI safety forum led by OpenAI, Microsoft, Google parent Alphabet Inc GOOG GOOGL, and AI startup Anthropic elected its first director and shared plans to soon create an advisory board to help guide its strategy.
Several countries remain engaged in devising AI regulation. Britain is hosting a global AI safety summit in November.
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.