OpenAI, Google, Microsoft, and Anthropic Spearhead Industry-Wide Initiative For Responsible AI

Zinger Key Points
  • AI leaders including OpenAI, Google, Microsoft, and Anthropic created the "Frontier Model Forum" on Wednesday.
  • The forum aims to advance AI safety research, establish best practices and standards, and facilitate knowledge sharing among policymakers and industry players.

ChatGPT maker OpenAI, along with Anthropic, Alphabet Inc GOOG GOOGL, and Microsoft Corporation MSFT on Wednesday announced the creation of the "Frontier Model Forum," an industry group committed to promoting the safe and responsible development of frontier AI systems.

What Happened: The forum aims to advance AI safety research, establish best practices and standards, and facilitate knowledge sharing among policymakers and industry players.

The forum defines frontier models as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, capable of performing a wide array of tasks.

Membership is open to organizations that develop and deploy frontier models, demonstrate a commitment to frontier model safety, and are willing to contribute to advancing the forum’s efforts.

The Frontier Model Forum said it will work on three main areas over the coming year:

  • It will identify best practices for the safe development of frontier AI models and promote knowledge sharing among industry, governments, civil society, and academia.
  • The forum will support the AI safety ecosystem by identifying the most important open research questions on AI safety and coordinating research efforts in key areas.
  • It will also establish trusted mechanisms for sharing information regarding AI safety and risks among companies, governments, and relevant stakeholders.

Kent Walker, president of global affairs at Google, said, “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation.”

Brad Smith, vice chair & president at Microsoft, added, “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly.”


The establishment of the Frontier Model Forum follows a commitment made to President Joe Biden’s administration by seven tech companies, including the four founding members, to advocate for safe AI practices.

The seven companies – Alphabet, Amazon.com Inc AMZN, Microsoft, Meta Platforms Inc META, OpenAI, Anthropic, and Inflection – joined the Biden administration in a pledge to adhere to voluntary guidelines aimed at mitigating risks associated with AI.

The voluntary commitments signed by the companies underscore three fundamental principles: safety, security, and trust. AI developers have an obligation to ensure the safety of their technology, prioritize the security of their systems against cyber threats, and earn the public’s trust by empowering users to make informed decisions.

The establishment of "The Forum" and the pledge signed by the tech giants align with the industry’s commitment to ensure that the potential of AI is harnessed in a manner that promotes safety, security, and trust.


