Alphabet Inc. (NASDAQ: GOOG) (NASDAQ: GOOGL), Amazon.com Inc. (NASDAQ: AMZN), Microsoft Corp. (NASDAQ: MSFT) and Meta Platforms Inc. (NASDAQ: META), along with OpenAI, Anthropic and Inflection, have pledged to adhere to a set of voluntary guidelines aimed at mitigating the risks associated with artificial intelligence (AI).
This move, orchestrated by the White House, marks a significant stride in the Biden administration’s efforts to regulate AI technology.
The companies have committed to implementing safety measures, including internal and external security testing of their AI systems prior to public release. Independent experts will be involved in this testing process to guard against potential risks such as biosecurity and cybersecurity threats, as well as broader societal impacts. The companies have also pledged to share information on managing AI risks with industry peers, governments, civil society, and academia.
To foster public trust, the companies will develop robust technical mechanisms to inform users when content is AI-generated, such as a watermarking system. They will also publicly report their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security and societal risks.
Furthermore, they have committed to prioritizing research on societal risks posed by AI systems, including avoiding harmful bias and discrimination, and protecting privacy.
Biden-Harris Administration Takes Steps To Regulate AI
According to a White House statement released Friday, the Biden-Harris administration is also developing an executive order and pursuing bipartisan legislation to further regulate AI technology. The administration has consulted with several countries on these voluntary commitments, indicating a global effort to establish a robust international framework for AI governance.
Top executives, including Microsoft President Brad Smith, Inflection AI CEO Mustafa Suleyman and Meta President of Global Affairs Nick Clegg, were expected to convene at the White House on Friday to announce the commitments, according to the Financial Times.
"They are an important first step in ensuring responsible guardrails are established for A.I. and they create a model for other governments to follow," Nick Clegg said, the New York Times reported.
While these commitments are voluntary and lack specific enforcement mechanisms, the White House has expressed its intent to use every available lever within the federal government to uphold these standards. The move has been broadly welcomed, but critics caution that tech companies have a mixed track record of adhering to voluntary safety and security commitments.
This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
Photo: Shutterstock