The U.S. and U.K. were among 18 countries that signed an agreement on recommendations to keep artificial intelligence (AI) safe from hackers and other rogue actors who might wield the technology irresponsibly.
The guidelines were published primarily by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC), with input from many other international agencies as well as major developers and backers of AI technology, including Amazon AMZN, Alphabet GOOGL and Microsoft MSFT, a major investor in ChatGPT owner OpenAI.
The 18 countries that signed the document agreed that companies developing AI systems must do so in a secure and responsible way. This would require raising awareness of threats and risks and incorporating threat modeling into the design of those systems.
“These guidelines should be considered in conjunction with established cyber security, risk management and incident response best practice. In particular, we urge providers to follow the ‘secure by design' principles developed by CISA, NCSC and all our international partners,” it recommended.
Cybersecurity: While hackers have, thus far, relied on existing methods to deploy malware or attack organizations’ systems, experts believe hackers are already building AI-enabled threats to cybersecurity.
“As AI grows in importance, attackers will seek to outpace defenders’ efforts with their own research. It is crucial for security teams to stay up to date with attackers’ tactics to defend against them,” said Dave Shackleford, founder of Voodoo Security, writing in TechTarget.
While the document only issued guidelines on how AI should be developed and deployed securely and responsibly, some countries have taken the threat more seriously and are discussing how the industry should be regulated.
Germany, France and Italy agreed earlier this month that AI should be regulated through binding commitments requiring developers to follow a code of conduct, with a system of sanctions for breaches.
Last month, President Joe Biden signed an executive order requiring safety assessments across the AI industry on issues such as consumer privacy and the impact on the labor market.
Monday’s agreement followed an AI Safety Summit held at Bletchley Park in the U.K. at the start of November, at which a previous document, the “Bletchley Declaration,” was signed by 28 countries, addressing the need to manage the safety risks posed by AI.
Other signatories to Monday’s agreement included Germany, Italy, Poland, Australia and Singapore.
Secretary of Homeland Security Alejandro Mayorkas said: “The guidelines jointly issued today by CISA, NCSC, and our other international partners provide a common sense path to designing, developing, deploying, and operating AI with cyber security at its core.
“By integrating ‘secure by design' principles, these guidelines represent an historic agreement that developers must invest in, protecting customers at each step of a system's design and development.”
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.