Amid heightened conversation around AI regulation following the U.K. AI Safety Summit and U.S. President Joe Biden's executive order, Meta Platforms Inc.'s META chief AI scientist has voiced his opposition, citing concerns that such rules could restrict the "freedom to compute," echoing the views of a fellow NYU researcher.
What Happened: The perspective of Meta's chief AI scientist Yann LeCun found resonance with AI and games researcher Julian Togelius, who has also been vocal about the potential hazards of stringent AI regulation.
In a recent blog post, and subsequently on X (formerly Twitter), Togelius argued that regulating AI at the technical level rather than the application level poses a significant threat to digital freedoms.
He said, “Effective AI regulation is impossible without broad surveillance and regulation of our personal computing. But personal computing is central to communication and expression in modern society.”
Togelius also suggested that the proponents of AI safety might unwittingly lead us toward a surveillance state, a prospect he vehemently opposes.
Further, in his blog post, he said that people often compare the regulation of AI to that of nuclear weapons, but noted several critical differences: AI encompasses a diverse array of technical methods and capabilities that are challenging to regulate effectively.
He said that, unlike nuclear weapons, AI does not require costly, centralized production, and its potential applications are vast and varied, from personal devices to large-scale distributed systems.
LeCun agreed with Togelius and said, “regulating AI R&D (as opposed to products) would lead to unacceptable levels of policing and restrictions on the ‘freedom to compute.'”
Why It’s Important: Last month, Google Brain co-founder Andrew Ng also asserted that big tech companies are deceiving the public about the threat posed by AI. He argued that the belief that AI could wipe out humanity is mistaken, and that strict rules premised on that belief lead to policies that don’t make sense.
For the unversed, earlier this year, more than 1,000 industry leaders, including Elon Musk and Apple co-founder Steve Wozniak, signed an “open letter” calling for a six-month moratorium on training AI models “more powerful” than OpenAI’s GPT-4.
Later, Google CEO Sundar Pichai agreed with the sentiments of OpenAI co-founder and CEO Sam Altman and Musk, saying, “I still believe AI is too important not to regulate and too important not to regulate well.”
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.