Open Letter To AI Companies Calls For Better Transparency, Protections For Whistleblowers

Zinger Key Points
  • The employees write that AI companies have strong financial incentives to avoid effective oversight.
  • Broad confidentiality agreements block employees from voicing concerns, and ordinary whistleblower protections are insufficient.

Some former and current employees at AI technology companies have signed a letter highlighting the risks posed by these technologies.

The letter said, “These risks range from the further entrenchment of existing inequalities to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

The letter, signed by 13 people, including current and former employees at OpenAI, Anthropic and Alphabet Inc’s GOOG GOOGL Google DeepMind, said AI could exacerbate inequality, spread misinformation and allow autonomous weapons systems to cause significant loss of life.

The employees write that though some of these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public, AI companies have strong financial incentives to avoid effective oversight.

Currently, AI companies have no strict obligation to share information with governments about their systems’ capabilities and limitations, their protective measures, or the risk levels of different kinds of harm.

The letter argues that more effective government oversight of these corporations is needed and that current and former employees are among the few who can hold them accountable to the public.

Yet broad confidentiality agreements block employees from voicing concerns, and ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks the employees discuss are not yet regulated.

The call for stronger whistleblower protections comes as some OpenAI insiders seek to highlight what they describe as a culture of recklessness and secrecy at the Microsoft Corp MSFT-backed AI company.

The insiders allege that OpenAI started as a nonprofit research lab but now prioritizes profits and growth.

They also claim that OpenAI has used ‘hardball tactics’ to prevent workers from voicing their concerns about the technology.

The New York Times noted that the campaign comes at a rough moment for OpenAI, which is still recovering from last year’s attempted boardroom coup, in which chief executive Sam Altman was dismissed and then reinstated amid concerns about his candor.
