'Can't Wait For OpenAI To Have Access To My Phone' Says Musk As Former NSA Chief Joins OpenAI Board

OpenAI recently announced that retired U.S. Army General Paul M. Nakasone, the former Director of the National Security Agency (NSA) and former commander of U.S. Cyber Command, has joined its Board of Directors.

The company introduced Nakasone as a "leading expert in cybersecurity" to reinforce its commitment to safety and security. However, as AI technology reaches into nearly every corner of society, the decision has sparked significant controversy and backlash.

Edward Snowden, the former NSA employee and whistleblower, strongly criticized the decision. He tweeted, "They’ve gone full mask-off: do not ever trust @OpenAI or its products (ChatGPT etc). There is only one reason for appointing an @NSAGov Director to your board. This is a willful, calculated betrayal of the rights of every person on Earth. You have been warned."

Elon Musk also weighed in sarcastically, tweeting, "Can't wait for OpenAI to have access to my phone." The remark fits a larger pattern of Musk's ongoing criticism of OpenAI, the company he co-founded in 2015.

Matthew Green, a cryptography professor at Johns Hopkins University, also expressed his concerns on Twitter. He wrote, "I do think that the biggest application of AI is going to be mass population surveillance, so bringing the former head of the NSA into OpenAI has some solid logic behind it."

Nakasone’s appointment comes amid a series of high-profile departures from OpenAI, including safety researchers Daniel Kokotajlo and William Saunders. The company also disbanded its "Superalignment" safety team, which has added to the skepticism surrounding the new board member’s role and the future direction of OpenAI.

When he announced his departure, Jan Leike, who worked on OpenAI’s long-term safety initiatives and was part of the "Superalignment" team, criticized the company for not supporting his team's work. Policy researcher Gretchen Krueger, who also left recently, echoed some of Leike’s concerns and mentioned additional issues she observed.
