Elon Musk and OpenAI CEO Sam Altman Continue Warnings on Risks of Artificial Intelligence: Here's How It Could Be Used Maliciously

Following the highly anticipated release of GPT-4, the successor to the model behind ChatGPT, earlier this month, OpenAI CEO Sam Altman has raised concerns about the dangers of artificial intelligence (AI). 

This isn’t the first time a tech mogul has voiced support for regulating AI. Billionaire polymath Elon Musk, who co-founded OpenAI with Altman in 2015, has advocated for AI regulation over the past decade. 

GPT-4’s Capabilities 

ChatGPT’s base version, launched in November 2022, disrupted the global tech industry and became the fastest-growing consumer application in history. Tech giants including Alphabet Inc.’s Google and Microsoft Corp. raced to launch their own AI models shortly after the viral success of ChatGPT, reflecting the ripple effect created by the relatively unknown tech startup. 

Roughly three months after that launch, OpenAI released GPT-4 with new and enhanced multimodal capabilities, allowing it to accept image as well as text inputs. The highly sophisticated model scored in the 90th percentile on the Uniform Bar Exam and earned a near-perfect score on the SAT Math test. Altman termed GPT-4 “our most capable and aligned model yet.”

Other AI-based startups are making notable advancements in the field as well. RAD AI recently launched what it calls the first AI marketing platform built to understand emotion, and it says some of the largest companies on the planet are already using it.

GenesisAI is a startup building a marketplace that allows any business to integrate AI and automation into its operations, meaning AI might soon be as integral a part of a business as its employees.


Altman’s Concerns 

But as with all high-tech services, generative AI software can also be used maliciously. According to Altman, GPT-4 could be used for large-scale disinformation and, because it can write computer code, for offensive cyberattacks. While Altman has said he believes GPT-4 could be the greatest technology developed by humans, he also cautioned that it’s important to be careful.

GPT-4 relies on deductive reasoning rather than memorization, an approach that can lead to inaccurate results and misstatements. 

“The thing that I try to caution people the most is what we call the ‘hallucinations problem’,” Altman said in an interview with ABC News. “The model will confidently state things as if they were facts (but they) are entirely made up.”


Need For Regulation 

Though ChatGPT operates only on human prompts and so remains under human control, Altman says many people will try to bypass OpenAI’s safety restrictions, which could have negative consequences.

“There will be other people who don’t put some of the safety limits that we put on,” Altman stated. “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”

As countries race to replicate the ChatGPT software, the dangers of misuse are immense.

“If we just developed this in secret — in our little lab here — and made GPT-7 and then dropped it on the world all at once ... that, I think, is a situation with a lot more downside,” Altman said. “People need time to update, to react, to get used to this technology [and] to understand where the downsides are and what the mitigations can be.”

Altman said he is in regular contact with government officials as well as policy and safety experts to help navigate how the technology is used, and that the system is being audited to address issues and develop a safe product.

Musk And Altman Aren’t The Only Ones 

AI is the next big thing for the tech industry, but tech titans have raised concerns about its negative implications, especially against today’s tumultuous geopolitical backdrop. While Musk predicts a “Terminator-like” outcome, others including Altman and Alphabet Inc. CEO Sundar Pichai are worried about cyberattacks.

In 2020, Pichai stated, “There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.” 

Former IBM CEO Ginni Rometty, Salesforce Inc. CEO Marc Benioff and Microsoft Corp. CEO Satya Nadella have all spoken out about the negative potential of AI and the need for regulation and oversight. 
