OpenAI's GPT-4 Demonstrates Limited Utility In Bioweapon Development: Study

The latest artificial intelligence model from ChatGPT-parent OpenAI, GPT-4, has shown only a minimal propensity to aid in the development of biological threats, a new study reveals.

What Happened: OpenAI ran tests to gauge the danger of GPT-4 being exploited to assist in the production of biological weapons.

This study was initiated in response to apprehensions voiced by legislators and tech industry leaders over potential misuse of AI technologies for harmful purposes, reported Bloomberg. 

Last October, an executive order signed by President Joe Biden directed the Department of Energy to ensure AI technologies do not pose chemical, biological, or nuclear risks. In response, OpenAI assembled a “preparedness” team to tackle potential risks associated with AI.

See Also: Microsoft CEO Satya Nadella Doesn’t Know How He’d Work ‘If You Took Away’ This AI Tool — Hint: It’s Not ChatGPT

As part of the team's inaugural study, unveiled on Wednesday, OpenAI's researchers recruited 50 biology experts and 50 college-level biology students, who were divided into two groups.

One group used a special version of GPT-4 and the internet for tasks related to making a biological threat, while the other group used only the internet.

Despite a slight uptick in accuracy and completeness for the group using the AI model, the researchers concluded that GPT-4 provides at most a marginal improvement in access to information relevant to creating biological threats.

Aleksander Madry, who heads the preparedness team, stated that this study is part of a series aimed at understanding potential abuses of OpenAI’s technology. Other ongoing studies are exploring AI’s potential use in creating cybersecurity threats and in influencing personal beliefs.

Why It Matters: Concern over the misuse of AI tools has been mounting, as was evident when the Biden administration weighed regulating tools like ChatGPT over fears of harm and discrimination.

OpenAI CEO Sam Altman’s subsequent launch of a “preparedness challenge” in October 2023 was a clear response to growing fears over AI’s potential misuse.

Concerns over AI’s dual-use potential were further amplified with the release of GPT-4, the latest AI engine behind ChatGPT, which experts warned could make it easier for malicious actors to build malware programs and create new cybersecurity threats. 

Photo Courtesy: Shutterstock.com


Read Next: The Hidden ChatGPT Trick: Being Nice Can Give Surprisingly Better Results

Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.
