EU's AI Act Includes A Category For Unacceptable Risk Applications

Both in the United States and abroad, concerns are mounting about the potential for artificial intelligence (AI) technology to cause more harm than good. Along with the Biden administration, the European Union is taking serious action with the approval of its Artificial Intelligence Act.

ChatGPT continues to send waves through the world as it disrupts everything from education to marketing. The trend will likely only accelerate as venture capital continues to pour billions into AI startups.

What's The AI Act?

A committee of lawmakers in the European Parliament approved the EU's AI Act, pushing it one step closer to becoming law. 

The AI Act is the first law for AI systems in the West. With a risk-based approach to regulating AI, the law seeks to manage how companies develop and employ generative AI technologies. 

The AI Act categorizes applications into four risk categories: unacceptable risk, high risk, limited risk and minimal or no risk.

In a perfect world, all applications would fit within the "minimal or no risk" category, but that's not realistic. With concerns about the misuse of AI continuing to grow, the AI Act has a defined category for "unacceptable risk" applications. Any application that falls within this category is banned from being used in the European Union. 

As reported by CNBC, here's how these technologies are defined:

  • AI systems using subliminal techniques or manipulative or deceptive techniques to distort behavior
  • AI systems exploiting vulnerabilities of individuals or specific groups
  • Biometric categorization systems based on sensitive attributes or characteristics
  • AI systems used for social scoring or evaluating trustworthiness
  • AI systems used for risk assessments predicting criminal or administrative offenses
  • AI systems creating or expanding facial recognition databases through untargeted scraping
  • AI systems inferring emotions in law enforcement, border management, the workplace and education

With these regulations in place, Europe is at the forefront of curbing potentially dangerous AI technology.

Foundation Models Are Also In The Crosshairs Of The AI Act

Foundation models, such as ChatGPT, are also receiving plenty of scrutiny under the AI Act. Requirements are in place to ensure that these large language models act responsibly.

For example, developers of these models must implement governance measures, risk mitigations and safety checks before making the technology available for public use. They must also take steps to ensure that their training data doesn't violate copyright law.
