
The AI Act takes a risk-based approach to regulating AI: the greater the risk an AI system poses, the more regulated it will be. 

The AI Act weighs many factors when assessing an AI system’s risk. These factors are linked to and derived from the values of the EU, fundamental human rights and the safety of natural persons. The more an AI system interacts with these values and rights, the greater the risk it poses and the more heavily it is regulated.

The AI Act creates four risk-based categories.

  1. Unacceptable Risk 
  2. High Risk
  3. Limited Risk 
  4. Minimal Risk
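To make the tiering concrete, here is a minimal Python sketch that models the four categories as an enumeration alongside a deliberately simplified triage helper. The boolean flags and their names are illustrative assumptions for this sketch only; the AI Act does not define classification in these terms, and real classification requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers created by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # permitted, subject to strict requirements (Article 6)
    LIMITED = "limited"            # permitted, subject to transparency obligations (Article 52)
    MINIMAL = "minimal"            # permitted; voluntary codes of conduct only (Article 69)

def classify(uses_prohibited_practice: bool,
             is_annex_iii_use_or_safety_component: bool,
             interacts_with_natural_persons: bool) -> RiskTier:
    """Hypothetical triage: walk the tiers from most to least regulated."""
    if uses_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if is_annex_iii_use_or_safety_component:
        return RiskTier.HIGH
    if interacts_with_natural_persons:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-service chatbot that uses no prohibited practice and is not
# an Annex III use case, but does interact with natural persons.
print(classify(False, False, True))  # RiskTier.LIMITED
```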

 

1. Unacceptable-Risk AI Systems

 

Article 5 of the AI Act bans AI systems that pose an unacceptable risk. The drafters of the AI Act found that these systems infringe on fundamental human rights and EU values by threatening the safety and well-being of natural persons. These AI systems are capable of influencing natural persons to cause physical harm to themselves or others. They can also empower authorities to monitor and profile natural persons based on their activities. 

There are four such classes of unacceptable-risk AI systems. They include an AI system: 

  1. Capable of using subliminal techniques which “materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm” (Article 5(1)(a)).
  2. That “exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm” (Article 5(1)(b)).
  3. That provides social scoring by evaluating or classifying “the trustworthiness of natural persons over a certain period based on their social behaviour or known or predicted personal or personality characteristics” (Article 5(1)(c)).
  4. That can run “‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement” (Article 5(1)(d)).

Examples of unacceptable-risk AI systems: 

  • AI systems used in real-time remote biometric identification 
  • AI systems that can deploy subliminal and manipulative techniques
  • AI systems capable of monitoring and profiling natural persons 
  • AI systems that target vulnerable groups. 

 

2. High-Risk AI Systems 

 

Article 6 of the AI Act regulates high-risk AI systems. These systems are deemed high risk as they have the potential to negatively impact fundamental human rights or the health and safety of natural persons. 

The AI Act identifies two classes of high-risk systems. The first class covers AI systems that are either a safety component of a product, or a product in their own right, where that product requires a third-party conformity assessment under the Union health and safety harmonisation legislation. Examples of products in this first class include toys, cars, medical devices and lifts. 

The second class of high-risk AI systems comprises the AI systems used in the eight areas listed in Annex III, a list that Article 7 allows the Commission to update: 

  1. Biometric identification and categorisation of natural persons 
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, worker management and access to self-employment
  5. Access to and the enjoyment of essential services and public services and benefits 
  6. Law enforcement
  7. Migration, asylum and border control management
  8. Administration of justice and democratic processes

These systems pose a significant enough risk to the rights, health and safety of natural persons that they must comply with the requirements detailed in Title III, Chapter 2 of the AI Act. These requirements relate to risk management systems (Article 9), data and data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency and the provision of information to users (Article 13), human oversight (Article 14) and accuracy, robustness and cybersecurity (Article 15).
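As an illustration only, these Chapter 2 obligations can be captured as a simple checklist structure that a provider might use to track conformity work. The article numbers come from the paragraph above; the data layout and helper function are hypothetical and not prescribed by the Act.

```python
# Hypothetical compliance checklist for the Title III, Chapter 2 requirements.
HIGH_RISK_REQUIREMENTS = {
    "Article 9":  "Risk management system",
    "Article 10": "Data and data governance",
    "Article 11": "Technical documentation",
    "Article 12": "Record-keeping",
    "Article 13": "Transparency and provision of information to users",
    "Article 14": "Human oversight",
    "Article 15": "Accuracy, robustness and cybersecurity",
}

def outstanding_items(completed: set[str]) -> list[str]:
    """Return the requirements not yet marked as completed."""
    return [f"{article}: {requirement}"
            for article, requirement in HIGH_RISK_REQUIREMENTS.items()
            if article not in completed]

# Example: technical documentation and record-keeping are done; the rest is pending.
print(outstanding_items({"Article 11", "Article 12"}))
```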

 Examples of high-risk AI systems:

  • Biometric ID systems 
  • Automated recruiting systems 
  • Automated exam scoring systems
  • Automated risk scoring systems for bail or crime detection 

 

3. Limited-Risk AI Systems 

 

Article 52 of the Act identifies three types of limited risk systems. 

  1. AI systems designed to interact with natural persons
  2. AI systems capable of emotion recognition or biometric categorisation 
  3. AI systems that generate or manipulate image, audio or video content to create deep fakes.

For these AI systems to work effectively, they have to interact with natural persons. For example, emotion recognition and biometric categorisation systems only work because the AI system can interpret emotional expressions and categorise natural persons based on their features. This interaction creates risk because the AI system could, in some instances, infringe fundamental human rights or EU values.

Further, the Act places a transparency obligation on the providers and users of these AI systems. Under this obligation, the natural persons interacting with these AI systems must know that they are dealing with an AI system. You must make this disclosure explicitly unless it is apparent from the context in which you use the AI system.
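As a minimal sketch of how that disclosure might look in practice, assume a chatbot front end: the wording of the notice and the helper function below are invented for illustration and are not taken from the Act.

```python
AI_DISCLOSURE = "You are chatting with an AI system, not a human."

def with_disclosure(reply: str, disclosure_shown: bool) -> tuple[str, bool]:
    """Prepend the AI notice to the first reply of a conversation.

    Hypothetical helper: the point is simply that the person interacting with
    the system is told explicitly, unless the context already makes it obvious.
    """
    if not disclosure_shown:
        return f"{AI_DISCLOSURE}\n\n{reply}", True
    return reply, True

# Example: the first turn carries the notice, later turns do not repeat it.
first, shown = with_disclosure("Hello! How can I help?", disclosure_shown=False)
second, _ = with_disclosure("Here are your options...", disclosure_shown=shown)
print(first)
print(second)
```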

Examples of limited-risk AI systems:

  • Chatbots
  • Emotion recognition systems
  • Biometric categorisation systems 
  • Deepfake or synthetic content-generating systems 

 

4. Minimal-Risk AI Systems

 

Minimal-risk AI systems are AI systems that pose little to no risk to: 

  • fundamental human rights, 
  • the safety of natural persons, or 
  • the EU’s values.

Most AI systems would fall within this category in the AI risk-based model. 

This category of AI systems is not subject to strict regulatory control. Accordingly, the AI Act does not require you to put any specific or mandatory safeguards in place when designing, developing, deploying, or using these systems. However, the AI Act, in Article 69, recommends that the Commission and Member States support and encourage efforts to draft and adopt voluntary codes of conduct to regulate these AI systems. 

Examples of minimal-risk AI systems:

  • AI-enabled video games 
  • Inventory-management systems
  • Market segmentation systems

 

Insights

 

The risk-based approach is not foolproof, but it is a good starting point for regulating AI systems. Under the risk-based approach, AI systems are classified with their intended purpose in mind. This approach does not necessarily consider the impact an AI system may have if it is used outside the scope of its intended purpose. A knock-on effect of this is that developers and providers of AI systems shoulder most of the regulatory burden. It also leaves a loophole: buyers or users of an AI system could alter it after acquiring it and put what is effectively a high-risk system to use outside the controls that would otherwise apply. 

However, this approach provides principle-based guidance for regulating AI systems as AI systems continue to develop.