On 13 March 2024, the European Union (EU) passed the world’s first AI-specific law. The EU AI Act creates a uniform framework regulating the development and deployment of artificial intelligence in the EU. The Act is being implemented in stages, giving companies varying grace periods to comply with its different requirements.
An overview of the EU AI Act
The EU AI Act sets rules for how artificial intelligence is used in Europe. It takes a risk-based approach, dividing AI systems into categories like banned, high-risk, and low-risk. The Act bans harmful uses, such as social scoring, and requires high-risk systems to follow strict rules on data, oversight, and security. It also ensures people know when they’re interacting with AI. By introducing these rules, the Act aims to balance innovation with protecting fundamental rights.
Key features of the EU AI Act
- The Act sorts AI systems into four risk levels: banned, high-risk (strict rules), limited risk (some transparency rules), and low-risk (few or no rules).
- The Act bans harmful AI uses, like social scoring by governments or taking advantage of vulnerable people.
- High-risk AI systems, like those in healthcare or public safety, must meet strict rules for data quality, transparency, human oversight, and security.
- AI systems that interact with people, generate content, or make automated decisions must disclose to users that they are dealing with AI.
- The Act creates authorities to enforce the rules, monitor compliance, and issue penalties if needed.
Key dates
- 1 August 2024 | The EU AI Act enters into force.
- 2 February 2025 | Prohibitions on AI systems posing unacceptable risk take effect.
- 2 May 2025 | The Codes of Practice will be ready.
- 2 August 2025 | The obligations on general-purpose AI models take effect.