The EU is on the brink of a major breakthrough in the regulation of artificial intelligence (AI). The latest draft text of the AI Act includes significant changes that will affect the AI industry in the EU and beyond. Chief among them is the inclusion of foundation models, such as ChatGPT, GPT-4, and Llama.

Foundation models

Foundation models are defined as AI models that are trained on broad data at scale, are designed for generality of output, and can be adapted to a wide range of distinctive tasks.

This is a significant development for copyright owners. Providers of generative AI foundation models such as Midjourney, DALL-E, and ChatGPT must now comply with transparency obligations and put safeguards in place against the generation of content that would breach EU law. Providers must also document and make publicly available a detailed summary of the copyright-protected data used for training. These obligations are intended to prevent AI-generated content from being used to deceive people or spread false information.

Before releasing a foundation model, providers must ensure compliance with the requirements of the Act. Throughout development, providers must demonstrate the identification, reduction, and mitigation of risks to

  • health,
  • safety,
  • fundamental rights,
  • the environment,
  • democracy, and
  • the rule of law.

Providers of generative AI should use datasets with proper data governance measures, examining data sources for suitability and possible biases and applying appropriate mitigations. The updated draft of the AI Act also introduces new transparency requirements: providers must give users clear and understandable information about how their data is being used. The proposed regulation is intended to protect fundamental rights and values while promoting innovation and competitiveness in the EU’s AI industry.

In addition, providers must design and develop foundation models with appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity. AI developers and providers should assess their models through methods such as:

  • independent expert involvement,
  • documented analysis, and
  • extensive testing during conceptualization, design, and development.

AI Impact Assessments

The updated draft text also covers AI Impact Assessments. AI Impact Assessments should include a clear outline of

  • the intended purpose,
  • the geographic and temporal scope of the system’s use, and
  • categories of natural persons and groups likely to be affected.

These changes mark a significant step towards the regulation of AI in the EU and, if passed, will have a far-reaching impact on the AI industry. Providers will have to comply with transparency and safety measures and ensure that their foundation models meet the required standards. The AI Act will be the world’s first comprehensive regulatory framework for AI, setting the standard for AI development globally.

The AI Act will now move to the European Parliament and Council for further consideration and potential amendments before becoming law.