
The AI Act requires providers of AI systems to follow a set of ethical principles instead of strict rules. This means that they must keep broad principles in mind when developing AI systems. The goal is to create a flexible and adaptable framework for the development of AI that is safe and responsible. This approach also allows for innovation while setting clear ethical standards. The AI Act mandates providers of AI systems to conduct impact assessments to ensure compliance with these principles and to identify and reduce any associated risks.

The six AI principles:

The AI Act has six principles that providers of AI systems should follow:

1. Human agency and oversight:

AI systems should assist humans in decision-making, and humans should be able to override decisions made by the system.

2. Technical robustness and safety:

Providers and developers of AI systems should design AI systems to work well, behave predictably, and be safe to use. AI providers should ensure that their systems comply with quality management requirements.

3. Privacy and data governance:

AI system providers and developers should design AI systems with data privacy and data protection in mind. The datasets used to train AI systems should be properly governed.

4. Transparency:

AI systems should be transparent. AI providers should provide clear information about the system’s capabilities and limitations, as well as the data sources used to train it.

5. Diversity, non-discrimination, and fairness:

Developers and providers of AI systems should create AI systems that avoid discrimination and bias and promote diversity. Providers should carefully examine data sources for bias and take appropriate measures to mitigate any biases they find.

6. Social and environmental well-being:

Creators of AI systems should design AI systems to contribute to sustainable and inclusive growth, social progress, and environmental well-being. Providers should consider the potential impact of AI systems on society and the environment.


These principles are designed to ensure that AI systems are developed and used in a responsible and safe manner. By following them, AI providers can help build public trust in AI and contribute to a more inclusive and sustainable society. Providers and developers can work toward this in a number of ways, such as by conducting a Privacy Impact Assessment or by following these five foolproof steps to building a lawful generative AI model.