The European Parliament and the Council have reached a political agreement on the Artificial Intelligence Act proposed by the European Commission in April 2021. Ursula von der Leyen, President of the European Commission, emphasized the significance of the agreement as the first comprehensive legal framework on AI worldwide.
The new rules will apply directly and uniformly across all Member States, based on a future-proof definition of AI. They follow a risk-based approach:
- Minimal-risk AI: these systems face no obligations under the Act, though companies can voluntarily commit to additional codes of conduct. The vast majority of AI systems fall into this category.
- High-risk AI: these systems must comply with strict requirements, including risk-mitigation systems, logging of activity, and detailed documentation. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Examples include AI used in critical infrastructure, medical devices, and certain law-enforcement systems.
- Unacceptable-risk AI: systems posing a clear threat to fundamental rights will be banned. Examples include AI that manipulates human behaviour, systems enabling 'social scoring', certain biometric and emotion-recognition systems, and some applications of predictive policing.
- Specific-transparency-risk AI: when interacting with AI systems such as chatbots, users must be made aware that they are communicating with a machine.
Companies that fail to comply will face fines ranging from €7.5 million to €35 million, depending on the infringement. National competent market-surveillance authorities will supervise the rules at national level.
The AI Act will become applicable two years after its entry into force, with some provisions taking effect sooner. To bridge the transitional period, the Commission will launch an AI Pact encouraging voluntary compliance with key AI Act obligations.