- Blockchain Council
- September 02, 2024
EU Council Approves New Risk-Based Regulations for AI
The Council of the European Union has approved the AI Act, a set of rules to standardize artificial intelligence (AI) across the European Union (EU). Proposed in April 2021, the law categorizes AI systems by risk level and enforces strict rules for high-risk systems. It bans practices such as cognitive behavioral manipulation and social scoring to protect societal welfare.
The AI Act affects both the private and public sectors in the EU but does not apply to AI systems used exclusively for military, defense, or research purposes. It focuses on transparency, protection of fundamental rights, and accountability, aiming to foster innovation and economic growth in Europe.
AI systems are classified into different risk categories under the AI Act. Low-risk systems have minimal transparency requirements, while high-risk systems face strict rules to access the EU market. Practices considered too risky, such as cognitive behavioral manipulation, social scoring, predictive policing based on profiling, and using biometric data to classify individuals by race, religion, or sexual orientation, are banned.
The law also addresses General-Purpose AI models (GPAIs), used in sectors like healthcare, finance, transportation, and entertainment. GPAIs without systemic risks have fewer requirements, while those with systemic risks face stricter regulations for transparency and accountability.
The AI Act sets up several bodies for effective enforcement. An AI Office within the European Commission will oversee common rules, supported by a scientific panel of independent experts. An AI Board, with representatives from member states, will advise on applying the AI Act. Additionally, an advisory forum will offer technical expertise to the AI Board and the Commission.
Non-compliance with the AI Act can lead to significant fines, calculated as a percentage of a company’s global annual turnover or as a fixed amount, whichever is higher. For small and medium-sized enterprises (SMEs) and startups, administrative fines are capped in proportion to their size and economic viability.
The regulation requires greater transparency in the development and use of high-risk AI systems. Providers and certain users of high-risk systems must register in an EU database, and users of emotion recognition systems must inform individuals when they are exposed to such systems. Public service entities must assess the impact of high-risk AI systems on fundamental rights before deployment.
To support innovation, the AI Act creates a legal framework encouraging evidence-based regulatory learning. It establishes AI regulatory sandboxes for developing, testing, and validating innovative AI systems, including real-world testing, to stimulate investment and innovation in AI within Europe.
Following the Council’s approval, the AI Act will go through further steps, including signature by the Presidents of the European Parliament and of the Council, before being published in the EU’s Official Journal. The law will apply in full two years after it comes into force, with some provisions, such as the bans on prohibited practices, taking effect sooner.
Belgian Secretary of State for Digitization Mathieu Michel emphasized the importance of the AI Act, stating that it addresses technological challenges while fostering innovation. The law’s approval signals Europe’s commitment to setting a global standard for AI regulation, ensuring safe and lawful AI development that respects fundamental rights.