The regulatory framework is expected to come into force in 2026 and provides for fines of up to €35 million
The European Parliament approved the Artificial Intelligence Act, the regulatory framework aimed at managing artificial intelligence, with 523 votes in favor, 46 against, and 49 abstentions. In Strasbourg, experts spoke of a “historic day,” as this is the world’s first attempt to regulate the use of artificial intelligence and could set a standard at a global level.
The overall objective is to guarantee the protection of fundamental rights, from democracy to the rule of law, while allowing the development and deployment of state-of-the-art technologies. Publication in the Official Journal is expected by June 2024, and the regulation should come into effect in 2026.
The AI Act provides for four levels of risk: unacceptable, high, limited, and minimal. Applications posing an “unacceptable risk,” such as social scoring systems, predictive policing tools, and emotion recognition systems in the workplace or in schools, will be banned, as will systems that exploit vulnerable human characteristics, such as age or disability. Biometric recognition in public places is also banned, except for uses authorized by judicial authorities to find missing persons or prevent terrorist attacks.
“High-risk” systems require that a risk assessment be conducted, that human oversight be provided as a safeguard, and that their programming be “transparent.” Transparency in the functioning of algorithms is also required of “limited-risk” AI. Finally, “minimal-risk” systems face no constraints.
Generative artificial intelligence systems, such as the well-known ChatGPT, fall within the high-risk category, and the content they generate must be clearly recognizable as such: there must be an indicator that a given piece of text, video, or image was created with the help of artificial intelligence. Fines for companies that violate these rules can reach 35 million euros or 7% of turnover.