EU Parliament adopts landmark AI law


(BRUSSELS) – The European Parliament approved Wednesday the world’s first horizontal legislation on artificial intelligence to ensure safety and compliance with fundamental rights, and to boost innovation.

The stated aim of the Artificial Intelligence Act is to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI. It also looks to boost innovation and establish Europe as a leader in the field.

Parliament co-rapporteur Dragos Tudorache MEP welcomed the vote, saying it “linked the concept of artificial intelligence to the fundamental values that form the basis of our societies.”

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search for a missing person or preventing a terrorist attack. Using such systems post-facto (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.

Clear obligations are foreseen for other high-risk AI systems, due to the significant potential harm they pose to health, safety, fundamental rights, the environment, democracy and the rule of law. Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

General-purpose AI (GPAI) systems, and the GPAI models they are based on, need to meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

To support innovation and smaller businesses, regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

The adoption of the AI Act was welcomed by the European Consumer Organisation: “By adopting the world’s first horizontal legislation on artificial intelligence, the EU is rightfully making clear that a technology as powerful as AI, for all of its benefits, needs guardrails,” said BEUC’s Deputy Director General Ursula Pachl. “Consumers will be protected from some demeaning practices like social scoring and will be able to join collective redress claims if they have been harmed by the same AI system.”

The regulation, which will now be subject to a final lawyer-linguist check, is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.

The Artificial Intelligence Act will enter into force twenty days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practice (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months).

Further information: European Parliament

Link to adopted text (13.03.2024)

Procedure file

