New EU legal framework for AI sets standards for trust

Artificial intelligence – Image by Alejandro Zorrilal Cruz

(BRUSSELS) – The EU Commission proposed Wednesday a first-ever legal framework on artificial intelligence, aiming to guarantee safety and fundamental rights while strengthening AI uptake, investment and innovation.

At the same time, new rules on machinery are set to complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of products.

“On Artificial Intelligence, trust is a must, not a nice to have,” said EC Vice-President Margrethe Vestager. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”

The new AI regulation will make sure that Europeans can trust what AI has to offer, says the EU executive. Proportionate and flexible rules will address the specific risks posed by AI systems and set the highest standard worldwide. The Coordinated Plan outlines the necessary policy changes and investment at Member States level to strengthen Europe’s leading position in the development of human-centric, sustainable, secure, inclusive and trustworthy AI.

The new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI. They follow a risk-based approach:

Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. voice-assisted toys that encourage dangerous behaviour in minors) and systems that allow ‘social scoring’ by governments.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, which may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.
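The four risk tiers described above can be summarised in a short sketch. This is purely illustrative and not part of the proposal; the tier names, example categorisations and function names below are hypothetical.

```python
# Illustrative sketch only (not from the proposal): the four risk tiers
# described in the article, modelled as a simple lookup. All names and
# example categorisations are hypothetical.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "free use, no intervention"


# Hypothetical mapping of example applications to tiers,
# following the examples given in the article.
EXAMPLE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "exam scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def tier_for(application: str) -> RiskTier:
    # Default to MINIMAL, mirroring the article's note that the vast
    # majority of AI systems fall into the minimal-risk category.
    return EXAMPLE_TIERS.get(application, RiskTier.MINIMAL)


print(tier_for("chatbot").value)     # transparency obligations
print(tier_for("video game").value)  # free use, no intervention
```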

In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation.

Machinery products cover an extensive range of consumer and professional products, from robots and lawnmowers to 3D printers, construction machines and industrial production lines. The Machinery Directive, now replaced by the new Machinery Regulation, defined health and safety requirements for machinery. The new Machinery Regulation will ensure that the new generation of machinery guarantees the safety of users and consumers, and encourages innovation. While the AI Regulation will address the safety risks of AI systems, the new Machinery Regulation will ensure the safe integration of the AI system into the overall machinery. Businesses will need to perform only one single conformity assessment.

The European Parliament and the Member States now need to adopt the Commission’s proposals on a European approach for Artificial Intelligence and on Machinery Products in the ordinary legislative procedure. Once adopted, the Regulations will be directly applicable across the EU. In parallel, the Commission will continue to collaborate with Member States to implement the actions announced in the Coordinated Plan.

New rules for Artificial Intelligence – Questions and Answers

New rules for Artificial Intelligence – Facts page

Communication on Fostering a European approach to Artificial Intelligence

Regulation on a European approach for Artificial Intelligence

New Coordinated Plan on Artificial Intelligence

Regulation on Machinery Products

EU-funded AI projects
