European Union (EU) challenges on artificial intelligence (AI) impact assessment


Nikolaos Ioannidis[1]

Vrije Universiteit Brussel (VUB)


The European Commission’s (EC) Proposal for a Regulation laying down harmonized rules on artificial intelligence (AI Act) has drawn extensive attention as the ‘first ever legal framework on AI’, resuming last year’s discussions on the AI White Paper. The aim of the Proposal and the mechanisms it encompasses is to develop an ecosystem of trust through the establishment of a human-centric legal framework for trustworthy AI.

The ecosystem aims to establish a framework for legally and ethically trustworthy AI: promoting socially valuable AI development; ensuring respect for fundamental rights, the rule of law and democracy; allocating and distributing responsibility for wrongs and harms; and ensuring meaningful transparency and accountability. Any AI application, including the HR-Recycler project and its associated technology, is expected to be subject to specific legal requirements set by this framework.

For individuals to trust that AI-based products are developed and used in a safe and compliant manner, and for businesses to embrace and invest in such technologies, a series of novelties have been introduced in the proposed Act. Those novelties include, but are not limited to, i) the ranking of AI systems according to the level of risk they generate (unacceptable, high, limited and minimal), and ii) the legal requirements for high-risk AI systems.

Risk-based approach

AI applications are categorized based on the estimated risk that they may generate. Accordingly, there are four levels of risk: unacceptable, high, limited and minimal. AI applications of unacceptable risk – and thus prohibited – include, for instance, those which deploy subliminal techniques beyond a person’s consciousness in order to materially distort that person’s behaviour, causing physical or psychological harm, as well as systems whose function is the evaluation or classification of the trustworthiness of natural persons based on social behaviour or known or predicted personal or personality characteristics, leading to detrimental or unfavourable treatment, inter alia.

High-risk AI applications are the most problematic category because, while permitted, they may only be deployed under specific conditions. This category covers applications pertaining to: biometric identification and categorization of natural persons; management and operation of critical infrastructure; education and vocational training; employment, workers management and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes. The research of HR-Recycler could fall within the scope of the safety-critical domain, in which safety components and machinery are used. For these, the major subsequent obligation is the conformity assessment procedure.

Conformity assessment

The conformity assessment procedure for high-risk AI applications is tied to several legal requirements, which can be summarized as follows:

  • introduction of a risk management system (a step forward compared to the data protection impact assessment (DPIA) process);
  • setting up a data governance framework (training, validation and testing data sets);
  • keeping technical documentation (ex-ante and continuous);
  • ensuring record keeping (automatic recording of events – ‘logs’);
  • enabling transparency and provision of information (interpretation of the system’s output);
  • ensuring human oversight (effective oversight by natural persons); and
  • guaranteeing accuracy, robustness and cybersecurity (consistent performance throughout the AI lifecycle).
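Purely by way of illustration, the risk tiers and the requirement areas for high-risk systems described above can be sketched as simple data structures. This is a hypothetical Python sketch for readers approaching the proposal from an engineering perspective; the tier and requirement names come from the proposal, but the code itself (names, structure, behaviour) is not part of any official framework:

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers named in the AI Act proposal (illustrative model)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "permitted subject to conformity assessment"
    LIMITED = "subject to transparency obligations"
    MINIMAL = "no specific obligations"


# The seven requirement areas the proposal attaches to high-risk systems.
HIGH_RISK_REQUIREMENTS = [
    "risk management system",
    "data governance",
    "technical documentation",
    "record keeping (logs)",
    "transparency and provision of information",
    "human oversight",
    "accuracy, robustness and cybersecurity",
]


def obligations(tier: RiskTier) -> list[str]:
    """Return the requirement areas applying at a given tier (simplified)."""
    if tier is RiskTier.UNACCEPTABLE:
        # Unacceptable-risk practices may not be deployed at all.
        raise ValueError("prohibited practice: deployment not permitted")
    return list(HIGH_RISK_REQUIREMENTS) if tier is RiskTier.HIGH else []


print(len(obligations(RiskTier.HIGH)))  # 7 requirement areas
```

The sketch deliberately oversimplifies: as the next section notes, the actual categorization is contested and far from a mechanical lookup.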

Challenges and questionable areas

However, it is still not clear to what extent, and how, the ex-ante conformity assessment should be carried out, which brings novel challenges to the fore; the proposal itself generates multiple points of debate as to its applicability and the interpretation of its provisions, some of which are listed below:

  • Definition of AI (incredibly broad, comprising virtually all computational techniques)
  • Complex accountability framework (introduction of new stakeholders and roles, cf. GDPR)
  • Legal requirements (clarification on data governance, transparency and human oversight)
  • Risk categorization (fuzzy and simplistic, lack of guidance on necessity and proportionality)
  • Types of harms protected (excluded financial, economic, cultural, societal harms etc.)
  • Manipulative or subliminal AI (being an evolutive notion)
  • Biometric categorization and emotion recognition systems (disputed and debatable impact)
  • Self-assessment regime (lack of legal certainty and effective enforcement)
  • Outsourcing the discourse on fundamental rights (large discretionary power for private actors)
  • Independence of private actors (in need of strengthened ex-ante controls)
  • Lack of established methodology (a new kind of impact assessment, cf. DPIA)
  • Checklist attitude (binary responses to questionnaires)
  • Technocratic approach towards fundamental rights (against their spirit)
  • Standards-setting (role of incumbent organizations such as CEN / CENELEC)
  • Relation with other laws (interplay with the GDPR and LED?)
  • Multidisciplinarity (miscommunication among internal stakeholders)
  • External stakeholder participation (insufficient engagement)
  • Societal acceptance of AI applications (scepticism, mistrust, disbelief)

Given that the AI Act proposal is expected to go through a long negotiation phase, in which the European Commission, the European Parliament and the Council will articulate their opinions and reservations, along with numerous feedback submissions from research and policy actors, the final text may differ from the current formulation, hopefully addressing and clarifying the majority of the legal gaps identified so far.

[1] E-mail: Nikolaos.Ioannidis@vub.be.