The concept of risk in the proposal for an AI Regulation
The regulatory model selected for artificial intelligence in the EU categorizes risks by level. This blog post discusses the concept of risk as used in the proposal for an AI Act, introduced by a draft Regulation in mid-2021. Generally, a risk can be expressed as a combination of the likelihood of an event and the severity of its consequences. The AI proposal presents a major novelty in this domain: it revolves around the risk of harm to (i) health and (ii) safety, or a risk of adverse impact on (iii) fundamental rights. These three areas of impact (or ‘harm absorption’) do not necessarily converge, since they are grounded in distinct scientific fields. Drawing on the theory of risk (regulation), together with modern risk assessment methods and tools in an algorithmic environment, the emphasis here is on the risk-based approach in the context of AI, its origin as well as its rationale.
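The generic definition above (risk as a combination of likelihood and severity) can be sketched as a classic risk matrix. This is only an illustrative sketch: the scales, the multiplicative combination, and the tier thresholds below are conventional risk-assessment assumptions, not categories or thresholds taken from the draft Regulation.

```python
from enum import IntEnum


class Likelihood(IntEnum):
    """Illustrative 3-point likelihood scale (not from the AI Act)."""
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3


class Severity(IntEnum):
    """Illustrative 3-point severity scale (not from the AI Act)."""
    MINOR = 1
    SERIOUS = 2
    CRITICAL = 3


def risk_score(likelihood: Likelihood, severity: Severity) -> int:
    """Combine likelihood and severity into a single score.

    Multiplication is one common convention; other combinations exist.
    """
    return int(likelihood) * int(severity)


def risk_tier(score: int) -> str:
    """Map a score onto tiers; the cut-off values are hypothetical."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "limited"
    return "minimal"
```

For example, an event that is likely and critical would land in the "high" tier under these hypothetical thresholds, whereas a rare, minor one would be "minimal". The point of the sketch is only that a single risk level is derived from two independent inputs, which is the structure the proposal's risk categories presuppose.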
It has been argued that the manner in which the term ‘risk’ is employed in AI regulation differs qualitatively from how risks are encapsulated in the regulatory model of the GDPR. On the one hand, the data protection framework aims to create a level playing field for better compliance with data protection principles and firmer accountability on the part of controllers, in which the identification of risks is followed by an assessment and an attempt to mitigate them. On the other hand, the AI Act aspires, firstly, to set thresholds for ranking the risks (and thus, the technologies) stemming from AI applications; secondly, to prohibit or allow them; and, thirdly, to regulate AI technologies in a manner commensurate with the risk category to which they belong, introducing ‘principle-based requirements’.
Understanding what risks denote in the AI proposal is of pragmatic importance in the area of compliance. The concept of risk is crucial to the establishment and operation of a risk management system (for high-risk AI applications), to the setting up of an inventory of appropriate risk management measures, and to the requirement of human oversight, which aims at preventing or minimizing the risks to health, safety or fundamental rights. Although the ranking of AI applications (by risk level) is predefined by the draft Regulation, what is currently unclear is the way risks are to be identified, analysed, estimated, evaluated, documented and mitigated (current Article 9 of the proposal). Such risk treatment, in the context of AI, is inextricably linked to the concept of risk itself and is expected to be better understood through the examination of its modalities.
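The Article 9 sequence (identify, analyse, estimate, evaluate, document, mitigate) can be pictured as an iterative register of risks maintained over a system's lifecycle. The sketch below is a loose data-structure illustration of that sequence under stated assumptions: the field names, the `RiskRegister` class and its methods are hypothetical and do not reproduce any structure prescribed by the proposal.

```python
from dataclasses import dataclass, field, asdict


@dataclass
class Risk:
    """One entry in a hypothetical risk register."""
    description: str
    area: str                      # "health", "safety" or "fundamental rights"
    estimated_level: str = "unassessed"
    mitigations: list = field(default_factory=list)


@dataclass
class RiskRegister:
    """Illustrative walk through the Article 9 steps; not the Act's schema."""
    entries: list = field(default_factory=list)

    def identify(self, description: str, area: str) -> Risk:
        # Step 1: record a newly identified risk.
        risk = Risk(description, area)
        self.entries.append(risk)
        return risk

    def evaluate(self, risk: Risk, level: str) -> None:
        # Steps 2-4: analyse the risk and estimate/evaluate its level.
        risk.estimated_level = level

    def mitigate(self, risk: Risk, measure: str) -> None:
        # Step 6: attach a mitigation measure to the risk.
        risk.mitigations.append(measure)

    def document(self) -> list:
        # Step 5: export the register as plain records for documentation.
        return [asdict(r) for r in self.entries]
```

A usage round-trip might identify a risk, evaluate it as "high", attach a human-oversight measure, and export the documented register. The design point the sketch makes is that risk treatment is a repeated cycle over a living inventory, not a one-off classification.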
The HR-Recycler project is directly affected by the modifications the proposed Regulation would bring to the legal framework for artificial intelligence. Although the impact assessment carried out and monitored in this project concerns the fundamental rights to data protection and privacy, as well as ethics, it cannot be excluded that compliance obligations specific to artificial intelligence will apply on top of the existing ones. Risks to health and safety would then need to be identified, systematized, assessed and, where necessary, mitigated; at present, such risks fall outside the impact assessment’s scope.
However, the regulation of artificial intelligence is an ongoing process and no binding instrument exists to date. The AI Act is not expected to be in force before 2024-2025, when additional legal obligations will be introduced to cover aspects of artificial intelligence that, at present, are either voluntarily regulated or developed under ethical guidelines for trustworthy applications.
Nikolaos Ioannidis
Vrije Universiteit Brussel