Proposed Regulation on Artificial Intelligence

The European Commission's proposal for a European regulation on artificial intelligence, published in April 2021, is part of a broader effort by stakeholders worldwide to define a framework for the use of this technology.

One of the major principles introduced by this proposal is a risk-based approach, under which AI systems are classified into four categories according to the level of risk (a simple illustration follows this list):

        • Unacceptable risk (use prohibited in the EU as a threat to human safety, life and rights)
        • High risk (pre-market assessment and lifecycle monitoring)
        • Limited risk (transparency requirements towards users regarding the use of an AI system)
        • Minimal risk (no specific requirements)
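
As a purely illustrative aid (the proposal itself defines no data model, and the tier and obligation labels below merely paraphrase the list above), the four-tier classification could be sketched as a simple lookup:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers mirroring the proposal's classification."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of each tier to the obligations summarised above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["use prohibited in the EU"],
    RiskTier.HIGH: ["pre-market conformity assessment", "lifecycle monitoring"],
    RiskTier.LIMITED: ["transparency requirements towards users"],
    RiskTier.MINIMAL: [],  # no specific requirements
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
# ['pre-market conformity assessment', 'lifecycle monitoring']
```

A real classification would of course depend on the system's intended purpose and the detailed criteria set out in the regulation, not on a hard-coded table.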

The proposal imposes transparency requirements on AI systems that:

        • interact with humans, or
        • are used to detect emotions or to assign people to (social) categories on the basis of biometric data, or
        • generate or manipulate content ("deep fakes", i.e. hyper-realistic image or video manipulation)

The designers of these systems will also have to inform users that they are interacting with an AI system, as well as of the use that will be made of the data collected.
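
To make this obligation concrete, here is a minimal, hypothetical sketch of such a user-facing disclosure; the class, field names and wording are invented for illustration and are not prescribed by the proposal:

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyNotice:
    """Hypothetical disclosure shown to the user before the interaction starts."""
    system_name: str
    data_collected: list[str] = field(default_factory=list)
    data_purposes: list[str] = field(default_factory=list)

    def render(self) -> str:
        # The exact wording is not prescribed by the proposal; this only
        # illustrates the two pieces of information mentioned above.
        return "\n".join([
            f"You are interacting with an AI system ({self.system_name}).",
            "Data collected: " + ", ".join(self.data_collected) + ".",
            "This data will be used for: " + ", ".join(self.data_purposes) + ".",
        ])

notice = TransparencyNotice(
    system_name="customer-support chatbot",
    data_collected=["chat messages"],
    data_purposes=["answering your request", "service improvement"],
)
print(notice.render())
```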

Article 14 of the proposed AI Regulation specifies human oversight measures. The purpose of human oversight is to limit the risks arising from the use of a high-risk AI system. It can be ensured either through measures built into the system itself or through actions performed by the user of the system. The responsibilities assigned as part of human oversight make it possible to limit malfunctioning of the system and to terminate its use if the results deviate from the intended operation.
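
A minimal sketch of what these two forms of oversight might look like in code, assuming a generic predictive system; the class and method names are hypothetical, and the deviation check merely stands in for whatever "intended operation" means for a given system:

```python
class OversightError(RuntimeError):
    """Raised when use of the system must be terminated."""

class OverseenSystem:
    """Wraps a model so that oversight can halt its use.

    Two hypothetical oversight paths are shown: an internal measure
    (an automatic deviation check) and a user action (stop()).
    """

    def __init__(self, model, max_deviation: float):
        self.model = model
        self.max_deviation = max_deviation
        self.stopped = False

    def stop(self) -> None:
        """Action performed by the user of the system: terminate its use."""
        self.stopped = True

    def predict(self, x: float, expected: float) -> float:
        if self.stopped:
            raise OversightError("system use has been terminated")
        result = self.model(x)
        # Internal measure: reject results that deviate from intended operation.
        if abs(result - expected) > self.max_deviation:
            self.stop()
            raise OversightError(f"result {result} deviates from intended operation")
        return result

# Toy usage with a dummy model.
system = OverseenSystem(model=lambda x: 2 * x, max_deviation=1.0)
print(system.predict(3.0, expected=6.0))  # within bounds -> 6.0
system.stop()                             # human operator terminates use
```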