Publication of two sets of guidelines in connection with the AI Act
On 4 and 6 February 2025, the European Commission published two sets of guidelines in connection with Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (the “AI Act”)[1]:
- the European Commission’s guidelines on the definition of artificial intelligence systems within the meaning of the AI Act (the “AI Systems Guidelines”)[2] (1.);
- the European Commission’s guidelines on prohibited AI practices (the “Prohibited Practices Guidelines”)[3] (2.).
These two publications are not binding, as the interpretation of the AI Act is formally reserved to the Court of Justice of the European Union.[4] However, they provide important guidance that will be taken into account by national authorities when implementing the AI Act.
At the time of writing, the AI Systems Guidelines and the Prohibited Practices Guidelines, prepared in accordance with Article 96 of the AI Act, have been approved by the European Commission but have not yet been formally adopted.
1. The AI Systems Guidelines
The AI Systems Guidelines are intended to help providers and other interested parties determine whether a solution constitutes an AI system, as defined in the AI Act (an “AI System”).[5] In doing so, they clarify the parameters to be taken into account when determining the scope of application of the AI Act.
The AI Systems Guidelines define and specify the seven criteria that must be taken into account to characterise an AI System:
- a machine-based system;
- a system designed to operate with varying levels of autonomy;
- a system that may (but does not have to) exhibit adaptiveness after deployment;
- a system with explicit or implicit objectives;
- a system that infers, from the inputs it receives, how to generate outputs;
- a system that generates predictions, content, recommendations or decisions;
- a system that can influence physical or virtual environments.
Assessing the combination of these criteria involves certain complexities.
- While the lifecycle of an AI System includes both its pre-deployment (building) and post-deployment (use) phases,[6] the European Commission indicates that a system may qualify as an AI System even when the seven elements mentioned above are not continuously present throughout its lifecycle.[7]
- The European Commission insists on the need for an analysis on a case-by-case basis, depending on the precise features of a system.[8]
- It is also clear from the AI Systems Guidelines that the seven criteria are not equal. While a system can be defined as an AI System in the absence of any capacity for adaptation (criterion no. 3), the inference capacity (criterion no. 5) is “a key, indispensable condition that distinguishes AI systems from other types of systems.”[9]
The AI Systems Guidelines also provide guidance on systems that may fall outside the definition of an AI System because of their limited inference capacity.[10] These include systems used to improve mathematical optimisation methods, such as linear or logistic regression.[11] This clarification responds to calls raised notably by the financial industry, whose service providers rely heavily on such systems, particularly in the insurance sector. However, given the drafting of the AI Systems Guidelines, certain stakeholders question the exact scope of this exclusion, which may limit the comfort it effectively provides.
The AI Systems Guidelines thus provide welcome clarification on the assessment of the criteria defining AI Systems. Questions nevertheless remain about the interpretation of some of these criteria, in particular the inference capacity. In this context, stakeholders should anticipate these definitional issues and, where appropriate, engage with the European and national authorities to develop concrete answers to the questions raised by the AI Act.
2. The Prohibited Practices Guidelines
The provisions of Article 5 of the AI Act, prohibiting certain AI practices, became applicable on 2 February 2025. Failure to comply with these prohibitions may result in an administrative fine of up to EUR 35 million or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.[12] The clarifications provided by the Prohibited Practices Guidelines on 4 February were therefore particularly awaited.
Article 5 of the AI Act covers, as prohibited practices, the “placing on the market”,[13] the “putting into service”[14] and, more generally, the use of certain AI Systems in the European Union. According to the Prohibited Practices Guidelines, this notion of use should be understood as “the use or deployment of the system at any moment of its lifecycle after having been placed on the market or put into service.”[15]
The European Commission has chosen to focus its Prohibited Practices Guidelines on providers and deployers.[16] The provider is defined as the person placing an AI System on the market or putting it into service “under its own authority”, while the deployer uses an AI System “under its own authority”.
However, Article 5 of the AI Act does not explicitly refer to either providers or deployers. Thus, its provisions could be understood as applying more broadly to any person “placing on the market” or “putting into service” an AI System, regardless of whether that person is acting under its own authority.
The Prohibited Practices Guidelines detail the nature of the eight sets of practices prohibited by Article 5 of the AI Act (with concrete examples of use cases from a wide range of sectors and industries).
They also provide instructive practical insights. For example, providers are required not to place on the market or put into service AI Systems, in particular general-purpose AI Systems, that are “reasonably likely to behave or be directly used in a manner prohibited by Article 5 AI Act.”[17] These providers are also expected to take “effective and verifiable measures to build in safeguards and prevent and mitigate such harmful behaviour and misuse to the extent they are reasonably foreseeable and the measures are feasible and proportionate depending on the specific AI system and circumstances of the case.”[18]
In their contractual relationships with deployers (including general terms and conditions of use), providers are expected to (i) exclude the use of their AI System for prohibited practices and (ii) provide appropriate information on the use of their AI System and the necessary human supervision.[19]
Deployers, for their part, are required not to use any AI System in a manner prohibited by Article 5 of the AI Act, in particular not to circumvent the safeguards put in place by the providers of the AI System.[20]
The Prohibited Practices Guidelines provide useful operational guidance to help stakeholders better understand the prohibitions with which they have been required to comply since February 2025. While questions remain as to the impact of the prohibitions set out in Article 5 of the AI Act, this publication clarifies some of the expectations of the European institutions. The interpretation of French and European regulators has yet to be confirmed.
***
These two sets of guidelines are therefore complementary. They provide essential details for understanding the scope of application of the AI Act and certain practices that it now prohibits. Stakeholders need to incorporate them into their efforts to comply with the AI Act and, where necessary, identify any questions that remain unanswered.