8 April 2020
On 19 February 2020, the European Commission published a White Paper proposing a new framework for artificial intelligence (AI) in the European Union. The White Paper seeks to strike a balance between supporting the development of AI and regulating the risks it generates. To this end, it considers various reforms, either amending existing texts or laying down new requirements. It recognises that healthcare is one of the sectors in which this balance is most delicate to strike, given the stakes involved. The crisis generated by the spread of Covid-19 affected Europe shortly after the White Paper's publication. The uses of AI that this crisis has prompted could lead to a re-examination of the balance put forward by the European Commission. It is therefore essential for AI stakeholders to contribute to the discussions initiated by this White Paper, to ensure that the European regulatory framework defined for AI remains relevant, including in such exceptional periods as the one we are currently experiencing.
The crisis generated by the spread of Covid-19 is giving rise to numerous initiatives to find solutions. Some of them are being developed using artificial intelligence (AI) tools.
These projects, and the challenges highlighted by the current crisis (1.), can be analysed in light of the European Commission's White Paper on AI published in February 2020 (2.). This White Paper was published a few weeks before the massive spread of Covid-19 in the European Union and the exceptional measures it prompted. It attempts to define a new balance between supporting the development of AI in the European Union and regulating the risks it generates.
One can wonder about the lessons of the current crisis, and the need to review the balance set forth by the European Commission to take into account the experience of recent weeks concerning the use of AI within the health sector (3.).
AI is one of the tools that both public and private players are calling on to combat the spread of the virus. Its applications cover a broad spectrum of activities, ranging from assisting in the development of a vaccine to aiding diagnosis, managing patient flows and organising hospital care.
To contribute to the rapid development of a potential vaccine, DeepMind, owned by Google, is offering its AI tool developed for 3D protein modelling to predict the protein structure of the virus[1]. The results are to be made freely accessible to feed ongoing research on the subject.
To help identify affected individuals, Microsoft has developed an automated application to provide a preliminary diagnosis of patients based on the symptoms they report[2]. This application is offered in conjunction with the American Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO). It can help relieve congestion in certain medical services and better guide patients in their healthcare.
Beyond projects in the medical field, the use of AI is also under consideration to manage the measures taken to contain the crisis. For instance, it is used by companies such as YouTube to moderate videos published on its platform. The US company has announced that it is making greater use of AI to filter published videos according to their content, in order to compensate for the reduction in the number of staff usually in charge of this task[3].
The crisis has thus widely illustrated the solutions that AI is trying to bring to the healthcare sector, especially in the extreme circumstances we are currently experiencing. It comes at a time when European institutions are considering the possible reforms needed to provide the European Union with a "framework for trustworthy artificial intelligence, based on excellence and trust[4]".
On 19 February 2020, the European Commission published a White Paper on AI[5]. While the European Commission certainly considers that the European framework already covers many AI issues (particularly in terms of personal data protection or of certain specific sectors, such as healthcare), it also states that certain reforms may be necessary to address all the risks that AI can generate, and to provide effective responses.
The White Paper thus pursues "the dual objective of promoting the use of AI and taking account of the risks associated with certain uses of this new technology[6]". The Commission proposes a balance between these two issues and to this end presents policy options in terms of regulation.
In this respect, the European Commission proposes an approach focused on the risks presented by AI. According to the Commission, AI could be considered high-risk depending on (i) the sector in which it is applied (and the likelihood of related risks) and (ii) the intended use of the tool (depending in particular on the consequences of the AI for potentially affected parties).
Above and beyond the content of potential future regulatory obligations, the European Commission is considering which players should be subject to them, both in terms of their role and the geographical scope of their activities[7].
This approach could lead to the reform of all or part of the regulatory texts applicable to AI, particularly in the area of "protection of personal data and the right to privacy"[8].
The White Paper identifies the healthcare sector as a strategic sector for the application of AI. For example, the Commission highlights the significant benefits that AI could bring, such as "more accurate diagnosis or better disease prevention[9]". In view of the risks also likely to be generated by its applications in this field, AI used for healthcare is at the heart of the European Commission's considerations.
The European Commission invites both public and private stakeholders to react to the proposals made, to contribute to the reflection, and to draw on the lessons that can be learnt from Covid-19.
The context of the crisis could indeed feed into the reflections initiated by the European Commission, in particular on the classification of AI applications according to the level of risk they present (3.1.), the content of the obligations being considered for high-risk AI applications (3.2.) and the scope of the proposed requirements (3.3.).
As indicated, the European Commission proposes to apply an enhanced regulatory framework for AI applications considered to be high risk given the sector in which they are developed and the use made of them.
For the health sector, the White Paper considers that it is one of the sectors "where, given the characteristics of the activities typically undertaken, significant risks [related to the use of AI] can be expected to occur[10]". However, it also stresses that not all applications of AI in this sector are likely to be considered high risk. The use made of the tool may thus mean that it is not considered high-risk, despite the sector in which it is deployed. The European Commission takes the example of "the appointment scheduling system in a hospital[11]", which, in its opinion, would not justify reform.
The Covid-19 crisis leads us to question the relevance of such categorisation criteria. In the context of a public health crisis, applications considered by the European Commission as low-risk could take on new importance. This could justify a review of the proposed classification.
Up until now, AI tools used to manage the allocation of hospital beds could be regarded as low risk with regard to the consequences for the parties concerned. In the current context, however, such applications present major challenges. This could justify subjecting these tools to enhanced regulatory requirements. In particular, the biases that could affect their operation could justify, in the current context, heightened caution. Similarly, the explainability of the underlying algorithms, and the transparency provided in this respect, could be decisive in terms of public acceptance of the system.
This example illustrates the difficulties in implementing the classification proposed by the European Commission, and the criteria on which it is based, depending on the context in which this assessment is made.
As indicated above, where AI is considered high risk, the European Commission proposes a set of measures to strengthen the regulatory framework. These measures are designed to ensure better control and greater transparency in order to protect European citizens and their fundamental rights.
However, here again, the spread of Covid-19 may call into question the relevance of the European Commission's proposals in times of crisis. For example, for high-risk AI, the Commission considers that human intervention is a determining factor in the tools' operation to guarantee "trustworthy, ethical and human-centric AI"[12]. The Commission recognises that this human involvement may vary from case to case and should be defined in relation to "the intended use of the systems and the effects that the use could have for affected citizens and legal entities[13]".
At this stage, this proposal does not provide that this assessment may vary according to the context. However, the Covid-19 crisis raises the question of whether human intervention and control over an AI tool can be reduced, given the context, and despite the importance of the fundamental rights likely to be affected. The YouTube example cited above provides an illustration. The operation of moderation tools based on AI can affect fundamental rights such as freedom of expression. Human control over the AI tool has been reduced by the company given the context and the risks for its teams. In light of this example, one can thus question the relevance of this proposal from the European Commission in the current context.
In its White Paper, the European Commission questions the scope that European regulation on AI should have. This reflection concerns both the nature of the actors to which it would apply (the developer of the AI tool, or the user, etc.) and its geographical scope. In this respect, the European Commission considers that the regulation should apply to the "actor(s) who is (are) best placed to address any potential risks[14]" related to the AI tool. In geographical terms, it states that the European regulation should apply to all players "providing AI-enabled products or services in [the European Union], regardless of whether they are established in [the European Union][15]".
The Covid-19 crisis has led to numerous international coordination initiatives to capitalise on AI tools. For example, players such as the Chinese company Tencent are offering to make their AI tools available to support initiatives aimed at responding to the spread of the virus[16]. Their solutions can thus be used by players around the world. The power of the solutions proposed by these non-European companies could therefore be used by European players to manage the current health crisis. Could they, and how should they, be subject to the European framework considered by the European Commission?
The Commission's White Paper attempts to strike a balance between supporting the development of AI and regulating the risks it generates. To this end, it considers various reforms that either amend existing texts or enact new requirements. It recognises that the healthcare sector is one of those in which this balance is most delicate to define, given the issues at stake. The crisis generated by the spread of Covid-19 affected Europe shortly after the White Paper's publication. The uses of AI that the crisis may have generated could lead to a re-examination of the balance proposed by the European Commission. It is therefore essential for AI stakeholders to contribute to the discussions initiated by this White Paper, to ensure that the European regulatory framework defined for AI remains relevant, including in such exceptional periods as the one we are going through.
______
[1] DeepMind, press release of 5 March 2020.
[2] Microsoft, press release of 24 March 2020.
[3] YouTube, press release of 16 March 2020.
[4] EU Commission press release, 19 February 2020.
[5] White Paper on Artificial Intelligence - A European approach to excellence and trust, 19 February 2020. This White Paper is complemented by other publications, including (i) a Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, 19 February 2020; and (ii) a Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on a European strategy for data, 19 February 2020.
[6] EU Commission White Paper, ibidem, p. 1.
[7] EU Commission White Paper, ibidem, p. 17.
[8] EU Commission White Paper, ibidem, p. 15.
[9] EU Commission White Paper, ibidem, p. 1.
[10] EU Commission White Paper, ibidem, p. 17.
[11] EU Commission White Paper, ibidem, p. 17.
[12] EU Commission White Paper, ibidem, p. 22.
[13] EU Commission White Paper, ibidem, p. 22.
[14] EU Commission White Paper, ibidem, p. 22.
[15] EU Commission White Paper, ibidem, p. 22.
[16] Tencent, press release of 11 February 2020.
This legal update is not intended to be and should not be construed as providing legal advice. The addressee is solely liable for any use of the information contained herein and the Law Firm shall not be held responsible for any damages, direct, indirect or otherwise, arising from the use of the information by the addressee.