The success of ChatGPT since its launch on 30 November 2022 has put generative Artificial Intelligence (AI) in the spotlight. These AIs can autonomously generate content: text in the case of ChatGPT, images in the case of DALL-E or Midjourney. Human intervention is limited to giving the AI instructions.
The legal status of this content and its protection under copyright or patent law is under debate. Moreover, these AIs rely on a learning process, known as "machine learning" or "deep learning", which requires the analysis of existing data that are themselves potentially protected. This raises the question of the infringement of third-party rights by AIs.
Under French law, to be original, and therefore protectable by "droit d'auteur", a work must bear the stamp of its author's personality.
The Court of Justice of the European Union (CJEU) also adopts a subjective definition of originality, an original work being "the expression of the author's own intellectual creation" (Infopaq, 16 July 2009, C-5/08). The Court stated in Painer that a work must "reflect the personality of its author, manifesting the latter's free and creative choices" (1 December 2011, C-145/10). This is not the case "where the realisation of an object has been determined by technical considerations, rules or other constraints" (Football Dataco and Others, 1 March 2012, C-604/10).
Since Rousseau, we have known that, philosophically, "freedom consists less in doing one's own will than in not being subject to the will of others". It is this freedom of the author that is enshrined in law.
Yet an AI produces a result according to the instructions it receives and the data it has analysed. This result, conditioned by its programming, is therefore not free and cannot express a personality, which an AI is in any event inherently devoid of. The subjective definition of originality therefore excludes an AI from being the author of a copyrightable work.
However, there is a temptation to set aside this subjective definition and consider only the result obtained by the AI. AI-generated "works" can be as aesthetically accomplished as works created by a human artist, or even more so. For example, in February 2023, an AI-generated image won a photography competition in Australia, deceiving both the participants and the professional jury. The company responsible for this deception claimed that the machine was "now the superior artist to man". Why should these "works", whose aesthetic value is recognised, not be entitled to protection?
There are at least two objections to this reasoning.
First of all, under "droit d'auteur", the merit of a work is not a criterion for protection. An object may be beautiful and yet not protected, and vice versa. The CJEU thus recalled in the Cofemel case that "the fact that a model generates an aesthetic effect does not, in itself, make it possible to determine whether that model constitutes an intellectual creation reflecting the freedom of choice and the personality of its author, and thus satisfying the requirement of originality" (Cofemel, 12 September 2019, C-683/17).
Moreover, recognising AI-generated content as a protectable work would imply a profound change in the objective of "droit d'auteur": to encourage creation by ensuring an income for authors. However, an AI does not need to be encouraged to generate content, nor does it need an income. Copyright would then be used to reward the investments of AI companies or AI users.
The European courts that will be called upon to rule on this issue will probably be influenced by American precedents. The US Copyright Office has twice refused to protect an image generated by an AI. In the first case, Dr Thaler sought to register an image created in 2016 "autonomously by a computer algorithm", an AI he calls the "Creativity Machine". The Copyright Office refused on the grounds that "the requirement of a human author is a long-standing requirement of copyright law". An appeal is pending. More recently, on 21 February 2023, the Copyright Office partially revoked the registration of the graphic novel "Zarya of the Dawn": while the text was written by a human author, the images were generated by the AI Midjourney.
Patent law does not seem to offer any better prospects of protection, even though AIs can also be used to solve technical problems, i.e. to develop inventions.
Although apparently more objective than copyright, patent law is nevertheless built around the postulate that an invention is the result of the work of an individual inventor.
Thus, in the United States, only natural persons may be named as inventors in a patent application; legal persons may only be assignees.
Dr Thaler, who was already behind the first decision of the US Copyright Office, filed applications with several patent offices naming the AI DABUS as the inventor. The US, UK and Taiwanese offices rejected these applications. A patent naming DABUS as inventor was, however, granted in South Africa, and an Australian court initially accepted that an AI could be an inventor before that decision was overturned on appeal.
The European Patent Office (EPO) has refused to grant these patents, but only on the grounds that the inventor, within the meaning of the European Patent Convention, must be a person with legal capacity, which an AI is not. Conferring legal personality on AIs, as suggested by a resolution of the European Parliament of 16 February 2017, would therefore a priori overcome this difficulty, since the EPO does not require the inventor to be a natural person.
Such a development could cause significant collateral legal damage, in particular with regard to the assessment of inventive step, which is currently based on the obviousness of the claimed solution from the point of view of the person skilled in the art. Indeed, if an AI can be an inventor, why could it not also be the "person skilled in the art"? The average skills of this "artificial person skilled in the art" could be so enhanced that access to patent protection would necessarily become more difficult, which would risk penalising research and innovative companies.
AI is based on a learning process that requires the analysis of a great deal of pre-existing data. AIs therefore access content available on the Internet, most often by means of "web scraping". This technique automatically extracts content from websites so that it can be reused in another context. However, this content may be protected by copyright or by the sui generis right of database producers.
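To make the mechanism concrete, the sketch below shows, in Python, what a rudimentary scraper of the kind used to assemble training datasets might look like. It is an illustration only: the URL and the choice of libraries (requests and BeautifulSoup) are assumptions made for the example, not a description of how any particular AI company actually collects data.

import requests
from bs4 import BeautifulSoup

def collect_image_urls(page_url: str) -> list[str]:
    """Download a web page and return the addresses of the images it displays."""
    response = requests.get(page_url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Each <img> tag's "src" attribute points to an image that could be
    # downloaded and added to a training dataset.
    return [img["src"] for img in soup.find_all("img") if img.get("src")]

print(collect_image_urls("https://example.com/gallery"))  # hypothetical address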
If the content subsequently generated by an AI reproduces, in whole or in part, content protected by copyright/"droit d'auteur", infringement will be established under the classic rules of copyright/"droit d'auteur".
However, whether the mere fact of accessing and reproducing protected content in order to train AIs, in particular by web scraping, constitutes infringement is a matter of debate.
In the United States, several infringement actions have been brought on this basis against companies specialising in AI, notably Midjourney. Getty Images, the well-known online image bank, has brought proceedings in the United States and, in the United Kingdom, before the High Court of Justice in London, against Stability AI, which markets Stable Diffusion, an AI that also generates images.
In Europe, Directive (EU) 2019/790 of 17 April 2019 provides a framework for web scraping by authorising text and data mining, including for commercial purposes, provided that the rights holder has not expressed a refusal (opt-out). The directive has been transposed into French law, in particular through the new Article L.122-5-3 of the Intellectual Property Code. The opt-out may be expressed by machine-readable means (in particular metadata) or by a statement in the general terms of use of a website or service.
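As an illustration of what a "machine-readable" opt-out check might look like in practice, the Python sketch below inspects a page for a reservation signal before any mining takes place. The specific signals it looks for (an X-Robots-Tag header or a robots meta tag containing "noai") are assumptions chosen for the example; the Directive does not prescribe any particular technical standard, and actual signalling conventions vary from site to site.

import requests
from bs4 import BeautifulSoup

def tdm_opt_out(page_url: str) -> bool:
    """Return True if the page appears to reserve its rights against text and data mining."""
    response = requests.get(page_url, timeout=10)
    response.raise_for_status()
    # Some sites signal a reservation in an HTTP response header...
    if "noai" in response.headers.get("X-Robots-Tag", "").lower():
        return True
    # ...others embed it in a <meta name="robots"> tag in the page itself.
    soup = BeautifulSoup(response.text, "html.parser")
    for meta in soup.find_all("meta", attrs={"name": "robots"}):
        if "noai" in meta.get("content", "").lower():
            return True
    return False

A compliant crawler would simply skip any page for which tdm_opt_out() returns True; the difficulty, as discussed below, is that the burden of putting such signals in place falls on the rights holders.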
While most online content-sharing platforms have put such opt-outs in place, the requirement is cumbersome and complex for authors who share their works on their own sites. There is also the issue of remuneration for authors whose works were used by AIs, without their consent, before any opt-out measures were put in place.
An ethical AI that respects authors' intellectual property rights has yet to emerge. Several companies, notably Adobe, seem to have embarked on this path: they have announced AIs trained exclusively on royalty-free images or on images for which they have acquired the rights. Shutterstock, which offers image generation based on the AI DALL-E, has also indicated that it is creating a fund to compensate the creators whose photos are used for training.
The debates on the draft European regulation on artificial intelligence could be an opportunity to clarify the legal status of content created by AI and, above all, as called for by the League of Professional Authors and many creators, particularly illustrators, to provide a stricter framework for the practice of web scraping.