Generative AI has been in the public consciousness for only a relatively short time, but the new crop of tools using the technology has already started to rock the status quo. Take Stability AI, for example. It uses generative AI to create music and images, and its text-to-image AI model, “Stable Diffusion”, generates synthetic images in response to word prompts entered by users. It is this model in particular that has led the creative industry stalwart Getty Images to bring English proceedings against Stability AI, claiming various intellectual property infringements. Meanwhile, in a bid to keep up, Getty Images has just announced its own AI image-generation tool.
On what basis are the AI claims made?
Getty Images claims that the Stable Diffusion models were trained by (mis)using more than 12 million of its copyrighted images and the accompanying caption data. On that basis, Getty Images alleges copyright, database right and trade mark infringement.
Aside from damages, Getty Images seeks declarations, injunctions to prevent Stability AI from committing the alleged unlawful acts, and an order for the delivery up or destruction of all documents or items whose retention would breach the proposed injunctions.
Should such remedies be awarded, Stability AI would need to produce or destroy any items storing the copyrighted works. This would presumably require the production or destruction of the Stable Diffusion models themselves, given that they have allegedly been trained on such material.
Stability AI was granted an extension until 28 July 2023 to serve its defence. However, on the day of that deadline, Stability AI applied for summary judgment, arguing that Getty Images has no real prospect of succeeding on the claim and that there is no other compelling reason why the claim should be disposed of at a trial. In effect, Stability AI has asked the English court to dismiss the claim without allowing it to go to a full trial. The parties are in the process of exchanging expert evidence in respect of this application.
What are the implications for Getty Images?
Misuse of generative AI of this kind poses a direct threat to Getty Images’ business model. It is hard to envisage users electing to pay Getty Images for its images if they can freely access bespoke images derived from Getty Images’ own resources. In this regard, it will be particularly important for Getty Images to obtain the injunctive relief it seeks.
But the impact may go beyond the commercial: Getty Images also claims it faces reputational risk should it not secure injunctive relief against Stability AI. The claim raises concerns about the ability of Stable Diffusion to generate propaganda and pornographic or violent images which are then associated with Getty Images through the distorted inclusion of its watermark.
For Getty Images this could be a watershed moment. The use of AI-generated images is on the rise. This could open up a market from which Getty Images stands to benefit, provided it can protect its material from being used by other AI companies: in the context of image generation, it arguably has an unparalleled volume of resources on which to train models. This does not appear to be lost on the company. In September, Getty Images released its own AI image-generation tool, which is described as “commercially safe”. It is described as such because Getty Images has offered users an indemnity against any copyright claim they might face after using an image produced by the tool. We have written a separate article exploring the purpose and impact of AI indemnities, here.
Are there wider implications for artists and creators?
Should Getty Images succeed in its claim, the case could provide a helpful precedent for individual or smaller artists and creators who may be less able to engage in novel litigation. In any event, the case will provide useful guidance in this burgeoning area.
The breadth of the Getty Images claim might be useful in this regard. Getty Images is concerned not only with the misuse of its images, but also with the accompanying captions, metadata and other products such as videos. As just one example, the case might open the door to writers pursuing AI companies that train models on their work.
However, the lack of an identifiable watermark on creators’ products or artworks might make it difficult, from an evidential perspective, to bring claims similar to those brought by Getty Images. Such claims might be heavily reliant on evidence obtained through the disclosure process, which may be too speculative a basis for some prospective claimants. It may be that adopting Getty Images-style watermarks becomes necessary for individual artists to protect their intellectual property as AI usage becomes more common, although the evidential significance the court attaches to the watermarks in this case remains to be seen.
At the very least, the claim appears to be alerting the wider creative industries to the risks faced not only by creators, but also by the AI companies that might misuse their work. OpenAI has already taken steps to allow artists to opt their work out of the training of future text-to-image AI models produced by the company. However, this requires submitting an online form and uploading an image of each copyrighted work. Such a requirement could become onerous, particularly for an artist with a large number of copyrighted works to protect who must repeat the exercise for numerous AI companies.
With special thanks to Daniel Pearce, a current trainee in the dispute resolution team, for their contribution to this article.
This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.
© Farrer & Co LLP, October 2023