This article examines the increasing trend of Generative AI system providers offering indemnities to users of their systems. These indemnities cover copyright infringement claims brought by third parties arising from users' use of Generative AI tools. We explain why providers are offering these indemnities and the extent to which users can take greater comfort in using Generative AI systems as a result. As we explain, however, these indemnities do not offer a "get out of jail free card" to users.
With rapid growth in the use of Generative AI tools, questions have been asked about the extent to which providers have infringed copyright in order to create them. At the moment, the most high-profile dispute about this in the UK is that brought in the English High Court by Getty Images against Stability AI. Getty Images claims that Stability AI used Getty’s copyright protected images to train its Stable Diffusion system. There is a plethora of similar claims being brought in the United States courts by content owners against Stability AI and other Generative AI providers.
Users of Generative AI tools obviously need to be cautious of the risk of using copyright infringing material produced in the output of these tools. Recognising the increasing concerns of users, providers have begun to offer certain indemnities. For example, Microsoft recently announced its Copilot Copyright Commitment, through which it pledges to assume responsibility for potential legal risks involved in using Microsoft’s Copilot services and the output they generate. If a third party were to bring a claim against a commercial customer of Copilot for copyright infringements for the use and distribution of its output, Microsoft has committed to defending the claim and covering any corresponding liabilities. In a similar vein, businesses using Getty Images’ image generating AI tool are reportedly being offered an indemnity under which Getty Images assumes full legal and financial responsibility for any copyright infringement claims brought by third parties against the user of its tool.
Benevolence or self-interest?
What is behind these moves by providers to offer indemnities to users? We think there are two principal reasons.
First, providers want to give users greater confidence to use their Generative AI tools. For those bringing infringement claims, one effective way to undermine a provider's business model is to target the users, who have little interest in fighting claims and are unlikely to want to be drawn into disputes. As claims of this nature become more widespread and better known, other users are put off from using the tools. By standing behind users with appropriate indemnities, providers are now attempting to offer them the confidence and comfort to continue using the tools "risk free". We return to whether that is truly the case below. Allied to this, those bringing infringement claims may be dissuaded from pursuing them once they become aware that they are facing a deep-pocketed and determined provider standing behind the user.
A second key reason for the indemnities is that they allow the providers to control the basis on which defences to claims are run in what is likely to be a new and developing area of case law. Indemnities of this nature invariably include conduct of claim provisions whereby the provider will step into the shoes of the user and run the defence (or settlement) of the claim in return for granting the indemnity to the user. Below, we consider whether handing over the defence of the claim to the provider is always in the best interests of the user.
Are indemnities the answer for users?
In principle, the offering of indemnities is good news for users. However, they should not be seen as a panacea.
Firstly, the indemnities themselves will be offered on a standard non-negotiable basis. It is unrealistic to expect that those involved in providing these tools will be prepared to negotiate the terms under which the indemnities are offered (except perhaps in very large bespoke projects).
Secondly, the conditions under which the indemnities are offered need to be understood by the user. For example, Microsoft's Copilot Copyright Commitment only covers commercial users, and requires that they use the integrated safeguards and relevant content filters in the tools. Furthermore, users must not attempt to create infringing outputs by inserting data into the AI which they know they have no lawful right to access or use. Users should also be careful of other exclusions or limitations in the indemnities and check whether they are capped or uncapped.
Thirdly, the indemnities may only offer a partial answer for the user. Indemnities may work well in claims for compensation from third parties for infringements, but this is often only part of the story when it comes to the consequences of infringing copyright. A claimant would also typically be looking to stop the infringement continuing by seeking an injunction. That could be very disruptive to the user. For example, if the infringement relates to source code underpinning critical software deployed in the user’s business, then the consequences of an injunction (or the giving of undertakings to cease infringing) could be very serious indeed.
Finally, even if an indemnity is in place, the user might not want to call on the provider to honour it if that means the user loses control over the defence of the infringement claim by ceding conduct of the claim to the provider. Imagine a situation in which the provider compromises the claim by agreeing that the user will stop using the output from the tool, in circumstances where the indemnity (because of its coverage, limitations, exclusions or caps) does not cover the resultant disruption to the user's business. In such a case, the user may decide not to call on the indemnity at all, but with the consequence that it has no coverage from the provider.
Whilst the offering of indemnities to users is an encouraging sign, organisations should be very careful to avoid viewing them as enabling risk-free use of Generative AI tools. Providers are likely to make much of offering indemnity cover to users. However, users should not assume that the availability of an indemnity justifies use of a Generative AI tool in all cases. Clearly, reading and understanding the relevant terms and conditions is a critical message to get across to those in your organisation who are using these tools. Users should then think carefully about what the Generative AI tool is being used for. An analogy might be drawn here with buying software off the shelf for non-critical use, where the risks of something going wrong are minimal, as opposed to acquiring a business-critical bespoke piece of software, where the consequences of it failing are potentially catastrophic. Obviously, there are grey areas in between. However, our key message is this: do not see the offering of provider indemnities as a complete answer to the risk of third-party infringement claims. Instead, weigh the offering of such indemnities in the balance when determining whether to use that provider's Generative AI tool for a project.
With special thanks to Laura Biro, a current paralegal in the dispute resolution team, for their contributions to this blog.
This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.
© Farrer & Co LLP, October 2023