Can ChatGPT and other generative AI tools be liable for producing inaccurate content?

The use of generative artificial intelligence (AI) tools such as ChatGPT has grown dramatically in recent months. ChatGPT is a natural language processing tool driven by generative AI technology. It draws on huge amounts of data obtained through “web scraping”: the automated extraction of data from websites across the internet. The service then predicts the most likely response to a user's input, based on the data available to it.
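
By way of illustration only (this is a toy Python sketch, not a description of OpenAI's actual systems, and every name in it is invented for the example), the two steps described above can be shown in miniature: gather some text, then answer with the statistically most likely continuation. The key point for what follows is that such a model outputs what is probable, not what is verified:

from collections import Counter, defaultdict

# Step 1: a "scraped" corpus -- here just three hard-coded sentences
# standing in for text gathered from across the internet.
corpus = [
    "the court held the claim failed",
    "the court held the appeal failed",
    "the court dismissed the claim",
]

# Step 2: build a simple bigram model -- for each word, count which
# words were seen immediately after it in the corpus.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# The model answers with whatever is most probable given its data,
# not with what is true -- the root of the "hallucination" problem
# discussed below.
print(predict_next("court"))  # -> "held" (seen twice, vs "dismissed" once)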

This rapid growth has led to concerns around the accuracy of the content that ChatGPT and similar tools publish in answer to users' questions. Inforrm, the International Forum for Responsible Media, has recently published this article demonstrating ChatGPT's tendency to create plausible-looking caselaw out of thin air. Indeed, Meta's chief AI scientist Yann LeCun has heavily criticised ChatGPT, stating that both Google and Meta possess similar AI models but are unwilling to release them until they are ready.

If ChatGPT is indeed “hallucinating” (producing inaccurate answers extrapolated from the information available to it) about individuals, do those individuals have any recourse against its provider, OpenAI? As the number of ChatGPT's users grows, the law of averages suggests that the potential for the chatbot to circulate inaccurate (and potentially very harmful) information about individuals grows in parallel. However, there are some potential issues with simply applying the usual principles of a defamation and/or data protection claim against the operator of ChatGPT (used here as an illustrative example of a generative AI tool).

Defamation

Liability of OpenAI

First, can an individual bring a claim against OpenAI for defamation if ChatGPT produces a statement about them that meets the key preliminary criteria for a defamatory statement?

It seems that OpenAI, as the provider of ChatGPT, will be a potential defendant in any defamation claim brought over ChatGPT's outputs. For example, OpenAI is currently the defendant in a claim filed in the US by Mark Walters, a Georgia radio host, after ChatGPT provided a synopsis of a fictional legal claim in which Walters was named as a defendant.

If the statement provided in ChatGPT's output is found to be untrue and capable of defaming the named individual, OpenAI may run into difficulties if it seeks to rely on two commonly pleaded defences to defamation: honest opinion and public interest. Both defences involve an element of subjectivity, the public interest defence especially so. That defence applies only to statements made on matters of genuine public interest, and also requires that (a) the publisher believed that publication of the statement in question was in the public interest and (b) that belief was reasonable to hold at the time of publication.

If OpenAI is the entity being sued, there are obvious difficulties in applying defences that turn on subjective belief to outputs generated without any human intervention.

OpenAI may also seek to rely on the defences available to intermediaries in defamation claims. However, OpenAI and ChatGPT do not fall neatly within the more common defences relied upon by hosts of user-generated content: OpenAI ultimately establishes the algorithms behind ChatGPT and has gathered the data ChatGPT uses. It goes beyond being a passive host, or even an editor, of defamatory content. Rather, as the term “generative AI” suggests, it creates the content, and thereby resembles a primary publisher.

What also remains unclear is the extent to which OpenAI might be responsible for the re-publication of defamatory statements by the users of its services.

Liability of users of ChatGPT

If a user of ChatGPT subsequently re-publishes a defamatory statement contained in an output received from ChatGPT, that user could also be sued. There is no defence for repeating a false and defamatory statement made by someone else, and we suspect a court will be similarly unsympathetic to a re-publisher's plea that they reasonably believed the information to be true because it came from ChatGPT. This is particularly interesting given the caveats OpenAI has published regarding the content produced by ChatGPT.

Through these caveats, OpenAI acknowledges that the platform has accuracy issues and has sought to distance itself from any suggestion that the information it produces is wholly factual. The ChatGPT terms and conditions state that:

“...given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review…”

OpenAI further warns on its website that ChatGPT “can occasionally produce incorrect answers” and “may also occasionally produce harmful instructions or biased content”.

Such statements certainly look like an attempt by OpenAI to shift responsibility for incorrect answers onto users (or at least to absolve itself of liability). These caveats and warnings are likely to be noted by courts when assessing defences pleaded by individuals who have re-published inaccurate information originally produced by ChatGPT. However, in the authors' view, they are less likely to be deemed adequate to remove OpenAI's responsibility for unlawful content generated via ChatGPT, especially if detriment is caused to the reputation of an individual to whom such content relates.

Data protection

In addition to potential defamation claims, publication of inaccurate statements about an individual by ChatGPT has the potential to breach that individual’s data protection rights. The relevant UK rights arise under the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018 (DPA 2018).

Article 4 of the UK GDPR makes clear that collecting personal data, storing it temporarily, hosting such information and/or making it available to internet users amounts to processing of that data, and it is the data controller who is responsible for compliance with the data protection principles under the UK GDPR. Article 5(1)(d) of the UK GDPR then requires that personal data processed about an individual be accurate. It seems clear that OpenAI is acting as a data controller (ie the entity that determines the purposes and means of the processing of personal data) in collecting and storing information about individuals (whether by collating online information or through information users supply when using ChatGPT) and then publishing such data in response to questions asked of ChatGPT.

Though OpenAI appears to have no establishment in the UK by way of offices or representatives, it would still appear likely to be subject to the UK GDPR. This is due to the regulation's extra-territorial scope: it applies to controllers outside the UK who process the personal data of individuals in the UK, where those processing activities relate to offering goods and services to such individuals or monitoring their behaviour.

OpenAI has explicitly acknowledged concerns around the processing of personal data in the ChatGPT terms and conditions, which require users who plan to use ChatGPT to process personal data to “provide legally adequate privacy notices and obtain necessary consents” from the individuals whose data they are processing. Such users must also represent to OpenAI that the data is being processed in accordance with applicable law, and must fill out a form to request execution of OpenAI's Data Processing Addendum.

However, this form does not speak specifically to the processing of inaccurate personal data, and OpenAI cannot avoid its responsibilities under the UK GDPR simply by warning users of the importance of obtaining the necessary consents for uploading personal data to ChatGPT and using its outputs. If OpenAI is found to be caught by the UK GDPR, it will be responsible for ensuring that only accurate personal data is processed by it. If an individual can demonstrate that the personal data stored by ChatGPT, or appearing in its outputs, is inaccurate (or processed unlawfully for some other reason), it should be possible, using existing rights under the UK GDPR, to require OpenAI to remove the relevant data and ensure that outputs do not repeat the inaccuracy. How that works in practice with a large language model like ChatGPT is another matter. In principle, however, this would be similar to the process used in relation to inaccurate data held on due diligence databases such as WorldCheck (see this article for further information on that process).

Conclusion

It remains to be seen how the UK courts will view cases brought in relation to inaccurate claims published by ChatGPT, and we have highlighted a number of outstanding questions in this respect. What is clear is that there is no obvious immunity from liability for tools like ChatGPT, or for the users of those tools, should inaccurate information about individuals be stored, generated or repeated. Whether this will turn into a battleground for defamation and data protection-based claims also remains to be seen. One reason it might not is that ChatGPT's outputs are typically seen only by the individual user who prompted them, which limits the scope of publication. However, as these large language models are integrated into larger platforms (eg search engines), their content will be published more widely and the risk of reputational harm to the individuals referred to will increase. We have no doubt this article will need to be updated and developed as the months and years go by.

This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.

© Farrer & Co LLP, July 2023

About the authors

Ian De Freitas

Partner

Ian has over thirty years' experience as a commercial litigator. He specialises in disputes involving data, technology and intellectual property. Ian leads the firm’s Data, IP and Technology Disputes team. 

Email Ian +44 (0)20 3375 7471

Thomas Rudkin

Partner

Tom is a leading reputation, media and information lawyer.  He advises the firm’s clients on all issues relating to their reputation, privacy, confidential information and data. Tom is a member of the firm’s Reputation Management and Data, IP and Technology Disputes practices.   

Email Thomas +44 (0)20 3375 7586

Emily Costello

Associate

Emily specialises in reputation management and dispute resolution across a broad spectrum of privacy, defamation, tech and data protection issues. Emily provides bespoke legal advice to a wide range of clients, including high-profile individuals, schools, charities, corporations and executives.

Email Emily +44 (0)20 3375 7300