AI in higher education: legal insights
Insight
On 17 October, David Copping and Ethan Ezra hosted a webinar, billed as Cutting Through the Noise, discussing the legal implications of artificial intelligence (AI) in the higher education (HE) sector. The key takeaways were as follows:
Introductory points
- The existing legal framework: There are clearly gaps in some areas (such as the law around deepfakes, or the development / deployment of AI tools which pose existential risks) and areas of policy which need to be better developed (such as around plagiarism). However, we do have an existing legal framework which enables many questions around the use of AI to be answered without any new law or regulation being implemented.
- Avoid treating AI as a technological monolith: Whilst much of the current focus relates to generative AI, it is important to remember the breadth and variety of AI as a technological phenomenon. This certainly rings true in the HE sector: current AI examples include intelligent tutoring systems, personalised feedback software, and educational data mining platforms.
Student and academic outputs
- Differing views on (generative) AI: Currently, there is a huge discrepancy in how students and academics view the use of generative AI. The former are overwhelmingly favourable towards its use, whilst the latter largely view AI as a risky plagiarism tool. Indeed, student use of generative AI does present challenges, from the misrepresentation of one's academic abilities to infringement of third-party-owned IP.
- Integration rather than prohibition? Given the existing prevalence and popularity of generative AI within the student community, it may be difficult to enforce an outright ban on its use. Universities may be better off implementing strategies to manage and regulate the use of (generative) AI. Some of these include:
- Internal and external liaison: Consider setting up an oversight committee or board to discuss and evaluate the risks of generative AI. Also, keep a close eye on what other universities, industry bodies and regulatory groups are saying. The Russell Group's July 2023 principles on the use of generative AI tools in education are illustrative of this, with principle 5 noting the need for collaboration between various industry bodies.
- Stay on top of initiatives which counter the risks: For example, Imperial College Business School’s IDEA Lab has proposed a "Generative AI stress test" for teachers to assess the vulnerability of their modules to generative AI tools.
- Implement an AI use policy: Consider addressing: (i) what AI the policy covers, (ii) how / when it can be used by students, (iii) how the policy ties in with your existing plagiarism policies, (iv) ways of verifying student learning (eg when to require signed declarations and full source citations), (v) restrictions on disclosing proprietary information and safeguards against third-party IP infringement, and (vi) when faculty can use (generative) AI.
IP issues with AI
- Can AI invent / own patents? As a reminder, patents protect inventions which are novel, involve an inventive step and are capable of industrial application. The UK IPO has previously refused two 2018 applications under sections 7 and 13 of the Patents Act 1977 (which require the involvement of “a person”) because the AI machine in question, known as DABUS, was deemed not to be “a person” and so could not be the inventor or owner of the patent. The case is currently before the Supreme Court and we are awaiting the decision. Note that a UK IPO consultation on the matter concluded that no change to the existing law was needed.
- What about copyright? Under the Copyright, Designs and Patents Act 1988 (CDPA), the “author” of a work is the person who creates it. In the case of a computer-generated work, s.9(3) CDPA provides that “the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken”. Quite who this is in the case of AI-generated art is up for debate: it could be the creator of the AI system itself, and / or the person who has instructed that system to generate the artwork. Either way, the current copyright framework in the UK (and elsewhere) is capable of applying to AI-generated work. Ownership is less clear: the rule of thumb is that the author of a work will be its first owner (or the author's employer, where the work is created in the course of employment). AI platforms adopt differing approaches, with OpenAI, for example, assigning ownership of the generated output to the user.
- Infringement: As noted above, the use of AI carries an IP infringement risk, primarily because AI tools like ChatGPT are "trained" on vast third-party datasets from across the internet in order to generate output material. There are infringement exceptions, eg text / data mining for non-commercial research, but these are unlikely to apply to larger-scale data extraction in a commercial context.
Data protection issues and safeguarding against bias
- Data protection: The risks here relate to: (i) users inputting personal information into AI tools (given many tools retain / make onward use of input data), and (ii) AI tools which target publicly available personal data (eg profiles and registries). HE institutions should mitigate these issues with suitable data protection impact assessments, AI policies regulating permissible input data (eg limiting personal details), and technical measures to ward off data scrapers.
- Bias: Multiple studies have demonstrated various social and political biases inherent in AI tools like ChatGPT. HE bodies should consider the types of work and tasks for which they would permit AI to be used, eg queries on sensitive political issues, student admissions, and welfare queries.
Government and regulatory pronouncements
These include:
- UK Government: National AI Strategy (September 2021), White Paper on AI Regulation (March 2023, seeking to promote the UK as a leader in AI innovation and regulation), Vallance Review of Digital Technologies (March 2023), Government Response to the Vallance Review (March 2023), and the AI Safety Summit (November 2023).
- UK IPO: Code of practice on copyright and AI (a work in progress).
- UK ICO: Guidance on AI and data protection, and Generative AI: eight questions that developers and users need to ask.
- CMA: Initial Report on AI Foundation Models.
- OfS: Limited output so far, apart from some focus on funding and scholarships for postgraduate education in AI.
- QAA: Advice on the opportunities and challenges posed by generative AI.
- EU: The EU is in the process of implementing a new AI regulation, which is expected to take effect in 2025. Contrast this with the UK's current view that no new, AI-specific legislation is required. The EU regulation adopts a cross-sector, risk-based approach, with harmful AI uses banned and a defined list of “high risk” AI systems subject to strict requirements (eg transparency). There are lower-burden or no requirements for medium / low risk AI. Enforcement will be handled by national regulators and overseen by a new “EU AI Board”.
This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.
© Farrer & Co LLP, November 2023