Artificial intelligence – an overview for schools

Contrary to popular belief, lawyers – well, most lawyers anyway – are happiest when enabling opportunity and growth for their clients, rather than identifying red flags and blockers. But when it comes to artificial intelligence (AI), our experience is that many of our school clients are seeing risks first, rather than getting comfortable embracing potential efficiencies. Equally, others proudly see themselves as early adopters.

In fairness, it is an area where risk and reward are in real tension. As well as positive use cases, there are real and valid concerns for schools in the rapid development – and increasing accessibility – of AI technologies. This article, a version of which first appeared in the most recent edition of ISBA’s Bursar’s Review, offers an overview of the area for schools. Inevitably, it can only scratch the surface of the issues in most areas.

Different AI types and use cases

Much of the focus in the past year has been on generative AI (GAI) and large language model applications, such as ChatGPT, which have democratised access to powerful tools. This focus is understandable, although many of the most promising advances in AI technology are being made in narrower, specialised use cases such as healthcare and education.

Therefore, while schools do need to put in place policies to counteract and manage risk around use of public domain GAI tools (supplementing existing policies on acceptable use of IT, data protection and e-safety), the most radical developments for schools – in core areas such as teaching and safeguarding – are just around the corner. These will likewise require careful risk assessment and policy discussion, at a senior leadership and governor / board level.

AI regulation

Proposals for a new legislative and regulatory regime around AI are still nascent. The EU AI Act is the furthest developed. While the rest of the world will look to it for a lead, some have criticised it for focusing on risk rather than opportunity, and it will not in any event have direct effect in the UK post-Brexit.

The UK government is seeking to be a world leader in the development of a framework for AI governance. While continuing to emphasise a pro-innovation approach, its narrative has focused more recently on “AI safety” and the development of common-sense use principles. While the government has said that the time will come for new laws to regulate AI, for now the framework of existing UK legislation continues to apply. The rest of this article considers the key areas of that existing law.

Data protection law

The UK General Data Protection Regulation (UK GDPR) and Data Protection Act 2018 (DPA) provide a framework for the use, disclosure, management, and security of personal data. Relevant provisions include additional protections for children's data and various mitigations against the use of personal data for automated decision-making. However, the core principles of UK GDPR all apply, including:

  • accountability (in AI design, as well as decision-making and risk management by schools);
  • transparency (being open about how you are using data and technology);
  • necessity and proportionality in how you use data for AI (does it need to be personally identifiable, rather than anonymised or aggregated data?); and
  • security (a key issue with large datasets and whenever engaging third party providers).

More fundamentally, all use of personal data requires a lawful basis. This will not always be consent: although for potentially high-impact and privacy-intrusive uses involving children's data (and in contrast to the majority of processing carried out at schools), informed pupil consent may be preferable – even necessary – as a condition of AI processing. Either way, all data subjects (of any age) have the absolute right to be informed of, and to object to, any fully automated processing or profiling that may have legal effect on an individual without human input.

Schools also need to be increasingly careful about what identifiable personal data is made available on their websites. We know that generative AI relies on huge datasets, scraped from the web, to learn and improve. IT departments may be able to assist in detecting which “bots” have been scraping and spidering your websites (and site maps) for data, and in identifying what technical measures can be used to deter them and to grant or refuse permissions.
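By way of illustration only, the minimal sketch below shows one way an IT team might check a standard web server access log for requests from well-known AI crawlers. The log location and the list of crawler names here are assumptions for the purposes of the example, and should be checked against the providers' current documentation.

```python
# Illustrative sketch: count access log requests from well-known AI crawlers,
# so the IT team can see which bots have been visiting the school's website.
# The log path and the crawler list are assumptions - adjust to your environment.
from collections import Counter

KNOWN_AI_CRAWLERS = ["GPTBot", "CCBot", "Bytespider", "PerplexityBot"]

def count_ai_crawler_hits(log_path: str) -> Counter:
    """Return a count of log lines mentioning each known AI crawler user agent."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log_file:
        for line in log_file:
            lowered = line.lower()
            for bot in KNOWN_AI_CRAWLERS:
                if bot.lower() in lowered:
                    hits[bot] += 1
    return hits

if __name__ == "__main__":
    for bot, hit_count in count_ai_crawler_hits("access.log").most_common():
        print(f"{bot}: {hit_count} requests")
```

The same crawler names can also be listed in the site's robots.txt file to refuse permission to index content: OpenAI, for example, documents a “GPTBot” user agent that respects robots.txt directives, although compliance ultimately depends on the crawler operator.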

Confidential information and intellectual property

The data on which generative AI models are trained may also include proprietary and confidential information. Usage policies should prevent staff and pupils from including such material in what they feed into these tools.

Most “open” generative AI platforms will offer limited assurances that confidentiality or IP will be protected. Increasingly we are seeing “enterprise” versions offering more in this regard, so it is crucial to check the terms and conditions that apply to any technology licensed by the school. For the time being, in the absence of that kind of bespoke / “walled garden” licence, it remains safest to assume that the AI system will train itself on whatever you input.

It is also possible that such confidential information could be recreated in the outputs of AI searches and user questions. Copyright law offers only limited protection here, because of the transformative nature of AI outputs, so the safest policy is not to use anything as a generative input which you would not willingly publish online.

There is the further copyright issue of plagiarism, considered below.

Child safeguarding, online harms and duty of care

The risks posed to child welfare by harmful online content, unregulated access and breaches of privacy are well-documented, but of course AI opens a new front in this familiar battle. Concerns include the use of AI voice cloning to supplement already-advanced “deepfake” technology for the purposes of child impersonation or pornography.

While it is hard to legislate against these risks, we have existing statutory guidance in the form of Keeping Children Safe in Education (KCSIE), which places obligations on schools in terms of online safety, filtering and monitoring; the ICO's Children's Code, which helps regulate online service providers' use of children's data; and the new Online Safety Act.

Discrimination

An area of considerable focus, both for the ICO in the UK and for regulators globally, is the potential for AI programs to reflect and reinforce – but fail to take account of – existing human biases, in terms of both the information they learn from and the outcomes they create. This is a concern not simply in legal terms – as a matter of fair data processing, and of the Equality Act – but more widely. It is possible to see how AI biases could affect children's progress and tailored learning at school without judicious human intervention (alongside technological improvements and better data).

Contract law and policy

Some of the risks above can be mitigated by careful review of contractual terms with software providers – not simply as a matter of commercial risk, but also of ethical due diligence. There is considerable overlap here with UK GDPR: specifically, given that most providers will be data processors acting on behalf of schools, the requirements under UK GDPR for various minimum contractual protections.

Schools should also look beyond the contract to assure themselves that the provider has an adequate track record – plus the security policies, credentials and references – to take safe receipt of their school community's data. That is not to mention the commercial resources and insurance to deal effectively with breaches, should anything go wrong.

There are also different consent rules when contracting online with children. Generally, the school will not be considered the “information society service” for these purposes – that will be the EdTech or other provider offering the service to users (even if it does so via a contract with the school). However, this underlines the need to engage providers who understand and comply with the applicable rules.

Schools should review their own website terms of use, to check whether there are restrictions on how third parties may use data and copyright material taken from their sites; and also their privacy notices and policies, to ensure these accurately reflect any new or unexpected uses of personal data in connection with AI.

Risks with use of generative AI platforms (by staff and pupils)

A clear policy should be in place for staff, covering both what school material they may use as inputs (and for which permitted use cases) and how they use and properly credit any AI “outputs”. The school must, in turn, consider how to manage transparency with parents and pupils about its own uses of AI: for legal and ethical reasons, but also to bring them along on the journey.

As between the staff member and the school, it should be explicit that the school will own whatever outputs staff generate using generative AI tools in the course of their employment – even if there are broader, unresolved questions about the ownership of underlying IP in computer-generated material.

In addition, areas of concern specific to pupils include:

  • Plagiarism and other questions of academic integrity

    The Joint Council for Qualifications has produced an extensive resource on AI use in examinations, for teachers and assessors, and this has recently been updated with more practical guidance and case studies. It is recommended reading; in summary, it takes a hard line on AI misuse in order to preserve existing integrity principles in assessments.

  • Reliability of outputs

    As impressive as the advances in AI technology have been, it remains the case that large language models still produce regular errors and misunderstandings. This has implications for learning, but also for other areas such as safeguarding. Pupils should be steered towards known and trusted personnel and resources, even as sophisticated AI may begin to give an impression of omniscience.

  • School policy vs. home behaviour

    As with all risks of online harm, schools face a dilemma in terms of:

    • the acceptable use rules they can apply on school premises or with school WiFi and equipment – by means of policies and technical measures (blocking, filters, monitoring); and
    • how they can seek to guide pupils and parents in respect of using the technology at home.

Naturally, schools will not wish to overreach in terms of where their duty of care begins and ends. Equally, it is critical that ethical and disciplinary standards are upheld for homework or coursework produced out of school, and pupils' use of AI resources at home should reflect this.

Specific AI applications for schools

All schools will hopefully be considering the potential efficiencies that AI applications could bring in terms of staffing, assessments and administration – which of course, alongside the promise of freeing up teacher time, raise job security concerns. The following are examples of AI applications which have the potential to impact pupil personal data (as well as having implications for staff-generated IP).

Intelligent Tutoring Systems (ITS) and Personalised Learning

AI can generate personalised learning pathways, adapting both content and the pacing of teaching to the needs of the individual pupil, including making improvements in “real time”. However, doing so requires a good deal of pupil data – including, for example, SEND data – which must be processed on a named basis in order to be effective and personalised. The benefits may outweigh these risks, including in identifying early the need for learning interventions from analysis of pupils' work, and in improving accessibility for pupils with disabilities; but such use will clearly require suitable prior risk and privacy impact assessments, as well as secure systems.

AI can also deliver improvements in areas such as curriculum development and teaching materials, by analysing existing trends and resources – and helping to improve them – by reference to non-personal inputs and/or aggregated data. This is clearly less risky in terms of privacy (albeit less tailored to individuals), but it still poses risks in terms of how staff – and third-party companies – handle school proprietary information.

Gamification, virtual and augmented reality

The prospect of creating immersive learning experiences for pupils, to make complex subjects more understandable and engaging for them, has been touted for years as a potential boon of EdTech. The advent of AI-driven VR and AR applications brings this prospect closer than ever, although cost remains a factor.

The notion of “gamifying” education has its detractors, including those who would like to see fewer screens in children's lives and less focus on short-term rewards in neural development. But there is no doubt that gaming elements can produce motivating structure and goals for some children, and AI can help tailor programmes to individuals.

Safeguarding tools

We have already considered the safeguarding risks. Equally, there are positive use cases here: AI-driven safeguarding tools will, as they develop, help improve real-time monitoring of online activity; and identify risks of harm, patterns of behaviour, and cases suitable for early help. Of course, this also requires processing of some of the most sensitive data a school will hold, which puts particular stress on the attendant security risks and need for due diligence with providers.

Language learning and translation

AI-driven language learning apps and translation tools are getting increasingly powerful and reliable (see for example DeepL). While this can clearly benefit learning, it carries with it the risk of short cuts and cheating.

Plagiarism detection

Concerns that pupils may use generative AI to cheat are well documented. The other side of this coin is that AI will be better placed than any human to identify where content has been lifted and plagiarised – or even whether it bears the hallmarks of specific generative AI programmes.

It is also worth noting that – while personally identifiable information is usually buried in the datasets of “large language models” and is unlikely to be disclosed in outputs – smaller, targeted datasets (based around pupil groups) will likely lead to more readily identifiable outputs in respect of individuals, and policies need to reflect this additional risk.

Overall, what may be termed “data-driven decision-making” is clearly going to bring net positives in both efficiency and accuracy for educators and educational administrators – it will improve research, insights and teaching materials, and will help tailor and customise learning and classroom management. However, we are not at a stage – even if we ever wanted to be – where we can forgo human oversight and input. This is not simply a matter of staff training, but also policy-making and the UK GDPR principles of “privacy by design and default”: ie prior risk assessments, data protection impact assessments (DPIAs), anti-discriminatory safeguards, and baking in mitigations (including the checks and balances of human oversight) from the start.

Summary

Many schools have understandably adopted a “wait and see” policy with AI up until now; but the time has come to acknowledge both the potential of AI and the reality that many staff and pupils will already be using it. At the very least, if your school has not done this already, an update will be required to existing policies – and many schools will want a policy specific to AI use by pupils and staff.

This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.

© Farrer & Co LLP, February 2024

About the authors

Owen O'Rorke

Partner

Owen is a rights specialist with expertise in data protection and intellectual property, and considerable experience in both contentious and advisory contexts. He is a recognised authority in information sharing and data privacy in schools, fundraising, and the sports sectors, with a particular interest in safeguarding.

Sam Talbot Rice

Senior Associate

Sam provides practical and focused advice on business-critical areas across the fields of data protection, intellectual property and commercial contracts.
