It won’t have escaped your attention that AI is in the news a lot at the moment. Following the release of ChatGPT at the end of 2022, not a week seems to go by without headlines either extolling its benefits or panicking about its risks.
Irrespective of which side of the fence you sit on, what is clear is that rapidly advancing AI is here to stay. With that comes the increasing need to consider AI risk management, particularly in areas where AI has the potential to make or inform decisions about individuals. The field of employment is a prime example of this.
In this blog, we look at the current (though evolving) legal and regulatory landscape in the UK regarding the use of AI in employment, as well as how employers might navigate their way through it.
The regulatory landscape
When it comes to worldwide regulation of AI, there is currently no consensus on approach. While the EU is preparing strict regulation and tough restrictions on the use of AI, and Italy has banned ChatGPT over privacy concerns, the UK is planning “an innovative and iterative approach” to regulation.
In its recently published White Paper, A pro-innovation approach to AI regulation, the UK Government proposes, rather than introducing new legislation, a system of non-statutory principles overseen and implemented by existing regulators.
What this means for the employment sector is that the Government intends to “encourage” the Equality and Human Rights Commission and the Information Commissioner to work with the Employment Agency Standards Inspectorate to issue joint guidance on the use of AI systems in recruitment or employment. In particular, the Government envisages the joint guidance will:
- Clarify the type of information businesses should provide when implementing AI systems.
- Identify appropriate supply chain management processes, such as due diligence or AI impact assessments.
- Suggest proportionate measures for bias detection, mitigation and monitoring.
- Provide suggestions for the provision of contestability and redress routes.
Quite whether this is the approach the Government will ultimately adopt, however, was left in question following Rishi Sunak’s comments on his way to the G7 Summit. There he struck what felt like a more cautious tone, emphasising the need for AI to be used “safely and securely, and with guardrails in place”. Could this be an indication that a move to a more regulated position might be on the cards?
For more detailed analysis on the Government’s current White Paper, Ian De Freitas (a partner in our Data, IP and Technology Disputes team), provides helpful commentary in his article Regulating Artificial Intelligence. In the article he explores the five common principles proposed by the Government, assessing them against other recent developments.
The legal landscape
In the absence of specific legislation governing AI in the workplace, and pending possible guidance, it is important employers understand how existing legal risks and obligations may affect their use of AI. These include:
- Discrimination: Much has been said about the risk of bias in algorithms and AI creating or replicating existing discrimination: Amazon, for example, famously had to scrap an AI recruiting tool which taught itself that male candidates were preferable to female candidates. Existing protections from discrimination under the Equality Act 2010 continue to apply to all forms of AI used in employment, and employers should ensure the AI they use does not breach those protections. Acas’ article My boss is an algorithm takes a more detailed look at the ethics of algorithms in the workplace.
- Data protection: Generative AI, such as ChatGPT, uses the data it is given to identify patterns and create new and original data or content. Any employers using data in this way must ensure they do so in a manner which is compliant with the Data Protection Act 2018 and the UK GDPR. See the ICO’s Guidance on AI and data protection for more information.
- Monitoring and surveillance: Reports suggest that a third of workers are being digitally monitored at work, for example via remotely controlled webcams or tracking software. Royal Mail, for instance, recently admitted to using tracking technology to monitor the speed of postal workers. As above, employers should ensure compliance with data protection legislation in any monitoring of their workforce, and should also ensure they do not breach the right to privacy under the Human Rights Act 1998.
- Unfair dismissal: Employees with over two years’ service have the right not to be unfairly dismissed under the Employment Rights Act 1996. In the event AI reduces the need for employees to carry out a particular type of work, employers should ensure an appropriate procedure is followed before making any decisions in respect of those staff members. Where dismissal is contemplated, they must ensure that there is a fair reason for dismissal. Care should also be taken to ensure that the way AI is used does not breach the implied term of trust and confidence between employers and employees, since doing so could give employees the right to bring a constructive unfair dismissal claim.
What can employers do about AI?
We have provided detailed commentary on using AI in employment in two earlier blogs.
In summary, employers may want to consider the following:
- Develop a strategy for the use of AI in the workplace, with consideration as to when its use is and isn’t acceptable.
- Introduce a policy (or update existing policies) regarding the appropriate use of AI by staff.
- Use AI impact assessments to identify and mitigate any risks when introducing AI into the workplace.
- Retain a human element in decision making, to ensure managers have final responsibility for decisions.
- Ensure full transparency over when and how AI is used, especially when it impacts employees or potential employees.
- Deliver training on the use of AI, ensuring it covers issues such as appropriate use of data, accuracy and bias.
The changing nature of jobs
There is no escaping the fact that AI has the potential to radically transform employment as we know it. Recent reports predict that AI could “replace the equivalent of 300 million full-time jobs”. With that comes concerns about the treatment of workers and the erosion of workers’ rights (for example as highlighted by the TUC in its latest conference).
Employers will need to prepare strategically for the changing nature of work and the need to integrate AI into workplace operations. Currently there are likely to be more questions than answers: will there be a need to redesign roles or change work allocation and workflow processes? How can employees be supported in this transition? Is there a need to invest in workforce training to help employees develop the skills needed to work with AI or take on different roles? Regardless, with AI likely to impact most jobs in some way, there is a need for employers to look afresh at their workforce strategies in order to keep pace with the rapid changes that AI might bring.
This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.
© Farrer & Co LLP, May 2023