AI in recruitment

Artificial intelligence (AI) is no longer the stuff of science fiction. AI is an increasing feature of the modern workplace and has become invaluable to many organisations' recruitment processes. Various AI-assisted tools are on offer in the recruitment space to help employers filter applications, including performing the initial screening of CVs and application forms, searching prospective employees' social media using search terms, or deploying chatbots at the first stage of the interview process, with applicants answering questions posed by AI in a video call. For organisations with a high volume of applicants, these tools can save an enormous amount of time and cost. Organisations should be wary of over-reliance on AI in recruitment, however, due to the risk of discrimination.

When you understand what AI is, you can quickly see why it might lead to discrimination. There are many definitions of AI, but a useful one was coined by Professor Frederik Zuiderveen Borgesius: "the science of making machines smart". The key word in this phrase is "making". Humans have to make the AI perform a specific task, and if they input flawed or biased data, that is what the machine will learn, so biases (and with them discrimination) will be built into the system. In 2018, for example, Amazon had to abandon a CV screening algorithm it had been using, as the algorithm (trained on Amazon's own data on previous recruitment decisions) had taught itself that male candidates were preferable to female candidates. This resulted in CVs being filtered out if they included the word "women's".

In the UK, the use of biased AI systems poses the risk of claims of indirect discrimination, as AI used during recruitment could be seen as a PCP (provision, criterion or practice) that is applied to everyone equally but disadvantages a certain group of people who share the same protected characteristic. As we know, discrimination claims can be brought by job applicants as well as by those engaged by organisations. Although we have yet to see a case in the Employment Tribunal on the use of AI during recruitment, the lack of litigation to date should not create a false sense of security.

So, what steps can organisations using (or considering using) AI during the recruitment process take to mitigate the risk of the AI becoming discriminatory and therefore placing the organisation at risk of a claim?

1. As part of the public sector equality duty, public sector organisations will need to carry out an equality impact assessment when considering the use of AI. Even for organisations not subject to this duty, an AI impact assessment covering the risk of discrimination is a sensible safety measure and will help justify any decision to use AI.

2. AI is only as good as the data it is given by humans. Organisations should therefore train the individuals building the AI tool on their own unconscious biases and on how to spot bias in the data they are inputting into the AI.

3. It is crucial that organisations test the output of the AI tool before rolling it out and that they continue to test the tool throughout its use.

4. In its policy paper on algorithms in the workplace, Acas recommends that "algorithms should be used to advise and work alongside human line managers but not to replace them". In most cases fully automated recruitment decisions will not be advisable; instead, keeping an element of human intervention (eg managers having the final decision) will be a sensible check and balance on the use of AI.

5. Transparency is key to gaining and maintaining trust in the process, so organisations should invite open and honest feedback from applicants interacting with the AI recruitment tool. Organisations should also obtain regular feedback from hiring managers on the type of applicants the AI is putting forward.

6. Organisations should keep a clear documentary record of any tests and feedback to build a picture of how the AI is functioning. Organisations can then assess whether the tool has become discriminatory.

For more information, including definitions of key terms used in the context of AI, see our blog Artificial intelligence in the workplace: helpful or harmful?

This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.

© Farrer & Co LLP, December 2022



About the authors

Chloe Westerman


Chloe provides advice to clients from a variety of sectors on both contentious and non-contentious employment matters.
Email Chloe +44 (0)20 3375 7689