Using AI in recruitment: new guidance from the ICO on key compliance obligations

In this article, Sam Talbot Rice and Arisa Terada, Trainee in the Intellectual Property team, discuss the risks and responsibilities associated with using AI tools in recruitment.

Many employers are turning to AI tools to improve the efficiency of their recruitment processes, from sourcing candidates and summarising CVs to assessing and scoring applicants’ competencies and skills. However, these tools also bring new risks: if not used lawfully, they may unfairly exclude applicants from roles because of ingrained biases, or compromise their privacy.

The Information Commissioner’s Office (ICO) has recently published its AI in Recruitment Outcomes Report, detailing key findings from a series of consensual audits undertaken with organisations that develop or provide AI tools for recruitment. While acknowledging the innovative potential of AI in streamlining recruitment and highlighting many existing good practices, the ICO identified significant areas for improvement in how personal data is used by AI sourcing, screening and selection tools.

In this article, we provide a high-level summary of the key concerns and recommendations from the ICO’s report for employers considering or currently using AI tools in their recruitment processes.

Data protection impact assessments (DPIAs)

If you are an employer considering implementing AI tools in your recruitment process, the procurement stage is crucial for understanding, addressing and mitigating any potential privacy risks or harms to individuals. The ICO report stresses the importance of employers completing a DPIA before integrating AI tools in their recruitment processes, ideally at the procurement stage.

The DPIA must include a comprehensive assessment of privacy risks, outline appropriate ways to mitigate these risks and consider the balance between people’s privacy risks and other competing interests (eg improved efficiencies). It is designed to help you ask the right questions of your AI provider. Your DPIA should be kept up to date as the processing (and its impact on individuals) evolves. Ensuring a DPIA is undertaken is an important element in helping to meet the organisation’s accountability obligations under data protection law.

Lawful basis for processing

When processing personal data, it is essential to establish an appropriate lawful basis, such as legitimate interests, from the outset. Most lawful bases also require that processing is “necessary” for a specific purpose. It is important to get this right the first time: in particular, you cannot ordinarily switch from “consent” to a different lawful basis later.

For processing sensitive special category data, such as racial or ethnic origin or health data, you must identify both a lawful basis and an additional condition for processing special category data under the UK GDPR and the Data Protection Act 2018. This should be documented and explained in relevant policies and privacy notices. As the report points out, it is also important to remember that inferred personal data may well constitute special category personal data, triggering the additional compliance obligations of Article 9 UK GDPR.

Data controller and processor roles

Both employers and AI providers share responsibility for data protection compliance. It is important to identify who is the controller (or joint controller) and who is the processor of personal information, and this should be clearly recorded in a contract with the provider. For example, if an AI provider uses personal information processed on behalf of the employer to develop a central AI model deployed to other organisations, it would be considered a controller. It comes down to whether the provider is, in practice, exercising overall control over the purposes and means of the processing.

If the AI provider is a processor, explicit and comprehensive written instructions should be provided by the controller. Employers should ensure the provider’s compliance with these instructions and could also set performance measures such as statistical accuracy and bias targets.

Fairness

Employers must ensure that the AI tools they procure process personal information fairly, helping to reduce the subconscious biases of human decision-making rather than merely embedding and amplifying historic ones. The fact that AI systems learn from datasets does not guarantee that their outputs will be free from discriminatory outcomes. The data used to train and test AI systems, as well as the way they are designed and used, might lead to systems that treat certain groups less favourably, in breach of the Equality Act 2010.

The audit revealed instances where AI tools did not process personal information fairly, such as allowing recruiters to filter out candidates with certain protected characteristics. Employers must ensure that personal information is processed fairly by monitoring for potential or actual fairness, accuracy or bias issues in the AI tool and its outputs. These issues should be raised with the provider so they can be addressed appropriately. Employers should seek clear assurances from the AI provider that they have mitigated bias and ask to see any relevant documentation.
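For employers that do agree statistical bias targets with their provider, a simple starting point is to compare the tool’s selection rates across candidate groups. The following is an illustrative sketch only: the ICO report does not prescribe any particular metric, and the group labels, sample data and 0.8 threshold below are hypothetical examples rather than regulatory requirements.

```python
# Illustrative sketch: flagging possible disparate outcomes in an AI
# screening tool by comparing shortlisting rates across groups.
# Groups, data and the 0.8 threshold are hypothetical examples.

def selection_rates(outcomes):
    """outcomes: dict mapping group label -> list of booleans (shortlisted?)."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest-rate group."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

outcomes = {
    "group_a": [True, True, False, True, True],    # 4 of 5 shortlisted
    "group_b": [True, False, False, False, True],  # 2 of 5 shortlisted
}
rates = selection_rates(outcomes)
ratios = impact_ratios(rates)
# Flag any group whose rate falls below 80% of the best-performing group.
flagged = [g for g, r in ratios.items() if r < 0.8]
print(rates, ratios, flagged)
```

In practice, any such check would sit alongside, not replace, the qualitative monitoring and provider assurances described above, and the choice of metric and threshold would need to reflect the employer’s own equality and data protection analysis.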

Transparency

Candidates must be informed about how AI tools will process their personal information. This should be communicated through clear privacy information, explaining why the tool is being used and the logic involved in making predictions or producing outputs that may affect candidates. For instance, if an AI tool uses machine learning algorithms to rank candidates, the privacy notice should detail how these algorithms work and how they impact the recruitment process. Candidates must also be informed about how they can challenge any automated decisions made by the tool (although the report noted that most AI providers included human intervention/review at some point in the process). If relevant, it should also explain how the data will be used for training, testing or otherwise developing AI.

Data minimisation and purpose limitation

The ICO found that some AI tools collected far more personal information than necessary and retained it indefinitely to build large databases of potential candidates without their knowledge. For example, personal data was sometimes scraped from social media and job networking sites to build databases of potential candidates. Such practices risk non-compliance with data protection principles under the UK GDPR, particularly data minimisation and purpose limitation. Employers must ensure that AI tools collect only the minimum amount of personal information required to achieve their purpose and consider how to ensure it is not used for other incompatible purposes. Employers should be alert to the fact that the ICO identified several cases where AI providers had repurposed candidate personal information in their systems to train, test, and maintain their AI tools.

Conclusion

The use of AI tools in recruitment has quickly become the norm for many organisations, particularly when dealing with large scale recruitment. While it is understandable that many employers will want to embrace new technology in this way, as the ICO report highlights, doing so comes with the responsibility to mitigate risk and potential negative impacts. This will undoubtedly remain an important topic, both in terms of regulatory focus and public debate, as the impact of AI on various aspects of daily life continues to be explored.

This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.

© Farrer & Co LLP, December 2024 

About the authors

Sam Talbot Rice

Senior Associate

Sam provides practical and focused advice on business-critical areas across the fields of data protection, intellectual property and commercial contracts.
