The pros and cons of using AI in the workplace have been much debated over recent years – periodically grabbing the headlines, but coming under surprisingly little scrutiny from a regulatory perspective. The benefits for employers have been significant, often providing a lifeline for business continuity during the lockdowns of the last year, but what about the impact on the employee?
The rise of AI in the workplace has principally been discussed in the context of the gig economy, where workers are no longer directed by a human manager but by an automated algorithm. Now, with the acceleration in the use of radical new technologies, AI is very much a feature of working life in general. However, while stories of robots replacing workers have (on the whole) not yet come true, the rise of algorithmic HR and management tools means that it is the ‘bosses’ whose jobs are, to some extent, at risk of becoming automated.
AI in the workplace today
AI technologies are already being used across a broad range of industries, at every stage in the employment cycle. From recruitment to dismissal, their use has significant implications for today’s workplaces – whether that workplace is a bank, an accountancy firm or a university.
It is worth defining some of the terms in the context of the functions that AI performs:
- Algorithmic decision making (ADM) – this is where the AI – be it an algorithm or a software application (an “app”) – takes over the decision making in the employment relationship. That could be to manage a schedule or assign tasks. It might also be used in the recruitment process. For example, in algorithmic hiring, applicants could be asked to upload a video of themselves talking; yet no human being or HR manager will ever watch that video to make the decision as to whether to hire them. Likewise, ADM has been used to make “high risk” decisions, such as analysing performance metrics to establish who should be promoted or, indeed, dismissed. Think of the scenario where a driver in the gig economy has a rating which drops below 4.5 out of 5 and then, one day, she tries to log on to the app and it no longer works.
- Machine learning - a machine is made “smart” by allowing the algorithms in the AI system to adapt by learning from correlations it finds in one data input (known as the “training set”) to make predictions relating to another data set. For instance, a hiring system trained on the data of a company’s historical employees might use this data to decide who to hire in the future.
- Profiling - machines can use “feature selection” to categorise employees by analysing known and inferred personal data about each person. This data, in turn, could be used to make predictions about individuals, such as the risk of them turning up late for work or whether they are likely to move to a competitor.
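The machine learning and profiling functions described above can be illustrated with a deliberately simplified sketch. The code below is hypothetical and purely illustrative (the keyword-counting “model”, the example CVs and the weighting scheme are all invented for this article, not drawn from any real system): it shows how a hiring tool trained on a historically male-dominated intake can learn a negative weight for a word such as “women’s” without anyone intending it to.

```python
# Hypothetical, deliberately simplified "hiring model": it learns a weight
# for each CV keyword from historical hiring outcomes (the "training set").
# A keyword's weight is how often it appeared in hired CVs minus how often
# it appeared in rejected ones.
from collections import Counter

def train(historical_cvs):
    """Learn keyword weights from (cv_text, was_hired) records."""
    weights = Counter()
    for words, hired in historical_cvs:
        for word in set(words.lower().split()):
            weights[word] += 1 if hired else -1
    return weights

def score(weights, cv):
    """Score a new CV by summing the learned weights of its keywords."""
    return sum(weights[word] for word in set(cv.lower().split()))

# Invented training data drawn from a historically male-dominated intake:
history = [
    ("captain of men's rugby team strong engineer", True),
    ("men's chess club president keen engineer", True),
    ("women's netball team captain strong engineer", False),
]
weights = train(history)

# The word "women's" now carries a negative weight purely because it
# appeared in historically unsuccessful applications - the same failure
# mode as the Amazon example discussed below. Two otherwise identical
# CVs are scored differently:
print(score(weights, "women's football team captain strong engineer"))
print(score(weights, "men's football team captain strong engineer"))
```

No single rule in this sketch mentions sex, yet the learned weights reproduce the bias in the training set; that is precisely why “cleaning” the inputs, discussed below, is harder than it sounds.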
As these functions show, algorithmic management is starting in some areas to replace – or, at the very least, augment - the exercise of a whole range of traditional employment functions.
Where algorithms are used in place of human decision making, it is perhaps unsurprising that they sometimes risk replicating existing biases and inequalities in society. Consequently, unfair rules have been inadvertently hardcoded into AI systems, thereby resulting in discriminatory decisions:
- In March, Uber’s use of facial identification software in its app was inconsistent and inaccurate when it came to skin colour. This indirectly caused racial discrimination by denying BAME drivers and couriers the use of its app (and therefore their access to work).
- Amazon’s hiring algorithm was ultimately scrapped after its machine learning system taught itself that male candidates were preferable. This sex discrimination was brought about by using data relating to Amazon’s top engineers (who were mainly men) as the “training set”. This resulted in CVs being discarded where they included the word “women’s”, (such as “women’s netball team captain”) and by giving low scores to graduates of all-women colleges.
But even where the data has been “cleaned”, there are still legal traps which need to be avoided. For instance, one approach which computer scientists favour to avoid such biases is to run a different algorithm for each subpopulation in a data set. However, this is the equivalent of saying that there should be one machine to score women and another to score men (or separate machines to score people of different ethnicities, for example). Computationally, this approach could be an elegant solution. Nevertheless, in employment law terms, the inconsistent application of criteria to people with different protected characteristics would fall foul of the Equality Act. This would make such a technological solution wholly unlawful – and ultimately unhelpful – as a starting point.
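To make the legal problem concrete, here is a hypothetical sketch of the per-subpopulation approach described above (the group names, CVs and hire-rate weighting are invented for illustration). Fitting a separate scoring rule for each group is computationally neat, but it means two candidates with identical CVs can receive different scores depending only on their protected characteristic – the inconsistency that would breach the Equality Act.

```python
# Hypothetical sketch: fit a separate keyword scorer for each subpopulation.
def make_group_scorers(history_by_group):
    """For each group, weight each keyword by its hire rate within that group."""
    scorers = {}
    for group, records in history_by_group.items():
        counts = {}
        for words, hired in records:
            for word in set(words.lower().split()):
                n_hired, n_seen = counts.get(word, (0, 0))
                counts[word] = (n_hired + hired, n_seen + 1)
        scorers[group] = {w: h / n for w, (h, n) in counts.items()}
    return scorers

def group_score(scorers, group, cv):
    """Score a CV using the model fitted for that candidate's group."""
    table = scorers[group]
    return sum(table.get(word, 0.0) for word in set(cv.lower().split()))

# Invented historical data in which the two groups had different outcomes:
history = {
    "group_a": [("python sql leadership", True), ("python sql", False)],
    "group_b": [("python sql leadership", False), ("python sql", True)],
}
scorers = make_group_scorers(history)

# An identical CV receives a different score purely because of group
# membership - inconsistent criteria applied by design.
identical_cv = "python sql leadership"
print(group_score(scorers, "group_a", identical_cv))
print(group_score(scorers, "group_b", identical_cv))
```

The divergence in the two printed scores is not a bug in the code: it is the intended effect of the technique, which is exactly why it cannot be the starting point under the Equality Act.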
Technological problems, legal solutions?
The paradox of AI is that it expands its control over the workforce, while simultaneously diffusing accountability in the decision-making chain. An HR manager in charge of hiring workers will be subject to anti-discrimination duties in law. But who is held responsible when an AI tool, itself, introduces discrimination into the “hiring and firing” process?
An employee is unlikely to be able to claim for direct discrimination in such circumstances, as it is not possible to show any kind of intent on the part of the AI. With indirect discrimination the only claim available, this opens up scope for the employer to justify its use of the AI. So, in the example of an automated hiring process, an employer might have a defence that its use of ADM is a proportionate means of achieving a legitimate aim (in this case, efficiently dealing with a very high number of applications for a given role). As such, there is a risk that a claimant will be unable to enforce their right not to be discriminated against wherever the employer can defer accountability to the AI system.
As it currently stands, the law does allow employers to make decisions without human intervention, albeit only in limited circumstances and subject to transparency requirements under the UK General Data Protection Regulation (UK GDPR). The Trades Union Congress (TUC) (in conjunction with Robin Allen QC and Dee Masters of Cloisters Chambers) has argued for changes in the law in its recently published report on the use of AI at work. This report sets out the TUC’s recommendations for better protection of workers against algorithmic discrimination, including:
- Re-evaluating existing legal structures, such as the Equality Act, to protect against “discrimination by algorithm”. This would include: “a complete and express reversal of the burden of proof in relation to “high risk” uses of AI or ADM systems, such that the employer must prove from the outset that discrimination has not occurred rather than the conventional burden of proof in discrimination claims where the claimant bears the initial burden.”
- “Red lines” beyond which the deployment of new technologies should not occur. Specifically, the TUC argues that the use of AI and ADM in high-risk decisions should only be permitted where existing and potential employees and workers themselves can sensibly understand those decisions, be part of the decision-making process and satisfy themselves that the technology is being used in a way which is rational, accurate and non-discriminatory.
- AI registers to be made mandatory for employers to maintain and update regularly and for these registers to be accessible to employees, workers, and job applicants.
Likewise, the government in its (much lambasted) race report, has made its own recommendations for improving the transparency of AI and ADM. This report calls for:
- Mandating the use of tools, such as Algorithmic Impact Assessments, that help raise fairness risks and detect and mitigate bias before, during, and after any system deployment;
- Placing a transparency obligation on all public sector organisations that apply algorithms with an impact on significant decisions that affect individuals; and
- Asking the Equality and Human Rights Commission (EHRC) to issue guidance on the lawfulness of bias mitigation techniques, the collection of data to measure bias, and to clarify how to apply the Equality Act to ADM.
Where next for the regulation of AI?
So far, AI has on the whole been developed and deployed with very little oversight. However, given the rapid rise in its reach and impact, we are starting to see a shift towards calls for greater restrictions and regulation. Perhaps unsurprisingly (given its far-reaching approach to privacy in the GDPR), the EU appears to be leading the way on this front, with its recently published Proposal for a Regulation laying down harmonised rules on artificial intelligence. This is described as the first ever legal framework on AI, aimed at harnessing the opportunities and addressing the challenges of AI and positioning the EU to play a leading role globally on the issue.
From a work perspective, the EU’s report states that any AI-systems involved in employment, worker management and self-employment (eg in recruitment, evaluation of performance, promotion and termination) should be classified as “high-risk”, and so subject to specific safeguards, “since those systems may appreciably impact future career prospects and livelihoods of these persons”. Proposed obligations include (from a provider point of view) ensuring human oversight of AI systems and focusing on the quality of the data used to teach the AI system, and (from a user point of view) following instructions on the use of AI systems, monitoring the functioning of the system and keeping records. The proposed sanctions for breach are significant, with fines of up to €20 million or 4 per cent of annual worldwide turnover.
The rules are still in draft form and subject to a lengthy approvals process, so it is likely to take several years before anything comes into effect. Moreover, now that the UK has left the EU, the proposals will not be binding on the UK as and when they are implemented. Nevertheless, in publishing its proposals, the EU has sent a strong message about its intent to push for tighter extra-territorial oversight on how AI is used and it will be interesting to see how the UK, and indeed the rest of the world, respond. Even if the UK Government does not follow suit in introducing regulations of its own, it is worth noting that UK based entities creating or deploying AI in the EU will still need to follow whatever EU rules eventually come into force.
For more information, see our article on Artificial Intelligence: Analysing the EU’s new Regulation.
What this means for employers
In the absence of any clear guidance, employers should be live to the potential unintended consequences of any automated AI systems they use. This requires organisations to stay up to date with rapidly changing technological developments and, wherever such AI systems are implemented, to carefully consider how these might impact on fairness, accountability, and transparency in the workplace.
The extent to which employers bear legal responsibility for any discriminatory decisions arising from such AI systems is currently unclear; but it is incumbent on employers to inform themselves of such issues. In other words, there is still very much a need to keep the ‘human’ in human resources decision-making.
If you require further information about anything covered in this blog, please contact Rachel Lewis, Iman Kouchouk, or your usual contact at the firm on +44 (0)20 3375 7000.
This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.
© Farrer & Co LLP, April 2021