Artificial Intelligence: Analysing the EU’s new Regulation

On 21 April 2021, the EU Commission published the first draft of the Artificial Intelligence Regulation.

This is a significant piece of legislation, potentially as impactful and complex as the GDPR, and it should therefore be on the radar of companies both in the EU and beyond, for the reasons set out below.

Key features to note at the outset:

It has global reach (regulating AI systems and outputs that have an impact in the EU); it sets out complex regulatory requirements (mainly placed on the creators of AI systems); and it is accompanied by huge fines (up to €30M or 6% of annual turnover).

The legislation also creates a regulatory ecosystem focussed on “High-Risk” uses of AI (e.g. in employment, education and credit scoring settings), bans some manipulative uses of AI and requires transparency in other contexts.

The draft Regulation now enters a period of negotiation with the EU Parliament and Member States, followed by a two-year implementation period, so we would not expect it to be fully operational until 2024 at the earliest. In this article, we explain the main provisions and answer some of the key questions about the new draft Regulation.

What is an AI system?

The legislation looks to regulate AI systems, so it is important to understand how widely the EU seeks to define what is still a developing area of technology and use.

The short answer is that the EU is proposing a deliberately wide term in order to accommodate future developments in technology.

Article 3(1) defines the term as software that is developed with one or more of the techniques and approaches listed in Annex I to the Regulation (e.g. via Machine Learning, or logic- or knowledge-based approaches) which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments the AI interacts with.
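
Purely as an illustration of the definition’s moving parts (our own sketch, not a legal test), Article 3(1) can be read as three cumulative elements: use of an Annex I technique, human-defined objectives, and outputs that influence the system’s environment. The Python below is hypothetical, and the Annex I list is paraphrased:

    # Annex I technique families, paraphrased from the draft Regulation:
    # machine learning approaches; logic- and knowledge-based approaches;
    # statistical approaches.
    ANNEX_I_TECHNIQUES = {"machine_learning", "logic_and_knowledge_based", "statistical"}

    def is_ai_system(techniques: set, human_defined_objectives: bool,
                     generates_outputs: bool) -> bool:
        # Hypothetical reading of Article 3(1): all three elements must be present.
        return (bool(techniques & ANNEX_I_TECHNIQUES)
                and human_defined_objectives
                and generates_outputs)

    # Example: a CV-screening tool built with machine learning that scores candidates.
    print(is_ai_system({"machine_learning"}, True, True))  # True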

Who does the Regulation apply to?

Two main types of entities are caught by the Regulation:

  1. “Providers” of AI systems (those creating the AI systems); and
  2. “Users” of AI systems (those who use the AI systems created by the Providers, other than for personal use).

Providers have a wider range of obligations than Users.

Does the Regulation have global impact?

Yes. In its press release accompanying the publication of the draft Regulation, the EU Commission was clear that it wants to set global standards. Specifically, the Regulation applies to Providers based anywhere who first make available the AI system in the EU market or supply the AI system for first use by a User on the EU market. It also applies to Providers and Users located in a third country where the output of the AI system is used in the EU. In other words, the Regulation adopts a “pay to play” approach – if you want to bring your AI system to the EU, or use its outputs in ways which affect EU based individuals, then you have to play by the EU rules. Note that Article 2 also makes it clear that Users of AI systems located in the EU are also caught by the Regulation, as would be expected.
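
Purely by way of illustration (our own paraphrase, not wording from the Regulation), the scope triggers just described operate as a simple disjunction: any one connection to the EU is enough to bring a system within scope.

    def regulation_applies(placed_on_eu_market: bool,
                           user_located_in_eu: bool,
                           output_used_in_eu: bool) -> bool:
        # Hypothetical paraphrase of the Article 2 triggers discussed above:
        # a single EU connection suffices.
        return placed_on_eu_market or user_located_in_eu or output_used_in_eu

    # Example: a US-based Provider whose system’s output is used in the EU is caught.
    print(regulation_applies(False, False, True))  # True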

How does it set about regulating AI?

The Regulation divides AI systems into three categories:

  1. AI systems which are prohibited;
  2. low risk AI systems subject to transparency requirements; and
  3. AI systems which are designated as High-Risk.  

Prohibited AI Systems

In the words of the EU Commission’s press release, prohibited AI is that which represents a clear threat to the safety, livelihoods and rights of people.

Prohibited AI systems in Article 5 include:

  • AI systems that manipulate individuals subliminally (in ways that they do not comprehend) in order to materially affect their behaviour in a manner that causes or is likely to cause physical or psychological harm;
  • AI systems that exploit vulnerabilities of persons due to their age, physical or mental disability, in order to materially affect their behaviour in a manner that causes or is likely to cause physical or psychological harm;
  • AI systems operated by public authorities for social scoring of individuals; and
  • AI systems used for real-time identification of individuals in public spaces by law enforcement (though this is subject to some exceptions deemed strictly necessary and subject to detailed conditions). 

Low Risk AI Systems

For the second category of low risk AI systems, the focus is on transparency: bringing the existence of the AI system to the attention of parties who are interacting with AI and might not otherwise appreciate that this is the case. The transparency requirements are set out in Article 52.

Providers must ensure that any individual interacting with an AI system (such as a chatbot) is informed of this, unless it is obvious from the circumstances.

Users of AI systems which recognise emotions or intentions on the basis of biometric data, or which assign individuals to specific categories (such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation) on the basis of their biometric data, must inform individuals that they are being exposed to these particular types of system.

Finally, Users causing content consisting of “deep fakes” to be generated by an AI system must label it as such. 

High-Risk AI Systems

As might be expected, High-Risk systems attract the majority of the regulatory requirements.

These requirements are mainly placed on Providers, though there are some obligations on Users too.

High-Risk AI systems are defined in Article 6 as AI systems that have a potentially significant harmful impact on the health, safety or fundamental rights of persons in the EU, taking into account the severity of the possible harm and the probability of its occurrence.

The Regulation divides these systems between products and systems incorporating AI that are already regulated under existing EU legislation (listed in Annex II), and “Standalone AI systems” that are not.

For the former, the idea is that the legislation underpinning them will be brought into line with the AI Regulation. Standalone High-Risk AI systems include (by reference to Annex III) the following:

  • those used in an Education setting (e.g. to determine access to an institution or assess students’ performance);
  • those used in an Employment or work setting (e.g. in recruitment, evaluation of performance, promotion and termination); and
  • those used to determine creditworthiness and for credit scoring.

They also include AI used in remote biometric identification systems, AI used to manage the safety of critical infrastructure such as Transport and Utilities, AI used in law enforcement settings, AI used in the context of immigration, asylum and border control, and AI used in the administration of justice.

Note that excluded from the scope of the Regulation altogether are AI systems developed or used exclusively for military purposes. 

What obligations accompany High Risk AI systems?

The obligations placed on Providers of High-Risk AI systems include:

  • Putting in place a Risk Management System to evaluate and manage the risks associated with the use of the AI system. There is a list of requirements for this in Article 9;
  • Ensuring that the data used to train the AI system is of high quality. There is a list of requirements for this in Article 10. Breaches here attract the highest level of fines (along with using prohibited AI), emphasising how critical data quality is in the EU Commission’s view, given its concerns about bias and discrimination;
  • Keeping records to demonstrate how AI systems have been designed and operate in compliance with the Regulation (see Article 11, Article 18 and Annex IV as regards “Technical Documentation” and Articles 12 and 20 regarding logging the operation of the AI system);
  • Providing Users with clear instructions about how to use the AI system and its outputs and the risks involved (see Article 13);
  • Ensuring that human oversight and ultimate control over the AI system is designed in (see Article 14). Human control over AI is a fundamental cornerstone of the Regulation;
  • Making sure that the systems achieve an appropriate level of consistency, accuracy, robustness and cybersecurity in accordance with the state of the art (see Article 15);
  • Putting in place a Quality Management System in accordance with Article 17, to form the overall plan for compliance by the Provider with the requirements of the Regulation;
  • Conducting a “Conformity Assessment” before putting an AI system on the EU market or after any material change to it. This can be by way of self-certification in most instances (see Article 43.2). It will be signified by the use of a CE marking (see Articles 19 and 43);
  • Registering details of the AI system on an EU database accessible to the public and managed by the EU Commission (see Article 51 and Article 60 and Annex VIII);
  • Monitoring the performance of the AI system once on the EU market (see Article 61) and reporting any serious incidents or breaches of the Regulation to the relevant authorities (see Article 62);
  • Correcting, or withdrawing and recalling from the market, any system which is no longer in conformity with the Regulation (see Article 21); and
  • Providing on request to regulators all of the records required to demonstrate compliance (see Article 23).

The obligations placed on Users are relatively light by comparison (see Article 29). They include:

  • Following the instructions issued by the Provider for use of the AI system and its output;
  • Monitoring the functioning of the AI system and maintaining logs;
  • Informing the Provider of any serious incident with or malfunctioning of the AI system; and
  • Maintaining appropriate records for the above.

What sanctions apply for breaches of these obligations?

There are three levels of fines (see Article 71), illustrated in the worked sketch after this list:

  1. Breaches of Article 5 (prohibited AI systems) and Article 10 (ensuring that the data used to train the AI system is of high quality) attract fines of up to €30M or 6% of annual worldwide turnover, whichever is higher;
  2. Other breaches attract fines of up to €20M or 4% of annual worldwide turnover, whichever is higher;
  3. The supply of incorrect, incomplete or misleading information to regulators in reply to a request attracts fines of up to €10M or 2% of annual worldwide turnover, whichever is higher.
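
By way of illustration only, the arithmetic in each tier caps the fine at the higher of a fixed sum and a percentage of annual worldwide turnover. The sketch below is our own (the function name and tier numbering are hypothetical, not anything prescribed by the Regulation):

    def max_fine_eur(tier: int, annual_worldwide_turnover_eur: float) -> float:
        # Illustrative caps under draft Article 71, as described above:
        #   tier 1 (Articles 5 and 10):                    EUR 30M or 6% of turnover
        #   tier 2 (other breaches):                       EUR 20M or 4% of turnover
        #   tier 3 (misleading information to regulators): EUR 10M or 2% of turnover
        # In each case the cap is whichever figure is higher.
        caps = {1: (30_000_000, 0.06), 2: (20_000_000, 0.04), 3: (10_000_000, 0.02)}
        fixed_cap, turnover_rate = caps[tier]
        return max(fixed_cap, turnover_rate * annual_worldwide_turnover_eur)

    # Example: a company with €1bn annual worldwide turnover breaching Article 5
    # faces a cap of max(€30M, 6% of €1bn) = €60M.
    print(max_fine_eur(1, 1_000_000_000))  # 60000000.0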

Regulators can also require that AI systems be corrected or withdrawn or recalled from the EU market (Article 65).

Who will oversee compliance?

National Supervising Authorities will be designated by each EU Member State to enforce the requirements of the AI Regulation, with a coordinating European AI Board (EAIB) on which the national Supervising Authorities will sit. The roles and functions of the EAIB mirror those of the European Data Protection Board (EDPB) in the context of GDPR: to promote consistent application of the AI Regulation and to issue opinions, recommendations and guidance. In relation to regulated financial services providers, the Supervising Authorities will be those already designated under other sector-specific EU legislation.

EU Representatives must be appointed by Providers located in third countries. This is so that regulators have easy access to someone in the EU who can demonstrate compliance with the AI Regulation and who holds all of the documentation the Provider is required to maintain under the Regulation (see Article 25).

Is the Regulation retrospective?

As far as High-Risk AI systems are concerned, Article 83 states that the Regulation will not apply to those AI systems that have been placed on the market or put into service before the Regulation comes into operation, provided that they are not subject to significant change after that time. The Regulation is silent on whether it is retrospective in other respects.  

How does the AI Regulation interact with the General Data Protection Regulation?

The AI Regulation is stated to be “without prejudice” to GDPR. In other words, it is an additional set of requirements which sits alongside GDPR. This means that where personal data is being processed in the context of an AI system, then GDPR rules will have to be followed as well.  

Is the UK Government likely to follow suit with its own version of the AI Regulation?

The answer for the moment appears to be “no”. Rather than having an overarching law for AI systems, the UK Government seems content to rely on other laws that address the issues raised by AI, e.g. UK GDPR, the Human Rights Act, the Equality Act and Consumer Rights and Product Safety legislation.

Of course, we must not forget that the ambition of the AI Regulation is to be global on a “pay to play” basis, so UK based Providers and Users will need to look to comply if they want to exploit their AI systems and outputs in the EU market.   

What happens next?

This draft Regulation represents a baseline for arriving at a final piece of legislation. We anticipate that it will be heavily negotiated with the EU Parliament (who are likely to try to strengthen it) and the EU Member States (some of whom may try to reduce its impact). This process took four years for GDPR, followed by an implementation period of twenty-four months. We would expect the EU Commission to push the AI Regulation forward at a much quicker pace, as it is a central plank of its Digital agenda. On that basis, it could be finalised in 2022 and in operation by 2024. We will keep an eye on its progress.

If you require further information about anything covered in this briefing, please contact Ian De Freitas, David Fletcher, or your usual contact at the firm on +44 (0)20 3375 7000.

This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.

© Farrer & Co LLP, April 2021

About the authors


Ian De Freitas

Partner

Ian has over thirty years' experience as a litigator. He specialises in disputes involving data, technology and intellectual property. Ian leads the firm’s Data, IP and Technology Disputes team. 

Email Ian +44 (0)20 3375 7471

David Fletcher

Partner

David is a partner in the firm’s corporate team and acts for private businesses, family businesses, entrepreneurs and investors.

Email David +44 (0)20 3375 7117