Understanding the EU AI Act
On July 12, 2024, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence (the “AI Act”) was published in the Official Journal of the EU. The AI Act will enter into force on August 1, 2024. This article provides a brief overview of the scope and key requirements of the AI Act.
What Are the Key Definitions Under the AI Act?
Article 3 of the AI Act contains 68 definitions. The following four definitions are key to understanding whether an organization is subject to the AI Act and, if so, which obligations apply to it:
- AI System: a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments;
- General-Purpose AI Model: an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market;
- Provider: a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;
- Deployer: a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
What Is the Scope of the AI Act?
For organizations in the process of assessing whether they are subject to the AI Act, it is important to carefully evaluate the scope of application of the AI Act.
A) Territorial Scope
Territorially, the AI Act will be applicable to:
- deployers that have their place of establishment, or that are located, within the EU;
- providers placing AI systems on the market or putting them into service, or placing general-purpose AI models on the market in the EU, irrespective of whether those providers are established or located within the EU or in a third country;
- importers and distributors importing or distributing AI systems on the EU market;
- product manufacturers placing on the market or putting into service AI systems under their own name or trademark in the EU;
- providers and deployers of AI systems where the output of the AI system is used in the EU, regardless of where they are established or located; and
- persons located in the EU who are affected by the use of an AI system.
B) Material Scope
The AI Act introduces a risk-based legal framework that imposes requirements based on the level and type of risk associated with the use of the AI system concerned. The AI Act establishes the following types of AI systems: (i) prohibited AI systems, (ii) high-risk AI systems, (iii) AI systems subject to transparency requirements, and (iv) general-purpose AI models, each of which is described below. These types are not mutually exclusive; for example, a high-risk AI system may also be subject to transparency requirements.
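Because the categories can overlap, a system’s classification is better modeled as a set of flags than as a single label. The following minimal Python sketch is purely illustrative (the names do not come from the Act itself) and shows a system carrying two classifications at once:

```python
# Illustrative sketch only: the AI Act's categories modeled as
# non-exclusive flags, since one system may fall into several at once.
from enum import Flag, auto

class AIActCategory(Flag):
    PROHIBITED = auto()
    HIGH_RISK = auto()
    TRANSPARENCY = auto()           # specific transparency requirements
    GENERAL_PURPOSE_MODEL = auto()

# e.g., an emotion recognition system can be both high-risk (Annex III)
# and subject to transparency requirements.
classification = AIActCategory.HIGH_RISK | AIActCategory.TRANSPARENCY

if AIActCategory.TRANSPARENCY in classification:
    print("Transparency obligations apply on top of high-risk obligations.")
```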
The obligations of the AI Act are largely focused on providers and deployers of AI systems. In certain circumstances, the rules under the AI Act also apply to other parties, such as importers, distributors and product manufacturers, but those obligations are more limited.
It is important to note that a deployer, distributor, importer or other third party may have to assume the role of the provider in certain cases, including when: (i) it puts its name or trademark on a high-risk AI system already placed on the market or put into service, without contractual arrangements stipulating that the obligations are allocated otherwise; (ii) it makes a substantial modification to a high-risk AI system that has already been placed on the market or put into service, and the system remains high-risk notwithstanding the modification; or (iii) it modifies the intended purpose of an AI system that had not originally been classified as high-risk and has already been placed on the market or put into service, in such a manner that the AI system becomes high-risk. A simplified sketch of this decision logic follows.
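The three circumstances above amount to a decision procedure. The following Python sketch is a purely illustrative, simplified rendering of that logic; every name and parameter is hypothetical, and an actual role assessment requires a full legal analysis.

```python
# Illustrative sketch only: a simplified helper reflecting the three
# circumstances under which a deployer, distributor, importer or other
# third party is treated as the provider of a high-risk AI system.
# All names are hypothetical; this is not legal advice.

def assumes_provider_role(
    puts_own_name_or_trademark: bool,
    contract_reallocates_obligations: bool,
    makes_substantial_modification: bool,
    remains_high_risk_after_modification: bool,
    changes_intended_purpose: bool,
    becomes_high_risk_after_purpose_change: bool,
) -> bool:
    # (i) rebranding a high-risk system without a contract allocating
    #     the provider obligations otherwise
    if puts_own_name_or_trademark and not contract_reallocates_obligations:
        return True
    # (ii) substantially modifying a high-risk system that stays high-risk
    if makes_substantial_modification and remains_high_risk_after_modification:
        return True
    # (iii) repurposing a non-high-risk system so that it becomes high-risk
    if changes_intended_purpose and becomes_high_risk_after_purpose_change:
        return True
    return False
```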
C) Types of AI Systems
1. Prohibited AI Systems
Prohibited AI systems are AI systems, or uses of AI, that have been deemed unacceptable from a fundamental rights perspective and are therefore banned. These include:
- AI systems that use subliminal, purposefully manipulative or deceptive techniques;
- AI systems that exploit a person’s or a specific group of persons’ vulnerabilities;
- AI systems used for social scoring;
- AI systems used for predictive policing;
- AI systems used to build facial recognition databases;
- AI systems used to infer the emotions of a natural person in the workplace or in education institutions;
- AI systems used for biometric categorization based on sensitive data; and
- AI systems used for real-time biometric identification in public by law enforcement.
2. High-Risk AI Systems
High-risk AI systems are deemed to present a potentially high risk to the rights and freedoms of individuals and are subject to stringent obligations. The AI Act differentiates between two buckets of high-risk AI systems.
The first bucket comprises AI systems that are considered high-risk under EU harmonization legislation (see Article 6(1) and Annex I of the AI Act). An AI system in this category will be considered high-risk when: (i) it is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the EU harmonization legislation identified in Annex I of the AI Act and (ii) the product or system has to undergo a third-party conformity assessment under that legislation. This will likely cover many AI systems used in, for example, machinery, toys, lifts and safety components for lifts, medical devices, civil aviation-related products and various types of vehicles.
The second bucket comprises AI systems that are considered high-risk because they are used for specific tasks directly identified in Annex III of the AI Act. These include:
- Biometrics (except for AI systems intended to be used for biometric verification where the sole purpose is to confirm that a specific individual is the person they claim to be). Examples include:
- remote biometric identification systems;
- AI systems intended to be used for biometric categorization, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics; and
- emotion recognition AI systems.
- Critical Infrastructure. These include AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating or electricity.
- Education and Vocational Training. These are AI systems intended to be used in a manner that determines an individual’s access to education or the outcome of that education, such as AI systems used to evaluate learning outcomes.
- Employment, Workers’ Management and Access to Self-Employment. These include AI systems intended to be used:
- for recruitment or selection of individuals, notably to place targeted job advertisements, to analyze and filter job applications and to evaluate candidates;
- to make decisions affecting terms of the work relationship, promotion and termination of work-related contractual relationships;
- to allocate tasks based on individual behavior or personal traits or characteristics; or
- to monitor and evaluate performance and behavior of individuals in such relationships.
- Access to and Enjoyment of Essential Private Services and Essential Public Services and Benefits. These include AI systems intended to be used:
- to evaluate the creditworthiness of individuals or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;
- to evaluate and classify emergency calls or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems; or
- for risk assessment and pricing in relation to individuals in the case of life and health insurance.
In addition, certain AI systems that are used in the areas of law enforcement, migration, asylum and border management, and administration of justice and democratic processes, are also considered high-risk.
2.1 Obligations of Providers of High-Risk AI Systems
The AI Act allocates obligations based on an organization’s role in the development or deployment of an AI system (e.g., provider or deployer). Within this framework, the AI Act subjects providers of high-risk AI systems to the most numerous and strictest requirements under the new law, including:
- establishing, implementing, documenting and maintaining a risk management system and quality management system;
- complying with data governance requirements, including bias mitigation;
- drafting and maintaining technical documentation with respect to the high-risk AI system;
- record-keeping, logging and traceability obligations;
- designing the AI system in a manner that allows effective human oversight;
- designing the AI system in a manner that ensures an appropriate level of accuracy, robustness and cybersecurity;
- complying with registration obligations;
- ensuring that the AI system undergoes the relevant conformity assessment procedure;
- making the provider’s contact information available on the AI system, packaging or accompanying documentation;
- drawing up the EU declaration of conformity in a timely manner; and
- affixing the “CE marking” to the AI system.
2.2 Obligations of Deployers of High-Risk AI Systems
Deployers of high-risk AI systems will also have a significant number of direct obligations under the AI Act, although these are more limited in scope than the providers’ obligations. The deployers’ obligations include:
- assigning human oversight of the AI system to a person with the necessary competence, training, authority and support;
- if the deployer controls input data, ensuring that the data is relevant and sufficiently representative in light of the purpose of the AI system;
- informing impacted individuals when the deployer plans to use a high-risk AI system to make decisions or assist in making decisions relating to such individuals;
- if the deployer is an employer and the AI system will impact workers, informing workers’ representatives and the impacted workers that they will be subject to a high-risk AI system;
- for certain deployers and certain high-risk AI systems, conducting a fundamental rights impact assessment; this applies, among others, to deployers using AI systems to evaluate the creditworthiness of individuals or establish their credit score, and to deployers using AI systems for risk assessment and pricing in relation to individuals in the case of life and health insurance; and
- when a decision generated by the AI system results in legal effects or similarly significantly affects an individual, providing a clear and meaningful explanation of the role of the AI system in the deployer’s decision-making procedure and the main elements of the decision.
3. Systems with Transparency Requirements
AI systems with transparency requirements are those that, by their nature, pose specific transparency risks or may mislead end users. The AI Act requires providers and deployers to comply with specific transparency rules designed to mitigate such risks. This category includes:
- AI systems intended to interact directly with individuals;
- AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content;
- emotion recognition systems;
- biometric categorization systems;
- AI systems that generate or manipulate image, audio or video content constituting a deep fake; and
- AI systems that generate or manipulate text which is published with the purpose of informing the public on matters of public interest.
The specific obligations on providers and deployers vary according to the AI system in question, but may include disclosure, labeling and transparency obligations vis-à-vis the user.
4. General-Purpose AI Models
Due to their flexibility, versatility and capability to form the basis of multiple AI systems, general-purpose AI models are regulated as a separate category from AI systems. The AI Act places specific obligations on providers of general-purpose AI models including:
- preparing and keeping up to date the technical documentation of the model, including its training and testing process and the results of its evaluation, and providing it to the AI Office or national authorities on request;
- preparing, keeping up to date and making available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems;
- implementing a policy to comply with EU law on copyright and related rights; and
- preparing and making publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model.
Providers of general-purpose AI models that are deemed to present systemic risk will be subject to additional, more stringent requirements, such as an obligation to ensure an adequate level of cybersecurity protection and to assess and mitigate possible systemic risks at the EU level. At this stage, only the most advanced general-purpose AI models are likely to be considered to present systemic risk based on their capabilities.
What Are the Penalties?
Market Surveillance Authorities under the AI Act will have the power to impose significant penalties for infringements, in particular monetary fines. The maximum fine varies depending on the type of infringement:
- €35 million or 7% of total worldwide annual turnover for the preceding year, whichever is higher, for non-compliance with the rules about prohibited AI systems;
- €15 million or 3% of total worldwide annual turnover for the preceding year, whichever is higher, for non-compliance with most obligations under the AI Act; and
- €7.5 million or 1% of total worldwide annual turnover for the preceding year, whichever is higher, for the supply of incorrect, incomplete or misleading information.
Separately from the fines listed above, the European Commission has the power to impose fines on providers of general-purpose AI models of up to €15 million or 3% of total worldwide annual turnover for the preceding year, whichever is higher.
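To make the “whichever is higher” mechanics concrete, the following minimal Python sketch (illustrative only; the amounts are the statutory ceilings quoted above) computes the applicable maximum fine:

```python
# Illustrative sketch only: the AI Act's fine ceilings are the higher of a
# fixed amount and a percentage of the preceding year's worldwide turnover.

def max_fine(fixed_ceiling_eur: int, turnover_pct: float,
             worldwide_annual_turnover_eur: int) -> float:
    """Return the applicable fine ceiling under the 'whichever is higher' rule."""
    return max(fixed_ceiling_eur, turnover_pct * worldwide_annual_turnover_eur)

# Example: prohibited-AI infringement by a company with €1 billion turnover.
ceiling = max_fine(35_000_000, 0.07, 1_000_000_000)
print(f"Maximum fine: €{ceiling:,.0f}")  # €70,000,000, since 7% exceeds €35M
```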
Natural or legal persons can also submit a complaint to a Market Surveillance Authority if they consider that an infringement of the AI Act has taken place.
When Will It Apply?
The AI Act will enter into force 20 days after its publication, i.e., on August 1, 2024. However, its provisions will become applicable only after a transition period, the length of which varies depending on the type of AI system:
- obligations applicable to prohibited AI systems and the obligations related to AI literacy will become applicable on February 2, 2025;
- specific obligations applicable to general-purpose AI models will become applicable on August 2, 2025;
- most obligations under the AI Act, including the rules applicable to high-risk AI systems under Annex III of the AI Act and to systems subject to specific transparency requirements, will become applicable on August 2, 2026; and
- obligations related to high-risk systems included in Annex I of the AI Act will become applicable on August 2, 2027.
It is worth noting that certain AI systems and models already on the market may be exempt or have longer compliance deadlines.