On January 22, 2024, a draft of the final text of the EU Artificial Intelligence Act (“AI Act”) was leaked to the public. The leaked text diverges substantially from the European Commission’s original 2021 proposal, incorporating elements of both the European Parliament’s and the Council’s positions.
Key Definitions
- “AI system” is defined as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This follows the definition proposed by the European Parliament, which is aligned with the Organization for Economic Co-operation and Development’s definition of AI.
- “General-purpose AI system” is separately defined under the AI Act as an “AI system which is based on a general-purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”
- An AI “provider” is defined as the entity that “develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge.” Providers will be subject to the majority of the AI Act’s requirements.
- A “deployer” is defined as an entity under whose authority an AI system is used. Deployers will be subject to a more limited set of requirements under the AI Act.
Classification of AI Systems
The AI Act will introduce a risk-based legal framework for AI in the European Union that classifies AI systems as follows:
- Prohibited AI Systems. AI systems that present unacceptable risks to the fundamental rights of individuals would be prohibited under the AI Act. Examples include AI systems used for social scoring based on social behavior or personal characteristics; AI systems designed to exploit vulnerabilities in a manner that materially distorts behavior and results in significant harm; and AI systems that engage in the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
- High-Risk AI Systems. AI systems that present a high risk to the rights and freedoms of individuals will be subject to the most stringent rules under the AI Act.
- Transparency Risks. AI systems that are not high-risk but pose transparency risks will be subject to specific transparency requirements under the AI Act. Examples include AI systems intended to interact directly with individuals in a human-like manner, or AI systems designed to generate content (e.g., to prepare news articles).
In addition to the above categories of AI systems, the AI Act will impose specific obligations on providers of general-purpose AI models, on which general-purpose AI systems like ChatGPT are based (e.g., an obligation to make publicly available a summary of the content used to train the model). Providers of general-purpose AI models that present a systemic risk will be subject to additional, more stringent requirements, such as obligations to ensure an adequate level of cybersecurity protection and to assess and mitigate possible systemic risks at the EU level.
High-Risk AI Systems
The AI Act divides high-risk AI systems into two subsets:
- Annex II of the AI Act (EU Harmonization Legislation): Annex II sets forth AI systems considered high-risk because they are covered by certain EU harmonization legislation. An AI system in this category will be considered high-risk when (1) it is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonization legislation; and (2) the product or system must undergo a third-party conformity assessment under that legislation. The Annex II list is fairly long, covering legislation on matters such as the safety of toys, machinery, radio equipment, civil aviation and motor vehicles; and
- Annex III of the AI Act: AI systems considered high-risk because they are classified as such by the AI Act itself.
AI systems under Annex III include (per the current wording of the Annex of the leaked text):
- Biometrics. Remote biometric identification systems (except for AI systems intended to be used for biometric verification that have as their sole purpose to confirm that a specific individual is the person he or she claims to be); AI systems intended to be used for biometric categorization, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics; and AI systems intended to be used for emotion recognition.
- Critical Infrastructure. AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating or electricity.
- Education and Vocational Training. AI systems intended to be used:
- to determine access or admission to, or to assign individuals to, educational and vocational training institutions at all levels;
- to evaluate learning outcomes, including when those outcomes are used to steer the learning process of individuals in educational and vocational training institutions at all levels;
- to assess the appropriate level of education that an individual will receive or will be able to access, in the context of or within education and vocational training institutions; or
- to detect and monitor prohibited behavior of students during tests, in the context of or within education and vocational training institutions.
- Employment, Workers Management and Access to Self-Employment. AI systems intended to be used:
- for the recruitment or selection of individuals, notably to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates;
- to make decisions affecting the terms of the work relationship, the promotion or termination of work-related contractual relationships, or the allocation of tasks based on individual behavior or personal traits or characteristics; or
- to monitor and evaluate the performance and behavior of individuals in such relationships.
- Access to and Enjoyment of Essential Private Services and Essential Public Services and Benefits. AI systems intended to be used:
- by public authorities or on behalf of public authorities to evaluate the eligibility of individuals for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
- to evaluate the creditworthiness of individuals or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;
- to evaluate and classify emergency calls, or to dispatch or establish priority in the dispatching of emergency first response services (including by police, firefighters and medical aid) and emergency healthcare patient triage systems; or
- for risk assessment and pricing in relation to individuals in the case of life and health insurance.
In addition to the systems listed above, certain AI systems that are used in the areas of law enforcement, migration, asylum and border management, and administration of justice and democratic processes are also considered high-risk.
Obligations Applicable to High-Risk AI Systems
The AI Act subjects providers of high-risk AI systems to the strictest requirements, including:
- establishing, implementing, documenting and maintaining a risk management system and quality management system;
- data governance requirements, including bias mitigation;
- drafting and maintaining technical documentation with respect to the high-risk system;
- record-keeping, logging and traceability obligations;
- designing the systems in a manner that allows effective human oversight;
- designing the systems in a manner that ensures an appropriate level of accuracy, robustness and cybersecurity;
- complying with registration obligations;
- ensuring that the AI system undergoes the relevant conformity assessment procedure;
- making the provider’s contact information available on the AI system, packaging or accompanying documentation;
- drawing up the EU declaration of conformity in a timely manner; and
- affixing the “CE marking of conformity” to the AI system.
Deployers of high-risk AI systems will also have a significant number of direct obligations under the AI Act, although these are more limited in scope than the providers’ obligations. The deployers’ obligations include:
- assigning the human oversight of the AI system to a person with the necessary competence, training, authority and support;
- if the deployer controls the input data, ensuring that the data is relevant and sufficiently representative in light of the purpose of the AI system;
- informing impacted individuals when the deployer plans to use a high-risk AI system to make decisions, or assist in making decisions, relating to such individuals. Deployers of high-risk AI systems that are employers must inform workers’ representatives and the impacted workers that they will be subject to a high-risk AI system;
- using information provided by providers to carry out a Data Protection Impact Assessment (if required);
- conducting a fundamental rights impact assessment, which is required only for certain deployers and certain high-risk AI systems. This requirement notably applies to deployers using AI systems to evaluate the creditworthiness of individuals or establish their credit score, or for risk assessment and pricing in relation to individuals in the case of life and health insurance; and
- when a decision generated by the AI system produces legal or similarly significant effects, providing a clear and meaningful explanation of the role of the AI system in the deployer’s decision-making procedure and of the main elements of the decision.
The AI Act sets forth certain cases in which a deployer will be considered a provider and become subject to provider obligations, e.g., where a deployer puts its trademark on a high-risk AI system already placed on the market or put into service, without contractual arrangements allocating the obligations otherwise. Note that distributors and importers of high-risk AI systems are also subject to obligations of their own.
Penalties
Non-compliance with the AI Act may lead to significant fines: up to €35 million or 7% of annual global turnover (whichever is higher) for violations involving prohibited AI systems; up to €15 million or 3% for most other violations of the AI Act; and up to €7.5 million or 1.5% for providing incorrect information to regulators.
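To make the tiered structure concrete, the following is a minimal sketch (in Python, for illustration only; the company turnover figure is hypothetical) of how the fixed cap and the turnover-based cap described above combine, assuming the higher of the two applies:

```python
# Illustrative sketch of the AI Act's tiered fine caps described above.
# Figures are in euros; the turnover value used below is hypothetical.

# (fixed_cap_eur, share_of_annual_global_turnover) per violation tier
TIERS = {
    "prohibited_ai_systems": (35_000_000, 0.07),
    "other_violations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, annual_global_turnover_eur: float) -> float:
    """Return the maximum fine for a tier: the higher of the fixed cap
    and the turnover-based cap."""
    fixed_cap, share = TIERS[tier]
    return max(fixed_cap, share * annual_global_turnover_eur)

# Example: a company with EUR 2 billion in annual global turnover that
# violates the rules on prohibited AI systems (7% > the EUR 35M floor).
print(max_fine("prohibited_ai_systems", 2_000_000_000))  # 140000000.0
```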
Application
The AI Act was formally approved by the Council’s Committee of Permanent Representatives on February 2, 2024, and is expected to be approved by the European Parliament within the next month. The AI Act will apply to regulated entities in stages, following a transition period whose length varies depending on the type of AI system:
- six months for prohibited AI systems;
- 12 months for specific obligations regarding general-purpose AI systems;
- 24 months for most other obligations, including the rules for high-risk AI systems included in Annex III; and
- 36 months for obligations related to high-risk AI systems included in Annex II (EU harmonization legislation).