On June 14, 2023, the European Parliament (“EP”) approved its negotiating mandate (the “EP’s Position”) regarding the EU’s Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (the “AI Act”). The vote in the EP means that EU institutions may now begin trilogue negotiations (the Council approved its negotiating mandate in December 2022). The final version of the AI Act is expected before the end of 2023.
The EP proposes a number of significant amendments to the original Commission text, which dates back to 2021. Below we outline some of the key changes introduced by the EP:
Amendments to Key Definitions
The EP introduced a number of meaningful changes to the definitions used in the AI Act (Article 3). Under the EP’s Position:
- The definition of “AI system” is aligned with the OECD’s definition of AI system. An AI system is now defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
- Users of AI systems are now called “deployers.”
- The EP’s text further contains a number of new definitions, including:
- “Affected persons,” which are “any natural person or group of persons who are subject to or otherwise affected by an AI system.”
- “Foundation model,” which means an “AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.” Providers of foundation models are now subject to a number of specific obligations under the AI Act.
- A “general purpose AI system,” which is an “AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.”
General Principles Applicable to AI Systems
The EP’s Position establishes a set of six high-level core principles that are applicable to all AI systems regulated by the AI Act. These principles are: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; and (6) social and environmental well-being.
Classification of AI Systems
The EP proposes significant amendments to the list of prohibited AI practices/systems under the AI Act. New prohibitions include: (1) biometric categorization systems that categorize natural persons according to sensitive or protected attributes or characteristics, or based on the inference of those attributes or characteristics; and (2) AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the Internet or CCTV footage.
Furthermore, the EP has expanded the list of AI systems and applications that should be considered high risk. The list of high-risk systems in the EP’s Position, for example, includes certain AI systems used by large social media platforms to recommend content to users.
Under the rules proposed by the EP, providers of certain AI systems may rebut the presumption that the system should be considered a high-risk AI system. This would require submitting a notification to a supervisory authority or to the AI Office (the latter if the AI system is intended to be used in more than one Member State), which must review the notification and reply within three months, clarifying whether it deems the AI system to be high risk.
The EP’s Position further imposes specific requirements on generative AI systems, such as obligations to disclose that content was generated by AI, to design the AI system in a way that prevents it from generating illegal content, and to publish summaries of copyrighted data used for training.
Adjustments to the Obligations in the Context of High-Risk AI Systems
The EP also introduces significant changes to the obligations on providers of high-risk AI systems by, for example, requiring them to:
- Ensure that natural persons responsible for human oversight of high-risk AI systems are specifically made aware of the risk of automation or confirmation bias.
- Provide specifications for input data, or any other relevant information in terms of the datasets used, including their limitations and assumptions, taking into account the intended purpose and the reasonably foreseeable misuse of the AI system.
- Ensure that the high-risk AI system complies with accessibility requirements.
In addition, the obligations for deployers of high-risk AI systems have been significantly broadened and now include:
- For certain AI systems, informing natural persons that they are subject to the use of high-risk AI systems and that they have the right to obtain an explanation about the output of the system.
- Prior to putting into service or using a high-risk AI system in the workplace, consulting workers’ representatives and informing employees that they will be subject to the system.
- Carrying out a Fundamental Rights Impact Assessment (see below).
Obligation to Carry Out a Fundamental Rights Impact Assessment
As mentioned above, prior to using a high-risk AI system, certain deployers will be required to conduct a Fundamental Rights Impact Assessment. This assessment should include, at a minimum, the following elements: (1) a clear outline of the intended purpose for which the system will be used; (2) a clear outline of the intended geographic and temporal scope of the system’s use; (3) categories of natural persons and groups likely to be affected by the use of the system; (4) verification that the use of the system is compliant with relevant Union and national laws on fundamental rights; (5) the reasonably foreseeable impact on fundamental rights of using the high-risk AI system; (6) specific risks of harm likely to impact marginalized persons or vulnerable groups; (7) the reasonably foreseeable adverse impact of the use of the system on the environment; (8) a detailed plan as to how the harms and the negative impact on fundamental rights identified will be mitigated; and (9) the governance system the deployer will put in place, including human oversight, complaint-handling and redress.
In the process of preparing the Fundamental Rights Impact Assessment, deployers may be required to engage with supervisory authorities and external stakeholders, such as consumer protection agencies and data protection agencies.
Exclusion of Certain Unfair Contractual Terms in AI Contracts with SMEs or Startups
The EP’s Position introduces a new provision restricting the ability of a contracting party to unilaterally impose certain unfair contractual terms related to the supply of tools, services, components or processes that are used or integrated in a high-risk AI system, or the remedies for the breach or the termination of obligations related to these systems in contracts with SMEs or startups. Examples of prohibited provisions include contractual terms that: (i) exclude or limit the liability of the party that unilaterally imposed the term for intentional acts or gross negligence; (ii) exclude the remedies available to the party upon whom the term has been unilaterally imposed in the case of non-performance of contractual obligations or the liability of the party that unilaterally imposed the term in the case of a breach of those obligations; and (iii) give the party that unilaterally imposed the term the exclusive right to determine whether the technical documentation and information supplied are in conformity with the contract or to interpret any term of the contract.
Measures to Support Innovation
Title V of the AI Act, which contains measures in support of innovation (including AI regulatory sandboxes), is expanded and clarified by the EP’s Position. One of the new provisions requires EU Member States to promote research and development of AI solutions that support socially and environmentally beneficial outcomes, such as solutions that (i) increase accessibility for persons with disabilities; (ii) tackle socio-economic inequalities; and (iii) help meet sustainability and environmental targets.
Fines
The EP’s Position substantially amends the fines that can be imposed under the AI Act. The EP proposes that:
- Non-compliance with the rules on prohibited AI practices shall be subject to administrative fines of up to 40,000,000 EUR or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- Non-compliance with the rules under Article 10 (data and data governance) and Article 13 (transparency and provision of information to users) shall be subject to administrative fines of up to 20,000,000 EUR or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- Non-compliance with other requirements and obligations under the AI Act shall be subject to administrative fines of up to 10,000,000 EUR or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 5,000,000 EUR or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
It is also important to note that the EP’s Position proposes that the penalties (including fines) under the AI Act, as well as the associated litigation costs and indemnification claims, may not be subject to contractual clauses or other forms of burden-sharing agreements between providers and distributors, importers, deployers, or any other third parties.
Reinforced Remedies
A new chapter was introduced in the AI Act concerning remedies available to affected persons when faced with potential breaches of the rules under the AI Act. Particularly relevant is the introduction of a GDPR-like right to lodge a complaint with a supervisory authority. Read the EP’s Position.