On February 19, 2020, the Information Commissioner's Office (“ICO”) launched a consultation on its draft AI auditing framework guidance for organizations (“Guidance”). The Guidance is open for consultation until April 1, 2020, and responses may be submitted via the ICO’s online survey.
This is the first guidance published by the ICO that focuses broadly on managing the range of risks arising from AI systems, as well as on governance and accountability measures. The Guidance explains how data protection law applies to artificial intelligence (“AI”) and recommends organizational and technical measures to mitigate the risks AI poses to individuals. It also provides a methodology for auditing AI applications and ensuring they process personal data fairly.
The ICO notes that the Guidance aims to inform organizations about what it thinks constitutes best practice for data protection-compliant AI, and that the Guidance has two distinct outputs:
- auditing tools and procedures that the ICO’s investigation and assurance teams will use when assessing the compliance of organizations using AI; and
- indicative risk and control tables at the end of each section to help organizations audit the compliance of their own AI systems.
The Guidance is targeted at both technology specialists developing AI systems and risk specialists whose organizations use AI systems. Its purpose is to assist such specialists in assessing the risks to individuals’ rights and freedoms that AI can pose, and the appropriate measures an organization can implement to mitigate them.
The ICO is seeking feedback from those with a compliance focus (e.g., data protection officers, general counsel and risk managers), as well as technology specialists (e.g., machine learning experts, data scientists, software developers and engineers, and cybersecurity and IT risk managers).
The Guidance is divided into four parts that correspond to different data protection principles and rights:
- Part one addresses accountability and governance in AI, including data protection impact assessments (“DPIAs”) and controller/processor responsibilities;
- Part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance and mitigating potential discrimination to ensure fair processing;
- Part three addresses security and data minimization in AI systems; and
- Part four covers how an organization can facilitate the exercise of individual rights in its AI systems, including rights related to solely automated decision-making.
Further information on what these sections of the Guidance cover is provided below.
What Are the Accountability and Governance Implications of AI?
- The Guidance states that organizations are legally required to complete a DPIA if they use AI systems that process personal data. The Guidance outlines what information the DPIA should cover, including, for example, an explanation of any relevant variation or margins of error in the performance of the system which may affect the fairness of the personal data processing.
- The ICO notes that it can be difficult to describe the processing activity of a complex AI system. As such, it may be appropriate for an organization to maintain two versions of an assessment, with one version presenting a thorough technical description for specialist audiences, and the other containing a high-level description of the processing to explain how the personal data inputs relate to the outputs affecting individuals.
- The Guidance outlines certain issues the DPIA should address, including how a DPIA should (1) assess necessity and proportionality of an AI system; (2) identify and assess risks; and (3) identify mitigating measures (e.g., data minimization or providing opportunities for individuals to opt out of the processing).
- The ICO recognizes that a ‘zero tolerance’ approach to the risks AI poses to rights and freedoms is unrealistic; AI systems will inevitably involve trade-offs between privacy and other competing rights and interests. Organizations should instead ensure that these risks are identified, managed and mitigated.
- The Guidance provides a short overview of some of the most notable trade-offs organizations are likely to face when designing or procuring AI systems, including privacy versus statistical accuracy, statistical accuracy versus discrimination, explainability versus statistical accuracy, and explainability versus the exposure of personal data and commercial security. The Guidance outlines how these trade-offs can be managed and provides worked examples to assist organizations in assessing them.
What Do Organizations Need to Do to Ensure Lawfulness, Fairness, and Transparency in AI Systems?
- The Guidance notes that, when determining purpose and lawful basis, organizations should separate the development or training of AI systems from their deployment, because these are distinct purposes with different circumstances and risks. Accordingly, an organization may rely on different lawful bases for AI development and for AI deployment. The Guidance outlines AI-related considerations (with examples) for each of the lawful bases under the General Data Protection Regulation (“GDPR”), to assist in determining whether an organization can rely on, for example, consent or performance of a contract.
- The Guidance explains the controls an organization can implement to ensure that its AI systems are sufficiently statistically accurate for the personal data processing they undertake to comply with the fairness principle. For example, to avoid AI outputs being misinterpreted as factual, organizations should ensure their records make clear that those outputs are statistically informed inferences rather than facts.
- The Guidance outlines technical approaches to mitigating discrimination risk in machine learning models. Where training data is imbalanced, it may be possible to rebalance it by adding or removing data about under- or overrepresented subsets of the population (e.g., adding more data points on loan applications from women), as illustrated in the sketch below. Alternatively, an organization could train separate models (e.g., one for men and another for women) designed to perform appropriately for each sub-group (although creating different models for different protected classes could itself breach non-discrimination law). Where the training data reflects past discrimination, an organization could modify the data, change the learning process or modify the model after training.
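As a purely illustrative aid, the following is a minimal sketch of the rebalancing approach mentioned above: oversampling an underrepresented subgroup before training. The column names, example data and use of pandas are assumptions made for illustration and are not taken from the Guidance.

```python
import pandas as pd

# Hypothetical loan-application training set in which women are underrepresented.
df = pd.DataFrame({
    "sex":      ["M", "M", "M", "M", "M", "M", "F", "F"],
    "income":   [40_000, 55_000, 62_000, 48_000, 70_000, 52_000, 45_000, 58_000],
    "approved": [1, 1, 0, 1, 0, 1, 1, 0],
})

# Bring every subgroup up to the size of the largest one by sampling with replacement.
target = df["sex"].value_counts().max()
balanced = pd.concat(
    [group.sample(n=target, replace=True, random_state=0) for _, group in df.groupby("sex")],
    ignore_index=True,
)

print(balanced["sex"].value_counts())  # each subgroup now contributes the same number of rows
```

Downsampling the overrepresented group, collecting additional real data, or reweighting examples during training are variations of the same idea, consistent with the Guidance’s reference to adding or removing data.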
How Should Organizations Assess Security and Data Minimization in AI?
- The Guidance recognizes that there is no “one-size-fits-all” approach to security. Appropriate security measures will depend on the level and type of risks that arise from specific processing activities.
- Hypothetical scenarios are provided to outline some of the known security risks and challenges that AI can exacerbate. These case studies include losing track of training data, and security risks introduced by externally maintained software used to build AI systems.
- The Guidance notes that certain types of privacy attacks can reveal the personal data of the individuals whose data was used to train an AI system. Specific attention is given to two such attacks, ‘model inversion’ and ‘membership inference,’ with the Guidance explaining what these attacks are and how they work (a simple membership inference sketch is provided after this list).
- The Guidance outlines privacy-enhancing techniques that can minimize the personal data processed at the training phase, including perturbation (adding ‘noise’) and federated learning; a minimal perturbation sketch is also provided after this list. It further outlines techniques that can minimize the personal data processed at the inference stage, including converting personal data into less ‘human readable’ formats, making inferences locally, and privacy-preserving query approaches.
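To illustrate the membership inference attack mentioned above, the following is a minimal sketch of a confidence-threshold test, one simple way such attacks are commonly demonstrated in the research literature. The model, data and threshold are hypothetical; the Guidance does not prescribe this code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical model trained on synthetic "personal" data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def confidence_on_true_label(model, X, y):
    """Probability the model assigns to each record's actual label."""
    return model.predict_proba(X)[np.arange(len(y)), y]

# Guess that a record was a training member if the model is very confident about it.
threshold = 0.9  # hypothetical cut-off
flagged_train = confidence_on_true_label(model, X_train, y_train) > threshold
flagged_unseen = confidence_on_true_label(model, X_test, y_test) > threshold

# An overfitted model is far more confident on the records it was trained on,
# which is the signal a membership inference attacker exploits.
print("flagged as members (training records):", flagged_train.mean())
print("flagged as members (unseen records):  ", flagged_unseen.mean())
```

A large gap between the two rates suggests the model is leaking information about who was in its training data.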
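Similarly, the ‘perturbation’ technique referred to above can be sketched as adding calibrated noise to personal data before it is used for training. The noise scales below are arbitrary assumptions for illustration; a real deployment would calibrate them to the sensitivity of the data and, typically, to a formal privacy framework such as differential privacy.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical numeric features derived from personal data (e.g., age, income).
features = np.array([
    [34.0, 41_000.0],
    [52.0, 58_500.0],
    [27.0, 36_200.0],
])

# Assumed noise scale per feature: more noise means more privacy but less accuracy.
scale = np.array([1.0, 500.0])
noisy_features = features + rng.laplace(loc=0.0, scale=scale, size=features.shape)

# The model is then trained on the perturbed values rather than the exact originals.
print(noisy_features)
```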
How Do Organizations Enable Individual Rights in AI Systems?
- The Guidance provides an overview and examples of how data subject rights may apply with respect to personal data processed in AI systems.
- The Guidance recognizes that rights relating to automated decisions can be a particular issue for AI systems; those based on machine learning, for example, may be more complex and present greater challenges for meaningful human review. Machine learning systems make predictions or classifications about people based on data patterns, and even when they are highly statistically accurate, they will occasionally reach the wrong decision in individual cases. Such errors may not be easy for a human reviewer to identify, understand or fix. While not every challenge from an individual will result in the automated decision being overturned, organizations should expect that many could be. The Guidance notes two particular reasons why this may be the case for machine learning systems: (1) the individual is an ‘outlier,’ or (2) assumptions in the AI design can be challenged.
- The Guidance outlines certain steps that organizations can take to fulfill rights related to automated decision-making, including designing and delivering appropriate training and support for human reviewers.