EDPB Publishes Opinion on Processing of Personal Data in the Context of AI Models

On December 17, 2024, the European Data Protection Board (“EDPB”) adopted Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models (the “Opinion”). The Opinion was produced following a request by the Irish Data Protection Commission (the “DPC”) pursuant to Article 64(2) of the EU General Data Protection Regulation.

The Opinion responds to specific questions put forward by the DPC related to (1) AI model anonymity; (2) reliance on the legitimate interest legal basis for processing personal data in the context of AI; (3) the reasonable expectations of individuals with respect to processing of their personal data in AI models; and (4) whether the unlawful processing of personal data in the development phase has consequences on the lawfulness of the subsequent processing or operation of the AI model.

Key takeaways from the Opinion include:

  • Model Anonymity: The EDPB found that whether an AI model that has been trained with personal data can be considered anonymous should be assessed on a case-by-case basis. In particular, the EDPB found that for an AI model to be considered anonymous, both (1) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to develop the model, and (2) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant, taking into account all the means reasonably likely to be used by the controller or another person. The Opinion details various methods that controllers can rely on to demonstrate the anonymity of an AI model.
  • Reliance on the Legitimate Interest Basis in an AI Context: The Opinion puts forward general considerations that data protection authorities should take into account when assessing whether legitimate interest is an appropriate legal basis for processing personal data for the development and deployment of AI models. In particular, the EDPB cites its earlier guidance on processing personal data based on legitimate interest, which outlines a three-step test to assist with assessing the appropriateness of the legitimate interest ground for processing. The Opinion provides, by way of example, that reliance on legitimate interest may be appropriate for a conversational agent that assists users and for the use of AI to improve cybersecurity.
  • Reasonable Expectations of Individuals: The Opinion outlines specific criteria that data protection authorities can consider to determine if an individual may reasonably expect certain processing of their personal data in AI models. The criteria include: (a) whether or not the personal data was publicly available; (b) the nature of the relationship between the data subject and the controller; (c) the nature of the service; (d) the context in which the personal data was collected; (e) the source from which the data was collected; (f) potential further uses of the AI model; and (g) whether the data subject is actually aware that his or her personal data is online.
  • Impact of Unlawful Processing on Subsequent Processing or Operation of an AI Model: The Opinion examines various scenarios in which the lawfulness of an AI model’s deployment could be impacted where the model was developed with unlawfully processed personal data. The scenarios the EDPB considered include where personal data is unlawfully processed to develop an AI model and (1) is retained in the model and subsequently processed by the same controller; (2) is retained in the model and processed by another controller in the context of deployment of the model; and (3) is anonymized by the controller before further processing of personal data takes place in the context of the model.

To read more about the EDPB’s findings, please see the Opinion and the related press release.