Landmark opinion sets out EU law on personal data and AI models
The European Data Protection Board (EDPB) has adopted a landmark opinion on the use of personal data for the development and deployment of AI models.
The 35-page opinion, published yesterday, was requested by Ireland’s Data Protection Commission (DPC) in September with a view to seeking Europe-wide regulatory harmonisation.
The DPC’s request focused on four key issues:
- under what circumstances an AI model may be considered ‘anonymous’;
- how controllers may demonstrate the appropriateness of legitimate interest as a legal basis for processing personal data to create, update and/or develop an AI model;
- how controllers may demonstrate the appropriateness of legitimate interest as a legal basis for processing personal data to deploy an AI model; and
- what the consequences of unlawful processing of personal data in the development phase of an AI model are for the subsequent processing or operation of that model.
The EDPB gathered input from a stakeholders’ event and an exchange with the EU AI Office.
Regarding anonymity, the opinion says that whether an AI model is anonymous should be assessed on a case-by-case basis by data protection authorities (DPAs).
For a model to be anonymous, it says, it should be very unlikely both that individuals whose data was used to create the model can be directly or indirectly identified, and that such personal data can be extracted from the model through queries. The opinion provides a non-prescriptive and non-exhaustive list of methods for demonstrating anonymity.
With respect to legitimate interest, the opinion sets out general considerations that DPAs should take into account when assessing whether legitimate interest is an appropriate legal basis for processing personal data for the development and deployment of AI models.
A three-step test helps assess the use of legitimate interest as a legal basis. As examples, the EDPB cites a conversational agent that assists users and the use of AI to improve cybersecurity. These services can be beneficial for individuals and can rely on legitimate interest as a legal basis, but only if the processing is shown to be strictly necessary and the balancing of rights is respected.
The opinion also includes a number of criteria to help DPAs assess if individuals may reasonably expect certain uses of their personal data.
These criteria include whether or not the personal data was publicly available; the nature of the relationship between the individual and the controller; the nature of the service; the context in which the personal data was collected; the source from which the data was collected; the potential further uses of the model; and whether individuals are actually aware that their personal data is online.
If the balancing test shows that the processing should not take place because of the negative impact on individuals, mitigating measures may limit this negative impact. The opinion includes a non-exhaustive list of examples of such mitigating measures, which can be technical in nature, or make it easier for individuals to exercise their rights or increase transparency.
Finally, the opinion says that where an AI model was developed using unlawfully processed personal data, this could affect the lawfulness of its deployment, unless the model has been duly anonymised.
Given the scope of the DPC’s request, the vast diversity of AI models and their rapid evolution, the opinion aims to provide guidance on various elements that can be used when conducting a case-by-case analysis.
In addition, the EDPB says it is currently developing guidelines covering more specific questions, such as web scraping.
Anu Talus, chair of the EDPB, said: “AI technologies may bring many opportunities and benefits to different industries and areas of life. We need to ensure these innovations are done ethically, safely, and in a way that benefits everyone.
“The EDPB wants to support responsible AI innovation by ensuring personal data are protected and in full respect of the General Data Protection Regulation (GDPR).”
Dr Des Hogan, chair of the DPC, said: “As the lead supervisory authority of many of the world’s largest tech companies, we have a deep awareness and understanding of the complexities associated with regulating the processing of personal data in an AI context.
“Equally, we recognise that the core questions concerning compliance with the GDPR in an AI context are EU-wide industry challenges and as such require a harmonised approach at EU level.
“In having made this request for an opinion, the DPC triggered a discussion, in which we participated, that led to this agreement at EDPB level, on some of the core issues that arise in the context of processing personal data for the development and deployment of AI models, thereby bringing some much needed clarity to this complex area.”
DPC commissioner Dale Sunderland added: “This opinion will enable proactive, effective and consistent regulation across the EU/EEA, giving greater clarity and guidance to industry, while also promoting responsible innovation.
“It will also support the DPC’s engagement with companies developing new AI models before they launch on the EU market as well as the handling of the many AI related complaints that have been submitted to the DPC.”