Privacy Matters – DLA Piper's Global Privacy and Data Protection Resource
https://privacymatters.dlapiper.com/category/edbp-european-data-protection-board/

EU: EDPB Opinion on AI Provides Important Guidance though Many Questions Remain
https://privacymatters.dlapiper.com/2025/01/eu-edpb-opinion-on-ai-provides-important-guidance-though-many-questions-remain/ (Tue, 14 Jan 2025)

A much-anticipated Opinion from the European Data Protection Board (EDPB) on AI models and data protection has not resulted in the clear or definitive guidance that businesses operating in the EU had hoped for. The Opinion emphasises the need for case-by-case assessments to determine GDPR applicability, highlighting the importance of accountability and record-keeping, while also flagging ‘legitimate interests’ as an appropriate legal basis under specific conditions. In rejecting the proposed Hamburg thesis, the EDPB has stated that AI models trained on personal data should be considered anonymous only if personal data cannot be extracted or regurgitated.

Introduction

On 17 December 2024, the EDPB published a much-anticipated Opinion on AI models and data protection.  The Opinion includes the EDPB’s view on the following key questions: does the development and use of an AI model involve the processing of personal data; and if so, what is the correct legal basis for that processing?

As is sometimes the case with EDPB Opinions, which necessarily represent the consensus view of the supervisory authorities of 27 different Member States, the Opinion does not provide many clear or definitive answers.  Instead, the EDPB offers indicative guidance and criteria, calling for case-by-case assessments of AI models to understand whether, and how, they are impacted by the GDPR.  In this context, the Opinion repeatedly highlights the importance of accountability and record-keeping by businesses developing or using AI, so that the applicability of data protection laws, and the business’ compliance with those laws, can be properly assessed. 

Whilst the equivocation of the Opinion might be viewed as unhelpful by European businesses looking for regulatory certainty, it is also a reflection of the complexities inherent in this intersection of law and technology.

In summary, the answers given by the EDPB to the four questions in the Opinion are as follows:

  1. Can an AI model, which has been trained using personal data, be considered anonymous?  Yes, but only in some cases.  It must be impossible, using all means reasonably likely to be used, to obtain personal data from the model, either through attacks which aim to extract the original training data from the model itself, or through interactions with the AI model (i.e., personal data provided in responses to prompts / queries). 
  2. Is ‘legitimate interests’ an appropriate legal basis for the training and development of an AI model? In principle yes, but only where the processing of personal data is necessary to develop the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  In particular, the issue of data minimisation, and the related issue of web-scraping / indiscriminate capture of data, will be relevant here. 
  3. Is ‘legitimate interests’ an appropriate legal basis for the deployment of an AI model? In principle yes, but only where the processing of personal data is necessary to deploy the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  Here, the impact on the data subject of the use of the AI model is of predominant importance.
  4. If an AI Model has been found to have been created, updated or developed using unlawfully processed personal data, how does this impact the subsequent use of that AI model?  This depends in part on whether the AI model was first anonymised before being disclosed to the deployer of that model (see Question 1).  Otherwise, the deployer of the model may need to assess the lawfulness of the development of the model as part of its accountability obligations.

Background

The Opinion was issued by the EDPB under Article 64 of the GDPR, in response to a request from the Irish Data Protection Commission.  Article 64 requires the EDPB to publish an opinion on matters of ‘general application’ or which ‘produce effects in more than one Member State’. 

In this case, the Irish DPC asked the EDPB to provide an opinion on the above-mentioned questions – a request that is not surprising given the general importance of AI models to businesses across the EU, and the large number of technology companies developing those models that have established their European operations in Ireland. 

In order to understand the Opinion, it helps to be familiar with certain concepts and terminology relating to AI. 

First, the Opinion distinguishes between an ‘AI system’ and an ‘AI model’. For the former, the EDPB relies on the definition given in the EU AI Act. In short: a machine-based system operating with some degree of autonomy that infers, from inputs, how to produce outputs such as predictions, content, recommendations, or decisions. An AI model, meanwhile, is a component part of an AI system. Colloquially, it is the ‘brain’ of the AI system – an algorithm, or series of algorithms (such as in the form of a neural network), that recognises patterns in data. AI models require the addition of further components, such as a user interface, to become AI systems. To take a common example – the generative AI system known as ChatGPT is a software application composed of an AI model (the GPT Large Language Model) connected to a chatbot-style user interface that allows the user to submit queries (or ‘prompts’) to the model in the form of natural language questions. Whilst the Opinion is notionally concerned only with AI models, at times the Opinion appears to blur the distinction between the model and the system, in particular when discussing the significance of model outputs that are only rendered comprehensible to the user through an interface that sits outside of the model.

Second, the Opinion relies on an understanding of a typical ‘AI lifecycle’, pursuant to which an AI model is first developed by training the model on large volumes of data.  This training may happen in a number of phases which become increasingly refined (referred to as ‘fine-tuning’). Only after an AI model is developed can it be used, or ‘deployed’, in a live setting, as part of an AI system.  Often, the developer of an AI model will not be the same person as the deployer.  This is relevant because the Opinion variously addresses both development and deployment phases.

The significance of the ‘Hamburg thesis’

With respect to the key question of whether AI models can be considered anonymous, the Opinion follows in the wake of a much-discussed paper published in July 2024 by the data protection authority for the German state of Hamburg.  The paper took the position that AI models (specifically, Large Language Models) are, in isolation, anonymous – they do not involve the processing of personal data. 

In order to reach that conclusion, the paper decoupled the model itself from: (i) the prior training of the model (which may involve the collection and further processing of personal data as part of the training dataset); and (ii) the subsequent use of the model, whereby a prompt/input may contain personal data, and an output may be used in a way that means it constitutes personal data.

Looking only at the AI model itself, the paper decided that the tokens and values which make up the ‘inner workings’ of a typical AI model do not, in any meaningful way, relate to or correspond with information about identifiable individuals.  Consequently, the model itself was found to be anonymous, even if the development and use of the model involves the processing of personal data. 

The Hamburg thesis was welcomed for several reasons, not least because it resolved difficult questions such as how data subject rights could be understood in relation to an AI model (if someone asks for their personal data to be deleted, then what can this mean in the context of an AI model?), and the question of the lawful basis for ‘storing’ personal data in an AI model (as distinct from the lawful basis for collecting and preparing data to train the model).

However, as we go on to explain, the EDPB Opinion does not follow the relatively simple and certain framework presented by the Hamburg thesis.  Instead, it introduces uncertainty by asserting that there are, in fact, scenarios where an AI model contains personal data, but that this must be determined on a case-by-case basis.

Are AI models anonymous?

First, the Opinion is only concerned with AI models that have been trained using personal data.  Therefore, AI models trained using solely non-personal data (such as statistical data, or financial data relating to businesses) can, for the avoidance of doubt, be considered anonymous.  However, in this context the broad scope of ‘personal data’ under the GDPR must be remembered, and the Opinion does not suggest any de minimis level of personal data that needs to be involved in the training of the AI model for the question of GDPR applicability to arise.

Where personal data is used in the training phase, the next question is whether the model is specifically designed to provide personal data regarding individuals whose personal data were used to train the model.  If so, the AI model will not be anonymous.  For example, an AI model that is trained to provide a user, on request, with biographical information and contact details for directors of public companies, or a generative AI model that is trained on the voice recordings of famous singers so that it can, in turn, mimic the voices of those singers.  In each case, the model is trained on personal data of specific individuals, in order to be able to produce other personal data about those individuals as an output. 

Finally, there is the intermediary case of AI models that are trained on personal data, but that are not designed to provide personal data related to the training data as an output.  It is this use case that the Opinion focuses on.  The conclusion is that AI models in this category may be anonymous, but only if the developer of the model can demonstrate that information about individuals whose personal data was used to train the model cannot be ‘obtained from’ the model, using all means reasonably likely to be used.  Notwithstanding that personal data used for training the model no longer exists within the model in its original form (but rather it is "represented through mathematical objects"), that information is, in the eyes of the EDPB, still capable of constituting personal data. 

The following question then arises: how does someone ‘obtain’ personal data from an AI model? In short, the Opinion posits two possibilities.  The first is that training data is ‘extracted’ via deliberate attacks.  The Opinion refers to an evolving field of research in this area and makes reference to techniques such as ‘model inversion’, ‘reconstruction attacks’, and ‘attribute and membership inference’.  These are techniques that can be deployed to trick the model into revealing training data, or otherwise reconstruct that training data, in some cases relying on privileged access to the model itself.  The second is the risk of accidental or inadvertent ‘regurgitation’ of personal data as part of an AI model’s outputs. 
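To illustrate the second of these risks, the following is a minimal sketch of the kind of regurgitation test a developer might run as part of its accountability documentation: probe the model with a set of prompts and check whether verbatim fragments of known training records appear in the responses. The `query_model` callable, the probe prompts and the toy records are hypothetical placeholders, not a reference to any particular model, vendor API or EDPB-prescribed methodology.

```python
# Minimal sketch of a regurgitation check: probe a model with prompts and look for
# verbatim fragments of known training records in its outputs. `query_model` is a
# hypothetical stand-in for whatever inference interface is actually used.

from typing import Callable, Iterable


def contains_training_fragment(output: str, records: Iterable[str], min_len: int = 20) -> bool:
    """True if a sufficiently long substring of any training record appears verbatim in the output."""
    for record in records:
        for start in range(max(1, len(record) - min_len + 1)):
            if record[start:start + min_len] in output:
                return True
    return False


def regurgitation_rate(query_model: Callable[[str], str],
                       probes: list[str],
                       training_records: list[str]) -> float:
    """Fraction of probe prompts whose responses leak a verbatim training fragment."""
    leaks = sum(
        1 for prompt in probes
        if contains_training_fragment(query_model(prompt), training_records)
    )
    return leaks / len(probes) if probes else 0.0


if __name__ == "__main__":
    # Toy stand-in model that has 'memorised' one record and repeats it for a matching prompt.
    memorised = "Jane Doe, born 1 May 1980, lives at 12 Example Street, Dublin"

    def toy_model(prompt: str) -> str:
        return memorised if "Jane" in prompt else "No personal data here."

    probes = ["Tell me about Jane", "Tell me about the weather"]
    print(regurgitation_rate(toy_model, probes, [memorised]))  # prints 0.5 for this toy setup
```

A real evaluation would also need to address the deliberate extraction attacks mentioned above (model inversion, membership inference and similar), which generally require access to the model itself rather than only to its outputs.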

Consequently, a developer must be able to demonstrate that its AI model is resistant both to attacks that extract personal data directly from the model and to the risk of regurgitation of personal data in response to queries: "In sum, the EDPB considers that, for an AI model to be considered anonymous, using reasonable means, both (i) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to train the model; as well as (ii) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant for any data subject". 

Which criteria should be used to evaluate whether an AI model is anonymous?

Recognising the uncertainty in its conclusion that AI models may or may not be anonymous, the EDPB provides a list of criteria that can be used to assess the likelihood of a model being found to contain personal data.  These include:

  • Steps taken to avoid or limit the collection of personal data during the training phase.
  • Data minimisation or masking measures (e.g., pseudonymisation) applied to reduce the volume and sensitivity of personal data used during the training phase.
  • The use of methodologies during model development that reduce privacy risks (e.g., regularisation methods to improve model generalisation and reduce overfitting, and appropriate and effective privacy-preserving techniques, such as differential privacy – a minimal illustration follows this list).
  • Measures that reduce the likelihood of obtaining personal data from queries (e.g., ensuring the AI system blocks the presentation to the user of outputs that may contain personal data).
  • Document-based audits (internal or external) undertaken by the model developer that include an evaluation of the chosen measures and of their impact to limit the likelihood of identification.
  • Testing of the model to demonstrate its resilience to different forms of data extraction attacks.
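To illustrate the privacy-preserving techniques referred to in the third bullet above, the sketch below shows the textbook differential privacy mechanism: Laplace noise, calibrated to the sensitivity of a query and a chosen epsilon, is added to an aggregate result. The data, the counting query and the epsilon value are illustrative assumptions; in model training, differential privacy is more commonly applied at the gradient level (e.g., DP-SGD) rather than to a single count.

```python
# Minimal sketch of the basic differential privacy mechanism: Laplace noise, scaled to
# the query's sensitivity divided by epsilon, is added to an aggregate result.
# The data and the epsilon value are illustrative assumptions.

import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two exponential samples."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))


def dp_count(values: list[bool], epsilon: float) -> float:
    """Epsilon-differentially private count of True values.

    Adding or removing one individual's record changes a count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon suffices for this query.
    """
    sensitivity = 1.0
    return sum(values) + laplace_noise(sensitivity / epsilon)


if __name__ == "__main__":
    # Illustrative records: whether each (hypothetical) individual has a given attribute.
    records = [True, False, True, True, False, True]
    print("exact count:", sum(records))
    print("noisy count (epsilon=1.0):", round(dp_count(records, epsilon=1.0), 2))
```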

What is the correct legal basis for AI models?

When using personal data to train an AI model, the preferred legal basis is normally the ‘legitimate interests’ of the controller, under Article 6(1)(f) GDPR. This is for practical reasons. Whilst, in some circumstances, it may be possible to obtain GDPR-compliant consent from individuals authorising the use of their data for AI training purposes, in most cases this will not be feasible. 

Helpfully, the Opinion accepts that legitimate interests is, in principle, a viable legal basis for processing personal data to train an AI model. Further, the Opinion also suggests that it should be straightforward for businesses to identify a lawful legitimate interest. For example, the Opinion cites “developing an AI system to detect fraudulent content or behaviour” as a sufficiently precise and real interest. 

However, businesses may have more difficulty in showing that the processing of personal data is necessary to realise their legitimate interest, and that their legitimate interest is not outweighed by any impact on the rights and freedoms of data subjects (the ‘balancing test’). Whilst this is fundamentally just a restatement of existing legal principles, the following sentence should nevertheless cause some concern for businesses developing AI models, in particular Large Language Models: "If the pursuit of the purpose is also possible through an AI model that does not entail processing of personal data, then processing personal data should be considered as not necessary". Technically speaking, it may often be the case that personal data is not essential for the training of an AI model – however, this does not mean that it is straightforward to systematically remove all personal data from a training dataset, or otherwise replace all identifying elements with ‘dummy’ values. 
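By way of illustration of what removing or replacing identifying elements might involve in practice, the following is a minimal sketch that masks obvious identifiers in training text with placeholder tokens. The regex patterns and sample text are deliberately simplistic assumptions; they miss names and many other identifiers, which is precisely why systematic removal of all personal data is rarely straightforward.

```python
# Minimal sketch of masking obvious identifiers in training text before it is used for
# model training. The regex patterns and sample text are deliberately simplistic and
# illustrative; production pipelines would normally add named-entity detection and review.

import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def mask_identifiers(text: str) -> str:
    """Replace e-mail addresses and phone-number-like strings with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text


if __name__ == "__main__":
    sample = "Contact Jane Doe at jane.doe@example.com or +353 1 234 5678 for details."
    print(mask_identifiers(sample))
    # Note that the name 'Jane Doe' survives the masking, illustrating why full
    # removal of personal data is harder than it looks.
```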

With respect to the balancing test, the EDPB asks businesses to consider a data subject’s interest in self-determination and in maintaining control over their own data when considering whether it is lawful to collect personal data for model training purposes.  In particular, it may be more difficult to satisfy the balancing test if a developer is scraping large volumes of personal data (especially including any sensitive data categories) against data subjects’ wishes, without their knowledge, or otherwise in contexts that they would not reasonably expect. 

When it comes to the separate purpose of deploying an AI model, the EDPB asks businesses to consider the impact on the data subject’s fundamental rights that arises from the purpose for which the AI model is used.  For example, AI models that are used to block content publication may adversely affect a data subject’s fundamental right to freedom of expression.  Conversely, the EDPB recognises that the deployment of AI models may have a positive impact on a data subject’s rights and freedoms – for example, an AI model that is used to improve accessibility to certain services for people with disabilities. In line with Recital 47 GDPR, the EDPB reminds controllers to consider the ‘reasonable expectations’ of data subjects in relation to both training and deployment uses of personal data.

Finally, the Opinion discusses a range of ‘mitigating measures’ that may be used to reduce risks to data subjects and therefore tip the balancing test in favour of the controller.  These include:

  • Technical measures to reduce the volume or sensitivity of personal data at use (e.g., pseudonymisation, masking).
  • Measures to facilitate the exercise of data subject rights (e.g., providing an unconditional right for data subjects to opt-out of the use of their personal data for training or deploying the model; allowing a reasonable period of time to elapse between collection of training data and its use).
  • Transparency measures (e.g., public communications about the controller’s practices in connection with the use of personal data for AI model development).
  • Measures specific to web-scraping (e.g., excluding publications that present particular risks; excluding certain data categories or sources; excluding websites that clearly object to web scraping – a minimal robots.txt check is sketched below).

Notably, the EDPB observes that, to be effective, these mitigating measures must go beyond mere compliance with GDPR obligations (for example, providing a GDPR compliant privacy notice, which a controller would in any case be required to do, would not be an effective transparency measure for these purposes). 
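As a concrete illustration of the web-scraping measures in the final bullet of the list above, the sketch below consults a site's robots.txt before fetching anything from it, using only the Python standard library. The user agent string and URL are illustrative assumptions, and honouring robots.txt is only one possible signal that a website objects to scraping; it is not a measure prescribed by the EDPB.

```python
# Minimal sketch of excluding websites that object to scraping: consult the site's
# robots.txt before fetching a page. Standard library only; the user agent string and
# URL are illustrative placeholders.

from urllib import robotparser
from urllib.parse import urlsplit


def may_scrape(url: str, user_agent: str = "example-training-crawler") -> bool:
    """Return True only if the site's robots.txt permits this user agent to fetch the URL."""
    parts = urlsplit(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()  # fetch and parse robots.txt
    except OSError:
        return False  # if robots.txt cannot be retrieved, err on the side of not scraping
    return parser.can_fetch(user_agent, url)


if __name__ == "__main__":
    print(may_scrape("https://www.example.com/articles/some-page"))
```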

When are companies liable for non-compliant AI models?

In its final question, the DPC sought clarification from the EDPB on how a deployer of an AI model might be impacted by any unlawful processing of personal data in the development phase of the AI model. 

According to the EDPB, such ‘upstream’ unlawful processing may impact a subsequent deployer of an AI model in the following ways:

  • Corrective measures taken against the developer may have a knock-on effect on the deployer – for example, if the developer is ordered to delete personal data unlawfully collected for training purposes, the deployer would not be allowed to subsequently process this data. However, this raises an important practical question about how such data could be identified in, and deleted from, the AI model, taking into account the fact that the model does not retain training data in its original form.
  • Unlawful processing in the development phase may impact the legal basis for the deployment of the model – in particular, if the deployer of the AI model is relying on ‘legitimate interests’, it will be more difficult to satisfy the balancing test in light of the deficiencies associated with the collection and use of the training data.

In light of these risks, the EDPB recommends that deployers take reasonable steps to assess the developer’s compliance with data protection laws during the training phase.  For example, can the developer explain the sources of data used, steps taken to comply with the minimisation principle, and any legitimate interest assessments conducted for the training phase?  For certain AI models, the transparency obligations imposed in relation to AI systems under the AI Act should assist a deployer in obtaining this information from a third-party AI model developer.

While the Opinion provides a useful framework for assessing GDPR issues with AI systems, businesses operating in the EU may be frustrated with the lack of certainty or definitive guidance on many key questions relating to this new era of technology innovation.

EU: Engaging vendors in the financial sector: EDPB clarifications mean more mapping and management
https://privacymatters.dlapiper.com/2024/11/eu-engaging-vendors-in-the-financial-sector-edpb-clarifications-mean-more-mapping-and-management/ (Fri, 08 Nov 2024)

The European Data Protection Board (“EDPB“) adopted an opinion on 7 October 2024, providing guidance for data controllers relying on processors (and sub-processors) under the GDPR. The two key themes are:

  1. supply chain mapping;
  2. verifying compliance with flow-down obligations.

For many financial institutions, the emphasis on these obligations should not come as a surprise. However, there are some nuanced clarifications in the opinion which could have an impact on general vendor management in the financial services sector. We have summarised the key takeaways below.

Supply Chain Mapping

Controllers should always be able to identify the processing supply chain. This means knowing all processors, and their subprocessors, for all third-party engagements – and not just their identity. The EDPB’s opinion clarifies that controllers should know:

  • the legal entity name, address and information for a contact person for each processor/subprocessor;
  • the data processed by each processor/subprocessor and why; and
  • the delimitation of roles where several subprocessors are engaged by the primary processor.

This may seem excessive. However, the practical benefit of knowing this information extends beyond Article 28 compliance. It is also required to discharge transparency obligations under Articles 13 and 14 and to respond to data subject requests (e.g. access under Article 15 or erasure under Article 17).

How is this achieved in reality? Vendor engagement can be tedious. While many financial institutions have sophisticated vendor onboarding processes, data protection is often an afterthought, addressed after commercials are finalised.

So, what should you do as a data controller? Revisit your contracts to ensure your processors are obliged to provide the above information proactively, at the frequency and in the format you require.
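One pragmatic way to pin down "the format you require" is to agree a machine-readable register entry per processor and sub-processor that captures the fields the EDPB lists. The sketch below is a hypothetical record format with illustrative field names and values, not a template prescribed by the EDPB or by DORA.

```python
# Hypothetical sketch of a machine-readable register entry capturing, for each processor
# and sub-processor, the information the EDPB says controllers should hold. The field
# names and example values are illustrative, not a prescribed template.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ProcessingEntry:
    legal_entity_name: str
    registered_address: str
    contact_person: str
    role: str                        # "processor" or "sub-processor"
    engaged_by: Optional[str]        # primary processor's name, if this is a sub-processor
    data_categories: list[str]       # which personal data is processed
    processing_purpose: str          # why it is processed
    location: str                    # relevant for Chapter V transfer mapping


register = [
    ProcessingEntry(
        legal_entity_name="Example Cloud Ltd",
        registered_address="1 Example Way, Dublin, Ireland",
        contact_person="dpo@example-cloud.test",
        role="processor",
        engaged_by=None,
        data_categories=["customer contact details", "transaction history"],
        processing_purpose="hosting of the controller's CRM platform",
        location="EEA",
    ),
]

if __name__ == "__main__":
    for entry in register:
        print(f"{entry.legal_entity_name} ({entry.role}): {entry.processing_purpose}")
```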

Verification of Compliance

Controllers should be able to verify and document the sufficiency of safeguards implemented by processors and subprocessors to comply with data laws. In other words, controllers must be able to evidence a processor’s compliance with key obligations e.g.:

  • making sure personal data is secure; and
  • ensuring data is transferred or accessed internationally in line with the requirements of Chapter V.

The nature of this verification and documentation will vary depending on the risk associated with the processing activity. A low-risk vendor, from a commercial business perspective, may provide a service involving high-risk data processing. In this case, verification might involve seeking a copy of the subprocessor contract to review it. For lower-risk processing, verification could be limited to confirming a subprocessor contract is in place.

The EDPB suggests controllers can rely on information received from their processor and build on it. For example, through diligence questionnaires, publicly available information, certifications, and audit reports.

Where the primary processor is also an exporter of personal data outside the EEA, the EDPB clarified that the obligation is on the exporting processor to ensure there is an appropriate transfer mechanism in place with the importing subprocessor and to ensure a transfer impact assessment has been carried out. The controller should verify the transfer impact assessment and request amendments if necessary; otherwise, controllers can rely on the exporting processor’s transfer impact assessment if it is deemed adequate. The verification required here will depend on whether it is an initial or onward transfer, and on the lawful basis used for the transfer. This does not affect the controller’s obligation to carry out transfer mapping where the primary processors it engages are themselves located outside the EEA.

In that regard, the EDPB clarified a subtle but often debated provision of Article 28. The opinion notes that the wording “unless required to do so by law or binding order of a governmental body” is unlikely to be compliant where data is transferred outside the EEA. It is therefore highly recommended to include the wording:

“unless required to [process] by Union or Member State law to which the processor is subject.”

Either verbatim or in very similar terms. This is particularly relevant in the context of transfer mapping and impact assessments. Regulated entities should be vigilant for third-party contracts which appear to meet the obligations set out in Article 28(3) with respect to the processing of data for purposes outside of the controller’s instructions, but are, as confirmed by the EDPB, actually non-compliant.

What steps should you take now then?

The opinion clarifies that controllers can rely on a sample selection of subprocessor contracts to verify downstream compliance and we suggest you do so.

But when?

Regulated entities, particularly in the financial services industry, are facing a swathe of regulations that impact vendor engagement. The Digital Operational Resilience Act and the NIS 2 Directive ((EU) 2022/2555) require financial institutions to maintain a register of all contractual arrangements with vendors and to ensure third-party service providers comply with cybersecurity standards. Effectively, these are enhancements to existing processor requirements under the GDPR. The reality is, however, that many controllers are only now firming up supply chain management to cover key data protection and cyber risks.

We recommend controllers use the clarifications in the EDPB’s opinion to improve negotiations when separately looking at uplifts required by DORA, which takes effect on 17 January 2025. The clock is ticking.

Please reach out to your usual DLA Piper contact if you would like to discuss further, including if you are struggling to map these requirements against other emerging laws such as DORA or NIS2. We can provide assistance with the data and cyber contractual commitments in your contracts.

Europe: EDPB issues Opinion on ‘consent or pay’ models deployed by large online platforms
https://privacymatters.dlapiper.com/2024/04/europe-edpb-issues-opinion-on-consent-or-pay-models-deployed-by-large-online-platforms/ (Wed, 24 Apr 2024)

The European Data Protection Board (“EDPB”) has adopted an Opinion (“EDPB Opinion”) on the validity of consent to process personal data for the purposes of behavioural advertising in the context of ‘consent or pay’ models deployed by large online platforms. The EDPB concludes that “in most cases” the requirements of valid consent under the General Data Protection Regulation (“GDPR”) will not be met if users are only given a choice between consenting to the processing of personal data for behavioural advertising purposes and paying a fee.

Background

Last year, following a request from the Norwegian Data Protection Authority, the EDPB adopted an urgent binding decision, imposing a ban on the processing of personal data by Meta for behavioural advertising on the legal bases of contract and legitimate interest, across the European Economic Area (“EEA”).  As a result of the EDPB’s decision, Meta announced that it planned to rely on consent as the legal basis for its behavioural advertising activities in respect of users in the EEA – using a subscription model where users who do not consent to share their personal data and receive targeted adverts will be charged a monthly fee. This so-called “consent or pay” model has already generated significant debate among European data protection supervisory authorities and attracted complaints from privacy activists.

In response to Meta’s announcement, the Dutch, Norwegian and Hamburg Data Protection Authorities made an Article 64(2) GDPR request to the EDPB to issue an opinion on the circumstances and conditions in which ’consent or pay’ models relating to behavioural advertising can be implemented by large online platforms in a way that constitutes valid, and in particular freely given, consent.

EDPB Opinion

The EDPB has clarified that the scope of its Opinion is limited to the implementation by large online platforms of ‘consent or pay’ models, where users are asked to consent to processing for the purposes of behavioural advertising. The EDPB states that “large online platforms” may cover, but is not limited to, “very large online platforms” as defined under the EU Digital Services Act and “gatekeepers” as defined under the EU Digital Markets Act.

In its Opinion, the EDPB concludes that offering only “a paid alternative to the service which includes processing for behavioural advertising purposes should not be the default way forward for controllers”. Individuals should be provided with an ‘equivalent alternative’, that does not require payment of a fee. The EDPB further states that “if controllers choose to charge a fee for access to the ‘equivalent alternative’, controllers should consider also offering a further alternative, free of charge, without behavioural advertising” – the EDPB considers this “a particularly important factor in the assessment of certain criteria for valid consent under the GDPR”. Individuals must have a genuine free choice – any fee charged cannot make individuals feel compelled to consent.

The EDPB refers to the Court of Justice of the European Union (“CJEU”) decision in Meta vs Bundeskartellamt, which considered whether consent given by the user of an online social network to the operator of such a network meets the requirements of valid consent under the GDPR, in particular the condition that consent must be freely given, where that operator holds a dominant position on the market for online social networks. In its Opinion, the EDPB confirms that, as set out in the Bundeskartellamt judgment, when assessing whether consent is “freely given”, controllers should take into account: “whether the data subject suffers detriment by not consenting or withdrawing consent; whether there is an imbalance of power between the data subject and the controller; whether consent is required to access goods or services, even though the processing is not necessary for the fulfilment of the contract (conditionality); and whether the data subject is able to consent to different processing operations (granularity)”.

In addition, the EDPB confirms that controllers should assess, on a case by case basis, whether imposing a fee for use of the service is appropriate and, if so, the amount of that fee. In particular, controllers should ensure that “the fee is not such as to inhibit data subjects from making a genuine choice in light of the requirements of valid consent and of the principles under Article 5 GDPR, in particular fairness”.

Conclusion

The EDPB Opinion provides some clarity in relation to ‘consent or pay’ models; however, it raises the question of how online services will be paid for if large online service providers cannot harvest and monetise consumer data. Although the EDPB does not go as far as prohibiting the use of “consent or pay” models for behavioural advertising purposes, stating only that these models will not satisfy the requirements of valid consent under the GDPR ‘in most cases’, it sets a very high bar.

It is clear that the ‘consent or pay’ model will continue to attract attention from regulators. In particular, although the Opinion is non-binding, it will be taken into account by the Irish Data Protection Commission, and the Dutch, Norwegian and Hamburg data protection authorities that referred the matter to the EDPB, as they continue to investigate the processing of personal data for behavioural advertising purposes by large online platforms. In the UK, the ICO has also recently launched a call for views on the use of “consent or pay” models; and in the EU, the European Commission has launched investigations against a number of large online service providers in relation to compliance with obligations under the Digital Markets Act, including in relation to Meta’s new “consent or pay” model.

Although the EDPB Opinion is limited to ‘large online platforms’, we expect further guidance for other online service providers. In its press release, the EDPB confirmed that it will also develop guidelines on ‘consent or pay’ models with a broader scope.

EU: New EDPB guidelines on the scope of the ‘cookie rule’
https://privacymatters.dlapiper.com/2023/11/eu-new-edpb-guidelines-on-the-scope-of-the-cookie-rule/ (Wed, 22 Nov 2023)

The European Data Protection Board has published new guidelines (14 November 2023) on the scope of Article 5(3) of the e-Privacy Directive – i.e., the so-called ‘cookie rule’.  

These guidelines apply a maximalist interpretation to the cookie rule, meaning that a wide variety of technologies other than traditional cookies are, in the opinion of the EDPB, caught by the rule. Where a technology is caught then, depending on the purpose for which the technology is used, its use will be conditional upon obtaining consent.

The guidelines are open for public consultation until 28 December 2023.

Background

By way of reminder, Article 5(3) of the e-Privacy Directive creates a requirement to obtain prior consent where a company stores information, or gains access to information already stored, in the terminal equipment of a subscriber or user of an electronic communications network, and that storing of or access to information is not strictly necessary to deliver the service requested by the subscriber or user. As such, the Directive seeks to protect what it regards as the ‘private sphere’ of the user’s terminal equipment from unwanted intrusion.

Historically it has been well-understood that traditional internet cookies trigger this rule. They function by creating a file on the user’s computer which stores information. Later, if the user returns to the website, the information in the file stored on the user’s computer is accessed (e.g., to verify someone’s language preference). 
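As a simple illustration of that mechanism, the sketch below shows a server building a Set-Cookie header that stores a language preference on the user's device, and then reading the value back from the Cookie header on a later request, using only the Python standard library. The cookie name, value and attributes are illustrative.

```python
# Minimal sketch of the cookie mechanism: the server stores a small piece of information
# (a language preference) on the user's device via a Set-Cookie header, and reads it back
# from the Cookie header on a later request. Names, values and attributes are illustrative.

from http.cookies import SimpleCookie

# First response: instruct the browser to store the preference on the terminal equipment.
outgoing = SimpleCookie()
outgoing["lang"] = "en"
outgoing["lang"]["max-age"] = 60 * 60 * 24 * 365  # persist for roughly a year
outgoing["lang"]["path"] = "/"
print(outgoing.output())  # e.g. "Set-Cookie: lang=en; Max-Age=31536000; Path=/"

# Later request: the browser sends the stored value back and the server reads it.
incoming = SimpleCookie()
incoming.load("lang=en")  # contents of the Cookie request header
print("stored language preference:", incoming["lang"].value)
```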

However, the extent to which newer methods of tracking a user’s digital footprint – such as pixels, URL tracking and JavaScript code – also trigger this rule has, to date, been much less clear.

How does the EDPB interpret the ‘cookie rule’?

In a word: broadly. For each part of the relevant test under the cookie-rule – the nature of information; what constitutes terminal equipment; and what it means to gain access to or store such information – the EDPB applies a wide reading. For example:

  • It does not matter how long information is stored on terminal equipment – the ephemeral storage of any information (for example, in RAM or CPU cache) is sufficient.
  • The nature and volume of information stored or accessed is also irrelevant. Note that it is also irrelevant whether the information is personal data (albeit this much was already well-understood prior to the guidelines).
  • Perhaps most controversially, the EDPB also suggests that it may not matter who gives the instruction to transmit information to the accessing entity – the proactive sending of information by the terminal equipment might also be caught.

Which technologies are caught?

The upshot of this interpretation is that the EDPB considers, in most cases, that the use of the following technologies will trigger the cookie rule:

  • URL and pixel tracking: for example, tracking pixels used to ascertain whether an email has been opened, or tracking links used by websites to identify the origin of traffic to the website, such as for marketing attribution (a minimal sketch follows this list).
  • Local processing: for example, using an API on a website to remotely access locally generated information.
  • Tracking based on IP only: for example, the transmission of a static outbound IPv4 address originating from a user’s router, used to track a user across multiple domains for online advertising purposes.
  • Internet of Things (IoT) reporting: for example, smart household devices transmitting information to a remote server controlled by the manufacturer, whether directly or via intermediary equipment (such as a mobile phone).
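To make the first of these examples concrete, the sketch below shows how an e-mail open-tracking pixel typically works: a tiny image is embedded whose URL carries identifying parameters, so the mere act of fetching the image transmits that information to the tracking server. The domain, parameter names and identifiers are illustrative assumptions, not any particular vendor's scheme.

```python
# Minimal sketch of URL/pixel tracking: a 1x1 image URL carries identifying parameters,
# so the mere act of fetching the image transmits them to the tracking server. The domain
# and parameter names are illustrative assumptions, not any particular vendor's scheme.

from urllib.parse import urlencode


def build_pixel_url(campaign_id: str, recipient_id: str) -> str:
    """Build the URL of a hypothetical open-tracking pixel to embed in an e-mail."""
    params = urlencode({"c": campaign_id, "r": recipient_id, "event": "open"})
    return f"https://tracker.example.test/pixel.gif?{params}"


if __name__ == "__main__":
    url = build_pixel_url(campaign_id="spring-sale", recipient_id="user-12345")
    # The e-mail body would embed: <img src="..." width="1" height="1" alt="">
    # When the mail client fetches this URL, the server learns that this recipient opened
    # this message, and when - which is why the EDPB treats such pixels as within Article 5(3).
    print(url)
```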

What are the practical implications?

If a technology is caught by the cookie rule, then the company deploying that technology must obtain prior, opt-in consent before accessing or storing the information, unless the company can demonstrate that the storage of, or access to, the information is strictly necessary for the purpose of delivering the digital service. 

It is probably fair to say that this does not consistently happen in practice as of today. The practicalities of obtaining consent may also be challenging, depending on the context in which the technology is used. From the user’s perspective, questions of ‘consent fatigue’, in a world in which users are already bombarded with cookie consent pop-ups, also arise.

Responses to the EDPB’s consultation on the draft guidelines will make for interesting reading. Even when finalised, the guidelines will represent the EU data protection authorities’ interpretation of the law and are not directly binding law in their own right. Certainly, many of these points would form the basis for an interesting legal challenge before the European courts. In the meantime, however, businesses operating in the EU are advised to start preparing for a world where the scope of the cookie rule, as applied by the regulator, is much broader than they may previously have realised.
