EU: EDPB Opinion on AI Provides Important Guidance though Many Questions Remain https://privacymatters.dlapiper.com/2025/01/eu-edpb-opinion-on-ai-provides-important-guidance-though-many-questions-remain/ Tue, 14 Jan 2025 13:53:05 +0000

A much-anticipated Opinion from the European Data Protection Board (EDPB) on AI models and data protection has not resulted in the clear or definitive guidance that businesses operating in the EU had hoped for. The Opinion emphasises the need for case-by-case assessments to determine GDPR applicability, highlighting the importance of accountability and record-keeping, while also flagging ‘legitimate interests’ as an appropriate legal basis under specific conditions. In rejecting the proposed Hamburg thesis, the EDPB has stated that AI models trained on personal data should be considered anonymous only if personal data cannot be extracted or regurgitated.

Introduction

On 17 December 2024, the EDPB published a much-anticipated Opinion on AI models and data protection.  The Opinion includes the EDPB’s view on the following key questions: does the development and use of an AI model involve the processing of personal data; and if so, what is the correct legal basis for that processing?

As is sometimes the case with EDPB Opinions, which necessarily represent the consensus view of the supervisory authorities of 27 different Member States, the Opinion does not provide many clear or definitive answers.  Instead, the EDPB offers indicative guidance and criteria, calling for case-by-case assessments of AI models to understand whether, and how, they are impacted by the GDPR.  In this context, the Opinion repeatedly highlights the importance of accountability and record-keeping by businesses developing or using AI, so that the applicability of data protection laws, and the business’ compliance with those laws, can be properly assessed. 

Whilst the equivocation of the Opinion might be viewed as unhelpful by European businesses looking for regulatory certainty, it is also a reflection of the complexities inherent in this intersection of law and technology.

In summary, the answers given by the EDPB to the four questions in the Opinion are as follows:

  1. Can an AI model, which has been trained using personal data, be considered anonymous?  Yes, but only in some cases.  It must be impossible, using all means reasonably likely to be used, to obtain personal data from the model, either through attacks which aim to extract the original training data from the model itself, or through interactions with the AI model (i.e., personal data provided in responses to prompts / queries). 
  2. Is ‘legitimate interests’ an appropriate legal basis for the training and development of an AI model? In principle yes, but only where the processing of personal data is necessary to develop the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  In particular, the issue of data minimisation, and the related issue of web-scraping / indiscriminate capture of data, will be relevant here. 
  3. Is ‘legitimate interests’ an appropriate legal basis for the deployment of an AI model? In principle yes, but only where the processing of personal data is necessary to deploy the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  Here, the impact on the data subject of the use of the AI model is of predominant importance.
  4. If an AI Model has been found to have been created, updated or developed using unlawfully processed personal data, how does this impact the subsequent use of that AI model?  This depends in part on whether the AI model was first anonymised before being disclosed to the deployer of that model (see Question 1).  Otherwise, the deployer of the model may need to assess the lawfulness of the development of the model as part of its accountability obligations.

Background

The Opinion was issued by the EDPB under Article 64 of the GDPR, in response to a request from the Irish Data Protection Commission.  Article 64 requires the EDPB to publish an opinion on matters of ‘general application’ or which ‘produce effects in more than one Member State’. 

In this case, the Irish DPC asked the EDPB to provide an opinion on the above-mentioned questions – a request that is not surprising given the general importance of AI models to businesses across the EU, but also in light of the large number of technology companies developing those models who have established their European operations in Ireland. 

In order to understand the Opinion, it helps to be familiar with certain concepts and terminology relating to AI. 

First, the Opinion distinguishes between an ‘AI system’ and an ‘AI model’. For the former, the EDPB relies on the definition given in the EU AI Act. In short: a machine-based system operating with some degree of autonomy that infers, from inputs, how to produce outputs such as predictions, content, recommendations, or decisions. An AI model, meanwhile, is a component part of an AI system. Colloquially, it is the ‘brain’ of the AI system – an algorithm, or series of algorithms (such as in the form of a neural network), that recognises patterns in data. AI models require the addition of further components, such as a user interface, to become AI systems. To take a common example – the generative AI system known as ChatGPT is a software application comprised of an AI model (the GPT Large Language Model) connected to a chatbot-style user interface that allows the user to submit queries (or ‘prompts’) to the model in the form of natural language questions. Whilst the Opinion is notionally concerned only with AI models, at times it appears to blur the distinction between the model and the system, in particular when discussing the significance of model outputs that are only rendered comprehensible to the user through an interface that sits outside of the model.

Second, the Opinion relies on an understanding of a typical ‘AI lifecycle’, pursuant to which an AI model is first developed by training the model on large volumes of data.  This training may happen in a number of phases which become increasingly refined (referred to as ‘fine-tuning’). Only after an AI model is developed can it be used, or ‘deployed’, in a live setting, as part of an AI system.  Often, the developer of an AI model will not be the same person as the deployer.  This is relevant because the Opinion variously addresses both development and deployment phases.

The significance of the ‘Hamburg thesis’

With respect to the key question of whether AI models can be considered anonymous, the Opinion follows in the wake of a much-discussed paper published in July 2024 by the data protection authority for the German state of Hamburg.  The paper took the position that AI models (specifically, Large Language Models) are, in isolation, anonymous – they do not involve the processing of personal data. 

In order to reach that conclusion, the paper decoupled the model itself from: (i) the prior training of the model (which may involve the collection and further processing of personal data as part of the training dataset); and (ii) the subsequent use of the model, whereby a prompt/input may contain personal data, and an output may be used in a way that means it constitutes personal data.

Looking only at the AI model itself, the paper decided that the tokens and values which make up the ‘inner workings’ of a typical AI model do not, in any meaningful way, relate to or correspond with information about identifiable individuals.  Consequently, the model itself was found to be anonymous, even if the development and use of the model involves the processing of personal data. 

The Hamburg thesis was welcomed for several reasons, not least because it resolved difficult questions such as how data subject rights could be understood in relation to an AI model (if someone asks for their personal data to be deleted, then what can this mean in the context of an AI model?), and the question of the lawful basis for ‘storing’ personal data in an AI model (as distinct from the lawful basis for collecting and preparing data to train the model).

However, as we go on to explain, the EDPB Opinion does not follow the relatively simple and certain framework presented by the Hamburg thesis.  Instead, it introduces uncertainty by asserting that there are, in fact, scenarios where an AI model contains personal data, but that this must be determined on a case-by-case basis.

Are AI models anonymous?

First, the Opinion is only concerned with AI models that have been trained using personal data.  Therefore, AI models trained using solely non-personal data (such as statistical data, or financial data relating to businesses) can, for the avoidance of doubt, be considered anonymous.  However, in this context the broad scope of ‘personal data’ under the GDPR must be remembered, and the Opinion does not suggest any de minimis level of personal data that needs to be involved in the training of the AI model for the question of GDPR applicability to arise.

Where personal data is used in the training phase, the next question is whether the model is specifically designed to provide personal data regarding individuals whose personal data were used to train the model.  If so, the AI model will not be anonymous.  For example, an AI model that is trained to provide a user, on request, with biographical information and contact details for directors of public companies, or a generative AI model that is trained on the voice recordings of famous singers so that it can, in turn, mimic the voices of those singers.  In each case, the model is trained on personal data of specific individuals, in order to be able to produce other personal data about those individuals as an output. 

Finally, there is the intermediary case of AI models that are trained on personal data, but that are not designed to provide personal data related to the training data as an output.  It is this use case that the Opinion focuses on.  The conclusion is that AI models in this category may be anonymous, but only if the developer of the model can demonstrate that information about individuals whose personal data was used to train the model cannot be ‘obtained from’ the model, using all means reasonably likely to be used.  Notwithstanding that personal data used for training the model no longer exists within the model in its original form (but rather it is “represented through mathematical objects”), that information is, in the eyes of the EDPB, still capable of constituting personal data.

The following question then arises: how does someone ‘obtain’ personal data from an AI model? In short, the Opinion posits two possibilities.  The first is that training data is ‘extracted’ via deliberate attacks.  The Opinion refers to an evolving field of research in this area and makes reference to techniques such as ‘model inversion’, ‘reconstruction attacks’, and ‘attribute and membership inference’.  These are techniques that can be deployed to trick the model into revealing training data, or otherwise reconstruct that training data, in some cases relying on privileged access to the model itself.  The second is the risk of accidental or inadvertent ‘regurgitation’ of personal data as part of an AI model’s outputs. 

Consequently, a developer must be able to demonstrate that its AI model is resistant both to attacks that extract personal data directly from the model, as well as to the risk of regurgitation of personal data in response to queries:  “In sum, the EDPB considers that, for an AI model to be considered anonymous, using reasonable means, both (i) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to train the model; as well as (ii) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant for any data subject”. 

Which criteria should be used to evaluate whether an AI model is anonymous?

Recognising the uncertainty in its conclusion that AI models may or may not be anonymous, the EDPB provides a list of criteria that can be used to assess the likelihood of a model being found to contain personal data.  These include:

  • Steps taken to avoid or limit the collection of personal data during the training phase.
  • Data minimisation or masking measures (e.g., pseudonymisation) applied to reduce the volume and sensitivity of personal data used during the training phase.
  • The use of methodologies during model development that reduce privacy risks (e.g., regularisation methods to improve model generalisation and reduce overfitting, and appropriate and effective privacy-preserving techniques, such as differential privacy).
  • Measures that reduce the likelihood of obtaining personal data from queries (e.g., ensuring the AI system blocks the presentation to the user of outputs that may contain personal data).
  • Document-based audits (internal or external) undertaken by the model developer that include an evaluation of the chosen measures and of their impact to limit the likelihood of identification.
  • Testing of the model to demonstrate its resilience to different forms of data extraction attacks.
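
The last of these criteria – testing the model’s resilience to extraction attacks – is the most empirical. As a simplified, hypothetical illustration (not drawn from the Opinion itself), a basic membership-inference test compares the model’s behaviour on records that were in the training set with its behaviour on records that were not; if an attacker can reliably tell the two apart, personal data may be obtainable from the model. The sketch below uses synthetic loss values purely to show the shape of such a test:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-record losses, standing in for a real model's outputs.
# A model that has memorised its training data tends to show lower loss
# on training records ("members") than on unseen records ("non-members").
losses_members = rng.normal(loc=0.4, scale=0.2, size=1000)
losses_non_members = rng.normal(loc=0.9, scale=0.3, size=1000)

def membership_inference_accuracy(members: np.ndarray, non_members: np.ndarray,
                                  threshold: float) -> float:
    """Guess 'member' whenever the loss falls below the threshold and report
    how often the guess is correct; accuracy well above 50% suggests the
    model leaks information about individuals in its training data."""
    correct = (members < threshold).sum() + (non_members >= threshold).sum()
    return correct / (len(members) + len(non_members))

threshold = float(np.median(np.concatenate([losses_members, losses_non_members])))
print(f"Attack accuracy: {membership_inference_accuracy(losses_members, losses_non_members, threshold):.1%}")
```

A developer able to document results close to 50% for tests of this kind would be better placed to argue, under the criteria listed above, that personal data cannot be obtained from the model using reasonable means.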

What is the correct legal basis for AI models?

When using personal data to train an AI model, the preferred legal basis is normally the ‘legitimate interests’ of the controller, under Article 6(1)(f) GDPR. This is for practical reasons. Whilst, in some circumstances, it may be possible to obtain GDPR-compliant consent from individuals authorising the use of their data for AI training purposes, in most cases this will not be feasible. 

Helpfully, the Opinion accepts that legitimate interests is, in principle, a viable legal basis for processing personal data to train an AI model. Further, the Opinion also suggests that it should be straightforward for businesses to identify a lawful legitimate interest. For example, the Opinion cites “developing an AI system to detect fraudulent content or behaviour” as a sufficiently precise and real interest. 

However, where businesses may have more difficulty is in showing that the processing of personal data is necessary to realise their legitimate interest, and that their legitimate interest is not outweighed by any impact on the rights and freedoms of data subjects (the ‘balancing test’). Whilst this is fundamentally just a restatement of existing legal principles, the following sentence should nevertheless cause some concern for businesses developing AI models, in particular Large Language Models: “If the pursuit of the purpose is also possible through an AI model that does not entail processing of personal data, then processing personal data should be considered as not necessary”. Technically speaking, it may often be the case that personal data is not essential for the training of an AI model – however, this does not mean that it is straightforward to systematically remove all personal data from a training dataset, or otherwise replace all identifying elements with ‘dummy’ values. 

With respect to the balancing test, the EDPB asks businesses to consider a data subject’s interest in self-determination and in maintaining control over their own data when considering whether it is lawful to collect personal data for model training purposes.  In particular, it may be more difficult to satisfy the balancing test if a developer is scraping large volumes of personal data (especially including any sensitive data categories) against data subjects’ wishes, without their knowledge, or otherwise in contexts that they would not reasonably expect. 

When it comes to the separate purpose of deploying an AI model, the EDPB asks businesses to consider the impact on the data subject’s fundamental rights that arises from the purpose for which the AI model is used.  For example, AI models that are used to block content publication may adversely affect a data subject’s fundamental right to freedom of expression.  Conversely, however, the EDPB recognises that the deployment of AI models may have a positive impact on a data subject’s rights and freedoms – for example, an AI model that is used to improve accessibility to certain services for people with disabilities. In line with Recital 47 GDPR, the EDPB reminds controllers to consider the ‘reasonable expectations’ of data subjects in relation to both training and deployment uses of personal data.

Finally, the Opinion discusses a range of ‘mitigating measures’ that may be used to reduce risks to data subjects and therefore tip the balancing test in favour of the controller.  These include:

  • Technical measures to reduce the volume or sensitivity of personal data in use (e.g., pseudonymisation, masking).
  • Measures to facilitate the exercise of data subject rights (e.g., providing an unconditional right for data subjects to opt-out of the use of their personal data for training or deploying the model; allowing a reasonable period of time to elapse between collection of training data and its use).
  • Transparency measures (e.g., public communications about the controller’s practices in connection with the use of personal data for AI model development).
  • Measures specific to web-scraping (e.g., excluding publications that present particular risks; excluding certain data categories or sources; excluding websites that clearly object to web scraping).

Notably, the EDPB observes that, to be effective, these mitigating measures must go beyond mere compliance with GDPR obligations (for example, providing a GDPR compliant privacy notice, which a controller would in any case be required to do, would not be an effective transparency measure for these purposes). 
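
To make the first category of measures listed above more concrete, the sketch below shows one common way of applying pseudonymisation before records enter a training dataset: replacing a direct identifier with a keyed hash. This is an illustrative example only – the Opinion does not prescribe any particular technique – and the key and record fields are invented for the purpose of the example:

```python
import hashlib
import hmac

# Hypothetical secret, held separately from the training pipeline and data.
PSEUDONYMISATION_KEY = b"store-and-rotate-this-key-separately"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a stable token.
    Without access to the key, the token cannot be reversed to the original
    value, reducing the sensitivity of the data used for training."""
    digest = hmac.new(PSEUDONYMISATION_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"email": "jane.doe@example.com", "review": "Great service, would recommend."}
record["email"] = pseudonymise(record["email"])
print(record)  # the direct identifier is replaced by an opaque token
```

Pseudonymised data of this kind remains personal data under the GDPR; the point of the measure, as the EDPB frames it, is to reduce the volume and sensitivity of identifying information flowing into the model, not to take the processing outside the GDPR altogether.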

When are companies liable for non-compliant AI models?

In its final question, the DPC sought clarification from the EDPB on how a deployer of an AI model might be impacted by any unlawful processing of personal data in the development phase of the AI model. 

According to the EDPB, such ‘upstream’ unlawful processing may impact a subsequent deployer of an AI model in the following ways:

  • Corrective measures taken against the developer may have a knock-on effect on the deployer – for example, if the developer is ordered to delete personal data unlawfully collected for training purposes, the deployer would not be allowed to subsequently process this data. However, this raises an important practical question about how such data could be identified in, and deleted from, the AI model, taking into account the fact that the model does not retain training data in its original form.
  • Unlawful processing in the development phase may impact the legal basis for the deployment of the model – in particular, if the deployer of the AI model is relying on ‘legitimate interests’, it will be more difficult to satisfy the balancing test in light of the deficiencies associated with the collection and use of the training data.

In light of these risks, the EDPB recommends that deployers take reasonable steps to assess the developer’s compliance with data protection laws during the training phase.  For example, can the developer explain the sources of data used, steps taken to comply with the minimisation principle, and any legitimate interest assessments conducted for the training phase?  For certain AI models, the transparency obligations imposed in relation to AI systems under the AI Act should assist a deployer in obtaining this information from a third party AI model developer.

While the Opinion provides a useful framework for assessing GDPR issues with AI systems, businesses operating in the EU may be frustrated with the lack of certainty or definitive guidance on many key questions relating to this new era of technology innovation.

EU: Data Act Frequently Asked Questions answered by the EU Commission https://privacymatters.dlapiper.com/2024/09/data-act-frequently-asked-questions-answered-by-the-eu-commission/ Mon, 23 Sep 2024 16:09:32 +0000

The EU Data Act is one of the cornerstones of the EU’s Data Strategy and introduces a new and horizontal set of rules on data access and use to boost the EU’s data economy. Most of the provisions of the Data Act will become applicable as of 12 September 2025. To assist stakeholders in the implementation, the European Commission recently published a fairly extensive FAQ document. In particular, the FAQs contain clarifications in relation to data in scope of the Act; overlap with other data protection laws and EU legislation; implementation of IoT data sharing; and transfer restrictions.

Our article providing a summary of the key takeaways from the FAQs is available here.

For more information on how DLA Piper can support with the Data Act and other recent EU digital regulations, please refer to our EU Digital Decade website.

EU/UK: Data-Sharing Frameworks – A State of Play in the EU and the UK https://privacymatters.dlapiper.com/2024/06/eu-uk-data-sharing-frameworks-a-state-of-play-in-the-eu-and-the-uk/ Thu, 06 Jun 2024 12:07:18 +0000

Disclaimer: This article first appeared in the June 2024 issue of PLC Magazine, and is available at http://uk.practicallaw.com/resources/uk-publications/plc-magazine.

In order to capture the benefits of data-driven innovation, the EU and the UK are taking action to facilitate data sharing across various industries.

In the EU, the European Commission is investing €2 billion to foster the development of so-called “common European data spaces” and the associated digital infrastructure. The UK government has announced similar, mainly policy, initiatives regarding the establishment of data-sharing frameworks, referred to as smart data schemes.

Despite the shared objectives, differences emerge between the EU and UK approaches, raising questions about alignment, implementation efficiency and market dynamics.

In this article, DLA Piper:

  • Explores the concepts of data spaces and data schemes, and the policy objectives behind them.
  • Gives an overview of the emerging rules that will be part of the foundation of these data-sharing frameworks in the EU and the UK.
  • Examines what can be expected from these initiatives and what hurdles still need to be overcome in order to secure successful implementation.

The article is available here.

CJEU ruling clarifies data protection and e-privacy issues in the ad-tech space https://privacymatters.dlapiper.com/2024/03/cjeu-ruling-clarifies-data-protection-and-e-privacy-issues-in-the-ad-tech-space/ Wed, 13 Mar 2024 10:43:25 +0000

Introduction

Identifiability; what can amount to personal data; and joint controllership are some of the issues addressed by the Court of Justice of the European Union (CJEU) in its recent judgment in the IAB Europe case (C-604/22). This case concerned the use of personal data for online advertising purposes and the use of real time bidding technology.

The CJEU’s judgment, delivered on 7 March 2024, is a result of IAB Europe’s appeal of a decision of the Belgian Data Protection Authority (Belgian DPA) regarding the Transparency and Consent Framework (TCF) and the IAB Europe’s role within it.

Background

IAB Europe is a non-profit association representing undertakings in the digital marketing and advertising sector at European level. It developed the TCF, which is an operational framework of rules intended to enable online publishers, data brokers and advertisers to obtain users’ consent and lawfully process their personal data.

The TCF is widely applied in the context of a real time auctioning system used to acquire advertising space for the display of targeted advertisements online. A key component of the TCF is the Transparency and Consent String (TC String).

The TC String is a combination of letters and characters which encodes and records user preferences through consent management platforms (CMPs), when they visit a website or app. The TC String is then shared with ad platforms and other participants of the ad-tech ecosystem; the CMP also places a specific cookie on the user device. When combined, the TC String and this cookie can be linked to the user’s IP address.
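
To illustrate why a TC String can carry information about a specific user’s choices, the sketch below decodes the leading bits of a string of the same general shape. This is a simplified, hypothetical example: the dummy value is not a real consent string, and the full field layout (recorded purposes, vendor consents, and so on) is defined in IAB Europe’s public TCF specification rather than reproduced here. The point is simply that the string is a packed, machine-readable record of preferences rather than an opaque label:

```python
import base64

def tc_core_bits(tc_string: str) -> str:
    """Return the bit field of the core segment of a TC String-style value.
    Such strings are dot-separated, base64url-encoded segments; the core
    segment packs the user's recorded choices into fixed-width bit fields."""
    core = tc_string.split(".")[0]
    core += "=" * (-len(core) % 4)                 # restore base64 padding
    raw = base64.urlsafe_b64decode(core)
    return "".join(f"{byte:08b}" for byte in raw)

# Dummy value for illustration only - not a real TC String.
bits = tc_core_bits("CAAAAAAAAAAAAAAA")
print(int(bits[:6], 2))   # the leading 6 bits encode the TCF version (here: 2)
```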

On 2 February 2022, the Belgian DPA held that the TC String amounts to personal data, that IAB Europe qualifies as a data controller under the GDPR and that IAB Europe is in non-compliance with certain requirements of the GDPR as a result (for details see our blogpost at Belgian DPA decision on IAB Transparency and Consent Framework | Privacy Matters (dlapiper.com)).

IAB Europe contested the Belgian DPA decision, and the Brussels Court of Appeal referred two questions to the CJEU for a preliminary ruling:

  1. Whether a character string capturing user preferences in connection to the processing of their personal data constitutes personal data.
  2. Whether an organisation which proposes to its members a framework relating to the consent to the processing of personal data containing rules setting out how such personal data is to be stored or disseminated must be classified as a controller within the meaning of the GDPR.

The ruling

First question

Drawing from its previous rulings, the CJEU stated that the concept of personal data under Article 4(1) of the GDPR includes information resulting from the processing of personal data relating to an identified or identifiable person. It was noted that a string such as the TC String contains individual preferences of an individual user in relation to the processing of their personal data.

The CJEU concluded that, if the combination of a TC String with additional data, such as the user’s IP address, allows the user to be identified, then the TC String contains information concerning an identifiable user and constitutes personal data within the meaning of Article 4(1) of the GDPR.

The fact that IAB Europe cannot itself combine the TC String with the user’s IP address and does not have direct access to the data processed by its members does not change that conclusion.

The CJEU took the view that, subject to the verifications that are for the Brussels Court of Appeal to carry out, IAB Europe has, under the TCF, reasonable means of identifying an individual from a TC String, by requesting that its members provide it with all information allowing it to identify the users whose data are the subject of a TC String.

It follows from this that a TC String can constitute personal data within the meaning of Article 4(1) of the GDPR.

Second question

To address the second question, the CJEU built upon its previous judgments and stated that a natural or legal person exerting influence over the processing of personal data and, as a result, participating in the determination of the purposes and means of the processing may be regarded as a controller within the meaning of Article 4(7) of the GDPR.

The CJEU confirmed again that the concept of joint controllership does not necessarily imply equal responsibility and does not require each joint controller to have access to the personal data concerned.

The CJEU took the view that IAB Europe, as a sectoral organisation which makes a standard available to its members, appears to exert influence over the personal data processing operations when the consent preferences are recorded in a TC String, and jointly determines, with its members, the purposes and means of those operations.

It follows that IAB Europe can, in certain instances, be regarded as a controller within the meaning of Article 4(7) of the GDPR.

The court clarified this point further, adding that a distinction must be drawn between the processing of personal data carried out by the members of IAB Europe, when the consent preferences of the users concerned are recorded in a TC String in accordance with the framework of rules established in the TCF, and the subsequent processing of personal data by operators and third parties on the basis of those preferences. Accordingly, the court was of the view that IAB Europe cannot automatically be regarded as controller in respect of subsequent data processing operations carried out by third parties based on the preferences contained in the TC String, such as digital advertising or content personalisation, if IAB Europe does not exert an influence over the determination of either the purposes or the means of the processing.

Conclusion / implications

While not necessarily seismic or revelatory, the CJEU decision does bring welcome clarity on some longstanding data protection and e-privacy issues in the ad-tech space, in particular on the question of identifiability of individuals, the breadth of what can amount to personal data and the reach of joint controllership.

IAB Europe has welcomed the decision that “provides well-needed clarity over the concepts of personal data and (joint) controllership, which will allow a serene completion of the remaining legal proceedings”.

The next step is for the Brussels Court of Appeal to assess the matter and issue a final determination. Until then, the Belgian DPA’s decision remains suspended.

Despite all the prophecies of doom, we believe that the TCF will emerge stronger from this decision. This is because neither the questions submitted to the court nor the CJEU’s answers call the TCF into question. On the contrary, IAB Europe should be able to resolve the issue of joint controllership for the participants in the TCF at a technical level, especially since, according to the CJEU, joint controllership cannot automatically be assumed for subsequent processing operations on the basis of the preferences articulated via the TC String. Organisations should assess whether and how they are using the TCF and continue to keep developments in this judgment under review.

EU: EU formally adopts ‘Data Act’ https://privacymatters.dlapiper.com/2023/11/eu-eu-formally-adopts-data-act/ Tue, 28 Nov 2023 13:31:04 +0000

On 27 November 2023, the Council formally adopted the final version of the regulation on harmonised rules on fair access to and use of data (“Data Act”), after the European Parliament had adopted the Data Act earlier this month.

Drafted with the objective of fostering innovation and facilitating the sharing of data between service providers, the Data Act introduces rules on sharing, access to and re-use of data; data sharing agreements; public emergency access to data; cloud switching obligations; and data portability.

This new horizontal regulation has important implications for data regulation, intellectual property and contract law within the European Union.

Summary of key provisions

  • Data sharing obligations – Under the Data Act, several types of data sharing obligations are introduced:

    • Product data and related service data:

      Manufacturers of connected products and suppliers of related services placed on the market in the EU must ensure ‘access by design’ for the user, to “product data and related service data, including the relevant metadata necessary to interpret and use the data”. Such access should be easy, secure, free of charge, in a comprehensive, structured, commonly used and machine-readable format, and, where relevant and technically feasible, directly accessible.

      Product data means “data generated by the use of a connected product that the manufacturer designed to be retrievable, via an electronic communications service, physical connection or on-device access, by a user, data holder or a third party, including, where relevant, the manufacturer”.

      Related service data means “data representing the digitisation of user actions or of events related to the connected product, recorded intentionally by the user or generated as a by-product of the user’s action during the provision of a related service by the provider”.

      In addition, where the user cannot directly access those data, the data holder must make “readily available data” and the relevant metadata necessary to interpret and use those data accessible to the user. Readily available data means “product data and related service data that a data holder lawfully obtains or can lawfully obtain from the connected product or related service, without disproportionate effort going beyond a simple operation”.

      To enable data portability, the data holder also needs to provide the same data to a third party – which cannot be a gatekeeper within the meaning of the Digital Markets Act (“DMA”) – upon the user’s or its representative’s request.

      These data must be made available in a comprehensive, structured, commonly used and machine-readable format; in an easy and secure manner, in the same quality; on a continuous basis and in real-time (where relevant and technically feasible); and free of charge to the user, and if requested by the user, to a third party.

      While the concept of a ‘user’ includes a data subject (within the meaning of GDPR) that owns, rents or leases a product or receives a related service, the notion also includes legal persons and thus also applies in B2B relations. The availability of these IoT data will likely lead to the creation of (new) secondary markets for many organisations. The data obtained cannot, however, be used to compete with the originating product (i.e. on the product’s primary market).

    • Data sharing with public bodies:

      Data holders that are legal persons will be obliged to share the data they hold (and associated metadata) with public bodies, the European Commission, the European Central Bank and Union bodies, where there is an exceptional need to use the requested data, such as in case of public emergencies or the production of official statistics or the mitigation of or recovery from a public emergency. In the latter situation, the public body must have “exhausted all other means at its disposal to obtain such data, including purchase of non-personal data on the market by offering market rates, or by relying on existing obligations to make data available or the adoption of new legislative measures which could guarantee the timely availability of the data”.

      The Data Act lays down a specific procedure for these public emergency data requests which include requiring the public body to specify what data are required; demonstrate the exceptional need for which the data are requested; explain the purpose of the request, the intended use of the data requested, and the duration of that use; state the legal basis for requesting the data; and specify the deadline by which the data are to be made available or within which the data holder may request the public body to modify or withdraw the request.

      The Data Act sets out further requirements regarding the purpose of data use; onward sharing of the data by public bodies; and the processing of personal data and disclosure of trade secrets. While the data in principle need to be provided free of charge, compensation would be possible in certain instances. While the data holder receiving a request for access must provide the data without undue delay, the data holder may decline or seek the modification of the request in certain circumstances.

      Requests may also be directed to data holders established in other Members States, subject to approval of the competent authority of that Member State of establishment.

      The procedure does not apply when public bodies are acting in a law enforcement context.
  • Data sharing agreements – The Data Act also restricts contract freedom in relation to data sharing agreements in B2B relations for cases where data sharing obligations apply.

    Provisions that, to the user’s detriment, derogate or deviate from its rights in relation to the data are not binding.

    Data sharing conditions between data holders and data recipients must be fair, reasonable, transparent and non-discriminatory (FRAND) and – unless upon the user’s request – non-exclusive.

    Even in B2B relations, only a reasonable compensation that considers certain criteria can be agreed between data holders and data recipients.

    In case of conflicts, users and data recipients have access to dispute settlement bodies.

    The Data Act also foresees a prohibition of certain ‘unfair’ data-related terms which have been unilaterally imposed. The mechanism provides for an open norm, alongside prohibited or presumptively prohibited data-related clauses.
  • Cloud switching obligations and contractual requirements – Providers of data processing services (including cloud and edge services) must ensure that their customers can switch to different data processing services of another service provider of the same service type, to an on-premises system or to use several providers at the same time.

    The Data Act prohibits any type of obstacles that inhibit customers, for example, from terminating the service agreement after the maximum notice period, or from porting the customer’s exportable data and digital assets to another provider or on-premises system. In addition, customers may not be prevented from maintaining ‘functional equivalence’ of the service in the IT environment of the different provider(s).

    Furthermore, customer agreements for data processing services are subject to a set of minimum requirements, including the requirement that the rights of the customer and the obligations of the provider of a data processing service in relation to switching between providers of such services shall be clearly set out in a written contract, as well as requirements for transition and notice clauses.

    In addition, a gradual withdrawal of switching and data egress charges within three years after the entry into force is foreseen and, in some situations, technical measures to facilitate switching are made mandatory.
  • Transfer restrictions for non-personal data – As under the Data Governance Act (DGA), the Data Act contains measures to ensure that non-personal data is not transferred to countries outside the European Economic Area (EEA) without sufficient protection of intellectual property rights, trade secrets, confidentiality, and other EU interests. A more detailed analysis of these rules is provided in this article.

    In addition, the Data Act imposes transparency obligations upon providers of data processing services with regard to the jurisdiction in which the IT infrastructure is deployed and the technical and organisational measures adopted to “prevent governmental access to non-personal data held in the EU where such transfer or access would create a conflict with EU or Member State law”.
  • Interoperability and essential requirements – The Data Act sets out several interoperability requirements, including for data processing services and participants in data spaces.

    For participants in data spaces, it provides “essential requirements” for harmonised standards.

    For data processing services, the requirements are similar and also aim to achieve interoperability for the purposes of in-parallel use of data processing services. The new regulation also sets minimum requirements for smart contracts which must be complied with by vendors of applications using smart contracts or, in their absence, ‘deployers’ of certain smart contracts.

    In these cases, the Commission may (or must) request European standardisation organisations to draw up standards complying with these conditions, as well as adopt common specifications based on open interoperability specifications covering the essential requirements by means of an implementing act.
  • Sanctions and enforcement – The Data Act refers to the data protection authorities and their tasks and powers under the GDPR insofar as personal data is concerned. In contrast, for non-personal data, it is still largely left to the Member States to determine which authority (or authorities) they want to assign for supervision and enforcement, the scope of powers awarded to these authorities and to lay down the applicable sanctions, subject to some minimum requirements set out in the Data Act.

    Entities falling within the scope of the Data Act will be subject to the competence of their Member State of establishment or, for entities established in multiple Member States, the Member State of their main establishment (i.e. their head office or registered office). Unlike under the GDPR’s one-stop-shop, for entities that are not established in the EU, the designation of their legal representative will determine which Member State will have competence.

    Where a Member State designates more than one competent authority, it shall designate a “data coordinator” to facilitate cooperation between the competent authorities.

    At EU-level, a special role is reserved for the European Data Innovation Board (EDIB), established under the DGA, with advisory and supporting competences similar to those of the European Data Protection Board (EDPB) under the GDPR. However, the EDIB will have no enforcement role.   

    Natural and legal persons can lodge a complaint and seek judicial remedies for alleged infringements of their rights under the Data Act.

Main discussion points in final negotiations

The adopted Data Act contains a number of changes to the original proposed Act. We have set out a summary of the key changes below:

  • Scope – the type of data that is to be covered by the Data Act has been a controversial point throughout negotiations. In the adopted Data Act, the types of data falling within scope have been clarified. In particular, in relation to IoT data, the adopted Data Act focuses on the functionalities of the data collected by connected products instead of the products themselves. A number of definitions have been added with the aim of either aligning the text with existing legislation, such as the Data Governance Act and the Digital Markets Act, or to clarify key concepts.
  • Interplay with existing legislation – In the adopted Data Act a number of changes have been introduced throughout in order to address the relationship between the Data Act and other relevant legislation, such as the GDPR. These changes aim to clarify the interplay between the legislations when both personal and non-personal data is included in the request, including the role of different regulators within these areas.
  • Trade secrets and intellectual property rights – One of the most negotiated areas of the Data Act has been the data sharing obligations and the balance between the protection of trade secrets and intellectual property rights and the objectives of the Data Act. In particular, there has been concern surrounding data access obligations and whether a data holder can prevent the disclosure of sensitive commercial information. Further safeguards in relation to third parties have been added and the text now states that, under certain conditions, data holders have the right to reject data access requests with a view to protecting trade secrets.
  • Sui generis right to databases – according to the Explanatory Memorandum of the Data Act, the evaluation of the Database Directive (No. 96/9/EC) pointed out that legal uncertainty remains around the application of the sui generis right to databases composed of machine-generated data. As the sui generis right of the Database Directive aims to protect the investments in the collection, and not “the creation of data as a by-product of another economic activity”, the Data Act states that sui generis database right protection does not apply to databases containing data from, or generated by, the use of devices connected to IoT. The Explanatory Memorandum of the Data Act states that Chapter X aims to contribute to legal certainty in cases where the protection of the sui generis right was previously unclear.
  • Functional equivalence – one of the specific objectives of the Data Act is to facilitate switching between cloud and edge services. In particular, the proposed Data Act required that customers maintain functional equivalence (a minimum level of functionality) of the service after they have switched to another service provider. This requirement attracted criticism from commentators, who argued that the scope of the concept was too wide and caused legal uncertainty – for example, by potentially requiring the cloud provider of origin to take responsibility for the performance of a competitor’s service. The final approved version of the Data Act, states that “functional equivalence should not be understood to oblige the source provider of data processing services to rebuild the service in question within the infrastructure of the destination provider of data processing services. Instead, the source provider of data processing services should take all reasonable measures within its power to facilitate the process of achieving functional equivalence through the provision of capabilities, adequate information, documentation, technical support and, where appropriate, the necessary tools”.

Next steps

The Data Act will enter into force after its publication in the Official Journal, which is expected to take place soon. Most of the provisions within the Data Act will become applicable 20 months after its entry into force (likely by August 2025). Organisations should therefore start reviewing their data sharing agreements. 

These rules complement the framework for re-using and sharing of data under the DGA which entered into force on 23 June 2022 and, following a 15-month grace period, is applicable since September 2023.

For a broader picture on the legal initiatives of the EU Data Strategy, we refer to our previous blog posts – “EU Regulatory Data Protection: Many pieces to the regulatory framework puzzle” and “Who’s who under the DMA, DSA, DGA and Data Act?’”.

For further information, please get in touch with your usual DLA Piper contact.

EU: New EDPB guidelines on the scope of the ‘cookie rule’ https://privacymatters.dlapiper.com/2023/11/eu-new-edpb-guidelines-on-the-scope-of-the-cookie-rule/ Wed, 22 Nov 2023 09:49:30 +0000

The European Data Protection Board has published new guidelines (14 November 2023) on the scope of Article 5(3) of the e-Privacy Directive – i.e., the so-called ‘cookie rule’.

These guidelines apply a maximalist interpretation to the cookie rule, meaning that a wide variety of technologies other than traditional cookies are, in the opinion of the EDPB, caught by the rule. Where a technology is caught then, depending on the purpose for which the technology is used, its use will be conditional upon obtaining consent.

The guidelines are open for public consultation until 28 December 2023.

Background

By way of reminder, Article 5(3) of the e-Privacy Directive creates a requirement to obtain prior consent where a company stores information, or gains access to information already stored, in the terminal equipment of a subscriber or user of an electronic communications network, and that storing of or access to information is not strictly necessary to deliver the service requested by the subscriber or user. As such, the Directive seeks to protect what it regards as the ‘private sphere’ of the user’s terminal equipment from unwanted intrusion.

Historically it has been well-understood that traditional internet cookies trigger this rule. They function by creating a file on the user’s computer which stores information. Later, if the user returns to the website, the information in the file stored on the user’s computer is accessed (e.g., to verify someone’s language preference). 
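
As a simple illustration of that mechanism, the sketch below uses Python’s standard http.cookies module to build the response header that stores a language preference on the user’s device, and then reads the value back when the browser returns it on a later visit. The cookie name and values are invented for the example:

```python
from http.cookies import SimpleCookie

# First visit: the server's response stores the preference on the user's device.
response_cookie = SimpleCookie()
response_cookie["lang"] = "fr"
response_cookie["lang"]["max-age"] = 60 * 60 * 24 * 30   # keep the value for 30 days
print(response_cookie.output())   # e.g. "Set-Cookie: lang=fr; Max-Age=2592000"

# Later visit: the browser sends the stored value back in the Cookie header,
# and the server reads it to apply the saved language preference.
returned = SimpleCookie()
returned.load("lang=fr")
print(returned["lang"].value)     # -> "fr"
```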

However, the extent to which newer methods of tracking a user’s digital footprint – such as pixels, URL tracking and JavaScript code – also trigger this rule has, to date, been much less clear.

How does the EDPB interpret the ‘cookie rule’?

In a word: broadly. For each part of the relevant test under the cookie-rule – the nature of information; what constitutes terminal equipment; and what it means to gain access to or store such information – the EDPB applies a wide reading. For example:

  • It does not matter how long information is stored on terminal equipment – the ephemeral storage of any information (for example, in RAM or CPU cache) is sufficient.
  • The nature and volume of information stored or accessed is also irrelevant. Note that it is also irrelevant whether the information is personal data (albeit this much was already well-understood prior to the guidelines).
  • Perhaps most controversially, the EDPB also suggests that it may not matter who gives the instruction to transmit information to the accessing entity – the proactive sending of information by the terminal equipment might also be caught.

Which technologies are caught?

The upshot of this interpretation is that the EDPB considers, in most cases, that the use of the following technologies will trigger the cookie rule:

  • URL and pixel tracking: for example, tracking pixels used to ascertain whether an email has been opened, or tracking links used by websites to identify the origin of traffic to the website, such as for marketing attribution.
  • Local processing: for example, using an API on a website to remotely access locally generated information.
  • Tracking based on IP only: for example, the transmission of a static outbound IPv4 address originating from a user’s router, used to track a user across multiple domains for online advertising purposes.
  • Internet of Things (IoT) reporting: for example, smart household devices transmitting information to a remote server controlled by the manufacturer, whether directly or via intermediary equipment (such as a mobile phone).
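
Taking the first item in the list above as an example, a tracking pixel is typically nothing more than a 1x1 image whose URL carries identifiers, so that the request made when the image loads reveals that a particular recipient opened a message or visited a page. The sketch below builds such a tag; the endpoint and parameter names are invented for illustration:

```python
from urllib.parse import urlencode

# Hypothetical collection endpoint - purely illustrative.
PIXEL_ENDPOINT = "https://tracker.example.com/open.gif"

def tracking_pixel_html(campaign_id: str, recipient_id: str) -> str:
    """Build the 1x1 image tag typically embedded in a marketing email.
    When the mail client fetches the image, the request discloses that
    (and roughly when, and from which IP address) the message was opened."""
    query = urlencode({"c": campaign_id, "r": recipient_id})
    return f'<img src="{PIXEL_ENDPOINT}?{query}" width="1" height="1" alt="">'

print(tracking_pixel_html("spring-sale", "recipient-123"))
```

On the EDPB’s broad reading summarised above, instructing the device to make that request can amount to gaining access to information on the user’s terminal equipment, which is why, in the Board’s view, consent will generally be required.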

What are the practical implications?

If a technology is caught by the cookie rule, then the company deploying that technology must obtain prior, opt-in consent before accessing or storing the information, unless the company can demonstrate that the storage of, or access to, the information is strictly necessary for the purpose of delivering the digital service. 

It is probably fair to say that this does not consistently happen in practice as of today. The practicalities of obtaining consent may also be challenging, depending on the context in which the technology is used. From the user’s perspective, questions of ‘consent fatigue’, in a world in which users are already bombarded with cookie consent pop-ups, also arise.

Responses to the EDPB’s consultation on the draft guidelines will make for interesting reading. Even when finalised, the guidelines will represent the EU data protection authorities’ interpretation of the law and are not directly binding law in their own right. Certainly, many of these points would form the basis for an interesting legal challenge before the European courts. In the meantime, however, businesses operating in the EU are advised to start preparing for a world where the scope of the cookie rule, as applied by the regulator, is much broader than they may previously have realised.

Belgian DPA decides on the (in)validity of retroactive data processing agreements https://privacymatters.dlapiper.com/2023/11/belgian-dpa-decides-on-the-invalidity-of-retroactive-data-processing-agreements/ Mon, 06 Nov 2023 13:22:33 +0000

Authors: Heidi Waem, Muhammed Demircan, Nicolas Becker

On 29 September 2023, the Belgian Data Protection Authority (Belgian DPA) issued a decision imposing a reprimand on a public authority and its processor for various infringements of the GDPR, including the lack of a timely signed data processing agreement between the public authority – who is a controller within the meaning of Article 4 GDPR – and its processor. Additionally, the public authority fell short in providing adequate information to data subjects regarding the personal data processing activities it conducts.

This case stands out as a powerful reminder of the paramount importance of GDPR compliance, particularly in the context of data processing agreements between data controllers and processors. These agreements should be in place before any personal data processing activities commence, as confirmed by the Belgian DPA. The case also clarifies the exemption from the obligation to inform data subjects about the processing of their personal data, as set out in Article 14(5)c GDPR, which applies when Member State or European Union law expressly lays down the collection or disclosure of the personal data from those data subjects.

In this blogpost, we will briefly delve into the facts of the case, the findings of the Inspection Service, the subsequent determinations made by the Litigation Chamber and present you the key takeaways from this case.

Facts

The case was initiated on 4 September 2020, with a complaint to the Belgian DPA. The complainant received a parking fine on 20 May 2020, and the communication about the fine was sent to his home address and contained the complainant’s name, address and license plate number. After consulting with the public authority who issued the parking fine, the complainant learned that a third-party service provider was processing the personal data of the complainant, both for the establishment and the collection of the fine. The public authority informed the complainant that there was no data processing agreement in place between the public authority and the third-party service provider at the date when the fine was issued to him.

The public authority signed a “Personal Data Processing Agreement” at a later stage, on 27 July 2020, which included a clause confirming the retroactive application of the agreement as of the application date of the GDPR on 25 May 2018. In other words, this data processing agreement was deemed effective by the parties as of the GDPR’s application date.

The Findings of the Inspection Service

On 11 May 2021, the Inspection Service of the Belgian DPA issued its report and established the following key findings:

  • The fact that there was no data processing agreement in place between the public authority and the third-party service provider on the date when the personal data of the complainant was processed by the third-party service provider constitutes a breach of Article 28.3 GDPR.
  • The retroactivity clause cannot prejudice the rights of third parties, in particular those of the complainant.
  • The public authority could not benefit from the exemption to the transparency obligation foreseen in Article 14(5)c GDPR (“obtaining or disclosure is expressly laid down by Union or Member State law to which the controller is subject and which provides appropriate measures to protect the data subject’s legitimate interests”). Indeed, the legal framework for imposing a parking fee was considered insufficient to qualify for the exemption provided in Article 14(5)c GDPR, as it neither mandated the public authority to process the personal data of the complainant for fine collection nor specified measures to protect the legitimate interests of the data subject.

The Findings of the Litigation Chamber

On 9 July 2021, the case was referred to the Litigation Chamber of the Belgian DPA. The Litigation Chamber rendered its decision on 29 September 2023 and came to the following conclusions:

  • With regard to the breach of Article 28.3 GDPR (requirement to enter into a data processing agreement), the Litigation Chamber agreed with the Inspection Service and indicated that i) both controllers and processors are responsible for ensuring that a legally binding agreement governs the processing activities, ii) in the absence of such an agreement, both controllers and processors can be fined by the competent supervisory authority, iii) a retroactive clause in a data processing agreement to cover past processing activities does not compensate for the absence of the data processing agreement at the time of the processing activities and accepting such retroactive clauses would de facto allow a circumvention of Article 28.3 GDPR, iv) retroactive clauses cannot guarantee the rights and freedoms of the data subjects as processing activities were not governed by a legally binding agreement at the time of past processing activities. Therefore, both the controller and the processor were found to be in breach of Article 28.3 GDPR.
  • With regard to the breach of Article 12.1 and 14(5)c GDPR (transparency obligation), the Litigation Chamber first analysed the conditions of the exemption foreseen in Article 14(5)c GDPR and noted a significant difference between the French version of the GDPR and the Dutch/English versions. Basically, the Litigation Chamber agreed with the Dutch/English versions of the text and indicated that the exemption applies when collection and disclosure of data is provided by member state or European Union law.
  • The Litigation Chamber additionally indicated that this exemption in Article 14(5)c GDPR shall be interpreted restrictively since it constitutes an exception to the data subject’s right to information. The Litigation Chamber correctly emphasises that this exemption could prevent data subjects from being informed about the existence of their data subject rights in general, while these other rights are not subject to a similar exception. Additionally, the Litigation Chamber explained that the laws providing for such an exemption should be particularly clear and shall cover all the data processed by data controllers.

Consequently, the Litigation Chamber found that (i) the national legislation which the public authority referred to as the basis for the Article 14(5)c exemption did not cover all the data that had been processed by the public authority and the third-party service provider, and (ii) that legislation did not provide any appropriate measure to protect the interests of the data subjects. In conclusion, the public authority was found in breach of Articles 14 and 12.1 GDPR.

Key Takeaways

The decision at hand again stresses the importance of prioritising data processing agreements when controllers engage with processors and vice versa. Due to the urgencies of business needs, data processing agreements may sometimes be perceived as a lower priority that can be handled after vendors start delivering services. However, this decision reminds us of the importance of consistent and timely compliance with the GDPR, especially in the framework of controller to processor engagements.

Furthermore, the decision also reminds us that exemptions to the GDPR obligations must be interpreted restrictively, and controllers must carefully analyse whether all specific conditions of exemptions – that may not be expressly stated in the GDPR – are fulfilled.

The full decision can be consulted here (in French).

]]>