Privacy Matters – DLA Piper's Global Privacy and Data Protection Resource

EU: DLA Piper GDPR Fines and Data Breach Survey: January 2025
https://privacymatters.dlapiper.com/2025/01/eu-dla-piper-gdpr-fines-and-data-breach-survey-january-2025/ (Tue, 21 Jan 2025)

The seventh annual edition of DLA Piper's GDPR Fines and Data Breach Survey has revealed another significant year in data privacy enforcement, with an aggregate total of EUR1.2 billion (USD1.26 billion/GBP996 million) in fines issued across Europe in 2024.

Ireland once again remains the preeminent enforcer, having issued EUR3.5 billion (USD3.7 billion/GBP2.91 billion) in fines since May 2018, more than four times the value of fines issued by the second-placed Luxembourg Data Protection Authority, which has issued EUR746.38 million (USD784 million/GBP619 million) in fines over the same period.

The total fines reported since the application of GDPR in 2018 now stand at EUR5.88 billion (USD 6.17 billion/GBP 4.88 billion). The largest fine ever imposed under the GDPR remains the EUR1.2 billion (USD1.26 billion/GBP996 million) penalty issued by the Irish DPC against Meta Platforms Ireland Limited in 2023.

Trends and Insights

In the year from 28 January 2024, EUR1.2 billion in fines was imposed. This was a 33% decrease compared to the aggregate fines imposed in the previous year, bucking the seven-year trend of increasing enforcement. This does not represent a shift in focus away from personal data enforcement; the clear year-on-year trend remains upwards. This year's reduction is almost entirely due to the record-breaking EUR1.2 billion fine against Meta falling in 2023, which skewed the 2023 figures. There was no record-breaking fine in 2024.

Big tech companies and social media giants continue to be the primary targets for record fines, with nearly all of the top 10 largest fines since 2018 imposed on this sector. This year alone the Irish Data Protection Commission issued fines of EUR310 million (USD326 million/GBP257 million) against LinkedIn and EUR251 million (USD264 million/GBP208 million) against Meta.  In August 2024, the Dutch Data Protection Authority issued a fine of EUR290 million (USD305 million/GBP241 million) against a well-known ride-hailing app in relation to transfers of personal data to a third country. 

2024 enforcement expanded notably in other sectors, including financial services and energy. For example, the Spanish Data Protection Authority issued two fines totalling EUR6.2 million  (USD6.5 million/GBP5.1 million) against a large bank for inadequate security measures, and the Italian Data Protection Authority fined a utility provider EUR5 million (USD5.25 million/GBP4.15 million) for using outdated customer data.

The UK was an outlier in 2024, issuing very few fines. The UK Information Commissioner, John Edwards, was quoted in the British press in November 2024 as saying that he does not agree that fines are likely to have the greatest impact and that they would tie his office up in years of litigation. It is an approach that seems unlikely to catch on in the rest of Europe.

The dawn of personal liability

Perhaps most significantly, a focus on governance and oversight has led to a number of enforcement decisions citing failings in these areas and specifically calling out failings of management bodies. Most notably, the Dutch Data Protection Authority announced it is investigating whether it can hold the directors of Clearview AI personally liable for numerous breaches of the GDPR, following a EUR30.5 million (USD32.03 million/GBP25.32 million) fine against the company. This novel investigation into the possibility of holding Clearview AI's management personally liable for the continued failings of the company signals a potentially significant shift in focus by regulators, who recognise the power of personal liability to focus minds and drive better compliance.

Data Breach Notifications

The average number of breach notifications per day increased slightly to 363 from 335 last year, a ‘levelling off’ consistent with previous years, likely indicative of organisations becoming more wary of reporting data breaches given the risk of investigations, enforcement, fines and compensation claims that may follow notification. 

A recurring theme of DLA Piper's previous annual surveys is that there has been little change at the top of the tables for the total number of data breach notifications made, both since the GDPR came into force on 25 May 2018 and during the most recent full year from 28 January 2024 to 27 January 2025. The Netherlands, Germany and Poland remain the top three countries for the highest number of data breaches notified, with 33,471, 27,829 and 14,286 breaches notified respectively.

AI enforcement

There have been a number of decisions this year signalling the intent of data protection supervisory authorities to closely scrutinise the operation of AI technologies and their alignment with privacy and data protection laws. For businesses, this highlights the need to integrate GDPR compliance into the core design and functionality of their AI systems.

Commenting on the survey findings, Ross McKean, Chair of the UK Data, Privacy and Cybersecurity practice, said:

“European regulators have signalled a more assertive approach to enforcement during 2024 to ensure that AI training, deployment and use remains within the guard rails of the GDPR.”

We expect this trend to continue during 2025 as US AI technology comes up against European data protection laws.

John Magee, Global Co-Chair of DLA Piper’s Data, Privacy and Cybersecurity practice commented:

“The headline figures in this year’s survey have, for the first time ever, not broken any records so you may be forgiven for assuming a cooling of interest and enforcement by Europe’s data regulators. This couldn’t be further from the truth. From growing enforcement in sectors away from big tech and social media, to the use of the GDPR as an incumbent guardrail for AI enforcement as AI specific regulation falls into place, to significant fines across the likes of Germany, Italy and the Netherlands, and the UK’s shift away from fine-first enforcement – GDPR enforcement remains a dynamic and evolving arena.”

Ross McKean added:

“For me, I will mostly remember 2024 as the year that GDPR enforcement got personal.”

“As the Dutch DPA champions personal liability for the management of Clearview AI, 2025 may well be the year that regulators pivot more to naming and shaming and personal liability to drive data compliance.”

EU: EDPB Opinion on AI Provides Important Guidance though Many Questions Remain
https://privacymatters.dlapiper.com/2025/01/eu-edpb-opinion-on-ai-provides-important-guidance-though-many-questions-remain/ (Tue, 14 Jan 2025)

A much-anticipated Opinion from the European Data Protection Board (EDPB) on AI models and data protection has not resulted in the clear or definitive guidance that businesses operating in the EU had hoped for. The Opinion emphasises the need for case-by-case assessments to determine GDPR applicability, highlighting the importance of accountability and record-keeping, while also flagging 'legitimate interests' as an appropriate legal basis under specific conditions. In rejecting the proposed Hamburg thesis, the EDPB has stated that AI models trained on personal data should be considered anonymous only if personal data cannot be extracted or regurgitated.

Introduction

On 17 December 2024, the EDPB published a much-anticipated Opinion on AI models and data protection.  The Opinion includes the EDPB’s view on the following key questions: does the development and use of an AI model involve the processing of personal data; and if so, what is the correct legal basis for that processing?

As is sometimes the case with EDPB Opinions, which necessarily represent the consensus view of the supervisory authorities of 27 different Member States, the Opinion does not provide many clear or definitive answers.  Instead, the EDPB offers indicative guidance and criteria, calling for case-by-case assessments of AI models to understand whether, and how, they are impacted by the GDPR.  In this context, the Opinion repeatedly highlights the importance of accountability and record-keeping by businesses developing or using AI, so that the applicability of data protection laws, and the business’ compliance with those laws, can be properly assessed. 

Whilst the equivocation of the Opinion might be viewed as unhelpful by European businesses looking for regulatory certainty, it is also a reflection of the complexities inherent in this intersection of law and technology.

In summary, the answers given by the EDPB to the four questions in the Opinion are as follows:

  1. Can an AI model, which has been trained using personal data, be considered anonymous?  Yes, but only in some cases.  It must be impossible, using all means reasonably likely to be used, to obtain personal data from the model, either through attacks which aim to extract the original training data from the model itself, or through interactions with the AI model (i.e., personal data provided in responses to prompts / queries). 
  2. Is ‘legitimate interests’ an appropriate legal basis for the training and development of an AI model? In principle yes, but only where the processing of personal data is necessary to develop the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  In particular, the issue of data minimisation, and the related issue of web-scraping / indiscriminate capture of data, will be relevant here. 
  3. Is ‘legitimate interests’ an appropriate legal basis for the deployment of an AI model? In principle yes, but only where the processing of personal data is necessary to deploy the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  Here, the impact on the data subject of the use of the AI model is of predominant importance.
  4. If an AI Model has been found to have been created, updated or developed using unlawfully processed personal data, how does this impact the subsequent use of that AI model?  This depends in part on whether the AI model was first anonymised before being disclosed to the deployer of that model (see Question 1).  Otherwise, the deployer of the model may need to assess the lawfulness of the development of the model as part of its accountability obligations.

Background

The Opinion was issued by the EDPB under Article 64 of the GDPR, in response to a request from the Irish Data Protection Commission.  Article 64 requires the EDPB to publish an opinion on matters of ‘general application’ or which ‘produce effects in more than one Member State’. 

In this case, the Irish DPC asked the EDPB to provide an opinion on the above-mentioned questions – a request that is not surprising given both the general importance of AI models to businesses across the EU and the large number of technology companies developing those models that have established their European operations in Ireland.

In order to understand the Opinion, it helps to be familiar with certain concepts and terminology relating to AI. 

First, the Opinion distinguishes between an 'AI system' and an 'AI model'. For the former, the EDPB relies on the definition given in the EU AI Act. In short: a machine-based system operating with some degree of autonomy that infers, from inputs, how to produce outputs such as predictions, content, recommendations, or decisions. An AI model, meanwhile, is a component part of an AI system. Colloquially, it is the 'brain' of the AI system – an algorithm, or series of algorithms (such as in the form of a neural network), that recognises patterns in data. AI models require the addition of further components, such as a user interface, to become AI systems. To take a common example – the generative AI system known as ChatGPT is a software application comprising an AI model (the GPT Large Language Model) connected to a chatbot-style user interface that allows the user to submit queries (or 'prompts') to the model in the form of natural language questions. Whilst the Opinion is notionally concerned only with AI models, at times the Opinion appears to blur the distinction between the model and the system, in particular when discussing the significance of model outputs that are only rendered comprehensible to the user through an interface that sits outside of the model.

Second, the Opinion relies on an understanding of a typical ‘AI lifecycle’, pursuant to which an AI model is first developed by training the model on large volumes of data.  This training may happen in a number of phases which become increasingly refined (referred to as ‘fine-tuning’). Only after an AI model is developed can it be used, or ‘deployed’, in a live setting, as part of an AI system.  Often, the developer of an AI model will not be the same person as the deployer.  This is relevant because the Opinion variously addresses both development and deployment phases.

The significance of the ‘Hamburg thesis’

With respect to the key question of whether AI models can be considered anonymous, the Opinion follows in the wake of a much-discussed paper published in July 2024 by the data protection authority for the German state of Hamburg.  The paper took the position that AI models (specifically, Large Language Models) are, in isolation, anonymous – they do not involve the processing of personal data. 

In order to reach that conclusion, the paper decoupled the model itself from: (i) the prior training of the model (which may involve the collection and further processing of personal data as part of the training dataset); and (ii) the subsequent use of the model, whereby a prompt/input may contain personal data, and an output may be used in a way that means it constitutes personal data.

Looking only at the AI model itself, the paper decided that the tokens and values which make up the ‘inner workings’ of a typical AI model do not, in any meaningful way, relate to or correspond with information about identifiable individuals.  Consequently, the model itself was found to be anonymous, even if the development and use of the model involves the processing of personal data. 

The Hamburg thesis was welcomed for several reasons, not least because it resolved difficult questions such as how data subject rights could be understood in relation to an AI model (if someone asks for their personal data to be deleted, then what can this mean in the context of an AI model?), and the question of the lawful basis for ‘storing’ personal data in an AI model (as distinct from the lawful basis for collecting and preparing data to train the model).

However, as we go on to explain, the EDPB Opinion does not follow the relatively simple and certain framework presented by the Hamburg thesis.  Instead, it introduces uncertainty by asserting that there are, in fact, scenarios where an AI model contains personal data, but that this must be determined on a case-by-case basis.

Are AI models anonymous?

First, the Opinion is only concerned with AI models that have been trained using personal data.  Therefore, AI models trained using solely non-personal data (such as statistical data, or financial data relating to businesses) can, for the avoidance of doubt, be considered anonymous.  However, in this context the broad scope of ‘personal data’ under the GDPR must be remembered, and the Opinion does not suggest any de minimis level of personal data that needs to be involved in the training of the AI model for the question of GDPR applicability to arise.

Where personal data is used in the training phase, the next question is whether the model is specifically designed to provide personal data regarding individuals whose personal data were used to train the model. If so, the AI model will not be anonymous. Examples include an AI model trained to provide a user, on request, with biographical information and contact details for directors of public companies, or a generative AI model trained on the voice recordings of famous singers so that it can, in turn, mimic the voices of those singers. In each case, the model is trained on personal data of specific individuals in order to be able to produce other personal data about those individuals as an output.

Finally, there is the intermediary case of AI models that are trained on personal data, but that are not designed to provide personal data related to the training data as an output.  It is this use case that the Opinion focuses on.  The conclusion is that AI models in this category may be anonymous, but only if the developer of the model can demonstrate that information about individuals whose personal data was used to train the model cannot be ‘obtained from’ the model, using all means reasonably likely to be used.  Notwithstanding that personal data used for training the model no longer exists within the model in its original form (but rather it is “represented through mathematical objects“), that information is, in the eyes of the EDPB, still capable of constituting personal data.

The following question then arises: how does someone 'obtain' personal data from an AI model? In short, the Opinion posits two possibilities. The first is that training data is 'extracted' via deliberate attacks. The Opinion refers to an evolving field of research in this area and makes reference to techniques such as 'model inversion', 'reconstruction attacks', and 'attribute and membership inference'. These are techniques that can be deployed to trick the model into revealing training data, or otherwise reconstruct that training data, in some cases relying on privileged access to the model itself. The second is the risk of accidental or inadvertent 'regurgitation' of personal data as part of an AI model's outputs.
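To make the first category more concrete, the sketch below shows a highly simplified loss-threshold membership inference test, one of the attack families the Opinion alludes to. The model interface, threshold and data handling are assumptions for illustration only; real attacks are considerably more sophisticated (for example, calibrating thresholds with shadow models).

```python
# A minimal, illustrative sketch of a loss-threshold membership inference test.
# The scikit-learn-style predict_proba interface and the threshold are assumptions,
# not details taken from the EDPB Opinion or any particular AI model.
import numpy as np

def per_example_loss(model, examples, labels):
    """Cross-entropy loss per example; unusually low loss can hint that an
    example was part of the training set."""
    probs = model.predict_proba(examples)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def likely_training_members(model, examples, labels, threshold=0.5):
    """Flag examples whose loss falls below the threshold as likely members."""
    return per_example_loss(model, examples, labels) < threshold
```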

Consequently, a developer must be able to demonstrate that its AI model is resistant both to attacks that extract personal data directly from the model, as well as to the risk of regurgitation of personal data in response to queries:  “In sum, the EDPB considers that, for an AI model to be considered anonymous, using reasonable means, both (i) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to train the model; as well as (ii) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant for any data subject“. 

Which criteria should be used to evaluate whether an AI model is anonymous?

Recognising the uncertainty in its conclusion that AI models may or may not be anonymous, the EDPB provides a list of criteria that can be used to assess the likelihood of a model being found to contain personal data. These include:

  • Steps taken to avoid or limit the collection of personal data during the training phase.
  • Data minimisation or masking measures (e.g., pseudonymisation) applied to reduce the volume and sensitivity of personal data used during the training phase.
  • The use of methodologies during model development that reduce privacy risks (e.g., regularisation methods to improve model generalisation and reduce overfitting, and appropriate and effective privacy-preserving techniques, such as differential privacy – see the sketch after this list).
  • Measures that reduce the likelihood of obtaining personal data from queries (e.g., ensuring the AI system blocks the presentation to the user of outputs that may contain personal data).
  • Document-based audits (internal or external) undertaken by the model developer that include an evaluation of the chosen measures and of their impact to limit the likelihood of identification.
  • Testing of the model to demonstrate its resilience to different forms of data extraction attacks.
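As an illustration of the 'privacy-preserving techniques' criterion above, the following is a minimal sketch of differentially private gradient averaging in the style of DP-SGD: per-example gradients are clipped and Gaussian noise is added so that no single individual's data dominates a model update. The clip norm and noise multiplier are placeholder assumptions, not values endorsed by the EDPB.

```python
# A minimal sketch of DP-SGD-style noise addition; parameter values are illustrative.
import numpy as np

def dp_average_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Clip each example's gradient to clip_norm, average, then add Gaussian
    noise calibrated to the clipping bound."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise_std = noise_multiplier * clip_norm / len(clipped)
    return mean_grad + np.random.normal(0.0, noise_std, size=mean_grad.shape)
```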

What is the correct legal basis for AI models?

When using personal data to train an AI model, the preferred legal basis is normally the ‘legitimate interests’ of the controller, under Article 6(1)(f) GDPR. This is for practical reasons. Whilst, in some circumstances, it may be possible to obtain GDPR-compliant consent from individuals authorising the use of their data for AI training purposes, in most cases this will not be feasible. 

Helpfully, the Opinion accepts that legitimate interests is, in principle, a viable legal basis for processing personal data to train an AI model. Further, the Opinion also suggests that it should be straightforward for businesses to identify a lawful legitimate interest. For example, the Opinion cites “developing an AI system to detect fraudulent content or behaviour” as a sufficiently precise and real interest. 

However, where businesses may have more difficulty is in showing that the processing of personal data is necessary to realise their legitimate interest, and that their legitimate interest is not outweighed by any impact on the rights and freedoms of data subjects (the ‘balancing test’). Whilst this is fundamentally just a restatement of existing legal principles, the following sentence should nevertheless cause some concern for businesses developing AI models, in particular Large Language Models: “If the pursuit of the purpose is also possible through an AI model that does not entail processing of personal data, then processing personal data should be considered as not necessary“. Technically speaking, it may often be the case that personal data is not essential for the training of an AI model – however, this does not mean that it is straightforward to systematically remove all personal data from a training dataset, or otherwise replace all identifying elements with ‘dummy’ values. 
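To illustrate why 'removing all personal data' from a training corpus is harder than it sounds, here is a deliberately naive masking sketch; the regex patterns and placeholder tokens are assumptions for illustration, and a handful of patterns like these would miss names, addresses and many other identifiers, which is precisely the practical difficulty the Opinion's necessity test creates.

```python
# A deliberately simple sketch of replacing identifiers with 'dummy' values.
# Patterns and placeholders are illustrative assumptions; production pipelines
# typically combine dedicated PII-detection tooling with human review.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def mask_identifiers(text: str) -> str:
    """Replace matched identifiers with typed placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_identifiers("Contact jane.doe@example.com or +49 30 1234567."))
# -> Contact [EMAIL] or [PHONE].
```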

With respect to the balancing test, the EDPB asks businesses to consider a data subject's interest in self-determination and in maintaining control over their own data when considering whether it is lawful to collect personal data for model training purposes. In particular, it may be more difficult to satisfy the balancing test if a developer is scraping large volumes of personal data (especially including any sensitive data categories) against data subjects' wishes, without their knowledge, or otherwise in contexts that would not be reasonably expected by the data subject.

When it comes to the separate purpose of deploying an AI model, the EDPB asks businesses to consider the impact on the data subject's fundamental rights arising from the purpose for which the AI model is used. For example, AI models that are used to block content publication may adversely affect a data subject's fundamental right to freedom of expression. Conversely, the EDPB recognises that the deployment of AI models may have a positive impact on a data subject's rights and freedoms – for example, an AI model that is used to improve accessibility to certain services for people with disabilities. In line with Recital 47 GDPR, the EDPB reminds controllers to consider the 'reasonable expectations' of data subjects in relation to both training and deployment uses of personal data.

Finally, the Opinion discusses a range of ‘mitigating measures’ that may be used to reduce risks to data subjects and therefore tip the balancing test in favour of the controller.  These include:

  • Technical measures to reduce the volume or sensitivity of personal data used (e.g., pseudonymisation, masking).
  • Measures to facilitate the exercise of data subject rights (e.g., providing an unconditional right for data subjects to opt-out of the use of their personal data for training or deploying the model; allowing a reasonable period of time to elapse between collection of training data and its use).
  • Transparency measures (e.g., public communications about the controller’s practices in connection with the use of personal data for AI model development).
  • Measures specific to web-scraping (e.g., excluding publications that present particular risks; excluding certain data categories or sources; excluding websites that clearly object to web scraping).

Notably, the EDPB observes that, to be effective, these mitigating measures must go beyond mere compliance with GDPR obligations (for example, providing a GDPR-compliant privacy notice, which a controller would in any case be required to do, would not be an effective transparency measure for these purposes). 
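As one concrete illustration of the web-scraping measures listed above (specifically, excluding websites that clearly object to scraping), the sketch below checks a site's robots.txt before collection. Treating robots.txt as the signal of objection is an assumption made for this example; the Opinion does not prescribe any particular mechanism.

```python
# A minimal sketch: skip URLs whose site's robots.txt disallows fetching them.
# The user-agent string is a placeholder assumption.
from urllib.parse import urljoin, urlparse
from urllib.robotparser import RobotFileParser

def may_scrape(url: str, user_agent: str = "example-research-bot") -> bool:
    """Return False when robots.txt objects, or when it cannot be read."""
    parts = urlparse(url)
    parser = RobotFileParser()
    parser.set_url(urljoin(f"{parts.scheme}://{parts.netloc}", "/robots.txt"))
    try:
        parser.read()
    except OSError:
        return False  # conservative default when robots.txt is unreachable
    return parser.can_fetch(user_agent, url)
```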

When are companies liable for non-compliant AI models?

In its final question, the DPC sought clarification from the EDPB on how a deployer of an AI model might be impacted by any unlawful processing of personal data in the development phase of the AI model. 

According to the EDPB, such ‘upstream’ unlawful processing may impact a subsequent deployer of an AI model in the following ways:

  • Corrective measures taken against the developer may have a knock-on effect on the deployer – for example, if the developer is ordered to delete personal data unlawfully collected for training purposes, the deployer would not be allowed to subsequently process this data. However, this raises an important practical question about how such data could be identified in, and deleted from, the AI model, taking into account the fact that the model does not retain training data in its original form.
  • Unlawful processing in the development phase may impact the legal basis for the deployment of the model – in particular, if the deployer of the AI model is relying on ‘legitimate interests’, it will be more difficult to satisfy the balancing test in light of the deficiencies associated with the collection and use of the training data.

In light of these risks, the EDPB recommends that deployers take reasonable steps to assess the developer's compliance with data protection laws during the training phase.  For example, can the developer explain the sources of data used, steps taken to comply with the minimisation principle, and any legitimate interest assessments conducted for the training phase?  For certain AI models, the transparency obligations imposed in relation to AI systems under the AI Act should assist a deployer in obtaining this information from a third-party AI model developer. While the Opinion provides a useful framework for assessing GDPR issues with AI systems, businesses operating in the EU may be frustrated with the lack of certainty or definitive guidance on many key questions relating to this new era of technology innovation.

EU: Cyber Resilience Act published in EU Official Journal
https://privacymatters.dlapiper.com/2024/11/eu-cyber-resilience-act-published-in-eu-official-journal/ (Thu, 21 Nov 2024)

On 20 November 2024, the EU Cyber Resilience Act (CRA) was published in the Official Journal of the EU, kicking off the phased implementation of the CRA obligations.

What is the CRA?

The CRA is a harmonising EU regulation, the first of its kind focusing on safeguarding consumers and businesses from cybersecurity threats.  It is a key element of the EU’s Cybersecurity Strategy for the Digital Decade.

The CRA is designed to fill a perceived gap in EU regulation and sets uniform cybersecurity standards for the design, development and production of hardware and software products with digital elements (PDEs) placed on the EU market – introducing mandatory requirements (e.g. relating to security vulnerabilities and transparency) for manufacturers and retailers, extending throughout the product lifecycle.  With few exceptions for specific categories, the CRA covers all products connected directly or indirectly to other devices or networks.

Scope of the CRA

The CRA applies to all economic operators of PDEs made available on the EU market. This includes:

  • manufacturers (and their authorised representatives);
  • importers;
  • distributors; and
  • any other natural or legal person subject to obligations in relation to the manufacture of PDEs or making them available on the market (including retailers).

The reach of the CRA is broad, covering all PDEs whose intended and reasonably foreseeable use includes a direct or indirect logical or physical data connection to a device or network.

A PDE is defined as “any software or hardware product and its remote data processing solutions, including software or hardware components to be placed on the market separately” (Article 3(1) CRA).

Remote data processing is defined as “any data processing at a distance for which the software is designed and developed by the manufacturer or under the responsibility of the manufacturer, and the absence of which would prevent the product with digital elements from performing one of its functions” (Article 3(2) CRA).

Whilst the usual example of in-scope products is smart devices, such as smartphones, this is complicated in respect of software products involving remote data processing solutions: the CRA supporting FAQ indicates that software which forms part of a service rather than a product is not intended to be covered.

It is therefore important to identify how products are provided – as software products with remote data processing solutions, or as software which is part of a service. This analysis will need to take into account how the various 'features' making up each product are provided.

Manufacturers are broadly defined as “any natural or legal person who develops or manufactures products with digital elements or has products with digital elements designed, developed or manufactured, and markets them under his or her name or trademark, whether for payment or free of charge” (Article 3(13) CRA).

Exceptions:

The CRA excludes from its scope a limited number of products and/or fields which are considered to be already sufficiently regulated, including:

  • Products which are in conformity with harmonised standards and products certified under an EU cybersecurity scheme; and
  • Medical devices, aviation devices, and certain motor vehicle systems/components/technical units, to which existing certification regimes apply.

Obligations of economic operators

The primary objective of the CRA is to address a perception at EU institutional level of a poor level of cybersecurity and vulnerabilities in many software and hardware products on the market. The CRA also aims to address the lack of comprehensive information on the cybersecurity properties of digital products to enable consumers to make more informed choices when buying products. With this in mind, the CRA imposes a large number of obligations upon relevant economic operators, with the majority of obligations falling on “manufacturers” of PDEs.

Key obligations on manufacturers under the CRA include:

  • When placing a PDE on the EU market, ensuring that it has been designed, developed and produced in accordance with the essential requirements set out in Section 1 of Annex I CRA. The high-level requirements set out in Annex I, Part 1 CRA include that products with digital elements "shall be designed, developed and produced in such way that they ensure an appropriate level of cybersecurity"; that they ensure protection from unauthorised access through appropriate control mechanisms and protect the confidentiality and integrity of stored, transmitted or otherwise processed data; and that they are designed, developed and produced to limit attack surfaces, including external interfaces. These requirements may be clarified over time, as the European Commission is authorised to adopt implementing acts establishing common specifications covering technical requirements that provide a means to comply with the essential requirements set out in Annex I CRA;
  • Undertake an assessment of the cybersecurity risks associated with a PDE, taking the outcome of that assessment into account during the planning, design, development, production, delivery and maintenance phases of the PDE, with a view to minimising cybersecurity risks, preventing security incidents and minimising the impacts of such incidents, including in relation to the health and safety of users;
  • Document and update the assessment of the cybersecurity risks associated with a PDE and take the outcome of that assessment into account during the planning, design, development, production, delivery and maintenance phases of the product with digital elements;
  • Exercise due diligence when integrating components sourced from third parties in PDEs and ensure that such components do not compromise the security of the PDE;
  • Document relevant cybersecurity aspects concerning the PDE, including vulnerabilities and any relevant information provided by third parties, and, where applicable, update the risk assessment of the product;
  • Put in place compliant vulnerability handling processes, including providing relevant security updates, for the duration of the support period (of, in principle, five years);
  • Report actively exploited vulnerabilities to the relevant Computer Security Incident Response Team (CSIRT) and the EU Agency for Cybersecurity (ENISA) without undue delay and in any event within 24 hours of becoming aware. The manufacturer must also inform the impacted users of the PDE (and, where appropriate, all users) in a timely manner about an actively exploited vulnerability or a severe incident and, where necessary, about risk mitigation and any corrective measures that they might deploy to mitigate the impact;
  • Perform (or have performed) a conformity assessment for PDEs to demonstrate compliance with obligations. Depending on the risk classification of the product in question, there are different procedures and methods that may be applied, with products considered to be of particularly high risk being subject to stricter requirements. The procedures range from internal control measures to full quality assurance, with more stringent provisions introduced for products deemed "critical", such as web browsers, firewalls and password managers (designated class I) and operating systems and CPUs (designated class II). These products will have to undergo specific conformity assessment procedures carried out by notified third-party bodies. For each of these procedures, the CRA contains checklists with specifications that must all be met in order to successfully pass. Manufacturers must also draw up an EU declaration of conformity and affix a CE marking to the product; and
  • Ensure that PDEs are accompanied by information, such as the manufacturer’s details and point of contact where vulnerabilities can be reported, and detailed instructions for users including how security updates can be installed and how the product can be securely decommissioned.

Importers and Distributors

The above obligations primarily fall upon manufacturers. However, importers and distributors of these products are subject to related obligations regarding those processes, including: only placing on the market PDEs that comply with the essential requirements set out under the law; ensuring that the manufacturer has carried out the appropriate conformity assessment procedures and drawn up the required technical documentation; and ensuring that PDEs bear the CE marking and are accompanied by the required information for users. Where an importer or distributor identifies a vulnerability in a PDE, it must inform the manufacturer without undue delay, and must immediately inform market surveillance authorities where a PDE presents a "significant cybersecurity risk."

Overlap with other EU Legislation

The CRA FAQ states that the Act aims to “harmonise the EU regulatory landscape by introducing cybersecurity requirements for products with digital elements and avoid overlapping requirements stemming from different pieces of legislation”. The application of the CRA is subject to certain exclusions where relevant PDEs are already covered by certain regulations – such as the NIS2 Directive and the AI Act (which are considered lex specialis to the CRA as lex generalis). In relation to high-risk AI systems, for example, the CRA explicitly provides that PDEs that also qualify as high-risk AI systems under the AI Act will be deemed in compliance with the AI Act’s cybersecurity requirements where they fulfil the corresponding requirements of the CRA. The listed regulations do not include DORA (Regulation 2022/2554), so there is the potential for overlap for those caught by DORA.

However, Article 2(4) CRA indicates that the application of the CRA may be limited or excluded where PDEs are covered by other Union rules laying down requirements addressing some or all of the risk covered by the essential requirements set out in Annex 1 CRA, in a manner consistent with the applicable regulatory framework, and where the sectoral rules achieve the same or a higher level of protection as that provided under the CRA.

The European Commission may also use its powers to adopt delegated acts in order to further clarify such limitations or exclusions, but in the absence of such delegated acts, the scope is somewhat unclear in respect of financial services entities, given the overlap with DORA.

Enforcement

The CRA provides for extensive participation by public authorities. Accordingly, the European Commission, ENISA and national authorities are granted comprehensive market monitoring, investigative and regulatory powers. For cross-border matters, the CRA also addresses the different procedures and principles for these authorities to cooperate with each other if disagreements arise in the interpretation and application of the law.

Authorities are also provided with the power to carry out so-called “sweeps”. Sweeps will be unannounced and coordinated, involving area-wide monitoring and control measures that are intended to provide information as to whether or not the requirements of the CRA are being complied with. It is particularly important to note that sweeps may apparently be carried out simultaneously by several authorities in close coordination, thus enabling the investigation of cross-border matters.

The CRA provides for a phased concept of administrative fines for non-compliance with certain legal requirements, which follows the model of recent European legislation and is intended primarily as a deterrent:

  • Breaches of the essential cybersecurity requirements, conformity assessment and reporting obligations may result in administrative fines of up to EUR 15 million or up to 2.5% of annual global turnover, whichever is higher (see the short sketch after this list).
  • Breaches of the other CRA rules, including requirements to appoint an authorised representative, obligations applicable to importers or distributors, and certain requirements for the EU declaration of conformity, technical documentation and CE marking, may result in administrative fines of up to EUR 10 million or up to 2% of annual global turnover, whichever is higher.
  • Organisations which provide incorrect, incomplete or misleading information face administrative fines of up to EUR 5 million or, if the offender is an undertaking, up to 1% of annual turnover.
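To illustrate the 'whichever is higher' mechanic referenced above, a minimal sketch follows; the turnover figure is invented for the example and is not drawn from the text.

```python
# A minimal sketch of the CRA fine cap for the top tier of infringements.
def cra_fine_cap(annual_global_turnover_eur: float,
                 fixed_cap_eur: float = 15_000_000,
                 turnover_share: float = 0.025) -> float:
    """Return the greater of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_share * annual_global_turnover_eur)

# A company with EUR 2 billion global turnover faces a cap of EUR 50 million,
# since 2.5% of turnover exceeds the EUR 15 million floor.
print(cra_fine_cap(2_000_000_000))
```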

When deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation should be taken into account, including the size and market share of the operator committing the infringement.

Non-compliance with CRA requirements may also result in corrective or restrictive measures, including the Market Surveillance Authorities or the Commission recalling or withdrawing products from the EU market.

As the methods for imposing administrative fines will be left to Member States to implement, there is a risk of significant legal uncertainty in relation to enforcement. Although the CRA specifies certain parameters, in particular criteria for the calculation of administrative fines, the regulation raises concerns with regard to the uniform interpretation and application of the rules on administrative fines throughout the EU.

Next procedural steps

The CRA provides for a phased transition period, with the provisions on notification of conformity assessment bodies (Chapter VI) applying from 11 June 2026, and the reporting obligations for manufacturers taking effect from 11 September 2026. The remaining obligations will come into effect on 11 December 2027.  

The CRA is likely to present significant challenges for many companies. It is important that those entities falling within the scope of the CRA start preparing for its implementation. Manufacturers should assess current cybersecurity measures against the upcoming requirements to identify potential compliance gaps and start planning compliance strategies early, including understanding the requirements relating to conformity assessments; technical documentation; and new incident reporting requirements.

Please reach out to your usual DLA Piper contact if you would like to discuss further.


EU: EHDS – Access to health data for secondary use under the European Health Data Space
https://privacymatters.dlapiper.com/2024/11/eu-ehds-access-to-health-data-for-secondary-use-under-the-european-health-data-space/ (Tue, 19 Nov 2024)

This is Part 3 in a series of articles on the European Health Data Space (“EHDS”). Part 1, which provides a general overview of the EHDS, is available here. Part 2, which deals with the requirements on the manufacturers of EHR-Systems under the EHDS, is available here.

This article provides an overview of the framework for accessing health data for secondary use under the EHDS. It is based on the compromise text of the EHDS published by the Council of the European Union in March 2024.  

Improving access to health data for the purposes of supporting research and innovation activities is one of the key pillars of the EHDS and offers a potentially significant benefit for life sciences and healthcare companies who are looking for improved access to high-quality secondary use data.

By way of reminder, in general terms the EHDS creates a regime under which organisations may apply to a health data access body (“HDAB“) for access to electronic health data held by a third party, for one of a number of permitted secondary use purposes.  When required to do so by the HDAB, the company holding the health data (the health data holder) must then provide the data to the HDAB in order to satisfy the access request. The EHDS provides for safeguards to protect intellectual property rights and trade secrets, and there is some scope for health data holders to recover costs incurred in making data available.  

In more detail, the process operates as follows:

  1. Access to secondary health data

The EHDS stipulates a specific process as well as certain requirements for the access to secondary health data.

In order to get access to secondary health data under the EHDS, the applicant must submit a data access application to the health data access body (“HDAB”). Each Member State must designate an HDAB which is, inter alia, responsible for deciding on data access applications, authorizing and issuing data permits, providing access to electronic health data and monitoring and supervising compliance with the requirements under the EHDS.

Further, the HDAB is responsible for ensuring that the data provided are adequate, relevant and limited to what is necessary in relation to the purpose of processing indicated in the data access application. The default position is that data will be provided in an anonymized format. However, if the applicant can demonstrate that the purpose of processing cannot be achieved with anonymized data, the HDAB may provide access to the electronic health data in a pseudonymised format.

The data access application must include at least the following:

  • The applicant’s identity, description of professional functions and operations, including the identity of the natural persons who will have access to electronic health data;
  • Which purposes the access is sought for including a detailed explanation of the intended use and expected benefit related to the use (e.g., protection against serious cross-border threats to health in the public interest, scientific research related to health or care sectors to ensure high levels of quality and safety of health care or medicinal products/devices with the aim of benefitting the end-users, including development and innovation activities for products and services);
  • A description of the requested electronic health data, including their scope and time range, format and data sources and, where possible, geographical coverage where data is requested from health data holders in several Member States;
  • A description of whether electronic health data need to be made available in a pseudonymised or anonymized format and, in the case of a pseudonymised format, a justification of why the processing cannot be pursued using anonymized data. Further, where the applicant seeks to access the personal electronic health data in a pseudonymised format, compliance with applicable data protection laws shall be demonstrated;
  • A description of the safeguards, proportionate to the risks, planned to prevent any misuse of the electronic health data as well as to protect the rights and interests of the health data holder and of the natural persons concerned, including to prevent any re-identification of natural persons in the dataset;
  • A justified indication of the period during which the electronic health data is needed for processing in a secure processing environment;
  • A description of the tools and computing resources needed for a secure processing environment and, where applicable, information on the assessment of ethical aspects.

Where an applicant seeks access to electronic health data from health data holders established in more than one Member State, the applicant must submit a single data access application to the HDAB of the main establishment of the applicant which shall be automatically forwarded to other relevant HDABs.

Also, there is the option to only apply for access to health data in anonymized statistical format with less formal requirements as well as a simplified procedure for trusted health data holders. The European Commission is responsible for creating templates for the data access applications.

  2. Requirements for the technical infrastructure

The HDAB shall only provide access to electronic health data pursuant to a data permit through a secure processing environment. The secure processing environment shall comply with the following security measures:

  • Access to the data must be restricted to the natural persons listed in the data access application;
  • Implementation of state-of-the-art technical and organisational measures to minimize the risk of unauthorized processing of electronic health data;
  • Limitation of the input of electronic health data and the inspection, modification or deletion of electronic health data to a limited number of authorized persons;
  • Ensure that access is only granted to electronic health data covered by the data access application;
  • Keeping identifiable logs of access to, and activities in, the secure processing environment for no less than one year, in order to verify and audit all processing operations (see the sketch after this list);
  • Monitoring compliance and security measures to mitigate potential security threats.
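A minimal sketch of what such an identifiable access log with a one-year minimum retention might look like follows; the field names and schema are illustrative assumptions, as the EHDS does not prescribe a log format.

```python
# A minimal sketch of an identifiable access-log record with a one-year retention check.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

MIN_RETENTION = timedelta(days=365)

@dataclass
class AccessLogEntry:
    user_id: str        # natural person listed in the data access application
    dataset_id: str     # electronic health data covered by the data permit
    action: str         # e.g. "read", "modify", "delete"
    timestamp: datetime

def may_purge(entry: AccessLogEntry, now: Optional[datetime] = None) -> bool:
    """Logs must be kept for at least one year before they may be deleted."""
    now = now or datetime.now(timezone.utc)
    return now - entry.timestamp >= MIN_RETENTION

entry = AccessLogEntry("researcher-42", "cohort-cardio-2024", "read",
                       datetime(2024, 11, 19, tzinfo=timezone.utc))
print(may_purge(entry, now=datetime(2025, 11, 20, tzinfo=timezone.utc)))  # True
```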

The HDAB shall ensure regular audits, including by third parties, of the secure processing environments and, if necessary, take corrective actions for any shortcomings or vulnerabilities identified.

  3. Data protection roles

From a data protection law perspective, the health data holder shall be deemed controller for the disclosure of the requested electronic health data to the HDAB pursuant to Art. 4 No. 1 GDPR. When fulfilling its tasks under the EHDS, the HDAB shall be deemed controller for the processing of personal electronic health data. However, where the HDAB provides electronic health data to a health data user pursuant to a data access application, the HDAB shall be deemed to act as processor on behalf of the health data user. The EU Commission may establish a template for controller to processor agreements in those cases.

  4. Fees for the access to health data for secondary use

The HDAB may charge fees for making electronic health data available for secondary use. Such fees shall cover all or part of costs related to the procedure for assessing a data access application and granting, refusing or amending a data permit, including the costs related to the consolidation, preparation, anonymization, pseudonymization and provisioning of electronic health data. The fees further include compensation for the costs incurred by the health data holder for compiling and preparing the electronic health data to be made available for secondary use. The health data holder shall provide an estimate of such costs to the HDAB.

Conclusion

Access to electronic health data for secondary use is a significant opportunity, especially for companies operating in the life sciences and healthcare sectors, to gain access to potentially large volumes of high-quality electronic health data for research and product development purposes. Although Chapter IV of the EHDS, which deals with the secondary use of electronic health data, will become applicable four years after the EHDS enters into force, companies are well-advised to begin preparing to gain access to electronic health data for secondary use at an early stage, in order to gain a competitive advantage and to ensure that they are able to make direct use of the opportunities granted by the EHDS. Such preparation includes, inter alia, the early determination of the specific electronic health data required for the specific purpose the company wants to achieve, as well as the set-up of an infrastructure which meets the requirements under the EHDS.

EU: Engaging vendors in the financial sector: EDPB clarifications mean more mapping and management
https://privacymatters.dlapiper.com/2024/11/eu-engaging-vendors-in-the-financial-sector-edpb-clarifications-mean-more-mapping-and-management/ (Fri, 08 Nov 2024)

The European Data Protection Board (“EDPB”) adopted an opinion on 7 October 2024, providing guidance for data controllers relying on processors (and sub-processors) under the GDPR. The two key themes are:

  1. supply chain mapping;
  2. verifying compliance with flow-down obligations.

For many financial institutions, the emphasis on these obligations should not come as a surprise. However, there are some nuanced clarifications in the opinion which could have an impact on general vendor management in the financial services sector. We have summarised the key takeaways below.

Supply Chain Mapping

Controllers should always be able to identify the processing supply chain. This means knowing all processors, and their subprocessors, for all third-party engagements – and not just their identity. The EDPB’s opinion clarifies that controllers should know:

  • the legal entity name, address and information for a contact person for each processor/subprocessor;
  • the data processed by each processor/subprocessor and why; and
  • the delimitation of roles where several subprocessors are engaged by the primary processor.

This may seem excessive. However, the practical benefit of knowing this information extends beyond Article 28 compliance. It is also required to discharge transparency obligations under Articles 13 and 14 and to respond to data subject requests (e.g. for access under Article 15 or erasure under Article 17).
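In practice, this mapping usually lives in a processor register. A minimal sketch of the kind of record that could capture the details listed above follows; the field names and the example vendor are illustrative assumptions rather than a prescribed format.

```python
# A minimal sketch of a supply-chain register entry for processors and sub-processors.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessorRecord:
    legal_entity_name: str
    address: str
    contact_person: str
    data_categories: List[str]          # what personal data they process
    processing_purpose: str             # why they process it
    role_delimitation: str = ""         # who does what when several are engaged
    subprocessors: List["ProcessorRecord"] = field(default_factory=list)

cloud_vendor = ProcessorRecord(
    legal_entity_name="Example Hosting Ltd",        # hypothetical entity
    address="1 Sample Street, Dublin",
    contact_person="privacy@example-hosting.test",
    data_categories=["customer contact details"],
    processing_purpose="hosting of CRM platform",
)
```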

How is this achieved in reality? Vendor engagement can be tedious. While many financial institutions have sophisticated vendor onboarding processes, data protection is often an afterthought, addressed after commercials are finalised.

So, what should you do as a data controller? Revisit your contracts to ensure your processors are obliged to provide the above information proactively, at a frequency and in a format you require.

Verification of Compliance

Controllers should be able to verify and document the sufficiency of safeguards implemented by processors and subprocessors to comply with data laws. In other words, controllers must be able to evidence a processor’s compliance with key obligations e.g.:

  • making sure personal data is secure; and
  • ensuring data is transferred or accessed internationally in line with the requirements of Chapter V.

The nature of this verification and documentation will vary depending on the risk associated with the processing activity. A low-risk vendor, from a commercial business perspective, may provide a service involving high-risk data processing. In this case, verification might involve seeking a copy of the subprocessor contract to review it. For lower-risk processing, verification could be limited to confirming a subprocessor contract is in place.

The EDPB suggests controllers can rely on information received from their processor and build on it. For example, through diligence questionnaires, publicly available information, certifications, and audit reports.

Where the primary processor is also an exporter of personal data outside the EEA, the EDPB clarified that the obligation is on the exporting processor to ensure there is an appropriate transfer mechanism in place with the importing subprocessor and to ensure a transfer impact assessment has been carried out. The controller should verify the transfer impact assessment and make amendments if necessary; otherwise, controllers can rely on the exporting processor's transfer impact assessment if it is deemed adequate. The verification required here will depend on whether it is an initial or onward transfer, and on what lawful basis is used for the transfer. This does not affect the controller's obligation to carry out transfer mapping where it engages primary processors themselves located outside the EEA.

In that regard, the EDPB clarified a subtle but often debated provision of Article 28. The opinion notes that the wording "unless required to do so by law or binding order of a governmental body" is unlikely to be compliant where data is transferred outside the EEA. It is therefore highly recommended to include the wording:

“unless required to [process] by Union or Member State law to which the processor is subject.”

Either verbatim or in very similar terms. This is particularly relevant in the context of transfer mapping and impact assessments. Regulated entities should be vigilant for third-party contracts which appear to meet the obligations set out in Article 28(3) with respect to processing data for purposes outside of the controller's instructions, but which are, as confirmed by the EDPB, actually non-compliant.

What steps should you take now then?

The opinion clarifies that controllers can rely on a sample selection of subprocessor contracts to verify downstream compliance and we suggest you do so.

But when?

Regulated entities, particularly in the financial services industry, are facing a swathe of regulations that impact vendor engagement. The Digital Operational Resilience Act (DORA) and the NIS2 Directive (EU) 2022/2555 require financial institutions to maintain a register of all contractual arrangements with vendors and to ensure third-party service providers comply with cybersecurity standards. Effectively, these are enhancements to existing processor requirements under the GDPR. The reality is, however, that many controllers are only now firming up supply chain management to cover key data protection and cyber risks.

We recommend controllers use the clarifications in the EDPB's opinion to improve negotiations when separately looking at the uplifts required by DORA, which takes effect on 17 January 2025. The clock is ticking.

Please reach out to your usual DLA Piper contact if you would like to discuss further, including if you are struggling to map these requirements against other emerging laws such as DORA or NIS2. We can provide assistance with the data and cyber contractual commitments in your contracts.

EU: NIS2 Member State implementation deadline has arrived
https://privacymatters.dlapiper.com/2024/10/eu-nis2-member-state-implementation-deadline-has-arrived/ (Thu, 17 Oct 2024)

Today marks the deadline for EU Member State implementation of the Network and Information Systems Directive II (“NIS2”) into national law.

NIS2 is part of the EU's Cybersecurity Strategy and repeals and replaces the original NIS Directive, which entered into force in 2016 (with Member State implementation by 9 May 2018). Much like its predecessor, it establishes measures for a common level of cybersecurity for critical services and infrastructure across the EU, and also aims to respond to perceived weaknesses of the NIS1 regime and the demands of increasing digital change. NIS2 establishes harmonised cybersecurity risk management measures and reporting requirements for highly critical sectors. It has a much wider scope than its predecessor – many sectors come under NIS2 for the first time.

Although some Member States such as Croatia, Hungary and Belgium have transposed the directive into national legislation, the majority of EU countries do not yet have the relevant implementing legislation in place, let alone the broader frameworks and guidance that would equip organisations with the necessary tools to achieve compliance. This will pose difficulties for organisations, especially those with in-scope operations in multiple EU jurisdictions, as they evaluate the scope of their exposure and work towards compliance.

Visit our EU Digital Decade topic hub for further information on NIS2 and the EU's Cybersecurity Strategy. If you have any questions, please get in touch with your usual DLA Piper contact.

EU: CJEU Insight https://privacymatters.dlapiper.com/2024/10/eu-cjeu-insight/ Tue, 15 Oct 2024 14:31:59 +0000

October has already been a busy month for the Court of Justice of the European Union ("CJEU"), which has published a number of judgments on the interpretation and application of the GDPR, including five important decisions, all issued by the CJEU on one day – 4 October 2024.

This article provides an overview and summary of several of the key data protection judgments issued by the CJEU this month. The judgments consider issues including: whether legitimate interests can cover purely commercial interests; whether competitors are entitled to bring an injunction claim based on an infringement of the GDPR; what constitutes 'health data' within the meaning of Art. 4 and Art. 9 of the GDPR; whether a controller can rely on an opinion of the national supervisory authority to be exempt from liability under Art. 82(2) GDPR; and what constitutes sufficient compensation for non-material damage, among others.

In Koninklijke Nederlandse Lawn Tennisbond v Autoriteit Persoonsgegevens (C-621/22), following preliminary questions from the Amsterdam district court, the CJEU provided valuable clarification on whether "legitimate interests" under Art. 6(1)(f) GDPR can be "purely commercial". In its judgement, the CJEU recognized that a wide range of interests can be considered a 'legitimate interest' under the GDPR and that there is no requirement that the interests of the controller be laid down by law. While the CJEU decided not to answer the specific preliminary questions received from the Amsterdam district court, the attitude of the CJEU is clear: "legitimate interests" can serve purely commercial interests.

For further information on this decision, please see our blog post available here.  

In its judgement in Case C-21/23, the CJEU ruled that Chapter VIII of the GDPR allows for national rules which grant undertakings the right to take action in case of an infringement of substantive provisions of the GDPR allegedly committed by a competitor. Such an action would be on the basis of the prohibition of acts considered to be unfair competition. The CJEU further ruled that the data of a pharmacist's customers, which are provided when ordering pharmacy-only but non-prescription medicines on an online sales platform, constitute "health data" within the meaning of Art. 4(15) and Art. 9 GDPR (to that extent contrary to the Advocate General's opinion of 25 April 2024).

For further information on this decision, please see our blog post available here.  

  • Maximilian Schrems v Meta Platforms Ireland Ltd (C-446/21) 

Background 

The privacy activist, Maximilian Schrems, brought an action before the Austrian courts challenging the processing of his personal data by Meta Platforms Ireland (“Meta”) in the context of the online social network Facebook. Mr Schrems argued that personal data relating to his sexuality had been processed unlawfully by Meta to send him personalised advertisements.   

Mr Schrems alleged that this processing took place without his consent or any other lawful basis under the GDPR. The CJEU noted that Mr Schrems had not posted sensitive data on his Facebook profile and further had not consented to Meta using a wider pool of personal data received from advertisers and other partners concerning Mr Schrems' activities outside Facebook for the purpose of providing personalised advertising.

The personalised advertisements in question were not based directly on his sexual orientation but on an analysis of his particular interests, drawn from a wider pool of data processed by Meta, as nothing had been openly published by Mr Schrems via Facebook about his sexuality. 

Key findings 

In its judgment, the CJEU held that Art. 5(1)(c) GDPR does not allow the controller, in particular a social network platform, to process data collected inside and outside the platform for the purpose of personalised advertising for an unlimited time and without distinction as to the type of data.

The CJEU emphasised that the principle of data minimisation requires the controller to limit the retention period of personal data to what is strictly necessary in the light of the objective of the processing activity. 

Regarding the collection, aggregation and processing of personal data for the purposes of targeted advertising, without distinction as to the type of those data, the CJEU held that a controller may not collect personal data in a generalised and indiscriminate manner and must refrain from collecting data which are not strictly necessary for the processing purpose. 

The CJEU also held that the fact that an individual manifestly made public information concerning their sexual orientation does not mean that the individual consented to processing of other data relating to their sexual orientation by the operator of an online social network platform within the meaning of Art. 9(2)(a) GDPR. 

  • Agentsia po vpisvaniyata (Bulgarian Registration Agency)

Background

The data subject is a shareholder of a company in Bulgaria. The company’s constitutive instrument was sent to the Registration Agency (Agentsia po vpisvaniyata), the Bulgarian authority managing the commercial register. 

This instrument, which includes the surname, first name, identification number, identity card number, date and place of issue of that card, as well as the data subject's address and signature, was made available to the public by the Agency as submitted. The data subject requested that the Agency erase the personal data relating to her contained in that constitutive instrument. As it is a legal requirement under Directive 2017/1132 (relating to certain aspects of company law) to publish certain information from a company's constitutive instrument in the commercial register, the Agency refused the data subject's request. The Agency also declined to delete the personal data that is not required under the Directive but was nevertheless published because it was contained in the instrument. The data subject brought an action before the Administrative Court of Dobrich (Administrativen sad Dobrich) seeking annulment of the Agency's decision and an order that the Agency compensate her for the non-material damage she allegedly suffered.

 Key findings 

Of the eight questions asked by the national court, the CJEU answered six, five of which related directly to the GDPR. Firstly, the CJEU held that an operator of a public register, which receives personal data as part of a constitutive instrument that is subject to compulsory disclosure under EU law, is both a 'recipient' of the personal data insofar as the operator makes it available to the public, and also a 'controller', even if the instrument contains personal data that the operator is not required to process under EU or Member State law. This remains the case even where the operator receives such additional personal data because the data subject failed to redact it before submitting the constitutive instrument, as the operator's procedural rules required.

Secondly, the controller managing the national register may not outright refuse a request for erasure of personal data published in the register on the ground that the data subject should have provided a redacted copy of the constitutive instrument. A data subject enjoys a right to object to processing and a right to erasure, unless there are overriding legitimate grounds (which was not the case here).

Thirdly, the CJEU confirmed that a handwritten signature of a natural person constitutes personal data, as it is usually used to identify a person and has evidential value regarding the accuracy and sincerity of a document.

Fourthly, the CJEU held that Art. 82(1) GDPR must be interpreted as meaning that a loss of control by the data subject over their personal data for a limited period, due to such data being made available to the public online in the commercial register of a Member State, may be sufficient to cause 'non-material damage'. What is required in any case is that the person demonstrates that they actually suffered such damage, however minimal. The concept of 'non-material damage' does not require the demonstration of the existence of additional tangible adverse consequences.

Lastly, if the supervisory authority of a Member State issues an opinion on the basis of Art. 58(3)(b) GDPR, a controller that acts in line with that opinion is not thereby exempt from liability under Art. 82(2) GDPR. The Agency had argued that a company's constitutive instrument may still be entered into the register even if personal data is not redacted, referring to an opinion of the Bulgarian supervisory authority. However, as such an opinion issued to the controller is not legally binding, it cannot demonstrate that the damage suffered by the data subject is not attributable to the controller, and it is therefore insufficient to exempt the controller from liability.

  • Patērētāju tiesību aizsardzības centrs (Latvian Consumer Rights Protection Centre) (C-507/23)

Background 

The data subject is a well-known journalist and expert in the automotive sector in Latvia. During a campaign to make consumers aware of the risks involved in purchasing a second-hand vehicle, the Latvian Consumer Rights Protection Centre (“PTAC”) published a video on several websites which, among other things, featured a character imitating the data subject, without his consent.  

The journalist brought an action before the District Administrative Court in Latvia seeking (i) a finding that the actions of the PTAC, consisting in the use and distribution of his personal data without authorisation, were unlawful, and (ii) compensation for non-material damage in the form of an apology and the payment of EUR 2,000. The court ruled that the actions in question were unlawful, ordered the PTAC to put an end to those acts, to make a public apology to the journalist and to pay him EUR 100 in compensation in respect of the non-material damage he had suffered. However, on appeal, although the Regional Administrative Court confirmed that the processing of personal data by the PTAC was unlawful and ordered the processing to cease and an apology to be published on the websites which had disseminated the video footage, it dismissed the claim for financial compensation for the non-material damage suffered. The court found that the infringement was not serious, on the ground that the video footage was intended to perform a task in the public interest and not to harm the data subject's reputation, honour and dignity.

The journalist appealed this decision, and the Latvian Supreme Court referred a number of questions on the interpretation of Art. 82(1) GDPR to the CJEU.

 Key findings 

Firstly, the CJEU found that an infringement of a provision of the GDPR, including the unlawful processing of personal data, is not sufficient, in itself, to constitute ‘damage’ within the meaning of Art. 82(1) GDPR.  

By this, the CJEU repeats and emphasises its previous interpretations of Art. 82(1) GDPR to the effect that a mere infringement of the GDPR is not sufficient to confer a right to compensation: in addition to an 'infringement', the existence of 'damage' and of a 'causal link' between the damage and the infringement are cumulative conditions for the right to compensation under Art. 82(1) GDPR. According to the CJEU, this principle applies even if a provision of the GDPR that grants rights to natural persons has been infringed, as such an infringement cannot, in itself, constitute 'non-material damage'. In particular, the CJEU held that the occurrence of damage in the context of the unlawful processing of personal data is only a potential, and not an automatic, consequence of such processing.

Secondly, the CJEU found that the presentation of an apology may constitute sufficient compensation for non-material damage on the basis of Art. 82(1) GDPR. This applies in particular where it is impossible to restore the situation that existed prior to the occurrence of that damage, provided that that form of redress is capable of fully compensating for the damage suffered by the data subject.

According to the CJEU, Art. 82(1) GDPR does not preclude an apology from constituting standalone or supplementary compensation for non-material damage, provided that such a form of compensation complies with the principles of equivalence and effectiveness. In the present case, providing an apology as a possible form of compensation was explicitly laid down in Art. 14 of the Latvian Law on compensation for damage caused by public authorities. Other jurisdictions, such as Germany in its civil law, do not explicitly provide in their national laws for an apology as a form of compensation. Nevertheless, some courts have already taken apologies into account when determining the amount of monetary compensation. In light of this decision, courts may therefore increasingly consider an apology as a means of reducing the monetary compensation awarded for damages.

Thirdly, according to the CJEU, Art. 82(1) GDPR precludes the controller’s attitude and motivation from being taken into account when deciding whether to grant the data subject less compensation than the damage actually suffered.  

According to the CJEU, Art. 82(1) GDPR has an exclusively compensatory and not a punitive function. Therefore, the gravity of an infringement cannot influence the amount of damages awarded under Art. 82(1) GDPR. The amount of damages may not be set at a level that exceeds full compensation for the actually suffered damage. 

Conclusion/implications 

While these five judgements were published on the same day, the decisions relate to a number of different topics. What they do have in common is that they all demonstrate the CJEU's willingness to assert its reach and tackle difficult questions on the interpretation of the GDPR, particularly where there has not always been agreement or clarity among supervisory authorities. Although these decisions generally clarify and strengthen the CJEU's previous interpretation of a number of issues, such as those relating to compensation for non-material damage pursuant to Art. 82(1) GDPR, it is interesting that in both the KNLTB decision and the Agentsia po vpisvaniyata decision, the CJEU followed a different interpretation of the GDPR to that of the relevant supervisory authority (and, in the KNLTB decision, contrary to the AG Opinion).

As we head into 2025, we can expect further judgments from the CJEU on the interpretation and application of the GDPR, with more than 20 GDPR-related cases currently pending before the CJEU.

EU: ECJ rules that competitors are entitled to bring an injunction claim based on an infringement of the GDPR https://privacymatters.dlapiper.com/2024/10/eu-ecj-rules-that-competitors-are-entitled-to-bring-an-injunction-claim-based-on-an-infringement-of-the-gdpr/ Mon, 07 Oct 2024 12:50:16 +0000

Introduction

In its judgement of 4 October 2024 (C-21/23), the European Court of Justice ("ECJ", "Court") ruled that the provisions of Chapter VIII of the GDPR do not preclude national rules which grant undertakings the right to rely, on the basis of the prohibition of acts of unfair competition, on infringements of the substantive provisions of the GDPR allegedly committed by their competitors. The ECJ further ruled that the data of a pharmacist's customers, which are provided when ordering pharmacy-only but non-prescription medicines on an online sales platform, constitute "health data" within the meaning of Art. 4(15) and Art. 9 GDPR (to that extent contrary to the Advocate General's opinion of 25 April 2024).

Background

The plaintiff and the defendant in the main proceedings each operate a pharmacy. The defendant also holds a mail order license and sells its range of products, including pharmacy-only medicines, through the online sales platform Amazon Marketplace, which allows the seller to offer products directly to consumers. The plaintiff sought an injunction to prohibit the defendant from selling pharmacy-only pharmaceuticals via the online sales platform. In the plaintiff's opinion, such distribution constitutes an unfair commercial practice because the defendant was violating a statutory provision within the meaning of Section 3a of the German Act Against Unfair Competition (Gesetz gegen den unlauteren Wettbewerb – "UWG").

The District Court upheld the claim. The Higher Regional Court dismissed the defendant’s appeal and ruled that the defendant’s sale of pharmacy-only medicines via Amazon Marketplace violates the provisions of the UWG, as this distribution involves the processing of health data within the meaning of Art. 9(1) GDPR, to which the customers have not explicitly consented. According to the Higher Regional Court, the provisions of the GDPR must be regarded as market conduct rules within the meaning of national competition law, with the result that the plaintiff, as a competitor, is entitled to claim injunctive relief based on national competition law by relying on an infringement of the provisions of the GDPR by the defendant.

The defendant then appealed to the German Federal Court of Justice (Bundesgerichtshof – "BGH"), maintaining its application for dismissal of the injunction claim. The BGH stated that the key factor for the decision is how Chapter VIII and Art. 9 of the GDPR are to be interpreted, and referred the following questions to the ECJ for a preliminary ruling:

  1. Do the rules in Chapter VIII GDPR preclude national rules which – alongside the powers of intervention of the supervisory authorities responsible for monitoring and enforcing the regulation and the options for legal redress for data subjects – empower competitors to bring proceedings for infringements of the GDPR against the infringer before the civil courts on the basis of the prohibition of unfair commercial practices?
  2. Do the data which customers of a pharmacist, who acts as a seller on an online sales platform, provide when ordering pharmacy-only but not prescription-only medicines (the customer's name, delivery address and information required for individualising the pharmacy-only medicine ordered) constitute data concerning health within the meaning of Article 9(1) GDPR?

Decision

First question (competitor’s right to bring injunction claims)

According to the ECJ, neither the wording of the provisions of Chapter VIII of the GDPR nor their context precludes competitors from bringing claims based on an infringement. On the contrary, while an infringement of the substantive provisions of the GDPR primarily affects the data subjects, it may also affect third parties. The Court notes that, in the context of the digital economy, access to personal data and the use that can be made of it are of considerable importance. Accordingly, in order to take account of real economic developments and to maintain fair competition, it may be necessary to take into account the rules on the protection of personal data when enforcing competition law and the rules on unfair commercial practices. The judgment recognises that the GDPR does not contain a specific opening clause which expressly authorises Member States to allow competitors to seek an injunction to prevent an infringement of the GDPR. However, according to the Court, it is clear that the EU legislature, when adopting the GDPR, did not intend to achieve full harmonisation of the remedies available in the event of a breach of the provisions of the GDPR and, in particular, did not intend to exclude the possibility for competitors of an alleged infringer of the rules on the protection of personal data to bring an action under national law on the basis of the prohibition of unfair commercial practices.

Moreover, such an action for an injunction brought by a competitor could prove to be a particularly effective means of ensuring such protection, since it makes it possible to prevent numerous infringements of the rights of the data subjects (in this respect, the Court refers to its judgment of 28 April 2022, Meta Platforms Ireland, C-319/20, in which the Court ruled that the GDPR does not preclude national legislation which allows a consumer protection association to bring an action, in the absence of a mandate given to it for that purpose and irrespective of the infringement of specific rights of the data subjects).

In the light of the foregoing, the answer to the first question is that the provisions of Chapter VIII of the GDPR must be interpreted as not precluding a national law which, in addition to the powers of intervention of the supervisory authorities responsible for monitoring and enforcing that regulation, and the means of redress available to the data subjects, gives competitors of the alleged infringer the power to take action against the infringer before the civil courts on the basis of the prohibition of unfair commercial practices for infringements of the GDPR.

In the present case, it is therefore for the national court to determine whether the alleged infringement of the substantive provisions of the GDPR at issue in the main proceedings, if established, also constitutes an infringement of the prohibition of unfair commercial practices under the relevant national legislation.

Second question (scope of the protection of health data)

In the second part of its decision, the ECJ once again interpreted the term 'special categories of personal data', and in this case specifically the term 'health data' (Art. 4(15) GDPR), very broadly. The Advocate General had assumed in the Opinion on the case that the state of health of the customer cannot be deduced with sufficient probability from orders of pharmacy-only but non-prescription medicines, and therefore found that such information is not health data.

The ECJ has now decided otherwise. The Court ruled that the provisions of the GDPR cannot be interpreted as meaning that the processing of personal data that only indirectly reveals sensitive information about a natural person would be exempt from the increased protection. For personal data to be classified as health data within the meaning of Article 9(1) of the GDPR, it is sufficient that the health of the data subject can be inferred by association or deduction. The Court affirms that the data provided by a customer when ordering pharmacy-only medicines via an online platform can be used to infer, by association or deduction, the health status of the data subject, since the order establishes a link between a medicinal product, its therapeutic indications and uses, and an identified natural person or a person who can be identified by information such as his or her name or delivery address.

Moreover, the prohibition on processing health data applies in principle regardless of whether the information disclosed by the processing in question is accurate, and regardless of whether the data controller acts with the aim of obtaining information falling within one of the special categories referred to in Article 9(1) of the GDPR. Consequently, the information provided by customers when ordering non-prescription medicines online constitutes health data, even if those medicines are intended for those customers only with a certain probability and not with absolute certainty. In this context, the Court also mentions the possibility that the order data may allow conclusions about the health of third parties (e.g. by means of a different delivery address).

The court of the main proceedings will therefore have to decide whether the processing of the health data of the defendant's customers is permissible on the basis of one of the exceptions in Article 9(2) of the GDPR – in particular, because the data subject has given explicit informed consent, or because the processing is necessary for the purposes of health care under Article 9(2)(h) of the GDPR on the basis of Union or Member State law or pursuant to a contract with a health professional.

Practical note

This is the third decision by the ECJ that allows actors other than data protection supervisory authorities to take legal action against controllers: in addition to the Meta Platforms decision of April 2022 mentioned above (C-319/20), in July this year the ECJ clarified that the right of a consumer protection association to challenge the infringement of a data subject's rights "occurring in the course of processing" also extends to information obligations pursuant to Articles 12(1) and 13(1) GDPR (C-752/22).

These rulings have significant consequences – they not only increase compliance risks, but also legal defense costs. In practice, consumer protection organisations – often unfamiliar with business contexts – tend to take a more dogmatic approach than the competent data protection supervisory authority.

Competitors now join them as further, often inexperienced, players entering the ring. Unlike in the past, it can be assumed that, going forward, competitors will make use of the right to sue for injunctive relief where, in their view, a controller is violating the provisions of the GDPR and this is deemed unfair within the meaning of national competition law. As national laws against unfair competition are based on EU Directive 2005/29/EC and are therefore largely harmonized within the European Union, the ECJ's decision is likely to affect all data controllers in the European Union.

Accordingly, in order to identify potential shortcomings that could be the subject of a competitor's claim, controllers are well advised to review their existing processes in light of their specific business model. With respect to the potential processing of health information, a careful assessment is necessary. In particular, the question arises as to which scenarios the ECJ's extensive interpretation of health data still covers (for example, dietary supplements), or whether, as we believe, it should remain limited to pharmacy-only medicines.

Furthermore, this aspect should be considered in the planning of future business activities in order to avoid a cease-and-desist order.

For any questions about this decision or any assistance please contact your local DLA Piper contact.

EU: CJEU Confirms that Legitimate Interests can cover purely commercial interests https://privacymatters.dlapiper.com/2024/10/eu-cjeu-confirms-that-legitimate-interests-can-cover-purely-commercial-interests/ Mon, 07 Oct 2024 09:37:14 +0000

Introduction

The subject of "legitimate interests", and in particular whether they can be "purely commercial", has been a central topic of debate in the Netherlands for some time. The Dutch data protection authority (AP) has historically interpreted the concept of legitimate interest narrowly, taking the position that organisations cannot rely on purely commercial interests as a legitimate interest and that, instead, the interests must have a basis in law. This narrow interpretation made it impractically difficult for organisations to rely on Article 6(1)(f) GDPR as the lawful basis on which to process personal data and created uncertainty.

Now, following preliminary questions from the Amsterdam district court, the Court of Justice of the European Union (CJEU) has provided valuable clarification – and one that allows organisations to breathe a sigh of relief. In its judgement of 4 October 2024, the CJEU recognized that a wide range of interests can be considered a 'legitimate interest' under the GDPR and that there is no requirement that the interests of the controller be laid down by law. While the CJEU decided not to answer the specific preliminary questions received from the Amsterdam district court, the attitude of the CJEU is clear: "legitimate interests" can serve purely commercial interests.[1]

Setting the scene: AP’s historic viewpoint on legitimate interest

The AP has applied its narrow interpretation of the concept of legitimate interests for many years.[2] This position is also reflected in the AP's enforcement actions, including:

  • Royal Dutch Tennis Association (KNLTB): The case underlying the Amsterdam district court's referral began in 2019, when the AP imposed a fine of EUR 525,000 on KNLTB for unlawfully sharing personal data of its members with two sponsors for marketing purposes, in return for payment. The AP concluded that the KNLTB could not rely on its legitimate interests as the interest was solely of a commercial nature. According to the AP, legitimate interests must "belong to the law, being lawful, enshrined in a law", and the interests claimed by KNLTB did not meet this standard.
  • VoetbalTV: In 2020, the AP also issued a fine of EUR 575,000 to VoetbalTV for processing personal data on the basis of a purely commercial legitimate interest – please see our previous blog here.

The AP has consistently upheld its interpretation despite heavy criticism, including from the European Commission, which raised its concerns about the AP's strict interpretation in an open letter. According to the European Commission, the interpretation severely limits businesses' possibilities of processing personal data for commercial interests.

The Dutch courts have also weighed in. In the VoetbalTV case, the District Court of Midden-Nederland (perhaps boldly) concluded that the AP had misinterpreted the concept of legitimate interest. The court ruled that the fact that VoetbalTV has a commercial interest does not mean that it has no legitimate interest, and that excluding a particular interest as a legitimate interest in advance is contrary to European case law. However, on appeal, the Netherlands' highest appellate court was unable to 'resolve' this difference of opinion, on the basis that there were other relevant legitimate interests that were not exclusively commercial in nature. Hence, while the court ruled in favour of VoetbalTV, the judgment did not yet clarify the use of legitimate interests as a legal basis for purely commercial interests.

CJEU judgment of 4 October 2024

The KNLTB case was brought before the Amsterdam district court[3], which turned to the CJEU for guidance in September 2022. The court referred preliminary questions to the CJEU asking how the term "legitimate interest" should be interpreted. Should it be interpreted so as to include only interests established in law, or can any interest be a legitimate interest provided that it does not conflict with the law? More specifically: can a purely commercial interest, and the interest at issue in the case at hand (i.e., sharing personal data with a third party against payment without the consent of the data subject), be regarded as a legitimate interest? And if so, under what circumstances?

The CJEU chose not to answer these direct questions and reframed them based on the facts of the KNLTB case. However, regardless of this approach, the view of the CJEU is clear.

The CJEU reiterates the 3-step test to assess whether legitimate interest can be used as a lawful basis: (1) a legitimate interest must be pursued, (2) there must be a need to process the personal data, and (3) the interests or fundamental freedoms and rights of the person(s) concerned must not override the legitimate interests.

The CJEU's judgement on the first step is most crucial to the above-mentioned debate. At this step, the CJEU ruled that (i) a wide range of interests may be regarded as legitimate; and (ii) there is no requirement that the interest be provided for by law (although, of course, the legitimate interest must be lawful). Hence, commercial interests, such as the interest pursued by KNLTB, can also be legitimate interests.

It is now up to the Amsterdam district court to assess, based on the specifics of the KNLTB case, whether such a legitimate interest exists in that case and whether the second and third conditions are also met. In our view, given the facts of the KNLTB case and the CJEU’s remarks on the second and third conditions, it is unlikely that the commercial interests of KNLTB will pass the 3-step test. However, this does not detract from the (long awaited) clarification on the scope of what can be considered a “legitimate interest”.

Impact on Dutch businesses

The CJEU judgment marks the final act in the AP's heavily criticized strict interpretation of "legitimate interests". While this judgment will not save VoetbalTV (which went bankrupt during its dispute with the AP) and might not save KNLTB either, it is a welcome development for other Dutch businesses.

For any questions about this decision or assistance with assessing legitimate interests, please contact Richard van Schaik (Partner), Francesca Pole (Senior Associate) or Demi Rietveld (Associate), or your local DLA Piper contact.


[1] Judgement CJEU 4 October 2024, C-621/22 (Koninklijke Nederlandse Lawn Tennisbond v Autoriteit Persoonsgegevens)

[2] AP-normuitleg grondslag gerechtvaardigd belang | Autoriteit Persoonsgegevens.

[3] ECLI:NL:RBAMS:2022:5565, Rechtbank Amsterdam, 20/4850 (rechtspraak.nl)

EU: Data Act Frequently Asked Questions answered by the EU Commission https://privacymatters.dlapiper.com/2024/09/data-act-frequently-asked-questions-answered-by-the-eu-commission/ Mon, 23 Sep 2024 16:09:32 +0000

The EU Data Act is one of the cornerstones of the EU's Data Strategy and introduces a new and horizontal set of rules on data access and use to boost the EU's data economy. Most of the provisions of the Data Act will become applicable as of 12 September 2025. To assist stakeholders in the implementation, the European Commission recently published a fairly extensive FAQ document. In particular, the FAQs contain clarifications in relation to data in scope of the Act; overlap with other data protection laws and EU legislation; implementation of IoT data sharing; and transfer restrictions.

Our article providing a summary of the key takeaways from the FAQs is available here.

For more information on how DLA Piper can support with the Data Act and other recent EU digital regulations, please refer to our EU Digital Decade website.
