US: Department of Justice issues final rule restricting the transfer of Sensitive Personal Data and United States Government-Related Data to “countries of concern”
https://privacymatters.dlapiper.com/2025/04/us-department-of-justice-issues-final-rule-restricting-the-transfer-of-sensitive-personal-data-and-united-states-government-related-data-to-countries-of-concern/
Wed, 16 Apr 2025

On April 8, 2025, the Department of Justice’s final rule, implementing the Biden-era Executive Order 14117 restricting the transfer of Americans’ Sensitive Personal Data and United States Government-Related Data to countries of concern (the “Final Rule“), came into force. The Final Rule imposes new requirements on US companies when transferring certain types of personal data to designated countries of concern or covered persons.

Executive Order 14117, and the implementing Final Rule, are intended to address the threat of foreign powers and state-sponsored threat actors using Americans’ sensitive personal data for malicious purposes. The Final Rule sets out the conditions under which a bulk transfer of sensitive personal data or US government-related data to a country of concern or covered person will be permitted, restricted or prohibited.

The Final Rule reflects heightened scrutiny from the US government of bulk cross-border data transfers which may pose a risk to US national interests, and a tightening of compliance requirements on US companies to protect sensitive personal data and government-related data when engaging with these countries, or with persons connected to them.

Scope of the Final Rule

The key elements determining the applicability and scope of the Final Rule, when applied to a data transaction by a US entity, are:

  • Countries of Concern: The Final Rule designates six countries as countries of concern: (1) China (including Hong Kong SAR and Macau SAR), (2) Cuba, (3) Iran, (4) North Korea, (5) Russia, and (6) Venezuela. The transfer of sensitive data to Covered Persons within these jurisdictions could therefore be captured.
  • Covered Persons: The Final Rule defines four classes of covered persons as the transacting party that will require additional scrutiny: (1) foreign entities that are 50% or more owned by a country of concern, organized under the laws of a country of concern, or have their principal place of business in a country of concern; (2) foreign entities that are 50% or more owned by a covered person; (3) foreign employees or contractors of countries of concern or entities that are covered persons; and (4) foreign individuals primarily resident in countries of concern.
  • Sensitive Personal Data: The Final Rule regulates transactions involving six categories of sensitive personal data: (1) certain covered personal identifiers; (2) precise geolocation data; (3) biometric identifiers; (4) human genomic data and three other types of human ‘omic data (epigenomic, proteomic, or transcriptomic); (5) personal health data; and (6) personal financial data.
  • Bulk Sensitive Personal Data: Within these Sensitive Personal Data categories, different volume thresholds apply to the data being transferred, and these thresholds determine the applicability of the Final Rule to the transaction. The prohibitions and restrictions apply to covered data transactions involving sensitive personal data exceeding certain thresholds over the 12 months preceding the transaction. For example, compliance requirements for the transfer of precise geolocation data are not triggered unless location data from over 1,000 US persons or devices is being transferred. By contrast, the threshold for covered personal identifiers (such as social security numbers) is only met where data relating to over 100,000 US persons is transferred. The definition of ‘bulk’, and how it applies across the categories of personal data, is therefore key; a simplified illustration follows this list.
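To make the threshold mechanics above more concrete, the following is a minimal, illustrative sketch of a first-pass ‘bulk’ check. It encodes only the two thresholds quoted in this post (precise geolocation data and covered personal identifiers); the category labels are shorthand chosen here for illustration, and the thresholds for the remaining categories would need to be confirmed against the text of the Final Rule.

```python
# Illustrative sketch only -- not a statement of the Final Rule's full threshold scheme.
# Counts are aggregated over the 12 months preceding the transaction, per the rule's approach.

# Thresholds quoted in this post; other Sensitive Personal Data categories have their
# own thresholds under the Final Rule and are deliberately not encoded here.
BULK_THRESHOLDS = {
    "precise_geolocation": 1_000,             # US persons or devices
    "covered_personal_identifiers": 100_000,  # US persons
}

def exceeds_bulk_threshold(category: str, count_preceding_12_months: int) -> bool:
    """Return True where the aggregated volume crosses the 'bulk' threshold for the
    category, meaning the transaction needs closer review under the Final Rule."""
    if category not in BULK_THRESHOLDS:
        raise ValueError(f"No threshold encoded for category: {category}")
    return count_preceding_12_months > BULK_THRESHOLDS[category]

# Examples
print(exceeds_bulk_threshold("precise_geolocation", 1_500))            # True
print(exceeds_bulk_threshold("covered_personal_identifiers", 50_000))  # False
```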

Prohibited or restricted transactions?

Alongside these key elements, the Final Rule provides that the type of transaction under which the data is being transferred will determine whether the transaction is restricted, prohibited or exempt from scrutiny. A transaction falling into the restricted category imposes new, additional compliance requirements on US companies before the transaction can proceed.

The Final Rule prohibits transactions involving (1) data brokerage (i.e., “the sale of data, licensing of access to data, or similar commercial transactions involving the transfer of data”), and (2) covered data transactions involving access to bulk human ‘omic data or human biospecimens from which such data can be derived. The outright prohibition on data brokerage agreements with countries of concern is extended further, with the Final Rule also requiring US persons to contractually ensure that data brokerage transactions with other foreign persons, who are not countries of concern or covered persons, do not enable the transfer of the same data to countries of concern under subsequent arrangements. This additional safeguard on data brokerage where sensitive personal data is involved underlines the requirement for sufficient due diligence with overseas partners.

Vendor, employment, and non-passive investment agreements are captured as restricted transactions. These transactions are permitted if they meet certain security requirements developed by the Cybersecurity and Infrastructure Security Agency (CISA).

Finally, certain categories of data transaction are exempt from any compliance requirements under the Final Rule, illustrating the practical intention of the requirements. These categories include (but are not limited to) personal communications that do not transfer anything of value, ordinary corporate group transactions between a U.S. person and its foreign subsidiary or affiliate, and transactions ordinarily incident to and part of providing financial services.

Compliance obligations

CISA requirements detail the types of cybersecurity, data retention, encryption and anonymisation policies, alongside other measures, that can be adopted by US companies in order to bring a restricted transaction into compliance, ensuring the safety of sensitive personal data.

An enhanced due diligence exercise is therefore expected when seeking to transact with covered persons, where the bulk transfer of sensitive personal data is a possibility. Key features of this include the implementation of a data compliance program, including comprehensive policies, procedures and record keeping surrounding data involved in a restricted transaction, as well as the completion of third-party audits to monitor compliance with the Final Rule. Finally, reporting is expected when engaging in restricted transactions, demonstrating the depth of US government oversight and interest in these transactions.

FAQs, Compliance Guide and Enforcement Policy

On April 11, 2025, the Department of Justice published answers to Frequently Asked Questions, a Compliance Guide, and an Implementation and Enforcement Policy for the first 90 days of the Final Rule (i.e., through July 8, 2025).

  • Compliance Guide. The Compliance Guide aims to provide ‘general information’ to assist individuals and entities in complying with the Data Security Program (“DSP”), established by the Department of Justice’s National Security Division to implement the Final Rule and Executive Order 14117. The Compliance Guide includes guidance on a number of different areas, including key definitions, steps that organizations should take to comply with the Final Rule, model contract language, and prohibited and restricted data transactions.
  • FAQs. The Department of Justice has provided answers to more than 100 FAQs, which aim to provide high level clarifications about Executive Order 14117 and the DSP, including, for example, answers to questions in relation to the scope of the DSP; the effective date of the Final Rule; definitions; exemptions; and enforcement and penalties.
  • Implementation and Enforcement Policy for the First 90 Days (the Policy): The Policy states that during the first 90 days, enforcement will be limited “to allow U.S. persons (e.g., individuals and companies) additional time to continue implementing the necessary changes to comply with the DSP“. Specifically, the Policy is clear that there will be limited civil enforcement actions against any person for violations of the DSP that occur from April 8 through July 8, 2025 “so long as the person is engaging in good faith efforts to comply with or come into compliance with the DSP during that time”. The Policy provides examples of ‘good faith efforts’, including: conducting internal reviews of access to sensitive personal data; renegotiating vendor agreements or negotiating contracts with new vendors; transferring products and services to new vendors; implementing CISA security requirements; adjusting employee work locations, roles or responsibilities; and evaluating investments from countries of concern or covered persons. The Policy also states that at “the end of this 90-day period, individuals, and entities should be in full compliance with the DSP.”

Next steps

Whilst certain due diligence, auditing, and reporting obligations will not become effective until October 2025, preparation for effective oversight and compliance with the CISA requirements can begin now. In particular, organisations should assess current compliance measures in place to identify potential compliance gaps and establish controls to address those gaps, in order to be able to demonstrate that they are engaging in “good faith efforts.” DLA Piper can advise on a review of current policies and procedures and preparing effectively for transactions that may fall within the Final Rule.

UK: Consultation on Ransomware payments
https://privacymatters.dlapiper.com/2025/01/uk-consultation-on-ransomware-payments/
Thu, 23 Jan 2025

On 14 January 2025, the UK Home Office published a consultation paper focusing on legislative proposals to reduce payments to cyber criminals and to increase incident reporting.

The proposals set out in the consultation paper aim to protect UK businesses, citizens, and critical infrastructure from the growing threat of ransomware by reducing the financial incentives for criminals targeting UK organisations and by improving intelligence and understanding of ransomware to support the overall resilience of the UK’s cyber defences.

Summary of key proposals

The consultation sets out three key proposals:

  1. A targeted ban on ransomware payments – a targeted ban on ransomware payments for all public sector bodies (including local government) and critical national infrastructure (CNI) owners and operators. This proposal goes beyond the current principle that central government departments cannot make ransomware payments – by prohibiting all organisations in the UK public sector from making a payment to cyber criminals in response to a ransomware incident, as well as including CNI owners and operators. The aim of the proposal is to deter criminals by ensuring they cannot profit from attacking essential services. However, the possible impact of this is unclear and the government is seeking input on whether suppliers to such bodies/entities should also be included. The prohibition of ransomware payments by public sector bodies and critical national infrastructure may have a deterrent effect, assuming the threat actors in question are motivated by financial purposes, but a failure to include the supply chain would likely simply shift the threat actors’ focus downstream. However, inclusion of the entire chain could be extremely far reaching, particularly where such vendors provide products/services across multiple sectors.

    It is also not clear how this proposal will be enforced in practice, and the government is seeking views on appropriate measures to support compliance. The consultation includes a number of possible measures, ranging from criminal penalties (such as making non-compliance with the ban a criminal offence) to civil penalties (such as a monetary penalty or a ban on being a member of a board).
  2. A new ransomware payment prevention regime – requiring all victims, including those not within the scope of the ban, to “engage with the authorities and report their intention to make a ransomware payment before paying over any money to the criminals“. After the report is made, the potential victim would receive support and guidance including the discussion of non-payment resolution options. Under the proposals, the authorities would review the proposed payment to see if there is a reason it needs to be blocked (e.g. known terrorist organisations). If the proposed payment is not blocked, it would be a matter for the victim whether to proceed. Input is sought on the best measures for encouraging compliance with this regime, as well as what additional support and/or guidance should be provided – possibly building on existing collaboration between the National Cyber Security Centre (NCSC) and the Information Commissioner’s Office (ICO).
  3. A ransomware incident reporting regime – a mandatory ransomware incident reporting regime, which could include a threshold-based requirement for suspected victims to report incidents, enhancing the government’s understanding and response capabilities. Input is sought on whether this should be economy wide, or only apply to organisations/individuals meeting a certain threshold. The consultation proposes that organisations will have 72 hours to provide an initial report of the incident and then 28 days to provide the full report. It is unclear how these reporting requirements will align with existing incident reporting obligations, however, the government has stated that the intent is to ensure that “UK victims are only required to report an individual ransomware incident once, as far as possible“. A simple illustration of the proposed reporting timelines follows this list.
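As a simple illustration of the proposed reporting timelines, the sketch below computes the two deadlines from a single reference point. It assumes, for illustration only, that both clocks run from the moment the victim becomes aware of the incident; the consultation does not yet settle how the trigger point would be defined.

```python
# Illustrative sketch only: the 72-hour initial report and 28-day full report
# windows proposed in the consultation, both assumed (for illustration) to run
# from the time the victim becomes aware of the incident.
from datetime import datetime, timedelta

def proposed_reporting_deadlines(awareness_time: datetime) -> dict:
    """Return the proposed initial and full report deadlines for a ransomware incident."""
    return {
        "initial_report_due": awareness_time + timedelta(hours=72),
        "full_report_due": awareness_time + timedelta(days=28),
    }

# Example: incident discovered at 09:00 on 1 March 2025
print(proposed_reporting_deadlines(datetime(2025, 3, 1, 9, 0)))
```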

These proposals, if implemented in their broadest form, will pose a significant challenge for any business impacted by a ransomware incident, requiring mandatory reporting of such incidents, as well as a need to wait for guidance from authorities before making any payments.  This is likely to be particularly problematic where threat actors are imposing deadlines for payment and could lead to significant disruptions to essential services where a ransomware attack has occurred and payment is not possible. The impact of the proposals on organisations not subject to the ban is also unclear, particularly in relation to reporting and disclosure requirements and how these will align with incident/breach notification obligations.

The consultation closes on 8 April 2025.

EU: DLA Piper GDPR Fines and Data Breach Survey: January 2025
https://privacymatters.dlapiper.com/2025/01/eu-dla-piper-gdpr-fines-and-data-breach-survey-january-2025/
Tue, 21 Jan 2025

The seventh annual edition of DLA Piper’s GDPR Fines and Data Breach Survey has revealed another significant year in data privacy enforcement, with an aggregate total of EUR1.2 billion (USD1.26 billion/GBP996 million) in fines issued across Europe in 2024.

Ireland once again remains the preeminent enforcer, issuing EUR3.5 billion (USD3.7 billion/GBP2.91 billion) in fines since May 2018, more than four times the value of fines issued by the second-placed Luxembourg Data Protection Authority, which has issued EUR746.38 million (USD784 million/GBP619 million) in fines over the same period.

The total fines reported since the application of GDPR in 2018 now stand at EUR5.88 billion (USD 6.17 billion/GBP 4.88 billion). The largest fine ever imposed under the GDPR remains the EUR1.2 billion (USD1.26 billion/GBP996 million) penalty issued by the Irish DPC against Meta Platforms Ireland Limited in 2023.

Trends and Insights

In the year from 28 January 2024, EUR1.2 billion in fines was imposed. This was a 33% decrease compared to the aggregate fines imposed in the previous year, bucking the seven-year trend of increasing enforcement. This does not represent a shift in focus away from personal data enforcement; the clear year-on-year trend remains upwards. This year’s reduction is almost entirely due to the record-breaking EUR1.2 billion fine against Meta falling in 2023, which skewed the 2023 figures. There was no record-breaking fine in 2024.

Big tech companies and social media giants continue to be the primary targets for record fines, with nearly all of the top 10 largest fines since 2018 imposed on this sector. This year alone the Irish Data Protection Commission issued fines of EUR310 million (USD326 million/GBP257 million) against LinkedIn and EUR251 million (USD264 million/GBP208 million) against Meta.  In August 2024, the Dutch Data Protection Authority issued a fine of EUR290 million (USD305 million/GBP241 million) against a well-known ride-hailing app in relation to transfers of personal data to a third country. 

2024 enforcement expanded notably in other sectors, including financial services and energy. For example, the Spanish Data Protection Authority issued two fines totalling EUR6.2 million  (USD6.5 million/GBP5.1 million) against a large bank for inadequate security measures, and the Italian Data Protection Authority fined a utility provider EUR5 million (USD5.25 million/GBP4.15 million) for using outdated customer data.

The UK was an outlier in 2024, issuing very few fines. The UK Information Commissioner John Edwards was quoted in the British press in November 2024 as saying that he does not agree that fines are likely to have the greatest impact and that they would tie his office up in years of litigation. It is an approach unlikely to catch on in the rest of Europe.

The dawn of personal liability

Perhaps most significantly, a focus on governance and oversight has led to a number of enforcement decisions citing failings in these areas and specifically calling out failings of management bodies. Most notably, the Dutch Data Protection Authority announced it is investigating whether it can hold the directors of Clearview AI personally liable for numerous breaches of the GDPR, following a EUR30.5 million (USD32.03 million/GBP25.32 million) fine against the company. This novel investigation into the possibility of holding Clearview AI’s management personally liable for continued failings of the company signals a potentially significant shift in focus by regulators, who recognise the power of personal liability to focus minds and drive better compliance.

Data Breach Notifications

The average number of breach notifications per day increased slightly to 363 from 335 last year, a ‘levelling off’ consistent with previous years, likely indicative of organisations becoming more wary of reporting data breaches given the risk of investigations, enforcement, fines and compensation claims that may follow notification. 

A recurring theme of DLA Piper’s previous annual surveys is that there has been little change at the top of the tables regarding the total number of data breach notifications made since the GDPR came into force on 25 May 2018 and during the most recent full year from 28 January 2024 to 27 January 2025. The Netherlands, Germany, and Poland remain the top three countries for the highest number of data breaches notified, with 33,471, 27,829 and 14,286 breaches notified respectively.

AI enforcement

There have been a number of decisions this year signalling the intent of data protection supervisory authorities to closely scrutinise the operation of AI technologies and their alignment with privacy and data protection laws. For businesses, this highlights the need to integrate GDPR compliance into the core design and functionality of their AI systems.

Commenting on the survey findings, Ross McKean, Chair of the UK Data, Privacy and Cybersecurity practice said:

“European regulators have signalled a more assertive approach to enforcement during 2024 to ensure that AI training, deployment and use remains within the guard rails of the GDPR.”

We expect this trend to continue during 2025 as US AI technology comes up against European data protection laws.

John Magee, Global Co-Chair of DLA Piper’s Data, Privacy and Cybersecurity practice commented:

“The headline figures in this year’s survey have, for the first time ever, not broken any records so you may be forgiven for assuming a cooling of interest and enforcement by Europe’s data regulators. This couldn’t be further from the truth. From growing enforcement in sectors away from big tech and social media, to the use of the GDPR as an incumbent guardrail for AI enforcement as AI specific regulation falls into place, to significant fines across the likes of Germany, Italy and the Netherlands, and the UK’s shift away from fine-first enforcement – GDPR enforcement remains a dynamic and evolving arena.”

Ross McKean added:

“For me, I will mostly remember 2024 as the year that GDPR enforcement got personal.”

“As the Dutch DPA champions personal liability for the management of Clearview AI, 2025 may well be the year that regulators pivot more to naming and shaming and personal liability to drive data compliance.”

EU: EDPB Opinion on AI Provides Important Guidance though Many Questions Remain
https://privacymatters.dlapiper.com/2025/01/eu-edpb-opinion-on-ai-provides-important-guidance-though-many-questions-remain/
Tue, 14 Jan 2025

A much-anticipated Opinion from the European Data Protection Board (EDPB) on AI models and data protection has not resulted in the clear or definitive guidance that businesses operating in the EU had hoped for. The Opinion emphasises the need for case-by-case assessments to determine GDPR applicability, highlighting the importance of accountability and record-keeping, while also flagging ‘legitimate interests’ as an appropriate legal basis under specific conditions. In rejecting the proposed Hamburg thesis, the EDPB has stated that AI models trained on personal data should be considered anonymous only if personal data cannot be extracted or regurgitated.

Introduction

On 17 December 2024, the EDPB published a much-anticipated Opinion on AI models and data protection.  The Opinion includes the EDPB’s view on the following key questions: does the development and use of an AI model involve the processing of personal data; and if so, what is the correct legal basis for that processing?

As is sometimes the case with EDPB Opinions, which necessarily represent the consensus view of the supervisory authorities of 27 different Member States, the Opinion does not provide many clear or definitive answers.  Instead, the EDPB offers indicative guidance and criteria, calling for case-by-case assessments of AI models to understand whether, and how, they are impacted by the GDPR.  In this context, the Opinion repeatedly highlights the importance of accountability and record-keeping by businesses developing or using AI, so that the applicability of data protection laws, and the business’ compliance with those laws, can be properly assessed. 

Whilst the equivocation of the Opinion might be viewed as unhelpful by European businesses looking for regulatory certainty, it is also a reflection of the complexities inherent in this intersection of law and technology.

In summary, the answers given by the EDPB to the four questions in the Opinion are as follows:

  1. Can an AI model, which has been trained using personal data, be considered anonymous?  Yes, but only in some cases.  It must be impossible, using all means reasonably likely to be used, to obtain personal data from the model, either through attacks which aim to extract the original training data from the model itself, or through interactions with the AI model (i.e., personal data provided in responses to prompts / queries). 
  2. Is ‘legitimate interests’ an appropriate legal basis for the training and development of an AI model? In principle yes, but only where the processing of personal data is necessary to develop the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  In particular, the issue of data minimisation, and the related issue of web-scraping / indiscriminate capture of data, will be relevant here. 
  3. Is ‘legitimate interests’ an appropriate legal basis for the deployment of an AI model? In principle yes, but only where the processing of personal data is necessary to deploy the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  Here, the impact on the data subject of the use of the AI model is of predominant importance.
  4. If an AI Model has been found to have been created, updated or developed using unlawfully processed personal data, how does this impact the subsequent use of that AI model?  This depends in part on whether the AI model was first anonymised before being disclosed to the deployer of that model (see Question 1).  Otherwise, the deployer of the model may need to assess the lawfulness of the development of the model as part of its accountability obligations.

Background

The Opinion was issued by the EDPB under Article 64 of the GDPR, in response to a request from the Irish Data Protection Commission.  Article 64 requires the EDPB to publish an opinion on matters of ‘general application’ or which ‘produce effects in more than one Member State’. 

In this case, the Irish DPC asked the EDPB to provide an opinion on the above-mentioned questions – a request that is not surprising given the general importance of AI models to businesses across the EU, but also in light of the large number of technology companies developing those models who have established their European operations in Ireland. 

In order to understand the Opinion, it helps to be familiar with certain concepts and terminology relating to AI. 

First, the Opinion distinguishes between an ‘AI system’ and an ‘AI model’. For the former, the EDPB relies on the definition given in the EU AI Act. In short: a machine-based system operating with some degree of autonomy that infers, from inputs, how to produce outputs such as predictions, content, recommendations, or decisions.  An AI model, meanwhile, is a component part of an AI system. Colloquially, it is the ‘brain’ of the AI system – an algorithm, or series of algorithms (such as in the form of a neural network), that recognises patterns in data. AI models require the addition of further components, such as a user interface, to become AI systems. To take a common example – the generative AI system known as ChatGPT is a software application comprised of an AI model (the GPT Large Language Model) connected to a chatbot-style user interface that allows the user to submit queries (or ‘prompts’) to the model in the form of natural language questions. Whilst the Opinion is notionally concerned only with AI models, at times the Opinion appears to blur the distinction between the model and the system, in particular when discussing the significance of model outputs that are only rendered comprehensible to the user through an interface that sits outside of the model.

Second, the Opinion relies on an understanding of a typical ‘AI lifecycle’, pursuant to which an AI model is first developed by training the model on large volumes of data.  This training may happen in a number of phases which become increasingly refined (referred to as ‘fine-tuning’). Only after an AI model is developed can it be used, or ‘deployed’, in a live setting, as part of an AI system.  Often, the developer of an AI model will not be the same person as the deployer.  This is relevant because the Opinion variously addresses both development and deployment phases.

The significance of the ‘Hamburg thesis’

With respect to the key question of whether AI models can be considered anonymous, the Opinion follows in the wake of a much-discussed paper published in July 2024 by the data protection authority for the German state of Hamburg.  The paper took the position that AI models (specifically, Large Language Models) are, in isolation, anonymous – they do not involve the processing of personal data. 

In order to reach that conclusion, the paper decoupled the model itself from: (i) the prior training of the model (which may involve the collection and further processing of personal data as part of the training dataset); and (ii) the subsequent use of the model, whereby a prompt/input may contain personal data, and an output may be used in a way that means it constitutes personal data.

Looking only at the AI model itself, the paper decided that the tokens and values which make up the ‘inner workings’ of a typical AI model do not, in any meaningful way, relate to or correspond with information about identifiable individuals.  Consequently, the model itself was found to be anonymous, even if the development and use of the model involves the processing of personal data. 

The Hamburg thesis was welcomed for several reasons, not least because it resolved difficult questions such as how data subject rights could be understood in relation to an AI model (if someone asks for their personal data to be deleted, then what can this mean in the context of an AI model?), and the question of the lawful basis for ‘storing’ personal data in an AI model (as distinct from the lawful basis for collecting and preparing data to train the model).

However, as we go on to explain, the EDPB Opinion does not follow the relatively simple and certain framework presented by the Hamburg thesis.  Instead, it introduces uncertainty by asserting that there are, in fact, scenarios where an AI model contains personal data, but that this must be determined on a case-by-case basis.

Are AI models anonymous?

First, the Opinion is only concerned with AI models that have been trained using personal data.  Therefore, AI models trained using solely non-personal data (such as statistical data, or financial data relating to businesses) can, for the avoidance of doubt, be considered anonymous.  However, in this context the broad scope of ‘personal data’ under the GDPR must be remembered, and the Opinion does not suggest any de minimis level of personal data that needs to be involved in the training of the AI model for the question of GDPR applicability to arise.

Where personal data is used in the training phase, the next question is whether the model is specifically designed to provide personal data regarding individuals whose personal data were used to train the model.  If so, the AI model will not be anonymous.  For example, an AI model that is trained to provide a user, on request, with biographical information and contact details for directors of public companies, or a generative AI model that is trained on the voice recordings of famous singers so that it can, in turn, mimic the voices of those singers.  In each case, the model is trained on personal data of specific individuals, in order to be able to produce other personal data about those individuals as an output. 

Finally, there is the intermediary case of AI models that are trained on personal data, but that are not designed to provide personal data related to the training data as an output.  It is this use case that the Opinion focuses on.  The conclusion is that AI models in this category may be anonymous, but only if the developer of the model can demonstrate that information about individuals whose personal data was used to train the model cannot be ‘obtained from’ the model, using all means reasonably likely to be used.  Notwithstanding that personal data used for training the model no longer exists within the model in its original form (but rather it is “represented through mathematical objects“), that information is, in the eyes of the EDPB, still capable of constituting personal data.

The following question then arises: how does someone ‘obtain’ personal data from an AI model? In short, the Opinion posits two possibilities.  First, that training data is ‘extracted’ via deliberate attacks.  The Opinion refers to an evolving field of research in this area and makes reference to techniques such as ‘model inversion’, ‘reconstruction attacks’, and ‘attribute and membership inference’.  These are techniques that can be deployed to trick the model into revealing training data, or otherwise reconstruct that training data, in some cases relying on privileged access to the model itself.  Second, is the risk of accidental or inadvertent ‘regurgitation’ of personal data as part of an AI model’s outputs. 

Consequently, a developer must be able to demonstrate that its AI model is resistant both to attacks that extract personal data directly from the model, as well as to the risk of regurgitation of personal data in response to queries:  “In sum, the EDPB considers that, for an AI model to be considered anonymous, using reasonable means, both (i) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to train the model; as well as (ii) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant for any data subject“. 

Which criteria should be used to evaluate whether an AI model is anonymous?

Recognising the uncertainty in its conclusion that AI models may or may not be anonymous, the EDPB provides a list of criteria that can be used to assess the likelihood of a model being found to contain personal data.  These include:

  • Steps taken to avoid or limit the collection of personal data during the training phase.
  • Data minimisation or masking measures (e.g., pseudonymisation) applied to reduce the volume and sensitivity of personal data used during the training phase.
  • The use of methodologies during model development that reduce privacy risks (e.g., regularisation methods to improve model generalisation and reduce overfitting, and appropriate and effective privacy-preserving techniques, such as differential privacy – see the illustrative sketch after this list).
  • Measures that reduce the likelihood of obtaining personal data from queries (e.g., ensuring the AI system blocks the presentation to the user of outputs that may contain personal data).
  • Document-based audits (internal or external) undertaken by the model developer that include an evaluation of the chosen measures and of their impact to limit the likelihood of identification.
  • Testing of the model to demonstrate its resilience to different forms of data extraction attacks.
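By way of illustration of one of these criteria, the sketch below shows the core mechanism behind a privacy-preserving training technique the EDPB mentions, differential privacy: clipping each individual’s gradient contribution and adding calibrated noise so that no single data subject has an outsized influence on the trained model. The clipping norm and noise multiplier are arbitrary example values, not recommendations, and a real deployment would rely on an audited DP library and a formal privacy accountant.

```python
# Illustrative sketch only: a DP-SGD-style update step in plain NumPy, showing
# per-example gradient clipping plus Gaussian noise. Parameter values are
# arbitrary examples; they are not calibrated privacy guarantees.
import numpy as np

def dp_gradient_step(per_example_grads: np.ndarray,
                     clip_norm: float = 1.0,
                     noise_multiplier: float = 1.1,
                     rng: np.random.Generator = None) -> np.ndarray:
    """Clip each example's gradient to clip_norm, average, and add noise scaled
    to the clipping norm -- bounding any one individual's influence on the model."""
    if rng is None:
        rng = np.random.default_rng(0)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    mean_grad = clipped.mean(axis=0)
    noise_std = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, noise_std, size=mean_grad.shape)

# Example: a batch of 8 per-example gradients over 4 model parameters
batch_grads = np.random.default_rng(1).normal(size=(8, 4))
print(dp_gradient_step(batch_grads))
```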

What is the correct legal basis for AI models?

When using personal data to train an AI model, the preferred legal basis is normally the ‘legitimate interests’ of the controller, under Article 6(1)(f) GDPR. This is for practical reasons. Whilst, in some circumstances, it may be possible to obtain GDPR-compliant consent from individuals authorising the use of their data for AI training purposes, in most cases this will not be feasible. 

Helpfully, the Opinion accepts that legitimate interests is, in principle, a viable legal basis for processing personal data to train an AI model. Further, the Opinion also suggests that it should be straightforward for businesses to identify a lawful legitimate interest. For example, the Opinion cites “developing an AI system to detect fraudulent content or behaviour” as a sufficiently precise and real interest. 

However, where businesses may have more difficulty is in showing that the processing of personal data is necessary to realise their legitimate interest, and that their legitimate interest is not outweighed by any impact on the rights and freedoms of data subjects (the ‘balancing test’). Whilst this is fundamentally just a restatement of existing legal principles, the following sentence should nevertheless cause some concern for businesses developing AI models, in particular Large Language Models: “If the pursuit of the purpose is also possible through an AI model that does not entail processing of personal data, then processing personal data should be considered as not necessary“. Technically speaking, it may often be the case that personal data is not essential for the training of an AI model – however, this does not mean that it is straightforward to systematically remove all personal data from a training dataset, or otherwise replace all identifying elements with ‘dummy’ values. 

With respect to the balancing test, the EDPB asks businesses to consider a data subject’s interest in self-determination and in maintaining control over their own data when considering whether it is lawful to collect personal data for model training purposes.  In particular, it may be more difficult to satisfy the balancing test if a developer is scraping large volumes of personal data (especially including any sensitive data categories) against their wishes, without their knowledge, or otherwise in contexts that would not be reasonably expected by the data subject. 

When it comes to the separate purpose of deploying an AI model, the EDPB asks businesses to consider the impact on the data subject’s fundamental rights that arises from the purpose for which the AI model is used.  For example, AI models that are used to block content publication may adversely affect a data subject’s fundamental right to freedom of expression.  However, conversely, the EDPB recognises that the deployment of AI models may have a positive impact on a data subject’s rights and freedoms – for example, an AI model that is used to improve accessibility to certain services for people with disabilities. In line with Recital 47 GDPR, the EDPB reminds controllers to consider the ‘reasonable expectations’ of data subjects in relation to both training and deployment uses of personal data.

Finally, the Opinion discusses a range of ‘mitigating measures’ that may be used to reduce risks to data subjects and therefore tip the balancing test in favour of the controller.  These include:

  • Technical measures to reduce the volume or sensitivity of personal data at use (e.g., pseudonymisation, masking).
  • Measures to facilitate the exercise of data subject rights (e.g., providing an unconditional right for data subjects to opt-out of the use of their personal data for training or deploying the model; allowing a reasonable period of time to elapse between collection of training data and its use).
  • Transparency measures (e.g., public communications about the controller’s practices in connection with the use of personal data for AI model development).
  • Measures specific to web-scraping (e.g., excluding publications that present particular risks; excluding certain data categories or sources; excluding websites that clearly object to web scraping).

Notably, the EDPB observes that, to be effective, these mitigating measures must go beyond mere compliance with GDPR obligations (for example, providing a GDPR compliant privacy notice, which a controller would in any case be required to do, would not be an effective transparency measure for these purposes). 

When are companies liable for non-compliant AI models?

In its final question, the DPC sought clarification from the EDPB on how a deployer of an AI model might be impacted by any unlawful processing of personal data in the development phase of the AI model. 

According to the EDPB, such ‘upstream’ unlawful processing may impact a subsequent deployer of an AI model in the following ways:

  • Corrective measures taken against the developer may have a knock-on effect on the deployer – for example, if the developer is ordered to delete personal data unlawfully collected for training purposes, the developer would not be allowed to subsequently process this data. However, this raises an important practical question about how such data could be identified in, and deleted from, the AI model, taking into account the fact that the model does not retain training data in its original form.
  • Unlawful processing in the development phase may impact the legal basis for the deployment of the model – in particular, if the deployer of the AI model is relying on ‘legitimate interests’, it will be more difficult to satisfy the balancing test in light of the deficiencies associated with the collection and use of the training data.

In light of these risks, the EDPB recommends that deployers take reasonable steps to assess the developer’s compliance with data protection laws during the training phase.  For example, can the developer explain the sources of data used, steps taken to comply with the minimisation principle, and any legitimate interest assessments conducted for the training phase?  For certain AI models, the transparency obligations imposed in relation to AI systems under the AI Act should assist a deployer in obtaining this information from a third party AI model developer. While the opinion provides a useful framework for assessing GDPR issues with AI systems, businesses operating in the EU may be frustrated with the lack of certainty or definitive guidance on many key questions relating to this new era of technology innovation.

EU: Cyber Resilience Act published in EU Official Journal
https://privacymatters.dlapiper.com/2024/11/eu-cyber-resilience-act-published-in-eu-official-journal/
Thu, 21 Nov 2024

On 20 November 2024, the EU Cyber Resilience Act (CRA) was published in the Official Journal of the EU, kicking off the phased implementation of the CRA obligations.

What is the CRA?

The CRA is a harmonising EU regulation, the first of its kind focusing on safeguarding consumers and businesses from cybersecurity threats.  It is a key element of the EU’s Cybersecurity Strategy for the Digital Decade.

The CRA is designed to fulfil a perceived gap in EU regulation and sets uniform cybersecurity standards for the design, development and production of hardware and software products with digital elements (PDEs) placed on the EU market – introducing mandatory requirements (e.g. relating to security vulnerabilities, and addressing transparency) for manufacturers and retailers, extending throughout the product lifecycle.  With few exceptions for specific categories, the CRA covers all products connected directly or indirectly to other devices or networks.

Scope of the CRA

The CRA applies to all economic operators of PDEs made available on the EU market. This includes:

  • manufacturers (and their authorised representatives);
  • importers;
  • distributors; and
  • any other natural or legal person subject to obligations in relation to the manufacture of PDEs or making them available on the market (including retailers).

The reach of the CRA is broad, covering all PDEs whose intended and reasonably foreseeable use includes a direct or indirect logical or physical data connection to a device or network.

A PDE is defined as “any software or hardware product and its remote data processing solutions, including software or hardware components to be placed on the market separately” (Article 3(1) CRA).

Remote data processing is defined as “any data processing at a distance for which the software is designed and developed by the manufacturer or under the responsibility of the manufacturer, and the absence of which would prevent the product with digital elements from performing one of its functions” (Article 3(2) CRA).

Whilst the usual example of in-scope products is smart devices, such as smartphones, this is complicated in respect of software products involving remote data processing solutions: the CRA supporting FAQ indicates that software which forms part of a service rather than a product is not intended to be covered.

It is therefore important to identify how products are provided – as software products with remote data solutions, or software which is part of a service. This analysis will need to take into account how the various ‘features’ making up each product are provided.

Manufacturers are broadly defined as “any natural or legal person who develops or manufactures products with digital elements or has products with digital elements designed, developed or manufactured, and markets them under his or her name or trademark, whether for payment or free of charge” (Article 3(13) CRA).

Exceptions:

The CRA excludes from its scope a limited number of products and/or fields which are considered to be already sufficiently regulated, including:

  • Products which are in conformity with harmonised standards and products certified under an EU cybersecurity scheme; and
  • Medical devices, aviation devices, and certain motor vehicle systems/components/technical units, to which existing certification regimes apply.

Obligations of economic operators

The primary objective of the CRA is to address a perception at EU institutional level of a poor level of cybersecurity and vulnerabilities in many software and hardware products on the market. The CRA also aims to address the lack of comprehensive information on the cybersecurity properties of digital products to enable consumers to make more informed choices when buying products. With this in mind, the CRA imposes a large number of obligations upon relevant economic operators, with the majority of obligations falling on “manufacturers” of PDEs.

Key obligations on manufactures under the CRA include:

  • When placing a PDE on the EU market, ensuring that it has been designed, developed and produced in accordance with the essential requirements set out in Section 1 of Annex I CRA. The high-level requirements set out in Annex I, Part 1 CRA include that products with digital elements “shall be designed, developed and produced in such a way that they ensure an appropriate level of cybersecurity”; that they ensure protection from unauthorised access by appropriate control mechanisms and protect the confidentiality and integrity of stored, transmitted or otherwise processed data; and that they are designed, developed and produced to limit attack surfaces, including external interfaces. These requirements may be clarified further, as the European Commission is authorised to adopt implementing acts establishing common specifications covering technical requirements that provide a means to comply with the essential requirements set out in Annex I CRA;
  • Undertake an assessment of the cybersecurity risks associated with a PDE, taking the outcome of that assessment into account during the planning, design, development, production, delivery and maintenance phases of the PDE, with a view to minimising cybersecurity risks, preventing security incidents and minimising the impacts of such incidents, including in relation to the health and safety of users;
  • Document and update the assessment of the cybersecurity risks associated with a PDE and take the outcome of that assessment into account during the planning, design, development, production, delivery and maintenance phases of the product with digital elements;
  • Exercise due diligence when integrating components sourced from third parties in PDEs and ensure that such components do not compromise the security of the PDE;
  • Document relevant cybersecurity aspects concerning the PDE, including vulnerabilities and any relevant information provided by third parties, and, where applicable, update the risk assessment of the product;
  • Put in place compliant vulnerability handling processes, including providing relevant security updates, for the duration of the support period (of, in principle, five years);
  • Report actively exploited vulnerabilities to the relevant Computer Security Incident Response Team (CSIRT) and the EU Agency for Cybersecurity (ENISA) without undue delay and in any event within 24 hours of becoming aware. The manufacturer must also inform the impacted users of the PDE (and, where appropriate, all users) in a timely manner about an actively exploited vulnerability or a severe incident and, where necessary, about risk mitigation and any corrective measures that they might deploy to mitigate the impact;
  • Perform (or have performed) a conformity assessment for PDEs to demonstrate compliance with obligations. Depending on the risk classification of the product in question there are different procedures and methods that may be applied, with products considered to be of particularly high risk being subject to stricter requirements. The procedures range from internal control measures to full quality assurance, with more stringent provisions introduced for products deemed “critical”, such as web browsers, firewalls and password managers (designated class I) and operating systems and CPUs (designated class II). These products will have to undergo specific conformity assessment procedures carried out by notified third-party bodies. For each of these procedures, the CRA contains checklists with specifications that must all be met in order to successfully pass. Manufacturers must also draw up an EU declaration of conformity and affix a CE marking to the product; and
  • Ensure that PDEs are accompanied by information, such as the manufacturer’s details and point of contact where vulnerabilities can be reported, and detailed instructions for users including how security updates can be installed and how the product can be securely decommissioned.

Importers and Distributors

The above obligations primarily fall upon manufacturers. However, importers and distributors of these products are subject to related obligations regarding those processes, including: only placing on the market PDEs that comply with the essential requirements set out under the law; ensuring that the manufacturer has carried out the appropriate conformity assessment procedures and drawn up the required technical documentation; and ensuring that PDEs bear the CE marking and are accompanied by the required information for users. Where an importer or distributor identifies a vulnerability in a PDE, it must inform the manufacturer without undue delay, and must immediately inform market surveillance authorities where a PDE presents a “significant cybersecurity risk.”

Overlap with other EU Legislation

The CRA FAQ states that the Act aims to “harmonise the EU regulatory landscape by introducing cybersecurity requirements for products with digital elements and avoid overlapping requirements stemming from different pieces of legislation”. The application of the CRA is subject to certain exclusions where relevant PDEs are already covered by certain regulations – such as the NIS2 Directive and the AI Act (which are considered lex specialis to the CRA as lex generalis). In relation to high-risk AI systems, for example, the CRA explicitly provides that PDEs that also qualify as high-risk AI systems under the AI Act will be deemed in compliance with the AI Act’s cybersecurity requirements where they fulfil the corresponding requirements of the CRA. The listed regulations do not include DORA (Regulation 2022/2554), so there is the potential for overlap for those caught by DORA.

However, Article 2(4) CRA indicates that the application of the CRA may be limited or excluded where PDEs are covered by other Union rules laying down requirements addressing some or all of the risk covered by the essential requirements set out in Annex 1 CRA, in a manner consistent with the applicable regulatory framework, and where the sectoral rules achieve the same or a higher level of protection as that provided under the CRA.

The European Commission may also use its powers to adopt delegated acts in order to further clarify such limitations or exclusions, but in the absence of such delegated acts, the scope is somewhat unclear in respect of financial services entities, given the overlap with DORA.

Enforcement

The CRA provides for extensive participation by public authorities. Accordingly, the European Commission, ENISA and national authorities are granted comprehensive market monitoring, investigative and regulatory powers. For cross-border matters, the CRA also addresses the different procedures and principles for these authorities to cooperate with each other if disagreements arise in the interpretation and application of the law.

Authorities are also provided with the power to carry out so-called “sweeps”. Sweeps will be unannounced and coordinated, involving area-wide monitoring and control measures that are intended to provide information as to whether or not the requirements of the CRA are being complied with. It is particularly important to note that sweeps may apparently be carried out simultaneously by several authorities in close coordination, thus enabling the investigation of cross-border matters.

The CRA provides for a phased concept of administrative fines for non-compliance with certain legal requirements, which follows the model of recent European legislation and is intended primarily as a deterrent (a simplified illustration of how these ceilings combine follows the list below):

  • Breaches of the essential cybersecurity requirements, conformity assessment and reporting obligations may result in administrative fines of up to EUR 15 million or up to 2.5% of annual global turnover, whichever is higher.
  • Breaches of the other CRA rules, including requirements to appoint an authorised representative, obligations applicable to importers or distributors, and certain requirements for the EU declaration of conformity, technical documentation and CE marking, may result in administrative fines of up to EUR 10 million or up to 2% of annual global turnover, whichever is higher.
  • Organisations which provide incorrect, incomplete or misleading information face administrative fines of up to EUR 5 million or, if the offender is an undertaking, up to 1% of annual turnover.
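As a simple worked illustration of how these ceilings operate, the sketch below applies the “whichever is higher” rule to the three tiers described above. The figures mirror those quoted in this post; applying the same “higher of” structure to the third tier is an assumption made here for consistency, and any actual exposure will depend on the circumstances authorities must take into account.

```python
# Illustrative sketch only: maximum administrative fine ceilings under the CRA,
# using the tiers quoted in this post. Applying the "whichever is higher" rule to
# the third tier is an assumption made here for consistency across tiers.
def cra_fine_ceiling(annual_global_turnover_eur: float, tier: str) -> float:
    """Return the maximum administrative fine (EUR) for the given breach tier."""
    tiers = {
        "essential_requirements": (15_000_000, 0.025),   # essential requirements, conformity assessment, reporting
        "other_obligations":      (10_000_000, 0.02),    # other CRA rules (e.g. importer/distributor duties, CE marking)
        "misleading_information": (5_000_000, 0.01),     # incorrect, incomplete or misleading information
    }
    fixed_cap, turnover_rate = tiers[tier]
    return max(fixed_cap, turnover_rate * annual_global_turnover_eur)

# Example: a manufacturer with EUR 2 billion annual global turnover
print(cra_fine_ceiling(2_000_000_000, "essential_requirements"))  # 50000000.0
```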

When deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation should be taken into account, including the size and market share of the operator committing the infringement.

Non-compliance with CRA requirements may also result in corrective or restrictive measures, including the Market Surveillance Authorities or the Commission recalling or withdrawing products from the EU market.

As the methods for imposing administrative fines will be left to Member States to implement, there is the risk of significant legal uncertainty in relation to enforcement. Although the CRA specifies certain parameters, in particular criteria for the calculation of administrative fines, the regulation raises concerns with regard to the uniform interpretation and application of the rules on administrative fines throughout the EU.

Next procedural steps

The CRA provides for a phased transition period, with the provisions on notification of conformity assessment bodies (Chapter IV) applying from 11 June 2026, and the reporting obligations for manufacturers taking effect from 11 September 2026. The remaining obligations will come into effect on 11 December 2027.

The CRA is likely to present significant challenges for many companies. It is important that those entities falling within the scope of the CRA start preparing for its implementation. Manufacturers should assess current cybersecurity measures against the upcoming requirements to identify potential compliance gaps and start planning compliance strategies early, including understanding the requirements relating to conformity assessments; technical documentation; and new incident reporting requirements.

Please reach out to your usual DLA Piper contact if you would like to discuss further.


UK: Data (Use and Access) Bill: newcomer or a familiar face?
https://privacymatters.dlapiper.com/2024/11/uk-data-use-and-access-bill-newcomer-or-a-familiar-face/
Tue, 05 Nov 2024

Déjà vu in the world of UK data law: the Labour government has proposed reforms to data protection and e-privacy laws through the new Data (Use and Access) Bill (“DUAB“). The DUAB follows the previous government’s unsuccessful attempts to reform these laws post-Brexit, which led to the abandonment of the Data Protection and Digital Information (No.2) Bill (“DPDI Bill“) in the run-up to the general election.

The new Labour government first announced plans for a bill in the King’s Speech in July. In a notable shift of emphasis from the DPDI Bill, the term ‘data protection’ has been dropped from the title of the Bill. Reform of the data protection and e-privacy regime is still an important part of the Bill, but is arguably secondary to the Bill’s emphasis on wider data-related policy initiatives, focussed on facilitating digital identities and securing access to ‘smart’ or ‘open’ data sets. This is reflected in the Government’s introduction that the new Bill will “harness the enormous power of data to boost the UK economy by £10 billion” and “unlock the secure and effective use of data for the public interest, without adding pressures to the country’s finances”.

Key data protection law changes

The Bill proposes very limited changes to the UK data protection regime. These are targeted and incremental and unlikely to have a material impact on day-to-day compliance for most businesses operating in the UK.

The specific areas of reform proposed include:

  • Scientific research definition and broad ‘consent to research’: The DUAB creates a statutory definition of scientific research to help clarify how the various provisions in the UK GDPR which refer to ‘research’ are intended to be applied. The intention is to clarify that ‘scientific research’ can extend to cover research “carried out for commercial or non-commercial activity” and includes any research that “can reasonably be described as scientific”. This replicates similar proposals in the DPDI Bill, effectively bringing into the UK GDPR wording from the recitals to the GDPR which suggests that a broad interpretation of “scientific research” should be applied. The DUAB also clarifies that an individual may be able to give consent to their data being used for more than one type of scientific research, even if, at the time consent is provided, it is not possible to identify all of those research purposes.
  • Recognised legitimate interests: The DUAB helpfully introduces the concept of ‘recognised legitimate interests’ to provide a presumption of legitimacy to certain processing activities that a controller may wish to carry out under Article 6(1)(f) (legitimate interests). Again this is a helpful carry over from the DPDI Bill. The DUAB also introduces a new provision requiring any new recognised legitimate interest to be necessary to safeguard an objective listed in Article 23(1) UK GDPR (i.e. public security, the prevention, investigation, detection or prosecution of crime, public health, data subject rights etc.).
  • Automated Decision Making: The DUAB will remove the requirement to establish a qualifying lawful basis before conducting automated decision making (the requirement currently at Article 22(2) UK GDPR), except where special category data is used. This change is particularly relevant to organisations using AI systems, potentially allowing those organisations to use ADM more widely than under the EU GDPR. However, data subjects will still benefit from rights of objection and human intervention, and organisations will still need to carefully assess their use of ADM.
  • Special category personal data: The DUAB grants the Secretary of State the authority to designate new special categories of personal data and additional processing activities that fall under the prohibition of processing special category data in Article 9(1) of the UK GDPR. This potentially extends the scope of additional protections afforded by Article 9, beyond the current prescribed list of categories of special category data in the UK GDPR. It is unclear whether the Government anticipates including any additional categories of data under this mechanism in the near term.
  • Cookies: The DPDI Bill included a number of reforms to the rules on cookie consent. These have been retained in the DUAB. Businesses will likely find these changes helpful, as they have the effect of easing the consent requirements in some cases and providing greater clarity as to what falls within the “strictly necessary” exemption. One of the more challenging proposals by the previous government – that would have required cookie consent platforms to be centralised (e.g. into browsers) – has been withdrawn.
  • PECR Enforcement Regime: The Bill fully aligns the UK GDPR / DPA and PECR enforcement regimes. This effectively increases regulatory exposure under the PECR to potential fines equivalent to those under the UK GDPR.
  • International Data Transfers – The DUAB introduces amendments designed to clarify the UK’s approach to the international transfer of personal data and to the conduct of adequacy assessments. These are technical changes, but notably, whereas the EU approach to adequacy anticipates that a third country has a regime that is ‘essentially equivalent’ to the EU standard, the DUAB moves away from that to a new threshold that the third country offers safeguards that are ‘not materially lower than’ those of the UK.
  • ICO: The DUAB retains the majority of the reforms to the ICO, including the name change from Information Commissioner to an Information Commission and the introduction of a formal Board structure with an appointed CEO. The DUAB also aims to reduce the number of complaints reaching the ICO by requiring complaints to be made first to the controller, with escalation to the authority only if they are not dealt with satisfactorily.

Which proposed changes have been dropped?

Many of the other reforms to UK data protection law proposed in the DPDI Bill have been dropped.  Notably, the following provisions did not make their way into the new bill:

  • The DPDI Bill proposed an expanded definition of ‘personal data’ which would have provided further clarification as to when data is related to an identified or identifiable individual and when it should be considered anonymous. That has been dropped.
  • The DPDI Bill amended the accountability provisions within the UK GDPR, reducing the burden on smaller businesses to maintain records of processing or carry out Data Protection Impact Assessments. Those changes have not been carried across. The role of the Data Protection Officer will also remain as is, with the previous proposal to replace the DPO with the concept of a ‘senior responsible individual’ dropped.
  • The proposal in the DPDI Bill to exempt “vexatious” data subject access requests (in line with the terminology used in freedom of information law) has been discarded. Instead, the existing exemption of “manifestly unfounded or excessive” requests will continue to apply. Helpfully, though, the DUAB does incorporate a new provision allowing controllers to limit themselves to ‘reasonable and proportionate’ efforts in responding to access requests, a codification of ICO guidance and case law in this area.
  • The proposal to remove a requirement on non-UK businesses to appoint a representative under Article 27 UK GDPR has been scrapped – the role of the representative in the UK remains for now.
  • Some of the reform to the ICO has not survived, including the requirement for the ICO to take into account the government’s strategic priorities and some of the changes to the ICO’s enforcement powers.

Smart data schemes and digital identity verification

As noted above, data protection is no longer the main focus of the Bill, with large sections of the Bill set aside to deal with wider digital policy matters, including smart data schemes and certification for digital identity service providers: according to HM Government, “the Bill will create the right conditions to support the future of open banking and the growth of new smart data schemes”.

  • Smart data schemes – The DUAB gives the Secretary of State broad powers to make data regulations addressing access to business data and customer data, with sector specific ‘smart data’ regimes. Secondary legislation will follow that sets out much of the important detail here, but the essence of these provisions is to require data holders to provide or otherwise make available datasets, as well as give businesses and individuals the right to request access to those datasets. This is similar to elements of the EU Data Act and EU Data Governance Act at EU level, but goes further as it is not limited to IoT or public sector data. There is also a strong overlap with the European Health Data Space Regulation and the EU FIDA Regulation: promoting access to data for secondary uses and breaking down the barriers that exist between data holders and those persons, whether individuals or businesses, that would like access to data for certain, as yet undefined, purposes.
  • Digital identity verification – The DUAB will separately establish a framework to facilitate the development of digital verification services. This framework aims to certify organisations that offer identity verification tools in accordance with the government’s trust framework standards. New provisions in the bill grant the Secretary of State the authority to deny certification on national security grounds and mandate that it consults with the Information Commissioner regarding relevant regulations.

What next?

Although the DUAB comes with some bold statements from the Government that it will “unlock the power of data to grow the economy and improve people’s lives”, the proposals represent incremental reform rather than radical change. There are arguably no big surprises (and perhaps some missed opportunities): much of the drafting is a lighter version of what we saw in earlier drafts of the DPDI Bill, and some of the more innovative elements (around smart data access and use) remain unclear as we await the detail of secondary legislation.

We will keep a close eye on the DUAB as it makes its way through Parliament. We expect a relatively smooth passage: so much has already been through earlier legislative processes that extensive debate seems unlikely.

]]>
EU: NIS2 Member State implementation deadline has arrived https://privacymatters.dlapiper.com/2024/10/eu-nis2-member-state-implementation-deadline-has-arrived/ Thu, 17 Oct 2024 08:32:52 +0000 https://privacymatters.dlapiper.com/?p=7463 Continue Reading]]> Today marks the deadline for EU Member State implementation of the Network and Information Systems Directive II (“NIS2“) into national law.

NIS2 is part of the EU’s Cybersecurity Strategy and repeals and replaces the original NIS Directive, which entered into force in 2016 (with Member State implementation by 9 May 2018). Much like its predecessor, it establishes measures for a common level of cybersecurity for critical services and infrastructure across the EU, while also aiming to respond to perceived weaknesses of the NIS1 regime and the needs of increasing digitalisation. NIS2 establishes harmonised cybersecurity risk management measures and reporting requirements for highly critical sectors. It has a much wider scope than its predecessor – many sectors come under NIS2 for the first time.

Although some Member States such as Croatia, Hungary and Belgium have transposed the directive into national legislation, as the map below demonstrates, the majority of EU countries do not yet have the relevant implementing legislation in place, even less so the broader frameworks and guidance that would equip organisations with the necessary tools to achieve compliance. This will pose difficulties for organisations, especially those with in-scope operations in multiple EU jurisdictions, as they evaluate the scope of their exposure and work towards compliance.

Visit our EU Digital Decade topic hub for further information on NIS2 and the EU’s Cybersecurity Strategy. If you have any questions, please get in touch with your usual DLA contact.

]]>
EU: CJEU Insight  https://privacymatters.dlapiper.com/2024/10/eu-cjeu-insight/ Tue, 15 Oct 2024 14:31:59 +0000 https://privacymatters.dlapiper.com/?p=7454 Continue Reading]]> October has already been a busy month for the Court of Justice of the European Union (“CJEU”), which has published a number of judgments on the interpretation and application of the GDPR, including five important decisions, all issued by the CJEU on one day – 4 October 2024. 

This article provides an overview and summary of several of the key data protection judgments issued by the CJEU this month. The judgments consider issues including: whether legitimate interests can cover purely commercial interests; whether competitors are entitled to bring an injunction claim based on an infringement of the GDPR; what constitutes ‘health data’ within the meaning of Art. 4 and Art. 9 of the GDPR; whether a controller can rely on an opinion of the national supervisory authority to be exempt from liability under Art. 82(2) GDPR; and what constitutes sufficient compensation for non-material damage, among others.

Following preliminary questions from the Amsterdam district court, the CJEU has provided valuable clarification on whether “legitimate interests” under Art. 6(1)(f) GDPR can be “purely commercial”. In its judgment, the CJEU recognised that a wide range of interests can be considered a ‘legitimate interest’ under the GDPR and that there is no requirement for the interests of the controller to be laid down by law. While the CJEU decided not to answer the specific preliminary questions received from the Amsterdam district court, its attitude is clear: “legitimate interests” can serve purely commercial interests.

For further information on this decision, please see our blog post available here.  

In its judgment, the CJEU ruled that Chapter VIII of the GDPR allows for national rules which grant undertakings the right to take action in the case of an infringement of substantive provisions of the GDPR allegedly committed by a competitor. Such an action would be on the basis of the prohibition of acts considered to be unfair competition. The CJEU further ruled that the data of a pharmacist’s customers, which are provided when ordering pharmacy-only but non-prescription medicines on an online sales platform, constitute “health data” within the meaning of Art. 4(15) and Art. 9 GDPR (to that extent contrary to the Advocate General’s opinion of 25 April 2024).

For further information on this decision, please see our blog post available here.  

  • Maximilian Schrems v Meta Platforms Ireland Ltd (C-446/21) 

Background 

The privacy activist, Maximilian Schrems, brought an action before the Austrian courts challenging the processing of his personal data by Meta Platforms Ireland (“Meta”) in the context of the online social network Facebook. Mr Schrems argued that personal data relating to his sexuality had been processed unlawfully by Meta to send him personalised advertisements.   

Mr Schrems alleged that this processing took place without his consent or other lawful means under the GDPR. The CJEU noted that Mr Schrems had not posted sensitive data on his Facebook profile and further did not consent to Meta using a wider pool of personal data received from advertisers and other partners concerning Mr Schrems’ activities outside Facebook for the purpose of providing personalised advertising.  

The personalised advertisements in question were not based directly on his sexual orientation but on an analysis of his particular interests, drawn from a wider pool of data processed by Meta, as nothing had been openly published by Mr Schrems via Facebook about his sexuality. 

Key findings 

In its judgment, the CJEU held that Art. 5(1)(c) GDPR does not allow the controller, in particular a social network platform, to process data collected inside and outside the platform for the purpose of personalised advertising for an unlimited time and without distinction as to the type of data.

The CJEU emphasised that the principle of data minimisation requires the controller to limit the retention period of personal data to what is strictly necessary in the light of the objective of the processing activity. 

Regarding the collection, aggregation and processing of personal data for the purposes of targeted advertising, without distinction as to the type of those data, the CJEU held that a controller may not collect personal data in a generalised and indiscriminate manner and must refrain from collecting data which are not strictly necessary for the processing purpose. 

The CJEU also held that the fact that an individual manifestly made public information concerning their sexual orientation does not mean that the individual consented to processing of other data relating to their sexual orientation by the operator of an online social network platform within the meaning of Art. 9(2)(a) GDPR. 

  • Agentsia po vpisvaniyata (Bulgarian Registration Agency)

Background

The data subject is a shareholder of a company in Bulgaria. The company’s constitutive instrument was sent to the Registration Agency (Agentsia po vpisvaniyata), the Bulgarian authority managing the commercial register. 

This instrument, which includes the surname, first name, identification number, identity card number, date and place of issue of that card, as well as the data subject’s address and signature, was made available to the public by the Agency as submitted. The data subject requested the Agency to erase the personal data relating to her contained in that constitutive instrument. As it is a legal requirement to publish certain information relating to the company’s constitutive instrument in the commercial register under Directive 2017/1132 (relating to certain aspects of company law), the Agency refused to delete it when requested by the data subject. The Agency also did not want to delete the personal data that is not required under the Directive but was nevertheless published as it was contained in the instrument. The data subject brought an action before the Administrative Court of Dobrich (Administrativen sad Dobrich) seeking annulment of the Agency’s decision and an order that the Agency compensates her for the alleged non-material damage she suffered.  

 Key findings 

Of the eight questions asked by the national court, the CJEU answered six, five of which related directly to the GDPR. Firstly, the CJEU held that an operator of a public register, which receives personal data as part of a constitutive instrument that is subject to compulsory disclosure under EU law, is both a ‘recipient’ of the personal data, insofar as the operator makes it available to the public, and a ‘controller’, even if the instrument contains personal data that is not required under EU or Member State law for the operator to process. This remains the case even where the operator receives additional personal data because the data subject failed to redact it from the constitutive instrument, as the operator’s procedural rules required.

Secondly, the controller managing the national register may not refuse outright any request for erasure of personal data published in the register on the basis that the data subject should have provided a redacted copy of the constitutive instrument. A data subject enjoys a right to object to processing and a right to erasure, unless there are overriding legitimate grounds (which was not the case here).

Thirdly, the CJEU confirmed that a handwritten signature of a natural person is considered personal data as it is usually used to identify a person and has evidential value regarding the accuracy and sincerity of a document.  

Fourthly, the CJEU held that Art. 82(1) GDPR must be interpreted as meaning that a loss of control by the data subject over their personal data for a limited period, due to the making available of such data to the public online in the commercial register of a Member State, may be sufficient to cause ‘non-material damage’. What is required in any case is that the person demonstrates that they actually suffered such damage, however minimal. The concept of ‘non-material damage’ does not require the demonstration of additional tangible adverse consequences.

Lastly, if the supervisory authority of a Member State issues an opinion on the basis of Art. 58(3)(b) GDPR, the controller is not exempt from liability under Art. 82(2) GDPR merely because it acts in line with that opinion. The Agency had argued that a company’s constitutive instrument may still be entered into the register even if personal data is not redacted, referring to an opinion of the Bulgarian supervisory authority. However, as such an opinion issued to the controller is not legally binding, it cannot demonstrate that damage suffered by the data subject is not attributable to the controller and is therefore insufficient to exempt the controller from liability.

  • Patērētāju tiesību aizsardzības centrs (Latvia Consumer Rights Protection Centre) (C-507/23) 

Background 

The data subject is a well-known journalist and expert in the automotive sector in Latvia. During a campaign to make consumers aware of the risks involved in purchasing a second-hand vehicle, the Latvian Consumer Rights Protection Centre (“PTAC”) published a video on several websites which, among other things, featured a character imitating the data subject, without his consent.  

The journalist brought an action before the District Administrative Court in Latvia seeking (i) a finding that the actions of the PTAC, consisting in the use and distribution of his personal data without authorisation, were unlawful, and (ii) compensation for non-material damage in the form of an apology and the payment of EUR 2,000. The court ruled that the actions in question were unlawful, ordered the PTAC to bring those acts to an end, to make a public apology to the journalist and to pay him EUR 100 in compensation for the non-material damage he had suffered. However, on appeal, although the Regional Administrative Court confirmed that the processing of personal data by the PTAC was unlawful and ordered the processing to cease and the publication of an apology on the websites which had disseminated the video footage, it dismissed the claim for financial compensation for the non-material damage suffered. The court found that the infringement committed was not serious, on the ground that the video footage was intended to perform a task in the public interest and not to harm the data subject’s reputation, honour and dignity.

The journalist appealed this decision, and the Latvian Supreme Court referred a number of questions on the interpretation of Art. 82(1) GDPR to the CJEU.

 Key findings 

Firstly, the CJEU found that an infringement of a provision of the GDPR, including the unlawful processing of personal data, is not sufficient, in itself, to constitute ‘damage’ within the meaning of Art. 82(1) GDPR.  

By this, the CJEU repeats and emphasises its previous interpretations of Art. 82(1) GDPR to the effect that a mere infringement of the GDPR is not sufficient to confer a right to compensation: in addition to an ‘infringement’, the existence of ‘damage’ and of a ‘causal link’ between the damage and the infringement are cumulative conditions for the right to compensation under Art. 82(1) GDPR. According to the CJEU, this principle applies even if a provision of the GDPR has been infringed that grants rights to natural persons, as such an infringement cannot, in itself, constitute ‘non-material damage’. In particular, the CJEU held that the occurrence of damage in the context of the unlawful processing of personal data is only a potential, and not an automatic, consequence of such processing.

Secondly, the CJEU found that the presentation of an apology may constitute sufficient compensation for non-material damage on the basis of Art. 82(1) GDPR. This applies in particular where it is impossible to restore the situation that existed prior to the occurrence of that damage, provided that this form of redress is capable of fully compensating for the damage suffered by the data subject.

According to the CJEU, Art. 82(1) GDPR does not preclude the making of an apology from constituting standalone or supplementary compensation for non-material damage, provided that such a form of compensation complies with the principles of equivalence and effectiveness. In the present case, an apology as a possible form of compensation was explicitly laid down in Art. 14 of the Latvian Law on compensation for damage caused by public authorities. Other jurisdictions, however, such as Germany, do not explicitly provide in their national civil law for an apology as a form of compensation. Nevertheless, some courts have already taken apologies into account when determining the amount of monetary compensation. In light of this decision, courts may increasingly consider an apology as a means of reducing the monetary amount of compensation for damages.

Thirdly, according to the CJEU, Art. 82(1) GDPR precludes the controller’s attitude and motivation from being taken into account when deciding whether to grant the data subject less compensation than the damage actually suffered.  

According to the CJEU, Art. 82(1) GDPR has an exclusively compensatory and not a punitive function. Therefore, the gravity of an infringement cannot influence the amount of damages awarded under Art. 82(1) GDPR. The amount of damages may not be set at a level that exceeds full compensation for the actually suffered damage. 

Conclusion/implications 

While these five judgments were published on the same day, the decisions relate to a number of different topics. What they do have in common is that they all demonstrate the CJEU’s willingness to assert its reach and tackle difficult questions on the interpretation of the GDPR, particularly where there has not always been agreement or clarity among supervisory authorities. Although these decisions generally clarify and strengthen the CJEU’s previous interpretation of a number of issues, such as those relating to compensation for non-material damage pursuant to Art. 82(1) GDPR, it is interesting that in both the KNLTB decision and the Agentsia po vpisvaniyata decision, the CJEU followed a different interpretation of the GDPR to that of the relevant supervisory authorities (and, in the KNLTB decision, contrary to the AG Opinion).

As we head into 2025, we can expect continued judgments from the CJEU on the interpretation and application of the GDPR, with more than 20 GDPR-related cases currently pending before the Court.

]]>
UK: The UK Cybersecurity and Resilience Bill – a different approach to NIS2 or a British sister act? https://privacymatters.dlapiper.com/2024/10/uk-the-uk-cybersecurity-and-resilience-bill-a-different-approach-to-nis2-or-a-british-sister-act/ Tue, 01 Oct 2024 13:14:24 +0000 https://privacymatters.dlapiper.com/?p=7441 Continue Reading]]> In the much anticipated first King’s Speech of the new Labour Government on 17 July 2024, the monarch announced that the Cybersecurity and Resilience Bill (CS&R Bill) would be amongst those new laws making their way onto Parliament’s schedule for the next year. Six years on from the implementation of the NIS Regulations 2018 (NIS Regulations) which, in common with the implementing laws of other EU Member States at the time, were based on the EU’s NIS1 Directive, the CS&R Bill recognises that the time is ripe for reform. While the NIS Regulations clearly took a step in the right direction towards achieving a high level of cybersecurity across critical sectors, the new Bill recognises the need to upgrade and expand the UK’s approach to keep in step with an ever-increasing cyber threat.

But in the UK, we are not alone in recognising cyber as one of the most significant threats of our age. In the recitals to NIS2, the EU Commission notes that the “number, magnitude, sophistication, frequency and impact of incidents are increasing and present a major threat to the functioning of network and information systems” with the result that they “impede the pursuit of economic activities in the internal market, generate financial loss, undermine user confidence and cause major damage to the Union’s economy and society”. The EU’s response was to enact a bolstered NIS2 which significantly expands the number of entities directly in scope; includes a focus on supply chains; enhances the powers of enforcement and supervision available to national authorities; steps up incident reporting obligations; and imposes ultimate responsibility for compliance at a senior management level. With DORA, the EU adds another layer of regulation, trumping the requirements of NIS2 for the financial services sector.

So how will the UK’s new Bill compare? Our article looking at the initial indications released by Government to try and answer that question is available here.

]]>
EU: European Supervisory Authorities issue second batch of technical standards under DORA https://privacymatters.dlapiper.com/2024/07/eu-european-supervisory-authorities-issue-second-batch-of-technical-standards-under-dora/ Thu, 18 Jul 2024 13:03:28 +0000 https://privacymatters.dlapiper.com/?p=7361 Continue Reading]]> On 18th July, the European Supervisory Authorities (“ESAs“) published the final versions of the second batch of their draft regulatory technical standards (RTS) and implementing technical standards (ITS), developed under the Digital Operational Resilience Act (DORA), as well as two sets of Guidelines.

Summary of draft regulatory technical standards and implementing technical standards

  1. Final draft RTS and ITS on the content, format, templates and timelines for reporting major ICT-related incidents and significant cyber threats

DORA requires a financial entity to report major ICT-related incidents to the relevant competent authority. In addition, financial entities may, on a voluntary basis, notify significant cyber threats.

In summary these RTS cover:

  • the content of the reports to be submitted for major ICT-related incidents, as well as the standard forms and templates for the reports
  • the time limits for reporting these incidents to the competent authority, and
  • the form and content of the notification for significant cyber threats.

The RTS setting out the criteria for classifying major ICT-related incidents and significant cyber threats came into effect earlier this week.

  2. Final draft RTS on threat-led penetration testing (“TLPT”)

DORA sets out requirements for the security of network and information systems of financial entities and of the critical third parties providing ICT services to them. This includes an obligation on in-scope financial entities to conduct advanced testing by means of TLPT at least every 3 years.

In summary these RTS include:

  • the criteria used for identifying those financial entities required to perform TLPT
  • the requirements and standards governing the use of internal testers
  • the requirements in relation to the scope of TLPT, testing methodology and approach for each phase of the testing process, and
  • the type of supervisory and other relevant cooperation needed for the implementation of TLPT and for the facilitation of mutual recognition of that testing.
  3. Final draft RTS on the harmonisation of conditions enabling the conduct of the oversight activities

Under DORA, each critical ICT third-party service provider (“CTPP“) will have a designated ‘Lead Overseer’, who will be one of the three European Supervisory Authorities. DORA grants powers to the Lead Overseer in exercising oversight of CTPPs.

In summary these RTS include:

  • the content and format of the information to be submitted by CTPPs that is necessary for the Lead Overseer to carry out its duties (including the template for providing information on subcontracting arrangements), and
  • the information to be provided by an ICT third-party service provider in the application for a voluntary request to be designated as critical.
  4. Final draft RTS specifying the criteria for determining the composition of the joint examination team

The Lead Overseer mentioned above will be assisted in its oversight activities by the ‘joint examination team’ or “JET”. These RTS set out the criteria for determining the composition of the JET and specify its tasks and working arrangements. The JET will be composed of staff members from the ESAs and competent authorities who have expertise in ICT matters and operational risks – these RTS are intended to ensure a balanced participation of staff members from those different organisations.

Summary of Guidelines

In addition to the above, the ESAs have issued two sets of Guidelines:

  1. Joint Guidelines on the estimation of aggregated costs and losses caused by major ICT-related incidents

If requested by a competent authority, a financial entity will have to report an estimation of aggregated annual costs and losses caused by major ICT-related incidents. These Guidelines propose how a financial entity should estimate the annual costs and losses, and which figures to use for the estimation. The ESAs have previously stated that they will apply the same approach as that adopted for assessing costs and losses under other DORA RTS.

  2. Joint Guidelines on the oversight cooperation and information exchange between the ESAs and the competent authorities

The ESAs and competent authorities have received new roles and responsibilities as part of DORA’s pan-European oversight framework. These Guidelines are intended to ensure a consistent supervisory approach, including a coordinated approach to the oversight of CTPPs.

The Guidelines cover cooperation and information-sharing between the ESAs and competent authorities, including how they will allocate tasks between them and what information competent authorities will need in order to follow up on any recommendations addressed to CTPPs in their territory.

Next steps

The final draft RTS and ITS have been submitted to the European Commission for review and adoption, subject to any changes the Commission may choose to make. The Joint Guidelines have been adopted already by the Board of Supervisors of the three ESAs.

A notable omission is the RTS on subcontracting, which specify what a financial entity must take into account when allowing the subcontracting of ICT services that support critical or important functions. The ESAs have stated that those RTS will be published ‘in due course’.

For further information or if you have any questions, please get in touch with your usual DLA contact.

]]>