Germany: New government plans to centralize data protection supervision and reduce regulation for small and medium-sized companies
https://privacymatters.dlapiper.com/2025/04/germany-new-government-plans-to-centralize-data-protection-supervision-and-reduce-regulation-for-small-and-medium-sized-companies/
Mon, 14 Apr 2025 08:52:20 +0000

On April 9, 2025, the coalition agreement of the future German Federal Government, consisting of the three German parties CDU, CSU and SPD, was published. The document entitled “Responsibility for Germany” contains several plans, including some that may fundamentally change the German data protection supervisory authority structure and that aim to ease the regulatory burden for small and medium-sized companies.

Central data protection supervision and new role of the Data Protection Conference  

The future government is planning to reform the structure of the data protection supervisory authorities in Germany. Responsibilities and competencies for the private sector are to be concentrated at the Federal Commissioner for Data Protection and Freedom of Information (“BfDI”). Currently, Germany does not have one central supervisory authority for data protection law but separate authorities in each of the sixteen German federal states (Länder), which are competent for the public and the private sector in the respective state. In addition, there are different supervisory authorities for private and for public broadcasters. At present, the BfDI is only competent for the federal public sector and a limited number of private sectors, such as telecommunications.

This change in structure would bring considerable relief, particularly for companies or groups of companies headquartered outside Germany or outside the EEA. If the BfDI becomes the responsible authority for the private sector as a whole, there would no longer be any uncertainty as to which national supervisory authority to work with. This is particularly relevant if a company or group of companies has several branches in Germany. Controllers and processors would only have to cooperate with one national supervisory authority, and the contact details of the data protection officer would only have to be communicated to the BfDI. In addition, controllers without a lead supervisory authority would no longer be required to report data security breaches to several German supervisory authorities. Currently, controllers without an establishment in the EU have to notify the authorities of those federal states where the affected data subjects live – in the future, instead of notifying up to 16 different authorities, they would only need to notify one authority, as in other EU countries.

In addition, the new structure could provide greater legal certainty for both controllers and processors, as currently, each German supervisory authority may interpret the legal requirements differently and pursue varying priorities, for example with regard to enforcement.

However, it remains unclear how this structural reform can be implemented in a legally secure manner. The coexistence of different responsibilities of the federal government and the federal states is an expression of federal structures and thus of the federal state principle safeguarded by the German constitution (the German Basic Law, Grundgesetz).

In addition, the Data Protection Conference (“DSK“), in which all German supervisory authorities are represented, is to be anchored in the Federal Data Protection Act (“BDSG“). In contrast to the current situation, it is to be given the task of creating binding data protection standards. This could ensure a uniform approach, particularly in areas of cooperation between the private and public sectors. At the same time, there is a risk that impractical or overly dogmatic opinions of this very diverse body will become binding in the future.

Better use of GDPR leeway

The coalition partners also want to make better use of the leeway provided by the GDPR. This means that where the GDPR provides opening clauses for national legislators, new rules shall be created to relieve the burden on small and medium-sized enterprises and to address the processing of personal data of and by employees and volunteers. Such leeway exists, among other places, under Art. 23 GDPR. According to Art. 23 (1) GDPR, the extensive transparency obligations under Art. 13, 14 and 15 GDPR could be reduced to an appropriate level for small and medium-sized enterprises. However, no concrete plans have been agreed on yet.

Introduction of the retention of data relating to the civil identity and associated IP addresses

A proposal on data retention (Vorratsdatenspeicherung), which is currently suspended in Germany, has also caused a stir. Specifically, a proportionate three-month retention period for IP addresses and port numbers is to be introduced, in line with European and constitutional requirements, to be able to assign them to the owner of the connection. In this context, the Federal Police is to be authorized to carry out source telecommunication surveillance to combat serious crimes.

As recently as April 30, 2024, the ECJ ruled in Case C-470/21 that data retention is not by itself contrary to European law. However, it remains to be seen whether the future German Federal Government will succeed in finding a regulation that upholds the fundamental rights to respect for private and family life and to the protection of personal data (Art. 7 and Art. 8 of the Charter of Fundamental Rights of the European Union).

Actual effects

The actual effects of the measures described above are not yet foreseeable. First, the proposals for the reform of data protection are very vague. Second, the coalition agreement itself is not a binding document. The implementation of the intended measures depends largely on the political framework conditions, and several years may pass before the reforms envisaged in a coalition agreement are implemented in law.

EU: DLA Piper GDPR Fines and Data Breach Survey: January 2025
https://privacymatters.dlapiper.com/2025/01/eu-dla-piper-gdpr-fines-and-data-breach-survey-january-2025/
Tue, 21 Jan 2025 11:53:17 +0000

The seventh annual edition of DLA Piper’s GDPR Fines and Data Breach Survey has revealed another significant year in data privacy enforcement, with an aggregate total of EUR1.2 billion (USD1.26 billion/GBP996 million) in fines issued across Europe in 2024.

Ireland once again remains the preeminent enforcer issuing EUR3.5 billion (USD3.7 billion/GBP2.91 billion) in fines since May 2018, more than four times the value of fines issued by the second placed Luxembourg Data Protection Authority which has issued EUR746.38 million (USD784 million/GBP619 million) in fines over the same period.

The total fines reported since the application of GDPR in 2018 now stand at EUR5.88 billion (USD 6.17 billion/GBP 4.88 billion). The largest fine ever imposed under the GDPR remains the EUR1.2 billion (USD1.26 billion/GBP996 million) penalty issued by the Irish DPC against Meta Platforms Ireland Limited in 2023.

Trends and Insights

In the year from 28 January 2024, EUR1.2 billion in fines was imposed. This was a 33% decrease compared to the aggregate fines imposed in the previous year, bucking the 7-year trend of increasing enforcement. This does not represent a shift in focus away from personal data enforcement; the clear year-on-year trend remains upwards. This year’s reduction is almost entirely due to the record-breaking EUR1.2 billion fine against Meta falling in 2023, which skewed the 2023 figures. There was no record-breaking fine in 2024.

Big tech companies and social media giants continue to be the primary targets for record fines, with nearly all of the top 10 largest fines since 2018 imposed on this sector. This year alone the Irish Data Protection Commission issued fines of EUR310 million (USD326 million/GBP257 million) against LinkedIn and EUR251 million (USD264 million/GBP208 million) against Meta.  In August 2024, the Dutch Data Protection Authority issued a fine of EUR290 million (USD305 million/GBP241 million) against a well-known ride-hailing app in relation to transfers of personal data to a third country. 

2024 enforcement expanded notably in other sectors, including financial services and energy. For example, the Spanish Data Protection Authority issued two fines totalling EUR6.2 million  (USD6.5 million/GBP5.1 million) against a large bank for inadequate security measures, and the Italian Data Protection Authority fined a utility provider EUR5 million (USD5.25 million/GBP4.15 million) for using outdated customer data.

The UK was an outlier in 2024, issuing very few fines. The UK Information Commissioner John Edwards was quoted in the British press in November 2024 as saying that he does not agree that fines are likely to have the greatest impact and that they would tie his office up in years of litigation – an approach which is unlikely to catch on in the rest of Europe.

The dawn of personal liability

Perhaps most significantly, a focus on governance and oversight has led to a number of enforcement decisions citing failings in these areas and specifically calling out failings of management bodies. Most notably, the Dutch Data Protection Authority announced it is investigating whether it can hold the directors of Clearview AI personally liable for numerous breaches of the GDPR, following a EUR30.5 million (USD32.03 million/GBP25.32 million) fine against the company. This novel investigation into the possibility of holding Clearview AI’s management personally liable for continued failings of the company signals a potentially significant shift in focus by regulators, who recognise the power of personal liability to focus minds and drive better compliance.

Data Breach Notifications

The average number of breach notifications per day increased slightly to 363 from 335 last year, a ‘levelling off’ consistent with previous years, likely indicative of organisations becoming more wary of reporting data breaches given the risk of investigations, enforcement, fines and compensation claims that may follow notification. 

A recurring theme of DLA Piper’s previous annual surveys is that there has been little change at the top of the tables regarding the total number of data breach notifications made since the GDPR came into force on 25 May 2018 and during the most recent full year from 28 January 2024 to 27 January 2025. The Netherlands, Germany, and Poland remain the top three countries for the highest number of data breaches notified, with 33,471, 27,829 and 14,286 breaches notified respectively.

AI enforcement

There have been a number of decisions this year signalling the intent of data protection supervisory authorities to closely scrutinise the operation of AI technologies and their alignment with privacy and data protection laws. For businesses, this highlights the need to integrate GDPR compliance into the core design and functionality of their AI systems.

Commenting on the survey findings, Ross McKean, Chair of the UK Data, Privacy and Cybersecurity practice, said:

“European regulators have signalled a more assertive approach to enforcement during 2024 to ensure that AI training, deployment and use remains within the guard rails of the GDPR.”

We expect this trend to continue during 2025 as US AI technology comes up against European data protection laws.

John Magee, Global Co-Chair of DLA Piper’s Data, Privacy and Cybersecurity practice, commented:

“The headline figures in this year’s survey have, for the first time ever, not broken any records so you may be forgiven for assuming a cooling of interest and enforcement by Europe’s data regulators. This couldn’t be further from the truth. From growing enforcement in sectors away from big tech and social media, to the use of the GDPR as an incumbent guardrail for AI enforcement as AI specific regulation falls into place, to significant fines across the likes of Germany, Italy and the Netherlands, and the UK’s shift away from fine-first enforcement – GDPR enforcement remains a dynamic and evolving arena.”

Ross McKean added:

“For me, I will mostly remember 2024 as the year that GDPR enforcement got personal.”

“As the Dutch DPA champions personal liability for the management of Clearview AI, 2025 may well be the year that regulators pivot more to naming and shaming and personal liability to drive data compliance.”

EU: EDPB Opinion on AI Provides Important Guidance though Many Questions Remain
https://privacymatters.dlapiper.com/2025/01/eu-edpb-opinion-on-ai-provides-important-guidance-though-many-questions-remain/
Tue, 14 Jan 2025 13:53:05 +0000

A much-anticipated Opinion from the European Data Protection Board (EDPB) on AI models and data protection has not resulted in the clear or definitive guidance that businesses operating in the EU had hoped for. The Opinion emphasises the need for case-by-case assessments to determine GDPR applicability, highlighting the importance of accountability and record-keeping, while also flagging ‘legitimate interests’ as an appropriate legal basis under specific conditions. In rejecting the proposed Hamburg thesis, the EDPB has stated that AI models trained on personal data should be considered anonymous only if personal data cannot be extracted or regurgitated.

Introduction

On 17 December 2024, the EDPB published a much-anticipated Opinion on AI models and data protection.  The Opinion includes the EDPB’s view on the following key questions: does the development and use of an AI model involve the processing of personal data; and if so, what is the correct legal basis for that processing?

As is sometimes the case with EDPB Opinions, which necessarily represent the consensus view of the supervisory authorities of 27 different Member States, the Opinion does not provide many clear or definitive answers.  Instead, the EDPB offers indicative guidance and criteria, calling for case-by-case assessments of AI models to understand whether, and how, they are impacted by the GDPR.  In this context, the Opinion repeatedly highlights the importance of accountability and record-keeping by businesses developing or using AI, so that the applicability of data protection laws, and the business’ compliance with those laws, can be properly assessed. 

Whilst the equivocation of the Opinion might be viewed as unhelpful by European businesses looking for regulatory certainty, it is also a reflection of the complexities inherent in this intersection of law and technology.

In summary, the answers given by the EDPB to the four questions in the Opinion are as follows:

  1. Can an AI model, which has been trained using personal data, be considered anonymous?  Yes, but only in some cases.  It must be impossible, using all means reasonably likely to be used, to obtain personal data from the model, either through attacks which aim to extract the original training data from the model itself, or through interactions with the AI model (i.e., personal data provided in responses to prompts / queries). 
  2. Is ‘legitimate interests’ an appropriate legal basis for the training and development of an AI model? In principle yes, but only where the processing of personal data is necessary to develop the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  In particular, the issue of data minimisation, and the related issue of web-scraping / indiscriminate capture of data, will be relevant here. 
  3. Is ‘legitimate interests’ an appropriate legal basis for the deployment of an AI model? In principle yes, but only where the processing of personal data is necessary to deploy the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  Here, the impact on the data subject of the use of the AI model is of predominant importance.
  4. If an AI Model has been found to have been created, updated or developed using unlawfully processed personal data, how does this impact the subsequent use of that AI model?  This depends in part on whether the AI model was first anonymised before being disclosed to the deployer of that model (see Question 1).  Otherwise, the deployer of the model may need to assess the lawfulness of the development of the model as part of its accountability obligations.

Background

The Opinion was issued by the EDPB under Article 64 of the GDPR, in response to a request from the Irish Data Protection Commission.  Article 64 requires the EDPB to publish an opinion on matters of ‘general application’ or which ‘produce effects in more than one Member State’. 

In this case, the Irish DPC asked the EDPB to provide an opinion on the above-mentioned questions – a request that is not surprising given the general importance of AI models to businesses across the EU, but also in light of the large number of technology companies developing those models who have established their European operations in Ireland. 

In order to understand the Opinion, it helps to be familiar with certain concepts and terminology relating to AI. 

First, the Opinion distinguishes between an ‘AI system’ and an ‘AI model’. For the former, the EDPB relies on the definition given in the EU AI Act. In short: a machine-based system operating with some degree of autonomy that infers, from inputs, how to produce outputs such as predictions, content, recommendations, or decisions.  An AI model, meanwhile, is a component part of an AI system. Colloquially, it is the ‘brain’ of the AI system – an algorithm, or series of algorithms (such as in the form of a neural network), that recognises patterns in data. AI models require the addition of further components, such as a user interface, to become AI systems. To take a common example – the generative AI system known as ChatGPT is a software application comprised of an AI model (the GPT Large Language Model) connected to a chatbot-style user interface that allows the user to submit queries (or ‘prompts’) to the model in the form of natural language questions. Whilst the Opinion is notionally concerned only with AI models, at times the Opinion appears to blur the distinction between the model and the system, in particular, when discussing the significance of model outputs that are only rendered comprehensible to the user through an interface that sits outside of the model.

Second, the Opinion relies on an understanding of a typical ‘AI lifecycle’, pursuant to which an AI model is first developed by training the model on large volumes of data.  This training may happen in a number of phases which become increasingly refined (referred to as ‘fine-tuning’). Only after an AI model is developed can it be used, or ‘deployed’, in a live setting, as part of an AI system.  Often, the developer of an AI model will not be the same person as the deployer.  This is relevant because the Opinion variously addresses both development and deployment phases.

The significance of the ‘Hamburg thesis’

With respect to the key question of whether AI models can be considered anonymous, the Opinion follows in the wake of a much-discussed paper published in July 2024 by the data protection authority for the German state of Hamburg.  The paper took the position that AI models (specifically, Large Language Models) are, in isolation, anonymous – they do not involve the processing of personal data. 

In order to reach that conclusion, the paper decoupled the model itself from: (i) the prior training of the model (which may involve the collection and further processing of personal data as part of the training dataset); and (ii) the subsequent use of the model, whereby a prompt/input may contain personal data, and an output may be used in a way that means it constitutes personal data.

Looking only at the AI model itself, the paper decided that the tokens and values which make up the ‘inner workings’ of a typical AI model do not, in any meaningful way, relate to or correspond with information about identifiable individuals.  Consequently, the model itself was found to be anonymous, even if the development and use of the model involves the processing of personal data. 

The Hamburg thesis was welcomed for several reasons, not least because it resolved difficult questions such as how data subject rights could be understood in relation to an AI model (if someone asks for their personal data to be deleted, then what can this mean in the context of an AI model?), and the question of the lawful basis for ‘storing’ personal data in an AI model (as distinct from the lawful basis for collecting and preparing data to train the model).

However, as we go on to explain, the EDPB Opinion does not follow the relatively simple and certain framework presented by the Hamburg thesis.  Instead, it introduces uncertainty by asserting that there are, in fact, scenarios where an AI model contains personal data, but that this must be determined on a case-by-case basis.

Are AI models anonymous?

First, the Opinion is only concerned with AI models that have been trained using personal data.  Therefore, AI models trained using solely non-personal data (such as statistical data, or financial data relating to businesses) can, for the avoidance of doubt, be considered anonymous.  However, in this context the broad scope of ‘personal data’ under the GDPR must be remembered, and the Opinion does not suggest any de minimis level of personal data that needs to be involved in the training of the AI model for the question of GDPR applicability to arise.

Where personal data is used in the training phase, the next question is whether the model is specifically designed to provide personal data regarding individuals whose personal data were used to train the model.  If so, the AI model will not be anonymous.  For example, an AI model that is trained to provide a user, on request, with biographical information and contact details for directors of public companies, or a generative AI model that is trained on the voice recordings of famous singers so that it can, in turn, mimic the voices of those singers.  In each case, the model is trained on personal data of specific individuals, in order to be able to produce other personal data about those individuals as an output. 

Finally, there is the intermediary case of AI models that are trained on personal data, but that are not designed to provide personal data related to the training data as an output.  It is this use case that the Opinion focuses on.  The conclusion is that AI models in this category may be anonymous, but only if the developer of the model can demonstrate that information about individuals whose personal data was used to train the model cannot be ‘obtained from’ the model, using all means reasonably likely to be used.  Notwithstanding that personal data used for training the model no longer exists within the model in its original form (but rather it is “represented through mathematical objects“), that information is, in the eyes of the EDPB, still capable of constituting personal data.

The following question then arises: how does someone ‘obtain’ personal data from an AI model? In short, the Opinion posits two possibilities.  First, training data may be ‘extracted’ via deliberate attacks.  The Opinion refers to an evolving field of research in this area and makes reference to techniques such as ‘model inversion’, ‘reconstruction attacks’, and ‘attribute and membership inference’.  These are techniques that can be deployed to trick the model into revealing training data, or otherwise reconstruct that training data, in some cases relying on privileged access to the model itself.  Second, personal data may be accidentally or inadvertently ‘regurgitated’ as part of an AI model’s outputs. 

Consequently, a developer must be able to demonstrate that its AI model is resistant both to attacks that extract personal data directly from the model, as well as to the risk of regurgitation of personal data in response to queries:  “In sum, the EDPB considers that, for an AI model to be considered anonymous, using reasonable means, both (i) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to train the model; as well as (ii) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant for any data subject“. 
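To make the extraction risk more concrete, the following is a minimal, purely illustrative Python sketch of a loss-threshold membership inference test, one of the simplest attack families alluded to above. The `model`, `loss_fn`, record attributes and threshold are hypothetical placeholders rather than any real library’s API, and genuine audits rely on far more sophisticated techniques (shadow models, likelihood-ratio tests and the like).

```python
# Purely illustrative: a loss-threshold membership inference test.
# `model`, `loss_fn` and the record attributes are assumed placeholder
# interfaces, not a real library API; real audits use stronger attacks.
import numpy as np

def membership_scores(model, records, loss_fn):
    """Per-record loss; an unusually low loss can indicate that a record
    was memorised during training."""
    return np.array([loss_fn(model.predict(r.features), r.label) for r in records])

def flag_possible_members(model, records, loss_fn, threshold):
    """Flag candidate records whose loss falls below a calibrated threshold."""
    scores = membership_scores(model, records, loss_fn)
    return [r for r, s in zip(records, scores) if s < threshold]
```

In line with the criteria discussed below, a developer documenting resistance to such attacks would typically record which attack families were tested, how thresholds were calibrated and what success rates were observed.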

Which criteria should be used to evaluate whether an AI model is anonymous?

Recognising the uncertainty in its conclusion that AI models may or may not be anonymous, the EDPB provides a list of criteria that can be used to assess the likelihood of a model being found to contain personal data.  These include:

  • Steps taken to avoid or limit the collection of personal data during the training phase.
  • Data minimisation or masking measures (e.g., pseudonymisation) applied to reduce the volume and sensitivity of personal data used during the training phase.
  • The use of methodologies during model development that reduce privacy risks (e.g., regularisation methods to improve model generalisation and reduce overfitting, and appropriate and effective privacy-preserving techniques, such as differential privacy – a minimal illustration of the latter follows this list).
  • Measures that reduce the likelihood of obtaining personal data from queries (e.g., ensuring the AI system blocks the presentation to the user of outputs that may contain personal data).
  • Document-based audits (internal or external) undertaken by the model developer that include an evaluation of the chosen measures and of their impact to limit the likelihood of identification.
  • Testing of the model to demonstrate its resilience to different forms of data extraction attacks.
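As a purely illustrative example of the ‘differential privacy’ technique mentioned in the list above, the sketch below shows the core per-batch step of DP-SGD-style training: per-example gradient clipping followed by calibrated Gaussian noise. The parameter values are hypothetical, and in practice developers would normally use a maintained library (such as Opacus or TensorFlow Privacy) together with formal privacy accounting rather than hand-rolled code.

```python
# Illustrative DP-SGD-style update: clip each example's gradient, average,
# add calibrated Gaussian noise, then take a gradient step. Hypothetical
# parameter values; not a substitute for a maintained DP training library.
import numpy as np

def dp_sgd_step(per_example_grads, params, clip_norm=1.0, noise_multiplier=1.1, lr=0.01):
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # bound each example's influence
    mean_grad = np.mean(clipped, axis=0)
    # Noise calibrated to the clipping norm; the multiplier drives the privacy budget.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / len(clipped), size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

For the EDPB’s criteria, the point is less the specific mechanism than the ability to evidence that such measures were applied, and with what parameters.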

What is the correct legal basis for AI models?

When using personal data to train an AI model, the preferred legal basis is normally the ‘legitimate interests’ of the controller, under Article 6(1)(f) GDPR. This is for practical reasons. Whilst, in some circumstances, it may be possible to obtain GDPR-compliant consent from individuals authorising the use of their data for AI training purposes, in most cases this will not be feasible. 

Helpfully, the Opinion accepts that legitimate interests is, in principle, a viable legal basis for processing personal data to train an AI model. Further, the Opinion also suggests that it should be straightforward for businesses to identify a lawful legitimate interest. For example, the Opinion cites “developing an AI system to detect fraudulent content or behaviour” as a sufficiently precise and real interest. 

However, where businesses may have more difficulty is in showing that the processing of personal data is necessary to realise their legitimate interest, and that their legitimate interest is not outweighed by any impact on the rights and freedoms of data subjects (the ‘balancing test’). Whilst this is fundamentally just a restatement of existing legal principles, the following sentence should nevertheless cause some concern for businesses developing AI models, in particular Large Language Models: “If the pursuit of the purpose is also possible through an AI model that does not entail processing of personal data, then processing personal data should be considered as not necessary“. Technically speaking, it may often be the case that personal data is not essential for the training of an AI model – however, this does not mean that it is straightforward to systematically remove all personal data from a training dataset, or otherwise replace all identifying elements with ‘dummy’ values. 

With respect to the balancing test, the EDPB asks businesses to consider a data subject’s interest in self-determination and in maintaining control over their own data when considering whether it is lawful to collect personal data for model training purposes.  In particular, it may be more difficult to satisfy the balancing test if a developer is scraping large volumes of personal data (especially including any sensitive data categories) against their wishes, without their knowledge, or otherwise in contexts that would not be reasonably expected by the data subject. 

When it comes to the separate purpose of deploying an AI model, the EDPB asks businesses to consider the impact on the data subject’s fundamental rights that arises from the purpose for which the AI model is used.  For example, AI models that are used to block content publication may adversely affect a data subject’s fundamental right to freedom of expression.  However, conversely the EDPB recognises that the deployment of AI models may have a positive impact on a data subject’s rights and freedoms – for example, an AI model that is used to improve accessibility to certain services for people with disabilities. In line with Recital 47 GDPR, the EDPB reminds controllers to consider the ‘reasonable expectations’ of data subjects in relation to both training and deployment uses of personal data.

Finally, the Opinion discusses a range of ‘mitigating measures’ that may be used to reduce risks to data subjects and therefore tip the balancing test in favour of the controller.  These include:

  • Technical measures to reduce the volume or sensitivity of the personal data used (e.g., pseudonymisation, masking).
  • Measures to facilitate the exercise of data subject rights (e.g., providing an unconditional right for data subjects to opt-out of the use of their personal data for training or deploying the model; allowing a reasonable period of time to elapse between collection of training data and its use).
  • Transparency measures (e.g., public communications about the controller’s practices in connection with the use of personal data for AI model development).
  • Measures specific to web-scraping (e.g., excluding publications that present particular risks; excluding certain data categories or sources; excluding websites that clearly object to web scraping – a minimal robots.txt check is sketched below).
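As a minimal illustration of the last bullet, the snippet below uses Python’s standard library to check a site’s robots.txt before fetching a page. Honouring robots.txt is only one possible signal that a website objects to scraping and does not by itself establish GDPR compliance; the user-agent string is a made-up example.

```python
# Minimal robots.txt check before collecting content from a URL; one signal
# (not a legal safe harbour) that a site objects to scraping.
from urllib import robotparser
from urllib.parse import urlparse

def may_fetch(url: str, user_agent: str = "example-training-crawler") -> bool:
    parsed = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    rp.read()  # network call; may raise if the host is unreachable
    return rp.can_fetch(user_agent, url)
```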

Notably, the EDPB observes that, to be effective, these mitigating measures must go beyond mere compliance with GDPR obligations (for example, providing a GDPR compliant privacy notice, which a controller would in any case be required to do, would not be an effective transparency measure for these purposes). 

When are companies liable for non-compliant AI models?

In its final question, the DPC sought clarification from the EDPB on how a deployer of an AI model might be impacted by any unlawful processing of personal data in the development phase of the AI model. 

According to the EDPB, such ‘upstream’ unlawful processing may impact a subsequent deployer of an AI model in the following ways:

  • Corrective measures taken against the developer may have a knock-on effect on the deployer – for example, if the developer is ordered to delete personal data unlawfully collected for training purposes, the deployer would not be allowed to subsequently process this data. However, this raises an important practical question about how such data could be identified in, and deleted from, the AI model, taking into account the fact that the model does not retain training data in its original form.
  • Unlawful processing in the development phase may impact the legal basis for the deployment of the model – in particular, if the deployer of the AI model is relying on ‘legitimate interests’, it will be more difficult to satisfy the balancing test in light of the deficiencies associated with the collection and use of the training data.

In light of these risks, the EDPB recommends that deployers take reasonable steps to assess the developer’s compliance with data protection laws during the training phase.  For example, can the developer explain the sources of data used, steps taken to comply with the minimisation principle, and any legitimate interest assessments conducted for the training phase?  For certain AI models, the transparency obligations imposed in relation to AI systems under the AI Act should assist a deployer in obtaining this information from a third party AI model developer. While the opinion provides a useful framework for assessing GDPR issues with AI systems, businesses operating in the EU may be frustrated with the lack of certainty or definitive guidance on many key questions relating to this new era of technology innovation.

Germany: Judgment on Non-Material Damages for Loss of Control over Personal Data
https://privacymatters.dlapiper.com/2024/11/germany-judgment-on-non-material-damages-for-loss-of-control-over-personal-data/
Tue, 19 Nov 2024 16:44:34 +0000

On November 18, 2024, the German Federal Court of Justice (Bundesgerichtshof – “BGH”) delivered a judgment (not yet published) under case number VI ZR 10/24 regarding claims for non-material damages pursuant to Art. 82 GDPR due to the loss of control over personal data.

The judgment is based on a personal data breach at Facebook. In April 2021, data from over 500 million users was made public on the internet. This data was collected by unknown third parties using scraping.

In the course of this incident, the plaintiff’s data (user ID, first and last name, place of work and gender) was published on the internet. The plaintiff argues that Facebook did not take sufficient and appropriate measures to protect his personal data and is essentially seeking non-material damages for the anger and loss of control over his personal data.

At first instance, the plaintiff was awarded EUR 250 instead of the requested minimum of EUR 1,000; on appeal, his claim was dismissed entirely. The court of appeal stated that the mere loss of control is not sufficient for the assumption of non-material damage within the meaning of Art. 82 (1) GDPR. Furthermore, the plaintiff had not sufficiently substantiated that he had been psychologically affected beyond the loss of control.

The appeal to the BGH was partially successful. The BGH is of the opinion that even the mere and brief loss of control over personal data as a result of an infringement of the GDPR can constitute non-material damage within the meaning of Art. 82 (1) GDPR. There is no need for the data to be misused in a specific way to the detriment of the data subject or for there to be any other additional noticeable negative consequences. For the specific case, the BGH has not decided on a particular amount of damages but considers EUR 100 to be reasonable in view of the underlying circumstances. However, it generally remains the plaintiff’s obligation to plead and prove the facts on which his claims are based.

The BGH has now referred the case back to the court of appeal for a new hearing and decision.

This judgment is important insofar as the BGH has taken a position on a legal issue – non-material damages for loss of control over personal data and their amount – that has been controversial and inconsistently handled to date. Back on October 31, 2024, the BGH had designated these proceedings as a leading decision procedure in accordance with Section 552b of the German Code of Civil Procedure (Zivilprozessordnung – “ZPO”). In such procedures, the BGH can decide legal issues that are relevant to the outcome of a large number of proceedings and thus provide guidance for the courts of lower instance. However, leading decisions are not formally binding. Nevertheless, the BGH judgment sends a signal, as the BGH evidently considers the appropriate compensation for a mere loss of control over personal data to be low.

An update to this post will be made once the judgment is publicly available.

EU: CJEU Insight
https://privacymatters.dlapiper.com/2024/10/eu-cjeu-insight/
Tue, 15 Oct 2024 14:31:59 +0000

October has already been a busy month for the Court of Justice of the European Union (“CJEU”), which has published a number of judgments on the interpretation and application of the GDPR, including five important decisions, all issued by the CJEU on one day – 4 October 2024. 

This article provides an overview and summary of several of the key data protection judgments issued by the CJEU this month. The judgments consider issues including: whether legitimate interests can cover purely commercial interests; whether competitors are entitled to bring an injunction claim based on an infringement of the GDPR; what constitutes ‘health data’ within the meaning of Art. 4 and Art. 9 GDPR; whether a controller can rely on an opinion of the national supervisory authority to be exempt from liability under Art. 82(2) GDPR; and what constitutes sufficient compensation for non-material damages, among other questions. 

Following preliminary questions from the Amsterdam district court, the CJEU has provided valuable clarification in relation to whether “legitimate interests” under Art. 6 (1)(f) GDPR can be “purely commercial”. In its judgement, the CJEU recognized that a wide range of interests can be considered a ‘legitimate interest’ under the GDPR and there is no requirement that the interests of the controller are laid down by law. While the CJEU decided not to answer the specific preliminary questions received from the Amsterdam district court, the attitude of the CJEU is clear: “legitimate interests” can serve purely commercial interests.  

For further information on this decision, please see our blog post available here.  

In its judgement, the CJEU ruled that Chapter VIII of the GDPR allows for national rules which grant undertakings the right to take action in case of an infringement of substantive provisions of the GDPR allegedly committed by a competitor. Such an action would be on the basis of the prohibition of acts considered to be unfair competition. The CJEU further ruled that the data of a pharmacist’s customers, which are provided when ordering pharmacy-only but non-prescription medicines on an online sales platform, constitute “health data” within the meaning of Art. 4 (15) and Art. 9 GDPR (to that extent contrary to the Advocate General’s opinion of 25 April 2024). 

For further information on this decision, please see our blog post available here.  

  • Maximilian Schrems v Meta Platforms Ireland Ltd (C-446/21) 

Background 

The privacy activist, Maximilian Schrems, brought an action before the Austrian courts challenging the processing of his personal data by Meta Platforms Ireland (“Meta”) in the context of the online social network Facebook. Mr Schrems argued that personal data relating to his sexuality had been processed unlawfully by Meta to send him personalised advertisements.   

Mr Schrems alleged that this processing took place without his consent or other lawful means under the GDPR. The CJEU noted that Mr Schrems had not posted sensitive data on his Facebook profile and further did not consent to Meta using a wider pool of personal data received from advertisers and other partners concerning Mr Schrems’ activities outside Facebook for the purpose of providing personalised advertising.  

The personalised advertisements in question were not based directly on his sexual orientation but on an analysis of his particular interests, drawn from a wider pool of data processed by Meta, as nothing had been openly published by Mr Schrems via Facebook about his sexuality. 

Key findings 

In its judgment, the CJEU held that Art. 5(1)(c) GDPR does not allow the controller, in particular a social network platform, to process data collected inside and outside the platform for the purpose of personalised advertising for an unlimited time and without distinction as to the type of data. 

The CJEU emphasised that the principle of data minimisation requires the controller to limit the retention period of personal data to what is strictly necessary in the light of the objective of the processing activity. 

Regarding the collection, aggregation and processing of personal data for the purposes of targeted advertising, without distinction as to the type of those data, the CJEU held that a controller may not collect personal data in a generalised and indiscriminate manner and must refrain from collecting data which are not strictly necessary for the processing purpose. 

The CJEU also held that the fact that an individual manifestly made public information concerning their sexual orientation does not mean that the individual consented to processing of other data relating to their sexual orientation by the operator of an online social network platform within the meaning of Art. 9(2)(a) GDPR. 

  • Agentsia po vpisvaniyata (Bulgarian Registration Agency)

Background 

The data subject is a shareholder of a company in Bulgaria. The company’s constitutive instrument was sent to the Registration Agency (Agentsia po vpisvaniyata), the Bulgarian authority managing the commercial register. 

This instrument, which includes the surname, first name, identification number, identity card number, date and place of issue of that card, as well as the data subject’s address and signature, was made available to the public by the Agency as submitted. The data subject requested that the Agency erase the personal data relating to her contained in that constitutive instrument. As it is a legal requirement under Directive 2017/1132 (relating to certain aspects of company law) to publish certain information from the company’s constitutive instrument in the commercial register, the Agency refused the request. The Agency also refused to delete the personal data that is not required under the Directive but was nevertheless published because it was contained in the instrument. The data subject brought an action before the Administrative Court of Dobrich (Administrativen sad Dobrich) seeking annulment of the Agency’s decision and an order that the Agency compensate her for the alleged non-material damage she suffered.  

 Key findings 

Of the eight questions asked by the national court, the CJEU answered six, of which five related directly to the GDPR. Firstly, the CJEU held that an operator of a public register, which receives personal data as part of a constitutive instrument that is subject to compulsory disclosure under EU law, is both a ‘recipient’ of the personal data insofar as the operator makes it available to the public, and also a ‘controller’, even if the instrument contains personal data that the operator is not required to process under EU or Member State law. This does not change even if the Agency receives additional information because the data subject did not redact their personal data when sharing the constitutive instrument, although they should have done so according to the operator’s procedural rules. 

Secondly, the controller managing the national register may not outright refuse a request for erasure of personal data published in the register on the ground that the data subject should have provided a redacted copy of the constitutive instrument. A data subject enjoys a right to object to processing and a right to erasure, unless there are overriding legitimate grounds (which was not the case here).  

Thirdly, the CJEU confirmed that a handwritten signature of a natural person is considered personal data as it is usually used to identify a person and has evidential value regarding the accuracy and sincerity of a document.  

Fourthly, the CJEU held that Art. 82(1) GDPR must be interpreted as meaning that a loss of control for a limited period by the data subject over their personal data, due to the making available to the public of such data online in the commercial register of a Member State, may be sufficient to cause ‘non-material damage’. What in any case is required, is that the person demonstrates that they actually suffered such damage, however minimal. The concept of ‘non-material damage’ does not require the demonstration of the existence of additional tangible negative adverse consequences.  

Lastly, if the supervisory authority of a Member State issues an opinion on the basis of Art. 58(3)(b) GDPR, the controller is not exempt from liability under Art. 82(2) GDPR simply because it acts in line with that opinion. Specifically, the Agency argued that a company’s constitutive instrument may still be entered into the register even if personal data is not redacted, referring to an opinion of the Bulgarian supervisory authority. However, as such an opinion issued to the controller is not legally binding, it cannot demonstrate that damages suffered by the data subject are not attributable to the controller, and it is therefore insufficient to exempt the controller from liability.  

  • Patērētāju tiesību aizsardzības centrs (Latvian Consumer Rights Protection Centre) (C-507/23) 

Background 

The data subject is a well-known journalist and expert in the automotive sector in Latvia. During a campaign to make consumers aware of the risks involved in purchasing a second-hand vehicle, the Latvian Consumer Rights Protection Centre (“PTAC”) published a video on several websites which, among other things, featured a character imitating the data subject, without his consent.  

The journalist brought an action before the District Administrative Court in Latvia seeking (i) a finding that the actions of the PTAC, consisting in the use and distribution of his personal data without authorisation, were unlawful, and (ii) compensation for non-material damage in the form of an apology and the payment of EUR 2,000. The court ruled that the actions in question were unlawful, ordered the PTAC to end those acts, to make a public apology to the journalist and to pay him EUR 100 in compensation in respect of the non-material damage he had suffered. However, on appeal, although the Regional Administrative Court confirmed that the processing of personal data by the PTAC was unlawful and ordered the processing to cease and the publication of an apology on the websites which had disseminated the video footage, it dismissed the claim for financial compensation for the non-material damage suffered. The court found that the infringement that had been committed was not serious on the ground that the video footage was intended to perform a task in the public interest and not to harm the data subject’s reputation, honour and dignity.  

The journalist appealed this decision, and the Latvian Supreme Court referred a number of questions on the interpretation of Art. 82(1) GDPR to the CJEU. 

 Key findings 

Firstly, the CJEU found that an infringement of a provision of the GDPR, including the unlawful processing of personal data, is not sufficient, in itself, to constitute ‘damage’ within the meaning of Art. 82(1) GDPR.  

By this, the CJEU repeats and emphasises its previous interpretations of Art. 82(1) GDPR to the effect that a mere infringement of the GDPR is not sufficient to confer a right to compensation, since, cumulatively and in addition to an ‘infringement’, the existence of ‘damage’ and of a ‘causal link’ between damage and infringement constitute the conditions for the right to compensation in Art. 82(1) GDPR. According to the CJEU, this principle applies even if a provision of the GDPR has been infringed that grants rights to natural persons, as such an infringement cannot, in itself, constitute ‘non-material damage’. In particular, the CJEU held that the occurrence of damage in the context of the unlawful processing of personal data is only a potential and not an automatic consequence of such processing. 

Secondly, the CJEU found that the presentation of an apology may constitute sufficient compensation for non-material damage on the basis of Art. 82(1) GDPR. This applies in particular where it is impossible to restore the situation that existed prior to the occurrence of that damage, provided that that form of redress is capable of fully compensating for the damage suffered by the data subject. 

According to the CJEU, Art. 82(1) GDPR does not preclude the making of an apology from constituting standalone or supplementary compensation for non-material damage, provided that such a form of compensation complies with the principles of equivalence and effectiveness. In the present case, providing an apology as a possible form of compensation was explicitly laid down in Art. 14 of the Latvian Law on compensation for damage caused by public authorities. Other jurisdictions, such as German civil law, do not explicitly provide in their national laws for the possibility of an apology as a form of compensation. Nevertheless, some courts have already taken apologies into account when determining the amount of monetary compensation. In light of this decision, courts may increasingly consider an apology as a means of reducing the monetary amount of compensation for damages.  

Thirdly, according to the CJEU, Art. 82(1) GDPR precludes the controller’s attitude and motivation from being taken into account when deciding whether to grant the data subject less compensation than the damage actually suffered.  

According to the CJEU, Art. 82(1) GDPR has an exclusively compensatory and not a punitive function. Therefore, the gravity of an infringement cannot influence the amount of damages awarded under Art. 82(1) GDPR. The amount of damages may not be set at a level that exceeds full compensation for the actually suffered damage. 

Conclusion/implications 

While these five judgements were published on the same day, the decisions relate to a number of different topics. What they do have in common is that they all demonstrate the CJEU’s willingness to extend its reach and tackle difficult questions on the interpretation of the GDPR, particularly where there has not always been agreement or clarity among supervisory authorities. Although these decisions generally clarify and strengthen the CJEU’s previous interpretation of a number of issues, such as those relating to the compensation of non-material damages pursuant to Art. 82(1) GDPR, it is interesting that for both the KNLTB decision and the Agentsia po vpisvaniyata decision, the CJEU followed a different interpretation of the GDPR to that of the relevant supervisory authorities (and in the KNLTB decision, contrary to the AG Opinion).

As we head into 2025, we can expect further judgments from the CJEU on the interpretation and application of the GDPR, with more than 20 GDPR-related cases currently pending before the court.

Europe/Germany: Right to bring collective action for violations of information obligations under GDPR
https://privacymatters.dlapiper.com/2024/08/europe-germany-right-to-bring-collective-action-for-violations-of-information-obligations-under-gdpr/
Thu, 29 Aug 2024 07:31:01 +0000

Summary

In its judgement of 11 July 2024 (C-757/22), the European Court of Justice (‘ECJ’) ruled that the violation of a controller’s information obligations under Art. 12 and 13 GDPR can be subject to a representative action under Article 80(2) GDPR.

Facts of the case

Meta Platforms Ireland Limited (“Meta“) provides users of Facebook with free games from third-party providers (known as the “App Center”). When accessing the App Center, users were informed that, by using certain games, the third-party provider would collect their personal data and had permission to publish this data. The user was also informed that, by using the applications concerned, they accepted the general conditions of those applications and the relevant data protection policies.

The Federation of German Consumer Organizations (Verbraucherzentrale Bundesverband – “VZBV“) brought an action before the Regional Court of Berlin (Landgericht Berlin), claiming that the information presented to users of the games in the App Center was unfair, particularly in relation to the failure to obtain valid consent from users in compliance with data protection law. It further argued that the information by means of which the applications were given permission to publish certain personal information on behalf of users constituted a general condition which unduly disadvantaged those users.  

The Landgericht Berlin upheld the action and Meta appealed this decision before the Higher Regional Court of Berlin. This appeal was dismissed and Meta then further appealed to the Federal Court of Justice. The Federal Court of Justice did not rule out the possibility that the VZBV might have lost its prior right of action during the proceedings following the entry into force of the GDPR. As a result, the German Federal Court of Justice temporarily suspended the proceedings and referred a question to the ECJ for a preliminary ruling on the interpretation of Article 80 (1) and (2) and Article 84 (1) GDPR. In its judgment of 28 April 2022 (Meta Platforms Ireland C-319/20), the ECJ ruled that Article 80 (2) GDPR must be interpreted as not precluding a national provision that allows an association to bring an action to protect consumer interests due to a violation of personal data protection through unfair commercial practices or the use of ineffective general terms and conditions, provided that the data processing in question may affect the rights of natural persons under the GDPR.

However, the judgment did not address whether a violation of the information obligation under Article 12 (1), first sentence, and Article 13 (1)(c) and (e) GDPR constitutes a breach “as a result of processing” within the meaning of Article 80 (2) GDPR. Consequently, the German Federal Court of Justice has once again suspended the proceedings and referred this specific question to the ECJ for clarification.

Decision

The ECJ held that where processing of personal data is carried out in breach of the data subject’s right to information under Articles 12 and 13 GDPR, the infringement of that right to information must be regarded as an infringement of the data subject’s rights ‘as a result of the processing’, within the meaning of Article 80(2) GDPR. The ECJ further held that it therefore follows that the right of the data subject, under the first sentence of Article 12(1) and Article 13(1)(c) and (e) GDPR, to obtain from the controller, in a concise, transparent, intelligible and easily accessible form, using clear and plain language, information relating to processing, constitutes a right whose infringement allows recourse to the representative action mechanism provided for in Article 80(2) GDPR.

Practical note

This ruling by the ECJ will have significant implications for controllers in practice. Data protection notices, such as publicly accessible notices on websites, will be open to scrutiny by consumer protection associations such as the VZBV. There has been an increase in recent years of both consumer and privacy associations scrutinizing potential violations of data protection requirements, with the VZBV, for example, initiating numerous cases before the German courts – particularly recent actions relating to the use of cookies. In a recently published statement, the VZBV has supported the ECJ judgement, stating that the “ruling sends a positive signal to consumers”.

While the review of data protection notices has not been a primary focus of German data protection supervisory authorities thus far, and there have been few enforcement actions in this regard, the ECJ ruling increases the risk of being sued by consumer protection associations due to inadequate data protection notices.

Accordingly, controllers should undertake a thorough review of their data protection notices to ensure compliance with the requirements set out in Articles 12 (1) and 13 or 14 of the GDPR. In particular, controllers should ensure that data protection notices comply with the requirement under Article 12 (1) GDPR, to provide information in a concise, transparent, intelligible and easily accessible form, using clear and plain language, to which the ECJ expressly refers in its judgement.

Europe: EDPB issues Opinion on ‘consent or pay’ models deployed by large online platforms
https://privacymatters.dlapiper.com/2024/04/europe-edpb-issues-opinion-on-consent-or-pay-models-deployed-by-large-online-platforms/
Wed, 24 Apr 2024 14:26:41 +0000

The European Data Protection Board (“EDPB”) has adopted an Opinion (“EDPB Opinion”) on the validity of consent to process personal data for the purposes of behavioural advertising in the context of ‘consent or pay’ models deployed by large online platforms. The EDPB concludes that “in most cases”, the requirements of valid consent under the General Data Protection Regulation (“GDPR”) will not be met if users are only given a choice between consenting to processing of personal data for behavioural advertising purposes and paying a fee.

Background

Last year, following a request from the Norwegian Data Protection Authority, the EDPB adopted an urgent binding decision imposing a ban on the processing of personal data by Meta for behavioural advertising on the legal bases of contract and legitimate interest, across the European Economic Area ("EEA").  As a result of the EDPB's decision, Meta announced that it planned to rely on consent as the legal basis for its behavioural advertising activities in respect of users in the EEA – using a subscription model under which users who do not consent to share their personal data and receive targeted adverts are charged a monthly fee. This so-called "consent or pay" model has already prompted significant debate among European data protection supervisory authorities and attracted complaints from privacy activists.

In response to Meta's announcement, the Dutch, Norwegian and Hamburg Data Protection Authorities made a request to the EDPB under Article 64(2) GDPR to issue an opinion on the circumstances and conditions in which 'consent or pay' models relating to behavioural advertising can be implemented by large online platforms in a way that constitutes valid, and in particular freely given, consent.

EDPB Opinion

The EDPB has clarified that the scope of its Opinion is limited to the implementation by large online platforms of ‘consent or pay’ models, where users are asked to consent to processing for the purposes of behavioural advertising. The EDPB states that “large online platforms” may cover, but is not limited to, “very large online platforms” as defined under the EU Digital Services Act and “gatekeepers” as defined under the EU Digital Markets Act.

In its Opinion, the EDPB concludes that offering only “a paid alternative to the service which includes processing for behavioural advertising purposes should not be the default way forward for controllers”. Individuals should be provided with an ‘equivalent alternative’, that does not require payment of a fee. The EDPB further states that “if controllers choose to charge a fee for access to the ‘equivalent alternative’, controllers should consider also offering a further alternative, free of charge, without behavioural advertising” – the EDPB considers this “a particularly important factor in the assessment of certain criteria for valid consent under the GDPR”. Individuals must have a genuine free choice – any fee charged cannot make individuals feel compelled to consent.

The EDPB refers to the decision of the Court of Justice of the European Union ("CJEU") in Meta vs Bundeskartellamt, which considered whether consent given by the user of an online social network to the operator of that network meets the requirements of valid consent under the GDPR, in particular the condition that consent must be freely given, where the operator holds a dominant position on the market for online social networks. In its Opinion, the EDPB confirms that, as set out in the Bundeskartellamt judgment, when assessing whether consent is "freely given", controllers should take into account: "whether the data subject suffers detriment by not consenting or withdrawing consent; whether there is an imbalance of power between the data subject and the controller; whether consent is required to access goods or services, even though the processing is not necessary for the fulfilment of the contract (conditionality); and whether the data subject is able to consent to different processing operations (granularity)".

In addition, the EDPB confirms that controllers should assess, on a case by case basis, whether imposing a fee for use of the service is appropriate and, if so, the amount of that fee. In particular, controllers should ensure that “the fee is not such as to inhibit data subjects from making a genuine choice in light of the requirements of valid consent and of the principles under Article 5 GDPR, in particular fairness”.

Conclusion

The EDPB Opinion provides some clarity in relation to 'consent or pay' models; however, it raises the question of how online services will be funded if large online service providers cannot harvest and monetise consumer data. Although the EDPB does not go as far as prohibiting the use of "consent or pay" models for behavioural advertising purposes, stating only that these models will not satisfy the requirements of valid consent under the GDPR 'in most cases', it sets a very high bar.

It is clear that the ‘consent or pay’ model will continue to attract attention from regulators. In particular, although the decision is non-binding, it will be taken into account by the Irish Data Protection Commission, and the Dutch, Norwegian and Hamburg data protection authorities that referred the matter to the EDPB, as they continue to investigate the processing of personal data for behavioural advertising purposes by large online platforms. In the UK, the ICO has also recently launched a call for views on the use of “consent or pay” models; and in the EU, the European Commission has launched investigations against a number of large online service providers in relation to compliance with obligations under the Digital Markets Act, including in relation to Meta’s new “consent or pay” model.

Although the EDPB Opinion is limited to ‘large online platforms’, we expect further guidance for other online service providers. In its press release, the EDPB confirmed that it will also develop guidelines on ‘consent or pay’ models with a broader scope.

]]>
CJEU Insight https://privacymatters.dlapiper.com/2024/01/cjeu-insight/ Wed, 24 Jan 2024 11:18:40 +0000 https://privacymatters.dlapiper.com/?p=7197 Continue Reading]]> 2023 was a busy year for the Court of Justice of the European Union (CJEU), with the issuance of a number of far-reaching judgments on the interpretation and application of the GDPR.

In December 2023, the CJEU delivered two important decisions which supplement a growing body of jurisprudence on the issuance of administrative fines and claims for non-material damages.  

In Deutsche Wohnen C-807/21, the CJEU delivered clear guidance on the need to establish wrongdoing by a controller in order to impose a fine, while in Natsionalna agentsia za prihodite C-340/21, the CJEU weighed in on the adequacy of a controller's security measures and the resulting exposure to claims for damages.

Deutsche Wohnen

Background

On 5 December 2023, the CJEU delivered a judgment on the culpability of data controllers and the administration of fines by a supervisory authority for infringing the GDPR.

In this case, Deutsche Wohnen, a German listed real estate company, was fined approximately €14.5 million by the Berlin Data Protection Authority for the "intentional infringement" of the GDPR. The primary issue was Deutsche Wohnen's failure to delete personal data belonging to tenants when it was no longer necessary.

Deutsche Wohnen brought an action against that decision, which led to two fundamental questions being referred to the CJEU:

  1. To resolve a conflict between German law and the GDPR on the liability of undertakings, the CJEU was asked whether an administrative fine can be imposed under Article 83 GDPR on an undertaking without the infringement first being attributed to an identified natural person (e.g., a member of its management bodies or a representative of the undertaking concerned).
  2. The CJEU was asked whether an undertaking must have intentionally or negligently committed an infringement of the GDPR, or whether the objective fact of the infringement suffices to impose a fine (i.e., whether the undertaking is strictly liable for the infringement).

Key findings

Perhaps not surprisingly, in answering the first question, the CJEU held that the GDPR does not permit Member States to make the imposition of an administrative fine on a legal person, as controller, conditional on a prior finding that the infringement was committed by an identified natural person.

In answering the second question the CJEU has provided some clear and direct guidance:

  • A function of administrative fines is to incentivise compliance with the GDPR. However, to fulfil that function, fines do not need to be capable of being imposed in the absence of any wrongdoing.
  • Only infringements committed wrongfully (intentionally or negligently) can result in culpability and lead to a fine being imposed.
  • Nothing in the GDPR allows for Member States to deviate from this requirement and to effectively establish a strict liability regime.
  • Ignorance of an infringement is no defence.
  • It is not necessary to establish that a member of management acted intentionally, negligently, or was even aware of the infringement.
  • The concept of an "undertaking" is derived from EU competition law; when a supervisory authority calculates a fine, it must do so on the basis of a percentage of the total worldwide annual turnover of the undertaking (group) in the preceding business year.

Natsionalna agentsia za prihodite

Background

On 14 December 2023, the CJEU delivered an important judgment on the conditions necessary to award compensation for non-material damage suffered by data subjects following a cyberattack.

The Bulgarian National Revenue Agency (NAP) is an authority attached to the Bulgarian Minister for Finance. Its function is to identify, secure and recover public debts. On 15 July 2019, it was revealed that a cyberattack had taken place on the NAP’s IT system leading to the unlawful dissemination of personal data of more than six million individuals, including both Bulgarians and foreigners.

A case was brought by an affected data subject against the NAP before the Bulgarian Administrative Court, seeking an order for compensation under Article 82 GDPR for the non-material damage suffered as a result of the fear that the data subject’s personal data may be misused in the future.

The case was referred to the CJEU by the Bulgarian Supreme Administrative Court seeking clarification on whether a person’s fear that their data may be misused in the future following unauthorised access due to a cyberattack amounts to non-material damage under Article 82 GDPR.

Key findings

  • The CJEU confirmed that such fear can constitute non-material damage under the GDPR. However, a national court must satisfy itself that the fear is genuine and well founded, having regard to the specific circumstances of the infringement and of the data subject.
  • The following factors were persuasive:
    • Article 82(1) GDPR establishes the right to compensation from the controller for (non-material) damage suffered.
    • The right of compensation requires three cumulative conditions to be met: (i) damage which has been suffered; (ii) an infringement of the GDPR; and (iii) a causal link between the damage and the infringement (as set out in the Austrian Post decision).
    • Once an infringement has been established, Article 82 GDPR cannot be interpreted as distinguishing between a scenario where the non-material damage suffered stems from actual misuse of personal data compared to where the damage stems from the fear over potential future misuse. In other words, the concept of non-material damage encompasses both.

Conclusion / implications

The Deutsche Wohnen judgment is significant in that it develops the concept of culpability and wrongdoing and has thankfully provided long overdue clarity on whether Article 83 GDPR imposes a strict liability regime. The CJEU said that it does not.

The NAP judgment, meanwhile, means that controllers must take account not only of exposure to damages claims for tangible harm suffered as a result of a cyberattack, but also of the psychological distress that can flow from the fear of misuse of compromised personal data. The case reinforces the adage "better safe than sorry" and underlines the importance of having robust, state-of-the-art technical and organisational measures in place. Controllers should consider both judgments in tandem, as exposure for infringing the GDPR can take the form of both a fine imposed by a supervisory authority and an award of damages by a national court.

The two judgments, along with several other key CJEU decisions issued recently,[1] continue the CJEU's extension of its reach over controllers under the GDPR. The trickle-up effect of decisions from supervisory authorities and national courts reaching the CJEU is starting to bear fruit, and over the course of 2024 we can expect a number of further important decisions from the CJEU on fundamental data protection issues.


[1] See for example, the Schufa case (C-634/21) and its impact on automated decision-making processes and the CJEU’s landmark decision in Meta vs Bundeskartellamt (C-252/21), where the CJEU imposed strict limitations on the use of the lawful bases of contractual necessity, legitimate interests and consent.

]]>
Clearview AI -v- Information Commissioner https://privacymatters.dlapiper.com/2023/10/clearview-ai-v-information-commissioner/ Mon, 23 Oct 2023 08:29:07 +0000 https://privacymatters.dlapiper.com/?p=7123 Continue Reading]]> Summary

A UK court has reversed a fine imposed on the provider of a facial image database service, Clearview AI, on the basis that the (UK) GDPR did not apply to the processing of personal data by the company. In so doing, the court has provided helpful judicial interpretation of both the territorial and material scope of UK data protection law.  

The key takeaways were:

  • A controller or processor may be caught by the extra-territorial scope of the UK GDPR on the basis that its processing activities relate to the monitoring of the behaviour of data subjects in the UK, even where that entity is not itself monitoring data subjects, but where its activities enable its customers to conduct such monitoring.
  • A reminder that processing activities that are carried out for, or connected to, law enforcement purposes – for example, where a company provides its services solely to law enforcement agencies – will fall outside of the scope of the UK GDPR. If those law enforcement agencies are in the UK, then the processing will instead be subject to the parallel law enforcement processing regime under the Data Protection Act. However, if the law enforcement agencies are outside of the UK (as Clearview AI's customers were), then UK data protection law is not engaged.

Background 

On 18 May 2022, the Information Commissioner’s Office brought twin-track enforcement action against Clearview AI in the form of: (1) an Enforcement Notice; and (2) a Monetary Penalty Notice (i.e., a fine) in the amount of GBP 7.5 million.

The ICO had concluded that Clearview AI:

  1. was a controller of personal data under the GDPR as applied in the EU, and under the UK GDPR and Data Protection Act 2018 with respect to the UK data protection framework; and
  2. was or had been processing personal data of UK residents within the scope of the GDPR (in respect of processing taking place up to the end of the Brexit transition period of 23:00 on 31 December 2020) and the UK GDPR (in respect of subsequent processing).

The ICO concluded that Clearview AI had infringed whole rafts of the UK GDPR and GDPR in respect of the requirements:

  • represented by the core data protection principles under Article 5;
  • as to lawfulness of processing under Article 6;
  • around the processing of special category data under Article 9;
  • represented by the transparency obligations under Article 14;
  • represented by the data subject rights to subject access (under Article 15), to rectification (under Article 16), to erasure (under Article 17), and the right to object (under Article 21);
  • as to automated decision making under Article 22; and
  • to undertake a data protection impact assessment under Article 35.

Preliminary Issue

Clearview AI appealed the enforcement notice and the monetary penalty to the First-tier Tribunal (Information Rights). The matters before the Tribunal did not relate to whether Clearview AI had infringed the GDPR or UK GDPR. The issue under consideration was solely relating to the jurisdictional challenge brought by Clearview AI.

It primarily considered three questions as to:

  1. whether as a matter of law, Article 3(2)(b) can apply where the monitoring of behaviour is carried out by a third party rather than the data controller;
  2. whether as a matter of fact, processing of data by Clearview AI was related to monitoring by either Clearview AI itself or by its customers; or
  3. whether the processing by Clearview AI was beyond the material scope of the GDPR by operation of Article 2(2)(a) GDPR and/or was not relevant processing for the purposes of Article 3 of the UK GDPR thereby removing the processing from the material scope of the UK GDPR.

Clearview AI argued that the data processing undertaken by it in the context of the services was outside the territorial scope of the GDPR and UK GDPR, with the consequence that the ICO had no jurisdiction to issue the notices.

Clearview AI services

Clearview AI provides law enforcement agencies with access to a database of facial images scraped from public sources. It uses biometric processing to match a customer’s image with an image and identity in its database. In detail, its services were broadly achieved through the following phases of activity:

Activity 1

  • It copied and scraped facial images from photographs that it found across the public internet and stored them in a series of connected databases and sub-databases, linked to the source of the image, the date and time it was made, and information about associated social media profiles.
  • That material was then used for the creation of a set of vectors for each facial image, using the Clearview AI machine learning facial recognition algorithm.
  • The facial vectors were then stored in a neural-network database, clustered together according to closeness in facial similarities.

Activity 2

  • If a customer wished to use the service, they would upload the facial image being searched for to the Clearview AI system. Vectors would be created of the facial features of that image, which would then be compared to the facial vectors of the stored images on the database using the Clearview AI machine learning facial recognition algorithm. Up to 120 matching facial images would be returned, along with an assessment of the degree of similarity. The service was found to achieve over 99% accuracy.
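
For readers who want a more concrete picture of the two activities described above, the following minimal Python sketch illustrates the general pattern of building an index of facial feature vectors (Activity 1) and ranking stored images by similarity to a probe image (Activity 2). It is a hypothetical illustration only: the names used (embed_face, FaceIndex, top_matches) are invented for this sketch and say nothing about Clearview AI's actual algorithms or infrastructure.

```python
# Minimal, hypothetical sketch of a facial-vector index and matching step.
# Not based on Clearview AI's systems; all names and parameters are illustrative.
import numpy as np

def embed_face(image_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a facial-recognition model mapping an image to a fixed-length
    feature vector (the "facial vectors" referred to in the Tribunal's description)."""
    rng = np.random.default_rng(int(image_pixels.sum()) % (2**32))
    vec = rng.normal(size=128)           # 128-dimensional embedding, arbitrary choice
    return vec / np.linalg.norm(vec)     # normalise so cosine similarity is a dot product

class FaceIndex:
    """Stores embeddings of scraped images together with source metadata (Activity 1)."""
    def __init__(self):
        self.vectors = []    # normalised embeddings
        self.metadata = []   # e.g. source URL of each scraped image

    def add(self, image_pixels: np.ndarray, source_url: str) -> None:
        self.vectors.append(embed_face(image_pixels))
        self.metadata.append({"source_url": source_url})

    def top_matches(self, probe_pixels: np.ndarray, k: int = 120):
        """Embed the probe image and return up to k stored images ranked by similarity (Activity 2)."""
        probe = embed_face(probe_pixels)
        sims = np.array([float(v @ probe) for v in self.vectors])
        order = np.argsort(sims)[::-1][:k]
        return [(self.metadata[i], sims[i]) for i in order]
```

In practice, a database of this kind would be matched against millions of stored vectors, so a real system would rely on an approximate nearest-neighbour index rather than the linear scan shown here; the sketch is only intended to show why the service, as described, depends on retaining a searchable store of facial vectors derived from scraped images.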

Further Activity

  • The returned images would allow a customer to then view additional, non-Clearview AI-derived information (for example, by visiting the source page from which the image was scraped) as to:
    • the person’s name;
    • the person’s relationship status, whether they have a partner and who that may be;
    • whether the person is a parent;
    • the person’s associates;
    • the place the photo was taken;
    • where the person is based/lives/is currently located;
    • what social media is used by the person;
    • whether the person smokes/drinks alcohol;
    • the person’s occupation or pastime(s);
    • whether the person can drive a car;
    • what the person is carrying/doing and whether that is legal; and/or
    • whether the person has been arrested.

The Tribunal considered that it was reasonably likely that the database would contain the images of UK residents and/or images taken within the UK of persons resident elsewhere. It was therefore found that the Clearview AI service could have an impact on UK residents, irrespective of whether it was used by UK customers.

The Clearview AI service was used by customers for commercial purposes prior to 2020 and is not currently used by customers in the UK or in the EU at all. Its customers are in the United States, as well as other countries globally (including Panama, Brazil, Mexico, and the Dominican Republic). It was acknowledged that investigators in one country may be interested in behaviour happening in another country, given that criminal activity is not limited by national boundaries.

Clearview AI had offered its service on a trial basis to law enforcement and government organisations within the UK between June 2019 and March 2020. An overseas law enforcement agency could use the service as part of an investigation into the alleged criminal activity of a UK resident.

Conclusions of the Tribunal

The Tribunal considered that Clearview AI was the sole controller responsible for Activity 1 (as described above) and that Clearview AI was joint controller with its customers for Activity 2. The Further Activity was then processing for which Clearview AI was not a controller at all.

The ICO submitted that the Clearview AI service was being used to monitor the behaviour of the data subjects. The Tribunal concluded that Clearview AI did not monitor behaviour itself but that its customers used the service to monitor the behaviour of data subjects. Consequently, for the purposes of Article 3(2)(b), Clearview AI’s services were related to the monitoring of the behaviour of data subjects. Clearview AI’s status as a joint controller with its customer for the purposes of Activity 2 may have been a significant factor in establishing a sufficiently close nexus between Clearview AI, as the ‘service provider’, and its customer, as the entity actually conducting the behavioural monitoring.

However, whilst the processing activities may in theory have been within the territorial scope of the (UK) GDPR, what was decisive was that they fell outside of its material scope. The Tribunal accepted that Clearview AI offered its services exclusively to non-UK/EU law enforcement and national security agencies, and their contractors, in support of the discharge of their respective criminal law enforcement and national security functions. Such activities fall outside of the scope of the GDPR and the UK GDPR. Whilst the UK has data protection regimes under the Data Protection Act 2018 that apply to both law enforcement and intelligence agencies, those regimes only bind processing activities relating to UK law enforcement or intelligence agencies.

]]>
Europe: Opinion of the Advocate General on presumed fault of the controller in case of unlawful third-party access to personal data https://privacymatters.dlapiper.com/2023/04/europe-opinion-of-the-advocate-general-on-presumed-fault-of-the-controller-in-case-of-unlawful-third-party-access-to-personal-data/ Thu, 27 Apr 2023 13:34:42 +0000 https://blogs.dlapiper.com/privacymatters/?p=3823 Continue Reading]]> Authors: Verena Grentzenberg, Andreas Rüdiger, Ludwig Lauer

In his Opinion of 27 April 2023 (C-340/21), the Advocate General of the European Court of Justice ("ECJ") commented, in the context of a reference for a preliminary ruling of Bulgarian origin, on the interpretation of the right to compensation for non-material damage under Article 82 (1) GDPR, as well as on the requirements for, and the duty to demonstrate, appropriate technical and organizational measures under Articles 24 and 32 GDPR in the event of a cyber-attack.

Facts of the case

The Bulgarian authority "National Revenue Agency" (hereinafter referred to as "NAP") was the target of a cyber-attack which led to unauthorized access to NAP's information system. In the course of this cyber-attack, personal data – mainly tax and social security information – of approximately 4 million Bulgarian citizens (approximately 6 million individuals in total, including foreign citizens) had been accessed and published on the Internet. The plaintiff was among those affected.

In the proceedings at first instance before the Administrative Court of the City of Sofia (hereinafter referred to as "ASSG"), the plaintiff claimed approximately EUR 500 in damages under Article 82 (1) GDPR. He argued that NAP had failed to ensure its cybersecurity in an appropriate manner and that the failure to apply appropriate technical and organizational measures in accordance with Articles 24 and 32 GDPR had resulted in a breach of the protection of his personal data. The plaintiff described the non-material damage suffered as worries, fears and anxieties about possible future misuse of his personal data.

The NAP, as the defendant, considered the claim unfounded. It argued that a cyber-attack does not, in itself, permit conclusions to be drawn about a lack of appropriate technical and organizational measures, and that, having been the victim of a cyber-attack by third parties who were not its employees, it could not be (co-)responsible for the damage incurred and was therefore exempt from liability pursuant to Article 82 (3) GDPR.

Decisions of the court of first instance and referral to the ECJ

The ASSG dismissed the claim, taking the view that the dissemination of the data was not attributable to the NAP, that the burden of proof as to whether the measures implemented were appropriate was on the plaintiff, and that non-material damage was not eligible for compensation.

Hearing the case on appeal, the Bulgarian Supreme Administrative Court referred a number of questions to the ECJ with regard to

  • the presumption that technical and organisational measures in accordance with Art. 32 GDPR are not sufficient where a cyber-attack occurs;
  • the subject matter and scope of the judicial review regarding the appropriateness of technical and organizational measures;
  • the controller’s burden of proof that the technical and organisational measures are appropriate;
  • the exemption of liability under Art. 82 (3) GDPR in connection with cyber-attacks; and
  • the threshold for the non-material damages under Art. 82 (1) GDPR.

Statements of the Advocate General of the ECJ

The core statements of the Advocate General of the ECJ are as follows:

  • According to the Advocate General, the occurrence of a “personal data breach” is not sufficient in itself to conclude that the technical and organisational measures implemented by the controller were not “appropriate” to ensure data protection. The assessment of the appropriateness of those measures must be based on a balancing exercise between the interests of the data subject and the economic interests and technological capacity of the controller, in compliance with the general principle of proportionality.
  • Further, the Advocate General states that, when verifying whether the measures are appropriate, the national court must carry out a review which extends to a specific analysis of the content of those measures and the manner in which they were applied, as well as of their practical effects.
  • The Advocate General states that the burden of proving that the technical and organisational measures are appropriate is on the controller. In accordance with the principle of procedural autonomy, it is for the national legal order of each Member State to determine the admissible methods of proof and their probative value, including the measures of inquiry.
  • The fact that the infringement of the GDPR was committed by a third party does not in itself constitute a ground for exempting the controller from liability. In order to be exempted, the controller must demonstrate, to a high standard of proof, that it is not in any way responsible for the event giving rise to the damage. Liability for unlawful processing of personal data is, in effect, an aggravated liability based on presumed fault, which leaves the controller the possibility of providing exonerating evidence.
  • Finally, according to the Advocate General, detriment consisting in the fear of a potential misuse of one’s personal data in the future, the existence of which the data subject has demonstrated, may constitute non-material damage giving rise to a right to compensation, provided that it is a matter of actual and certain emotional damage and not simply trouble and inconvenience.

Conclusion

Although the opinion of the Advocate General is not binding on the ECJ, the ECJ can generally be expected to follow it in its final judgement. If it does, the judgment will have significant impact and relevance for data processing companies. As the number of cyber-attacks rises constantly, virtually any company can be affected. It is therefore of utmost importance to be prepared for such an eventuality and to review and, if necessary, amend the implemented technical and organisational measures in accordance with Art. 32 GDPR. Even though a cyber-attack can probably never be completely prevented, in light of the opinion of the Advocate General and the associated burden of proof on the companies concerned, it is highly recommended to regularly check the technical and organizational measures as part of internal audits and to ensure sufficient documentation that can stand up in court. Such audits also need to cover processors and even sub-processors. Furthermore, contracts with processors and sub-processors need to adequately address not just the allocation of responsibility, but also court-proof documentation.

]]>