EU: EDPB Opinion on AI Provides Important Guidance though Many Questions Remain
https://privacymatters.dlapiper.com/2025/01/eu-edpb-opinion-on-ai-provides-important-guidance-though-many-questions-remain/
14 January 2025

A much-anticipated Opinion from the European Data Protection Board (EDPB) on AI models and data protection has not resulted in the clear or definitive guidance that businesses operating in the EU had hoped for. The Opinion emphasises the need for case-by-case assessments to determine GDPR applicability, highlighting the importance of accountability and record-keeping, while also flagging ‘legitimate interests’ as an appropriate legal basis under specific conditions. In rejecting the proposed Hamburg thesis, the EDPB has stated that AI models trained on personal data should be considered anonymous only if personal data cannot be extracted or regurgitated.

Introduction

On 17 December 2024, the EDPB published a much-anticipated Opinion on AI models and data protection.  The Opinion includes the EDPB’s view on the following key questions: does the development and use of an AI model involve the processing of personal data; and if so, what is the correct legal basis for that processing?

As is sometimes the case with EDPB Opinions, which necessarily represent the consensus view of the supervisory authorities of 27 different Member States, the Opinion does not provide many clear or definitive answers.  Instead, the EDPB offers indicative guidance and criteria, calling for case-by-case assessments of AI models to understand whether, and how, they are impacted by the GDPR.  In this context, the Opinion repeatedly highlights the importance of accountability and record-keeping by businesses developing or using AI, so that the applicability of data protection laws, and the business’ compliance with those laws, can be properly assessed. 

Whilst the equivocation of the Opinion might be viewed as unhelpful by European businesses looking for regulatory certainty, it is also a reflection of the complexities inherent in this intersection of law and technology.

In summary, the answers given by the EDPB to the four questions in the Opinion are as follows:

  1. Can an AI model, which has been trained using personal data, be considered anonymous?  Yes, but only in some cases.  The likelihood of obtaining personal data from the model, using all means reasonably likely to be used, must be insignificant – whether through attacks which aim to extract the original training data from the model itself, or through interactions with the AI model (i.e., personal data provided in responses to prompts / queries).
  2. Is ‘legitimate interests’ an appropriate legal basis for the training and development of an AI model? In principle yes, but only where the processing of personal data is necessary to develop the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  In particular, the issue of data minimisation, and the related issue of web-scraping / indiscriminate capture of data, will be relevant here. 
  3. Is ‘legitimate interests’ an appropriate legal basis for the deployment of an AI model? In principle yes, but only where the processing of personal data is necessary to deploy the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  Here, the impact on the data subject of the use of the AI model is of predominant importance.
  4. If an AI model has been found to have been created, updated or developed using unlawfully processed personal data, how does this impact the subsequent use of that AI model?  This depends in part on whether the AI model was first anonymised before being disclosed to the deployer of that model (see Question 1).  Otherwise, the deployer of the model may need to assess the lawfulness of the development of the model as part of its accountability obligations.

Background

The Opinion was issued by the EDPB under Article 64 of the GDPR, in response to a request from the Irish Data Protection Commission.  Article 64 requires the EDPB to publish an opinion on matters of ‘general application’ or which ‘produce effects in more than one Member State’. 

In this case, the Irish DPC asked the EDPB to provide an opinion on the above-mentioned questions – a request that is not surprising given the general importance of AI models to businesses across the EU, and the large number of technology companies developing those models that have established their European operations in Ireland.

In order to understand the Opinion, it helps to be familiar with certain concepts and terminology relating to AI. 

First, the Opinion distinguishes between an ‘AI system’ and an ‘AI model’. For the former, the EDPB relies on the definition given in the EU AI Act. In short: a machine-based system operating with some degree of autonomy that infers, from inputs, how to produce outputs such as predictions, content, recommendations, or decisions.  An AI model, meanwhile, is a component part of an AI system. Colloquially, it is the ‘brain’ of the AI system – an algorithm, or series of algorithms (such as in the form of a neural network), that recognises patterns in data. AI models require the addition of further components, such as a user interface, to become AI systems. To take a common example – the generative AI system known as ChatGPT is a software application comprising an AI model (the GPT Large Language Model) connected to a chatbot-style user interface that allows the user to submit queries (or ‘prompts’) to the model in the form of natural language questions. Whilst the Opinion is notionally concerned only with AI models, at times the Opinion appears to blur the distinction between the model and the system, in particular when discussing the significance of model outputs that are only rendered comprehensible to the user through an interface that sits outside of the model.

Second, the Opinion relies on an understanding of a typical ‘AI lifecycle’, pursuant to which an AI model is first developed by training the model on large volumes of data.  This training may happen in a number of phases which become increasingly refined (referred to as ‘fine-tuning’). Only after an AI model is developed can it be used, or ‘deployed’, in a live setting, as part of an AI system.  Often, the developer of an AI model will not be the same person as the deployer.  This is relevant because the Opinion variously addresses both development and deployment phases.

The significance of the ‘Hamburg thesis’

With respect to the key question of whether AI models can be considered anonymous, the Opinion follows in the wake of a much-discussed paper published in July 2024 by the data protection authority for the German state of Hamburg.  The paper took the position that AI models (specifically, Large Language Models) are, in isolation, anonymous – they do not involve the processing of personal data. 

In order to reach that conclusion, the paper decoupled the model itself from: (i) the prior training of the model (which may involve the collection and further processing of personal data as part of the training dataset); and (ii) the subsequent use of the model, whereby a prompt/input may contain personal data, and an output may be used in a way that means it constitutes personal data.

Looking only at the AI model itself, the paper concluded that the tokens and values which make up the ‘inner workings’ of a typical AI model do not, in any meaningful way, relate to or correspond with information about identifiable individuals.  Consequently, the model itself was found to be anonymous, even if the development and use of the model involves the processing of personal data.

The Hamburg thesis was welcomed for several reasons, not least because it resolved difficult questions such as how data subject rights could be understood in relation to an AI model (if someone asks for their personal data to be deleted, then what can this mean in the context of an AI model?), and the question of the lawful basis for ‘storing’ personal data in an AI model (as distinct from the lawful basis for collecting and preparing data to train the model).

However, as we go on to explain, the EDPB Opinion does not follow the relatively simple and certain framework presented by the Hamburg thesis.  Instead, it introduces uncertainty by asserting that there are, in fact, scenarios where an AI model contains personal data, but that this must be determined on a case-by-case basis.

Are AI models anonymous?

First, the Opinion is only concerned with AI models that have been trained using personal data.  Therefore, AI models trained solely on non-personal data (such as statistical data, or financial data relating to businesses) can, for the avoidance of doubt, be considered anonymous.  However, in this context the broad scope of ‘personal data’ under the GDPR must be remembered, and the Opinion does not suggest any de minimis level of personal data that needs to be involved in the training of the AI model for the question of GDPR applicability to arise.

Where personal data is used in the training phase, the next question is whether the model is specifically designed to provide personal data regarding individuals whose personal data were used to train the model.  If so, the AI model will not be anonymous.  Examples include an AI model trained to provide a user, on request, with biographical information and contact details of directors of public companies, or a generative AI model trained on the voice recordings of famous singers so that it can, in turn, mimic the voices of those singers.  In each case, the model is trained on personal data of specific individuals, in order to be able to produce other personal data about those individuals as an output.

Finally, there is the intermediate case of AI models that are trained on personal data, but that are not designed to provide personal data related to the training data as an output.  It is this use case that the Opinion focuses on.  The conclusion is that AI models in this category may be anonymous, but only if the developer of the model can demonstrate that information about individuals whose personal data was used to train the model cannot be ‘obtained from’ the model, using all means reasonably likely to be used.  Notwithstanding that personal data used for training the model no longer exists within the model in its original form (but rather is “represented through mathematical objects”), that information is, in the eyes of the EDPB, still capable of constituting personal data.

The following question then arises: how does someone ‘obtain’ personal data from an AI model? In short, the Opinion posits two possibilities.  The first is that training data is ‘extracted’ via deliberate attacks.  The Opinion refers to an evolving field of research in this area and makes reference to techniques such as ‘model inversion’, ‘reconstruction attacks’, and ‘attribute and membership inference’.  These are techniques that can be deployed to trick the model into revealing training data, or otherwise reconstruct that training data, in some cases relying on privileged access to the model itself.  The second is the risk of accidental or inadvertent ‘regurgitation’ of personal data as part of an AI model’s outputs.

Consequently, a developer must be able to demonstrate that its AI model is resistant both to attacks that extract personal data directly from the model, as well as to the risk of regurgitation of personal data in response to queries: “In sum, the EDPB considers that, for an AI model to be considered anonymous, using reasonable means, both (i) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to train the model; as well as (ii) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant for any data subject”.
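To make this more concrete, the following minimal Python sketch illustrates one crude form of regurgitation testing that a developer might record as part of its accountability documentation. It is illustrative only, and not the EDPB’s or any developer’s actual methodology: the generate function is a hypothetical stand-in for the model or system under test, and the canary records and probe prompts are invented examples. Real extraction and regurgitation testing is considerably more sophisticated and remains an active research area.

```python
import re

# Hypothetical stand-in for the model or system under test; in practice this
# would call the deployed model's generation endpoint.
def generate(prompt: str) -> str:
    return "..."  # the model's output would appear here

# Invented training-set fragments whose verbatim reappearance in outputs
# would suggest regurgitation of personal data.
CANARIES = [
    "jane.doe@example.com",
    "Jane Doe, 12 Example Street",
]

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def regurgitation_check(prompts: list[str]) -> list[tuple[str, str]]:
    """Return (prompt, finding) pairs where an output reproduces a known
    training fragment verbatim or contains an email-like string."""
    findings = []
    for prompt in prompts:
        output = generate(prompt)
        for canary in CANARIES:
            if canary.lower() in output.lower():
                findings.append((prompt, f"verbatim training fragment: {canary}"))
        for match in EMAIL_PATTERN.findall(output):
            findings.append((prompt, f"email-like string in output: {match}"))
    return findings

if __name__ == "__main__":
    probe_prompts = [
        "What is Jane Doe's email address?",
        "Complete this sentence: Jane Doe lives at",
    ]
    for prompt, finding in regurgitation_check(probe_prompts):
        print(f"[!] {prompt!r} -> {finding}")
```

Positive findings from tests of this kind would point against treating the model as anonymous under the EDPB’s approach; documented negative results, alongside the criteria listed below, would support the opposite conclusion.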

Which criteria should be used to evaluate whether an AI model is anonymous?

Recognising the uncertainty in its conclusion that AI models may or may not be anonymous, the EDPB provides a list of criteria that can be used to assess the likelihood of a model being found to contain personal data.  These include:

  • Steps taken to avoid or limit the collection of personal data during the training phase.
  • Data minimisation or masking measures (e.g., pseudonymisation) applied to reduce the volume and sensitivity of personal data used during the training phase.
  • The use of methodologies during model development that reduce privacy risks (e.g., regularisation methods to improve model generalisation and reduce overfitting, and appropriate and effective privacy-preserving techniques, such as differential privacy – see the illustrative sketch after this list).
  • Measures that reduce the likelihood of obtaining personal data from queries (e.g., ensuring the AI system blocks the presentation to the user of outputs that may contain personal data).
  • Document-based audits (internal or external) undertaken by the model developer that include an evaluation of the chosen measures and of their impact to limit the likelihood of identification.
  • Testing of the model to demonstrate its resilience to different forms of data extraction attacks.
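As a rough illustration of the differential privacy technique referenced above, the sketch below shows the underlying idea: calibrated random noise is added so that no single individual’s data can meaningfully change the released result. In model training this idea is usually applied to gradient updates (for example via DP-SGD); the simple counting query here, and the records and epsilon value used, are hypothetical and intended only to convey the mechanism.

```python
import numpy as np

def dp_count(records: list[bool], epsilon: float) -> float:
    """Release the number of True records under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one individual
    changes the true count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient for this single release.
    """
    true_count = sum(records)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical dataset: whether each individual opted in to marketing.
opted_in = [True, False, True, True, False, True]
print(dp_count(opted_in, epsilon=0.5))  # noisy count, e.g. 3.1
```

A smaller epsilon means more noise and a stronger privacy guarantee, at the cost of accuracy in the released result.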

What is the correct legal basis for AI models?

When using personal data to train an AI model, the preferred legal basis is normally the ‘legitimate interests’ of the controller, under Article 6(1)(f) GDPR. This is for practical reasons. Whilst, in some circumstances, it may be possible to obtain GDPR-compliant consent from individuals authorising the use of their data for AI training purposes, in most cases this will not be feasible. 

Helpfully, the Opinion accepts that legitimate interests is, in principle, a viable legal basis for processing personal data to train an AI model. Further, the Opinion suggests that it should be straightforward for businesses to identify a lawful legitimate interest. For example, the Opinion cites “developing an AI system to detect fraudulent content or behaviour” as a sufficiently precise and real interest.

However, where businesses may have more difficulty is in showing that the processing of personal data is necessary to realise their legitimate interest, and that their legitimate interest is not outweighed by any impact on the rights and freedoms of data subjects (the ‘balancing test’). Whilst this is fundamentally just a restatement of existing legal principles, the following sentence should nevertheless cause some concern for businesses developing AI models, in particular Large Language Models: “If the pursuit of the purpose is also possible through an AI model that does not entail processing of personal data, then processing personal data should be considered as not necessary”. Technically speaking, it may often be the case that personal data is not essential for the training of an AI model – however, this does not mean that it is straightforward to systematically remove all personal data from a training dataset, or otherwise replace all identifying elements with ‘dummy’ values.

With respect to the balancing test, the EDPB asks businesses to consider a data subject’s interest in self-determination and in maintaining control over their own data when considering whether it is lawful to collect personal data for model training purposes.  In particular, it may be more difficult to satisfy the balancing test if a developer is scraping large volumes of personal data (especially including any sensitive data categories) against their wishes, without their knowledge, or otherwise in contexts that would not be reasonably expected by the data subject. 

When it comes to the separate purpose of deploying an AI model, the EDPB asks businesses to consider the impact on the data subject’s fundamental rights that arises from the purpose for which the AI model is used.  For example, AI models that are used to block content publication may adversely affect a data subject’s fundamental right to freedom of expression.  However, conversely the EDPB recognises that the deployment of AI models may have a positive impact on a data subject’s rights and freedoms – for example, an AI model that is used to improve accessibility to certain services for people with disabilities. In line with Recital 47 GDPR, the EDPB reminds controllers to consider the ‘reasonable expectations’ of data subjects in relation to both training and deployment uses of personal data.

Finally, the Opinion discusses a range of ‘mitigating measures’ that may be used to reduce risks to data subjects and therefore tip the balancing test in favour of the controller.  These include:

  • Technical measures to reduce the volume or sensitivity of personal data in use (e.g., pseudonymisation, masking – see the sketch after this list).
  • Measures to facilitate the exercise of data subject rights (e.g., providing an unconditional right for data subjects to opt-out of the use of their personal data for training or deploying the model; allowing a reasonable period of time to elapse between collection of training data and its use).
  • Transparency measures (e.g., public communications about the controller’s practices in connection with the use of personal data for AI model development).
  • Measures specific to web-scraping (e.g., excluding publications that present particular risks; excluding certain data categories or sources; excluding websites that clearly object to web scraping).
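As a simple illustration of the first of these measures, pseudonymising direct identifiers before records enter a training pipeline might look something like the sketch below. This is a minimal example under stated assumptions: the key name and record fields are invented, and pseudonymised data of this kind remains personal data under the GDPR, so the technique reduces risk rather than achieving anonymisation.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice it would be generated and held in a key
# management system, separately from the training data, so that pseudonyms
# cannot easily be linked back to individuals.
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a stable,
    keyed pseudonym before the record enters a training dataset."""
    digest = hmac.new(
        PSEUDONYMISATION_KEY, identifier.encode("utf-8"), hashlib.sha256
    ).hexdigest()
    return "user_" + digest[:16]

record = {"email": "jane.doe@example.com", "review": "Great service, thanks!"}
record["email"] = pseudonymise(record["email"])
print(record)  # {'email': 'user_...', 'review': 'Great service, thanks!'}
```

Because the same identifier always maps to the same pseudonym, records about the same individual can still be linked for training purposes without exposing the underlying identifier.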

Notably, the EDPB observes that, to be effective, these mitigating measures must go beyond mere compliance with GDPR obligations (for example, providing a GDPR compliant privacy notice, which a controller would in any case be required to do, would not be an effective transparency measure for these purposes). 

When are companies liable for non-compliant AI models?

In its final question, the DPC sought clarification from the EDPB on how a deployer of an AI model might be impacted by any unlawful processing of personal data in the development phase of the AI model. 

According to the EDPB, such ‘upstream’ unlawful processing may impact a subsequent deployer of an AI model in the following ways:

  • Corrective measures taken against the developer may have a knock-on effect on the deployer – for example, if the developer is ordered to delete personal data unlawfully collected for training purposes, the deployer would not be allowed to subsequently process this data. However, this raises an important practical question about how such data could be identified in, and deleted from, the AI model, taking into account the fact that the model does not retain training data in its original form.
  • Unlawful processing in the development phase may impact the legal basis for the deployment of the model – in particular, if the deployer of the AI model is relying on ‘legitimate interests’, it will be more difficult to satisfy the balancing test in light of the deficiencies associated with the collection and use of the training data.

In light of these risks, the EDPB recommends that deployers take reasonable steps to assess the developer’s compliance with data protection laws during the training phase.  For example, can the developer explain the sources of data used, steps taken to comply with the minimisation principle, and any legitimate interest assessments conducted for the training phase?  For certain AI models, the transparency obligations imposed in relation to AI systems under the AI Act should assist a deployer in obtaining this information from a third-party AI model developer.

While the Opinion provides a useful framework for assessing GDPR issues with AI systems, businesses operating in the EU may be frustrated with the lack of certainty or definitive guidance on many key questions relating to this new era of technology innovation.

Ireland: Increased regulatory convergence of AI and data protection: X suspends training of AI chatbot with EU user data after Irish regulator issues High Court proceedings
https://privacymatters.dlapiper.com/2024/08/ireland-increased-regulatory-convergence-of-ai-and-data-protection-x-suspends-training-of-ai-chatbot-with-eu-user-data-after-irish-regulator-issues-high-court-proceedings/
19 August 2024

The Irish Data Protection Commission (DPC) has welcomed X’s agreement to suspend its processing of certain personal data for the purpose of training its AI chatbot tool, Grok. This comes after the DPC issued suspension proceedings against X in the Irish High Court.  The DPC described this as the first time that any Lead Supervisory Authority had taken such an action, and the first time that it had utilised these particular powers.

Section 134 of the Data Protection Act 2018 allows the DPC, where it considers there is an urgent need to act to protect the rights and freedoms of data subjects, to make an application to the High Court for an order requiring a data controller to suspend, restrict, or prohibit the processing of personal data.

The High Court proceedings were issued on foot of a complaint to the DPC raised by the consumer rights organisations Euroconsumers and Altroconsumo on behalf of data subjects in the EU/EEA. The complainants argued that the Grok chatbot was being trained with user data in a manner that did not sufficiently explain the purposes of data processing, and that more data than necessary was being collected. They further argued that X may have been handling sensitive data without sufficient reasons for doing so.

Much of the complaint stemmed from X’s initial approach of having data sharing automatically turned on for users in the EU/EEA, which it later mitigated by adding an opt-out setting. X claimed that it had relied on the lawful basis of legitimate interest under the GDPR, but the complainants argued that X’s privacy policy – dating back to September 2023 – was insufficiently clear as to how this applied to the processing of user data for the purposes of training AI models such as Grok.

This development follows a similar chain of events involving Meta in June, when complaints from the privacy advocacy organisation NOYB were made against Meta’s reliance on ‘legitimate interest’ in relation to the use of data to train AI models. This led to engagement with the DPC and Meta’s eventual decision to pause the relevant processing (without the need for the DPC to invoke section 134).

The DPC and other European supervisory authorities strive to emphasise the principles of lawfulness, fairness and transparency at the heart of the GDPR, and their actions illustrate that any activities perceived to threaten these values will be dealt with directly.

The DPC has previously taken the approach of making informal requests and has stated that the exercise of its powers in this case comes after extensive engagement with X on its model training. The High Court proceedings highlight the DPC’s willingness to escalate action where there remains a perceived risk to data subjects.

The DPC has, in parallel, stated that it intends to refer the matter to the EDPB, although there has been no confirmation of such a referral as of this date.

Such a referral will presumably form part of a thematic examination of AI processing by data controllers. The topic is also the subject of debate among individual DPAs, as evidenced by the Discussion Paper on Large Language Models and Personal Data recently published by the Hamburg DPA.

The fact that much of the high-profile activity relating to the regulation of AI is coming from the data protection sphere will no doubt bolster the EDPB’s recommendation, in a statement last month, that Data Protection Authorities (DPAs) are best placed to regulate high-risk AI.

It is expected that regulatory scrutiny and activity will only escalate and accelerate in tandem with the increasing integration of powerful AI models into existing services by ‘big tech’ players to enrich data. This is particularly the case where it is perceived that data sets are being re-purposed and further processing is taking place. In such circumstances, it is essential that an appropriate legal basis is relied upon – noting the significant issues that can arise if there is an over-reliance on legitimate interest. The DPC and other regulators are likely to investigate, engage and ultimately intervene where they believe that data subjects’ rights under the GDPR are threatened. Perhaps in anticipation of more cross-border enforcement activity, last month the European Commission proposed a new law to streamline cooperation between DPAs when enforcing the GDPR in such cases.

A fundamental lesson from these developments is that, in the new AI paradigm, ensuring that there is a suitable legal basis for any type of processing, and that the principles of fairness and transparency are complied with, should be an absolute priority.

Ireland: DPC Issues Record 87% of EU GDPR Fines in 2023; Breach Reports Increase by 20%
https://privacymatters.dlapiper.com/2024/06/ireland-dpc-issues-record-87-of-eu-gdpr-fines-in-2023-breach-reports-increase-by-20/
6 June 2024

The Data Protection Commission (DPC) has published its 2023 Annual Report, highlighting a record year in which DPC fines accounted for 87% of all GDPR fines issued across the EU. A busy year for the DPC also saw a 20% increase in reported personal data breaches. Helen Dixon has since stepped down after 10 years in the role, with Dr. Des Hogan and Dale Sunderland taking over the reins.

The past year saw the DPC progress ongoing large-scale inquiries, in particular against social media platforms, defend cross-border decisions in legal proceedings brought by regulated entities appealing those decisions, and increase its interaction with the European Data Protection Board (EDPB). As a result, DPC fines accounted for 87% of the GDPR fines issued by EU data protection authorities last year.

The DPC received a total of 6,991 valid notifications of personal data breaches in 2023, an increase of 20% against the previous year. The DPC also handled 43 complaints relating to alleged personal data breaches which were not notified to the DPC in line with Article 33.

Unauthorised disclosure of personal data continues to be the leading reason for breach notifications, accounting for 52% of the overall total in 2023. 146 of the valid data breach notifications were received under the ePrivacy Regulations, an increase of 42%, and 59 notifications were received under the Law Enforcement Directive. In line with previous years, most reported incidents originated from the private sector (3,766), followed by the public sector (2,968), with the remainder coming from the voluntary and charity sector (275).

Complaints Handling

The Annual Report notes another year of extensive enforcement work by the DPC. In total, 11,147 cases were concluded by the DPC in 2023. As of 31 December 2023, the DPC had 89 statutory inquiries on-hand, including 51 cross-border inquiries. In addition to its cases and inquiries, the DPC also handled over 25,130 electronic contacts, 7,085 phone calls and 1,253 postal contacts. 

The Annual Report highlights that once again the most frequent GDPR topics for queries and complaints in 2023 were access requests; fair-processing; disclosure; direct marketing and right to erasure (delisting and/or removal requests).

Administrative Fines and Large-Scale Inquiries

The Annual Report highlights 19 inquiries that concluded in 2023, resulting in fines totalling €1.55 billion. The tables below show a consistent enforcement strategy being implemented by the DPC, focusing on international and domestic companies and their compliance with core principles of the GDPR (e.g. transparency, lawful basis, security measures) as well as targeted thematic focuses (e.g. children’s personal data and video surveillance).

Since the implementation of the GDPR, the DPC has been established as the Lead Supervisory Authority for 87% of cross-border complaints.

Notable large-scale cross-border inquiries concluded in 2023 were:

Controller Sector | Fine | Issues At Play
Social Media | €5.5 million | Controller was not entitled to rely on contract as a lawful basis for service improvement and security under its terms and conditions.
Social Media | €1.2 billion | Transfer of data from the EU to the US without a lawful basis.
Social Media | €345 million | Processing of children’s personal data.

Notable domestic inquiries concluded in 2023 were:

Controller Sector | Fine | Issues At Play
Financial Services | €750,000 | Ten data breaches relating to the unauthorised disclosure of personal data on a customer-facing app.
Healthcare | €460,000 | A ransomware attack which impacted over 70,000 patients and their data, with 2,500 permanently affected when data was deleted with no back-up.
County Council | €50,000 | Usage of CCTV, car plate reading technology and body worn cameras.

Ongoing Inquiries

The Annual Report shows no sign of the breadth and scale of the inquiries being undertaken by the DPC abating. Notable inquiries progressed by the DPC include:

Controller Sector | Status | Issues At Play
Government Department | DPC is preparing a Statement of Issues | Allegation that the database used for the Public Services Card was unlawfully provided to the Department.
Technology | Draft Decision with peer regulators for review (Art 60 GDPR) | Processing of location data.
Technology | Draft Decision with peer regulators for review (Art 60 GDPR) | Compliance with transparency obligations when responding to data subjects.
Social Media | DPC has issued preliminary draft decisions in relation to four related inquiries | User-generated data being posted on social media.
Social Media | Draft Decision with peer regulators for review (Art 60 GDPR) | Transfer of data from the EU to China.
Technology | Draft Decision with peer regulators for review (Art 60 GDPR) | Real-time bidding / adtech and data subject access rights.
Social Media | DPC is preparing its preliminary draft decision | Allegation of collated datasets being made available online.

Litigation  

At the outset of its Annual Report, the DPC recognises the continued focus on domestic litigation before the Irish Courts. The DPC was awarded a considerable number of legal costs orders in 2023. The threat of a legal costs order may act as a deterrent to those considering challenging the DPC in the future.

There were 7 national judgments or final orders in 2023 split almost evenly between the Irish Circuit Court and the Irish High Court. The cases involved: 1 plenary matter, 5 appeals (with 4 statutory appeals and 1 appeal on a point of law) and 1 judicial review. 2 cases issued against the DPC were discontinued and a further 5 were concluded. The legal costs of 5 proceedings were awarded in favour of the DPC, with no reference to costs made in the reports for the other 2 proceedings. These awards enable the DPC to seek the legal costs it incurred in defending the proceedings against the claimant(s).

The DPC uses the Annual Report to showcase its supervisory and enforcement functions in relation to the processing of personal data in the context of electronic communications under the e-Privacy Regulations. The Annual Report highlights 4 successful prosecutions involving unsolicited marketing messages. In all 4 cases, the DPC had the legal costs of the prosecution discharged by the defendants, two of whom were companies in the telecommunications and insurance sectors.  

Children  

Prioritising the protection of children and other vulnerable groups forms one of the five core pillars of the DPC’s Regulatory Strategy 2022 – 2027, so it was no surprise that the DPC continued to be proactive in safeguarding children’s data protection rights this year. This is reflected in the list of matters that were prioritised for direct intervention by the DPC during 2023, which included CCTV in school toilets and the posting of images of children online. The DPC issued a Final Decision and imposed a large fine of €345 million against a major social media company for infringements of the GDPR related to the processing of children’s personal data.

The DPC also produced guidance for organisations and civil society to enhance the protection of children’s personal data. An example of this is the data protection toolkit for schools, which was devised by the DPC after it noticed, in the course of its supervisory and engagement activities, that the sector was finding certain aspects of data protection compliance challenging.

Interestingly, the DPC has been nominated to represent the EDPB on the newly formed Task Force on Age Verification under the Digital Services Act and act as co-rapporteur in the preparation at EDPB level of guidance on children’s data protection issues. This leadership role follows the DPC’s publication of a guidance note on the Fundamentals of children’s data protection and the DPC’s enforcement activity in this area over recent years.

Data Protection Officers  

The DPC has continued its efforts to bring together the DPO community in Ireland, recognising the importance of the DPO’s role in data protection compliance for organisations. As at the end of 2023, the DPC has been notified of 3,520 DPOs. The DPC is actively engaging with DPO networks across a number of key sectors and has contributed to several events aimed at DPOs including a new course run by the Institute of Public Administration, ‘GDPR and Data Protection Programme for DPOs in the Public Service’.

Importantly, the DPC participated in the 2023 Coordinated Enforcement Framework (CEF) Topic ‘The Designation and Position of Data Protection Officers’. The DPC contacted 100 DPOs and identified three substantive issues in its national report:

  • Resources available to DPOs – a third of respondents noted they do not have sufficient resources to fulfil their role;
  • Conflicts of interests – over a third indicated their role is split with other core governance roles within their organisations; and
  • Tasks of the DPO – it was noted that, in many organisations, the tasks assigned to the DPO do not actually complement the DPO’s role.

Supervision  

A sectoral breakdown notes that of the 751 supervision engagements during 2023, 391 related to multinational technology companies. The DPC also provided guidance and observations on 37 proposed legislative measures.

Supervisory engagements undertaken by the DPC in 2023 included identifying data protection issues arising in the context of adult safeguarding and service provision to at-risk adults, and an examination of the use of technology in sport and the processing of health data for performance monitoring (with a questionnaire due to issue to voluntary and professional sports bodies).

The DPC also engaged with the Local Government Management Authority in relation to three draft codes of practice prepared in relation to the use of CCTV and mobile recording devices to investigate and prosecute certain waste and litter pollution related offences. Separately, given the significant increase in the use of CCTV in areas where there is an increased expectation of privacy, the DPC published a detailed update of its CCTV Guidance in November 2023.

In February 2024, Helen Dixon stepped down from her role as Data Protection Commissioner and Dr. Des Hogan, who serves as Chairperson, and Mr. Dale Sunderland commenced their new roles.

The DPC continues to focus on systemic non-compliance and children’s data protection rights in 2024 as well as participating in the EDPB’s ongoing coordinated enforcement action on the right of access. With the level of enforcement action taking place as well as the rapid pace of AI and technology development, organisations are advised to review and update their privacy frameworks to ensure compliance with the GDPR. 
