Privacy Matters – DLA Piper's Global Privacy and Data Protection Resource

Malaysia: Guidelines Issued on Data Breach Notification and Data Protection Officer Appointment
https://privacymatters.dlapiper.com/2025/03/malaysia-guidelines-issued-on-data-breach-notification-and-data-protection-officer-appointment/ (4 March 2025)

Following Malaysia's introduction of data breach notification and data protection officer ("DPO") appointment requirements in last year's significant amendments to the Personal Data Protection Act ("PDPA") (click here for our summary), the Personal Data Protection Commissioner of Malaysia ("Commissioner") recently released guidelines that flesh out those requirements: the Guideline on Data Breach Notification ("DBN Guideline") and the Guideline on Appointment of Data Protection Officer ("DPO Guideline"). With the data breach notification and DPO appointment requirements set to come into force on 1 June 2025, organisations subject to the PDPA, whether data controllers or processors, should understand and adapt to these guidelines to ensure compliance.

DBN Guideline

When must a personal data breach be notified to the regulator and affected data subjects?

A data controller must notify a personal data breach to both the Commissioner and affected data subjects if it causes or is likely to cause "significant harm", which includes a risk of any of the following (a simple decision sketch follows this list):

  • physical harm, financial loss, a negative effect on credit records, or damage to or loss of property;
  • misuse of personal data for illegal purposes;
  • compromise of sensitive personal data;
  • combination of personal data with other personal information that could potentially enable identity fraud; or
  • (for the purpose of notification to the Commissioner only) a breach of “significant scale”, i.e. involving more than 1,000 affected data subjects.
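
To make the trigger concrete, below is a minimal sketch (purely illustrative) of how an organisation might encode the notification test described above; the field names and structure are our own assumptions, not terms defined in the DBN Guideline.

```python
from dataclasses import dataclass

@dataclass
class BreachAssessment:
    physical_or_financial_harm: bool   # incl. credit record damage or loss of property
    illegal_misuse_of_data: bool
    sensitive_data_compromised: bool
    identity_fraud_possible: bool      # data combinable with other personal information
    affected_data_subjects: int

def must_notify(b: BreachAssessment) -> dict:
    """Return which notifications the DBN Guideline would likely require (illustrative)."""
    significant_harm = any([
        b.physical_or_financial_harm,
        b.illegal_misuse_of_data,
        b.sensitive_data_compromised,
        b.identity_fraud_possible,
    ])
    significant_scale = b.affected_data_subjects > 1000  # Commissioner notification only
    return {
        "notify_commissioner": significant_harm or significant_scale,
        "notify_data_subjects": significant_harm,
    }
```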

What is the timeframe to make data breach notifications?

The timeframe for notifications is as follows:

  • Notification to the Commissioner: as soon as practicable and within 72 hours from the occurrence of the breach. If notification cannot be made to the Commissioner within 72 hours, a written notice detailing the reasons for the delay and providing supporting evidence must be submitted; and
  • Notification to affected data subjects: without unnecessary delay and within seven days of notifying the Commissioner.
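
These timeframes translate into two simple deadline calculations. The sketch below is illustrative only and assumes the time of the breach and the time of notifying the Commissioner are known.

```python
from datetime import datetime, timedelta
from typing import Optional

def notification_deadlines(breach_occurred: datetime,
                           commissioner_notified: Optional[datetime] = None) -> dict:
    """Latest notification times implied by the DBN Guideline timeframes (illustrative)."""
    deadlines = {
        # as soon as practicable and within 72 hours from the occurrence of the breach
        "commissioner": breach_occurred + timedelta(hours=72),
    }
    if commissioner_notified is not None:
        # without unnecessary delay and within seven days of notifying the Commissioner
        deadlines["data_subjects"] = commissioner_notified + timedelta(days=7)
    return deadlines
```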

What are the other key obligations related to personal data breaches?

A data controller should:

  • DPA:  contractually obligate its data processor to promptly notify it of a data breach and to provide it with all reasonable and necessary assistance to meet its data breach notification obligations;
  • Management and response plans: put in place adequate data breach management and response plans;
  • Training: conduct periodic training as well as awareness and simulation exercises to prepare its employees for responding to personal data breaches;
  • Breach assessment and containment: act promptly as soon as it becomes aware of any personal data breach to assess, contain, and reduce the potential impact of the data breach, including taking certain containment actions (such as isolating compromised systems) and identifying certain details about the data breach in its investigation; and
  • Record-keeping: maintain a register of the personal data breach for at least two years to document the prescribed information about the data breach.

DPO Guideline

Who is required to appoint a DPO?

An organisation, in the role of either a data controller or a data processor, is required to appoint a DPO if its processing of personal data involves:

  • personal data of more than 20,000 data subjects;
  • sensitive personal data including financial information of more than 10,000 data subjects; or
  • activities that require “regular and systematic monitoring” of personal data.
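
A rough way to visualise these triggers is the check below (illustrative only; whether monitoring is "regular and systematic" is a qualitative judgement that cannot really be reduced to a flag).

```python
def dpo_required(total_data_subjects: int,
                 sensitive_data_subjects: int,
                 regular_systematic_monitoring: bool) -> bool:
    """Illustrative check of the DPO Guideline appointment thresholds."""
    return (
        total_data_subjects > 20_000
        # sensitive personal data, including financial information
        or sensitive_data_subjects > 10_000
        or regular_systematic_monitoring
    )
```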

Who can be appointed as a DPO?

DPOs may be appointed from among existing employees or through outsourcing services based on a service contract. They must:

  • Expertise: demonstrate a sound level of prescribed skills, qualities and expertise;
  • Language: be proficient in both Malay and English languages; and
  • Residency: be either resident in Malaysia or easily contactable via any means.

What are the other key obligations related to DPO appointments?

A data controller required to appoint a DPO should:

  • Notification: notify the Commissioner of the appointed DPO and their business contact information within 21 days of the DPO appointment;
  • Publication: publish the business contact information of its DPO through:
      • its website and other official media;
      • its personal data protection notices; or
      • its security policies and guidelines; and
  • Record-keeping: maintain records of the appointed DPO to demonstrate compliance.

A data processor required to appoint a DPO should comply with the publication and record-keeping obligations above in relation to its DPO.

Next Steps

The new guidelines represent a significant step in the implementation of the newly introduced data breach notification and DPO appointment requirements. All organisations subject to the PDPA, whether data controllers or processors, should carefully review the guidelines and take steps to ensure compliance by 1 June 2025. This includes updating relevant internal policies (such as data breach response plans and record-keeping and training policies) and contracts with data processors to align with the guidelines. Additionally, organisations should assess whether a DPO appointment is necessary and, if so, be prepared to complete the appointment and notification processes and update their privacy notices, websites and other media to include DPO information.

CHINA: Mandatory Data Protection Compliance Audits from 1 May 2025
https://privacymatters.dlapiper.com/2025/02/china-mandatory-data-protection-compliance-audits-from-1-may-2025/ (20 February 2025)

Chinese data regulators are intensifying their focus on the data protection compliance audit obligations under the Personal Information Protection Law ("PIPL"), with the release of the Administrative Measures for Personal Information Protection Compliance Audits ("Measures"), effective 1 May 2025.

The Measures outline the requirements and procedures for both self-initiated and regulator-requested compliance audits.

(Interestingly, they also clarify some other PIPL obligations, such as the data volume threshold for appointing a DPO as well as the necessity of separate consent for some processing activities.)

Who must conduct data protection compliance audits, and when?

The Measures require a data controller processing personal data of more than 10 million individuals to conduct a self-initiated compliance audit of its personal data processing activities ("Self-Initiated Audits") at least once every two years.

Data controllers below this volume threshold should still conduct Self-Initiated Audits on a regular basis as is already prescribed under the PIPL, as a matter of good governance.

In addition, the CAC or other data regulators may instruct any data controller to conduct an audit (“Regulator-Requested Audits“):

  1. when personal data processing activities are found to involve significant risks, including serious impact on individuals’ rights and interests or a serious lack of security measures;
  2. when processing activities may infringe upon the rights and interests of a large number of individuals; or
  3. following a data security incident involving the leakage, tampering, loss, or damage of personal information of one million or more individuals, or sensitive personal information of 100,000 or more individuals.
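
As a rough illustration of the volume-based triggers only (the risk-based triggers in items 1 and 2 above require qualitative assessment), a controller might screen its position as follows; the thresholds come from the Measures as described above, and everything else is an assumption.

```python
def audit_position(individuals_processed: int,
                   incident_individuals: int = 0,
                   incident_sensitive_individuals: int = 0) -> dict:
    """Illustrative screening of the Measures' volume-based audit triggers."""
    return {
        # Self-Initiated Audit at least once every two years above 10 million individuals
        "self_initiated_audit_every_two_years": individuals_processed > 10_000_000,
        # One trigger for a Regulator-Requested Audit: an incident affecting 1 million or
        # more individuals, or sensitive personal data of 100,000 or more individuals
        "incident_may_trigger_regulator_requested_audit": (
            incident_individuals >= 1_000_000
            or incident_sensitive_individuals >= 100_000
        ),
    }
```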

The audit report for Regulator-Requested Audits must be submitted to the regulator. The regulator may request data controllers to undertake rectification steps, and a subsequent rectification report must be provided to the regulator within 15 business days of completing the rectification steps.

Data controllers may, if they wish or when requested by the regulator, engage an accredited third party to conduct the audit (but the third party and its affiliates must not conduct more than three such audits in total for the same organisation).  

DPOs of data controllers processing personal data of more than one million individuals are responsible for overseeing the audit activities.

Key elements to be audited

The Measures outline a detailed set of key elements to be audited, which offer valuable insight into the compliance steps regulators expect from controllers under the PIPL, and will help organisations to scope their audits. Unsurprisingly, these elements cover every facet of PIPL compliance, spanning the whole data lifecycle. They include: lawful bases, notice and consent, joint controllership, sharing or disclosing personal data, cross-border data transfers, automated decision-making, image collection/identification equipment, processing publicly available personal data, processing sensitive personal data, retention and deletion, data subject right requests, internal data governance, data incident response, privacy training, Important Platform Providers' platform rules and CSR reports, etc.
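
For audit scoping, it can help to track these elements as a simple checklist. The structure below is merely one possible way to do that; the status fields are placeholders, not anything prescribed by the Measures.

```python
# Illustrative audit-scoping checklist based on the elements listed above.
AUDIT_ELEMENTS = [
    "lawful bases", "notice and consent", "joint controllership",
    "sharing or disclosing personal data", "cross-border data transfers",
    "automated decision-making", "image collection/identification equipment",
    "processing publicly available personal data", "processing sensitive personal data",
    "retention and deletion", "data subject right requests", "internal data governance",
    "data incident response", "privacy training",
]

audit_scope = {element: {"in_scope": True, "owner": None, "status": "not started"}
               for element in AUDIT_ELEMENTS}
```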

UK: Google's U-Turn on Device Fingerprinting: ICO's Response and Subsequent Guidance
https://privacymatters.dlapiper.com/2025/01/googles-u-turn-on-device-fingerprinting-icos-response-and-subsequent-guidance/ (30 January 2025)

In December 2024, the Information Commissioner's Office (ICO) responded to Google's decision to lift its prohibition on device fingerprinting (which involves collecting and combining information about a device's software and hardware for the purpose of identifying the device) for organisations using its advertising products, effective from 16 February 2025 (see an overview of Google's new Ads Platforms policies here). This follows Google's earlier decision in July 2024 to retain third-party cookies.
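
For readers less familiar with the technique, a device fingerprint is typically derived by combining and hashing device and browser attributes. The sketch below is a conceptual illustration only; the attribute names are assumptions rather than anything drawn from Google's or the ICO's materials.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Combine device/browser attributes into a single stable identifier."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

fingerprint = device_fingerprint({
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen_resolution": "1920x1080",
    "timezone": "Europe/London",
    "installed_fonts": ["Arial", "Calibri", "Segoe UI"],
})
```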

In its response, the ICO criticized Google’s decision to permit device fingerprinting for advertising purposes as “irresponsible” and emphasised that device fingerprinting:

  1. Requires Consent: device fingerprinting enables devices to be identified even where cookies are blocked or the location is disguised, hence its common use for fraud prevention purposes, but the ICO reinforced that it is subject to the usual consent requirements.
  2. Reduces User Control: Despite various browsers now offering “enhanced” tracking protection, the ICO stated that device fingerprinting is not a fair means of tracking users online as it diminishes people’s choice and control over how their information is collected.

This statement echoes concerns previously voiced by Google itself, which had stated that device fingerprinting "subverts user choice and is wrong".

With the potential for fingerprinting to replace the long-debated third-party (3P) cookie functionality, this statement forms part of a shift in regulatory focus to technologies beyond cookies. Various technologies have recently received greater scrutiny, both in the ICO's Draft Guidance on the use of storage and access technologies ("ICO's Draft Guidance") – interestingly issued in December 2024 to coincide with the Google update – and in the European Data Protection Board (EDPB) Guidelines 2/2023 on the Technical Scope of Art. 5(3) of the ePrivacy Directive.

ICO Draft Guidance: Key Takeaways

The ICO's Draft Guidance explores the practical application of the Privacy and Electronic Communications Regulations (PECR) requirement that consent must be obtained from the user for any storage or access of information on/from a device ('terminal equipment'), unless such storage/access is strictly necessary for the purposes of a communication or to provide a service requested by the user.

In particular, the Draft Guidance addresses the following areas which are explored further in their respective sections below:

Technologies

The ICO's Draft Guidance looks at how and why the rules relating to storage and access of device information apply to various types of technologies used in web browsers, mobile apps or connected devices, namely: cookies; tracking pixels; link decoration and navigational tracking; web storage; scripts and tags; and fingerprinting techniques. The technologies focused on by the ICO overlap to a large extent with the examples used by the EDPB in its guidelines. However, taking the analysis on pixels as an example, the EDPB suggests that any distribution of tracking links/pixels to the user's device (whether via websites, emails, or text messaging systems) is subject to Article 5(3) of the ePrivacy Directive as it constitutes 'storage', even if only temporarily via client-side caching. The ICO's guidance is less clear, suggesting that tracking pixels are only subject to Regulation 6 of PECR when they store information on the user's device. This might imply a less expansive view compared to the EDPB, highlighting the importance of remaining alive to jurisdictional nuances for any global tracking campaigns.

Detailed Consent Requirements

The ICO reiterates that for a PECR consent to be valid, it must meet UK GDPR standards (freely given, specific, informed and unambiguous statement of the individual’s wishes indicated by a clear affirmative action).

    The ICO highlights that, where personal data is processed, consent must be provided by the data subject (in contrast with the PECR user/subscriber consent requirement). This tension is not new, but it remains unclear how the party collecting cookie consent for personal data processed via cookies (or a similar technology) is supposed to know whether the user of a device has changed without either requiring re-consent or user identification on each visit, or carrying out background identification using fingerprinting or similar techniques – which would involve more data processing and may itself be intrusive.

    In line with recent ICO statements in relation to the lack of ‘reject all’ options, the ICO emphasises that subscribers/users must be able to refuse the use of storage and access technologies as easily as they can consent. Additional points of interest for controllers include:

    • That users must have control over any use of non-essential storage and access technologies. While this could, on a conservative reading, be interpreted as needing US-style granular per-cookie consent, the examples provided suggest high-level consent mechanisms expressed per category (e.g., analytics, social media tracking, marketing) are still acceptable;
    • Clarification that you must specifically name any third parties whose technologies you are requesting consent to (this information can be provided in a layered fashion provided this is very clear). However, if controls are not required at an individual cookie level, which seems to be the case, then this becomes less meaningful for data subjects, who cannot act on this additional information: they only have the choice of rejecting all storage and access technologies for each purpose category (e.g. all analytics cookies/technologies) rather than rejecting a specific third party; and
    • Clarification that users must be provided with controls over any use of storage and access technologies for non-essential purposes (albeit this was arguably already required in order to facilitate withdrawal of consent/changing of preferences on an ongoing basis).
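
    Taken together, these points suggest a consent model along the following lines: per-purpose-category choices, with the third parties behind each category named, and refusal as easy as acceptance. The sketch below is purely illustrative and the third-party names are invented.

```python
# Illustrative per-category consent record; the company names are invented.
consent_state = {
    "strictly_necessary": {"consent_required": False, "third_parties": []},
    "analytics": {"consent_required": True, "granted": False,
                  "third_parties": ["ExampleAnalyticsCo"]},
    "marketing": {"consent_required": True, "granted": False,
                  "third_parties": ["ExampleAdNetwork", "ExampleSocialPlugin"]},
}

def set_category_choice(category: str, granted: bool) -> None:
    """Accepting and refusing should take the same single action."""
    consent_state[category]["granted"] = granted
```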

    Exemptions to consent: Strictly Necessary

    Leaving aside technologies necessary for communications, the ICO emphasises that the “strictly necessary” exemption applies when the purpose of the storage or access is essential to provide the service the subscriber or user requests. Helpfully, the ICO Draft Guidance clarifies that technologies used to comply with applicable law e.g. meeting security requirements, can be regarded as “strictly necessary”, such that no consent is required. This will not apply if there are other ways that you can comply with this legislation without using cookies or similar technologies.

    Other examples of activities likely to meet the exemption include: (i) ensuring the security of terminal equipment; (ii) preventing or detecting fraud; (iii) preventing or detecting technical faults; (iv) authenticating the subscriber or user; and (v) recording information or selections made on an online service.

    One area of ambiguity remains in relation to fraud prevention and detection. In the financial services sector, websites/apps often use third-party fingerprinting for fraud detection (in order to meet legal obligations to ensure the security of their services). 'Preventing or detecting fraud' is listed as an example of an activity likely to meet the exemption, whilst third-party fingerprinting for fraud prevention is used by the ICO as an example of an activity subject to Regulation 6 of PECR, with the implication that consent is needed (albeit this is not stated explicitly). However, the DUA Bill (if passed in its current form) provides some helpful clarity here, as it states that use of such technologies should be regarded as "strictly necessary" where used to protect information, for security purposes, to prevent or detect fraud or technical faults, to facilitate automatic authentication, or to maintain a record of selections made by the user.

    Interestingly, the guidance suggests that the use of social media plugins/tools by logged-in users might be strictly necessary, though this does not extend to logged-out users, users who are not a member of that network, or any associated tracking.

    Governance and compliance

    A number of the ICO’s clarifications are likely to impact day to day resourcing and operations for any organisation using material numbers of storage and access technologies:

    • Governance: the ICO emphasises what it expects in respect of governance of storage and access requirements, including an audit checklist, emphasising the need to regularly audit the use of such technologies and ensure that the rest of the consent ecosystem (including transparency, consent, data sharing, and subsequent processing) is consistent and up to date. This is likely to be resource intensive, and few organisations will be set up for this level of assurance.
    • Transparency:  The ICO guidance reinforces the need for transparency around whether any third parties will store/access information on the user’s device or receive this information, making clear that all third parties providing cookies or receiving data must be named (avoiding ambiguous references to “partners” or “third parties.”), and that specific information must be provided about each, taking into account UK GDPR considerations where personal data is processed. This will be a considerable challenge for complex ecosystems, most notably in the context of online advertising (albeit this has been a known challenge for some time).
    • Consent Ecosystem: The guidance makes very clear that a process must be in place for passing on when a user withdraws their consent. In practice, the entity collecting the consent is responsible for informing third parties when consent is no longer valid. This is crucial but challenging to comply with, and is again perhaps most relevant in the context of online advertising. 
    • Subsequent Processing: as it has done in the past, the ICO continues to strongly suggest that any subsequent processing of personal data obtained via storage/access technologies on the basis of consent should also be based on consent, going as far as to suggest that reliance on an alternative lawful basis (e.g. legitimate interest) may invalidate any initial consent received.
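
    On the consent-ecosystem point, the guidance requires an outcome (third parties are told when consent is withdrawn) but does not prescribe a mechanism. A minimal sketch of one possible approach is below; the callback registry and the third-party name are assumptions, not anything taken from the guidance.

```python
from typing import Callable, Dict

# Invented registry of notification callbacks, one per named third party.
third_party_callbacks: Dict[str, Callable[[str, str], None]] = {
    "ExampleAnalyticsCo": lambda user, category: print(
        f"notifying ExampleAnalyticsCo: consent withdrawn for {user}/{category}"),
}

def withdraw_consent(user_id: str, category: str, consent_state: dict) -> None:
    """Record the withdrawal locally, then pass it on to every named third party."""
    consent_state.setdefault(category, {})["granted"] = False
    for _name, notify in third_party_callbacks.items():
        notify(user_id, category)
```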

    Conclusion

    As device fingerprinting and other technologies evolve, it is crucial for organisations to stay informed, ensure compliance with the latest guidance, and remain alert to the nuances between EU and UK regulation.

    The ICO’s Draft Guidance provides helpful clarity on existing rules in the UK, including detailed examples of how to conduct cookie audits, but does not otherwise provide practical guidance on how to overcome many of the operational privacy challenges faced by controllers (such as monitoring changing users and managing consent withdrawals within online advertising ecosystems).

    With increasing regulatory commentary and action in this space, including the ICO’s most recent announcement regarding its focus on reviewing cookie usage on the biggest UK sites, now is the time to take stock of your tracking technologies and ensure compliance!

    The ICO’s Draft Guidance is currently open for consultation, with input sought by 5pm on Friday 14th March 2025. If you have any questions or would like to know more, please get in touch with your usual DLA contact.

    EU: EDPB Opinion on AI Provides Important Guidance though Many Questions Remain
    https://privacymatters.dlapiper.com/2025/01/eu-edpb-opinion-on-ai-provides-important-guidance-though-many-questions-remain/ (14 January 2025)

    A much-anticipated Opinion from the European Data Protection Board (EDPB) on AI models and data protection has not resulted in the clear or definitive guidance that businesses operating in the EU had hoped for. The Opinion emphasises the need for case-by-case assessments to determine GDPR applicability, highlighting the importance of accountability and record-keeping, while also flagging 'legitimate interests' as an appropriate legal basis under specific conditions. In rejecting the proposed Hamburg thesis, the EDPB has stated that AI models trained on personal data should be considered anonymous only if personal data cannot be extracted or regurgitated.

    Introduction

    On 17 December 2024, the EDPB published a much-anticipated Opinion on AI models and data protection.  The Opinion includes the EDPB’s view on the following key questions: does the development and use of an AI model involve the processing of personal data; and if so, what is the correct legal basis for that processing?

    As is sometimes the case with EDPB Opinions, which necessarily represent the consensus view of the supervisory authorities of 27 different Member States, the Opinion does not provide many clear or definitive answers.  Instead, the EDPB offers indicative guidance and criteria, calling for case-by-case assessments of AI models to understand whether, and how, they are impacted by the GDPR.  In this context, the Opinion repeatedly highlights the importance of accountability and record-keeping by businesses developing or using AI, so that the applicability of data protection laws, and the business’ compliance with those laws, can be properly assessed. 

    Whilst the equivocation of the Opinion might be viewed as unhelpful by European businesses looking for regulatory certainty, it is also a reflection of the complexities inherent in this intersection of law and technology.

    In summary, the answers given by the EDPB to the four questions in the Opinion are as follows:

    1. Can an AI model, which has been trained using personal data, be considered anonymous?  Yes, but only in some cases.  It must be impossible, using all means reasonably likely to be used, to obtain personal data from the model, either through attacks which aim to extract the original training data from the model itself, or through interactions with the AI model (i.e., personal data provided in responses to prompts / queries). 
    2. Is ‘legitimate interests’ an appropriate legal basis for the training and development of an AI model? In principle yes, but only where the processing of personal data is necessary to develop the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  In particular, the issue of data minimisation, and the related issue of web-scraping / indiscriminate capture of data, will be relevant here. 
    3. Is ‘legitimate interests’ an appropriate legal basis for the deployment of an AI model? In principle yes, but only where the processing of personal data is necessary to deploy the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  Here, the impact on the data subject of the use of the AI model is of predominant importance.
    4. If an AI Model has been found to have been created, updated or developed using unlawfully processed personal data, how does this impact the subsequent use of that AI model?  This depends in part on whether the AI model was first anonymised before being disclosed to the deployer of that model (see Question 1).  Otherwise, the deployer of the model may need to assess the lawfulness of the development of the model as part of its accountability obligations.

    Background

    The Opinion was issued by the EDPB under Article 64 of the GDPR, in response to a request from the Irish Data Protection Commission.  Article 64 requires the EDPB to publish an opinion on matters of ‘general application’ or which ‘produce effects in more than one Member State’. 

    In this case, the Irish DPC asked the EDPB to provide an opinion on the above-mentioned questions – a request that is not surprising given the general importance of AI models to businesses across the EU, but also in light of the large number of technology companies developing those models who have established their European operations in Ireland. 

    In order to understand the Opinion, it helps to be familiar with certain concepts and terminology relating to AI. 

    First, the Opinion distinguishes between an 'AI system' and an 'AI model'. For the former, the EDPB relies on the definition given in the EU AI Act. In short: a machine-based system operating with some degree of autonomy that infers, from inputs, how to produce outputs such as predictions, content, recommendations, or decisions. An AI model, meanwhile, is a component part of an AI system. Colloquially, it is the 'brain' of the AI system – an algorithm, or series of algorithms (such as in the form of a neural network), that recognises patterns in data. AI models require the addition of further components, such as a user interface, to become AI systems. To take a common example – the generative AI system known as ChatGPT is a software application comprised of an AI model (the GPT Large Language Model) connected to a chatbot-style user interface that allows the user to submit queries (or 'prompts') to the model in the form of natural language questions. Whilst the Opinion is notionally concerned only with AI models, at times the Opinion appears to blur the distinction between the model and the system, in particular when discussing the significance of model outputs that are only rendered comprehensible to the user through an interface that sits outside of the model.

    Second, the Opinion relies on an understanding of a typical ‘AI lifecycle’, pursuant to which an AI model is first developed by training the model on large volumes of data.  This training may happen in a number of phases which become increasingly refined (referred to as ‘fine-tuning’). Only after an AI model is developed can it be used, or ‘deployed’, in a live setting, as part of an AI system.  Often, the developer of an AI model will not be the same person as the deployer.  This is relevant because the Opinion variously addresses both development and deployment phases.

    The significance of the ‘Hamburg thesis’

    With respect to the key question of whether AI models can be considered anonymous, the Opinion follows in the wake of a much-discussed paper published in July 2024 by the data protection authority for the German state of Hamburg.  The paper took the position that AI models (specifically, Large Language Models) are, in isolation, anonymous – they do not involve the processing of personal data. 

    In order to reach that conclusion, the paper decoupled the model itself from: (i) the prior training of the model (which may involve the collection and further processing of personal data as part of the training dataset); and (ii) the subsequent use of the model, whereby a prompt/input may contain personal data, and an output may be used in a way that means it constitutes personal data.

    Looking only at the AI model itself, the paper decided that the tokens and values which make up the ‘inner workings’ of a typical AI model do not, in any meaningful way, relate to or correspond with information about identifiable individuals.  Consequently, the model itself was found to be anonymous, even if the development and use of the model involves the processing of personal data. 

    The Hamburg thesis was welcomed for several reasons, not least because it resolved difficult questions such as how data subject rights could be understood in relation to an AI model (if someone asks for their personal data to be deleted, then what can this mean in the context of an AI model?), and the question of the lawful basis for ‘storing’ personal data in an AI model (as distinct from the lawful basis for collecting and preparing data to train the model).

    However, as we go on to explain, the EDPB Opinion does not follow the relatively simple and certain framework presented by the Hamburg thesis.  Instead, it introduces uncertainty by asserting that there are, in fact, scenarios where an AI model contains personal data, but that this must be determined on a case-by-case basis.

    Are AI models anonymous?

    First, the Opinion is only concerned with AI models that have been trained using personal data.  Therefore, AI models trained using solely non-personal data (such as statistical data, or financial data relating to businesses) can, for the avoidance of doubt, be considered anonymous.  However, in this context the broad scope of ‘personal data’ under the GDPR must be remembered, and the Opinion does not suggest any de minimis level of personal data that needs to be involved in the training of the AI model for the question of GDPR applicability to arise.

    Where personal data is used in the training phase, the next question is whether the model is specifically designed to provide personal data regarding individuals whose personal data were used to train the model.  If so, the AI model will not be anonymous.  For example, an AI model that is trained to provide a user, on request, with biographical information and contact details for directors of public companies, or a generative AI model that is trained on the voice recordings of famous singers so that it can, in turn, mimic the voices of those singers.  In each case, the model is trained on personal data of specific individuals, in order to be able to produce other personal data about those individuals as an output. 

    Finally, there is the intermediary case of AI models that are trained on personal data, but that are not designed to provide personal data related to the training data as an output.  It is this use case that the Opinion focuses on.  The conclusion is that AI models in this category may be anonymous, but only if the developer of the model can demonstrate that information about individuals whose personal data was used to train the model cannot be ‘obtained from’ the model, using all means reasonably likely to be used.  Notwithstanding that personal data used for training the model no longer exists within the model in its original form (but rather it is “represented through mathematical objects“), that information is, in the eyes of the EDPB, still capable of constituting personal data.

    The following question then arises: how does someone ‘obtain’ personal data from an AI model? In short, the Opinion posits two possibilities.  First, that training data is ‘extracted’ via deliberate attacks.  The Opinion refers to an evolving field of research in this area and makes reference to techniques such as ‘model inversion’, ‘reconstruction attacks’, and ‘attribute and membership inference’.  These are techniques that can be deployed to trick the model into revealing training data, or otherwise reconstruct that training data, in some cases relying on privileged access to the model itself.  Second, is the risk of accidental or inadvertent ‘regurgitation’ of personal data as part of an AI model’s outputs. 

    Consequently, a developer must be able to demonstrate that its AI model is resistant both to attacks that extract personal data directly from the model, as well as to the risk of regurgitation of personal data in response to queries:  “In sum, the EDPB considers that, for an AI model to be considered anonymous, using reasonable means, both (i) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to train the model; as well as (ii) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant for any data subject“. 

    Which criteria should be used to evaluate whether an AI model is anonymous?

    Recognising the uncertainty in its conclusion that the AI models may or may not be anonymous, the EDPB provides a list of criteria that can be used to assess the likelihood of a model being found to contain personal data.  These include:

    • Steps taken to avoid or limit the collection of personal data during the training phase.
    • Data minimisation or masking measures (e.g., pseudonymisation) applied to reduce the volume and sensitivity of personal data used during the training phase.
    • The use of methodologies during model development that reduce privacy risks (e.g., regularisation methods to improve model generalisation and reduce overfitting, and appropriate and effective privacy-preserving techniques, such as differential privacy).
    • Measures that reduce the likelihood of obtaining personal data from queries (e.g., ensuring the AI system blocks the presentation to the user of outputs that may contain personal data).
    • Document-based audits (internal or external) undertaken by the model developer that include an evaluation of the chosen measures and of their impact to limit the likelihood of identification.
    • Testing of the model to demonstrate its resilience to different forms of data extraction attacks.
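
    On the last point, one simple (and deliberately toy) way to probe extractability is a membership-inference-style test: if a model behaves measurably differently on records it was trained on than on unseen records, an attacker may be able to infer who was in the training data. The sketch below uses scikit-learn on synthetic data and is far cruder than the techniques referred to in the Opinion; it is an illustration of the idea, not a recommended assurance methodology.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy membership-inference check on synthetic data: noticeably higher confidence
# on training ("member") records than on unseen records is one signal that
# information about the training data may be extractable.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_member, y_member)

def true_label_confidence(model, X, y):
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]  # probability assigned to the true label

gap = (true_label_confidence(model, X_member, y_member).mean()
       - true_label_confidence(model, X_nonmember, y_nonmember).mean())
print(f"member vs non-member confidence gap: {gap:.3f}")  # larger gap = higher risk
```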

    What is the correct legal basis for AI models?

    When using personal data to train an AI model, the preferred legal basis is normally the ‘legitimate interests’ of the controller, under Article 6(1)(f) GDPR. This is for practical reasons. Whilst, in some circumstances, it may be possible to obtain GDPR-compliant consent from individuals authorising the use of their data for AI training purposes, in most cases this will not be feasible. 

    Helpfully, the Opinion accepts that legitimate interests is, in principle, a viable legal basis for processing personal data to train an AI model. Further, the Opinion also suggests that it should be straightforward for businesses to identify a lawful legitimate interest. For example, the Opinion cites “developing an AI system to detect fraudulent content or behaviour” as a sufficiently precise and real interest. 

    However, where businesses may have more difficulty is in showing that the processing of personal data is necessary to realise their legitimate interest, and that their legitimate interest is not outweighed by any impact on the rights and freedoms of data subjects (the ‘balancing test’). Whilst this is fundamentally just a restatement of existing legal principles, the following sentence should nevertheless cause some concern for businesses developing AI models, in particular Large Language Models: “If the pursuit of the purpose is also possible through an AI model that does not entail processing of personal data, then processing personal data should be considered as not necessary“. Technically speaking, it may often be the case that personal data is not essential for the training of an AI model – however, this does not mean that it is straightforward to systematically remove all personal data from a training dataset, or otherwise replace all identifying elements with ‘dummy’ values. 

    With respect to the balancing test, the EDPB asks businesses to consider a data subject’s interest in self-determination and in maintaining control over their own data when considering whether it is lawful to collect personal data for model training purposes.  In particular, it may be more difficult to satisfy the balancing test if a developer is scraping large volumes of personal data (especially including any sensitive data categories) against their wishes, without their knowledge, or otherwise in contexts that would not be reasonably expected by the data subject. 

    When it comes to the separate purpose of deploying an AI model, the EDPB asks businesses to consider the impact on the data subject's fundamental rights that arises from the purpose for which the AI model is used.  For example, AI models that are used to block content publication may adversely affect a data subject's fundamental right to freedom of expression.  However, conversely, the EDPB recognises that the deployment of AI models may have a positive impact on a data subject's rights and freedoms (for example, an AI model that is used to improve accessibility to certain services for people with disabilities). In line with Recital 47 GDPR, the EDPB reminds controllers to consider the 'reasonable expectations' of data subjects in relation to both training and deployment uses of personal data.

    Finally, the Opinion discusses a range of ‘mitigating measures’ that may be used to reduce risks to data subjects and therefore tip the balancing test in favour of the controller.  These include:

    • Technical measures to reduce the volume or sensitivity of personal data at use (e.g., pseudonymisation, masking).
    • Measures to facilitate the exercise of data subject rights (e.g., providing an unconditional right for data subjects to opt-out of the use of their personal data for training or deploying the model; allowing a reasonable period of time to elapse between collection of training data and its use).
    • Transparency measures (e.g., public communications about the controller’s practices in connection with the use of personal data for AI model development).
    • Measures specific to web-scraping (e.g., excluding publications that present particular risks; excluding certain data categories or sources; excluding websites that clearly object to web scraping).

    Notably, the EDPB observes that, to be effective, these mitigating measures must go beyond mere compliance with GDPR obligations (for example, providing a GDPR compliant privacy notice, which a controller would in any case be required to do, would not be an effective transparency measure for these purposes). 
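
    As an illustration of the first category of measures listed above, pseudonymisation can be as simple as replacing direct identifiers with keyed pseudonyms before data is used for training. The sketch below is a minimal illustration under our own assumptions, not an approach prescribed by the Opinion, and the key handling shown is deliberately naive.

```python
import hashlib
import hmac

# Illustrative keyed pseudonymisation of a direct identifier before training.
# A real deployment would manage and rotate the key in a key-management system.
SECRET_KEY = b"store-me-in-a-key-management-system"

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

training_record = {"user": pseudonymise("jane.doe@example.com"), "age_band": "30-39"}
```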

    When are companies liable for non-compliant AI models?

    In its final question, the DPC sought clarification from the EDPB on how a deployer of an AI model might be impacted by any unlawful processing of personal data in the development phase of the AI model. 

    According to the EDPB, such ‘upstream’ unlawful processing may impact a subsequent deployer of an AI model in the following ways:

    • Corrective measures taken against the developer may have a knock-on effect on the deployer – for example, if the developer is ordered to delete personal data unlawfully collected for training purposes, the deployer would not be allowed to subsequently process this data. However, this raises an important practical question about how such data could be identified in, and deleted from, the AI model, taking into account the fact that the model does not retain training data in its original form.
    • Unlawful processing in the development phase may impact the legal basis for the deployment of the model – in particular, if the deployer of the AI model is relying on ‘legitimate interests’, it will be more difficult to satisfy the balancing test in light of the deficiencies associated with the collection and use of the training data.

    In light of these risks, the EDPB recommends that deployers take reasonable steps to assess the developer’s compliance with data protection laws during the training phase.  For example, can the developer explain the sources of data used, steps taken to comply with the minimisation principle, and any legitimate interest assessments conducted for the training phase?  For certain AI models, the transparency obligations imposed in relation to AI systems under the AI Act should assist a deployer in obtaining this information from a third party AI model developer. While the opinion provides a useful framework for assessing GDPR issues with AI systems, businesses operating in the EU may be frustrated with the lack of certainty or definitive guidance on many key questions relating to this new era of technology innovation.

    CHINA: Draft Regulation on Certification for Cross-Border Data Transfers Published
    https://privacymatters.dlapiper.com/2025/01/7523/ (14 January 2025)

    On 3 January 2025, the Cyberspace Administration of China (“CAC“) released for public consultation the draft Measures for Certification of Personal Information Protection for Cross-Border Transfer of Personal Information (“Draft Measures“). This regulation represents the final piece in the CAC’s regulatory framework for the three routes to legitimize cross-border transfers of personal data outside of China (“CBDTs“).

    To recap, Chinese law requires data controllers to take one of the following three routes to legitimize CBDTs, unless they qualify for specific exemptions under the Provisions on Promoting and Regulating Cross-Border Data Flows (click here for our summary, “Provisions“) or local rules:

    • CAC security assessment;
    • Standard Contractual Clauses (“SCCs“) filing; or
    • CAC-accredited certification.

    If enacted, the Draft Measures will provide significant clarity regarding the certification route, offering data controllers both within and outside of China a viable option for compliance of CBDTs. Below is a practical guide to the key provisions of the Draft Measures, along with our recommendations for data controllers engaged in CBDTs in light of this new regulation.

    Who can utilise the certification route?

    Data controllers in China: In alignment with the conditions outlined in the Provisions, the Draft Measures reiterate that a data controller in China may pursue the certification route if:

    • the data controller is not a critical information infrastructure operator (“CIIO“);
    • no important data is transferred outside of China; and
    • it has cumulatively transferred non-sensitive personal data of 100,000-1,000,000 individuals or sensitive personal data of less than 10,000 individuals outside of China since the beginning of the year.

    It is worth noting that these conditions are the same as those for taking the SCCs filing route, making the certification route an effective alternative to the SCCs filing route for data controllers in China.
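
    Expressed as a rough screening check (illustrative only, and ignoring the exemptions available under the Provisions and local rules), the conditions for data controllers in China look like this:

```python
def certification_route_available(is_ciio: bool,
                                  transfers_important_data: bool,
                                  nonsensitive_individuals_ytd: int,
                                  sensitive_individuals_ytd: int) -> bool:
    """Illustrative check of the Draft Measures' conditions for controllers in China."""
    volume_in_band = (
        # non-sensitive personal data of 100,000-1,000,000 individuals since 1 January
        100_000 <= nonsensitive_individuals_ytd <= 1_000_000
        # or sensitive personal data of fewer than 10,000 individuals
        or 0 < sensitive_individuals_ytd < 10_000
    )
    return (not is_ciio) and (not transfers_important_data) and volume_in_band
```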

    Overseas data controllers: The certification route is also available to data controllers outside of China that fall under the extraterritorial jurisdiction of the Personal Information Protection Law (“PIPL“), i.e. those processing personal data of residents in China to provide products or services to them or analyze or evaluate their behavior.

    The Draft Measures do not specify the volume threshold or other conditions for overseas data controllers to take the certification route. It remains to be clarified whether overseas data controllers with a limited scope of CBDTs (e.g. those not reaching the volume threshold for data controllers in China as outlined above) can be exempted from obtaining certification or following the other legitimizing routes.

    From which certification bodies can a data controller obtain the certification?

    Certification bodies that have received approval from the State Administration for Market Regulation (“SAMR“) and have completed a filing process with the CAC are qualified to issue the CBDT certification.

    What are the evaluation criteria for the certification?

    The evaluation for the certification will focus on the following aspects:

    • the legality, legitimacy and necessity of the purposes, scope and methods of the CBDT;
    • the impact of the personal data protection laws and policies and network and data security environment of the country/region where the overseas data controller/recipient is located on the security of the transferred personal data;
    • whether the overseas data controller/recipient’s level of personal data protection meets the requirements under Chinese laws, regulations and mandatory national standards;
    • whether the legally binding agreement between the data controller and the overseas data recipient imposes obligations for personal data protection;
    • whether the organizational structure, management system, and technical measures of the data controller and the overseas data recipient can adequately and effectively ensure data security and protect individuals’ rights and interests regarding their personal data; and
    • other aspects deemed necessary by certification bodies according to relevant standards for personal information protection certification.

    Are there special requirements for overseas data controllers pursuing certification?

    Yes. An overseas data controller governed by the PIPL seeking certification must submit the application with the assistance of its dedicated institution or designated representative located in China (the presence of which is a requirement under the PIPL).

    The Draft Measures also make it clear that overseas data controllers must, like data controllers in China, assume legal responsibilities associated with certification processes, undertake to comply with relevant Chinese data protection laws and regulations, and be subject to the supervision by Chinese regulators and certification bodies.

    How are certification processes and results supervised?

    The Draft Measures grant supervisory powers to both the SAMR and the CAC. They can conduct random checks on certification processes and results; and evaluate certification bodies. Certified data controllers will also be under continuous supervision by their certification bodies.

    If a certified data controller is found to no longer meet the certification requirements (e.g. the actual scope of the CBDT is inconsistent with that specified in the certification), the certification will be suspended or revoked, and the suspension or revocation will be made public.

    Are there ancillary rules and standards on the horizon?

    Probably yes. The Draft Measures indicate that the CAC will collaborate with relevant regulators to formulate standards, technical regulations, and conformity assessment procedures for CBDT certification and work alongside the SAMR to develop implementation rules and unified certificates and marks for CBDT certification.

    Is the certification likely to be recognised in other jurisdictions?

    Probably yes. According to the Draft Measures, China will facilitate mutual recognition of personal information protection certification with other countries, regions, and international organizations.

    Recommendations

    As discussed, the Draft Measures make available a tangible certification route to legitimize CBDTs for data controllers both within and outside of China. Data controllers should carefully evaluate and choose between the three legitimizing routes when engaging in CBDTs, considering their respective pros and cons and suitability for the controllers’ specific patterns of CBDTs. For example, the certification route may be advantageous for complex CBDTs among multiple parties where signing of SCCs is challenging. To make well-informed decisions, data controllers engaged in CBDTs are recommended to closely monitor developments related to the Draft Measures in the months following the conclusion of the public consultation period on 3 February 2025, and remain vigilant for any release of ancillary rules and standards. This is particularly necessary because some important details about the certification route, such as the validity period of the certification and any thresholds for overseas data controllers to take the certification route, remain unclear.

    Overseas data controllers processing personal data of residents in China should also be aware of the Draft Measures, as they specifically outline the certification route. This represents a further enhancement of Chinese regulations governing overseas data controllers, following clarifications regarding the procedure for reporting dedicated institutions or designated representatives of overseas data controllers under the Network Data Security Management Regulation that took effect on 1 January 2025 (click here for our summary). Given this trend, overseas data controllers processing personal data of residents in China should consider assessing whether they fall under the extraterritorial jurisdiction of Chinese data protection laws and, if so, evaluating the practical risks of non-compliance with such laws (e.g. the impact of potential service disruptions or access restrictions). If compliance with Chinese data protection laws turns out to be necessary, it is advisable to implement a comprehensive program to navigate how China’s CBDT restrictions and, more broadly, its complex data regulatory framework may apply to the overseas data controller and devise compliance strategies.

    It is also important to remember that the legitimizing routes are not the sole requirement for CBDTs under Chinese law. Regardless of the chosen route, data controllers must implement other compliance measures for CBDTs, including obtaining separate consent from data subjects, conducting personal information impact assessments, and maintaining records of processing activities.

    Germany: Works agreements cannot legitimise inadmissible data processing
    https://privacymatters.dlapiper.com/2025/01/germany-works-agreements-cannot-legitimate-inadmissible-data-processing/ (10 January 2025)

    If employers and works councils agree on 'more specific rules' in a works agreement regarding the processing of employees' personal data in the employment context (Art. 88 (1) GDPR), these must take into account the general data protection principles, including the lawfulness of processing (Art. 5, Art. 6 and Art. 9 GDPR), according to the European Court of Justice (ECJ). In addition, such a works agreement is comprehensively subject to review by the courts; any scope for discretion that is not subject to judicial review must be rejected (Decision of 19 December 2024, case no. C-65/23).

    The case

    The employer had initially concluded a temporary 'toleration' works agreement with the works council formed at the company and later a works agreement with the works council on the use of the software 'Workday'. This works agreement provided, inter alia, that specifically identified employee data may be transferred to a server of the parent company in the US. An employee brought an action before the Labour Court for access to certain information, for the deletion of data concerning him and for damages. He argued, among other things, that his employer had transferred personal data concerning him to the parent company's server, some of which were not specified in the toleration works agreement. Since he did not fully prevail before the Labour Court, the employee appealed to the Federal Labour Court (BAG). The BAG referred three questions to the ECJ for a preliminary ruling.

    General requirements of the GDPR to which the parties are bound

    In answer to the first question referred for a preliminary ruling, the ECJ held that Art. 88 (1) and (2) GDPR is to be interpreted as meaning that provisions adopted under Art. 88 (1) GDPR must meet not only the requirements arising from Art. 88 (2) GDPR, but also those arising from Art. 5, Art. 6 (1) and Art. 9 (1) and (2) GDPR. The court thus makes it clear that the parties to a works agreement must observe the requirement of necessity (as part of the lawfulness of processing under Art. 6 (1) and Art. 9 (1) and (2) GDPR) in the context of a works agreement, as well as the principles of data processing (Art. 5 GDPR). Accordingly, processing operations regulated in works agreements must also fulfil the requirements of the GDPR for the lawfulness of processing. This is not only consistent with the context of Art. 88 GDPR and the wording of the provision, but also with the objective of the GDPR, which is to ensure a high level of protection for employees with regard to the processing of their personal data.

    Comprehensive judicial review of works agreements

    If the parties to the works agreement enact 'more specific rules' in a works agreement with regard to the processing of employees' personal data in the employment context, these rules are subject to comprehensive review by the national (labour) courts, according to the ECJ's response to the second question referred for a preliminary ruling. The courts must examine whether the provisions in the works agreement violate the content and objectives of the GDPR; if they do, those provisions are inapplicable. The works council's and the employer's regulatory authority under Art. 88 (1) GDPR does not include any discretion to apply the requirements of necessity less strictly or to dispense with them, and the parties to the works agreement may not, for reasons of efficiency or simplicity, make compromises that undermine the GDPR's goal of ensuring a high level of protection for employees.

    A response to the third question, which concerned the extent to which judicial review may be restricted, was no longer necessary due to the response to the second question.

    Practical note

    The ECJ’s decision comes as little surprise and finally puts to rest the position held in Germany at least until the GDPR came into force, that a works agreement could legitimise data processing that is unlawful under the legal provisions because it is not ‘necessary’. Now it is clear that the parties to a works agreement by no means act outside the law and must observe the requirements of the GDPR for the lawfulness of data processing. In legal terms, the decision has little impact, since in practice the employer and works council were hardly in a position to meet the strict requirements of Article 88 (2) GDPR in a works agreement anyway. Nevertheless, many companies still base individual processing operations of employee data on the ‘legal basis of a works agreement’. These companies should check whether other legal bases can be used, in particular to avoid the threat of fines and claims for damages from employees. Furthermore, these companies are advised to adapt their data protection documentation accordingly. Finally, the ECJ ruling must be taken into account by all companies when negotiating works agreements on technical devices (Section 87 (1) no. 6 of the German Works Constitution Act (BetrVG)).

    Australia: In-Store Facial Recognition Tech Breached Privacy Act
    https://privacymatters.dlapiper.com/2024/11/australia-in-store-facial-recognition-tech-breached-privacy-act/ (22 November 2024)

    "Ethically challenging" and "the most intrusive option" – these are some of the words Australia's Privacy Commissioner used to describe facial recognition technology (FRT), and its use by national hardware retailer Bunnings.

    The Office of the Australian Information Commissioner (OAIC) has released the findings of its much-awaited investigation into the use of FRT in at least 62 Bunnings stores in Victoria and New South Wales between November 2018 and November 2021. FRT was used to, as Bunnings submitted, monitor and identify individuals known by the retailer to engage in antisocial behaviour in its stores.

    The investigation was sparked by consumer advocate group Choice, which flagged concerns about the use of FRT by Bunnings and other retailers in 2022. Facial recognition technology collects biometric information about an individual. Biometric information is sensitive information, which is entitled to specific protections under Australia’s overarching privacy law, the Privacy Act 1988 (Cth) (Privacy Act). Choice took the view that sensitive personal information was being collected via in-store FRT without sufficient notice to customers, and that the collection was “disproportionate” to legitimate business functions.

    The OAIC’s investigation has affirmed these concerns.

    Key Findings

    Bunnings breached the Australian Privacy Principles (APPs) in the Privacy Act by unlawfully interfering with the privacy of individuals whose personal and sensitive information it collected through the FRT system.

    • Lack of Consent: Sensitive information was collected without consent, breaching APP 3.3, which prohibits such collection unless specific consent is given (or an exception applies, which it did not in this case).
    • Failure to Notify: Bunnings did not adequately inform individuals about the collection of their personal information. This was a breach of APP 5.1, which requires entities to notify individuals about certain matters regarding their personal information as it is collected.
    • Inadequate Practices and Policies: Bunnings failed to implement proper practices, policies, and procedures to ensure compliance with the APPs, breaching APP 1.2.
    • Incomplete Privacy Policies: Bunnings’ privacy policies did not include information about the kinds of personal information it collected and held, and how, breaching APP 1.3.

    The OAIC has emphasised that entities using FRT must be transparent, and ensure individuals can provide informed consent.

    Along with the outcome of the investigation, the regulator has also issued specific guidance on the use of FRT, stating, “the use of facial recognition technology interferes with the privacy of anyone who comes into contact with it,” and that convenience is not a sufficient justification for its use. Businesses must consider five key principles when looking to employ FRT: 1) privacy by design; 2) necessity and proportionality; 3) consent and transparency; 4) accuracy and bias; and 5) governance and ongoing assurance.

    What’s Next for Bunnings?

    Bunnings had already paused its use of FRT. As a result of its investigation, the OAIC has made declarations that Bunnings:

    • Not repeat or continue the acts and practices that led to the interference with individuals’ privacy.
    • Publish a statement about the conduct.
    • Destroy all personal information and sensitive information collected via the FRT system that it still holds after one year.

    This decision aligns with the continued emphasis on privacy rights in Australia. As we await further legislative updates to the Privacy Act in the new year, businesses operating in Australia will need to apply greater scrutiny to the security and privacy practices adopted in respect of consumers.

    ]]>
    Germany: Judgment on Non-Material Damages for Loss of Control over Personal Data https://privacymatters.dlapiper.com/2024/11/germany-judgment-on-non-material-damages-for-loss-of-control-over-personal-data/ Tue, 19 Nov 2024 16:44:34 +0000 https://privacymatters.dlapiper.com/?p=7502 Continue Reading]]> On November 18, 2024, the German Federal Court of Justice (Bundesgerichtshof – “BGH”) made a (to date unpublished) judgment under the case number VI ZR 10/24 regarding claims for non-material damages pursuant to Art. 82 GDPR, due to the loss of control over personal data.

    The judgment is based on a personal data breach at Facebook. In April 2021, data from over 500 million users was made public on the internet. This data was collected by unknown third parties using scraping.

    In the course of this incident, the plaintiff’s data (user ID, first and last name, place of work and gender) was published on the internet. The plaintiff argues that Facebook did not take sufficient and appropriate measures to protect his personal data and is essentially seeking non-material damages for the anger and loss of control over his personal data.

    After being awarded EUR 250 at first instance instead of the requested minimum of EUR 1,000, the plaintiff lost on appeal. The court of appeal stated that the mere loss of control is not sufficient for the assumption of non-material damage within the meaning of Art. 82 (1) GDPR. Furthermore, the plaintiff had not sufficiently substantiated that he had been psychologically affected beyond the loss of control.

    The appeal to the BGH was partially successful. The BGH is of the opinion that even the mere and brief loss of control over personal data as a result of an infringement of the GDPR could constitute non-material damage within the meaning of Art. 82 (1) GDPR. There is no need for the data to be misused in a specific way to the detriment of the data subject or for there to be any other additional noticeable negative consequences. For the specific case, the BGH has not decided on a particular amount of damages but considers EUR 100 to be reasonable in view of the underlying circumstances. However, it generally remains the plaintiff’s obligation to plead and prove the facts on which his claims are based.

    The BGH has now referred the case back to the court of appeal for a new hearing and decision.

    This judgment is important insofar as the BGH has taken a position on a legal issue – non-material damages for loss of control over personal data and their amount – that has been controversial and inconsistently handled to date. Back on October 31, 2024, the BGH had designated these proceedings as a Leading Decision Procedure in accordance with Section 552b of the German Code of Civil Procedure (Zivilprozessordnung – “ZPO”). In such procedures, the BGH can decide legal issues that are relevant to the outcome of a large number of proceedings and thus provide guidance for the courts of lower instance. However, leading decisions are not formally binding. Nevertheless, the BGH judgment sends a signal, as the BGH considers the appropriate amount of damages for a mere loss of control over personal data to be low.

    An update to this post will be made once the judgment is publicly available.

    ]]>
    EU: EHDS – Access to health data for secondary use under the European Health Data Space https://privacymatters.dlapiper.com/2024/11/eu-ehds-access-to-health-data-for-secondary-use-under-the-european-health-data-space/ Tue, 19 Nov 2024 09:23:40 +0000 https://privacymatters.dlapiper.com/?p=7499 Continue Reading]]> This is Part 3 in a series of articles on the European Health Data Space (“EHDS“).  Part 1, which provides a general overview of the EHDS, is available here. Part 2, which deals with the requirements on the manufacturers of EHR-Systems under the EHDS, is available here.

    This article provides an overview of the framework for accessing health data for secondary use under the EHDS. It is based on the compromise text of the EHDS published by the Council of the European Union in March 2024.  

    Improving access to health data for the purposes of supporting research and innovation activities is one of the key pillars of the EHDS and offers a potentially significant benefit for life sciences and healthcare companies who are looking for improved access to high-quality secondary use data.

    By way of reminder, in general terms the EHDS creates a regime under which organisations may apply to a health data access body (“HDAB“) for access to electronic health data held by a third party, for one of a number of permitted secondary use purposes.  When required to do so by the HDAB, the company holding the health data (the health data holder) must then provide the data to the HDAB in order to satisfy the access request. The EHDS provides for safeguards to protect intellectual property rights and trade secrets, and there is some scope for health data holders to recover costs incurred in making data available.  

    In more detail, the process operates as follows:

    1. Access to secondary health data

    The EHDS stipulates a specific process as well as certain requirements for the access to secondary health data.

    In order to get access to secondary health data under the EHDS, the applicant must submit a data access application to the health data access body (“HDAB”). Each Member State must designate an HDAB which is, inter alia, responsible for deciding on data access applications, authorizing and issuing data permits, providing access to electronic health data and monitoring and supervising compliance with the requirements under the EHDS.

    Further, the HDAB is responsible for ensuring that the data provided are adequate, relevant and limited to what is necessary in relation to the purpose of processing indicated in the data access application. The default position is that data will be provided in an anonymized format. However, if the applicant can demonstrate that the purpose of processing cannot be achieved with anonymized data, the HDAB may provide access to the electronic health data in a pseudonymised format.

    The data access application must include at least the following:

    • The applicant’s identity, description of professional functions and operations, including the identity of the natural persons who will have access to electronic health data;
    • The purposes for which access is sought, including a detailed explanation of the intended use and the expected benefit related to the use (e.g., protection against serious cross-border threats to health in the public interest, or scientific research related to the health or care sectors to ensure high levels of quality and safety of health care or of medicinal products/devices with the aim of benefitting the end-users, including development and innovation activities for products and services);
    • A description of the requested electronic health data, including their scope and time range, format and data sources and, where possible, geographical coverage where data is requested from health data holders in several Member States;
    • A description of whether the electronic health data need to be made available in a pseudonymised or anonymized format and, in the case of a pseudonymised format, a justification of why the processing cannot be pursued using anonymized data. Further, where the applicant seeks to access the personal electronic health data in a pseudonymised format, compliance with applicable data protection laws shall be demonstrated;
    • A description of the safeguards, proportionate to the risks, planned to prevent any misuse of the electronic health data as well as to protect the rights and interests of the health data holder and of the natural persons concerned, including to prevent any re-identification of natural persons in the dataset;
    • A justified indication of the period during which the electronic health data is needed for processing in a secure processing environment;
    • A description of the tools and computing resources needed for a secure processing environment and, where applicable, information on the assessment of ethical aspects.

    Where an applicant seeks access to electronic health data from health data holders established in more than one Member State, the applicant must submit a single data access application to the HDAB of the main establishment of the applicant which shall be automatically forwarded to other relevant HDABs.

    There is also the option to apply only for access to health data in an anonymized statistical format, which is subject to fewer formal requirements, as well as a simplified procedure for trusted health data holders. The European Commission is responsible for creating templates for the data access applications.

    2. Requirements for the technical infrastructure

    The HDAB shall only provide access to electronic health data pursuant to a data permit through a secure processing environment. The secure processing environment shall comply with the following security measures:

    • Restriction of access to the data to the natural persons listed in the data access application;
    • Implementation of state-of-the-art technical and organisational measures to minimize the risk of unauthorized processing of electronic health data;
    • Limitation of the input of electronic health data and the inspection, modification or deletion of electronic health data to a limited number of authorized persons;
    • Ensuring that access is only granted to electronic health data covered by the data access application;
    • Keeping identifiable logs of access to and activities in the secure processing environment for no less than one year in order to verify and audit all processing operations; and
    • Monitoring compliance and security measures to mitigate potential security threats.

    The HDAB shall ensure regular audits, including by third parties, of the secure processing environments and, if necessary, take corrective actions for any shortcomings or vulnerabilities identified.

    3. Data protection roles

    From a data protection law perspective, the health data holder shall be deemed controller (within the meaning of Art. 4 No. 7 GDPR) for the disclosure of the requested electronic health data to the HDAB. When fulfilling its tasks under the EHDS, the HDAB shall be deemed controller for the processing of personal electronic health data. However, where the HDAB provides electronic health data to a health data user pursuant to a data access application, the HDAB shall be deemed to act as processor on behalf of the health data user. The EU Commission may establish a template for controller-to-processor agreements in those cases.

    4. Fees for the access to health data for secondary use

    The HDAB may charge fees for making electronic health data available for secondary use. Such fees shall cover all or part of the costs related to the procedure for assessing a data access application and granting, refusing or amending a data permit, including the costs related to the consolidation, preparation, anonymization, pseudonymization and provisioning of electronic health data. The fees further include compensation for the costs incurred by the health data holder in compiling and preparing the electronic health data to be made available for secondary use. The health data holder shall provide an estimate of such costs to the HDAB.

    Conclusion

    Access to electronic health data for secondary use is a significant opportunity, especially for companies operating in the life sciences and healthcare sectors, to gain access to potentially large volumes of high-quality electronic health data for research and product development purposes. Although Chapter IV of the EHDS, which deals with the secondary use of electronic health data, will only become applicable four years after the EHDS enters into force, companies are well-advised to begin preparing for access to electronic health data for secondary use at an early stage in order to gain a competitive advantage and to ensure that they are able to make direct use of the opportunities granted by the EHDS. Such preparation includes, inter alia, the early determination of the specific electronic health data required for the specific purpose the company wants to achieve, as well as the set-up of an infrastructure which meets the requirements under the EHDS.

    ]]>
    EU: Engaging vendors in the financial sector: EDPB clarifications mean more mapping and management https://privacymatters.dlapiper.com/2024/11/eu-engaging-vendors-in-the-financial-sector-edpb-clarifications-mean-more-mapping-and-management/ Fri, 08 Nov 2024 14:22:51 +0000 https://privacymatters.dlapiper.com/?p=7493 Continue Reading]]> The European Data Protection Board (“EDPB“) adopted an opinion on 7 October 2024, providing guidance for data controllers relying on processors (and sub-processors) under the GDPR. The two key themes are:

    1. supply chain mapping;
    2. verifying compliance with flow-down obligations.

    For many financial institutions, the emphasis on these obligations should not come as a surprise. However, there are some nuanced clarifications in the opinion which could have an impact on general vendor management in the financial services sector. We have summarised the key takeaways below.

    Supply Chain Mapping

    Controllers should always be able to identify the processing supply chain. This means knowing all processors, and their subprocessors, for all third-party engagements – and not just their identity. The EDPB’s opinion clarifies that controllers should know:

    • the legal entity name, address and information for a contact person for each processor/subprocessor;
    • the data processed by each processor/subprocessor and why; and
    • the delimitation of roles where several subprocessors are engaged by the primary processor.

    This may seem excessive. However, the practical benefit of knowing this information extends beyond Article 28 compliance. It is also required to discharge transparency obligations under Articles 13 and 14 and to respond to data subject requests (e.g. access under Article 15 or erasure under Article 17).

    How is this achieved in reality? Vendor engagement can be tedious. While many financial institutions have sophisticated vendor onboarding processes, data protection is often an afterthought, addressed after commercials are finalised.

    So, what should you do as a data controller? Revisit your contracts to ensure your processors are obliged to provide the above information proactively, at a frequency and in a format you require.

    Verification of Compliance

    Controllers should be able to verify and document the sufficiency of safeguards implemented by processors and subprocessors to comply with data laws. In other words, controllers must be able to evidence a processor’s compliance with key obligations e.g.:

    • making sure personal data is secure; and
    • ensuring data is transferred or accessed internationally in line with the requirements of Chapter V.

    The nature of this verification and documentation will vary depending on the risk associated with the processing activity. A low-risk vendor, from a commercial business perspective, may provide a service involving high-risk data processing. In this case, verification might involve seeking a copy of the subprocessor contract to review it. For lower-risk processing, verification could be limited to confirming a subprocessor contract is in place.

    The EDPB suggests controllers can rely on information received from their processor and build on it. For example, through diligence questionnaires, publicly available information, certifications, and audit reports.

    Where the primary processor is also an exporter of personal data outside the EEA, the EDPB clarified that the obligation is on the exporting processor to ensure there is an appropriate transfer mechanism in place with the importing subprocessor and to ensure a transfer impact assessment has been carried out. The controller should verify the transfer impact assessment and request amendments if necessary. Otherwise, controllers can rely on the exporting processor’s transfer impact assessment if it is deemed adequate. The verification required here will depend on whether it is an initial or onward transfer, and on what lawful basis is used for the transfer. This does not affect the controller’s obligation to carry out transfer mapping where it engages primary processors that are themselves located outside the EEA.

    In that regard, the EDPB clarified a subtle but often debated provision of Article 28. The opinion notes that the wording “unless required to do so by law or binding order of a governmental body” is unlikely to be compliant where data is transferred outside the EEA. It is therefore highly recommended to include the wording:

    “unless required to [process] by Union or Member State law to which the processor is subject.”

    This wording should be included either verbatim or in very similar terms. It is particularly relevant in the context of transfer mapping and impact assessments. Regulated entities should be vigilant for third-party contracts which appear to meet the obligations set out in Article 28(3) with respect to the processing of data for purposes outside of the controller’s instructions, but are, as confirmed by the EDPB, actually non-compliant.

    What steps should you take now then?

    The opinion clarifies that controllers can rely on a sample selection of subprocessor contracts to verify downstream compliance, and we suggest you do so.

    But when?

    Regulated entities, particularly in the financial services industry, are facing a swathe of regulations that impact vendor engagement. The Digital Operational Resilience Act (DORA) and the NIS 2 Directive ((EU) 2022/2555) require financial institutions to maintain a register of all contractual arrangements with vendors and to ensure third-party service providers comply with cybersecurity standards. Effectively, these are enhancements to existing processor requirements under the GDPR. The reality, however, is that many controllers are only now firming up supply chain management to cover key data protection and cyber risks.

    We recommend controllers use the clarifications in the EDPB’s opinion to improve negotiations when separately looking at the uplifts required by DORA, which takes effect on 17 January 2025. The clock is ticking.

    Please reach out to your usual DLA Piper contact if you would like to discuss further, including if you are struggling to map these requirements against other emerging laws such as DORA or NIS2. We can provide assistance with the data and cyber contractual commitments in your contracts.

    ]]>