Privacy Matters: DLA Piper's Global Privacy and Data Protection Resource
https://privacymatters.dlapiper.com/category/privacy-law/

Germany: Monitoring and auditing obligations of controllers with respect to their processors
https://privacymatters.dlapiper.com/2025/04/germany-monitoring-and-auditing-obligations-of-controllers-with-respect-to-their-processors/ | Wed, 16 Apr 2025

In a decision on immaterial damages under Article 82 of the EU General Data Protection Regulation (GDPR), the Higher Regional Court of Dresden, Germany (case number 4 U 940/24), set out important monitoring and auditing obligations of controllers with respect to their processors.

The controller (defendant) operates an online music streaming service; the plaintiff is a customer of this service. The case was triggered by a data breach in November 2022 at a former processor of the controller, involving customers’ personal information (including email addresses, full names, ages, etc.).

The contract between the controller and the processor had ended at the end of 2019, several years before the data breach. Under the data processing agreement, the controller could choose between deletion or return of the data after the end of the processing; however, the controller never exercised this right. A few days before the termination of the agreement, the processor informed the controller by email that the data would be deleted the following day. Almost a year later, in December 2020, the processor sent another email to the controller announcing that the deletion was imminent. Nevertheless, it was not until early 2023, after the data breach had been reported, that the processor confirmed to the controller that (some kind of) deletion had been carried out.

The Higher Regional Court ruled that the defendant was in principle liable to the plaintiff for damages within the meaning of Article 82 of the GDPR, but that the plaintiff had not credibly demonstrated any emotional damage and therefore no compensation payments were awarded.

In its judgment, the court dealt extensively with the issue of a controller’s liability for the omissions of its processor. In particular, the court addressed the monitoring and auditing measures that a controller must exercise over its processor and how these measures must be designed.

In general, the court takes the view that:

  • if a company selects an IT service provider that is known in the market as a leading and reliable provider, it can generally place trust in the provider's expertise and reliability without the need for an on-site inspection, but
  • increased requirements apply if large amounts of data or particularly sensitive data are hosted.

In the opinion of the Higher Regional Court, in the specific case this meant that the data controller was obliged to:

  • exercise its rights towards the processor with respect to the deletion of the data (the data processing agreement allowed the controller to choose between deletion and return of the data);
  • in case of deletion, obtain a written confirmation (i.e. a meaningful document certifying the deletion) from the processor, as detailed in the data processing agreement(s);
  • immediately request the provision of the deletion confirmation, if no such confirmation has been provided within the contractually agreed period; and
  • if necessary, carry out an on-site inspection (e.g., if the deletion confirmation remains outstanding).

The court also clarified that mere announcements of the data processor to delete the data (in the future) are not an adequate substitute for the confirmation that the data has already been deleted.

Conclusion and practical recommendation:

Even though the controller in this specific case escaped an order to pay damages, the court nevertheless affirmed the company's liability in principle.

Controllers should therefore take this judgment as an opportunity to review the robustness of their monitoring and auditing measures with regard to processors. Necessary measures must not only be introduced but also sustained and documented in such a way that they can serve as evidence before courts and supervisory authorities.

US: Department of Justice issues final rule restricting the transfer of Sensitive Personal Data and United States Government-Related Data to "countries of concern"
https://privacymatters.dlapiper.com/2025/04/us-department-of-justice-issues-final-rule-restricting-the-transfer-of-sensitive-personal-data-and-united-states-government-related-data-to-countries-of-concern/ | Wed, 16 Apr 2025

On April 8, 2025, the Department of Justice's final rule, implementing the Biden-era Executive Order 14117 restricting the transfer of Americans' Sensitive Personal Data and United States Government-Related Data to countries of concern (the "Final Rule"), came into force. The Final Rule imposes new requirements on US companies when transferring certain types of personal data to designated countries of concern or covered persons.

Executive Order 14117, and the implementing Final Rule, intend to address the threat of foreign powers and state-sponsored threat actors using Americans' sensitive personal data for malicious purposes. The Final Rule sets out the conditions under which a bulk transfer of sensitive personal data or US government-related data to a country of concern or covered person will be permitted, restricted or prohibited.

The Final Rule reflects heightened scrutiny by the US government of bulk cross-border data transfers that may pose a risk to US national interests, and a tightening of compliance requirements on US companies to protect sensitive personal data and government-related data when engaging with these countries or with persons connected to them.

Scope of the Final Rule

The key elements determining the applicability and scope of the Final Rule, when applied to a data transaction by a US entity, are:

  • Countries of Concern: The Final Rule designates six countries as countries of concern: (1) China (including Hong Kong SAR and Macau SAR), (2) Cuba, (3) Iran, (4) North Korea, (5) Russia, and (6) Venezuela. The transfer of sensitive data to Covered Persons within these jurisdictions could therefore be captured.
  • Covered Persons: The Final Rule defines four classes of covered persons as the transacting party that will require additional scrutiny: (1) foreign entities that are 50% or more owned by a country of concern, organized under the laws of a country of concern, or have their principal place of business in a country of concern; (2) foreign entities that are 50% or more owned by a covered person; (3) foreign employees or contractors of countries of concern or entities that are covered persons; and (4) foreign individuals primarily resident in countries of concern.
  • Sensitive Personal Data: The Final Rule regulates transactions involving six categories of sensitive personal data: (1) certain covered personal identifiers; (2) precise geolocation data; (3) biometric identifiers; (4) human genomic data and three other types of human ‘omic data (epigenomic, proteomic, or transcriptomic); (5) personal health data; and (6) personal financial data.
  • Bulk Sensitive Personal Data: Within these Sensitive Personal Data categories, different volume thresholds apply, and these thresholds determine whether the Final Rule applies to the transaction. The prohibitions and restrictions apply to covered data transactions involving sensitive personal data exceeding certain thresholds over the 12 months preceding the transaction. For example, compliance requirements for the transfer of precise geolocation data are not triggered unless location data from more than 1,000 US persons or devices is transferred. By contrast, the threshold for covered personal identifiers (such as social security numbers) is only met where data of more than 100,000 US persons is transferred. The definition of 'bulk', and how it applies across the categories of personal data, is therefore key (illustrated in the sketch below).
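
The thresholds quoted above lend themselves to a simple screening check. The sketch below is purely illustrative, not legal advice or an official tool: the helper name (exceeds_bulk_threshold) is hypothetical, only the two threshold figures cited in this post are included, and the remaining category thresholds and precise 12-month aggregation mechanics would need to be taken from the Final Rule itself.

```python
# Purely illustrative sketch (assumptions noted above): screen whether a
# proposed transfer exceeds the "bulk" thresholds for two categories whose
# figures are quoted in this post. Other categories and the detailed
# aggregation rules are defined in the Final Rule and are not reproduced here.

BULK_THRESHOLDS = {
    "precise_geolocation": 1_000,             # US persons or devices, preceding 12 months
    "covered_personal_identifiers": 100_000,  # US persons, preceding 12 months
}

def exceeds_bulk_threshold(category: str, count_last_12_months: int) -> bool:
    """Return True if the aggregated 12-month volume exceeds the bulk threshold."""
    threshold = BULK_THRESHOLDS.get(category)
    if threshold is None:
        raise ValueError(f"No threshold recorded in this sketch for: {category}")
    return count_last_12_months > threshold

# Example: geolocation data on 1,500 US devices over the preceding 12 months
print(exceeds_bulk_threshold("precise_geolocation", 1_500))  # True
```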

Prohibited or restricted transactions?

Alongside these key elements, the Final Rule provides that the type of transaction under which the data is transferred determines whether the transaction is restricted, prohibited or exempt from scrutiny. A transaction falling into the restricted category imposes the new, additional compliance requirements on US companies before the transaction can proceed.

The Final Rule prohibits transactions involving (1) data brokerage (i.e., “the sale of data, licensing of access to data, or similar commercial transactions involving the transfer of data”), and (2) covered data transactions involving access to bulk human ‘omic data or human biospecimens from which such data can be derived. The outright prohibition on data brokerage agreements with countries of concern is extended further, with the Final Rule also requiring US persons to contractually ensure that data brokerage transactions with other foreign persons, who are not countries of concern or covered persons, do not enable the transfer of the same data to countries of concern under subsequent arrangements. This additional safeguard on data brokerage where sensitive personal data is involved underlines the requirement for sufficient due diligence with overseas partners.

Vendor, employment, and non-passive investment agreements are captured as restricted transactions. These transactions are permitted if they meet certain security requirements developed by the Cybersecurity and Infrastructure Security Agency (CISA).

Finally, data transactions which fall under categories such as (but not limited to) personal communications that do not transfer anything of value, ordinary corporate group transactions between a US person and its foreign subsidiary or affiliate, and transactions ordinarily incident to and part of providing financial services, are exempt from the compliance requirements of the Final Rule, illustrating the practical focus of the requirements.

Compliance obligations

CISA requirements detail the types of cybersecurity, data retention, encryption and anonymisation policies, alongside other measures, that can be adopted by US companies in order to bring a restricted transaction into compliance, ensuring the safety of sensitive personal data.

An enhanced due diligence exercise is therefore expected when seeking to transact with covered persons where the bulk transfer of sensitive personal data is a possibility. Key features include the implementation of a data compliance program, with comprehensive policies, procedures and record keeping covering data involved in a restricted transaction, as well as the completion of third-party audits to monitor compliance with the Final Rule. Finally, reporting is expected when engaging in restricted transactions, demonstrating the depth of US government oversight of, and interest in, these transactions.

FAQs, Compliance Guide and Enforcement Policy

On April 11, 2025, the Department of Justice published answers to Frequently Asked Questions, a Compliance Guide, and an Implementation and Enforcement Policy for the first 90 days of the Final Rule (i.e. through July 8, 2025).

  • Compliance Guide. The Compliance Guide aims to provide 'general information' to assist individuals and entities in complying with the Data Security Program ("DSP"), established by the Department of Justice's National Security Division to implement the Final Rule and Executive Order 14117. The Compliance Guide covers a number of areas, including key definitions, steps that organizations should take to comply with the Final Rule, model contract language, and prohibited and restricted data transactions.
  • FAQs. The Department of Justice has provided answers to more than 100 FAQs, which aim to provide high-level clarifications about Executive Order 14117 and the DSP, including, for example, answers to questions on the scope of the DSP; the effective date of the Final Rule; definitions; exemptions; and enforcement and penalties.
  • Implementation and Enforcement Policy for the First 90 Days (the Policy): The Policy states that during the first 90 days, enforcement will be limited "to allow U.S. persons (e.g., individuals and companies) additional time to continue implementing the necessary changes to comply with the DSP". Specifically, the Policy is clear that there will be limited civil enforcement action against any person for violations of the DSP that occur from April 8 through July 8, 2025 "so long as the person is engaging in good faith efforts to comply with or come into compliance with the DSP during that time". The Policy provides examples of 'good faith efforts', including: conducting internal reviews of access to sensitive personal data; renegotiating vendor agreements or negotiating contracts with new vendors; transferring products and services to new vendors; implementing CISA security requirements; adjusting employee work locations, roles or responsibilities; and evaluating investments from countries of concern or covered persons. The Policy states that at "the end of this 90-day period, individuals, and entities should be in full compliance with the DSP."

Next steps

Whilst certain due diligence, auditing, and reporting obligations will not become effective until October 2025, preparation for effective oversight and compliance with the CISA requirements can begin now. In particular, organisations should assess current compliance measures in place to identify potential compliance gaps and establish controls to address those gaps, in order to be able to demonstrate that they are engaging in “good faith efforts.” DLA Piper can advise on a review of current policies and procedures and preparing effectively for transactions that may fall within the Final Rule.

CHINA: Recent Enforcement Trends
https://privacymatters.dlapiper.com/2025/03/china-recent-enforcement-trends/ | Wed, 12 Mar 2025

Recently, the Cyberspace Administration of China (CAC), the primary data regulator in China, published a newsletter about the government authorities' enforcement during 2024 against Apps and websites that violated personal data protection and cybersecurity laws.

Based on the official statistics, during 2024 the CAC interviewed 11,159 website platforms, imposed warnings or fines on 4,046 website platforms, ordered 585 websites to suspend or update relevant functions, took down 200 Apps and took administrative action against 40 mini-programs. The CAC also conducted joint enforcement actions with the Ministry of Industry and Information Technology, revoking the licenses of, or shutting down, 10,946 websites and closing 107,802 accounts.

The following violations were a particular focus of these enforcement activities:

  • Failure to maintain relevant network logs as required by law or to promptly address security risks (such as system vulnerabilities), resulting in legal and regulatory issues such as system attacks, tampering, and data leaks;
  • Failure to clearly display privacy notices in Apps, obtain necessary consent to process personal data, or provide convenient methods to opt out or de-register accounts;
  • Failure to conduct required recordal or filing for AI models or features built into Apps or mini-apps; and
  • Unreasonably requiring consumers to scan QR codes or perform facial recognition that is not necessary to provide the underlying services.

Around the same time, the National Computer Virus Emergency Response Center, an institution responsible for detecting and handling computer virus outbreaks and cyber attacks under the supervision of the Ministry of Public Security, published a list of Apps that violated the personal data protection laws in the following areas:

  • Failure to provide data subjects with all the required information about the processing (e.g. name and contact details of the controller, categories of personal data processed, purposes of the processing, retention period, etc.) in a prominent place and in clear and understandable language; in particular, failure to provide such information about any third party SDK or plugin is also considered a breach of the law;
  • Failure to provide data subjects with the required details about any separate controller (e.g. name, contact information, categories of personal data processed, processing purposes, etc.) or to obtain the separate consent of data subjects before sharing their personal data with the separate controller;
  • Failure to obtain the separate consent of data subjects before processing their sensitive personal data;
  • Failure to provide users with the App functions to delete personal data or de-register accounts, or to complete the deletion or deregistration within 15 business days; or setting unreasonable conditions for users to de-register accounts;
  • Failure to formulate special rules for processing the personal data of minors (under the age of 14) or to obtain parental consent before processing the personal data of minors; and
  • Failure to take appropriate encryption, de-identification and other security measures, taking into account the nature of the processing and its impact on the rights and interests of data subjects.

The above enforcement focuses are also consistent with the audit points highlighted in the newly released personal data protection audit rules (see our article here). We expect the same enforcement trend to continue into 2025. Companies that process personal data in China or in connection with business in China are advised to review their compliance status with the requirements of Chinese law and take remedial action in a timely manner.

Malaysia: Guidelines Issued on Data Breach Notification and Data Protection Officer Appointment
https://privacymatters.dlapiper.com/2025/03/malaysia-guidelines-issued-on-data-breach-notification-and-data-protection-officer-appointment/ | Tue, 04 Mar 2025

Following Malaysia's introduction of data breach notification and data protection officer ("DPO") appointment requirements in last year's significant amendments to the Personal Data Protection Act ("PDPA") (click here for our summary), the Personal Data Protection Commissioner of Malaysia ("Commissioner") recently released guidelines that flesh out these requirements: the Guideline on Data Breach Notification ("DBN Guideline") and the Guideline on Appointment of Data Protection Officer ("DPO Guideline"). With the data breach notification and DPO appointment requirements set to come into force on 1 June 2025, organisations subject to the PDPA, whether data controllers or processors, should understand and adapt to these guidelines to ensure compliance.

DBN Guideline

When must a personal data breach be notified to the regulator and affected data subjects?

A data controller must notify a personal data breach to both the Commissioner and affected data subjects if it causes or is likely to cause "significant harm", which includes a risk of any of the following:

  • physical harm, financial loss, a negative effect on credit records, or damage to or loss of property;
  • misuse of personal data for illegal purposes;
  • compromise of sensitive personal data;
  • combination of personal data with other personal information that could potentially enable identity fraud; or
  • (for the purpose of notification to the Commissioner only) a breach of “significant scale”, i.e. involving more than 1,000 affected data subjects.

What is the timeframe to make data breach notifications?

The timeframe for notifications is as follows:

  • Notification to the Commissioner: as soon as practicable and within 72 hours from the occurrence of the breach. If notification is not made to the Commissioner within 72 hours, a written notice detailing the reasons for the delay and providing supporting evidence must be submitted; and
  • Notification to affected data subjects: without unnecessary delay and within seven days of notifying the Commissioner.

What are the other key obligations related to personal data breaches?

A data controller should:

  • DPA: contractually obligate its data processor to promptly notify it of a data breach and to provide it with all reasonable and necessary assistance to meet its data breach notification obligations;
  • Management and response plans: put in place adequate data breach management and response plans;
  • Training: conduct periodic training as well as awareness and simulation exercises to prepare its employees for responding to personal data breaches;
  • Breach assessment and containment: act promptly as soon as it becomes aware of any personal data breach to assess, contain, and reduce the potential impact of the data breach, including taking certain containment actions (such as isolating compromised systems) and identifying certain details about the data breach in its investigation; and
  • Record-keeping: maintain a register of the personal data breach for at least two years to document the prescribed information about the data breach.

DPO Guideline

Who is required to appoint a DPO?

An organisation, in the role of either a data controller or a data processor, is required to appoint a DPO if its processing of personal data involves:

  • personal data of more than 20,000 data subjects;
  • sensitive personal data including financial information of more than 10,000 data subjects; or
  • activities that require “regular and systematic monitoring” of personal data.

Who can be appointed as a DPO?

DPOs may be appointed from among existing employees or through outsourcing services based on a service contract. They must:

  • Expertise: demonstrate a sound level of prescribed skills, qualities and expertise;
  • Language: be proficient in both Malay and English languages; and
  • Residency: be either resident in Malaysia or easily contactable via any means.

What are the other key obligations related to DPO appointments?

A data controller required to appoint a DPO should:

  • Notification: notify the Commissioner of the appointed DPO and their business contact information within 21 days of the DPO appointment;
  • Publication: publish the business contact information of its DPO through:
      • its website and other official media;
      • its personal data protection notices; or
      • its security policies and guidelines; and
  • Record-keeping: maintain records of the appointed DPO to demonstrate compliance.

A data processor required to appoint a DPO should comply with the publication and record-keeping obligations above in relation to its DPO.

Next Steps

The new guidelines represent a significant step in the implementation of the newly introduced data breach notification and DPO appointment requirements. All organisations subject to the PDPA, whether data controllers or processors, should carefully review the guidelines and take steps to ensure compliance by 1 June 2025. This includes updating relevant internal policies (such as data breach response plans and record-keeping and training policies) and contracts with data processors to align with the guidelines. Additionally, organisations should assess whether a DPO appointment is necessary and, if so, be prepared to complete the appointment and notification processes and update their privacy notices, websites and other media to include DPO information.

CHINA: Mandatory Data Protection Compliance Audits from 1 May 2025
https://privacymatters.dlapiper.com/2025/02/china-mandatory-data-protection-compliance-audits-from-1-may-2025/ | Thu, 20 Feb 2025

Chinese data regulators are intensifying their focus on the data protection compliance audit obligations under the Personal Information Protection Law ("PIPL"), with the release of the Administrative Measures for Personal Information Protection Compliance Audits ("Measures"), effective 1 May 2025.

The Measures outline the requirements and procedures for both self-initiated and regulator-requested compliance audits.

(Interestingly, they also clarify some other PIPL obligations, such as the data volume threshold for appointing a DPO as well as the necessity of separate consent for some processing activities.)

Who must conduct data protection compliance audits, and when?

The Measures require a data controller processing the personal data of more than 10 million individuals to conduct a self-initiated compliance audit of its personal data processing activities ("Self-Initiated Audits") at least once every two years.

Data controllers below this volume threshold should still conduct Self-Initiated Audits on a regular basis as is already prescribed under the PIPL, as a matter of good governance.

In addition, the CAC or other data regulators may instruct any data controller to conduct an audit (“Regulator-Requested Audits“):

  1. when personal data processing activities are found to involve significant risks, including serious impact on individuals’ rights and interests or a serious lack of security measures;
  2. when processing activities may infringe upon the rights and interests of a large number of individuals; or
  3. following a data security incident involving the leakage, tampering, loss, or damage of personal information of one million or more individuals, or sensitive personal information of 100,000 or more individuals.

The audit report for Regulator-Requested Audits must be submitted to the regulator. The regulator may request data controllers to undertake rectification steps, and a subsequent rectification report must be provided to the regulator within 15 business days of completing the rectification steps.

Data controllers may, if they wish or when requested by the regulator, engage an accredited third party to conduct the audit (but the third party and its affiliates must not conduct more than three such audits in total for the same organisation).  

DPOs of data controllers processing personal data of more than one million individuals are responsible for overseeing the audit activities.

Key elements to be audited

The Measures outline a detailed set of key elements to be audited, which offer valuable insights into the detailed compliance steps expected from controllers for compliance with PIPL obligations, and will help organisations to scope their audits. Unsurprisingly, these elements cover every facet of PIPL compliance, spanning the whole data lifecycle. They include: lawful bases, notice and consent, joint controllership, sharing or disclosing personal data, cross-border data transfers, automated decision-making, image collection/identification equipment, processing publicly available personal data, processing sensitive personal data, retention and deletion, data subject right requests, internal data governance, data incident response, privacy training, Important Platform Providers’ platform rules and CSR reports, etc.

UK: Google's U-Turn on Device Fingerprinting: ICO's Response and Subsequent Guidance
https://privacymatters.dlapiper.com/2025/01/googles-u-turn-on-device-fingerprinting-icos-response-and-subsequent-guidance/ | Thu, 30 Jan 2025

In December 2024, the Information Commissioner's Office (ICO) responded to Google's decision to lift its prohibition on device fingerprinting (which involves collecting and combining information about a device's software and hardware for the purpose of identifying the device) for organisations using its advertising products, effective from 16 February 2025 (see an overview of Google's new Ads Platforms policies here). This follows Google's previous decision in July 2024 to keep third-party cookies.

In its response, the ICO criticized Google’s decision to permit device fingerprinting for advertising purposes as “irresponsible” and emphasised that device fingerprinting:

  1. Requires Consent: device fingerprinting enables devices to be identified even where cookies are blocked or the location is disguised, hence its common use for fraud prevention purposes, but the ICO reinforced that it is subject to the usual consent requirements.
  2. Reduces User Control: Despite various browsers now offering “enhanced” tracking protection, the ICO stated that device fingerprinting is not a fair means of tracking users online as it diminishes people’s choice and control over how their information is collected.

This statement echoes concerns previously voiced by Google who had stated that device fingerprinting “subverts user choice and is wrong”.

With the potential for fingerprinting to replace the long-debated third-party (3P) cookie functionality, this statement forms part of a shift in regulatory focus to technologies beyond cookies. Various technologies have recently received greater scrutiny, both in the ICO's Draft Guidance on the use of storage and access technologies ("ICO's Draft Guidance") – interestingly issued in December 2024 to coincide with the Google update – and in the European Data Protection Board (EDPB) Guidelines 2/2023 on the Technical Scope of Art. 5(3) of the ePrivacy Directive.

ICO Draft Guidance: Key Takeaways

The ICO’s Draft Guidance explores the practical application of the Privacy and Electronic Communications Regulations (PECR) requirement that consent must be obtained by the user for any storage or access of information on/from a device (‘terminal equipment’), unless such storage/access is strictly necessary for the purposes of a communication or to provide a service requested by the user.

In particular, the Draft Guidance addresses the following areas which are explored further in their respective sections below:

Technologies

The ICO's Draft Guidance looks at how and why the rules relating to storage and access of device information apply to various types of technologies used in web browsers, mobile apps or connected devices, namely: cookies; tracking pixels; link decoration and navigational tracking; web storage; scripts and tags; and fingerprinting techniques. The technologies the ICO focuses on overlap to a large extent with the examples used by the EDPB in its guidelines. However, taking the analysis on pixels as an example, the EDPB suggests that any distribution of tracking links/pixels to the user's device (whether via websites, emails, or text messaging systems) is subject to Article 5(3) of the ePrivacy Directive, as it constitutes 'storage' even if only temporarily via client-side caching. The ICO's guidance is less clear, suggesting that tracking pixels are only subject to Regulation 6 of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) when they store information on the user's device. This might imply a less expansive view compared to the EDPB, highlighting the importance of remaining alive to jurisdictional nuances for any global tracking campaigns.

Detailed Consent Requirements

The ICO reiterates that for a PECR consent to be valid, it must meet UK GDPR standards (freely given, specific, informed and unambiguous statement of the individual’s wishes indicated by a clear affirmative action).

The ICO highlights that, where personal data is processed, the consent must be provided by the data subject (this contrasts with the PECR user/subscriber consent requirement). This tension is an existing issue, but it remains unclear how the party collecting cookie consent for personal data processed via cookies (or a similar technology) is supposed to know whether the user of a device has changed, without either requiring re-consent or user identification on each visit (or carrying out background identification using user fingerprinting or similar, which means more data processing and may be intrusive).

In line with recent ICO statements in relation to the lack of 'reject all' options, the ICO emphasises that subscribers/users must be able to refuse the use of storage and access technologies as easily as they can consent. Additional points of interest for controllers include:

  • That users must have control over any use of non-essential storage and access technologies. While this could, on a conservative reading, be interpreted as needing US-style granular per-cookie consent, the examples provided suggest high-level consent mechanisms expressed per category (e.g., analytics, social media tracking, marketing) are still acceptable;
  • Clarification that you must specifically name any third parties whose technologies you are requesting consent to (this information can be provided in a layered fashion provided this is very clear). However, if controls are not required at an individual cookie level, which seems to be the case, then this becomes less meaningful for data subjects who cannot act on this additional information as they only have the choice of rejecting all storage and access technologies for each purpose category (e.g. all analytics cookies/technologies) rather than a relevant third party; and
  • Clarification that users must be provided with controls over any use of storage and access technologies for non-essential purposes (albeit this was arguably already required in order to facilitate withdrawal of consent/changing of preferences on an ongoing basis).

Exemptions to consent: Strictly Necessary

Leaving aside technologies necessary for communications, the ICO emphasises that the "strictly necessary" exemption applies when the purpose of the storage or access is essential to provide the service the subscriber or user requests. Helpfully, the ICO Draft Guidance clarifies that technologies used to comply with applicable law (e.g. meeting security requirements) can be regarded as "strictly necessary", such that no consent is required. This will not apply if there are other ways that you can comply with this legislation without using cookies or similar technologies.

Other examples of activities likely to meet the exemption include: (i) ensuring the security of terminal equipment; (ii) preventing or detecting fraud; (iii) preventing or detecting technical faults; (iv) authenticating the subscriber or user; and (v) recording information or selections made on an online service.

One area of ambiguity remains in relation to fraud prevention and detection. In the financial services sector, websites/apps often use third-party fingerprinting for fraud detection (in order to meet legal obligations to ensure the security of their services). 'Preventing or detecting fraud' is listed as an example of an activity likely to meet the exemption, whilst third-party fingerprinting for fraud prevention is used by the ICO as an example of an activity subject to Regulation 6 PECR, with the implication that consent is needed (albeit this is not stated). However, the DUA Bill (if passed in its current form) provides some helpful clarity here, as it states that use of such technologies should be regarded as "strictly necessary" where used to protect information, for security purposes, to prevent or detect fraud or technical faults, to facilitate automatic authentication, or to maintain a record of selections made by the user.

Interestingly, the guidance suggests that the use of social media plugins/tools by logged-in users might be strictly necessary, though this does not extend to logged-out users, users who are not a member of that network, or any associated tracking.

Governance and compliance

A number of the ICO's clarifications are likely to impact day-to-day resourcing and operations for any organisation using material numbers of storage and access technologies:

  • Governance: the ICO emphasises what it expects in respect of governance of storage and access requirements, including an audit checklist, emphasising the need to regularly audit the use of such technologies and ensure that the rest of the consent ecosystem (including transparency, consent, data sharing, and subsequent processing) is consistent and up to date. This is likely to be resource intensive, and few organisations will be set up for this level of assurance.
  • Transparency: The ICO guidance reinforces the need for transparency around whether any third parties will store/access information on the user's device or receive this information, making clear that all third parties providing cookies or receiving data must be named (avoiding ambiguous references to "partners" or "third parties"), and that specific information must be provided about each, taking into account UK GDPR considerations where personal data is processed. This will be a considerable challenge for complex ecosystems, most notably in the context of online advertising (albeit this has been a known challenge for some time).
  • Consent Ecosystem: The guidance makes very clear that a process must be in place for passing on when a user withdraws their consent. In practice, the entity collecting the consent is responsible for informing third parties when consent is no longer valid. This is crucial but challenging to comply with, and is again perhaps most relevant in the context of online advertising.
  • Subsequent Processing: as it has done in the past, the ICO continues to strongly suggest that any subsequent processing of personal data obtained via storage/access technologies on the basis of consent should also be based on consent, going as far as to suggest that reliance on an alternative lawful basis (e.g. legitimate interests) may invalidate any initial consent received.

Conclusion

As device fingerprinting and other technologies evolve, it is crucial for organisations to stay informed, ensure compliance with the latest guidance, and bear in mind that there may be nuances between EU and UK regulation.

The ICO's Draft Guidance provides helpful clarity on existing rules in the UK, including detailed examples of how to conduct cookie audits, but does not otherwise provide practical guidance on how to overcome many of the operational privacy challenges faced by controllers (such as monitoring changing users and managing consent withdrawals within online advertising ecosystems).

With increasing regulatory commentary and action in this space, including the ICO's most recent announcement regarding its focus on reviewing cookie usage on the biggest UK sites, now is the time to take stock of your tracking technologies and ensure compliance!

The ICO's Draft Guidance is currently open for consultation, with input sought by 5pm on Friday 14 March 2025. If you have any questions or would like to know more, please get in touch with your usual DLA contact.

EU: DLA Piper GDPR Fines and Data Breach Survey: January 2025
https://privacymatters.dlapiper.com/2025/01/eu-dla-piper-gdpr-fines-and-data-breach-survey-january-2025/ | Tue, 21 Jan 2025

The seventh annual edition of DLA Piper's GDPR Fines and Data Breach Survey has revealed another significant year in data privacy enforcement, with an aggregate total of EUR1.2 billion (USD1.26 billion/GBP996 million) in fines issued across Europe in 2024.

Ireland once again remains the preeminent enforcer, having issued EUR3.5 billion (USD3.7 billion/GBP2.91 billion) in fines since May 2018, more than four times the value of fines issued by the second-placed Luxembourg Data Protection Authority, which has issued EUR746.38 million (USD784 million/GBP619 million) in fines over the same period.

The total fines reported since the application of the GDPR in 2018 now stand at EUR5.88 billion (USD6.17 billion/GBP4.88 billion). The largest fine ever imposed under the GDPR remains the EUR1.2 billion (USD1.26 billion/GBP996 million) penalty issued by the Irish DPC against Meta Platforms Ireland Limited in 2023.

Trends and Insights

In the year from 28 January 2024, EUR1.2 billion in fines were imposed. This was a 33% decrease compared to the aggregate fines imposed in the previous year, bucking the seven-year trend of increasing enforcement. This does not represent a shift in focus away from personal data enforcement; the clear year-on-year trend remains upwards. This year's reduction is almost entirely due to the record-breaking EUR1.2 billion fine against Meta falling in 2023, which skewed the 2023 figures. There was no record-breaking fine in 2024.

Big tech companies and social media giants continue to be the primary targets for record fines, with nearly all of the top 10 largest fines since 2018 imposed on this sector. This year alone the Irish Data Protection Commission issued fines of EUR310 million (USD326 million/GBP257 million) against LinkedIn and EUR251 million (USD264 million/GBP208 million) against Meta. In August 2024, the Dutch Data Protection Authority issued a fine of EUR290 million (USD305 million/GBP241 million) against a well-known ride-hailing app in relation to transfers of personal data to a third country.

2024 enforcement expanded notably into other sectors, including financial services and energy. For example, the Spanish Data Protection Authority issued two fines totalling EUR6.2 million (USD6.5 million/GBP5.1 million) against a large bank for inadequate security measures, and the Italian Data Protection Authority fined a utility provider EUR5 million (USD5.25 million/GBP4.15 million) for using outdated customer data.

The UK was an outlier in 2024, issuing very few fines. The UK Information Commissioner, John Edwards, was quoted in the British press in November 2024 as saying that he does not agree that fines are likely to have the greatest impact and that they would tie his office up in years of litigation – an approach which is unlikely to catch on in the rest of Europe.

The dawn of personal liability

Perhaps most significantly, a focus on governance and oversight has led to a number of enforcement decisions citing failings in these areas and specifically calling out failings of management bodies. Most notably, the Dutch Data Protection Authority announced that it is investigating whether it can hold the directors of Clearview AI personally liable for numerous breaches of the GDPR, following a EUR30.5 million (USD32.03 million/GBP25.32 million) fine against the company. This novel investigation into the possibility of holding Clearview AI's management personally liable for continued failings of the company signals a potentially significant shift in focus by regulators, who recognise the power of personal liability to focus minds and drive better compliance.

Data Breach Notifications

The average number of breach notifications per day increased slightly to 363 from 335 last year, a 'levelling off' consistent with previous years, likely indicative of organisations becoming more wary of reporting data breaches given the risk of investigations, enforcement, fines and compensation claims that may follow notification.

A recurring theme of DLA Piper's previous annual surveys is that there has been little change at the top of the tables regarding the total number of data breach notifications made since the GDPR came into force on 25 May 2018 and during the most recent full year from 28 January 2024 to 27 January 2025. The Netherlands, Germany, and Poland remain the top three countries for the highest number of data breaches notified, with 33,471, 27,829 and 14,286 breaches notified respectively.

AI enforcement

There have been a number of decisions this year signalling the intent of data protection supervisory authorities to closely scrutinise the operation of AI technologies and their alignment with privacy and data protection laws. For businesses, this highlights the need to integrate GDPR compliance into the core design and functionality of their AI systems.

Commenting on the survey findings, Ross McKean, Chair of the UK Data, Privacy and Cybersecurity practice, said:

"European regulators have signalled a more assertive approach to enforcement during 2024 to ensure that AI training, deployment and use remains within the guard rails of the GDPR."

We expect this trend to continue during 2025 as US AI technology comes up against European data protection laws.

John Magee, Global Co-Chair of DLA Piper's Data, Privacy and Cybersecurity practice, commented:

"The headline figures in this year's survey have, for the first time ever, not broken any records so you may be forgiven for assuming a cooling of interest and enforcement by Europe's data regulators. This couldn't be further from the truth. From growing enforcement in sectors away from big tech and social media, to the use of the GDPR as an incumbent guardrail for AI enforcement as AI specific regulation falls into place, to significant fines across the likes of Germany, Italy and the Netherlands, and the UK's shift away from fine-first enforcement – GDPR enforcement remains a dynamic and evolving arena."

Ross McKean added:

"For me, I will mostly remember 2024 as the year that GDPR enforcement got personal."

"As the Dutch DPA champions personal liability for the management of Clearview AI, 2025 may well be the year that regulators pivot more to naming and shaming and personal liability to drive data compliance."

EU: EDPB Opinion on AI Provides Important Guidance though Many Questions Remain
https://privacymatters.dlapiper.com/2025/01/eu-edpb-opinion-on-ai-provides-important-guidance-though-many-questions-remain/ | Tue, 14 Jan 2025

A much-anticipated Opinion from the European Data Protection Board (EDPB) on AI models and data protection has not resulted in the clear or definitive guidance that businesses operating in the EU had hoped for. The Opinion emphasises the need for case-by-case assessments to determine GDPR applicability, highlighting the importance of accountability and record-keeping, while also flagging 'legitimate interests' as an appropriate legal basis under specific conditions. In rejecting the proposed Hamburg thesis, the EDPB has stated that AI models trained on personal data should be considered anonymous only if personal data cannot be extracted or regurgitated.

Introduction

On 17 December 2024, the EDPB published a much-anticipated Opinion on AI models and data protection. The Opinion includes the EDPB's view on the following key questions: does the development and use of an AI model involve the processing of personal data; and if so, what is the correct legal basis for that processing?

As is sometimes the case with EDPB Opinions, which necessarily represent the consensus view of the supervisory authorities of 27 different Member States, the Opinion does not provide many clear or definitive answers. Instead, the EDPB offers indicative guidance and criteria, calling for case-by-case assessments of AI models to understand whether, and how, they are impacted by the GDPR. In this context, the Opinion repeatedly highlights the importance of accountability and record-keeping by businesses developing or using AI, so that the applicability of data protection laws, and the business' compliance with those laws, can be properly assessed.

Whilst the equivocation of the Opinion might be viewed as unhelpful by European businesses looking for regulatory certainty, it is also a reflection of the complexities inherent in this intersection of law and technology.

In summary, the answers given by the EDPB to the four questions in the Opinion are as follows:

  1. Can an AI model, which has been trained using personal data, be considered anonymous? Yes, but only in some cases. It must be impossible, using all means reasonably likely to be used, to obtain personal data from the model, either through attacks which aim to extract the original training data from the model itself, or through interactions with the AI model (i.e., personal data provided in responses to prompts/queries).
  2. Is 'legitimate interests' an appropriate legal basis for the training and development of an AI model? In principle yes, but only where the processing of personal data is necessary to develop the AI model, and where the 'balancing test' can be resolved in favour of the controller. In particular, the issue of data minimisation, and the related issue of web-scraping/indiscriminate capture of data, will be relevant here.
  3. Is 'legitimate interests' an appropriate legal basis for the deployment of an AI model? In principle yes, but only where the processing of personal data is necessary to deploy the AI model, and where the 'balancing test' can be resolved in favour of the controller. Here, the impact on the data subject of the use of the AI model is of predominant importance.
  4. If an AI model has been found to have been created, updated or developed using unlawfully processed personal data, how does this impact the subsequent use of that AI model? This depends in part on whether the AI model was first anonymised before being disclosed to the deployer of that model (see Question 1). Otherwise, the deployer of the model may need to assess the lawfulness of the development of the model as part of its accountability obligations.

Background

The Opinion was issued by the EDPB under Article 64 of the GDPR, in response to a request from the Irish Data Protection Commission. Article 64 requires the EDPB to publish an opinion on matters of 'general application' or which 'produce effects in more than one Member State'.

In this case, the Irish DPC asked the EDPB to provide an opinion on the above-mentioned questions – a request that is not surprising given the general importance of AI models to businesses across the EU, but also in light of the large number of technology companies developing those models who have established their European operations in Ireland.

In order to understand the Opinion, it helps to be familiar with certain concepts and terminology relating to AI.

First, the Opinion distinguishes between an 'AI system' and an 'AI model'. For the former, the EDPB relies on the definition given in the EU AI Act. In short: a machine-based system operating with some degree of autonomy that infers, from inputs, how to produce outputs such as predictions, content, recommendations, or decisions. An AI model, meanwhile, is a component part of an AI system. Colloquially, it is the 'brain' of the AI system – an algorithm, or series of algorithms (such as in the form of a neural network), that recognises patterns in data. AI models require the addition of further components, such as a user interface, to become AI systems. To take a common example – the generative AI system known as ChatGPT is a software application comprised of an AI model (the GPT Large Language Model) connected to a chatbot-style user interface that allows the user to submit queries (or 'prompts') to the model in the form of natural language questions. Whilst the Opinion is notionally concerned only with AI models, at times the Opinion appears to blur the distinction between the model and the system, in particular when discussing the significance of model outputs that are only rendered comprehensible to the user through an interface that sits outside of the model.

Second, the Opinion relies on an understanding of a typical 'AI lifecycle', pursuant to which an AI model is first developed by training the model on large volumes of data. This training may happen in a number of phases which become increasingly refined (referred to as 'fine-tuning'). Only after an AI model is developed can it be used, or 'deployed', in a live setting, as part of an AI system. Often, the developer of an AI model will not be the same person as the deployer. This is relevant because the Opinion variously addresses both development and deployment phases.

The significance of the 'Hamburg thesis'

With respect to the key question of whether AI models can be considered anonymous, the Opinion follows in the wake of a much-discussed paper published in July 2024 by the data protection authority for the German state of Hamburg. The paper took the position that AI models (specifically, Large Language Models) are, in isolation, anonymous – they do not involve the processing of personal data.

In order to reach that conclusion, the paper decoupled the model itself from: (i) the prior training of the model (which may involve the collection and further processing of personal data as part of the training dataset); and (ii) the subsequent use of the model, whereby a prompt/input may contain personal data, and an output may be used in a way that means it constitutes personal data.

Looking only at the AI model itself, the paper decided that the tokens and values which make up the 'inner workings' of a typical AI model do not, in any meaningful way, relate to or correspond with information about identifiable individuals. Consequently, the model itself was found to be anonymous, even if the development and use of the model involves the processing of personal data.

The Hamburg thesis was welcomed for several reasons, not least because it resolved difficult questions such as how data subject rights could be understood in relation to an AI model (if someone asks for their personal data to be deleted, then what can this mean in the context of an AI model?), and the question of the lawful basis for 'storing' personal data in an AI model (as distinct from the lawful basis for collecting and preparing data to train the model).

However, as we go on to explain, the EDPB Opinion does not follow the relatively simple and certain framework presented by the Hamburg thesis. Instead, it introduces uncertainty by asserting that there are, in fact, scenarios where an AI model contains personal data, but that this must be determined on a case-by-case basis.

    Are AI models anonymous?

    First, the Opinion is only concerned with AI models that have been trained using personal data.  Therefore, AI models trained using solely non-personal data (such as statistical data, or financial data relating to businesses) can, for the avoidance of doubt, be considered anonymous.  However, in this context the broad scope of ‘personal data’ under the GDPR must be remembered, and the Opinion does not suggest any de minimis level of personal data that needs to be involved in the training of the AI model for the question of GDPR applicability to arise.

    Where personal data is used in the training phase, the next question is whether the model is specifically designed to provide personal data regarding individuals whose personal data were used to train the model.  If so, the AI model will not be anonymous.  For example, an AI model that is trained to provide a user, on request, with biographical information and contact details for directors of public companies, or a generative AI model that is trained on the voice recordings of famous singers so that it can, in turn, mimic the voices of those singers.  In each case, the model is trained on personal data of specific individuals, in order to be able to produce other personal data about those individuals as an output. 

    Finally, there is the intermediary case of AI models that are trained on personal data, but that are not designed to provide personal data related to the training data as an output.  It is this use case that the Opinion focuses on.  The conclusion is that AI models in this category may be anonymous, but only if the developer of the model can demonstrate that information about individuals whose personal data was used to train the model cannot be ‘obtained from’ the model, using all means reasonably likely to be used.  Notwithstanding that personal data used for training the model no longer exists within the model in its original form (but rather it is “represented through mathematical objects“), that information is, in the eyes of the EDPB, still capable of constituting personal data.

    The following question then arises: how does someone ‘obtain’ personal data from an AI model? In short, the Opinion posits two possibilities.  The first is that training data is ‘extracted’ via deliberate attacks.  The Opinion refers to an evolving field of research in this area and makes reference to techniques such as ‘model inversion’, ‘reconstruction attacks’, and ‘attribute and membership inference’.  These are techniques that can be deployed to trick the model into revealing training data, or otherwise reconstruct that training data, in some cases relying on privileged access to the model itself.  The second is the risk of accidental or inadvertent ‘regurgitation’ of personal data as part of an AI model’s outputs.
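
    To make the extraction risk more concrete, the sketch below illustrates the intuition behind a simple, loss-based membership inference test. It is purely illustrative and is not drawn from the Opinion: the `model.loss()` API, the threshold and the reference data are all hypothetical assumptions, and real attacks (and real defences) are considerably more sophisticated.

```python
# Illustrative sketch only (not from the EDPB Opinion): a naive, loss-based
# membership inference test. `model.loss(example)` is a hypothetical API that
# returns the model's loss on a single example; real attacks are more involved.
import statistics

def membership_score(model, candidate, reference_examples):
    """Compare the model's loss on `candidate` with its losses on examples
    that were definitely not in the training set. A strongly negative z-score
    suggests the candidate may have been memorised during training."""
    candidate_loss = model.loss(candidate)
    reference_losses = [model.loss(x) for x in reference_examples]
    mean_ref = statistics.mean(reference_losses)
    stdev_ref = statistics.stdev(reference_losses) or 1e-9  # avoid division by zero
    return (candidate_loss - mean_ref) / stdev_ref

def looks_like_training_member(model, candidate, reference_examples, threshold=-3.0):
    """Crude yes/no signal: True if the model is suspiciously 'confident' on the candidate."""
    return membership_score(model, candidate, reference_examples) < threshold
```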

    Consequently, a developer must be able to demonstrate that its AI model is resistant both to attacks that extract personal data directly from the model and to the risk of regurgitation of personal data in response to queries:  “In sum, the EDPB considers that, for an AI model to be considered anonymous, using reasonable means, both (i) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to train the model; as well as (ii) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant for any data subject“.

    Which criteria should be used to evaluate whether an AI model is anonymous?

    Recognising the uncertainty in its conclusion that AI models may or may not be anonymous, the EDPB provides a list of criteria that can be used to assess the likelihood of a model being found to contain personal data.  These include:

    • Steps taken to avoid or limit the collection of personal data during the training phase.
    • Data minimisation or masking measures (e.g., pseudonymisation) applied to reduce the volume and sensitivity of personal data used during the training phase.
    • The use of methodologies during model development that reduce privacy risks (e.g., regularisation methods to improve model generalisation and reduce overfitting, and appropriate and effective privacy-preserving techniques, such as differential privacy).
    • Measures that reduce the likelihood of obtaining personal data from queries (e.g., ensuring the AI system blocks the presentation to the user of outputs that may contain personal data; an illustrative sketch of such an output filter follows this list).
    • Document-based audits (internal or external) undertaken by the model developer that include an evaluation of the chosen measures and their effectiveness in limiting the likelihood of identification.
    • Testing of the model to demonstrate its resilience to different forms of data extraction attacks.
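
    By way of illustration of the output-blocking criterion above, the following is a minimal sketch of a regex-based output filter. It is an assumption-laden toy example rather than guidance from the Opinion: a real deployment would combine pattern matching with named-entity recognition, broader categories of identifiers and human review.

```python
# Illustrative sketch only: a very simple output filter that redacts or withholds
# model outputs containing obvious personal data patterns (emails, phone numbers).
# Real systems would use far more robust PII detection than these two regexes.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def filter_output(text: str, redact: bool = True) -> str:
    """Redact (or withhold entirely) an output that appears to contain personal data."""
    if not any(p.search(text) for p in PII_PATTERNS.values()):
        return text
    if not redact:
        return "[output withheld: possible personal data detected]"
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

# Example:
# filter_output("Contact Jane at jane.doe@example.com or +44 20 7946 0958")
# -> "Contact Jane at [email removed] or [phone removed]"
```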

    What is the correct legal basis for AI models?

    When using personal data to train an AI model, the preferred legal basis is normally the ‘legitimate interests’ of the controller, under Article 6(1)(f) GDPR. This is for practical reasons. Whilst, in some circumstances, it may be possible to obtain GDPR-compliant consent from individuals authorising the use of their data for AI training purposes, in most cases this will not be feasible. 

    Helpfully, the Opinion accepts that legitimate interests is, in principle, a viable legal basis for processing personal data to train an AI model. The Opinion also suggests that it should be straightforward for businesses to identify a lawful legitimate interest. For example, it cites “developing an AI system to detect fraudulent content or behaviour” as a sufficiently precise and real interest.

    Businesses may, however, have more difficulty in showing that the processing of personal data is necessary to realise their legitimate interest, and that their legitimate interest is not outweighed by any impact on the rights and freedoms of data subjects (the ‘balancing test’). Whilst this is fundamentally just a restatement of existing legal principles, the following sentence should nevertheless cause some concern for businesses developing AI models, in particular Large Language Models: “If the pursuit of the purpose is also possible through an AI model that does not entail processing of personal data, then processing personal data should be considered as not necessary“. Technically speaking, it may often be the case that personal data is not essential for the training of an AI model – however, this does not mean that it is straightforward to systematically remove all personal data from a training dataset, or otherwise replace all identifying elements with ‘dummy’ values.

    With respect to the balancing test, the EDPB asks businesses to consider a data subject’s interest in self-determination and in maintaining control over their own data when considering whether it is lawful to collect personal data for model training purposes.  In particular, it may be more difficult to satisfy the balancing test if a developer is scraping large volumes of personal data (especially including any sensitive data categories) against data subjects’ wishes, without their knowledge, or otherwise in contexts that would not be reasonably expected by the data subject.

    When it comes to the separate purpose of deploying an AI model, the EDPB asks businesses to consider the impact on the data subject’s fundamental rights that arises from the purpose for which the AI model is used.  For example, AI models that are used to block content publication may adversely affect a data subject’s fundamental right to freedom of expression.  Conversely, the EDPB recognises that the deployment of AI models may have a positive impact on a data subject’s rights and freedoms – for example, an AI model that is used to improve accessibility to certain services for people with disabilities. In line with Recital 47 GDPR, the EDPB reminds controllers to consider the ‘reasonable expectations’ of data subjects in relation to both training and deployment uses of personal data.

    Finally, the Opinion discusses a range of ‘mitigating measures’ that may be used to reduce risks to data subjects and therefore tip the balancing test in favour of the controller.  These include:

    • Technical measures to reduce the volume or sensitivity of personal data used (e.g., pseudonymisation, masking – an illustrative sketch follows this list).
    • Measures to facilitate the exercise of data subject rights (e.g., providing an unconditional right for data subjects to opt-out of the use of their personal data for training or deploying the model; allowing a reasonable period of time to elapse between collection of training data and its use).
    • Transparency measures (e.g., public communications about the controller’s practices in connection with the use of personal data for AI model development).
    • Measures specific to web-scraping (e.g., excluding publications that present particular risks; excluding certain data categories or sources; excluding websites that clearly object to web scraping).
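
    As a purely illustrative example of the first category of mitigating measures above (pseudonymisation and masking of training data), the sketch below shows one way a raw record might be prepared before it enters a training set. The field names and the salted-hash approach are assumptions for illustration only, not taken from the Opinion; and pseudonymised data generally remains personal data under the GDPR, which is why the EDPB treats such measures as risk reduction rather than anonymisation.

```python
# Illustrative sketch only: pseudonymising direct identifiers and masking emails
# in a raw record before it enters a model training dataset. Field names are
# hypothetical; a real pipeline would cover many more identifier types and
# manage the salt as a protected secret.
import hashlib
import re

SALT = "replace-with-a-securely-stored-secret"
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a truncated salted hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def mask_free_text(text: str) -> str:
    """Mask obvious identifiers (here, only email addresses) inside free text."""
    return EMAIL_RE.sub("[email]", text)

def prepare_training_record(record: dict) -> dict:
    """Return a copy of a raw record with direct identifiers pseudonymised or masked."""
    return {
        "user_id": pseudonymise(record["user_id"]),
        "review_text": mask_free_text(record["review_text"]),
        "rating": record["rating"],  # non-identifying field passed through unchanged
    }
```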

    Notably, the EDPB observes that, to be effective, these mitigating measures must go beyond mere compliance with GDPR obligations (for example, providing a GDPR compliant privacy notice, which a controller would in any case be required to do, would not be an effective transparency measure for these purposes). 

    When are companies liable for non-compliant AI models?

    In its final question, the DPC sought clarification from the EDPB on how a deployer of an AI model might be impacted by any unlawful processing of personal data in the development phase of the AI model. 

    According to the EDPB, such ‘upstream’ unlawful processing may impact a subsequent deployer of an AI model in the following ways:

    • Corrective measures taken against the developer may have a knock-on effect on the deployer – for example, if the developer is ordered to delete personal data unlawfully collected for training purposes, the deployer would not be allowed to subsequently process this data. However, this raises an important practical question about how such data could be identified in, and deleted from, the AI model, taking into account the fact that the model does not retain training data in its original form.
    • Unlawful processing in the development phase may impact the legal basis for the deployment of the model – in particular, if the deployer of the AI model is relying on ‘legitimate interests’, it will be more difficult to satisfy the balancing test in light of the deficiencies associated with the collection and use of the training data.

    In light of these risks, the EDPB recommends that deployers take reasonable steps to assess the developer’s compliance with data protection laws during the training phase.  For example, can the developer explain the sources of data used, steps taken to comply with the minimisation principle, and any legitimate interest assessments conducted for the training phase?  For certain AI models, the transparency obligations imposed in relation to AI systems under the AI Act should assist a deployer in obtaining this information from a third-party AI model developer.

    While the Opinion provides a useful framework for assessing GDPR issues with AI systems, businesses operating in the EU may be frustrated with the lack of certainty or definitive guidance on many key questions relating to this new era of technology innovation.

    CHINA: Draft Regulation on Certification for Cross-Border Data Transfers Published https://privacymatters.dlapiper.com/2025/01/7523/ Tue, 14 Jan 2025 12:02:22 +0000 https://privacymatters.dlapiper.com/?p=7523

    On 3 January 2025, the Cyberspace Administration of China (“CAC“) released for public consultation the draft Measures for Certification of Personal Information Protection for Cross-Border Transfer of Personal Information (“Draft Measures“). This regulation represents the final piece in the CAC’s regulatory framework for the three routes to legitimize cross-border transfers of personal data outside of China (“CBDTs“).

    To recap, Chinese law requires data controllers to take one of the following three routes to legitimize CBDTs, unless they qualify for specific exemptions under the Provisions on Promoting and Regulating Cross-Border Data Flows (click here for our summary, “Provisions“) or local rules:

    • CAC security assessment;
    • Standard Contractual Clauses (“SCCs“) filing; or
    • CAC-accredited certification.

    If enacted, the Draft Measures will provide significant clarity regarding the certification route, offering data controllers both within and outside of China a viable option for legitimizing CBDTs. Below is a practical guide to the key provisions of the Draft Measures, along with our recommendations for data controllers engaged in CBDTs in light of this new regulation.

    Who can utilise the certification route?

    Data controllers in China: In alignment with the conditions outlined in the Provisions, the Draft Measures reiterate that a data controller in China may pursue the certification route if:

    • the data controller is not a critical information infrastructure operator (“CIIO“);
    • no important data is transferred outside of China; and
    • it has cumulatively transferred non-sensitive personal data of 100,000-1,000,000 individuals or sensitive personal data of less than 10,000 individuals outside of China since the beginning of the year.

    It is worth noting that these conditions are the same as those for taking the SCCs filing route, making the certification route an effective alternative to the SCCs filing route for data controllers in China.

    Overseas data controllers: The certification route is also available to data controllers outside of China that fall under the extraterritorial jurisdiction of the Personal Information Protection Law (“PIPL“), i.e. those processing personal data of residents in China to provide products or services to them or analyze or evaluate their behavior.

    The Draft Measures do not specify the volume threshold or other conditions for overseas data controllers to take the certification route. It remains to be clarified whether overseas data controllers with a limited scope of CBDTs (e.g. those not reaching the volume threshold for data controllers in China as outlined above) can be exempted from obtaining certification or following the other legitimizing routes.

    From which certification bodies can a data controller obtain the certification?

    Certification bodies that have received approval from the State Administration for Market Regulation (“SAMR“) and have completed a filing process with the CAC are qualified to issue the CBDT certification.

    What are the evaluation criteria for the certification?

    The evaluation for the certification will focus on the following aspects:

    • the legality, legitimacy and necessity of the purposes, scope and methods of the CBDT;
    • the impact of the personal data protection laws and policies and network and data security environment of the country/region where the overseas data controller/recipient is located on the security of the transferred personal data;
    • whether the overseas data controller/recipient’s level of personal data protection meets the requirements under Chinese laws, regulations and mandatory national standards;
    • whether the legally binding agreement between the data controller and the overseas data recipient imposes obligations for personal data protection;
    • whether the organizational structure, management system, and technical measures of the data controller and the overseas data recipient can adequately and effectively ensure data security and protect individuals’ rights and interests regarding their personal data; and
    • other aspects deemed necessary by certification bodies according to relevant standards for personal information protection certification.

    Are there special requirements for overseas data controllers pursuing certification?

    Yes. An overseas data controller governed by the PIPL seeking certification must submit the application with the assistance of its dedicated institution or designated representative located in China (the presence of which is a requirement under the PIPL).

    The Draft Measures also make it clear that overseas data controllers must, like data controllers in China, assume the legal responsibilities associated with the certification process, undertake to comply with relevant Chinese data protection laws and regulations, and be subject to supervision by Chinese regulators and certification bodies.

    How are certification processes and results supervised?

    The Draft Measures grant supervisory powers to both the SAMR and the CAC, which can conduct random checks on certification processes and results, and evaluate certification bodies. Certified data controllers will also be subject to continuous supervision by their certification bodies.

    If a certified data controller is found to no longer meet the certification requirements (e.g. the actual scope of the CBDT is inconsistent with that specified in the certification), the certification will be suspended or revoked, and the suspension or revocation will be made public.

    Are there ancillary rules and standards on the horizon?

    Probably yes. The Draft Measures indicate that the CAC will collaborate with relevant regulators to formulate standards, technical regulations, and conformity assessment procedures for CBDT certification and work alongside the SAMR to develop implementation rules and unified certificates and marks for CBDT certification.

    Is the certification likely to be recognised in other jurisdictions?

    Probably yes. According to the Draft Measures, China will facilitate mutual recognition of personal information protection certification with other countries, regions, and international organizations.

    Recommendations

    As discussed, the Draft Measures make available a tangible certification route to legitimize CBDTs for data controllers both within and outside of China. Data controllers should carefully evaluate and choose between the three legitimizing routes when engaging in CBDTs, considering their respective pros and cons and suitability for the controllers’ specific patterns of CBDTs. For example, the certification route may be advantageous for complex CBDTs among multiple parties where signing of SCCs is challenging. To make well-informed decisions, data controllers engaged in CBDTs are recommended to closely monitor developments related to the Draft Measures in the months following the conclusion of the public consultation period on 3 February 2025, and remain vigilant for any release of ancillary rules and standards. This is particularly necessary because some important details about the certification route, such as the validity period of the certification and any thresholds for overseas data controllers to take the certification route, remain unclear.

    Overseas data controllers processing personal data of residents in China should also be aware of the Draft Measures, as they specifically outline the certification route. This represents a further enhancement of Chinese regulations governing overseas data controllers, following clarifications regarding the procedure for reporting dedicated institutions or designated representatives of overseas data controllers under the Network Data Security Management Regulation that took effect on 1 January 2025 (click here for our summary). Given this trend, overseas data controllers processing personal data of residents in China should consider assessing whether they fall under the extraterritorial jurisdiction of Chinese data protection laws and, if so, evaluating the practical risks of non-compliance with such laws (e.g. the impact of potential service disruptions or access restrictions). If compliance with Chinese data protection laws turns out to be necessary, it is advisable to implement a comprehensive program to navigate how China’s CBDT restrictions and, more broadly, its complex data regulatory framework may apply to the overseas data controller and devise compliance strategies.

    It is also important to remember that the legitimizing routes are not the sole requirement for CBDTs under Chinese law. Regardless of the chosen route, data controllers must implement other compliance measures for CBDTs, including obtaining separate consent from data subjects, conducting personal information impact assessments, and maintaining records of processing activities.

    Australia: Privacy Act amendments and Cyber Security Act become law https://privacymatters.dlapiper.com/2024/12/australia-privacy-act-amendments-and-cyber-security-act-become-law/ Thu, 05 Dec 2024 09:37:47 +0000 https://privacymatters.dlapiper.com/?p=7512

    On 29 November 2024, the Australian Senate passed the Privacy and Other Legislation Amendment Bill 2024 (Cth) (the Privacy Act Bill).  This follows the passage of the Cyber Security Act 2024 (Cth), and other cyber-security related amendments, on 25 November 2024.

    The majority of the amendments to the Privacy Act 1988 (Cth) will commence the day after the Privacy Act Bill receives Royal Assent, with a few exceptions.

    The Privacy Act Bill contains key amendments to the Privacy Act including:

    • A statutory tort for serious invasions of privacy – this will only apply (amongst other criteria) where the conduct in question was intentional or reckless, and this section of the Bill will take effect no later than six months after the Act receives Royal Assent.
    • The framework for a Children’s Online Privacy Code – this will be developed by the Information Commissioner and will apply to social media platforms and any online services likely to be accessed by children.
    • Tiered sanctions for less serious privacy breaches – this includes civil penalties of up to AUD 3.3 million for an “interference with privacy” and lower level fines of up to AUD 330,000 for administrative breaches, such as deficient privacy policies.  The headline penalties of up to the greater of AUD 50 million, three times the benefit of a contravention, or 30% of annual turnover, remain for conduct which amounts to a “serious interference with privacy”.
    • Requirements to include details of the use of automated decision making in privacy policies, where personal information is used in wholly or substantially automated decision making that could reasonably be expected to significantly affect the rights or interests of an individual.  However, this requirement will not take effect for 24 months.
    • The introduction of a criminal offence for doxing.
    • Eligible data breach declarations and information sharing – these are designed to allow limited information sharing following a data breach, in circumstances which would otherwise be in breach of the Privacy Act (such as disclosing information to banks and other institutions for the purpose of enhanced monitoring).
    • Clarifications to APP 11 to ensure it is clear that the reasonable steps which entities must take to protect personal information include “technical and organisational measures”.
    • The introduction of equivalency decisions under APP 8 to facilitate cross-border transfers of data.

    Our previous post, available here, provides further insights regarding these changes.

    Whilst the Privacy Act Bill implements some of the recommendations from the Privacy Act Review Report, subsequent tranches of amendments are expected in the next 12-18 months to implement the remaining recommendations.

    The Cyber Security Act 2024 (Cth), which received Royal Assent on 29 November 2024, introduces:

    • A mandatory ransomware reporting requirement – reports must be made to the Department of Home Affairs if a ransomware payment is made to an extorting entity. This requirement will be implemented after a six-month implementation period, and is drafted so as to also capture ransomware payments made on behalf of an entity doing business in Australia.
    • A Cyber Review Board which will conduct no-fault, post incident reviews of significant cyber security incidents in Australia.
    • A limited use exception –  this prevents information which is voluntarily provided to certain Government departments from being used for enforcement purposes, and is designed to encourage enhanced cooperation between industry and Government during cyber incidents.
    • Mandatory security standards for smart devices.

    Our previous post, available here, includes further details on the cyber security legislative package.
