James Clark, Heidi Waem, John Magee and Rachel de Souza | Privacy Matters | DLA Piper Data Protection and Privacy
DLA Piper's Global Privacy and Data Protection Resource | https://privacymatters.dlapiper.com/author/jclark/

EU: EDPB Opinion on AI Provides Important Guidance though Many Questions Remain
https://privacymatters.dlapiper.com/2025/01/eu-edpb-opinion-on-ai-provides-important-guidance-though-many-questions-remain/ | 14 January 2025

A much-anticipated Opinion from the European Data Protection Board (EDPB) on AI models and data protection has not resulted in the clear or definitive guidance that businesses operating in the EU had hoped for. The Opinion emphasises the need for case-by-case assessments to determine GDPR applicability, highlighting the importance of accountability and record-keeping, while also flagging ‘legitimate interests’ as an appropriate legal basis under specific conditions. In rejecting the proposed Hamburg thesis, the EDPB has stated that AI models trained on personal data should be considered anonymous only if personal data cannot be extracted or regurgitated.

Introduction

On 17 December 2024, the EDPB published a much-anticipated Opinion on AI models and data protection.  The Opinion includes the EDPB’s view on the following key questions: does the development and use of an AI model involve the processing of personal data; and if so, what is the correct legal basis for that processing?

As is sometimes the case with EDPB Opinions, which necessarily represent the consensus view of the supervisory authorities of 27 different Member States, the Opinion does not provide many clear or definitive answers.  Instead, the EDPB offers indicative guidance and criteria, calling for case-by-case assessments of AI models to understand whether, and how, they are impacted by the GDPR.  In this context, the Opinion repeatedly highlights the importance of accountability and record-keeping by businesses developing or using AI, so that the applicability of data protection laws, and the business’ compliance with those laws, can be properly assessed. 

Whilst the equivocation of the Opinion might be viewed as unhelpful by European businesses looking for regulatory certainty, it is also a reflection of the complexities inherent in this intersection of law and technology.

In summary, the answers given by the EDPB to the four questions in the Opinion are as follows:

  1. Can an AI model, which has been trained using personal data, be considered anonymous?  Yes, but only in some cases.  It must be impossible, using all means reasonably likely to be used, to obtain personal data from the model, either through attacks which aim to extract the original training data from the model itself, or through interactions with the AI model (i.e., personal data provided in responses to prompts / queries). 
  2. Is ‘legitimate interests’ an appropriate legal basis for the training and development of an AI model? In principle yes, but only where the processing of personal data is necessary to develop the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  In particular, the issue of data minimisation, and the related issue of web-scraping / indiscriminate capture of data, will be relevant here. 
  3. Is ‘legitimate interests’ an appropriate legal basis for the deployment of an AI model? In principle yes, but only where the processing of personal data is necessary to deploy the AI model, and where the ‘balancing test’ can be resolved in favour of the controller.  Here, the impact on the data subject of the use of the AI model is of predominant importance.
  4. If an AI Model has been found to have been created, updated or developed using unlawfully processed personal data, how does this impact the subsequent use of that AI model?  This depends in part on whether the AI model was first anonymised before being disclosed to the deployer of that model (see Question 1).  Otherwise, the deployer of the model may need to assess the lawfulness of the development of the model as part of its accountability obligations.

Background

The Opinion was issued by the EDPB under Article 64 of the GDPR, in response to a request from the Irish Data Protection Commission.  Article 64 requires the EDPB to publish an opinion on matters of ‘general application’ or which ‘produce effects in more than one Member State’. 

In this case, the Irish DPC asked the EDPB to provide an opinion on the above-mentioned questions – a request that is not surprising given the general importance of AI models to businesses across the EU, and in light of the large number of technology companies developing those models that have established their European operations in Ireland. 

In order to understand the Opinion, it helps to be familiar with certain concepts and terminology relating to AI. 

First, the Opinion distinguishes between an ‘AI system’ and an ‘AI model’. For the former, the EDPB relies on the definition given in the EU AI Act. In short: a machine-based system operating with some degree of autonomy that infers, from inputs, how to produce outputs such as predictions, content, recommendations, or decisions.  An AI model, meanwhile, is a component part of an AI system. Colloquially, it is the ‘brain’ of the AI system – an algorithm, or series of algorithms (such as in the form of a neural network), that recognises patterns in data. AI models require the addition of further components, such as a user interface, to become AI systems. To take a common example – the generative AI system known as ChatGPT is a software application comprised of an AI model (the GPT Large Language Model) connected to a chatbot-style user interface that allows the user to submit queries (or ‘prompts’) to the model in the form of natural language questions. Whilst the Opinion is notionally concerned only with AI models, at times it appears to blur the distinction between the model and the system, in particular when discussing the significance of model outputs that are only rendered comprehensible to the user through an interface that sits outside of the model.

Second, the Opinion relies on an understanding of a typical ‘AI lifecycle’, pursuant to which an AI model is first developed by training the model on large volumes of data.  This training may happen in a number of phases which become increasingly refined (referred to as ‘fine-tuning’). Only after an AI model is developed can it be used, or ‘deployed’, in a live setting, as part of an AI system.  Often, the developer of an AI model will not be the same person as the deployer.  This is relevant because the Opinion variously addresses both development and deployment phases.

The significance of the ‘Hamburg thesis’

With respect to the key question of whether AI models can be considered anonymous, the Opinion follows in the wake of a much-discussed paper published in July 2024 by the data protection authority for the German state of Hamburg.  The paper took the position that AI models (specifically, Large Language Models) are, in isolation, anonymous – they do not involve the processing of personal data. 

In order to reach that conclusion, the paper decoupled the model itself from: (i) the prior training of the model (which may involve the collection and further processing of personal data as part of the training dataset); and (ii) the subsequent use of the model, whereby a prompt/input may contain personal data, and an output may be used in a way that means it constitutes personal data.

Looking only at the AI model itself, the paper decided that the tokens and values which make up the ‘inner workings’ of a typical AI model do not, in any meaningful way, relate to or correspond with information about identifiable individuals.  Consequently, the model itself was found to be anonymous, even if the development and use of the model involves the processing of personal data. 

The Hamburg thesis was welcomed for several reasons, not least because it resolved difficult questions such as how data subject rights could be understood in relation to an AI model (if someone asks for their personal data to be deleted, then what can this mean in the context of an AI model?), and the question of the lawful basis for ‘storing’ personal data in an AI model (as distinct from the lawful basis for collecting and preparing data to train the model).

However, as we go on to explain, the EDPB Opinion does not follow the relatively simple and certain framework presented by the Hamburg thesis.  Instead, it introduces uncertainty by asserting that there are, in fact, scenarios where an AI model contains personal data, but that this must be determined on a case-by-case basis.

Are AI models anonymous?

First, the Opinion is only concerned with AI models that have been trained using personal data.  Therefore, AI models trained using solely non-personal data (such as statistical data, or financial data relating to businesses) can, for the avoidance of doubt, be considered anonymous.  However, in this context the broad scope of ‘personal data’ under the GDPR must be remembered, and the Opinion does not suggest any de minimis level of personal data that needs to be involved in the training of the AI model for the question of GDPR applicability to arise.

Where personal data is used in the training phase, the next question is whether the model is specifically designed to provide personal data regarding individuals whose personal data were used to train the model.  If so, the AI model will not be anonymous.  For example, an AI model that is trained to provide a user, on request, with biographical information and contact details for directors of public companies, or a generative AI model that is trained on the voice recordings of famous singers so that it can, in turn, mimic the voices of those singers.  In each case, the model is trained on personal data of specific individuals, in order to be able to produce other personal data about those individuals as an output. 

Finally, there is the intermediary case of AI models that are trained on personal data, but that are not designed to provide personal data related to the training data as an output.  It is this use case that the Opinion focuses on.  The conclusion is that AI models in this category may be anonymous, but only if the developer of the model can demonstrate that information about individuals whose personal data was used to train the model cannot be ‘obtained from’ the model, using all means reasonably likely to be used.  Notwithstanding that personal data used for training the model no longer exists within the model in its original form (but rather it is “represented through mathematical objects“), that information is, in the eyes of the EDPB, still capable of constituting personal data.

The following question then arises: how does someone ‘obtain’ personal data from an AI model? In short, the Opinion posits two possibilities.  The first is that training data is ‘extracted’ via deliberate attacks.  The Opinion refers to an evolving field of research in this area and makes reference to techniques such as ‘model inversion’, ‘reconstruction attacks’, and ‘attribute and membership inference’.  These are techniques that can be deployed to trick the model into revealing training data, or otherwise reconstruct that training data, in some cases relying on privileged access to the model itself.  The second is the risk of accidental or inadvertent ‘regurgitation’ of personal data as part of an AI model’s outputs. 

Consequently, a developer must be able to demonstrate that its AI model is resistant both to attacks that extract personal data directly from the model, as well as to the risk of regurgitation of personal data in response to queries:  “In sum, the EDPB considers that, for an AI model to be considered anonymous, using reasonable means, both (i) the likelihood of direct (including probabilistic) extraction of personal data regarding individuals whose personal data were used to train the model; as well as (ii) the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant for any data subject“. 
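
By way of illustration only, the sketch below shows one very simple form of membership inference test of the kind referred to in this body of research: comparing the confidence a model assigns to records it was trained on with the confidence it assigns to records it has never seen. It is a minimal, hypothetical example using a scikit-learn classifier on synthetic data – it is not the EDPB’s methodology, and real-world attacks (and real resilience testing) are considerably more sophisticated.

```python
# Minimal, illustrative membership inference test (hypothetical example).
# Assumptions: a scikit-learn classifier, synthetic data, and a simple
# confidence threshold chosen by the tester.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_members, X_non_members, y_members, y_non_members = train_test_split(
    X, y, test_size=0.5, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_members, y_members)

def confidence_on_true_label(model, X, y):
    # Probability the model assigns to the correct label for each record.
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

member_conf = confidence_on_true_label(model, X_members, y_members)
non_member_conf = confidence_on_true_label(model, X_non_members, y_non_members)

# If training-set records systematically receive higher confidence than
# unseen records, an attacker can guess membership better than chance.
threshold = np.median(np.concatenate([member_conf, non_member_conf]))
member_hit_rate = (member_conf > threshold).mean()
non_member_false_alarm_rate = (non_member_conf > threshold).mean()
print(f"members flagged: {member_hit_rate:.2f}, "
      f"non-members flagged: {non_member_false_alarm_rate:.2f}")
```

If the two confidence distributions are clearly separable, membership of the training set can be inferred better than chance – one indication that, in the EDPB’s terms, personal data may still be obtainable from the model using reasonable means.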

Which criteria should be used to evaluate whether an AI model is anonymous?

Recognising the uncertainty in its conclusion that AI models may or may not be anonymous, the EDPB provides a list of criteria that can be used to assess the likelihood of a model being found to contain personal data.  These include:

  • Steps taken to avoid or limit the collection of personal data during the training phase.
  • Data minimisation or masking measures (e.g., pseudonymisation) applied to reduce the volume and sensitivity of personal data used during the training phase.
  • The use of methodologies during model development that reduce privacy risks (e.g., regularisation methods to improve model generalisation and reduce overfitting, and appropriate and effective privacy-preserving techniques, such as differential privacy – a brief illustrative sketch of which follows this list).
  • Measures that reduce the likelihood of obtaining personal data from queries (e.g., ensuring the AI system blocks the presentation to the user of outputs that may contain personal data).
  • Document-based audits (internal or external) undertaken by the model developer that include an evaluation of the chosen measures and of their impact to limit the likelihood of identification.
  • Testing of the model to demonstrate its resilience to different forms of data extraction attacks.
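
For context on the privacy-preserving techniques mentioned in the criteria above, the following is a minimal, illustrative sketch of the Laplace mechanism, one common building block of differential privacy. The epsilon value and the toy dataset are our own illustrative assumptions, and applying differential privacy to model training in practice (for example, via DP-SGD) involves considerably more machinery.

```python
# Illustrative sketch of the Laplace mechanism (a differential privacy primitive).
# Assumptions: a counting query with sensitivity 1 and an example epsilon of 1.0.
import numpy as np

rng = np.random.default_rng(seed=0)

def dp_count(values, epsilon=1.0, sensitivity=1.0):
    """Return a differentially private count of truthy values."""
    true_count = sum(bool(v) for v in values)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

records = [True] * 40 + [False] * 60  # toy dataset of 100 records
print(dp_count(records))  # noisy count near 40; the noise masks any single record
```

The core idea is that calibrated noise masks the contribution of any individual record, which is why the EDPB lists such techniques among the measures that make a claim of anonymity more plausible.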

What is the correct legal basis for AI models?

When using personal data to train an AI model, the preferred legal basis is normally the ‘legitimate interests’ of the controller, under Article 6(1)(f) GDPR. This is for practical reasons. Whilst, in some circumstances, it may be possible to obtain GDPR-compliant consent from individuals authorising the use of their data for AI training purposes, in most cases this will not be feasible. 

Helpfully, the Opinion accepts that legitimate interests is, in principle, a viable legal basis for processing personal data to train an AI model. Further, the Opinion also suggests that it should be straightforward for businesses to identify a lawful legitimate interest. For example, the Opinion cites “developing an AI system to detect fraudulent content or behaviour” as a sufficiently precise and real interest. 

However, where businesses may have more difficulty is in showing that the processing of personal data is necessary to realise their legitimate interest, and that their legitimate interest is not outweighed by any impact on the rights and freedoms of data subjects (the ‘balancing test’). Whilst this is fundamentally just a restatement of existing legal principles, the following sentence should nevertheless cause some concern for businesses developing AI models, in particular Large Language Models: “If the pursuit of the purpose is also possible through an AI model that does not entail processing of personal data, then processing personal data should be considered as not necessary“. Technically speaking, it may often be the case that personal data is not essential for the training of an AI model – however, this does not mean that it is straightforward to systematically remove all personal data from a training dataset, or otherwise replace all identifying elements with ‘dummy’ values. 

With respect to the balancing test, the EDPB asks businesses to consider a data subject’s interest in self-determination and in maintaining control over their own data when considering whether it is lawful to collect personal data for model training purposes.  In particular, it may be more difficult to satisfy the balancing test if a developer is scraping large volumes of personal data (especially including any sensitive data categories) against data subjects’ wishes, without their knowledge, or otherwise in contexts that would not be reasonably expected by the data subject. 

When it comes to the separate purpose of deploying an AI model, the EDPB asks businesses to consider the impact on the data subject’s fundamental rights that arises from the purpose for which the AI model is used.  For example, AI models that are used to block content publication may adversely affect a data subject’s fundamental right to freedom of expression.  However, conversely, the EDPB recognises that the deployment of AI models may have a positive impact on a data subject’s rights and freedoms – for example, an AI model that is used to improve accessibility to certain services for people with disabilities. In line with Recital 47 GDPR, the EDPB reminds controllers to consider the ‘reasonable expectations’ of data subjects in relation to both training and deployment uses of personal data.

Finally, the Opinion discusses a range of ‘mitigating measures’ that may be used to reduce risks to data subjects and therefore tip the balancing test in favour of the controller.  These include:

  • Technical measures to reduce the volume or sensitivity of personal data in use (e.g., pseudonymisation, masking).
  • Measures to facilitate the exercise of data subject rights (e.g., providing an unconditional right for data subjects to opt-out of the use of their personal data for training or deploying the model; allowing a reasonable period of time to elapse between collection of training data and its use).
  • Transparency measures (e.g., public communications about the controller’s practices in connection with the use of personal data for AI model development).
  • Measures specific to web-scraping (e.g., excluding publications that present particular risks; excluding certain data categories or sources; excluding websites that clearly object to web scraping).

Notably, the EDPB observes that, to be effective, these mitigating measures must go beyond mere compliance with GDPR obligations (for example, providing a GDPR compliant privacy notice, which a controller would in any case be required to do, would not be an effective transparency measure for these purposes). 

When are companies liable for non-compliant AI models?

In its final question, the DPC sought clarification from the EDPB on how a deployer of an AI model might be impacted by any unlawful processing of personal data in the development phase of the AI model. 

According to the EDPB, such ‘upstream’ unlawful processing may impact a subsequent deployer of an AI model in the following ways:

  • Corrective measures taken against the developer may have a knock-on effect on the deployer – for example, if the developer is ordered to delete personal data unlawfully collected for training purposes, the deployer would not be allowed to subsequently process this data. However, this raises an important practical question about how such data could be identified in, and deleted from, the AI model, taking into account the fact that the model does not retain training data in its original form.
  • Unlawful processing in the development phase may impact the legal basis for the deployment of the model – in particular, if the deployer of the AI model is relying on ‘legitimate interests’, it will be more difficult to satisfy the balancing test in light of the deficiencies associated with the collection and use of the training data.

In light of these risks, the EDPB recommends that deployers take reasonable steps to assess the developer’s compliance with data protection laws during the training phase.  For example, can the developer explain the sources of data used, steps taken to comply with the minimisation principle, and any legitimate interest assessments conducted for the training phase?  For certain AI models, the transparency obligations imposed in relation to AI systems under the AI Act should assist a deployer in obtaining this information from a third party AI model developer.

While the Opinion provides a useful framework for assessing GDPR issues with AI systems, businesses operating in the EU may be frustrated by the lack of certainty or definitive guidance on many key questions relating to this new era of technology innovation.

EU: EHDS – Access to health data for secondary use under the European Health Data Space
https://privacymatters.dlapiper.com/2024/11/eu-ehds-access-to-health-data-for-secondary-use-under-the-european-health-data-space/ | 19 November 2024

This is Part 3 in a series of articles on the European Health Data Space (“EHDS“).  Part 1, which provides a general overview of the EHDS, is available here. Part 2, which deals with the requirements on the manufacturers of EHR-Systems under the EHDS, is available here.

This article provides an overview of the framework for accessing health data for secondary use under the EHDS. It is based on the compromise text of the EHDS published by the Council of the European Union in March 2024.  

Improving access to health data for the purposes of supporting research and innovation activities is one of the key pillars of the EHDS and offers a potentially significant benefit for life sciences and healthcare companies who are looking for improved access to high-quality secondary use data.

By way of reminder, in general terms the EHDS creates a regime under which organisations may apply to a health data access body (“HDAB“) for access to electronic health data held by a third party, for one of a number of permitted secondary use purposes.  When required to do so by the HDAB, the company holding the health data (the health data holder) must then provide the data to the HDAB in order to satisfy the access request. The EHDS provides for safeguards to protect intellectual property rights and trade secrets, and there is some scope for health data holders to recover costs incurred in making data available.  

In more detail, the process operates as follows:

  1. Access to secondary health data

The EHDS stipulates a specific process as well as certain requirements for the access to secondary health data.

In order to get access to secondary health data under the EHDS, the applicant must submit a data access application to the health data access body (“HDAB”). Each Member State must designate an HDAB which is, inter alia, responsible for deciding on data access applications, authorizing and issuing data permits, providing access to electronic health data and monitoring and supervising compliance with the requirements under the EHDS.

Further, the HDAB is responsible for ensuring that the data provided are adequate, relevant and limited to what is necessary in relation to the purpose of processing indicated in the data access application. The default position is that data will be provided in an anonymised format. However, if the applicant can demonstrate that the purpose of processing cannot be achieved with anonymised data, the HDAB may provide access to the electronic health data in a pseudonymised format.
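
As a purely illustrative sketch (our own example – the EHDS does not prescribe any particular technique, and the field names and key handling below are hypothetical), pseudonymisation of a direct identifier might look as follows:

```python
# Hypothetical sketch: pseudonymising a direct identifier with a keyed hash
# (HMAC), so records can be linked consistently without exposing the identifier.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-securely-stored-key"  # held separately from the data

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for a direct identifier such as a patient ID."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"patient_id": "NHS-1234567", "diagnosis_code": "E11.9", "year_of_birth": 1968}
pseudonymised_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(pseudonymised_record)
```

Because the holder of the key can re-link pseudonyms to the original identifiers, data treated in this way remains personal data under the GDPR – which is why the applicant must justify why anonymised data would not suffice.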

The data access application must include at least the following:

  • The applicant’s identity, description of professional functions and operations, including the identity of the natural persons who will have access to electronic health data;
  • The purposes for which access is sought, including a detailed explanation of the intended use and expected benefit related to the use (e.g., protection against serious cross-border threats to health in the public interest, scientific research related to health or care sectors to ensure high levels of quality and safety of health care or medicinal products/devices with the aim of benefitting the end-users, including development and innovation activities for products and services);
  • A description of the requested electronic health data, including their scope and time range, format and data sources and, where possible, geographical coverage where data is requested from health data holders in several Member States;
  • A description of whether the electronic health data needs to be made available in a pseudonymised or anonymised format and, in the case of a pseudonymised format, a justification of why the processing cannot be pursued using anonymised data. Further, where the applicant seeks to access the personal electronic health data in a pseudonymised format, compliance with applicable data protection laws shall be demonstrated;
  • A description of the safeguards, proportionate to the risks, planned to prevent any misuse of the electronic health data as well as to protect the rights and interests of the health data holder and of the natural persons concerned, including to prevent any re-identification of natural persons in the dataset;
  • A justified indication of the period during which the electronic health data is needed for processing in a secure processing environment;
  • A description of the tools and computing resources needed for a secure processing environment and, where applicable, information on the assessment of ethical aspects

Where an applicant seeks access to electronic health data from health data holders established in more than one Member State, the applicant must submit a single data access application to the HDAB of the main establishment of the applicant which shall be automatically forwarded to other relevant HDABs.

There is also the option to apply only for access to health data in an anonymised statistical format, which is subject to less formal requirements, as well as a simplified procedure for trusted health data holders. The European Commission is responsible for creating templates for the data access applications.

  2. Requirements for the technical infrastructure

The HDAB shall only provide access to electronic health data pursuant to a data permit through a secure processing environment. The secure processing environment shall comply with the following security measures:

  • Access to the data must be restricted to the natural persons listed in the data access application;
  • Implementation of state-of-the-art technical and organisational measures to minimize the risk of unauthorized processing of electronic health data;
  • Limitation of the input of electronic health data and the inspection, modification or deletion of electronic health data to a limited number of authorized persons;
  • Ensure that access is only granted to electronic health data covered by the data access application;
  • Keeping identifiable logs of access to and activities in the secure processing environment for a period of not less than one year, in order to verify and audit all processing operations;
  • Monitoring compliance and security measures to mitigate potential security threats.

The HDAB shall ensure regular audits, including by third parties, of the secure processing environments and, if necessary, take corrective actions for any shortcomings or vulnerabilities identified.

  3. Data protection roles

From a data protection law perspective, the health data holder shall be deemed controller for the disclosure of the requested electronic health data to the HDAB pursuant to Art. 4 No. 1 GDPR. When fulfilling its tasks under the EHDS, the HDAB shall be deemed controller for the processing of personal electronic health data. However, where the HDAB provides electronic health data to a health data user pursuant to a data access application, the HDAB shall be deemed to act as processor on behalf of the health data user. The EU Commission may establish a template for controller to processor agreements in those cases.

  4. Fees for the access to health data for secondary use

The HDAB may charge fees for making electronic health data available for secondary use. Such fees shall cover all or part of the costs related to the procedure for assessing a data access application and granting, refusing or amending a data permit, including the costs related to the consolidation, preparation, anonymisation, pseudonymisation and provisioning of electronic health data. The fees further include compensation for the costs incurred by the health data holder for compiling and preparing the electronic health data to be made available for secondary use. The health data holder shall provide an estimate of such costs to the HDAB.

Conclusion

The access to electronic health data for secondary use is a significant opportunity, especially for companies operating in the life sciences and healthcare sectors, to get access to potentially large volumes of high-quality electronic health data for research and product development purposes. Although Chapter IV of the EHDS, which deals with the secondary use of electronic health data, will only become applicable four years after the EHDS enters into force, companies are well advised to begin preparations to gain access to electronic health data for secondary use at an early stage, in order to gain a competitive advantage and to ensure that they are able to make direct use of the opportunities granted by the EHDS. Such preparation includes, inter alia, the early determination of the specific electronic health data required for the specific purpose the company wants to achieve, as well as the set-up of an infrastructure which meets the requirements under the EHDS.

UK: Data (Use and Access) Bill: newcomer or a familiar face?
https://privacymatters.dlapiper.com/2024/11/uk-data-use-and-access-bill-newcomer-or-a-familiar-face/ | 5 November 2024

Déjà vu in the world of UK data law: the Labour government has proposed reforms to data protection and e-privacy laws through the new Data (Use and Access) Bill (“DUAB“). The DUAB follows the previous government’s unsuccessful attempts to reform these laws post-Brexit, which led to the abandonment of the Data Protection and Digital Information (No.2) Bill (“DPDI Bill“) in the run-up to the general election.

The new Labour government first announced plans for a bill in the King’s speech in July. In a notable shift of emphasis from the DPDI Bill, the term ‘data protection’ has been dropped from the title of the Bill.  Reform to the data protection and e-privacy regime is still an important part of the Bill, but it is arguably secondary to the emphasis within the Bill on wider data-related policy initiatives, focussed on facilitating digital identities and securing access to ‘smart’ or ‘open’ data sets. This is reflected in the Government’s statement introducing the Bill, which claims that it will “harness the enormous power of data to boost the UK economy by £10 billion” and “unlock the secure and effective use of data for the public interest, without adding pressures to the country’s finances”.

Key data protection law changes

The Bill proposes very limited changes to the UK data protection regime. These are targeted and incremental and unlikely to have a material impact on day-to-day compliance for most businesses operating in the UK.

The specific areas of reform proposed include:

  • Scientific research definition and broad ‘consent to research’: The DUAB creates a statutory definition of scientific research to help clarify how the various provisions in the UK GDPR which refer to ‘research’ are intended to be applied. The intention is to clarify that ‘scientific research’ can extend to cover research “carried out for commercial or non-commercial activity” and includes any research that “can reasonably be described as scientific”. This replicates similar proposals in the DPDI Bill, which effectively brought into the UK GDPR references from the recitals to the GDPR suggesting that a broad interpretation of “scientific research” should be applied. The DUAB also clarifies that an individual may be able to give consent to their data being used for more than one type of scientific research, even if, at the time consent is provided, it is not possible to identify all of those research purposes.
  • Recognised legitimate interests: The DUAB helpfully introduces the concept of ‘recognised legitimate interests’ to provide a presumption of legitimacy to certain processing activities that a controller may wish to carry out under Article 6(1)(f) (legitimate interests). Again this is a helpful carry over from the DPDI Bill. The DUAB also introduces a new provision requiring any new recognised legitimate interest to be necessary to safeguard an objective listed in Article 23(1) UK GDPR (i.e. public security, the prevention, investigation, detection or prosecution of crime, public health, data subject rights etc.).
  • Automated Decision Making: The DUAB will remove the requirement to establish a qualifying lawful basis before conducting automated decision making (the requirement currently at Article 22(2) UK GDPR), except where special category data is used. This change is particularly relevant to organisations using AI systems, potentially allowing those organisations to use ADM more widely than under the EU GDPR. However, data subjects will still benefit from rights of objection and human intervention, and organisations will still need to carefully assess their use of ADM. 
  • Special category personal data: The DUAB grants the Secretary of State the authority to designate new special categories of personal data and additional processing activities that fall under the prohibition of processing special category data in Article 9(1) of the UK GDPR. This potentially extends the scope of additional protections afforded by Article 9, beyond the current prescribed list of categories of special category data in the UK GDPR. It is unclear whether the Government anticipates including any additional categories of data under this mechanism in the near term.
  • Cookies: The DPDI Bill included a number of reforms to the rules on cookie consent. These have been retained in the DUAB. Businesses will likely find these changes helpful, as they have the effect of easing the consent requirements in some cases and provide greater clarity as to what falls within the “strictly necessary” exemption. One of the more challenging proposals by the previous government – that would have required cookie consent platforms to be centralised (e.g. into browsers) – has been withdrawn.
  • PECR Enforcement Regime:  The Bill fully aligns the UK GDPR / DPA and PECR enforcement regimes. This effectively increases regulatory exposure under the PECR to potential fines equivalent to the UK GDPR.
  • International Data Transfers: The DUAB introduces amendments that are designed to clarify the UK’s approach to the transfer of personal data internationally and the UK’s approach to the conduct of adequacy assessments. These are technical changes, but notably the EU approach to adequacy anticipates that a third country has a regime that is ‘essentially equivalent’ to the EU standard; the DUAB moves away from that to a new threshold that the third country offers safeguards that are ‘not materially lower than’ the UK standard.
  • ICO: The DUAB retains the majority of the reforms to the ICO, including the name change to an Information Commission, rather than a Commissioner, introducing a formal Board structure with an appointed CEO. The DUAB also aims to reduce the number of complaints reaching the ICO – by requiring complaints to be made first to the controller, with escalation to the authority only if they are not satisfactorily dealt with.

Which proposed changes have been dropped?

Many of the other reforms to UK data protection law proposed in the DPDI Bill have been dropped.  Notably, the following provisions did not make their way into the new bill:

  • The DPDI Bill proposed an expanded definition of ‘personal data’ which would have provided further clarification as to when data is related to an identified or identifiable individual and when it should be considered anonymous. That has been dropped.
  • The DPDI Bill amended the accountability provisions within the UK GDPR, reducing the burden on smaller businesses to maintain records of processing, or carry out Data Protection Impact Assessments. Those changes have not been carried across. The role of the Data Protection Officer will also remain as is, with the previous proposal to replace the DPO with the concept of a ‘senior responsible individual’ dropped.
  • The proposal in the DPDI Bill to exempt “vexatious” data subject access requests (in line with the terminology used in freedom of information law) has been discarded. Instead, the existing exemption of “manifestly unfounded or excessive” requests will continue to apply. Helpfully, though, the DUAB does incorporate a new provision allowing controllers to limit themselves to ‘reasonable and proportionate’ efforts in responding to access requests, a codification of ICO guidance and case law in this area.
  • The proposal to remove a requirement on non-UK businesses to appoint a representative under Article 27 UK GDPR has been scrapped – the role of the representative in the UK remains for now.
  • Some of the reform to the ICO has not survived, including the requirement for the ICO to take into account the government’s strategic priorities and some of the changes to the ICO’s enforcement powers.

Smart data schemes and digital identity verification

As noted above, data protection is no longer the main focus of the Bill, with large sections of the Bill set aside to deal with wider digital policy matters, including smart data schemes and certification for digital identity service providers: “the Bill will create the right conditions to support the future of open banking and the growth of new smart data schemes” (HM Government).

  • Smart data schemes – The DUAB gives the Secretary of State broad powers to make data regulations addressing access to business data and customer data, with sector specific ‘smart data’ regimes. Secondary legislation will follow that sets out much of the important detail here, but the essence of these provisions is to require data holders to provide or otherwise make available datasets, as well as give businesses and individuals the right to request access to those datasets. This is similar to elements of the EU Data Act and EU Data Governance Act at EU level, but goes further as it is not limited to IoT or public sector data. There is also a strong overlap with the European Health Data Space Regulation and the EU FIDA Regulation: promoting access to data for secondary uses and breaking down the barriers that exist between data holders and those persons, whether individuals or businesses, that would like access to data for certain, as yet undefined, purposes.
  • Digital identity verification – The DUAB will separately establish a framework to facilitate the development of digital verification services. This framework aims to certify organisations that offer identity verification tools in accordance with the government’s trust framework standards. New provisions in the bill grant the Secretary of State the authority to deny certification on national security grounds and mandate that it consults with the Information Commissioner regarding relevant regulations.

What next?

Although the DUAB comes with some bold statements from the Government that it will “unlock the power of data to grow the economy and improve people’s lives”, the proposals represent incremental reform, rather than radical change. There are arguably no big surprises (and perhaps some missed opportunities): much of the drafting is a lighter version of what we saw in earlier drafts of the DPDI Bill, and some of the more innovative elements (around smart data access and use) remain unclear as we await the detail of secondary legislation.

We will keep a close eye on the DUAB as it makes its way through Parliament. We expect a relatively smooth passage: so much of the content has already been through earlier legislative processes that extensive debate seems unlikely.

EU: Data Act Frequently Asked Questions answered by the EU Commission
https://privacymatters.dlapiper.com/2024/09/data-act-frequently-asked-questions-answered-by-the-eu-commission/ | 23 September 2024

The EU Data Act is one of the cornerstones of the EU’s Data Strategy and introduces a new and horizontal set of rules on data access and use to boost the EU’s data economy. Most of the provisions of the Data Act will become applicable as of 12 September 2025. To assist stakeholders in the implementation, the European Commission recently published a fairly extensive FAQ document.  In particular, the FAQs contain clarifications in relation to data in scope of the Act; overlap with other data protection laws and EU legislation; implementation of IoT data sharing; and transfer restrictions.  

Our article providing a summary of the key takeaways from the FAQs is available here.

For more information on how DLA Piper can support with the Data Act and other recent EU digital regulations, please refer to our EU Digital Decade website.

UK: Changes to UK surveillance and communications law: the Investigatory Powers (Amendment) Act 2024
https://privacymatters.dlapiper.com/2024/07/uk-changes-to-uk-surveillance-and-communications-law-the-investigatory-powers-amendment-act-2024/ | 1 July 2024

The UK has made several consequential amendments to its primary electronic surveillance law, the Investigatory Powers Act (“IPA”). These changes have the potential to impact the development of certain privacy-enhancing services by technology companies, whilst also widening the scope of the government’s access to certain electronic datasets. There is also the possibility of an impact on the UK’s ‘adequacy’ status under the EU GDPR.

Background

The Investigatory Powers (Amendment) Act 2024 (“IP(A)A”) amends the IPA, which governs the use and oversight of investigatory powers by law enforcement and the security and intelligence agencies. The IPA impacts private sector technology companies who provide electronic communications services which can be subject to surveillance using the powers granted under the IPA.

The IP(A)A was one of the final pieces of legislation to be passed by the Conservative government prior to the dissolution of parliament for the 4 July general election, receiving royal assent on 25 April 2024. It is the product of a government review into the effectiveness of the IPA, which entered into force in 2016. That review concluded that changes to the IPA were needed in order to “modernise and update the legal framework surrounding investigatory powers to ensure the security and intelligence agencies, and law enforcement can continue to exercise the capabilities they need to maintain public safety and protect the public from terrorism, and serious crime”.[1] 

Key changes

The following is a summary of the key changes brought in by the IP(A)A, with a focus on those likely to be of relevance to the private sector:

Companies now required to notify the Government of planned changes in functionality.

The IP(A)A introduces a new power for the Secretary of State to require companies providing communications products or services to notify the government in advance of any planned changes to those services or their functionality[2].

The purpose of this amendment is to prevent technological changes – such as the introduction of end-to-end encryption – from having a negative effect on the powers and capabilities of the police and intelligence services, for example by preventing them from accessing the communications and communications-related data needed to prevent crime and protect national security. The notification requirement is focused on changes that would prevent the police and intelligence services from lawfully accessing data, where this outcome can be “reasonably anticipated by the operator, even if this is not the primary motivation.”[3]

The government will use secondary legislation to specify the changes in functionality caught by the requirement[4], as well as the threshold that will be used by the Secretary of State to define the specific factors that must be considered before issuing a notice[5]. Security patches will remain out of scope.

Notably, the amendment does not give the Secretary of State any specific powers to intervene regarding any changes or provide their consent to the change[6].

Nevertheless, companies may have concerns about the need to share commercially sensitive information with the government (taking into account, amongst other factors, freedom of information rights in the UK), as well as longer term impacts on the ability to protect user privacy through the planned technological changes.

Retention of low sensitivity data.

Intelligence agencies routinely utilise ‘bulk personal datasets’ as part of their investigations. These are databases of personal information about large numbers of people, for example an electoral register, telephone directories or travel-related data[7].

The IP(A)A creates a new, light-touch regime for the retention and examination of bulk personal datasets where “the individuals to whom the personal data related to could have no, or only a low, reasonable expectation of privacy in relation to this data.”[8] Going forward, intelligence agencies will no longer be required to obtain a warrant prior to retaining such data. Instead, only the approval of a Judicial Commissioner (a serving or retired judge) will be required[9]. In determining whether the data is low sensitivity, the factors to be considered under the IP(A)A are[10]:

  • The nature of the data;
  • The extent to which the data has been made public by the individuals or whether the individuals have consented to the data being made public;
  • If the data has been published, the extent to which it was published subject to editorial control or by a person acting in accordance with professional standards;
  • If the data has been published or is otherwise in the public domain, the extent to which the data is widely known about;
  • The extent to which the data has already been used in the public domain.

The Home Office states that, regarding their objectives, the “intelligence services are not interested in examining data that is not operationally relevant, but in finding ways to identify the specific threat in vast quantities of data.”[11]

A ‘reasonable expectation of privacy’ is a turn of phrase that readers may be familiar with from UK privacy law, and specifically the tort of misuse of private information, which has developed over the last 20 years out of the UK’s commitment to the right for a private and family life under the European Convention on Human Rights (ECHR). A similar set of factors has been quoted by the courts in misuse of private information cases.

Impact on data protection adequacy

Under the EU GDPR, the European Commission uses an adequacy decision to determine whether another country provides an equivalent level of data protection to the EU. Where an adequacy decision is granted, personal data may flow freely to that country without any legal restrictions.

In June 2021, the EU Commission published two decisions regarding the UK’s adequacy (one under the EU GDPR, and one under the Law Enforcement Directive in respect of the processing of law enforcement data). As these decisions expire in June 2025, the Commission will work later in 2024 to assess whether to extend the adequacy decisions[12].

The IP(A)A is likely to be scrutinised closely by the Commission as part of this review. The IPA was a relatively new framework at the time of the original adequacy decisions and largely received a positive assessment from the Commission. However, as some of the changes under the IP(A)A – including those highlighted above – can be interpreted as reducing privacy protections, there is a risk that the Commission will view the amended framework in a different light.


[1] Annex A, IPA 2016 impact assessment (publishing.service.gov.uk)

[2] Investigatory Powers (Amendment) Act 2024, s.258A(1).

[3] Investigatory Powers (Amendment) Bill: Notification Requirement (26/04/2024) – GOV.UK (www.gov.uk)

[4] Changes to the UK investigatory powers regime receive royal assent | Inside Global Tech

[5] Investigatory Powers (Amendment) Bill: Notification Requirement (26/04/2024) – GOV.UK (www.gov.uk)

[6] Investigatory Powers (Amendment) Bill: Notification Requirement (26/04/2024) – GOV.UK (www.gov.uk)

[7] Bulk data | MI5 – The Security Service

[8] Investigatory Powers (Amendment) Act 2024, s.226A(1).

[9] Investigatory Powers (Amendment) Act 2024, s.226B(5).

[10] Investigatory Powers (Amendment) Act 2024, s.226A(3).

[11] Investigatory Powers (Amendment) Bill: Bulk Personal Datasets and Third-Party Bulk Personal Datasets (26/04/2024) – GOV.UK (www.gov.uk)

[12] Adequacy | ICO

EU/UK: Data-Sharing Frameworks – A State of Play in the EU and the UK
https://privacymatters.dlapiper.com/2024/06/eu-uk-data-sharing-frameworks-a-state-of-play-in-the-eu-and-the-uk/ | 6 June 2024

Disclaimer: This article first appeared in the June 2024 issue of PLC Magazine, and is available at http://uk.practicallaw.com/resources/uk-publications/plc-magazine.

In order to capture the benefits of data-driven innovation, the EU and the UK are taking action to facilitate data sharing across various industries.

In the EU, the European Commission is investing €2 billion to foster the development of so-called “common European data spaces” and the associated digital infrastructure. The UK government has announced similar, mainly policy, initiatives regarding the establishment of data-sharing frameworks, referred to as smart data schemes.

Despite the shared objectives, differences emerge between the EU and UK approaches, raising questions about alignment, implementation efficiency and market dynamics.

In this article, DLA Piper:

  • Explores the concepts of data spaces and data schemes, and the policy objectives behind them.
  • Gives an overview of the emerging rules that will be part of the foundation of these data-sharing frameworks in the EU and the UK.
  • Examines what can be expected from these initiatives and what hurdles still need to be overcome in order to secure successful implementation.

The article is available here.

UK: New cyber security requirements for consumer products
https://privacymatters.dlapiper.com/2024/05/uk-new-cyber-security-requirements-for-consumer-products/ | 1 May 2024

On Monday 29 April, new cyber security requirements entered into force in the United Kingdom.  They apply to connected products sold to consumers and place obligations on the manufacturers, importers and distributors of those products.

Background

The Product Security and Telecommunications Infrastructure (Security Requirements for Relevant Connectable Products) Regulations 2023 (Regulations) are the first set of regulations enacted under the Product Security and Telecommunications Infrastructure Act 2022 (Act).  The Act is a key pillar of the UK government’s cyber security strategy and can be compared with the EU’s pending Cyber Resilience Act, which similarly looks to impose cybersecurity standards for digital products.

Scope

The Regulations create requirements for ‘relevant connectable products’ which are ‘made available to consumers’ in the UK. This encompasses both internet-connected products and devices that connect to such products (‘network connectable products’), where these are sold, or otherwise provided (e.g., as a prize or free giveaway), by a business to a consumer. The Regulations will also apply to foreign manufactured products that are put on the market in the UK.

Importantly, under Schedule 3 to the Regulations, certain products that are subject to existing safety regimes are exempt.  These include medical devices, computers (other than those intended exclusively for children under 14) and smart meters.

Relevant requirements

The Regulations impose minimum security requirements on the manufacturers of connected products.  These are detailed in Schedule 1 to the Regulations and in outline are:

  1. Passwords – these must be unique per product or capable of being defined by the user of the product (a brief illustrative sketch follows this list).  
  2. Information on how to report security issues  – the manufacturer must provide clear information about how to report product related security issues. Acknowledgment of the receipt of a report and status updates must also be provided. 
  3. Information on minimum security update periods  – information about the security update cycle for the product must be provided in a way that is understandable for a reader without prior technical knowledge.  
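
As a purely hypothetical illustration of the first requirement, the sketch below shows one way a manufacturer might provision a unique default password for each device at the point of manufacture. The Regulations do not prescribe any particular method; the word list and serial numbers used here are illustrative assumptions.

```python
# Hypothetical sketch: provisioning a unique default password per device.
import secrets

WORDS = ["maple", "river", "copper", "falcon", "ember", "cedar", "orbit", "quartz"]

def provision_default_password() -> str:
    """Generate a random default password, independently for every device."""
    # A CSPRNG (rather than a value derived from the serial number or MAC
    # address) avoids passwords that are guessable from data printed on,
    # or broadcast by, the device.
    words = [secrets.choice(WORDS) for _ in range(3)]
    return "-".join(words) + "-" + secrets.token_hex(2)

# Example: record each per-device password against its serial number at manufacture.
factory_records = {serial: provision_default_password()
                   for serial in ["SN-0001", "SN-0002", "SN-0003"]}
print(factory_records)
```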

Manufacturers will need to produce (and importers will need to retain) a statement of compliance attesting to the product’s compliance with the security requirements.

Enforcement

In cases of non-compliance, the Act provides the Secretary of State with a range of enforcement powers.  These include mandatory product recalls, stop notices and fines of up to £10m or 4% of worldwide revenue.

Europe: The EU AI Act’s relationship with data protection law: key takeaways
https://privacymatters.dlapiper.com/2024/04/europe-the-eu-ai-acts-relationship-with-data-protection-law-key-takeaways/ | 25 April 2024

Disclaimer: The blogpost below is based on a previously published Thomson Reuters Practical Law practice note (EU AI Act: data protection aspects (EU)) and only presents a short overview of and key takeaways from this practice note. This blogpost has been produced with the permission of Thomson Reuters, which holds the copyright over the full version of the practice note. Interested readers may access the full version of the practice note through this link (paywall).

On 13 March 2024, the European Parliament plenary session formally adopted at first reading the EU AI Act. The EU AI Act is now expected to be formally adopted by the Council in a few weeks’ time. Following publication in the Official Journal of the European Union, it will enter into force 20 days later.

Artificial intelligence (“AI”) systems rely on data inputs from initial development, through the training phase, and in live use. Given the broad definition of personal data under European data protection laws, AI systems’ development and use will frequently result in the processing of personal data.

At its heart, the EU AI Act is a product safety law that provides for the safe technical development and use of AI systems.  With a couple of exceptions, it does not create any rights for individuals.  By contrast, the GDPR is a fundamental rights law that gives individuals a wide range of rights in relation to the processing of their data.  As such, the EU AI Act and the GDPR are designed to work hand-in-glove, with the latter ‘filling the gap’ in terms of individual rights for scenarios where AI systems use data relating to living persons.

Consequently, as AI becomes a regulated technology through the EU AI Act, practitioners and organisations must understand the close relationship between the EU data protection law and the EU AI Act.

1. EU data protection law and AI systems

1.1 The GDPR and AI systems
  • The General Data Protection Regulation (“GDPR”) is a technology-neutral regulation. As the definition of “processing” under the GDPR is broad (and in practice includes nearly all activities conducted on personal data, including data storage), it is evident that the GDPR applies to AI systems, to the extent that personal data is present somewhere in the lifecycle of an AI system.
  • It is often technically very difficult to separate personal data from non-personal data, which increases the likelihood that AI systems process personal data at some point within their lifecycle.
  • While AI is not explicitly mentioned in the GDPR, the automated decision-making framework (article 22 GDPR) serves as a form of indirect control over the use of AI systems, on the basis that AI systems are frequently used to take automated decisions that impact individuals.
  • In some respects, there is tension between the GDPR and AI. AI typically entails the collection of vast amounts of data (in particular, in the training phase), while many AI systems have a broad potential range of applications (reflecting the imitation of human-like intelligence), making the clear definition of “processing purposes” difficult.
  • At the same time, there is a clear overlap between many of the data protection principles and the principles and requirements established by the EU AI Act for the safe development and use of AI systems. The relationship between AI and data protection is expressly recognised in the text of the EU AI Act, which states that it is without prejudice to the GDPR. In developing the EU AI Act, the European Commission relied in part on article 16 of the Treaty on the Functioning of the European Union (“TFEU”), which mandates the EU to lay down the rules relating to the protection of individuals regarding the processing of personal data.
1.2 Data protection authorities’ enforcement against AI systems
  • Before the EU AI Act, the EU data protection authorities (“DPA”) were among the first regulatory bodies to take enforcement action against the use of AI systems. These enforcement actions have been based on a range of concerns, in particular, lack of legal basis to process personal data or special categories of personal data, lack of transparency, automated decision-making abuses, failure to fulfil data subject rights and data accuracy issues.
  • The list of DPA enforcement actions is already lengthy. The most notable examples include the Italian DPA’s temporary ban on OpenAI’s ChatGPT; the Italian DPA’s fine against Deliveroo in relation to the company’s AI-enabled automated rating of rider performance; the French DPA’s fine against Clearview AI, a facial recognition platform that scrapes billions of photographs from the internet; and the Dutch DPA’s fine against the Dutch Tax and Customs Administration for various GDPR infringements in relation to an AI-based fraud notification facility application.
  • As the DPAs shape their enforcement policies based in part on public concerns, and as public awareness of and interest in AI continues to rise, it is likely that DPAs will continue to sharpen their focus on AI (also see section 6 for DPAs as a potential enforcer of the EU AI Act).

2. Scope and applicability of the GDPR and EU AI Act

2.1 Scope of the GDPR and the EU AI Act
  • The material scope of the GDPR is the processing of personal data by wholly or partly automated means, or manual processing of personal data where that data forms part of a relevant filing system (article 2 GDPR). The territorial scope of the GDPR is defined in article 3 GDPR and covers different scenarios.
  • Consequently, the GDPR has an extraterritorial scope, meaning that: (i) controllers and processors established in the EU, processing in the context of that establishment, must comply with the GDPR even if the processing of personal data occurs in a third country; and (ii) non-EU controllers and processors must comply with the GDPR if they target or monitor individuals in the EU.
  • On the other hand, the material scope of the EU AI Act is based around its definition of an AI system. Territorially, the EU AI Act applies to providers, deployers, importers, distributors, and authorised representatives (see section 2.2 for details).
  • Unlike the GDPR, the EU AI Act adopts a risk-based categorisation of AI systems and attaches different obligations to the different risk categories. Most obligations under the EU AI Act apply to high-risk AI systems only (covered in article 6 and Annex III EU AI Act). Certain AI models and systems are also subject to specific obligations (such as general-purpose AI models) or to transparency obligations (such as emotion categorisation systems).
2.2  Interplay between roles under the GDPR and the EU AI Act
  • As the GDPR distinguishes between controllers and processors, so the EU AI Act distinguishes between different categories of regulated operators.
  • The provider (the operator who develops an AI system or has an AI system developed) and the deployer (the operator under whose authority an AI system is used) are the most significant in practice.
  • Organisations that process personal data in the course of developing or using an AI system will need to consider the roles they play under both the GDPR and the EU AI Act. Some examples follow.
Example 1: provider (EU AI Act) and controller (GDPR)
A company (A) that processes personal data in the context of training a new AI system will be acting as both a provider under the EU AI Act and a controller under the GDPR. This is because the company is developing a new AI system and, as part of that development, is taking decisions about how to process personal data for the purpose of training the AI system.

Example 2: deployer (EU AI Act) and controller (GDPR)
A company (B) that purchases the AI system described in Example 1 from company A and uses it in a way that involves the processing of personal data (for example, as a chatbot to talk to customers, or as an automated recruitment tool) will be acting as both a deployer under the EU AI Act and a separate controller under the GDPR for the processing of its own personal data (that is, it is not the controller for the personal data used to originally train the AI system, but it is for any data it uses in conjunction with the AI).
  • More complex scenarios may arise when companies offer services that involve the processing of personal data and the use of an AI system to process that data. Depending on the facts, the customers of such services may qualify as controllers or processors (under the GDPR) although they would typically be deployers under the EU AI Act.
  • These examples raise important questions about how roles under the EU AI Act map onto roles under the GDPR, questions which are still to be resolved in practice. Companies that develop or deploy AI systems should carefully analyse their roles under the respective laws, preferably prior to the kick-off of relevant development and deployment projects.

3. Relationship between the GDPR principles and the EU AI Act

  • The GDPR is built around the data protection principles set out in article 5 GDPR. These principles are lawfulness, fairness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality.
  • On the other hand, the first intergovernmental standard on AI, the recommendation on artificial intelligence issued by the OECD (OECD Recommendation of the Council on Artificial Intelligence, “OECD AI Principles”), introduces five complementary principles for responsible stewardship of trustworthy AI that have strong links to the principles in the GDPR: inclusive growth, sustainability and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.
  • The EU AI Act also refers to general principles applicable to all AI systems, as well as specific obligations that require those principles to be implemented in particular ways. The EU AI Act principles are set out in recital 27 and are influenced by the OECD AI Principles and the seven ethical principles for AI developed by the independent High-Level Expert Group on AI (HLEG). Although recitals do not have the same legally binding status as the operative provisions which follow them and cannot overrule an operative provision, they can help with interpretation and to determine meaning.
  • Recital 27 EU AI Act refers to the following principles: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; and social and environmental well-being. Some of these principles already materialise through specific EU AI Act obligations: article 10 EU AI Act prescribes data governance practices for high-risk AI systems; article 13 EU AI Act deals with transparency; articles 14 and 26 EU AI Act introduce human oversight and monitoring requirements; and article 27 EU AI Act introduces the obligation to conduct fundamental rights impact assessments for some high-risk AI systems.
  • Understanding the synergies and differences between the GDPR principles and the EU AI Act principles will allow organisations to leverage their existing knowledge of the GDPR and their existing GDPR compliance programmes. This is therefore a crucial step in lowering compliance costs. The full practice note includes comprehensive tables that compare the practicalities in this regard.

4. Human oversight under the EU AI Act and automated decision-making under the GDPR

  • Under article 22 GDPR, data subjects have the right not to be subject to solely automated decisions involving the processing of personal data that result in legal or similarly significant effects. Where such decisions are taken, they must be based on one of the grounds set out in article 22(2) GDPR.
  • Like the GDPR, the EU AI Act is also concerned with ensuring that fundamental rights and freedoms are protected by allowing for appropriate human supervision and intervention (the so called “human-in-the-loop” effect).
  • Article 14 EU AI Act requires high-risk AI systems to be designed and developed in such a way (including with appropriate human-machine interface tools) that they can be effectively overseen by natural persons during the period in which the AI system is in use. In other words, providers must take a “human-oversight-by-design” approach to developing AI systems.
  • According to article 26.1 EU AI Act, the deployer of an AI system must take appropriate technical and organisational measures to ensure its use of an AI system is in accordance with the instructions for use accompanying the system, including with respect to human oversight.
  • The level of human oversight and intervention exercised by a user of an AI system may be determinative in bringing the system in or out of scope of the automated decision-making framework under the GDPR. In other words, a meaningful intervention by a human being at a key stage of the AI system’s decision-making process may be sufficient to ensure that the decision is no longer wholly automated for the purposes of article 22 GDPR. Perhaps more likely, AI systems will be used to make wholly automated decisions, but effective human oversight will operate as a safeguard to ensure that the automated decision-making process is fair and that an individual’s rights, including their data protection rights, are upheld.
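As a purely illustrative sketch (the function and variable names below are hypothetical and not drawn from either law), the distinction can be thought of as whether a human reviewer is able to assess the case and depart from the system’s recommendation before the decision takes effect:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    applicant_id: str
    score: float            # output produced by the AI system
    suggested_outcome: str  # e.g. "approve" or "reject"

def final_decision(rec: Recommendation, reviewer_verdict: Optional[str]) -> str:
    """Hypothetical sketch: where a human reviewer has genuinely assessed the
    case and recorded a verdict, that verdict prevails, so the decision is not
    based solely on automated processing; otherwise the automated suggestion
    takes effect unchanged."""
    if reviewer_verdict is not None:
        return reviewer_verdict       # meaningful human intervention
    return rec.suggested_outcome      # solely automated decision

# Solely automated: the AI suggestion is applied without any human review.
print(final_decision(Recommendation("A-1", 0.31, "reject"), None))
# Human-in-the-loop: a reviewer overrides the suggestion after assessing the case.
print(final_decision(Recommendation("A-1", 0.31, "reject"), "approve"))

In practice, of course, a reviewer who merely rubber-stamps the system’s output is unlikely to amount to meaningful intervention; the oversight must be real and exercised by someone with the competence and authority to change the outcome.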

5. Conformity assessments and fundamental rights impact assessments under the EU AI Act and the DPIAs under the GDPR

  • Under the EU AI Act, the conformity assessment is designed to ensure accountability by the provider with each of the EU AI Act’s requirements for the safe development of a high-risk AI system (as set out in Title III, Chapter 2 EU AI Act). Conformity assessments are not risk assessments but rather demonstrative tools that show compliance with the EU AI Act’s requirements.
  • The DPIA, on the other hand, is a mandatory step required under the GDPR for high-risk personal data processing activities.
  • Consequently, there are significant differences in terms of both purpose and form between a conformity assessment and a DPIA. However, in the context of high-risk AI systems, the provider of such systems may also need to conduct a DPIA in relation to the use of personal data in the development and training of the system. In such cases, the technical documentation drafted for conformity assessments may help establish the factual context of a DPIA. Similarly, the technical information may be helpful to a deployer of the AI system that is required to conduct a DPIA in relation to its use of the system.
  • The requirement under the EU AI Act to conduct a fundamental rights impact assessment (“FRIA”) is similar, conceptually, to a DPIA. As with a DPIA, the purpose of a FRIA is to identify and mitigate risks to the fundamental rights of natural persons, in this case arising from the deployment of an AI system. For more details regarding the FRIA, see Fundamental Rights Impact Assessments under the EU AI Act: Who, what and how?.
  • Practically speaking, organisations generally already have governance mechanisms in place to bring legal, IT and business professionals together for impact assessments such as the DPIA. When it comes to a FRIA, such mechanisms can be leveraged. As with a DPIA, the first step is likely to consist of a pre-FRIA screening to identify the use of an in-scope high-risk AI system (recognising that, as a good practice step, organisations may choose to conduct FRIAs for a wider range of AI systems than is strictly required by the EU AI Act).

6. National competent authorities under EU AI Act and DPAs

  • Under the EU AI Act, each member state is required to designate one or more national competent authorities to supervise the application and implementation of the EU AI Act, as well as to carry out market surveillance activities.
  • The national competent authorities will be supported by the European Artificial Intelligence Board and the European AI Office. The most notable duty of the European AI Office is to enforce and supervise the new rules for general purpose AI models.
  • Where member states appoint their DPAs as national competent authorities to enforce the EU AI Act, this will further solidify the close relationship between the GDPR and the EU AI Act.
UK: Enforcement Against the Use of Biometrics in the Workplace https://privacymatters.dlapiper.com/2024/02/uk-enforcement-against-the-use-of-biometrics-in-the-workplace/ Thu, 29 Feb 2024 09:29:48 +0000 https://privacymatters.dlapiper.com/?p=7238 Continue Reading]]> The ICO has issued an enforcement notice which provides valuable insights into its approach to the use of biometrics in the workplace, and the lawfulness of employee monitoring activities more broadly.

On 23 February 2024, the Information Commissioner’s Office (“ICO”) ordered Serco Leisure Operating Limited (“Serco”), an operator of leisure facilities, to stop using facial recognition technology and fingerprint scanning (both involving the processing of “biometric data”) to monitor employee attendance and, in turn, payment for their time. Serco operates the leisure facilities on behalf of leisure trusts, some of which were also issued enforcement notices as joint controllers.

Background

Serco introduced biometric technology in May 2017 within 38 Serco-operated leisure facilities. Serco considered that previous systems for monitoring attendance were open to abuse: manual sign-in sheets were prone to human error and were misused by a minority of employees, and ID cards were also used inappropriately by employees. As a result, Serco considered that using biometric technology was the best way to prevent these abuses.

To support this assessment, Serco produced a data protection impact assessment (“DPIA”) and legitimate interest assessment (“LIA”). Within these documents, Serco identified the lawful bases for the processing of biometric data as Articles 6(1)(b) and (f) and the relevant condition for special category personal data as Article 9(2)(b) of the UK General Data Protection Regulation (“UK GDPR”).

Article 6(1)(b) was selected on the basis that Serco considered that operating the attendance monitoring system was necessary for compliance with the employees’ employment contracts. Article 6(1)(f) was selected in connection with Serco’s legitimate interests, which presumably related to the wider aims of the attendance monitoring system and the move to use biometric data, outlined above.

Serco selected Article 9(2)(b) on the basis that it considered that this processing was required for compliance with applicable laws relating to employment, social security and social protection. In particular, Serco considered that it needed to process attendance data to comply with a number of regulations, such as working time regulations, national living wage, right to work and tax/accounting regulations.

The contravention

Despite the above, the ICO considered that Serco, as a controller, had failed to establish an appropriate lawful basis and special category personal data processing condition for the processing of biometric data. Serco had therefore contravened Articles 5(1)(a), 6 and 9 of the UK GDPR. The ICO had previously served Serco with a Preliminary Enforcement Notice in November 2023, giving Serco the opportunity to provide written representations, which the ICO considered in issuing the Enforcement Notice of 23 February 2024.

The ICO gave Serco three months from the date of the Enforcement Notice to:

  • Cease all processing of biometric data for the purpose of employment attendance checks from the facilities, and not implement biometric technology at any further facilities; and
  • Destroy all biometric data and all other personal and special category data that Serco is not legally obliged to retain.

Key takeaways from the Enforcement Notice

  1. Processing must be necessary in order to rely on most lawful bases and special category personal data processing conditions.

The ICO emphasised that the processing of biometric data cannot be considered as “necessary” when less intrusive means could be used to achieve the same purpose.

It is not ordinarily necessary for an employer to process biometric data in order to operate an attendance monitoring system. It is of course necessary for employee attendance data to be processed, but this would not usually extend to biometric data.

It may be possible to argue that it is necessary to use biometric data in connection with attendance monitoring in an extreme case, but any such argument would need to be based on the specific circumstances. In this case, although Serco had considered that other, less intrusive methods were subject to abuse, that consideration alone was not sufficient to justify the use of biometric data.

The ICO’s position was that Serco had not provided enough information to support its argument that eliminating abuse of the attendance monitoring system was a necessity, rather than simply a further benefit to Serco. There was a lack of evidence that alternative means of handling such abuse had been considered, e.g. taking disciplinary action against the individuals responsible. The processing of biometric data was therefore not a targeted and proportionate way of achieving the purpose of verifying attendance.

  2. An appropriate balancing test must be conducted when relying on legitimate interests.

The ICO considered that in relying on its legitimate interests as a lawful basis, Serco did not give appropriate weight to the intrusive nature of biometric processing and the risks to the employees. Failure to give such appropriate weight meant that Serco could not rely on Article 6(1)(f).

Additionally, the ICO found that legitimate interests would not be regarded as an appropriate lawful basis where:

  • The processing has a substantial privacy impact. In this instance, it was the regular and systematic processing of employee biometric data, which would entail a regular intrusion into their privacy over which they have no, or minimal, control.
  • Employees are not given clear information about how they could object, or about alternative methods of monitoring that do not involve intrusive processing. The fairness of processing, the availability and ease with which to exercise data subject rights, and the provision of clear information are factors that should be taken into account when relying on legitimate interests and conducting an appropriate balancing test. The ICO highlighted that Serco had failed to process data fairly by not bringing the alternative mechanisms to the employees’ attention, even when an employee complained. There was also a failure to process data fairly because employees were not informed of how they could object to the processing.
  • There is an imbalance of power between the employer and employees, such that employees may not have felt able to object (without detriment) even if they had been informed that they could.
  3. A specific legal obligation must be identified from the outset of processing in order to rely on Article 9(2)(b) UK GDPR.

In this instance, Serco had initially failed (including in its DPIA) to identify the specific obligation or right conferred by law on which it relied in reference to Article 9(2)(b) of the UK GDPR.

In this case, it may be that this omission was because no such obligation or right conferred by law exists. Whilst there are legal obligations to record time and attendance data, health and safety obligations and requirements to manage the employment relationship, there are no specific legal obligations that would necessitate the processing of biometric data in connection with attendance monitoring.

In cases where there is a specific legal obligation or right conferred to process special category data (for example, in respect of the employer’s duty to make reasonable adjustments or to manage sickness at work), the ICO emphasised that it is not sufficient to simply select Article 9(2)(b) of the UK GDPR as the basis for processing. The controller must identify the specific obligation or right conferred by law and must have done so from the outset – before the processing of special category personal data commences.

It is also worth noting that, despite having conducted a DPIA and LIA, Serco could not rely on this condition because it did not produce an appropriate policy document as required by Sch. 1 Para 1(1)(b) of the Data Protection Act 2018 (“DPA”) and had failed to demonstrate the necessity of processing biometric data (as referred to above).

  4. The ICO will take account of infringement amplifiers.

In addition to biometric data being a type of data that carries a greater risk of harm, the length of time over which the processing took place without an appropriate lawful basis (since 2017) and the number of data subjects involved (2,283) were also factors that the ICO considered as increasing the seriousness of the infringement.

Summary and conclusion

This decision does leave open the possibility of arguing that the use of biometric data is necessary, targeted and proportionate for attendance monitoring. However, as mentioned above, this will very much depend on the circumstances, and the decision shows that this is likely to be the exception rather than the rule.

If an employer sought to rely on its legal obligations as a lawful basis for the processing, the controller would need to be in a position to show that the processing was necessary to comply with those requirements. This would require it to provide evidence of widespread abuse and of the failure of other, less intrusive methods. However, even in these circumstances, the employer would still need to consider fairness and proportionality in the operation of the system, as explained in this post.

It is possible for an employer to consider using employee consent as a basis under Article 9(2)(a) for processing biometric data in an attendance management system, given the limitations of Article 9(2)(b). However, as noted above, the imbalance of power in the employment relationship will act against the employer in relying on this basis unless there is a genuine ability for the employee to refuse to use the system. In such a case, the operation of an alternative option to biometric data will be critical.

If an employer did wish to adopt biometric data processing for attendance monitoring systems, following this decision, we recommend that such an employer includes the following steps in the context of undertaking its DPIA, LIA and implementation processes:

  • Identify the appropriate lawful basis for the processing activity.
  • If the lawful basis relates to a specific obligation or right conferred by law, identify and document that law.
  • Consider whether the processing could be said to be necessary for the identified lawful basis and gather supporting evidence for this assessment, where relevant.
  • Provide employees with clear information regarding the processing, including information regarding data retention and use, as well as clear information regarding their right to object. This must be provided in advance of the system being implemented.
  • Undertake a full consideration of the fairness and proportionality of the processing, acknowledging that processing biometric data is extremely intrusive and carries significant privacy impacts for employees.
  • Provide employees with an alternative option for participating in the attendance monitoring system should they object to the use of their biometric data, and ensure that this alternative is used in practice (meaning that there must always be another way to monitor attendance alongside the biometric system).
  • Ensure that an appropriate policy document is implemented, if relying on a lawful basis under the UK GDPR that mandates this (e.g. Article 9(2)(b)).
EU: Significant new CJEU decision on automated decision-making https://privacymatters.dlapiper.com/2023/12/eu-significant-new-cjeu-decision-on-automated-decision-making/ Wed, 13 Dec 2023 09:15:54 +0000 https://privacymatters.dlapiper.com/?p=7166 Continue Reading]]> Authors: James Clark and Verena Grentzenberg

The Court of Justice of the European Union (CJEU) has delivered an important judgment on the scope and interpretation of the ‘automated decision-making’ framework under the GDPR.  It is a decision that could have significant implications for service providers who use algorithms to produce automated scores, profiles or other assessments that are relied upon by customers in a decision-making process.

Background

On 7 December 2023, the Court of Justice of the European Union handed down judgment in the Schufa case. 

Schufa AG (“Schufa”) is a (or the) leading German credit rating agency and holds information about almost 70 million individuals.  Amongst other things, it provides credit scores for German residents.  These scores are then relied upon by financial service providers to make lending decisions, such as offering mortgages or other loans.  Other customers of Schufa include retailers (both online and bricks-and-mortar), telecommunication service providers, and utility and transportation companies.

The case referred to the CJEU revolved around a German resident whose application for a loan was turned down by a German bank.  The bank’s decision was made primarily in reliance on a poor credit score assigned to that individual by Schufa.

The individual challenged Schufa and in particular requested that Schufa disclose information about its automated decision-making processes under Article 15(1)(h) GDPR.

By way of reminder, Article 22 GDPR restricts the taking of a decision about a data subject based solely on automated processing, where that decision produces legal effects concerning him or her or similarly significantly affects him or her.  Such a decision may only be taken under one of a limited number of grounds, and data subjects have an absolute right to contest the decision and obtain human intervention in the decision.

Article 15(1)(h) GDPR, meanwhile, is the component of the ‘right of access’ that allows a data subject to obtain, from the responsible controller, information about automated decision-making, including its ‘logic’ and its consequences.

Schufa rejected the assertion that it was responsible for automated decision-making, arguing that its role was to produce an automated score and that the relevant decision (whether to grant the loan) was taken by the third-party bank. 

Key Findings

The court rejected Schufa’s argument and held that the creation of the credit score was, itself, a relevant automated decision for the purposes of Article 22 GDPR.  This runs contrary to the previous received wisdom that only the ultimate decision-maker – in this case, the bank using the credit score to decide on the loan application – was engaging in automated decision-making.

The following factors were central to the court’s conclusion on this point:

  • The score produced by Schufa was considered to play a ‘determining role’ in the decision about whether to grant credit. 
  • The court adopted a broad interpretation of the term ‘decision’, finding that it could encompass ‘a number of acts which may affect the data subject in many ways’.  Consequently, it did not matter that the ultimate decision about whether to grant credit was not taken by Schufa – there was a sufficiently close nexus between Schufa’s decision about what score to award and the subsequent credit decision.
  • Applying a purposive approach, the court also took into account the fact that Schufa was in a much better position than its customer to satisfy the Article 15 GDPR request and to provide meaningful information about the automated decision-making process, including its logic.

Implications

Businesses using algorithms or other automated processes to produce risk scores or similar outputs (for example, identity verification, fraud detection) are likely to be understandably concerned by the potential implications of this judgment.  In general, such companies have developed business models that assume the customer will bear the regulatory risk and responsibility associated with any decision taken using the company’s outputs. 

However, it is important that such companies read this judgment carefully and consider the ways in which their business models may be distinguished from those considered in Schufa.  For example:

  • To what extent does the company’s customer rely solely or predominantly on the provided output when making a decision?  If the output is only one of a number of factors taken into account by the customer, and in particular if the customer attaches only a moderate degree of weight / significance to this factor, then the circumstances may be sufficiently different (see the illustrative sketch after this list). If not, it will be important that the company ensures that customers can rely on one of the exceptions to Article 22 GDPR, namely: explicit consent or necessity for a contract between the customer and the data subject. Member State law can also provide for an authorisation, where such authorisation lays down “suitable measures” to safeguard the data subject’s rights and freedoms.
  • Is the ultimate decision one that has a legal or similarly significant effect?  For example, a company may be specialised in producing automated marketing profiles / segmentations that are then relied upon by a customer to determine the marketing content to be sent to a consumer.  However, other than in limited special circumstances, it is unlikely that the decision about what marketing content to send to a consumer will constitute a ‘significant’ decision for Article 22 GDPR purposes. For example, in relation to Schufa, it is likely that many of Schufa’s customers do not use the credit scores provided for decisions that have a significant effect on the data subject – for example where the customer is an online shop and only uses the data to decide whether to request payment from a specific customer before or after delivery of their goods or services.
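The first of these points can be illustrated with a simple, purely hypothetical sketch (the factor names and weights below are invented for illustration): a customer whose decision genuinely rests on several inputs might attach only a moderate weight to the externally provided score, in contrast to the Schufa scenario, in which the score played a determining role.

def lending_decision(external_score: float,
                     internal_affordability: float,
                     account_history: float,
                     external_weight: float = 0.2) -> str:
    """Hypothetical sketch: the externally provided score is only one of
    several inputs, all normalised to a 0..1 range, and carries a moderate
    weight rather than playing a determining role in the outcome."""
    internal_weight = (1.0 - external_weight) / 2
    combined = (external_weight * external_score
                + internal_weight * internal_affordability
                + internal_weight * account_history)
    return "approve" if combined >= 0.5 else "refer to human underwriter"

# A weak external score does not by itself dictate the outcome where the
# customer's own data points the other way.
print(lending_decision(external_score=0.3,
                       internal_affordability=0.8,
                       account_history=0.7))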

In a quirk of timing, we note that the Schufa judgment was handed down in the same week that the trilogue process around the EU AI Act concluded.  The use of AI systems to make decisions about the offering of credit is one of a number of ‘high risk’ use cases found in the Act.  Going forward, it looks likely that Schufa will become an important touchstone for businesses developing AI-enabled solutions that are relied upon by customers of those businesses in important decision-making processes.  
