UK: Google's U-Turn on Device Fingerprinting: ICO's Response and Subsequent Guidance
https://privacymatters.dlapiper.com/2025/01/googles-u-turn-on-device-fingerprinting-icos-response-and-subsequent-guidance/ (30 January 2025)

In December 2024, the Information Commissioner's Office (ICO) responded to Google's decision to lift its prohibition on device fingerprinting (which involves collecting and combining information about a device's software and hardware for the purpose of identifying the device) for organisations using its advertising products, effective from 16 February 2025 (see an overview of Google's new Ads Platforms policies here). This follows Google's earlier decision, in July 2024, to retain third-party cookies.

In its response, the ICO criticised Google's decision to permit device fingerprinting for advertising purposes as "irresponsible" and emphasised that device fingerprinting:

  1. Requires Consent: device fingerprinting enables a device to be identified even where cookies are blocked or its location is disguised, hence its common use for fraud prevention purposes, but the ICO reinforced that it remains subject to the usual consent requirements (see the sketch following this list).
  2. Reduces User Control: Despite various browsers now offering “enhanced” tracking protection, the ICO stated that device fingerprinting is not a fair means of tracking users online as it diminishes people’s choice and control over how their information is collected.
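
To make concrete what "collecting and combining information about a device's software and hardware" means in practice, here is a minimal, hedged sketch of browser-side fingerprinting. It is illustrative only, not any vendor's actual implementation; the properties read are standard Web APIs, but real fingerprinting scripts combine far more signals.

```typescript
// Illustrative only: a simplified browser fingerprint built from standard
// Web APIs. Real scripts add canvas/WebGL rendering, fonts, audio, etc.
async function deviceFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                                     // browser/OS build
    navigator.language,                                      // locale
    String(navigator.hardwareConcurrency),                   // CPU core count
    `${screen.width}x${screen.height}x${screen.colorDepth}`, // display
    Intl.DateTimeFormat().resolvedOptions().timeZone,        // timezone
  ].join('|');

  // Hash the combined signals into a stable but opaque identifier.
  const digest = await crypto.subtle.digest(
    'SHA-256',
    new TextEncoder().encode(signals),
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}
```

Because nothing here sets a cookie, blocking or clearing cookies does not disturb the identifier, which is precisely why the ICO regards the technique as diminishing user choice and control.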

This statement echoes concerns previously voiced by Google itself, which had stated that device fingerprinting "subverts user choice and is wrong".

With the potential for fingerprinting to replace the long-debated third-party (3P) cookie functionality, this statement forms part of a shift in regulatory focus to technologies beyond cookies. Various technologies have recently received greater scrutiny, both in the ICO's Draft Guidance on the use of storage and access technologies ("ICO's Draft Guidance"), interestingly issued in December 2024 to coincide with the Google update, and in the European Data Protection Board (EDPB) Guidelines 2/2023 on Technical Scope of Art. 5(3) of ePrivacy Directive.

ICO Draft Guidance: Key Takeaways

The ICO’s Draft Guidance explores the practical application of the Privacy and Electronic Communications Regulations (PECR) requirement that consent must be obtained by the user for any storage or access of information on/from a device (‘terminal equipment’), unless such storage/access is strictly necessary for the purposes of a communication or to provide a service requested by the user.

In particular, the Draft Guidance addresses the following areas, each explored in its own section below: technologies; detailed consent requirements; exemptions to consent (strictly necessary); and governance and compliance.

Technologies

The ICO’s Draft Guidance looks at how and why the rules relating to storage and access of device information apply to various types of technologies used in web browsers, mobile apps or connected devices, namely: Cookies; Tracking Pixels, Link Decoration and Navigational Tracking, Web Storage, Scripts and tags, and Fingerprinting techniques. The technologies focused on by the ICO overlap to a large extent with those examples used by the EDPB in their guidelines. However, taking the analysis on pixels as an example, the EDPB suggests that any distribution of tracking links/pixels to the user’s device (whether via websites, emails, or text messaging systems) is subject to Regulation 5(3) of the ePrivacy Directive as it constitutes ‘storage’ even if only temporarily via client-side caching.  The ICO’s guidance is less clear, suggesting that tracking pixels are only subject to Regulation 6 Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) when they store information on the user’s device. This might imply a less expansive view compared to the EDPB, highlighting the importance of remaining alive to jurisdictional nuances for any global tracking campaigns.

Detailed Consent Requirements

The ICO reiterates that for a PECR consent to be valid, it must meet UK GDPR standards: a freely given, specific, informed and unambiguous indication of the individual's wishes, expressed by a clear affirmative action.

    The ICO highlights that, where personal data is processed, consent must be provided by the data subject (in contrast to PECR's user/subscriber consent requirement). This tension is an existing issue, but it remains unclear quite how the party collecting consent for personal data processed via cookies (or a similar technology) is supposed to know whether the user of a device has changed, without either requiring re-consent or user identification on each visit, or carrying out background identification using fingerprinting or similar (which means more data processing and may itself be intrusive).

    In line with recent ICO statements in relation to the lack of ‘reject all’ options, the ICO emphasises that subscribers/users must be able to refuse the use of storage and access technologies as easily as they can consent. Additional points of interest for controllers include:

    • That users must have control over any use of non-essential storage and access technologies. While this could, on a conservative reading, be interpreted as requiring US-style granular per-cookie consent, the examples provided suggest that high-level consent mechanisms expressed per category (e.g., analytics, social media tracking, marketing) are still acceptable (a sketch of this per-category approach follows this list);
    • Clarification that you must specifically name any third parties whose technologies you are requesting consent for (this information can be provided in a layered fashion, provided it is very clear). However, if controls are not required at an individual cookie level, which seems to be the case, this becomes less meaningful for data subjects, who cannot act on this additional information: they can only reject all storage and access technologies for each purpose category (e.g. all analytics cookies/technologies) rather than those of a particular third party; and
    • Clarification that users must be provided with controls over any use of storage and access technologies for non-essential purposes (albeit this was arguably already required in order to facilitate withdrawal of consent and changing of preferences on an ongoing basis).
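
A minimal sketch of what per-category (rather than per-cookie) consent gating might look like follows; the category names and the tag-loading helper are illustrative assumptions, not anything prescribed by the ICO.

```typescript
// Illustrative per-category consent state, reflecting the high-level
// categories in the ICO's examples. All names are assumptions.
type ConsentCategory = 'analytics' | 'socialMedia' | 'marketing';
type ConsentState = Record<ConsentCategory, boolean>;

// Default every non-essential category to "off": consent must be a positive
// act, and refusing must be as easy as accepting.
const defaultState: ConsentState = {
  analytics: false,
  socialMedia: false,
  marketing: false,
};

// Hypothetical helper: only inject a third party's tag once the user has
// opted in to the category it belongs to.
function loadTagIfConsented(
  state: ConsentState,
  category: ConsentCategory,
  scriptUrl: string,
): void {
  if (!state[category]) return; // no consent, no storage or access
  const tag = document.createElement('script');
  tag.src = scriptUrl;
  document.head.appendChild(tag);
}
```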

    Exemptions to consent: Strictly Necessary

    Leaving aside technologies necessary for communications, the ICO emphasises that the "strictly necessary" exemption applies where the purpose of the storage or access is essential to provide the service the subscriber or user requests. Helpfully, the ICO Draft Guidance clarifies that technologies used to comply with applicable law (e.g. meeting security requirements) can be regarded as "strictly necessary", such that no consent is required. This will not apply if there are other ways to comply with the legislation without using cookies or similar technologies.

    Other examples of activities likely to meet the exemption include: (i) ensuring the security of terminal equipment; (ii) preventing or detecting fraud; (iii) preventing or detecting technical faults; (iv) authenticating the subscriber or user; and (v) recording information or selections made on an online service.
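
One way to operationalise the exemption is to record, for each technology in use, its purpose and whether that purpose falls within the ICO's examples above. The registry entry below is an illustrative assumption (the Draft Guidance prescribes no format), but it maps directly onto the exempt purposes listed.

```typescript
// Illustrative audit-registry entry for a storage/access technology.
// Field names are assumptions; purposes mirror the ICO's examples.
interface StorageTechEntry {
  name: string;               // e.g. a cookie name or localStorage key
  provider: string;           // first party, or the named third party
  purpose:
    | 'security'              // likely exempt
    | 'fraud-prevention'      // likely exempt, but see the ambiguity below
    | 'fault-detection'       // likely exempt
    | 'authentication'        // likely exempt
    | 'user-selections'       // likely exempt
    | 'analytics'             // consent required
    | 'advertising';          // consent required
  strictlyNecessary: boolean; // true only for the exempt purposes
}

const sessionCookie: StorageTechEntry = {
  name: 'session_id',
  provider: 'first-party',
  purpose: 'authentication',
  strictlyNecessary: true,    // exempt: no consent needed for this one
};
```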

    One area of ambiguity remains in relation to fraud prevention and detection. In the financial services sector, websites and apps often use third-party fingerprinting for fraud detection (in order to meet legal obligations to ensure the security of their services). 'Preventing or detecting fraud' is listed as an example of an activity likely to meet the exemption, whilst third-party fingerprinting for fraud prevention is used by the ICO as an example of an activity subject to Regulation 6 of PECR, with the implication that consent is needed (albeit this is not stated). However, the DUA Bill (if passed in its current form) provides some helpful clarity here, as it states that use of such technologies should be regarded as "strictly necessary" where used to protect information, for security purposes, to prevent or detect fraud or technical faults, to facilitate automatic authentication, or to maintain a record of selections made by the user.

    Interestingly, the guidance suggests that the use of social media plugins/tools by logged-in users might be strictly necessary, though this does not extend to logged-out users, users who are not a member of that network, or any associated tracking.

    Governance and compliance

    A number of the ICO’s clarifications are likely to impact day to day resourcing and operations for any organisation using material numbers of storage and access technologies:

    • Governance: the ICO sets out what it expects in respect of governance of storage and access requirements, including an audit checklist, emphasising the need to regularly audit the use of such technologies and to ensure that the rest of the consent ecosystem (including transparency, consent, data sharing, and subsequent processing) is consistent and up to date. This is likely to be resource intensive, and few organisations will be set up for this level of assurance.
    • Transparency: the ICO guidance reinforces the need for transparency around whether any third parties will store or access information on the user's device or receive this information, making clear that all third parties providing cookies or receiving data must be named (avoiding ambiguous references to "partners" or "third parties"), and that specific information must be provided about each, taking into account UK GDPR considerations where personal data is processed. This will be a considerable challenge for complex ecosystems, most notably in the context of online advertising (albeit this has been a known challenge for some time).
    • Consent Ecosystem: the guidance makes very clear that a process must be in place for passing on a user's withdrawal of consent. In practice, the entity collecting the consent is responsible for informing third parties when consent is no longer valid (see the sketch after this list). This is crucial but challenging to comply with, and is again perhaps most relevant in the context of online advertising.
    • Subsequent Processing: as it has done in the past, the ICO continues to suggest strongly that any subsequent processing of personal data obtained via storage/access technologies on the basis of consent should also be based on consent, going as far as to suggest that reliance on an alternative lawful basis (e.g. legitimate interests) may invalidate any initial consent received.
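
The withdrawal-propagation duty implies some machinery on the collecting party's side. The sketch below is hypothetical (the endpoint names and payload are assumptions, not part of any standard): on withdrawal, the collecting entity stops its own processing and signals every named recipient that the consent underpinning the data is no longer valid.

```typescript
// Hypothetical: the third parties named in the consent notice, each with an
// agreed endpoint for receiving withdrawal signals.
const namedThirdParties = [
  { name: 'Example Analytics Ltd', webhook: 'https://analytics.example/consent' },
  { name: 'Example Ads GmbH', webhook: 'https://ads.example/consent' },
];

// Tell every recipient of the data that consent has been withdrawn for the
// given purpose category, so they too must stop processing.
async function propagateWithdrawal(userId: string, category: string): Promise<void> {
  await Promise.all(
    namedThirdParties.map((tp) =>
      fetch(tp.webhook, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ userId, category, event: 'consent_withdrawn' }),
      }),
    ),
  );
}
```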

    Conclusion

    As device fingerprinting and other technologies evolve, it is crucial for organisations to stay informed, ensure compliance with the latest guidance, and remain alert to the nuances between EU and UK regulation.

    The ICO’s Draft Guidance provides helpful clarity on existing rules in the UK, including detailed examples of how to conduct cookie audits, but does not otherwise provide practical guidance on how to overcome many of the operational privacy challenges faced by controllers (such as monitoring changing users and managing consent withdrawals within online advertising ecosystems).

    With increasing regulatory commentary and action in this space, including the ICO’s most recent announcement regarding its focus on reviewing cookie usage on the biggest UK sites, now is the time to take stock of your tracking technologies and ensure compliance!

    The ICO’s Draft Guidance is currently open for consultation, with input sought by 5pm on Friday 14th March 2025. If you have any questions or would like to know more, please get in touch with your usual DLA contact.

    UK: Data protection authority issues reprimand to gambling operator for unlawfully processing personal data
    https://privacymatters.dlapiper.com/2024/09/uk-data-protection-authority-issues-reprimand-to-gambling-operator-for-unlawfully-processing-personal-data/ (25 September 2024)

    On 16 September 2024, the UK's data protection authority, the Information Commissioner's Office (ICO), issued a reprimand against Sky Betting and Gaming (SkyBet) for unlawfully processing people's data through advertising cookies without their consent.

    Between 10 January and 3 March 2023, SkyBet's website dropped third-party AdTech cookies onto visitors' browsers before visitors could accept or reject them via a cookie banner. As a result, visitors' personal data (e.g., device information and unique identifiers) was shared automatically with third-party AdTech companies without visitors' consent or a lawful basis. The cookies were deployed to allow advertising to be placed on other websites viewed by the visitor.
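
The failure here was one of timing: the AdTech tags executed before the banner could return an answer. A hedged sketch of the ordering that avoids it follows; the banner function and tag list are illustrative assumptions, not SkyBet's or any vendor's actual code.

```typescript
// Illustrative fix for this timing failure: third-party AdTech tags must
// not run until the cookie banner has returned the visitor's choice.
// `showCookieBanner` is a hypothetical function resolving to that choice.
declare function showCookieBanner(): Promise<{ advertising: boolean }>;

const adTechTags = ['https://adtech.example/tag.js']; // hypothetical list

async function initAdTech(): Promise<void> {
  const choice = await showCookieBanner(); // block until accept/reject
  if (!choice.advertising) return;         // rejected: load nothing at all
  for (const url of adTechTags) {
    const tag = document.createElement('script');
    tag.src = url;
    document.head.appendChild(tag);        // only now may cookies be set
  }
}
```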

    Whilst the ICO found no evidence of deliberate misuse of personal data to target vulnerable gamblers, it reprimanded SkyBet because it processed personal data in a way that was not lawful, transparent or fair.

    This reprimand forms part of the ICO’s wider strategy to ensure that individuals’ rights and freedoms are respected. The ICO has recently reviewed the UK’s most-visited 100 websites and contacted more than half to warn of enforcement action. Many are reported to have implemented improvements, such as displaying a “reject all” button or presenting “accept all” and “reject all” options on an equal footing.

    The ICO intends to assess the next 100 most-frequented websites and urges all organisations to assess their cookie banners to ensure that consent is freely given. The ICO also intends to publish guidance on cookies and tracking technologies before the end of the year.

    DLA Piper advises all businesses on cookie compliance and is currently engaged by several businesses operating in the AdTech ecosystem on assessing risk exposure and responding to ICO engagement. Should you wish to discuss this further, please reach out to your regular DLA Piper contact or the authors of this blog.

    Keeping an 'AI' on your data: UK data regulator recommends lawful methods of using personal information and artificial intelligence
    https://privacymatters.dlapiper.com/2022/11/keeping-an-ai-on-your-data-uk-data-regulator-recommends-lawful-methods-of-using-personal-information-and-artificial-intelligence/ (8 November 2022)

    Authors: Jules Toynton, Coran Darling

    Data is often the fuel that powers the AI used by organisations. It tailors search parameters, spots behavioural trends, and predicts possible future outcomes (to highlight just a few uses). In response, many of these organisations seek to accumulate and use as much data as possible, in order to make their systems work that little bit faster or more accurately.

    In many cases, provided the data is not subject to copyright or other such restrictions, this raises few issues: organisations are able to amass large quantities of data that can be used initially to train their AI systems or, after deployment, to keep their datasets updated so that the latest and most accurate data is used.

    This becomes a potential issue when the data being collected and used is personal information. For example, the principle of 'data minimisation' requires that only the necessary amount and type of personal data is used to develop an AI system. This is at odds with the 'data hoarding' corporate mentality described above, which seeks to know as much detail as possible. Furthermore, the principle of 'purpose limitation' places several restrictions on the re-use of historic data sets to train AI systems. This may cause particular headaches when working with an AI vendor that wishes to further commercialise AI which has benefited from the learnings and developments of your data in a way that goes beyond the purpose for which the data was originally provided.

    It is, however, acknowledged by the Information Commissioner's Office ("ICO"), the UK's data regulator, that AI and personal data will forever be interlinked, unavoidably so in certain situations. In response, in November 2022, the ICO released guidance on how organisations can use AI and personal data appropriately and lawfully, in accordance with the data privacy regime of the UK. The guidance is supplemented by answers to a number of questions frequently raised when combining AI with personal data, including: should I carry out an impact assessment; do outputs need to comply with the principle of accuracy; and do organisations need permission to analyse personal data?

    In this article we discuss some of the key recommendations in the context of the wider regulatory landscape for data and AI.

    Key Recommendations:

    The guide offers eight methods organisations can use to improve their handling of AI and personal information.

    Take a risk-based approach when developing and deploying AI:

    A first port of call for organisations should be an assessment of whether AI is needed for what they seek to deploy. AI that engages with personal information will typically fall within the 'high-risk' category for the purposes of the proposed EU AI Regulation ("AI Act") (and likely a similar category within the developing UK framework). This will result in additional obligations and measures that the organisation will be required to follow when deploying the AI. The ICO therefore recommends a less technical and more privacy-preserving alternative where possible.

    Should AI be chosen after this, a data privacy impact assessment should be carried out to identify and minimise the data risks that the AI poses to data subjects, as well as to mitigate the harm it may cause. At this stage the ICO also recommends consulting different groups who may be impacted by the use of AI in this context, to better understand the potential risks.

    Consider how decisions can be explained to the individuals affected:

    As the ICO notes, it can be difficult to explain how AI arrives at certain decisions and outputs, particularly in the case of machine learning and complex algorithms where input values and trends change based on the AI’s ability to learn and teach itself based on the data it is fed.

    Where possible, the ICO recommends that organisations:

    • be clear and open with subjects on how and why personal data is being used;
    • consider what explanation is needed in the context that the AI will be deployed;
    • assess what explanations are likely to be expected;
    • assess the potential impact of AI decisions to understand the detail required in explanations; and
    • consider how individual rights requests will be handled.

    The ICO has acknowledged that this is a difficult area of data privacy and has provided detailed guidance, co-badged with the Alan Turing Institute, on "Explaining decisions made with AI".

    Limit data collection to only what is needed:

    Contrary to the widely held organisational beliefs described above, the ICO recommends that data collection is kept to a minimum where possible. This does not mean that data cannot be collected, but rather that appropriate consideration must be given to the data that is collected and retained.

    Organisations should therefore:

    • ensure that the personal data you use is accurate, adequate, relevant and limited, based on the context of the use of the AI; and
    • consider which techniques can be used to preserve privacy as much as practical. For example, as the ICO notes, synthetic data or federated learning could be used to minimise the personal data being processed.

    It should be noted that data protection's accuracy principle does not mean that an AI system needs to be 100% statistically accurate (which is unlikely to be practically achievable). Instead, organisations should factor in the possibility of inferences or decisions being incorrect, and ensure that there are processes in place to safeguard fairness and overall accuracy of outcome.

    Address risks of bias and discrimination at an early stage:

    A persistent concern throughout many applications of AI, particularly those interacting with sensitive data, is bias and discrimination. This is made worse where the training data over-represents one group or pattern, as the biases present in such data will form part of the essential decision-making process of the AI, thereby 'hardwiring' bias into the system. All steps should therefore be taken to achieve as much variety as possible within the data used to train AI systems (to the extent that it still reflects the wider population accurately).

    To better understand this issue, the ICO recommends that organisations:

    • assess whether the data gathered is accurate, representative, reliable, relevant, and up-to-date with the population or different sets of people with which the AI will be applied; and
    • map out consequences of the decisions made by the AI system for different groups and assess whether these are acceptable from a data privacy regulatory standpoint as well as internally.

    Where AI does produce biased or discriminatory decisions, this is likely to conflict with the requirement for processing of personal data to be fair, as well as with the obligations of several other more specific regulatory frameworks. A prime example is the Equality Act, which prohibits discrimination on the grounds of protected characteristics, by AI or otherwise. Organisations should take care to ensure that decisions are made in a way that avoids repercussions under the wider data privacy and AI regimes, as well as those specific to the sectors and activities in which they are involved.
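
Mapping consequences across groups can start with something as simple as comparing rates of favourable outcomes. The sketch below computes per-group selection rates and flags disparities; the record shape and the four-fifths threshold are illustrative assumptions (the threshold is a well-known US benchmark, used here purely for illustration, not a UK legal test).

```typescript
// Illustrative fairness check: compare favourable-outcome rates per group.
interface Decision {
  group: string;       // an audited characteristic, e.g. an age band
  favourable: boolean; // did the AI produce the favourable outcome?
}

function selectionRates(decisions: Decision[]): Map<string, number> {
  const totals = new Map<string, { fav: number; all: number }>();
  for (const d of decisions) {
    const t = totals.get(d.group) ?? { fav: 0, all: 0 };
    t.all += 1;
    if (d.favourable) t.fav += 1;
    totals.set(d.group, t);
  }
  const rates = new Map<string, number>();
  for (const [group, t] of totals) rates.set(group, t.fav / t.all);
  return rates;
}

// Flag groups whose rate falls below 80% of the best-treated group's rate
// (the "four-fifths" rule, used here purely as an illustrative threshold).
function flagDisparity(rates: Map<string, number>, threshold = 0.8): string[] {
  const best = Math.max(...rates.values());
  return [...rates].filter(([, rate]) => rate < best * threshold).map(([g]) => g);
}
```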

    Dedicate time and resources to preparing data:

    As noted above, the quality of an AI’s output is only going to be as good as the data it is fed and trained with. Organisations should therefore ensure sufficient resources are dedicated to preparing the data to be used.

    As part of this process, organisations should expect to:

    • create clear criteria and lines of accountability about the labelling of data involving protected characteristics and/or special category data;
    • consult members of protected groups where applicable to define the labelling criteria; and
    • involve multiple human labellers to ensure consistency of categorisation and delineation and to assist with fringe cases.

    Ensure AI systems are made and kept secure:

    It should be of little surprise that the addition of new technologies can create new security risks (or exacerbate current ones). In the context of the AI Act and UK data privacy regulation (and, indeed, once a more established UK AI regime emerges), organisations are, or will be, legally required to implement appropriate technical and organisational measures, ensuring security protocols suitable for the risk associated with the information.

    In order to do this, organisations could:

    • complete security risk assessments to create a baseline understanding of where risks are present;
    • complete model debugging on a regular basis; and
    • proactively monitor the system and investigate any anomalies (in some cases, the AI Act and any future UK AI framework may require human oversight as an additional protective measure regardless of the data privacy requirement).

    Human review of AI outcomes should be meaningful:

    Depending on the purpose of the AI, it should be established early on whether its outputs will support a human decision-maker or whether decisions will be made solely autonomously. As the ICO highlights, data subjects deserve to know whether decisions involving their data have been made purely autonomously or with the assistance of AI. Where outputs are used to assist a human, the ICO recommends that they are reviewed in a meaningful way.

    This would therefore require that reviewers are:

    • adequately trained to interpret and challenge outputs made by AI systems;
    • sufficiently senior to have the authority to override automated decisions; and
    • able to account for additional factors that were not included as part of the initial input data.

    Data subjects have the right under the UK GDPR not to be subject to a solely automated decision, where that decision has a legal or similarly significant effect, and also have the right to receive meaningful information about the logic involved in the decision. Therefore, although worded as a recommendation, where AI is making significant decisions, meaningful human review becomes a requirement (or at least must be available on request).
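
A hedged sketch of how meaningful review might be wired into a decision pipeline follows: decisions with significant effect (or low model confidence) are routed to a reviewer empowered to override. The types and the confidence threshold are illustrative assumptions.

```typescript
// Illustrative routing: significant automated decisions always receive a
// meaningful human review by someone able to override the model.
interface AiDecision {
  subjectId: string;
  outcome: 'approve' | 'decline';
  confidence: number;         // model score in [0, 1]; threshold is assumed
  significantEffect: boolean; // legal or similarly significant effect
}

interface Reviewer {
  canOverride: boolean;       // sufficiently senior to reverse the model
  review(d: AiDecision, extraFactors: string[]): 'approve' | 'decline';
}

function finalise(
  d: AiDecision,
  reviewer: Reviewer,
  extraFactors: string[],     // factors not in the initial input data
): 'approve' | 'decline' {
  if (d.significantEffect || d.confidence < 0.7) {
    if (!reviewer.canOverride) {
      throw new Error('review would not be meaningful'); // no rubber-stamping
    }
    return reviewer.review(d, extraFactors); // reviewer may depart from model
  }
  return d.outcome; // low-stakes: the automated outcome stands
}
```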

    Work with external suppliers involved to ensure that AI is used appropriately:

    A final recommendation offered by the ICO is that, where AI is procured from a third party, the supplier is involved in ensuring it is used appropriately. While it is usually the organisation's responsibility (as controller) to comply with all regulations, this can be achieved more effectively with the involvement of those who create and supply the technology.

    In order to comply with the obligations of both the AI Act and relevant data privacy regulations, organisations would therefore be expected to:

    • choose a supplier by carrying out appropriate due diligence ahead of procurement;
    • work with the supplier to carry out assessments prior to deployment, such as impact assessments;
    • agree and document roles and responsibilities with the external supplier, such as who will answer individual rights requests;
    • request documentation from the external supplier that demonstrates they implemented a privacy by design approach; and
    • consider any international transfers of personal data.

    When working with some AI providers (for example, larger providers that develop AI for a wide range of applications and also offer services to tailor their AI solutions for particular customers, and to commercialise those learnings), it may not be clear whether the provider is a processor or a controller (or even a joint controller with the client for some processing). Where that company has enough freedom to use its expertise to decide what data to collect and how to apply its analytic techniques, it is likely to be a data controller as well.

    Get in touch 

    For more information on AI and the emerging legal and regulatory standards visit DLA Piper’s focus page on AI.

    You can find a more detailed guide on the AI Regulation and what’s in store for AI in Europe in DLA Piper’s AI Regulation Handbook.

    To assess your organisation’s maturity on its AI journey in (and check where you stand against sector peers) you can use DLA Piper’s AI Scorebox tool.

    You can find more on AI, technology, data privacy, and the law at Technology’s Legal Edge, DLA Piper’s tech-sector blog and Privacy Matters, DLA Piper’s Global Privacy and Data Protection resource.

    DLA Piper continues to monitor updates and developments of AI and its impacts on industry in the UK and abroad. For further information or if you have any questions, please contact the authors or your usual DLA Piper contact.

    UK: ICO issues fine of £4.4m to Interserve for security failings
    https://privacymatters.dlapiper.com/2022/10/ico-issue-fine-of-4-4-to-interserve-for-security-failings/ (25 October 2022)

    Authors: Ross McKean, Henry Pelling

    On 24 October 2022, the ICO issued a monetary penalty notice (MPN) to Interserve Group Limited (Interserve), imposing a fine of £4.4m for violations of the GDPR (the violations pre-dated Brexit). The ICO found that Interserve had failed to put appropriate technical and organisational measures in place to secure personal data (in contravention of Articles 5(1)(f) and 32 GDPR) for a period of approximately 20 months.

    The Incident

    The incident followed what is proving to be a familiar fact pattern. A phishing email, designed to make it appear that the attached document needed urgent action, was sent to a group employee. The subsequent download and ZIP extraction resulted in the installation of malware onto the workstation, giving the threat actor access to it (Patient Zero). This was flagged by Interserve's endpoint protection system, which reported that automatic removal of the malware had been successful. Interserve took no further action to verify this, and the threat actor continued to have ongoing access to the workstation.

    Following initial access, a server was compromised and then used to "move laterally" within the Interserve estate (i.e., moving from the initial point of compromise to other parts of the victim's IT estate). In the subsequent days, the threat actor compromised 283 systems and 16 accounts (12 of them privileged admin accounts) across the estate. A privileged account was then used by the threat actor to uninstall Interserve's anti-virus solution to prevent detection of the malware being used. The attacker then compromised four HR databases containing data on 113,000 employees and former employees. The databases were encrypted and rendered unavailable to Interserve. Regulatory notifications followed to the NCA, the NCSC and the ICO.

    The personal data held on the compromised databases comprised a common HR data set, including employees' and former employees': telephone numbers; email addresses; national insurance numbers; bank account details; marital status; birth dates; education; countries of birth; genders; number of dependants; emergency contact information; and salary. The databases also held special category personal data, including ethnic origin; religion; details of disabilities; sexual orientation; and health information relevant to ill-health retirement applications. Interestingly, each of these items of information was not necessarily held for each of the 113,000 individuals; rather, these categories of information were recorded in the relevant databases. Under Article 33(3)(a) GDPR, an organisation is only obliged to describe the approximate categories and numbers of personal data records when notifying the ICO, which appears to have been the approach adopted by Interserve.

    Digest of points to note in the MPN

    The MPN is littered with useful insights into ICO enforcement and provides further detail on what the ICO expects with regard to the principle-based obligations in Articles 5(1)(f) and 32 GDPR. We found the following points of particular interest:

    • % of revenue. On the face of it, this is a sizeable fine issued to a non-household-name controller for perceived failings in information security. Dig a little deeper and, in fact, the fine appears to be a relatively small percentage of Interserve's last reported revenues (less than 1/5th of 1%).

    It is nevertheless a significant amount of money, and the reputational damage arising from a public fine was also taken into consideration by the ICO when setting it. The fact that the fine is a relatively small percentage of revenues may indicate that the new Information Commissioner, John Edwards, favours a less aggressive approach to enforcement than his predecessor, Elizabeth Denham, at least when it comes to setting the level of fines. Lower fines are also less likely to result in successful appeals and to tie up the ICO's enforcement team with legal arguments.

    A key open legal question remains whether the correct maximum when calculating fines under the UK GDPR (NB this MPN was issued under the EU GDPR) is a) the greater of 2% of turnover or £8.7 million; or b) the greater of 4% of turnover or £17.5 million (in each case, turnover being total worldwide annual turnover of the preceding financial year). The ICO has previously taken the position that the higher limit applies, though this has not yet been tested on appeal and there are good arguments that the lower maximum should apply.

    • One group controller to rule them all: Interserve was held to be the relevant controller for the purposes of enforcement, regardless of the fact that the incident and the security failings spanned numerous group companies. Interserve was the parent company; it was responsible for info-sec for the group and employed the individuals working in information security. Enforcement against multiple entities in the same group is complicated and time-consuming. It is much simpler for the ICO to target the parent company when that company is responsible for info-sec for the entire group.
    • Paper-based compliance represents a small and incomplete part of the picture. Central to the decision (and another recurring point of failure) was that Interserve had extensive info-sec policies and standards; however, these policies were neither implemented nor subject to appropriate oversight (despite the fact that the executive was aware of issues with the Interserve estate). While policies and procedures are an essential part of any compliance programme as the "paper shield", without the resources and budgets needed to implement and oversee them effectively they can become a liability for controllers, providing an easy way for regulators to prove breach. Employee training remains a key consideration for the ICO in the context of post-incident enforcement. The Interserve MPN is yet another reminder of the importance of regular and effective training.
    • Period for assessing duration of infringement / enforcement: the "relevant period" for the ICO's assessment of the duration of the infringement was held to start when Interserve became the relevant controller (following the winding up of another group company) and did not end until remediation was complete. This emphasises the importance of remediating any gaps in security measures promptly to meet the legal standard of care. Any delay to remediation will extend the duration of the infringement, aggravating the risk of fines and potentially compounding losses caused to data subjects. The MPN also provides an insight into the timing of, and procedural steps around, ICO enforcement. The Notice of Intent was not served on Interserve until almost two years after the Article 33 notification was made to the ICO. A month later, Interserve provided written representations in response to that notice. The ICO updated the notice and invited supplemental representations, which Interserve made. The final procedural step was an ICO meeting roughly four weeks before the MPN was published.
    • What was the risk of harm to the individuals? An eagle-eyed reader may ask what the risk to the data subjects was here. There was no evidence of exfiltration, and one view may be that the threat actor applied encryption in an attempt to extort money from, or cause nuisance to, Interserve rather than to cause harm to the individuals (e.g., fraud).

    The ICO found that all the data subjects had their personal data processed unlawfully and that the processing had the potential to cause concern, anxiety and stress, because: (a) the data had been accessed by criminal actors with malicious intent; (b) the personal data compromised included data commonly used to facilitate identity or financial fraud (home addresses, bank account details, pay slips, passport data and national insurance numbers); (c) special category data was compromised, which is particularly sensitive (per Recital 51): employees may be content to share such data with their employer, but they would not want it accessed by malicious individuals; (d) the compromised data included salary details, which enable social and financial profiling and are dangerous in the hands of threat actors; and (e) while there was no evidence of exfiltration, the ICO could not rule out the possibility, and the risks of exfiltration remained significant: privileged accounts could exfiltrate data, advanced groups can prevent detection of exfiltration, and measures that can identify exfiltration (firewall filtering and logging) were not implemented until after the incident.

    • What should you be discussing with your Info-Sec team? While an ICO MPN does not necessarily reflect the legal standard of care (the ICO does not make the law), it is an indication of the ICO's view of that standard at the date of the incident. In particular, the ICO considered that the following gaps and deficiencies fell short of the standard required by Articles 5(1)(f) and 32 GDPR:
      • outdated operating systems/protocols;
      • inadequate end point protection (outdated / firewalls not enabled);
      • no pen tests conducted for two years prior to the incident;
      • inadequate investigation by the info-sec team; and
      • poor privileged account management.

    It would be prudent for organisations to check that their own IT estates do not suffer from the same shortcomings. As with previous decisions, regulatory guidance and standards (NIST / NCSC) continue to be an appropriate benchmark. The MPN strongly implies that Interserve spent considerable amounts remediating in accordance with ICO expectations. Remediation before a cyber incident is invariably less costly, stressful and damaging to an organisation's reputation and balance sheet than remediation after one.

    We continue to advise clients frequently, both on incident response and on pro-active cyber assurance and resilience. If you need any advice in this area, please do reach out to your DLA contact.

    Authors: Ross McKean (Partner and co-chair of the UK data protection and cyber security practice) and Henry Pelling (Senior Associate in the DLA data protection and cyber security practice).
