Author: Sarah Birkett

Anyone with a passing interest in Australian privacy laws will no doubt have heard about the Optus data breach. The incident, which was made public in late September 2022, is thought to have affected around 9 million individuals (almost 40% of the Australian population), with identity documents relating to approximately 2.22 million Australians being made available on the dark web. The news was swiftly followed by an announcement from Medibank, Australia’s largest private health insurer, of a breach affecting all of its 3.9 million customers.

As part of the Australian Government’s response to the public outcry generated by these breaches, a change to the Privacy Act 1988 (Cth) has been introduced into the Australian Parliament. If passed, this will increase the maximum civil penalties payable under the Act from the current AUD 2.22 million to the greater of the following (see the sketch after this list):

  • AUD 50 million;
  • three times the value of the benefit resulting from the breach; or
  • 30% of the adjusted turnover of the entity in the 12 months prior to the breach.
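For illustration only, here is a minimal sketch of the proposed ‘greater of’ calculation (the function and input names are our own; the Bill’s precise definitions of ‘benefit’ and ‘adjusted turnover’ will govern in practice):

```python
def max_civil_penalty_aud(benefit_from_breach: float,
                          adjusted_annual_turnover: float) -> float:
    """Proposed maximum civil penalty: the greater of a flat AUD 50m,
    3x the benefit resulting from the breach, and 30% of adjusted
    turnover in the 12 months prior to the breach."""
    return max(
        50_000_000,                       # flat AUD 50 million
        3 * benefit_from_breach,          # three times the benefit
        0.30 * adjusted_annual_turnover,  # 30% of 12-month adjusted turnover
    )

# Example: a AUD 10m benefit and AUD 400m adjusted turnover gives a
# maximum penalty of AUD 120m (the turnover limb dominates).
print(max_civil_penalty_aud(10_000_000, 400_000_000))  # 120000000.0
```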

The draft Bill (titled the Privacy Legislation Amendment (Enforcement and Other Measures) Bill 2022) also seeks to strengthen the Office of the Australian Information Commissioner’s powers to request information in order to assess actual or suspected data breaches and changes the extraterritorial reach of the Australian privacy regime. Organisations will no longer be required to collect or hold personal information within Australia in order for the Privacy Act 1988 (Cth) to apply. They must however still be carrying on a business in Australia.

The opposition has indicated its broad support of the measures and it is expected that the Bill will pass without significant amendment.

The new Attorney-General, Mark Dreyfus, has also committed to introducing broader changes to the Privacy Act 1988 (Cth) sooner rather than later, with the Government’s review scheduled to be completed before the end of 2022. This comes after a broad review of the Australian privacy regime was commenced by the previous Federal Government in 2019 but never completed.

Authors: Eliza Saunders, Sarah Birkett, James Clark, Senal Premarathna

Introduction

The benefits of using genetic information for research purposes are clear, especially as the technology underpinning medical research continues to advance at such a rapid pace. Outside of research and clinical development, the number of organisations which use blood and saliva samples and other genetic information for diagnostic and treatment purposes, as well as ancestry research, has exponentially increased.

When an individual provides a genetic sample, whether as part of a medical treatment, a clinical trial or in connection with ancestry research, what regimes are in place to protect his or her privacy?

In this article we examine, by way of example, the differing regimes in place in Australia and the UK.

Australia

When does the privacy regime apply in Australia?

Australia’s Privacy Act 1988 (Cth) expressly includes health information and genetic information in the definition of “Sensitive Information”.  Genetic information is not further defined, however more clarity is provided in respect of “health information”.  This includes:

genetic information about an individual in a form that is, or could be, predictive of the health of the individual or a genetic relative of the individual (with the genetic relative of an individual (the first individual) being another individual who is related to the first individual by blood, including but not limited to a sibling, a parent or a descendant of the first individual).

There is no requirement for information falling within the definition above to also be “personal information” – namely information about an identified individual or an individual who is reasonably identifiable.  The key requirement is that the genetic information must be about an individual. However, when can it be said that genetic information is not “about” an individual?

The answer appears to be that genetic information is per se about an individual (and therefore within the scope of the Privacy Act) if it is associated with information that otherwise identifies an individual i.e. some form of record/label containing identifiers of an individual (and this does not necessarily need to include a name).

Looked at another way, for privacy purposes, unless and until a genetic sample is dissociated from information which could be used to identify a specific individual, it is within the scope of the Privacy Act.

If genetic samples are processed in isolation, without any identifying information, the Privacy Act is unlikely to apply.  At the other end of the scale, the Privacy Act will apply to a genetic sample with the name of the individual affixed.  The grey area is where genetic samples are associated with some information about the individuals who provided those samples, whether or not that information is linked to a specific sample.  Here, it will depend upon the facts and the extent to which it is possible to ascertain the identity of an individual based on all of the information available (including any pre-existing records of the processor).

Who does it apply to?

Any organisation which collects, holds (i.e. has within its possession and control), uses or discloses a record of genetic information falls within the scope of the Privacy Act (although the extent of the compliance requirement varies).

A “record” is defined broadly and includes records captured in documents, electronically or via other devices.  No settled position applies as to whether a genetic sample constitutes a “record” for this purpose, but certainly any data or other information accompanying the sample (and, possibly, generated as a result of that sample such as test results) will qualify.

In the complex ecosystem of medical research this may result in multiple parties being subject to privacy obligations in respect of the same record.  For example, a patient suffering from a rare disease is involved in a clinical trial for a new treatment run by a local clinical trial company on behalf of an Australian research institution. The patient provides written consent to the research institution and, amongst other things, provides blood samples at various stages of the trial.  These blood samples are sent to the UK for testing by an expert facility.  The clinical trial agreement with the patient permits the overseas entity to retain leftover blood samples for research purposes.  Following the conclusion of the trial, the UK facility uses the leftover blood samples for its own and third party studies.  In this case, there are multiple entities which are collecting, holding and otherwise controlling the use of the genetic information provided by the patient; however, Australian privacy laws do not automatically apply to each entity which processes the personal information of Australians.

Research guidelines

Organisations using health information for research purposes in Australia may wish to refer to the so-called “section 95A guidelines” on the collection, use or disclosure of health information published by the National Health and Medical Research Council.

Generally, these guidelines are not binding.  However, organisations wishing to avail themselves of the “permitted health situations” exceptions in the Privacy Act are required to comply with them, other than where consent is used as the basis for processing.  The Office of the Australian Information Commissioner recommends that consent should be informed, specific and voluntarily provided by an individual with the requisite capacity.

In addition to the Privacy Act, organisations must also be aware of the health records laws which operate in several Australian jurisdictions (namely New South Wales, Victoria and the Australian Capital Territory).

United Kingdom

The UK GDPR identifies both ‘genetic data’ and ‘health data’ as ‘special category data’ that merit additional protection in comparison with normal personal data.

This is because the risk-based approach of the UK GDPR provides that the processing of genetic and health data presents a heightened inherent risk to an individual’s fundamental rights and freedoms, including:

  • the freedom of thought, conscience and religion;
  • the right to bodily integrity;
  • the right to respect for private and family life; and
  • freedom from discrimination.

‘Genetic data’ is defined under Art 4(13) UK GDPR as:

personal data relating to the inherited or acquired genetic characteristics of a natural person which give unique information about the physiology or the health of that natural person and which result, in particular, from an analysis of a biological sample from the natural person in question.

Recital 34 further elaborates to state that this definition includes chromosomal, DNA or RNA analysis or any other analysis that would result in equivalent information. As with the position in Australia above, genetic information only constitutes genetic data if it can be linked back to an identifiable individual. However, it is increasingly challenging to determine when genetic information is anonymised – i.e., no longer constitutes personal data – due to technological advances.[1]  In this context, the European Data Protection Board (EDPB), the grouping of EU data protection authorities, has ‘strongly advised’ data controllers to treat genetic data as personal data by default.[2]  Whilst the UK has now left the EU, its laws are inherited from its EU membership and EDPB guidance remains persuasive.

However, at the same time it is important to remember that the unique nature of a person’s genomic data does not inherently make it identifying (and therefore personal data).  A number of factors need to be considered, including the other information and technical means available to the persons processing the data, as well as the context and purposes for which the data is being processed (e.g., is it being processed to create a profile concerning, or take measures or decisions relating to, a specific individual or, on the other hand, is it being processed as part of a much larger dataset to lead to the publication of anonymised research findings?).  ‘Individuation’ (or the ability to single out one person’s data from the data of other persons) can be a factor contributing to the existence of personal data, but is not by itself determinative.

‘Health data’ is defined under Art 4(15) UK GDPR as:

personal data related to the physical or mental health of a natural person, including the provision of health care services, which reveal information about his or her health status.

The ICO clarifies that health data as a concept is broader than information about specific medical conditions, tests or treatment. It can also include any related data that reveals any information regarding the state of an individual’s health such as medical examination data or information on disease risk.

The privacy regime for genetic data in the United Kingdom

The UK GDPR requires a lawful basis to process personal data. It further prohibits processing special category data unless one of the 10 exceptions, referred to as ‘conditions’, applies (see table below).

In addition to the UK GDPR conditions, the Data Protection Act 2018 states that, when using a UK GDPR condition, you must also meet one of the additional conditions in Schedule 1, as follows (a simple lookup sketch follows the table):

UK GDPR Art 9(2) condition – additional DPA 2018 Schedule 1 condition(s):

  • (a) explicit consent – none;
  • (b) employment, social security and social protection – condition 1;
  • (c) vital interests – none;
  • (d) not-for-profit bodies – none;
  • (e) manifestly made public – none;
  • (f) legal claims or judicial acts – none;
  • (g) substantial public interest – one of conditions 6–28;
  • (h) health or social care – condition 2;
  • (i) public health – condition 3;
  • (j) archiving, research or statistical purposes – condition 4.
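To operationalise this mapping, here is a minimal lookup sketch (the dictionary, function name and condition labels are our own illustration, not an official schema; the authoritative text is Art 9(2) UK GDPR and Schedule 1 DPA 2018):

```python
# Illustrative mapping of UK GDPR Art 9(2) conditions to any additional
# DPA 2018 Schedule 1 condition(s) required alongside them.
# None means the table above specifies no additional condition.
ADDITIONAL_DPA_CONDITIONS = {
    "(a) explicit consent": None,
    "(b) employment, social security and social protection": "condition 1",
    "(c) vital interests": None,
    "(d) not-for-profit bodies": None,
    "(e) manifestly made public": None,
    "(f) legal claims or judicial acts": None,
    "(g) substantial public interest": "one of conditions 6-28",
    "(h) health or social care": "condition 2",
    "(i) public health": "condition 3",
    "(j) archiving, research or statistical purposes": "condition 4",
}

def schedule_1_requirement(art_9_condition: str) -> str:
    """Return the additional Schedule 1 condition (if any) for a given
    Art 9(2) condition, per the table above."""
    extra = ADDITIONAL_DPA_CONDITIONS[art_9_condition]
    return extra or "no additional Schedule 1 condition"

print(schedule_1_requirement("(g) substantial public interest"))
# -> one of conditions 6-28
```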

In any case, Art 22(4) UK GDPR prohibits solely automated decision-making based on special category data unless you have either explicit consent or meet the substantial public interest condition.

What else must be done?

You must carry out a data protection impact assessment (DPIA) for any type of high-risk data processing. You are therefore likely required to carry out a DPIA if you plan on processing special category data (a rough screening sketch follows the list):

  • on a large scale;
  • to determine access to a product, service, opportunity or benefit; or
  • which includes genetic data.
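As a rough screening aid only, here is a minimal sketch of the triggers above (the function and parameter names are our own; this does not replace the general obligation to carry out a DPIA for any high-risk processing):

```python
def dpia_likely_required(special_category: bool,
                         large_scale: bool,
                         determines_access: bool,
                         includes_genetic_data: bool) -> bool:
    """Screen for the ICO triggers listed above: a DPIA is likely
    required when special category data is processed on a large scale,
    used to determine access to a product, service, opportunity or
    benefit, or includes genetic data."""
    return special_category and (
        large_scale or determines_access or includes_genetic_data
    )

# Example: genetic data processed at small scale still triggers the check.
print(dpia_likely_required(True, False, False, True))  # True
```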

Other considerations recommended by the ICO include:

  • data minimisation – ensuring the data collected and retained is kept to the minimum required amount;
  • security measures – ensuring the appropriate level of security is in place for the sensitive data;
  • transparency – ensuring the special categories of data are included in a privacy notice;
  • rights related to automated decision-making – considering whether automated decision-making might have a ‘legal or similarly significant effect’ on the individual and taking the appropriate steps;
  • documentation – ensuring accurate records documenting the categories of data are maintained and considering whether an ‘appropriate policy document’ is required under the DPA 2018;
  • data protection officer – considering whether a data protection officer must be appointed; and
  • EU representatives – considering whether an EU representative must be designated.

 

[1] https://www.phgfoundation.org/media/94/download/gdpr-and-genomic-data-exec-summary.pdf?v=1&inline=1

[2] ‘EDPB Document on response to the request from the European Commission for clarifications on the consistent application of the GDPR, focusing on health research.’ European Data Protection Board 2 February 2021.

Google LLC has agreed to pay AUD 60 million to Australia’s competition regulator, the Australian Competition and Consumer Commission (ACCC), after it was held that Google breached the Australian Consumer Law (ACL) regarding its collection of location data.

In October 2019, the ACCC commenced proceedings alleging that Google had engaged in misleading and deceptive conduct and made false or misleading representations to consumers between January 2017 and December 2018 in respect of the way location data was collected from users of Android mobile devices.

Two settings on the Android mobile devices were central to the ACCC’s case: “Web & App Activity” and “Location History”. When setting up a Google Account on an Android device, Web & App Activity was defaulted to “on” and Location History was defaulted to “off”.  These default settings prevented Google from adding the user’s movements to their “timeline,” namely the visualisation of the user’s daily travels.  However, it did not prevent Google from collecting other location markers when the user was using Google Search and Google Maps.  This location data was used by Google to personalise advertisements for users. This issue is estimated to have impacted around 1.3 million Australians.

In April 2021 Australia’s Federal Court found in favour of the ACCC in respect of some, but not all, of the regulator’s claims.  Several examples of misleading or deceptive conduct were upheld, as well as one count each of making a false or misleading representation and engaging in conduct liable to mislead the public regarding goods. On 12 August 2022, it was announced that Google would pay AUD 60 million for these breaches, after a joint submission to the court was made by the ACCC and Google nominating this amount as an appropriate penalty.

This case has attracted headlines in Australia not just due to the size of the penalty but also the fact that it involves a regulator other than Australia’s privacy regulator, the Office of the Australian Information Commissioner, taking action in respect of Google’s privacy related practices.  The emergence of an active ACCC which is willing to take on high profile companies over privacy issues is an interesting development for the Australian market, particularly given that changes to the Australian privacy regime are yet to be made following the Federal Government’s Privacy Act review and, currently, the fines which can be imposed for breach of Australian consumer laws are significantly higher than the maximum penalties payable under the Privacy Act.

Authors: Alex Horder, Anthony Lloyd and Edmond Lau 

What is the CDR Sandbox?

Following the expansion of the Consumer Data Right (CDR) regime last year to a wider range of organisations, the ACCC has now released the ‘CDR Sandbox’, a free tool that lets CDR participants test their proposed CDR compliance solutions in a virtual environment that mirrors the live CDR ecosystem.

The CDR Sandbox is the latest in a series of tools released by the ACCC which, along with the ACCC’s ‘mock solutions code library’ and ‘Conformance Test Suite’, enables participants’ products to connect with existing mock solutions and other participants in a secure testing environment, to ensure the efficacy of data sharing between those participants.

What does this mean for businesses?

The CDR Sandbox’s release means that businesses that are already CDR participants, or are interested in becoming CDR participants, are more readily able to test and de-risk their proposed CDR solutions prior to release. The CDR Sandbox also provides an opportunity for businesses to see how their specific applications interact within the broader CDR framework – including between data holders and recipients, and the CDR register.

This boosts businesses’ visibility over the data flows that will likely occur due to participation in the CDR regime generally, and will enable businesses to understand what personal information is being handled, by whom, and where. By making it easier to test proposed CDR solutions, the CDR Sandbox helps to achieve the ACCC’s goal of engendering mass cross-sector adoption.

How can DLA Piper help?

With our previous experience assisting banking sector clients with preparing for and entering the CDR regime (under Open Banking), DLA Piper is well-placed to assist clients in all relevant sectors in understanding how CDR should be implemented, and the legal, technical, and regulatory challenges that come with it. Please contact us for more information as to how we can assist.

Authors: Sarah Birkett and Alex Moore 

The use of CCTV systems to collect biometric information from individuals in Australia is attracting headlines. The issue relates not to the use of CCTV itself, but rather the collection of biometric information (i.e. electronic copies of faces, fingerprints, voices) via CCTV. Organisations, including retailers, may collect biometric information via CCTV for a variety of reasons, including to build profiles of the individuals entering their stores, identify returning shoppers or to identify specific individuals that have previously been removed from their premises.

Last year the Australian privacy regulator, the Office of the Australian Information Commissioner (OAIC), made a determination against a multinational convenience store operator regarding its large scale collection of sensitive biometric information. The organisation captured images of consumer faces via in-store tablets provided for customers to complete surveys regarding their in-store experience. The OAIC determined that this collection was not reasonably necessary for the purpose of improving and understanding customers’ in-store experience, and that the organisation had collected the information without consent. This amounted to two breaches of the Australian Privacy Principles within the Privacy Act 1988 (Cth) (Privacy Act).

More recently, consumer advocate group Choice has announced that, following an investigation into practices in the Australian retail sector, it will refer major national retailers Kmart, Bunnings and the Good Guys to the OAIC regarding the use of facial recognition technology in their in-store CCTV systems. Choice considers that this information is being collected without sufficient notice to customers and that the information collected is “disproportionate” to the legitimate business functions of the retailers in question. One of the retailers in question, the Good Guys, commented that facial recognition was used for security and theft prevention, and also for managing/improving customer experiences.

The OAIC initially responded to Choice’s referral, stating that it would consider the information provided, and noting that retailers need to balance business needs with legal compliance and community expectations and attitudes. In respect of consumer attitudes, we note that a 2020 study commissioned by the OAIC found that a majority of Australians are uncomfortable with their biometric information being collected in retail stores. The OAIC’s full statement is available here.

The OAIC has now opened investigations into both Bunnings’ and Kmart’s use of facial recognition technology (with the Good Guys pausing their use of the technology). Whilst the outcome of these investigations remains to be seen, it is clear that the existing framework in the Privacy Act regarding sensitive personal information applies to biometric information and should be applied carefully.

Hong Kong is following other jurisdictions, including Mainland China, Singapore and the UK, in proposing to enhance cybersecurity obligations on the IT systems of those operating critical infrastructure (“CI”). While the proposed new law, tentatively entitled the Protection of Critical Infrastructure (Computer System) Bill (the “proposed legislation”), is still at an early stage and subject to change, it is sensible for those organisations potentially caught by these additional cybersecurity obligations – and their service providers – to start planning. To this end, below is a practical guide to the proposed legislation.

  1. What is the primary goal of the proposed legislation?

The proposed legislation, as set out in the paper submitted by the Hong Kong Government to the Legislative Council Panel on Security on 25 June 2024, aims to enhance the security of Hong Kong’s CIs that are necessary to maintain  “normal functioning” of Hong Kong society and people’s lives, by minimising the chance of disruption to, or compromise of, essential services by cyberattacks.

  2. Who and what will be captured by the proposed legislation?

The proposed legislation would regulate only CI operators (“CIOs”) in respect of their critical computer systems (“CCSs”). Similar to the helpful approach in Mainland China, both CIOs and CCSs will be expressly designated by a new Commissioner’s Office to be set up (or, as explained in Question 6 below, by the Designated Authorities for certain groups of organisations). This will ultimately remove uncertainty around whether or not a given organisation is a CIO, and which of their systems will fall within the CCS framework. However, until such designations are made by the relevant authorities, it does leave significant uncertainty for organisations that may not obviously fall within the definition, especially technology companies.

Designation of CIOs

Under the proposed legislation, an organisation would be designated as a CIO if it were deemed responsible for operating an infrastructure that the Commissioner’s Office determines to be a CI, taking into account the organisation’s level of control over the infrastructure. It is proposed that CIs cover the following two categories:

  • infrastructures for delivering essential services in Hong Kong, i.e. infrastructures of the following eight sectors: energy, information technology, banking and financial services, land transport, air transport, maritime, healthcare services, and communications and broadcasting (“Essential Service Sectors”); and
  • other infrastructures for maintaining important societal and economic activities, e.g., major sports and performance venues, research and development parks, etc.

When deciding whether an infrastructure within the scope of the two categories above constitutes a CI, the Commissioner’s Office would take into account:

  • the implications on essential services and important societal and economic activities in Hong Kong in case of damage, loss of functionality, or data leakage in the infrastructure concerned;
  • the level of dependence on information technology of the infrastructure concerned; and
  • the importance of the data controlled by the infrastructure concerned. 

The Government also emphasised that CIOs will mostly be large organisations, and the legislation will not affect small and medium enterprises or the general public.

The list of the designated CIOs will not be made public to prevent the CIs from becoming targets of cyberattack.

Designation of CCSs

The proposed legislation would only require CIOs to take responsibility for securing the expressly designated CCSs. Systems operated by CIOs but not designated as CCSs would not be regulated by the proposed legislation.

The Commissioner’s Office would only designate as CCSs the computer systems which:

  • are relevant to the provision of essential services or the core functions of computer systems; or
  • will seriously impact the normal functioning of the CIs if interrupted or damaged.

Importantly, computer systems physically located outside of Hong Kong may also be designated as CCSs.

  3. Would organisations have opportunities to object to CIO or CCS designations?

Yes. Under the proposed legislation, before making CIO or CCS designations, the Commissioner’s Office will communicate with organisations that are likely to be designated, with a view to reaching a consensus on the designations. This is helpful, but adds to the recommendation that those potentially caught as a CIO should start planning now to be ready to put forward a clear, reasoned view on whether or not they – and/or all of their systems – should be designated.

After a CIO or CCS designation is made, any operator who disagrees with such designation can appeal before a board comprising computer and information security professionals and legal professionals, etc.

  4. What are the obligations of CIOs?

Statutory obligations proposed to be imposed on CIOs under the proposed legislation are classified into three categories:

  • Organisational:
    • provide and maintain an address and office in Hong Kong (and report any subsequent changes);
    • report any changes in the ownership and operatorship of their CIs to the Commissioner’s Office;
    • set up a computer system security management unit, supervised by a dedicated supervisor of the CIO;
  • Preventive:
    • inform the Commissioner’s Office of material changes to their CCSs, including those changes to design, configuration, security, operation, etc.;
    • formulate and implement a computer system security management plan and submit the plan to the Commissioner’s Office;
    • conduct a computer system security risk assessment at least once every year and submit the report;
    • conduct a computer system security audit at least once every two years and submit the report;
    • adopt measures to ensure that their CCSs still comply with the relevant statutory obligations even when third party services providers are employed;
  • Incident reporting and response:
    • participate in a computer system security drill organised by the Commissioner’s Office at least once every two years;
    • formulate an emergency response plan and submit the plan; and
    • notify the Commissioner’s Office of the occurrence of computer system security incidents in respect of CCSs within (a) 2 hours after becoming aware of serious incidents and (b) 24 hours after becoming aware of other incidents (see the sketch below).
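For illustration, here is a minimal sketch of the proposed notification windows (the function name and the serious/other flag are our own; the threshold for a “serious” incident is yet to be settled, as noted later in this guide):

```python
from datetime import datetime, timedelta

def notification_deadline(became_aware: datetime, serious: bool) -> datetime:
    """Deadline for notifying the Commissioner's Office of a CCS security
    incident under the proposed legislation: 2 hours after becoming aware
    of a serious incident, 24 hours for other incidents."""
    window = timedelta(hours=2) if serious else timedelta(hours=24)
    return became_aware + window

aware_at = datetime(2024, 7, 1, 9, 30)
print(notification_deadline(aware_at, serious=True))   # 2024-07-01 11:30:00
print(notification_deadline(aware_at, serious=False))  # 2024-07-02 09:30:00
```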
  5. What would be the offences and penalties under the proposed legislation?

The offences under the proposed legislation include CIOs’ non-compliance with:

  • statutory obligations;
  • written directions issued by the Commissioner’s Office;
  • investigative requests of the Commissioner’s Office; and
  • requests of the Commissioner’s Office for relevant information relating to a CI.

The penalties for these offences would consist exclusively of fines. The level of fines would be determined by the courts, with maximum fines ranging from HK$500,000 to HK$5 million. For certain offences, persistent non-compliance would result in additional daily fines of HK$50,000 or HK$100,000.

It is noteworthy that a CIO will still be held liable for the non-compliance with its statutory obligations if the non-compliance is caused by a third-party service provider. As such, service providers should also start planning now as to whether or not their customer base may be designated CIOs and, if so, what consequences this may have on contractual service obligations, incident notification obligations, security standards/specifications, SLAs, powers of investigation/inspection (including by regulators) and liability/indemnity provisions (including financial caps and exclusions). We anticipate CIOs will expect higher standards from their service providers in advance of the new regulations being introduced.

  6. Which authorities would enforce the proposed legislation, and what would their powers be?

Commissioner’s Office

A Commissioner’s Office is proposed to be set up under the Security Bureau to implement the proposed legislation, headed by a Commissioner appointed by the Chief Executive. Its powers would include:

  • designating CIOs and CCSs;
  • establishing a Code of Practice for CIOs;
  • monitoring computer system security threats against CCSs;
  • assisting CIOs in responding to computer system security incidents;
  • investigating and following up on non-compliance of CIOs;
  • issuing written instructions to CIOs to plug potential security loopholes; and
  • coordinating with various government departments in formulating policies and guidelines and handling incidents.

Among these powers, the most significant might be the investigative powers granted to the Commissioner’s Office. Specifically, in respect of investigations on security incidents, the Commissioner’s Office would have, among others, the powers to:

  • question and request information from CIOs;
  • direct CIOs to take remedial actions; and
  • check the CCSs owned or controlled by CIOs with their consent or with a magistrate’s warrant.

In respect of investigations on offences, it would have the powers to:

  • question and request information from any person who is believed to have relevant information in his or her custody; and
  • enter premises and take possession of any relevant documents with a magistrate’s warrant.

From a service provider perspective, these powers will likely extend – either directly or more likely via contractual flow down – from CIOs to their service providers. As such, again service providers may need to revisit their customer contracts in this regard.

Designated Authorities

Existing regulators of certain Essential Service Sectors which already have a comprehensive regulatory framework, such as a licensing regime in the financial services and telecoms sectors, may be designated as designated authorities (“Designated Authorities”) under the proposed legislation. The Designated Authorities would be responsible for designating CIOs (and CCSs) among the groups of organisations under their supervision and for monitoring such CIOs’ compliance with the organisational and preventive obligations. It is currently proposed to designate the Monetary Authority and the Communications Authority as the Designated Authorities for the banking and financial services sector and the communications and broadcasting sector respectively. The Commissioner’s Office, on the other hand, would remain responsible for overseeing the incident reporting and response obligations of, and retain the power to issue written directions to, such CIOs. It is hoped that the interaction between the Designated Authorities and the Commissioner’s Office will be clearly defined when it comes to practicalities before the new framework is finalised.

  7. How does the proposed legislation compare to critical infrastructure cybersecurity laws in other jurisdictions?

In formulating the proposed legislation, the government made reference to the legislation of other jurisdictions on critical infrastructure protection, including the United Kingdom, Australia, the United States, the European Union, Singapore, Mainland China and Macao SAR. For instance, the designation-based framework envisaged by the legislation mirrors Australia’s regulatory approach to systems of national significance under the Security of Critical Infrastructure Act 2018. Moreover, many obligations of the CIOs, such as those in respect of security risk assessments, audits and drills, have corresponding counterparts in the cybersecurity legislation of jurisdictions like Mainland China and Singapore. The investigative powers of the regulator to request information, access documents and enter premises can also be found in foreign legislation, including the UK’s Network and Information Systems Regulations 2018 and Singapore’s Cybersecurity Act 2018.

There are, however, technical nuances between similar mechanisms under the proposed legislation and existing laws in other jurisdictions. For instance, the proposed legislation requires organisations to report non-serious security incidents within 24 hours of becoming aware of them, providing greater flexibility compared to Singapore’s requirement of reporting all security incidents affecting critical information infrastructure within two hours of awareness.  

  8. What are the next steps for the proposed legislation?

The proposed legislation is expected to be tabled in the Legislative Council by the end of 2024. Once passed, the Commissioner’s Office will be established within a year, and the law will come into effect around six months thereafter. This, therefore, gives a critical planning period until mid-2026 for organisations which may be designated CIOs and their service providers.

  9. What must organisations do in light of the proposed legislation?

It is hoped that the uncertainty around some critical issues, including the scope of the Essential Service Sectors (particularly the information technology sector), the specific criteria to distinguish CIs among the Essential Service Sectors, and the threshold for “serious” security incidents, will be resolved as the proposed legislation passes through the public consultation and the usual legislative process.

Organisations should closely monitor the development of the proposed legislation; develop an internal position on their designation as a CIO and of their systems as CCSs (or, in the case of service providers, their customers’ designation); prepare to advocate/lobby for their position once the designation communications commence; and monitor and update their cybersecurity measures, procedures and contracts.

Authors: Alex Moore (Associate, Auckland) and Nick Valentine (Partner, Auckland) 

On 30 March 2023, the Digital Identity Services Trust Framework Bill (the Bill) passed its third and final reading in New Zealand’s House of Representatives, with cross-party support. The Digital Identity Services Trust Framework Act will come into effect on 1 July 2024 (at the latest).  It is a ‘flagship initiative’ under the current Government’s Digital Strategy for Aotearoa New Zealand, and will establish a voluntary accreditation scheme for digital identity service providers, similar to existing frameworks in the United Kingdom, Canada and Australia.

What is it?

The Framework is similar to the equivalent scheme in Australia – digital identification verification service providers who opt-in will be required to adhere to a set of trust framework rules (the TF Rules) and in return will be granted the right to use a mark accrediting their services. Individuals and businesses that use the identity verification services are not required to be accredited.

Again, in alignment with the Australian framework, the Bill establishes two administrative bodies: the Trust Framework Board (the TF Board) and the Trust Framework Authority (the TF Authority).  The TF Board will take on governance responsibilities for the framework, including providing guidance about the framework, monitoring its performance and advising the Minister on making and updating the trust framework rules. The TF Authority will be responsible for the day-to-day operations of the framework, including assessing accreditations, investigating complaints, enforcing the TF Rules, and granting remedies for breaches.

While the Bill represents an important building block of the Government’s Digital Strategy, the Bill itself does not establish the TF Rules. The TF Rules will instead be set out in Secondary Legislation made by the Minister and will, at a minimum, cover requirements for identification management, privacy and confidentiality, security and risk, information and data management, and sharing and facilitation.

What do we think?

Although the Bill is a step in the right direction for encouraging public trust of digital services, it does very little to grapple with the bigger issues of the online world, such as rights to digital identity and the data associated with an individual’s online presence and interactions.

In many ways, the Bill is reflective of the slow and usually toothless approach to digital governance in New Zealand to date. New Zealand is often playing catch up when it comes to regulating digital technologies and, given it has taken 18 months to get to this stage without actually establishing any substantive rules for the provision of secure and trusted digital identity services, the final passage of the Bill through the House feels a little underwhelming, particularly as the scheme is voluntary, largely based on the equivalent Australian framework, and was developed with the benefit of learning from similar frameworks in the United Kingdom and Canada.

What’s unique?

Despite its clear Commonwealth influences, the Bill does introduce an important element which is unique to Aotearoa – the need to consider te ao Māori (broadly, the Māori worldview including tikanga Māori – Māori customs and protocols) approaches to identity when developing the TF Rules. The Bill establishes a Māori Advisory Group which the TF Board will be required to consult with prior to advising the Minister on the making of TF Rules and the TF Board must also include members “with expert knowledge of te ao Māori approaches to identity.” These consultation and participation requirements are intended to facilitate equitable Māori participation in the digital environment and recognise the Government’s commitment to the principle of partnership under te Tiriti o Waitangi (the founding document of colonial New Zealand).

In a rather ironic twist, the legislative process itself became the victim of authenticity issues in the online world as a result of online misinformation campaigns during the height of the COVID-19 pandemic. Of the roughly 4,500 public submissions on the Bill, around 4,050 were received during the last two days of the six-week public consultation period, including 3,600 submissions in the final three hours. Parliamentary advisers attributed this influx to “misinformation campaigns on social media that caused many submitters to believe that the Bill related to COVID-19 vaccination passes.” Perhaps this incident was evidence enough that New Zealand needs to take a more proactive approach to regulating the digital environment, as the Bill ended up attracting cross-party support.

What’s next?

While the Bill’s passage itself is nothing to write home about, it will be interesting to see how the framework grapples with te ao Māori perspectives on identity in practice. Hopefully, the cross-party support this Bill garnered will energise the Government to tackle some of the bigger digital rights and privacy issues we are currently facing, both nationally and globally.

The following day, then-DC Attorney General Karl Racine announced a similar settlement agreement. In the two settlements, Google agreed to pay Indiana and the District of Columbia $29.5 million, collectively ($20 million and $9.5 million, respectively). These settlements follow similar settlements last year with 40 US state attorneys general and with Australian regulators.

The settlements highlight government expectations that companies obtain proper consents, including robust disclosures of data practices, for sensitive personal information such as location information.

Regulatory and litigation history

Google provides several apps and platforms that collect user location information, particularly from mobile devices, such as through Google Search and Google Maps. Google has used this information to support its business operations in several ways, including by disclosing user location information to other businesses, e.g., to learn how digital advertising can encourage people to visit brick-and-mortar stores. Following news reports in 2018, state attorneys general, including Attorneys General Rokita and Racine, alleged that Google collected location information from users without their consent, including by misleading users to falsely believe that certain settings limited location data collection.

These allegations included:

  • Deceiving consumers regarding their ability to protect their privacy through Google Account Settings
  • Misrepresenting and omitting material facts regarding the Location History and Web & App Activity Settings
  • Misrepresenting and/or omitting material facts regarding consumers’ ability to control their privacy through Google Account Settings
  • Misrepresenting and omitting material facts regarding the Google Ad Personalization Setting
  • Deceiving consumers regarding their ability to protect their privacy through device settings and
  • Deploying deceptive practices that undermine consumers’ ability to make informed choices about their data, including dark patterns.

Key takeaways

Pursuant to the settlements, in addition to the payments, the company must make prominent disclosures about its data practices prior to obtaining consent to collect location information, provide users with additional account controls, and introduce limits to its data use and retention practices. Certain aspects of the settlements deserve particular attention:

  • The settlement requires Google to issue notices to users who allow certain location tracking settings through Google services or devices, including via pop-up notifications and email, that disclose whether their location information is being collected and provide instructions on how to limit collection and delete collected location information. Google is also required to notify users via email of any material changes in its privacy policy about the collection, use, and retention of user location information.
  • Google must establish and maintain a “location technologies” webpage that discloses Google’s location data policies and practices as well as how users can limit collection of, and delete collected, location information. Google must also provide a hyperlink to this webpage, in its privacy policy, in the account creation flow, and whenever users enable or are prompted to enable a location-related account setting while using a Google product.
  • The settlement requires Google to implement more specific language in a few places:
    • Settings webpage, about location information: “Location info is saved and used based on your settings. Learn more.”
    • Location technologies webpage, about ads: That users cannot prevent the use of location information in personalized ads across services and devices, based on user activity on Google services, including Google Search, YouTube, and websites and apps that partner with Google to show ads.
  • Google may only share a user’s precise location information with a third-party advertiser with that user’s express affirmative consent for use and sharing by that third party.
  • Google must conduct internal privacy impact assessments before implementing any material changes of how certain settings pages impact precise location information or how Google shares users’ precise location information related to such settings.

While there are many notable aspects to these settlements, it is also significant that they come as many states are beginning to implement new privacy laws and regulations, which include increased business obligations for the collection, use, and disclosure of sensitive personal information, such as location information.

See the Indiana AG and District of Columbia AG press releases here (IN) and here (DC).  Find out more about the implications of these developments by contacting either of the authors.

Author: James Clark

On 19 December 2022 the UK government’s first data adequacy decision of the post-Brexit era came into effect. Under the Data Protection (Adequacy) (Republic of Korea) Regulations 2022, the UK formally determined that the Republic of Korea provides an adequate level of data protection for the purposes of the UK GDPR. Consequently, UK businesses can now freely transfer personal data to recipients in South Korea without needing to take any additional steps (such as entering into standard contractual clauses or carrying out transfer impact assessments).

The UK’s decision was expected, as the European Commission had already granted the Republic of Korea an adequacy decision under the EU GDPR back in December 2021. However, the UK’s decision – which it is referring to as a ‘data bridge’ – is broader than the EU decision, as it extends to personal data that benefits from exemptions from South Korea’s primary data protection law, the Korean Personal Information Protection Act (PIPA).

How did we get here?

Under the GDPR, transfers of personal data to ‘third countries’ are prohibited, unless one of the conditions set out in Chapter V of the GDPR is met. The most favourable condition is that an ‘adequacy decision’ exists for the third country (under Article 45 GDPR), which means that the third country is deemed to provide an equivalent level of data protection (taking into account factors such as the rule of law and fundamental privacy safeguards, in addition to personal data protection laws). Where an adequacy decision exists, personal data can move freely to the third country without any additional steps being required.

Prior to Brexit, the UK, in common with all other Member States, relied on the European Commission to determine adequacy decisions for third countries. Post-Brexit, when the UK created its own parallel version of the GDPR, the power to determine adequacy decisions was transferred to the Secretary of State (at the same time as the existing EU adequacy decisions were grandfathered into UK law on a temporary basis). The Korea data bridge is the first adequacy decision made by the Secretary of State under the UK GDPR.

What does the decision cover?

The data bridge covers any transfer of personal data to a person in the Republic of Korea who is subject to the PIPA. The PIPA is a general and comprehensive data protection statute which is broadly analogous to the GDPR.

Unlike the EU decision, the UK data bridge also encompasses transfers of personal credit information to persons in the Republic of Korea who are subject to the Use and Protection of Credit Information Act, which provides specific rules applicable to organisations in the financial sector when they process personal credit information.

What can we expect from future data bridges?

The UK government has indicated that it has ambitious plans for data bridges. It believes that “global networks of personal data flows are critical to the UK’s prosperity and modern way of life”, and it wants to use data bridges as a mechanism to “remov[e] unnecessary barriers to cross-border data flows”. Under its ‘Data: A New Direction’ strategy, the UK has selected the following countries as its ‘top priorities’ for an adequacy decision:

  • Australia;
  • Colombia;
  • Dubai International Financial Centre;
  • Singapore; and
  • the United States of America.

In addition, the following countries represent the UK’s longer-term priorities:

  • India;
  • Brazil;
  • Indonesia; and
  • Kenya.

Given that the EU is on the cusp of securing a partial adequacy decision for the United States through its ‘EU-US Data Privacy Framework’, the UK’s next steps for that country – which is so crucial when it comes to the IT infrastructure of UK businesses – will be closely watched. In particular, it will be interesting to see whether the UK puts in place a data bridge with the same scope as the EU deal, or whether the UK tries to do something more ambitious – a data bridge with a broader scope, as has been concluded with Korea – on either an immediate or longer-term basis.

Finally, now that the UK is no longer subject to the jurisdiction of the Court of Justice of the European Union – something that wasn’t the case when the Schrems II judgment was handed down in 2020 – it is important to note that any challenges to a UK-US data bridge (or any other UK data bridge, for that matter) by privacy activists will be conducted separately from challenges to the EU-US decision, and will proceed through UK, rather than European, courts.

By: Andy Serwin ‖ Ross McKean ‖ Carolyn Bigg

In response to the heightened geo-political tensions resulting from Russia’s invasion of Ukraine and the package of economic sanctions imposed by the West, the risk of cyber-attacks by Russia and her proxies is high.  We may see an increase in economic extortion to generate revenue to compensate for economic impacts.  We may also see retaliatory attacks that are not necessarily revenue generating, but instead are focused on inflicting widespread or targeted economic harm and other disruption.  Organisations based in countries that have imposed sanctions and are supporting the defence of Ukraine are at heightened risk.

Companies should carry out a risk assessment to determine the likelihood of a cyber-attack, whether for economic, espionage or other retaliatory reasons.  Examples of relevant considerations include:

  • the significance of the company’s industry to Russian interests;
  • the company’s ties to government actors;
  • the company’s criticality to a nation’s economy – i.e. does the company form part of critical national infrastructure; and
  • the awareness and recognition of the company’s brand – would an attack make a big splash of publicity?

Companies that are in the supply chain of, or reliant on, heightened-risk companies should consider both the risk of being used as a launching pad for an attack and the risk of an attack on their own supply chain.

The following considerations should help to harden your organisation’s posture and reduce the impact of a cyber event:

  1. Review your Incident Response Plan, Crisis Management Plan and Business Continuity and DR plans and make sure they address all current threats. Share a copy of the plans with key team members to remind them of their responsibilities.  Ensure team member contact details are up to date (including back-up contact details in the event your main systems are unavailable).  Ensure the team members each have a copy of the plans available off the corporate system (again in case it is unavailable).  Ensure staff have activated back-up email and IT resources if these are offered to them.  An attack may come at any time of the day or night – so ensure that it is clear in the plan who has authority to make emergency out-of-hours decisions, for example as necessary to protect the wider global network.
  2. Ensure you have third party support available if you need it. Make sure you have an incident response firm and breach counsel law firm lined up to help at short notice.
  3. Check your cyber insurance policy. Check notification requirements.  Make sure that your preferred third party advisors (incident responders, law firm etc) are covered by your policy.
  4. Ensure you have access to appropriate threat intelligence. Various public resources are available with information on vulnerabilities and cyber-attacks.  Also ensure you have contacts with cyber security intelligence services.  See the list of useful links and resources below.
  5. Ensure that all software and firmware across your network (including BYOD) are patched with the latest information security patches to close down known vulnerabilities that could otherwise be exploited. For known unpatched software and firmware, consider other risk mitigations, including ring-fencing these systems so that any infection cannot spread and/or decommissioning or temporarily decommissioning these systems while the cyber threat remains high.
  6. Ensure effective and secure backups are in place and are operating correctly so that in the event of an attack you can recover your data quickly. Beware that threat actors will often seek out and encrypt back-ups – so ideally back-ups should be securely ring-fenced.  Make sure that your back-up strategy and policies also include key access data such as decryption keys and access tokens – as well as the underlying data – so you can recover your data quickly.
  7. Ensure you can ring-fence and decommission infected parts of your network. This may not be easy to do if your network has been designed as a “flat” network with open access once an authorised user is in the network.  Ideally you should have the ability to ring-fence parts of your network to prevent wider infection and contain malware and threat actors.
  8. Ensure you have full visibility of any third parties with access to your network. You may have good security – your counterparties may not.  Threat actors often attack counterparties to “pivot” their attack into customers / counterparties of the initial victim organisation.  Given the current heightened threat, it may be timely to revisit third party access and either suspend or restrict certain third party access until the cyber threat is lower.
  9. Ensure good password hygiene. Ensure passwords are regularly changed and meet minimum length and complexity requirements (e.g. by forced password reset after a set period).  Remind staff that they should never use the same passwords for access to business and personal resources.  Encourage staff to use passphrases rather than single words (see the sketch after this list).
  10. Ensure good access credential management. Ensure that access credentials have been revoked for all leavers and for dormant accounts that have not recently been accessed. For accounts with wider authorisations such as admin and privileged accounts, consider tighter security such as requiring multi-factor authentication.
  11. Ensure good antivirus and endpoint security hygiene. Ensure antivirus software is up-to-date with the very latest vulnerabilities and malware signatures. Ensure it is deployed and active across all infrastructure, applications and devices on your network.  Ensure that all devices connected to your corporate network are securely configured.
  12. Ensure logs of system and information access are being recorded and are available in the event of an intrusion, to facilitate investigations and to help identify the extent of an attack.
  13. Ensure good security hygiene for all internet-facing resources. Ensure multi-factor authentication for any public facing applications and resources (or equivalent protection). Perform regular vulnerability scans of your organisation’s internet footprint and ensure patching is up-to-date.
  14. Remind staff to be vigilant – particularly of phishing attacks. Phishing remains one of the most popular forms of attack even for sophisticated nation state actors. Remind staff how to spot and report phishing emails and who to contact if they click on links in or respond to suspected phishing emails.  Encourage staff to report any other unusual behaviour such as unusual system error messages.
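As a minimal illustration of the password hygiene point (item 9 above), here is a sketch of a simple length-and-complexity check (the thresholds are assumptions for the example, not a recommendation; align any real policy with your organisation’s standards and current guidance):

```python
import re

MIN_LENGTH = 14  # illustrative threshold only; take this from your own policy

def meets_policy(password: str) -> bool:
    """Check a candidate password against a simple policy: a minimum
    length plus at least two of four character classes (lower, upper,
    digit, symbol). Long passphrases pass the length test naturally."""
    if len(password) < MIN_LENGTH:
        return False
    classes = [
        bool(re.search(r"[a-z]", password)),
        bool(re.search(r"[A-Z]", password)),
        bool(re.search(r"[0-9]", password)),
        bool(re.search(r"[^A-Za-z0-9]", password)),
    ]
    return sum(classes) >= 2

print(meets_policy("correct-horse-battery-staple"))  # True (long passphrase)
print(meets_policy("P@ssw0rd"))                      # False (too short)
```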

Useful links and resources:

This alert highlights only some of the key issues raised by the increased geo-political tension and heightened threat of cyber-attacks. It is not intended to be comprehensive, and it does not constitute legal advice.

Please contact any member of the DLA Piper Cybersecurity Team, or your DLA Piper relationship contact if you would like more specific advice, whether on cybersecurity matters or any wider business issues.