Data security developments under the General Data Protection Regulation

Kuan Hon, a consultant lawyer at Pinsent Masons and a senior researcher on cloud law projects at Queen Mary University of London, explores the latest data security developments under the General Data Protection Regulation. This blog is written in her personal capacity only.

There is political pressure to finalise the draft General Data Protection Regulation (GDPR) before 2015 is out. Proposed by the European Commission in 2012 to modernise the EU Data Protection Directive (DPD), the GDPR would spell major changes regarding security as well as other matters. Its final text won’t be known until it is agreed in “trilogue” between the EU institutions, and significant differences between the European Parliament (Parliament) and the Council of Ministers remain in certain areas. However, its overall shape seems reasonably clear. The GDPR would become law directly in all Member States from its effective date – probably two years after its adoption. UK Deputy Information Commissioner David Smith predicts June 2018, or end-2018 as “a more realistic prospect”. 2018 may seem distant, but it would behove organisations to start thinking about the GDPR’s impact now, so that upon its adoption they are positioned to set in train the substantial legal, operational and risk-management changes that will be required.

What is the current position on security under the Data Protection Directive?

Currently, Member States must oblige controllers to implement “appropriate technical and organizational measures to protect personal data against accidental or unlawful destruction or accidental loss, alteration, unauthorized disclosure or access…and against all other unlawful forms of processing”, and “Any person acting under the authority of the controller or of the processor, including the processor himself, who has access to personal data must not process them except on instructions from the controller, unless he is required to do so by law” (Arts. 16-17 DPD). In information security parlance, these amount to measures at least to protect confidentiality and integrity, and indirectly availability. Controllers must ensure a level of security “appropriate to the risks” of the processing and nature of the data, in light of the state of the art and cost – effectively a risk-based approach.

A controller’s use of a processor to process personal data on its behalf falls under the DPD’s “security” provisions: the controller must choose processors providing “sufficient guarantees” regarding security measures, implement a contract requiring the processor to comply with equivalent security requirements, and (in the UK) take “reasonable steps” to ensure compliance with those measures. Member States’ national DPD implementations vary considerably. Unlike the UK Data Protection Act’s general security obligation, some other Member States impose detailed security obligations, requiring measures such as technical authentication, authorisation, backups and/or logging.

What is likely to change under the GDPR, what impact will this have on data controllers and processors, and how will it affect cloud providers and users?

One of the GDPR’s main aims is to harmonise data protection laws across the EU. It is uncertain whether this aim will be achieved, as some provisions explicitly leave Member States room for manoeuvre, while others are ambiguous, enabling Member States to interpret them in different ways. To flesh out certain details, such as in relation to security measures (e.g. what is “state of the art”) and data breaches, we must also await delegated or implementing acts by the Commission, or guidelines or recommendations by the proposed European Data Protection Board (which would replace the current Article 29 Data Protection Working Party). Their respective remits and powers remain to be agreed.

The biggest changes affecting security would be as follows.

  • Breach notifications – controllers would have to notify “personal data breaches” to data protection regulators, and in certain situations data subjects. The scope, thresholds and deadlines are still under discussion. For example, too low a threshold could result in “notification fatigue”, too high may impede data subjects’ post-breach self-help actions.
  • Security requirements; use of processors – the GDPR would provide separately for security and the use of processors (GDPR Arts. 30 and 23 respectively). General security requirements seem likely to remain unchanged, although Parliament would specifically mention confidentiality, integrity, availability – and “resilience”, undefined but increasingly prominent in practice as a general concept (recovering effectively from security problems). Requirements regarding the use of processors would be broadened, covering security and beyond. Contracts would have to oblige processors to “assist” controllers in complying with certain controller obligations, including regarding security, breach notification and data protection impact assessments.
  • Processor obligations/liability – processors, including service providers who handle personal data, would become directly responsible for various matters, including security, and could be sued by anyone harmed by a breach – see Open Season on Service Providers? The General Data Protection Regulation Cometh. The extent to which liability would be fault-based or strict, whether for security or more generally, is a major issue that remains to be settled.
  • Other matters – controllers (and processors, if Parliament prevails) would have to implement “appropriate” measures such that personal data processing would meet GDPR requirements, including “security by design and default”. This would affect the design and delivery of new products or services as well as existing ones. Security measures would be relevant in other areas, e.g. binding corporate rules, perhaps the level of sanctions, etc.
  • Certifications, codes of conduct, marks and seals – with GDPR’s emphasis on accountability, approved certifications and the like will assume greater importance, as being “an element” (or more) in demonstrating certified controllers’ compliance (but seemingly not processors’ compliance, which seems an oversight). Again, disagreements remain regarding who approves certifications etc. for this purpose and how, e.g. only data protection regulators?
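The breach-notification point above turns on two open parameters: how quickly regulators must be told, and how severe a breach must be before data subjects are also told. A minimal sketch in Python of how an organisation might parameterise both – the 72-hour window, the 0–1 risk score and all names here are purely illustrative assumptions, since neither figure had been settled in trilogue at the time of writing:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Both values are hypothetical placeholders, not settled GDPR requirements.
REGULATOR_WINDOW = timedelta(hours=72)
DATA_SUBJECT_RISK_THRESHOLD = 0.5  # illustrative 0-1 risk score

@dataclass
class Breach:
    detected_at: datetime
    risk_to_data_subjects: float  # assessed likelihood/severity of harm, 0-1

def regulator_deadline(breach: Breach) -> datetime:
    """Latest time by which the data protection regulator must be notified."""
    return breach.detected_at + REGULATOR_WINDOW

def must_notify_data_subjects(breach: Breach) -> bool:
    # Too low a threshold risks "notification fatigue"; too high may
    # impede data subjects' post-breach self-help actions.
    return breach.risk_to_data_subjects >= DATA_SUBJECT_RISK_THRESHOLD

breach = Breach(datetime(2018, 6, 1, 9, 0, tzinfo=timezone.utc), 0.7)
print(regulator_deadline(breach).isoformat())  # 2018-06-04T09:00:00+00:00
print(must_notify_data_subjects(breach))       # True
```

Whatever figures the final text adopts, keeping them as named, configurable parameters makes it straightforward to adjust breach-handling procedures once the trilogue concludes.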

While GDPR was intended to be technology-neutral, it would perpetuate the 1970s “computer services bureaux” outsourcing models that the DPD embeds (see Reed, Making Laws for Cyberspace). Cloud computing exemplifies this problem (see Millard (ed), Cloud Computing Law). Many GDPR requirements do not fit the commoditised, standardised, self-service, “shared security responsibility” model of service delivery that typifies public cloud. In particular, it would not be feasible for cloud providers to assess individual customers’ security risks and tailor security measures accordingly; costs would rise, and the service would no longer be public cloud.

What should businesses be doing to get ready at this stage?

Both controllers and processors should review their security measures and breach-handling plans, implementing industry best practices and changing their systems where necessary to detect, contain and manage/notify breaches. This is important not only for GDPR compliance but more generally. Cyberattacks are increasing – a breach is a question of when, rather than if, even for organisations that believe (wrongly) that they are “too small” to be targeted – and knock-on damage to an organisation’s reputation can hit the bottom line as much as regulatory sanctions, if not more. Data security breaches have also resulted in high-level resignations, such as those of hacked US retailer Target’s CIO and CEO in 2014 and, more recently, the CEO of “adultery-broking” website Ashley Madison. The 80/20 rule seems to apply: most technical security breaches result from basic precautions being neglected. With initiatives like the UK government’s Cyber Essentials scheme, organisations have no reason not to implement at least a few core measures.

From a legal perspective, heavy negotiations of controller-processor contracts seem likely, as parties strive to balance the allocation of liability and indemnities between them, particularly in relation to security. Existing contracts will need to be tracked, considered and renegotiated as necessary, again from the perspectives of both controllers and processors.

What difference would these changes make in practice and why?

In Pinsent Masons’ experience, many issues that prove important for organisations in the aftermath of large data security breaches may in fact bear little relation to data protection laws per se – such as ensuring legal professional privilege is maintained in relation to breach investigations by relevant regulators, potential civil lawsuits, and dealings with law enforcement authorities’ cybercrime units, including seeking undertakings from them regarding the conduct of their criminal investigations. Nor are actual or potential regulations affecting cybersecurity limited to data protection laws. For example, the proposed EU Network and Information Security Directive, also currently in trilogue, would impose security and breach-notification obligations on organisations providing “critical infrastructure”, notably in the banking, energy, healthcare and transport sectors. A major debate currently raging is the extent to which “internet enablers” such as search engines or cloud providers should be caught, and what their (probably more limited) security obligations should be. Multinationals will usually also be subject to other jurisdictions’ requirements, such as, in the US, the Federal Trade Commission’s requirements regarding security measures. Ascertaining, devising and implementing practical measures that meet all these laws’ possibly differing security requirements will be challenging.

It’s not just personal data that needs protection from the ever-increasing risks of cybercrime and state-sponsored hacks, or indeed garden-variety mistakes and accidental breaches. Company-confidential or proprietary information, such as source code and trade secrets, also requires protection. As UK Deputy Information Commissioner David Smith recently underlined, it will be critical for organisations to formulate, implement and maintain, in advance of any breaches, not only appropriate security policies and procedures but also a breach management strategy and process (including breach notification procedures). Organisations can no longer afford to treat security as an afterthought – it is now a board-level issue.

Therefore, security needs to be built in from the outset, and a plan to handle breaches put in place and rehearsed. The use of insurance to transfer cybersecurity risk also merits consideration, although the cyberinsurance market is still developing in the EU, and policies require careful scrutiny to ensure that the desired risks are covered and stipulated conditions can be met. Organisations should implement not only technical measures, such as encryption and tokenisation, access controls and authentication, but also organisational measures, including appropriate policies, procedures and staff awareness and training. To date, the human element has proved the biggest source of security breaches, notably staff succumbing to phishing or “watering hole” attacks, and this is likely to continue. The chief takeaway is that, in practice, organisations need to be prepared all round on the security front, irrespective of the GDPR.
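As one concrete illustration of the technical measures mentioned above, tokenisation replaces a raw identifier with a surrogate value, so the original need not circulate through every downstream system. A minimal sketch in Python using only the standard library – the in-memory key and vault are simplified placeholders for illustration, not a production design, where a key-management system and an access-controlled store would be used:

```python
import hashlib
import hmac
import secrets

# Illustrative only: in practice the key would live in a key-management
# system and the vault would be an access-controlled data store.
TOKEN_KEY = secrets.token_bytes(32)
_vault = {}  # token -> original value

def tokenise(value: str) -> str:
    """Replace a raw identifier with a deterministic keyed-hash token."""
    token = hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()
    _vault[token] = value
    return token

def detokenise(token: str) -> str:
    """Recover the original value; only systems with vault access can."""
    return _vault[token]

token = tokenise("alice@example.com")
assert token != "alice@example.com"  # downstream systems see only the token
assert detokenise(token) == "alice@example.com"
```

Because the token is derived with a keyed hash, it is deterministic (the same input always yields the same token, preserving joins and lookups) yet unguessable without the key – one reason tokenisation is often paired with, rather than substituted for, encryption of the vault itself.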

