“Computer says ‘No’”? …So, what exactly do the regulators think that the GDPR says in response?
Last month, the Article 29 Working Party (Art.29 WP) announced that it is seeking feedback on draft Guidelines on automated individual decision-making including profiling (WP251) under the General Data Protection Regulation ((EU) 2016/679) (GDPR) in advance of its arrival next year. As a reminder, the GDPR specifically addresses automated decision-making and profiling as types of algorithmic techniques used upon data relating to people – to learn, analyse or predict things about them – which are now increasingly popular in a number of sectors. While great benefits can be reaped from such uses, for example in tailoring services and products to individual needs, there are also accompanying risks of negatively affecting individuals’ rights (in particular, the potential for infringements of privacy and the perpetuation of discrimination embedded in automated systems) unless appropriate safeguards are implemented to prevent these.
As such, the issue of how data protection law should regulate this area to ensure that such processing activities are fair is a topic which has captured the attention of regulators at both the EU and the national Member State level. For example, earlier this year the UK ICO published a discussion paper seeking feedback on the profiling provisions under the GDPR to help it advise the Art.29 WP in interpreting them (see my earlier post here on the contents of that discussion paper), while the Art.29 WP published an advice paper on essential elements of a definition and a provision on profiling in 2013 (in relation to the GDPR in its earliest Commission draft 2012 form).
So, how does the Art.29 WP’s new guidance build upon the advice set out in this earlier paper?
First, like the ICO’s discussion paper, the guidelines tackle definitions. An important conceptual distinction can be made here:
- Under Article 4(4) GDPR, profiling is defined as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”. Thus, the guidance says, it requires three elements to be satisfied: 1) it has to be an automated form of processing (although not necessarily a solely automated process); 2) it has to be carried out on personal data; and 3) the objective of the profiling activity must be to evaluate personal aspects about a (living) person. In keeping with both the Art.29 WP’s previous advice paper on profiling and the ICO’s discussion paper, therefore, this approach would encompass the collection of data for the creation and use of profiles (i.e. the analysis of personal aspects as well as processing that has a predictive element).
- Under Article 22 GDPR, restrictions are imposed on solely automated decision-making (which may, but need not, involve profiling), i.e. it only applies where the process underpinning the decision-making does not involve any human input (“[t]he data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”). Thus, partly automated decision-making/profiling would not be caught. The Art.29 WP clarifies in this respect, however, that superficial human involvement cannot be used to get around the restrictions. A decision only escapes Article 22 where an individual is meaningfully involved who has the authority and the competence to change the decision – in particular, says the Art.29 WP, having considered all the available input and output data associated with the workings of the automated part of the algorithmic process and checked its conclusion.
In terms of the general obligations that apply when personal data are profiled, as a reminder the GDPR introduces ‘unambiguous’ consent as the strengthened standard that can provide a legal basis for such profiling activities. Other legal bases available to the data controller, if unambiguous consent is not available, notably include the ‘legitimate interests’ basis (e.g. because the processing is necessary “for direct marketing purposes” or “for the purposes of preventing fraud”), albeit that the controller must also carry out a balancing exercise to assess whether such interests are overridden by the data subject’s interests. Under the latter legal basis, the guidance clarifies that the following profiling-specific factors are particularly relevant: “the level of detail of the profile (a data subject profiled within a broadly described cohort such as ‘native English teachers living in Paris’, or segmented and targeted on a granular level); the comprehensiveness of the profile (whether the profile only describes a small aspect of the data subject, or paints a more comprehensive picture); the impact of the profiling (the effects on the data subject); and the safeguards aimed at ensuring fairness, non-discrimination and accuracy in the profiling process”. The GDPR does not explain what would constitute “compelling legitimate grounds” for the profiling to override the “interests, rights and freedoms” of the individual – in the event that someone objects to profiling justified on the basis of the ‘legitimate interests’ of the controller (or a third party) – and the guidance does not dwell on this issue either, although it does list what the controller would need to prove. These include that “the impact on data subjects is limited to the minimum necessary to meet the particular objective (i.e. the profiling is the least intrusive way to achieve this)” and that “the objective is critical for the organisation”.
[Where sensitive personal data is used in a profiling activity, by comparison, the only legal bases available to the controller are where: the data subject has given their (explicit) consent to that processing (except where a law provides that such prohibition cannot be lifted by the individual’s consent); or the processing is necessary for reasons of public interest (Article 9 GDPR). Of course, this sensitive personal data includes any new data inferred as part of the processing output, so controllers have to consider how to inform data subjects of such processing and mitigate the risk of identifying special (sensitive) categories of personal data included in their profiling activities, unless they can ensure that such data will also be processed lawfully in line with the GDPR requirements.]
In terms of data subject rights applying specifically to profiling activities, under the GDPR (Article 21(2) and 21(3) and Recital 70) a data subject has the right to object to profiling – specifically, a data controller must stop processing a data subject’s personal data for direct marketing purposes when the person objects. Other rights, e.g. to personal data rectification/erasure, are also discussed, including in respect of both ‘input personal data’ (used to create the profile) and the ‘output data’ (the profile itself or ‘score’ assigned to the person, which is personal data relating to the person concerned). More generally, the standard GDPR rules applied to personal data processing will apply to profiling activities, notwithstanding their complexity. For example, the controller must provide sufficient information to the data subject to make the processing fair and meet all the other information requirements of Articles 13 and 14 GDPR. In particular, Articles 13(2)(f) and 14(2)(g) require controllers to provide to data subjects meaningful information about the logic of solely automated decision-making (including profiling) used in respect of their personal data that produces legal or similarly significant effects, and “the significance and the envisaged consequences of such processing for the data subject”. These information-provision requirements have sparked many discussions about how exactly a controller can explain to data subjects the functioning of the complex algorithms involved. In particular, as the ICO asked in its discussion paper: “how do you propose handling the requirement to provide relevant and timely fair processing information, including “meaningful” information on the logic involved in profiling and automated decision-making?” [A novel solution can be found in a recent paper, Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR, here]. The Art.29 WP attempts to meet this challenge by explaining that controllers should find simple ways to tell the data subject about the mechanism behind the decisions reached, without necessarily always attempting a complex explanation of the algorithms used. It gives the example of a controller applying a credit scoring method, with an associated explanation to the data subject that this process helps it make responsible and fair lending decisions, together with details of the main characteristics considered in reaching the decision, the source of this information, and its relevance.
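[By way of purely hypothetical illustration of the kind of ‘simple explanation’ the Art.29 WP seems to have in mind, the sketch below shows how a controller using a very simple, linear credit-scoring model might surface the ‘main characteristics considered’ in plain language, without exposing the full model internals. The feature names, weights and approval threshold are invented for the example and are not drawn from the guidelines or any real scoring system.]

```python
# Hypothetical sketch: plain-language summary of the main characteristics behind an
# automated credit decision, for a toy linear scoring model. All names, weights and
# the threshold below are invented for illustration only.

WEIGHTS = {                         # hypothetical model coefficients
    "years_at_current_address": 0.8,
    "existing_credit_accounts": -0.5,
    "missed_payments_last_year": -2.0,
    "income_to_repayment_ratio": 1.5,
}
THRESHOLD = 3.0                     # hypothetical approval cut-off


def score(applicant: dict) -> float:
    """Linear score: sum of weight * value over the features the model considers."""
    return sum(WEIGHTS[name] * applicant[name] for name in WEIGHTS)


def explain_decision(applicant: dict, top_n: int = 2) -> str:
    """Return a short, human-readable note of the decision and the characteristics
    that contributed most to it (by absolute contribution)."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    decision = "approved" if score(applicant) >= THRESHOLD else "declined"
    main = sorted(contributions, key=lambda n: abs(contributions[n]), reverse=True)[:top_n]
    factors = ", ".join(f"{name} ({contributions[name]:+.1f})" for name in main)
    return f"Application {decision}. Main characteristics considered: {factors}."


if __name__ == "__main__":
    applicant = {
        "years_at_current_address": 2,
        "existing_credit_accounts": 3,
        "missed_payments_last_year": 1,
        "income_to_repayment_ratio": 2.0,
    }
    print(explain_decision(applicant))
```

Of course, real scoring systems are rarely this simple – which is precisely where approaches such as the counterfactual explanations referenced above become attractive.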
In terms of interpreting Article 22(1), the guidelines state that this amounts to a prohibition on solely automated individual decision-making that has a legal or similarly significant effect, with limited exceptions: if that processing is necessary for entering into or performing a contract; it is authorised by EU or Member State law applicable to the data controller which contains appropriate safeguards to protect the data subject’s rights; or it is based on the data subject’s explicit consent. Moreover, under Article 22(4), a data controller must not use sensitive personal data in such solely automated decision-making, except under certain limited circumstances.
The Art.29 WP’s guidance here raises several points worth comment:
1. The GDPR’s ‘right not to be subject to…’ is here interpreted as a prohibition with limited exceptions, rather than as a literal right of a data subject to object to such processing (e.g. by a written notice requiring a controller to cease and desist further processing). Yet the latter interpretation has been chosen by some EU Member States (including the UK, under the Data Protection Act 1998, section 12) when implementing a similar provision found in Article 15 of the existing Data Protection Directive (granting “the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc”). In contrast, section 47 of the UK Data Protection Bill 2017 seems to understand Article 22 GDPR as a general ban and not an opt-out. We are left slightly confused (at least in the UK), as are – more generally – many other academics (see e.g. here). [Especially as section 13(3) of the Bill sets out a process for objecting to exempt ‘qualifying significant decisions’, but specifically excludes decisions necessary to a contract, or made with the data subject’s consent, from that definition.]
2. Delving further into the concept of ‘effect’ and how it might be interpreted in practice, the Art.29 WP gives examples of ‘legal effects’ that include situations where automated decision-making would impact upon a person’s entitlement to a particular social benefit. A ‘similarly significant effect’, meanwhile, is interpreted as an impactful consequence that must be more than trivial and must be sufficiently great or important to be worthy of attention (having the potential to “significantly influence the circumstances, behaviour or choices of the individuals concerned”). [Surely there is a wide difference between a ‘more than trivial’ and a ‘significant’ influence?!] Examples provided include matters such as refusal of credit for a significant purchase, online advertising based solely on automated processing where the individual being targeted is vulnerable, or differential pricing which may have the effect of excluding the data subject from certain goods or services. What the Art.29 WP does not address, however, are the factors – and the interplay of such factors – relevant to determining when data processing circumstances might result in a data subject likely being ‘legally’ / ‘significantly’ affected, and the measures that can be put in place to assess levels of impact. As I mentioned in my previous post, this remains an important ‘gap’ in regulatory guidance, which is crying out for the explicit establishment of “an external recognised standard to measure such effects”.
3. It is interesting to note that the Art.29 WP appears to view the requirement to have in place a legal basis justifying the lawfulness of solely automated decision-making processes in a dynamic fashion. Said otherwise, where a controller wishes to rely on the available exemptions relating to necessity for entering into or performing a contract, or to obtaining the data subject’s explicit consent to the processing, suitable safeguards must not only be in place (e.g. per Article 22(3) GDPR, the data subject must be afforded a right to obtain human intervention, to express their point of view and to contest the decision) but must also remain in place going forwards. The guidelines give an example in the case of credit referencing prior to entry into a contract and conclude that this reason will not always be sufficient to show that the processing is “necessary”. Thus, a controller will need to be able to 1) justify why this processing is critical, including demonstrating the non-availability of a less privacy-intrusive means of achieving the same outcome, 2) state what the particular business objective is (not always easy when automated processing is used in an innovative context), and 3) demonstrate that the processing remains necessary as the data environment and business context change over time. [While the ‘legitimate interests’ legal basis is not available to a controller engaged in solely automated processing activities with likely effects sufficient to be caught by Article 22, it is interesting to note that strikingly similar requirements are placed upon data controllers so caught as are imposed in order to satisfy the ‘legitimate interests’ legal basis available under the GDPR in respect of other types of processing. In both cases, the controller will need to prove that the activity is critical for achieving a desired organisation-specific objective and include adequate safeguards to protect individuals’ rights in that context, monitored over time.] At the same time, obtaining GDPR-valid (meaningful and freely given) consent to solely automated (and, indeed, partially automated) profiling activities will be challenging when they are based on complex, opaque algorithms, as alluded to above.
The guidelines also address other data protection law principles set forth in the GDPR as applied to automated decision-making and profiling activities involving personal data, including purpose limitation, data minimisation, and storage limitation. In that context, they attempt to address what types of controls and safeguards may need to be introduced to satisfy these particular requirements. Ensuring that the raw data input into profiling activities, for example, is relevant, accurate, and kept for no longer than necessary, is a key issue vexing data controllers. It will require them to think about mechanisms to ensure that they meet the GDPR’s requirements to test regularly the effectiveness and fairness of automated systems (as well as the unbiased ‘provenance’ of the underlying data and the ultimate output). As mentioned in my last post, a key measure available in this respect is pseudonymisation. It is another dynamic ‘tool’ advocated by the GDPR to mitigate the risk of harm to data subjects, and to help controllers/processors meet their data protection obligations (Recital 28 GDPR), even if it is not a ‘silver bullet’ solution on its own.
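[Again by way of hypothetical illustration only, the sketch below shows one common pseudonymisation pattern: replacing a direct identifier with a keyed hash (HMAC) before the data enters a profiling dataset, with the secret key held separately so the link back to the individual can be controlled or, if the key is destroyed, effectively severed. The field names and key-management arrangement are invented for the example and are not prescribed by the guidelines.]

```python
# Hypothetical sketch: keyed pseudonymisation of a direct identifier using an HMAC,
# so that the same customer ID always maps to the same pseudonym but cannot be
# reversed without the secret key. Field names and key handling are illustrative only.

import hmac
import hashlib

# In practice the key would be generated securely and held outside the analytics
# environment; a hard-coded value is used here purely for the example.
SECRET_KEY = b"keep-this-key-outside-the-analytics-environment"


def pseudonymise(identifier: str) -> str:
    """Deterministic pseudonym for an identifier; not reversible without the key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


source_record = {"customer_id": "C-10042", "postcode_area": "SO17", "spend_band": "medium"}

profiling_record = {
    "pseudonym": pseudonymise(source_record["customer_id"]),  # replaces the direct identifier
    "postcode_area": source_record["postcode_area"],
    "spend_band": source_record["spend_band"],
}

print(profiling_record)
```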
Such issues need to be thought about holistically by data controllers as interrelating compliance factors to be managed. Luckily, the holistic ‘tool’ of a data protection impact assessment (DPIA) is there to assist. Indeed, Article 35 GDPR mandates that a DPIA be carried out whenever – inter alia – there is “a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person”. Although the ICO has provided some specific guidance on the practicalities of conducting DPIAs in a profiling context (in its 2017 Big Data report here), arguably still more guidance is needed from the Art.29 WP on the challenges of implementing the GDPR requirement to carry out inter-connected DPIAs in respect of data analytics operations involving two or more entities, including secondary profiling activities that must be compatible with the original purpose for which the personal data was collected.
In conclusion, the Art.29 WP’s guidance errs on the side of being conservative in interpreting the GDPR’s provisions in this area. Also outstanding are some important questions on how exactly vertical automated operations involving many players operating in different markets should assess and manage risks dynamically using DPIAs when engaging in big data analytics practices. On the other hand, the guidelines do refer to some examples of data broker scenarios, and Annex 1 also sets out some useful good practice recommendations to assist controllers in meeting the requirements of the GDPR provisions on profiling and automated decision-making.
Comments on the draft guidelines should be sent to just-article29WP-sec@ec.europa.eu and presidenceg29@cnil.fr before the deadline at the end of this month.
Alison Knight