
Article 22 of the GDPR (the right to opt out of automated decision-making)




From the outset, Article 22 of the GDPR sparked considerable theoretical legal interest as one of the first standalone provisions to deal with automated decision-making head on. It remains in a delicate position, however: its ambiguity and complexity make it difficult to apply.


The Scope of Article 22


Article 22(1) provides that an individual has the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them. The apparent clarity of this provision proves short-lived, however, once the relevant words and phrases are unpacked.


One main instance of this is the so-called ‘in-between cases’. For example, in a landmark case before the Court of Justice of the European Union (CJEU), SCHUFA (Case C-634/21), a credit agency claimed that it merely profiled individuals and did not adopt any automated decisions within the meaning of Article 22 of the GDPR; the actual decisions about the individuals, it argued, were made subsequently by the banks. The CJEU disagreed and found that the creation of the credit score was itself an independent decision, because the agency’s client relied heavily on it when deciding whether to grant a loan. This was evidenced by the fact that an insufficient score produced by the system would, in almost all cases, lead to the refusal of a consumer loan. The ruling reveals that determining when a decision has been made ‘solely’ by automated means is difficult in practice.


For individuals, this vague wording has real practical consequences and creates great legal uncertainty and unpredictability. In SCHUFA, had the credit agency introduced a human reviewer only when an application was rejected, the rejected applicant would have been in a marginally worse legal position than one whose application had been accepted: Article 22 would apply to the latter, fully automated decision, but not to the former, where an actual grievance had occurred. Importantly, this issue trickles down into broader privacy problems for individuals, as Big Data companies such as Facebook have employed workers to trawl through algorithmic outputs so that, as a result of this human intervention, individuals cannot protect themselves under Article 22.


The application of Article 22 is therefore rarely simple and may leave claimants aggrieved and without redress for as long as the wording of the provision remains as vague as it currently is. The CJEU’s guidance has been useful in delimiting the boundaries of this right; however, because it is the highest court in the EU, very few claimants’ appeals ever make it that far upstream.


The Exceptions


To make matters worse, the article contains further exceptions. Under Article 22(2)(a), where the decision is necessary for entering into, or the performance of, a contract between the data subject and the controller, the prohibition does not apply. This raises the question of how far the exception can extend. If extensive automated profiling is deemed necessary to obtain a loan from a bank, for example, the consumer cannot negotiate or decide whether such contractual automated decision-making should be allowed. Yet it is precisely in these circumstances that protection is needed most: whether an individual receives a loan can be a matter of insecurity and instability in their personal life, so protection against unfair outcomes should be regarded as the highest priority.


The next exception, under Article 22(2)(b), defers to Union and Member State law to determine when automated decision-making should be permitted. In particular, this widens the scope for governments to use automated systems in national security decisions, increasing the risk that individuals are profiled, for instance for surveillance purposes. Although the provision requires the authorising law to lay down suitable safeguards, the GDPR does not specify what those safeguards must contain. Importantly, this can lead to divergent approaches across Member States, hindering the uniformity on which the GDPR is premised.


Under Article 22(2)(c), where the decision is based on the data subject’s explicit consent, the prohibition again does not apply. The difficulty lies in the fact that genuine consent, as the GDPR intends it, is rarely achievable for a consumer, owing to a lack of knowledge, information and understanding. The problem is heightened in the context of automated decision-making, because the logic behind these systems often cannot be fully understood even by those developing and deploying them, let alone by a consumer typing in their information. Treating ‘consent’ as a valid exception therefore invites the question whether a consumer is ever capable of reaching this high threshold, especially in the case of automated decision-making.


The Impact of Article 22 on the GDPR


Article 15 of the GDPR was written in the same spirit as Article 22 and accords data subjects the right of access to their personal data, to related information, and to an understanding of the consequences of processing. This gives the end user insight into the process that has produced certain effects on their life. Its main point of contention is whether Article 15(1)(h), the right to ‘meaningful information’, applies adequately to automated decision-making. The provision states that ‘at least’ in some cases of automated decision-making, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject, must be provided. However, it does not explain how far ‘at least’ extends or which instances of automated decision-making it targets.


The right to erasure under Article 17 might provide more hope. Article 17 sets out the so-called ‘right to be forgotten’, under which an individual can have the personal data concerning them erased without undue delay. Through its link to the right to object under Article 21, it can protect against profiling, which is the main worry with the use of algorithms. However, it does not attack the root of the issue. Big Data platforms such as Facebook and Google Search base their algorithms on accumulating observed and inferred personal data about an individual in order to make their systems personalised and targeted. Yet Article 17 has not been adapted to this purpose, as shown by the fact that, throughout the provision’s history, its use in this way has scarcely been considered.


A similar weakness can be illustrated by Article 20, the right to data portability. This right supports an individual’s informational self-determination because it enables the data subject to exercise more control over their data. However, it does not specify whether such ‘data’ covers only data the individual has explicitly provided or also the inferred data that algorithms classically generate. Such unknowingly acquired data remains one of the greatest worries in data privacy law as a whole, because it can reveal information about an individual which they would never willingly give away. Furthermore, because the right applies only to data processed on the basis of consent or contract, data processed on other grounds, such as a legitimate interest, falls outside its scope. Its main redeeming feature in relation to Article 22, however, is that it covers data processed through automated means even where human intervention has occurred. This makes the provision promising for the present discussion and raises the possibility that, even if Articles 15 and 17 provide scant help, a combination of Articles 22 and 20 might cover at least some of the cases otherwise left without recourse.


The GDPR is premised on a system that prevents informational injustice: controllers must provide information, an explanation or a justification. With respect to the right to information, Articles 13(2)(f) and 14(2)(g) require controllers who use personal data for automated decision-making to inform individuals of that activity and to provide them with ‘meaningful information’ about the logic involved. Although this requirement is not found directly under Article 22, it highlights that the spirit of the GDPR, and therefore the intention of the drafters, is for the provision to act as a beacon against nefarious automated decision-making carried out without the understanding of the individual affected.


It has been hotly debated whether Article 22 contains a right to an explanation. In favour of this argument, Article 22(3) specifies safeguards that must be built into the design of automated decisions, and, read alongside Recital 71, a right to an explanation of a specific automated decision can be derived. Moreover, if we consider what exactly is meant by a ‘right to explanation’, its purpose is to allow an individual to understand the way in which their data is being used. On this reading, Article 22 provides it, because a controller is under an obligation to supply the information necessary for the individual to exercise their rights. As certain commentators have argued, this does not mean the black box itself needs to be opened; rather, the individual merely needs to acquire the necessary knowledge.


Article 22 therefore rests on the right foundations, but its wording and ambiguity can be questioned. In light of the criticisms set out above, perhaps its provisions should be more tightly worded to ensure that as many claimants as possible can benefit from them. The world of automated decision-making, however, remains a murky, less-travelled path, and without further information on how exactly these systems operate, we cannot judge how to safeguard against them adequately. Even so, Article 22 remains a good start.
