Media Coverage

Kingsley Hayes highlights the pitfalls of automated decision making

Partner and Head of Data Breach, Kingsley Hayes, highlights the pitfalls of automated decision making and explains the importance of GDPR compliance when relying on algorithms to make decisions.

Kingsley’s article was published in ITNow on 17 May 2022 and can be found here.

The relationship between artificial intelligence (AI) and the General Data Protection Regulation (GDPR) is complex. GDPR plays a significant role in creating a more regulated data market, with the overarching aim of protecting individuals’ privacy and rights. At the same time, AI and machine learning use data to improve productivity and to amplify our capacity to solve problems.

Abundant data is the fuel that fires AI and machine learning systems, yet GDPR places constraints on the use of data. Policy makers must strike a careful balance between individual data rights and innovation in AI. As more and more organisations bring AI systems online, it is imperative that they are aware of, and compliant with, their GDPR obligations relating to automated decision making.

A recent case provides a timely reminder of the importance of understanding the legal obligations that apply to automated decision making. Estée Lauder Companies UK & Ireland recently reached an out-of-court settlement with three make-up artists who lost their jobs after taking a video interview that was assessed by AI.

The women, who were facing redundancy, were required to reapply for their positions, and were asked to take a video interview as part of this process. However, no human being reviewed the video. Instead, it was analysed by the company’s automated hiring software, which assessed the content of their answers, and even their facial expressions, and then processed the results along with other data about the women’s job performance.

This case illustrates the consequences of failing to comply with the legal obligations under GDPR that restrict solely automated decision making. A failure to incorporate human intervention in decisions that have a significant impact is quite simply illegal.

AI technologies already exist that automatically make important decisions, such as credit scores or the outcome of loan applications, and save banks significant staff time and wage costs. However, banks seeking to adopt these technologies must be mindful that the final decision rests with a human.

Article 22 of GDPR covers “automated individual decision-making, including profiling.” It says that a data subject has the right not to be subject to a decision based solely on automated processing, including profiling, that produces legal effects concerning them or similarly significantly affects them.

This means that any decision which significantly impacts a person’s legal rights or individual circumstances cannot be based solely on automated processing. Some argue that this requirement could dampen the potential economic benefits of AI. Other analysts, such as Kalliopi Spyridaki of the SAS Institute Inc., argue that GDPR’s legally guaranteed human oversight of AI could, in fact, “help create the trust that is necessary for AI acceptance by consumers and governments.”

Estée Lauder’s failure to incorporate human intervention into a decision that cost three claimants their jobs was a clear breach of Article 22. As such, it gave rise to a data breach claim. However, many people might never know that they have been the victim of such an AI decision. Even where there is some sort of human input, there is a risk that the human merely rubberstamps the AI decision.

Article 15 GDPR enables individuals to obtain information as to “the existence of automated decision-making, including profiling, referred to in Article 22 (1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”

One of the former Estée Lauder employees said: “They pasted the same sentence about algorithms and artificial intelligence and this tiering bucket of 15,000 data points. I still don’t know what all that means – to me that isn’t an answer.” Given the novelty and complexity of such systems, the question of what qualifies as “meaningful information about the logic involved” remains open. Companies using AI may need to work on their communication techniques in order to comply with GDPR data access requests, and to obtain valid, informed consent from individuals.

When it comes to human oversight of AI, Elena Falletti of the Università Carlo Cattaneo suggests that GDPR requires human intervention by a person with “the necessary authority, ability, and competence to modify or revise the decision disputed by the user.” Ms Falletti also suggests that genuine transparency for ordinary people means that technical explanations of the AI processes involved “may not be sufficient if the information received is not comprehensible to the recipient.”

Indeed, the question of precisely which technologies actually amount to AI is complex. The Government Office for Science’s paper on the topic describes the big data that fuels AI as “high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.”

The Society for the Study of Artificial Intelligence and Simulation of Behaviour says that AI is about “giving computers behaviours which would be thought intelligent in human beings.”

Meanwhile, Gernot Fritz of Freshfields suggests that “AI is not a singular technology but rather a multitude of techniques deployed for different commercial and policy objectives that – in order to be called ‘AI’ – must fulfil three criteria, which are being able to:

  1. perceive its environment and process what is perceived;
  2. independently solve problems, make decisions and act; and
  3. learn from the results and effects of these decisions and actions.

In fact, the term ‘AI’ is often used to describe a basket of different computing methods, which are used in combination to produce a result but which aren’t necessarily AI by themselves.

The distinctive feature of AI is that the machine can go beyond its code and ‘learn’ new things and thus outgrow its original programming.”

Given the subtlety and variety of these definitions, organisations should take a broad approach to defining AI. In reality, many individuals will be unaware that AI was involved in a decision affecting them. Even if they discover the use of AI, they may in any event be unaware of the legal protections afforded to them by GDPR. It is therefore essential that consumers and citizens are better informed as to the growing impact of AI decision making on their lives, and their rights in relation to it.

The basic principles of data protection set out in Article 5 GDPR also apply to AI, of course. These include, for example, the data minimisation principle and the purpose limitation principle. Yet in our increasingly data-driven economy, many organisations still routinely seek far more personal data than is genuinely necessary.

As our lives move increasingly online, it is essential that those who have experienced GDPR violations as a result of algorithmic and automated decision-making processes are informed and supported in seeking redress. It is also clear that greater public awareness is needed of individual rights in relation to data and AI.

As a society, we are reaching an important threshold when it comes to AI. AI systems are being developed that can carry out roles which were once the exclusive preserve of humans. AI systems can control cars, boats and aircraft. They can decide who to employ and who to give a loan to. AI performs the role of editor on our social media newsfeeds. It can manage dynamic systems like electricity networks. As these capabilities grow, it is essential that organisations which embrace the potential of AI do so with a clear understanding of the legal requirements governing its use.
