Wednesday, December 18, 2024

CDT Europe Calls on EU Leaders to Prohibit Mass Surveillance Through Indiscriminate and Arbitrary Uses of Biometric Technologies in the EU’s AI Act

In this blog post, CDT recalls what biometric surveillance entails, why mass surveillance uses of biometric technologies must be prohibited, and why non-mass surveillance uses of biometric technology also need to be carefully regulated.

Earlier this year, the European Parliament adopted its position on the AI Act and, heeding the calls of civil society, voted not only to keep but to extend the European Commission’s initially proposed ban on real-time biometric identification, so that it also covers “post” remote biometric identification deployed in publicly accessible spaces. The prohibition in the Parliament’s text is not absolute, however: it includes an exception allowing ex-post use in the context of serious crimes, subject to prior judicial approval.

The Council of the EU, on the other hand, agreed in its version of the text to extend the permitted uses of remote biometric identification systems by law enforcement and migration authorities to include cases where there are threats to critical infrastructure or to the health of individuals. In addition, its text would authorise the use of “real-time” remote facial and other biometric identification systems to investigate and prosecute all offences carrying a sentence of at least five years in EU Member States, and it explicitly excludes border control areas from the definition of “publicly accessible spaces”, thereby allowing the use of remote biometric surveillance tools in those areas.

These two opposing positions demonstrate that the question of biometric surveillance will be a key part of the debate during the final months of negotiation on the EU’s AI Act. From a rights-protecting perspective, it is vital that the legislation make clear that mass surveillance uses of biometric technologies must be prohibited, and equally that non-mass surveillance uses of biometric data need to be carefully regulated.

What are Biometrics and How are They Used?

Biometrics refers to the process of recording, measuring and analysing a range of human biological, physical, or physiological features; it can include appearance, behaviour and mental state. Biometric data like fingerprints and DNA have been used in criminal investigations for decades. Because biometric identity attributes are both unique to a person and stable over time, they can offer a useful tool for accurate and efficient identification, provided appropriate procedural safeguards are in place. However, those same characteristics are what make such data incredibly sensitive and personal, particularly when traditional biometric data collection methods are supplemented by remote surveillance, including without the data subject’s knowledge or consent. Moreover, automating the analysis of large volumes of biometric data, as mass surveillance invariably requires, heightens the risk of discriminatory outcomes.

What is Mass Biometric Surveillance and Why Must it be Prohibited?

Biometric mass surveillance refers to uses of biometric technology to scan individuals, usually remotely, arbitrarily and at mass scale. For example, CDT Europe has already detailed, in a previous blog post, the particular harms posed by law enforcement’s use of untargeted facial recognition systems in public spaces, for instance scanning each face in a crowd against a database of suspects. It is impossible for individuals to opt out of such mass scanning, and this use of facial recognition is known to produce discriminatory outcomes because the technology is more accurate on light-skinned cisgender males than on members of other groups. Moreover, law enforcement officers can use mass biometric surveillance to identify attendees at a public political protest absent any suspicion or evidence that the protestors have broken any laws. This is a prime example of the kind of indiscriminate, arbitrarily targeted use of biometric mass surveillance that should, given its unacceptable risk to human rights, be prohibited. When CDT says it supports a ban on biometric mass surveillance, this is the kind of use we are referring to.
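
To illustrate what “scanning each face in a crowd against a database of suspects” involves in practice, the sketch below shows the one-to-many matching step at the core of untargeted facial recognition. This is a minimal, purely illustrative sketch and not a description of any deployed system: the function names, the embedding inputs and the similarity threshold are hypothetical assumptions.

```python
# Minimal, illustrative sketch of one-to-many ("identification") matching,
# the operation at the core of untargeted facial recognition.
# All names and the threshold value are hypothetical; real systems differ.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def scan_crowd(crowd_embeddings: list[np.ndarray],
               watchlist: dict[str, np.ndarray],
               threshold: float = 0.8) -> list[tuple[int, str, float]]:
    """Compare every face detected in a crowd against every watchlist entry.

    Every person present is processed, whether or not they are suspected of
    anything; this indiscriminate scanning is what makes the use "mass"
    surveillance rather than a targeted check.
    """
    hits = []
    for i, face in enumerate(crowd_embeddings):
        for identity, reference in watchlist.items():
            score = cosine_similarity(face, reference)
            if score >= threshold:  # the threshold choice drives false matches
                hits.append((i, identity, score))
    return hits
```

The nested loop makes the scale of the intrusion visible: the number of comparisons grows with every person in the crowd multiplied by every entry on the watchlist, regardless of whether anyone present is suspected of wrongdoing.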

What Uses of Biometric Data Need to be Carefully Regulated?

While mass biometric surveillance must be prohibited, other uses of biometric surveillance can also pose serious risks to human rights and should be permitted only under a robust regulatory regime that ensures transparency, proportionality, oversight and redress. Non-mass surveillance uses of biometric data might include consensual authentication, targeted surveillance (which would still need to pass a proportionality test to be lawful), and certain commercial uses. Such non-mass surveillance uses could be deployed, but they would still need to be carefully regulated. For example, a one-to-one match system used at an airport could in theory operate in compliance with EU and human rights law, but it would be important that such systems be subject to audit and oversight, in case they are used in a manner other than their specific intended purpose. Such uses need to be assessed on a case-by-case basis to determine whether they comply with the GDPR and with any new obligations that the AI Act will introduce.
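
By contrast with the one-to-many matching sketched above, the following sketch shows what a one-to-one match of the kind an airport gate might perform could look like, with every comparison written to an audit log so that oversight bodies can later verify how the system was actually used. Again, this is a hedged illustration only: the function, the threshold and the log format are assumptions, not a description of any real deployment.

```python
# Illustrative sketch of one-to-one ("verification") matching with an audit
# trail. The names, threshold and log format are hypothetical assumptions.
import json
import time

import numpy as np


def verify(live_embedding: np.ndarray,
           enrolled_template: np.ndarray,
           claimed_identity: str,
           threshold: float = 0.8,
           audit_log_path: str = "verification_audit.log") -> bool:
    """Compare one live capture against the single template enrolled for the
    claimed identity, and record the outcome for later oversight."""
    score = float(np.dot(live_embedding, enrolled_template)
                  / (np.linalg.norm(live_embedding) * np.linalg.norm(enrolled_template)))
    accepted = score >= threshold
    # Append an auditable record of every comparison: who was checked, when,
    # and with what outcome, but no raw biometric data.
    with open(audit_log_path, "a") as log:
        log.write(json.dumps({
            "timestamp": time.time(),
            "claimed_identity": claimed_identity,
            "accepted": accepted,
        }) + "\n")
    return accepted
```

The point of the log is not the specific fields but the principle: a system deployed for one narrow, lawful purpose should leave a trail that auditors and regulators can check against that purpose.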
