Before facial recognition tech can be used, it needs to be limited
New research on facial recognition technology trials by police calls for tighter regulation to protect human rights. Joe Purshouse and Liz Campbell report
Automated facial recognition technology has been used at a number of crowd events in England and Wales over the past two years to identify suspects and prevent crime.
The technology can recognise people by comparing their facial features in real time with an image already stored on a “watch list”, which could be from a police database or social media account.
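In outline, such a system reduces each face captured on camera to a numerical "embedding" and compares it against the embeddings of the watch-list images. The sketch below is purely illustrative: the random embeddings, the cosine-similarity measure and the 0.6 threshold are assumptions for demonstration, not the workings of any particular police system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watch_list(face: np.ndarray,
                             watch_list: dict[str, np.ndarray],
                             threshold: float = 0.6) -> list[str]:
    """Return watch-list identities whose stored embedding is close
    enough to a face captured from the live camera feed.
    The threshold is an assumed value: raising it means fewer false
    alerts but more missed identifications, and vice versa."""
    return [name for name, stored in watch_list.items()
            if cosine_similarity(face, stored) >= threshold]

# Hypothetical watch list: in practice the embeddings would come from
# a trained face-recognition model, not random numbers.
watch_list = {"entry_a": np.random.rand(128), "entry_b": np.random.rand(128)}
live_face = np.random.rand(128)
print(match_against_watch_list(live_face, watch_list))
```

Any name returned triggers an alert for officers to act on, which is why the choice of threshold, and the quality of the stored images, matter so much.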
Such technology is becoming increasingly popular with police forces around the world. Where successful, it can have positive and headline-grabbing effects – for example, tracing missing children in India. But facial recognition technology is controversial, with research showing that it can be inaccurate and discriminatory. San Francisco is even considering a complete ban on its use by police.
Several British police forces have ongoing facial recognition trials. Our new research into the legal challenges posed by police use of facial recognition technology suggests that, based on the data made publicly available, arrests are rare and far outweighed by the incorrect matches made in live public surveillance operations.
This creates a risk that innocent people will be stopped and searched – a daunting experience.
Such trials are also costly. South Wales Police received a £2.6m government grant to test the technology and, so far, the Metropolitan Police has spent more than £200,000 on its ongoing trial.
Police have also been criticised for questionable practices in the use of facial recognition technology. The Metropolitan Police built and used a watch list of “fixated individuals” on Remembrance Sunday in 2017.
Reports suggest that, in some cases, these people were identified on the basis of criteria relating to mental ill-health, raising concerns that the technology was used in a discriminatory manner.
At the Uefa Champions League final in Cardiff in June 2017, South Wales Police reportedly deployed facial recognition technology using low-quality images provided by Uefa, European football's governing body. The system produced more than 2,000 false positive matches.
Its accuracy improved in subsequent deployments, but false positive matches still frequently outnumber successful identifications.
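This pattern is partly simple arithmetic: when a large crowd is scanned and only a handful of the people present are actually on the watch list, even a system that is rarely wrong about any individual face will generate more false alerts than genuine identifications. The figures below are illustrative assumptions, not data from any police trial.

```python
# Illustrative base-rate arithmetic; every figure here is assumed,
# not drawn from any actual deployment.
crowd_size = 100_000          # faces scanned at a large event
watch_listed_present = 20     # people in the crowd who are on the list
false_positive_rate = 0.001   # 0.1% of innocent faces wrongly flagged
true_positive_rate = 0.80     # 80% of watch-listed faces correctly flagged

false_alerts = (crowd_size - watch_listed_present) * false_positive_rate
true_alerts = watch_listed_present * true_positive_rate

print(f"false alerts: {false_alerts:.0f}")  # ~100
print(f"true alerts:  {true_alerts:.0f}")   # 16
```

On these assumed numbers, a system that misreads only one face in a thousand still produces roughly six false alerts for every genuine match, simply because innocent faces vastly outnumber watch-listed ones.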
Impact on human rights
Senior police figures tend to justify their use of facial recognition technology by its effectiveness in crime control and prevention, while suggesting that they are mindful of human rights concerns and that their deployments of the technology are lawful and proportionate.
However, the courts have not yet tested these claims, and parliament has not debated the appropriate limits of police use of this technology.
Facial recognition technology breaches social norms of acceptable conduct in a public space. When in public, we might expect to be subject to a passing glance from others, including police officers.
But we expect to be free from sustained or intensive scrutiny, involving cross-referencing back to our social media feeds. Facial recognition technology allows police to extract such personal information from us and use this information in ways we cannot control.
The limited independent testing and research done so far on facial recognition technology indicates that numerous systems misidentify ethnic minorities and women at higher rates than the rest of the population.
South Wales Police has suggested, without publishing a detailed statistical breakdown, that its system does not suffer from these drawbacks. Despite calls for rigorous testing on the performance of facial recognition systems from the scientific community, the Metropolitan Police has not published how its system has performed relative to the gender, ethnicity or age of those subject to its use.
This creates a risk that minority groups, who are already arrested at much higher rates than white people, will be further over-policed following false positive matches.
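The testing researchers have called for is, in outline, straightforward to describe: run the system over a labelled evaluation set and report its error rates separately for each demographic group. The sketch below uses hypothetical placeholder records to show the shape of such an audit; it is not any force's actual methodology.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, was_flagged, is_on_watch_list).
# A real audit would use a large labelled benchmark; these rows are
# placeholders to show the calculation.
records = [
    ("group_a", True, False),   # an innocent person wrongly flagged
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", True, True),    # a watch-listed person correctly flagged
]

wrongly_flagged = defaultdict(int)
innocent_seen = defaultdict(int)
for group, was_flagged, on_list in records:
    if not on_list:
        innocent_seen[group] += 1
        wrongly_flagged[group] += was_flagged

# False positive rate per demographic group: the figure forces have
# been urged to publish.
for group in sorted(innocent_seen):
    print(group, wrongly_flagged[group] / innocent_seen[group])
```

Publishing such a per-group breakdown is what would allow independent scrutiny of whether a system performs worse for some communities than others.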
Need for tighter regulation
While questions over its accuracy remain, it's too early for police to be using facial recognition surveillance in live policing operations. But accuracy isn't the only issue: as the technology improves, it's important to think about how its use should be regulated.
While police deployments of facial recognition technology must comply with the Data Protection Act 2018 and the Surveillance Camera Code of Practice, these legal regimes don't provide guidelines or rules specifically regulating its use by police.
As a result, the regulatory framework gives little guidance about the threshold at which inclusion on a watch list is lawful.
In their trials, police forces have been collecting, comparing and storing data in different ways. In 2018, the UK’s Information Commissioner expressed concern about the absence of national-level coordination and a comprehensive governance framework to oversee facial recognition deployment.
Most images used to populate watch lists are gathered from police databases, often from when people are taken into custody. There is a particular risk that people with old and minor convictions, or even those who have been arrested or investigated but have no convictions at all, may find themselves stigmatised through facial recognition surveillance.
Given the impact of facial recognition technology on human rights, its use by police should be limited, focusing only on serious crimes or threats to public safety, rather than being used as pervasively as public CCTV currently is.
Inconsistent practices between police forces also point to the need for a tighter regulatory framework: one that keeps watch lists small, and sets quality standards both for the systems themselves and for how watch-list images are compiled and stored.
As some police forces have already begun to embrace facial recognition surveillance, legislators must keep pace so that human rights are respected.