Companies are using facial recognition to watch us without our permission. How on earth can we trust them?

These companies claim that our data is safe with them and deleted regularly. But can we be sure?

Janet Street-Porter
Friday 16 August 2019 18:06 BST
An independent survey found that 80 per cent of people ‘flagged’ as suspicious using facial recognition techniques by the Metropolitan Police are innocent (Getty)

This summer, thousands of families have taken their children to enjoy free entertainment in the terrific canal-side spaces behind Kings Cross station in central London.

One of the most successful urban regeneration schemes in recent years, the former goods yards and railway offices behind St Pancras and Kings Cross terminals have been transformed into a bustling public space. There are playful fountains, open-air seating, dozens of shops and restaurants, alongside offices for the art college Central Saint Martins and media companies like the Guardian Media Group and Google.

Like Broadgate, another thriving public space in the City of London, Kings Cross is privately owned, and Argent, the developers, are using extensive surveillance to monitor members of the public. Hundreds of cameras throughout the 67-acre site record visitors using sophisticated facial recognition techniques. What on earth for? Information Commissioner Elizabeth Denham has ordered an immediate investigation.

Collecting information on millions of unsuspecting members of the public is highly questionable, especially when the data is owned by a private company. Modern technology is creepy enough, with Siri trying to read our minds: offering us restaurants we don’t want to visit and slippers we will never wear, and trying (and usually failing) to anticipate which friend we’re about to call.

Our phones and computers spend a great deal of their time trying to “help”, ie monitor, us. They predict our browsing and texting, and will soon be intruding on our thinking. These apps “suggest” friends to link up with and potential dates.

Anyone naïve enough to rely on Amazon’s Alexa now knows those conversations are not necessarily private: they are being checked remotely. Humans have been paid to listen to interactions with Google’s assistant and to Microsoft’s Skype conversations (including people having sex) without users’ knowledge. Facebook has been snooping on Messenger voice calls, hiring private contractors to listen to and transcribe them, justified (allegedly) as a means to “improve service”.

Facebook, Apple and Google now say they have “paused” human review, while Amazon says it still monitors voice data but users can opt out.

The troubling question remains: what happens to all that data? If digital voice assistants have been exposed as a mechanism to snoop on us, the extensive use of facial recognition technology as we go about our daily lives is even more troubling.

Private landowners are creating data banks which could be passed to the police or third parties without our consent. That’s a world away from driving along a motorway past highly visible cameras and being informed that an average speed check is in force. The justification for that surveillance is road safety.

I wonder whether all my visits to Kings Cross over the past week have been logged and compiled into a master file. I’ve boarded trains, hailed cabs and used the underground. I bought a T-shirt and newspapers, collected tickets, waited for a delayed train, and drank a bottle of lemonade, all at different times on different days. Is that suspicious? I could be loitering, visiting the site repeatedly to plan a major act of terrorism, or I could be a regular Londoner who commutes at irregular times.

Big Brother Watch, a campaigning organisation monitoring state surveillance, reckons we should be concerned because an independent survey found that 80 per cent of people “flagged” as suspicious using facial recognition techniques by the Metropolitan Police are innocent.

The Met has been trialling the technology since 2016 in Leicester Square, at the Westfield shopping centre in Stratford, in Whitehall and at large gatherings such as protest marches. A senior officer admitted that the force needs to invest heavily in its IT systems before the results become more accurate. The current system is particularly unreliable at identifying people with dark skin, so it could be accused of racial bias. The police can’t stop and search us without “reasonable suspicion”, but what does that suspicion rest on when their sources are this flaky?

Increasingly, big developments are patrolled by private security companies, not the police, with the site’s owners holding all the data. British Land owns the Meadowhall shopping centre in Sheffield, and in 2018 allowed the police to use facial recognition there to record the data of up to 2 million visitors. Trials at the Trafford Centre in Manchester the same year were stopped after complaints to the Surveillance Camera Commissioner.

Now museums are secretly filming us too: the World Museum in Liverpool recorded visitors to its exhibition of Chinese treasures and plans further trials at other sites. Canary Wharf in London is actively considering using facial recognition.

In some places the technology isn’t necessarily considered an invasion of privacy. In China, customers at KFC can smile at a camera and their order (based on previous visits) will be delivered and their credit card debited. But the same techniques are also being used by the Chinese police to monitor demonstrations and any activity which might be deemed “harmful” to the state.

Is it OK to be secretly filmed when you are out for a meal or a drink? Almost half of restaurant operators would like to use facial recognition to “aid security” and stop customers leaving without paying. Facial recognition is being adopted at self-service checkouts by two of the big four supermarkets, and in some London bars it’s being trialled to prevent queue-jumping. Datasparq, the firm installing the bar system, claims that the data is deleted regularly, but how can we be sure?

Amazon’s Rekognition service claims to be able to identify “fear” as well as anger, disgust and sadness. If this is the future, maybe it’s best to wear a balaclava or a smiley mask when you next go shopping.
