Who is to blame for bad AI? Companies are an easy target – but we also need to look at our own habits

It's easy to understand why tech ethics advocates focus their attention on companies. To examine our own complicity, and that of our colleagues and loved ones, hits closer to home, writes Andrew Sears

Monday 15 June 2020 17:35 BST
A number of companies in the US have stopped selling facial recognition software to police departments (David McNew/AFP/Getty)

In 2018, Amazon began selling a facial recognition AI product to police departments. It didn’t take long for “Amazon Rekognition” to attract the condemnation of human rights groups and AI experts, who criticised the product’s high error rate and propensity for mistaking black Congresspeople for known criminals.

Despite this, Amazon continued selling its AI to police departments for two full years, only stopping when the killing of yet another unarmed black man – George Floyd – drew mainstream attention to Rekognition’s flaws. Amazon has committed only to a one-year moratorium on the sale of Rekognition to police departments. In recent days, Microsoft has also said it will stop selling such technology to police departments until there is more regulation in place, while IBM has said it will no longer offer its facial recognition software for “mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values”.

However, Amazon has so far declined to comment on whether it will continue to market the product to federal law enforcement agencies.
