Who is to blame for bad AI? Companies are an easy target – but we also need to look at our own habits
It's easy to understand why tech ethics advocates focus their attention on companies. Examining our own complicity, and that of our colleagues and loved ones, hits closer to home, writes Andrew Sears
In 2018, Amazon began selling a facial recognition AI product to police departments. It didn’t take long for "Amazon Rekognition" to attract the condemnation of human rights groups and AI experts, who criticised the product’s high error rate and propensity for mistaking black Congresspeople for known criminals.
Despite this, Amazon continued selling its AI to police departments for two full years, stopping only when the killing of yet another unarmed black man – George Floyd – drew mainstream attention to Rekognition's flaws. Even then, Amazon committed only to a one-year moratorium on the sale of Rekognition to police departments. In recent days, Microsoft has also said it will stop selling such technology to police departments until more regulation is in place, while IBM has said it will no longer offer its facial recognition software for "mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values".
However, Amazon has so far declined to comment on whether it will continue to market the product to federal law enforcement agencies.