Microsoft retires controversial AI that can guess your emotions
Tech giant warns that ‘new guardrails’ are required for artificial intelligence
Microsoft has announced that it will halt sales of an artificial intelligence service that can predict a person’s age, gender and even emotions.
The tech giant cited ethical concerns surrounding the facial recognition technology, which it claimed could subject people to “stereotyping, discrimination, or unfair denial of services”.
In a blog post published on Tuesday, Microsoft outlined the measures it would take to ensure its Face API is developed and used responsibly.
“To mitigate these risks, we have opted to not support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup,” wrote Sarah Bird, a product manager at Microsoft’s Azure AI.
“Detection of these attributes will no longer be available to new customers beginning 21 June, 2022, and existing customers have until 30 June, 2023, to discontinue use of these attributes before they are retired.”
Microsoft’s Face API was used by companies like Uber to verify that the driver using the app matched the account on file. However, unionised drivers in the UK called for it to be removed after it failed to recognise legitimate drivers.
The technology also raised fears about potential misuse in other settings, such as firms using it to monitor applicants during job interviews.
Despite retiring the product for customers, Microsoft will continue to use the controversial technology within at least one of its products. An app for people with visual impairments called Seeing AI will still make use of the machine vision capabilities.
Microsoft also announced that it would be making updates to its ‘Responsible AI Standard’ – an internal playbook that guides its development of AI products – in order to mitigate the “socio-technical risks” posed by the technology.
The update involved consultations with researchers, engineers, policy experts and anthropologists to help understand which safeguards can prevent discrimination.
“We recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve,” wrote Natasha Crampton, Microsoft’s chief responsible AI officer, in a separate blog post.
“We believe that industry, academia, civil society, and government need to collaborate to advance the state-of-the-art and learn from one another... Better, more equitable futures will require new guardrails for AI.”