Instagram boss says it will change algorithm to stop mistreatment of black users, alongside other updates

'We need to take a harder look at the underlying systems we’ve built, and where we need to do more to keep bias out of these decisions,' the company said.

Adam Smith
Tuesday 16 June 2020 08:51 BST
(Credit: Eric Baradat, AFP, Getty Images)


The head of Instagram, Adam Mosseri, has said the company needs to better support the black community, and is looking into how its “policies, tools, and processes impact black people”.

The Facebook-owned photo sharing platform will focus on four issues: harassment, verification, distribution, and algorithmic bias.

In a blog post, Instagram was vague about the changes it would make in these areas.

“Any work to address the inequalities Black people face has to start with the specific safety issues they experience day to day, both on and off platform”, Instagram says, adding that it will address “potential gaps” where its policies fall short.

It is also changing its account verification system to “ensure it’s as inclusive as possible,” but has given no further indication of what those changes will be.

The company is also looking into the ways its algorithm filters content, both with regard to “shadowbanning” and structural biases in its systems.

Shadowbanning, as described by Instagram in its post, is the practice of “filtering people without transparency, and limiting their reach as a result”.

The company says that it will be releasing more information about the types of content it does not recommend on its Explore tab “and other places”.

“We need to take a harder look at the underlying systems we’ve built, and where we need to do more to keep bias out of these decisions,” the company said.

Questions about algorithmic bias have plagued social media companies for years. In 2019, Instagram was criticised when it was thought to be limiting users' posts to only a small percentage of their followers.

TikTok had to apologise for algorithmically hiding posts that included the Black Lives Matter or George Floyd hashtags from view, with the company saying it had to “regain and repair [the] trust” between it and the black community.

Facebook has also recently been criticised over how it manages its algorithm, after it reportedly shuttered research that would have made the platform less divisive but would also have been “antigrowth” and required “a moral stance”.

Systemic bias in algorithms is not unique to social media platforms either. IBM said it would not continue to develop general-purpose facial recognition technology because of the ways in which it harms communities of colour.

Similarly, Amazon placed a one-year moratorium on its own Rekognition facial recognition technology, following the protests in the US over the death of George Floyd.
