Facebook dithered in curbing divisive user content in India

Leaked documents obtained by The Associated Press show that Facebook in India dithered in curbing hate speech and anti-Muslim content on its platform and lacked enough local language moderators to stop misinformation

Via AP news wire
Sunday 24 October 2021 10:45 BST

Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as its own employees cast doubt over the company's motivations and interests.

From research as recent as March of this year to company memos that date back to 2019, the internal company documents on India highlight Facebook's constant struggles in quashing abusive content on its platforms in the world's biggest democracy and the company's largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.

The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address these issues. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi's ruling Bharatiya Janata Party, the BJP, are involved.

Across the world, Facebook has become increasingly important in politics, and India is no different.

Modi has been credited with leveraging the platform to his party's advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Both Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at Facebook headquarters.

The leaked documents include a trove of internal company reports on hate speech and misinformation in India, much of it intensified by the platform's own "recommended" feature and algorithms. They also include company staffers' concerns over the mishandling of these issues and their discontent with the viral "malcontent" on the platform.

According to the documents, Facebook saw India as one of the most "at risk countries" in the world and identified both Hindi and Bengali as priorities for "automation on violating hostile speech." Yet Facebook didn't have enough local language moderators or content-flagging in place to stop misinformation that at times led to real-world violence.

In a statement to the AP, Facebook said it has "invested significantly in technology to find hate speech in various languages, including Hindi and Bengali," which has "reduced the amount of hate speech that people see by half" in 2021.

"Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,ā€ a company spokesperson said.

This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen's legal counsel. The redacted versions were obtained by a consortium of news organizations, including the AP.

In February 2019, ahead of a general election when concerns about misinformation were running high, a Facebook employee wanted to understand what a new user in the country saw on their news feed if all they did was follow pages and groups solely recommended by the platform itself.

The employee created a test user account and kept it live for three weeks, a period during which an extraordinary event shook India: a militant attack in disputed Kashmir killed over 40 Indian soldiers, bringing the country to the brink of war with rival Pakistan.

In the note, titled "An Indian Test User's Descent into a Sea of Polarizing, Nationalistic Messages," the employee, whose name is redacted, said they were "shocked" by the content flooding the news feed, which "has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore."

Seemingly innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.

The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.

One included a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag in the place of his head. The platform's "Popular Across Facebook" feature showed a slew of unverified content related to the retaliatory Indian strikes into Pakistan after the bombings, including an image of a napalm bomb from a video game clip debunked by one of Facebook's fact-check partners.

"Following this test user's News Feed, I've seen more images of dead people in the past three weeks than I've seen in my entire life total," the researcher wrote.

It sparked deep concerns over what such divisive content could lead to in the real world, where local news outlets at the time were reporting on Kashmiris being attacked in the fallout.

"Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?" the researcher asked in their conclusion.

The memo, circulated among other employees, did not answer that question. But it did expose how the platform's own algorithms or default settings played a part in spurring such malcontent. The employee noted that there were clear "blind spots," particularly in "local language content." They said they hoped these findings would start conversations on how to avoid such "integrity harms," especially for those who "differ significantly" from the typical U.S. user.

Even though the research was conducted during three weeks that weren't an average representation, the researcher acknowledged that it did show how such "unmoderated" and problematic content "could totally take over" during "a major crisis event."

The Facebook spokesperson said the test study "inspired deeper, more rigorous analysis" of its recommendation systems and "contributed to product changes to improve them."

"Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages," the spokesperson said.

Other research files on misinformation in India highlight just how massive a problem it is for the platform.

In January 2019, a month before the test user experiment, another assessment raised similar alarms about misleading content. In a presentation circulated to employees, the findings concluded that Facebook's misinformation tags weren't clear enough for users, underscoring that it needed to do more to stem hate speech and fake news. Users told researchers that "clearly labeling information would make their lives easier."

Again, it was noted that the platform didn't have enough local language fact-checkers, which meant a lot of content went unverified.

Alongside misinformation, the leaked documents reveal another problem plaguing Facebook in India: anti-Muslim propaganda, especially by Hindu hard-line groups.

India is Facebook's largest market, with over 340 million users; nearly 400 million Indians also use the company's messaging service WhatsApp. But both have been accused of being vehicles to spread hate speech and fake news against minorities.

In February 2020, these tensions came to life on Facebook when a politician from Modi's party uploaded a video on the platform in which he called on his supporters to remove mostly Muslim protesters from a road in New Delhi if the police didn't. Violent riots erupted within hours, killing 53 people. Most of them were Muslims. Only after thousands of views and shares did Facebook remove the video.

In April, misinformation targeting Muslims again went viral on its platform as the hashtag "Coronajihad" flooded news feeds, blaming the community for a surge in COVID-19 cases. The hashtag was popular on Facebook for days but was later removed by the company.

For Mohammad Abbas, a 54-year-old Muslim preacher in New Delhi, those messages were alarming.

Some video clips and posts purportedly showed Muslims spitting on authorities and hospital staff. They were quickly proven to be fake, but by then India's communal fault lines, still stressed by deadly riots a month earlier, were again split wide open.

The misinformation triggered a wave of violence, business boycotts and hate speech toward Muslims. Thousands from the community, including Abbas, were confined to institutional quarantine for weeks across the country. Some were even sent to jails, only to be later exonerated by courts.

"People shared fake videos on Facebook claiming Muslims spread the virus. What started as lies on Facebook became truth for millions of people," Abbas said.

Criticisms of Facebook's handling of such content were amplified in August of last year when The Wall Street Journal published a series of stories detailing how the company had internally debated whether to classify a Hindu hard-line lawmaker close to Modi's party as a "dangerous individual," a classification that would ban him from the platform, after a series of anti-Muslim posts from his account.

The documents reveal the leadership dithered on the decision, prompting concern among some employees; one wrote that Facebook was only designating non-Hindu extremist organizations as "dangerous."

The documents also show how the company's South Asia policy head herself had shared what many felt were Islamophobic posts on her personal Facebook profile. At the time, she had also argued that classifying the politician as dangerous would hurt Facebook's prospects in India.

The author of a December 2020 internal document on the influence of powerful political actors on Facebook policy decisions notes that "Facebook routinely makes exceptions for powerful actors when enforcing content policy." The document also cites a former Facebook chief security officer saying that outside of the U.S., "local policy heads are generally pulled from the ruling political party and are rarely drawn from disadvantaged ethnic groups, religious creeds or castes," which "naturally bends decision-making towards the powerful."

Months later, the India official quit Facebook. The company also removed the politician from the platform, but documents show many company employees felt the platform had mishandled the situation, accusing it of selective bias to avoid being in the crosshairs of the Indian government.

"Several Muslim colleagues have been deeply disturbed/hurt by some of the language used in posts from the Indian policy leadership on their personal FB profile," an employee wrote.

Another wrote that "barbarism" was being allowed to "flourish on our network."

It's a problem that has continued for Facebook, according to the leaked files.

As recently as March this year, the company was internally debating whether it could control the "fear mongering, anti-Muslim narratives" pushed on its platform by Rashtriya Swayamsevak Sangh, a far-right Hindu nationalist group of which Modi is a member.

In one document titled "Lotus Mahal," the company noted that members with links to the BJP had created multiple Facebook accounts to amplify anti-Muslim content, ranging from "calls to oust Muslim populations from India" to "Love Jihad," an unproven conspiracy theory in which Hindu hard-liners accuse Muslim men of using interfaith marriages to coerce Hindu women into changing their religion.

The research found that much of this content was "never flagged or actioned" since Facebook lacked "classifiers" and "moderators" in Hindi and Bengali. Facebook said it added hate speech classifiers in Hindi starting in 2018 and introduced Bengali in 2020.

The employees also wrote that Facebook hadn't yet "put forth a nomination for designation of this group given political sensitivities."

The company said its designation process includes a review of each case by relevant teams across the company and is agnostic to region, ideology or religion, focusing instead on indicators of violence and hate. It did not, however, reveal whether the Hindu nationalist group had since been designated as "dangerous."

___

Associated Press writer Sam McNeil in Beijing contributed to this report.

___

See full coverage of the "Facebook Papers" here: https://apnews.com/hub/the-facebook-papers
