Hate speech in Myanmar continues to thrive on Facebook

Years after coming under scrutiny for contributing to ethnic and religious violence in Myanmar, Facebook still has problems detecting and moderating hate speech and misinformation on its platform in the Southeast Asian nation, internal documents viewed by The Associated Press show

Via AP news wire
Thursday 18 November 2021 06:06 GMT


Years after coming under scrutiny for contributing to ethnic and religious violence in Myanmar, Facebook still has problems detecting and moderating hate speech and misinformation on its platform in the Southeast Asian nation, internal documents viewed by The Associated Press show.

Three years ago, the company commissioned a report that found Facebook was used to “foment division and incite offline violence” in the country. It pledged to do better and developed several tools and policies to deal with hate speech.

But the breaches have persisted -- and even been exploited by hostile actors -- since the Feb. 1 military takeover this year that resulted in gruesome human rights abuses across the country.

Scrolling through Facebook today, it’s not hard to find posts threatening murder and rape in Myanmar.

One 2 1/2-minute video posted on Oct. 24, showing a supporter of the military calling for violence against opposition groups, has garnered over 56,000 views.

“So starting from now, we are the god of death for all (of them),” the man says in Burmese while looking into the camera. “Come tomorrow and let’s see if you are real men or gays.”

One account posts the home address of a military defector and a photo of his wife. Another post from Oct. 29 includes a photo of soldiers leading bound and blindfolded men down a dirt path. The Burmese caption reads, “Don’t catch them alive.”

Despite the ongoing issues, Facebook saw its operations in Myanmar as both a model to export around the world and an evolving and caustic case. Documents reviewed by AP show that Myanmar became a testing ground for new content moderation technology, with the social media giant trialing ways to automate the detection of hate speech and misinformation with varying levels of success.

Facebook’s internal discussions on Myanmar were revealed in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions received by Congress were obtained by a consortium of news organizations, including The Associated Press.

Facebook has had a shorter but more volatile history in Myanmar than in most countries. After decades of censorship under military rule, Myanmar was connected to the internet in 2000. Shortly afterward, Facebook paired with telecom providers in the country, allowing customers to use the platform without needing to pay for the data, which was still expensive at the time. Use of the platform exploded. For many in Myanmar, Facebook became the internet itself.

Htaike Htaike Aung, a Myanmar internet policy advocate, said it also became “a hotbed for extremism” around 2013, coinciding with religious riots across Myanmar between Buddhists and Muslims. It’s unclear how much, if any, content moderation was happening at the time.

Htaike Htaike Aung said she met with Facebook that year and laid out issues in the country, including how local organizations were seeing exponential amounts of hate speech on the platform and how preventive mechanisms, such as reporting posts, didn’t work in the Myanmar context.

One example she cited was a photo of a pile of bamboo sticks that was posted with a caption reading, “Let us be prepared because there’s going to be a riot that is going to happen within the Muslim community.”

Htaike Htaike Aung said the photo was reported to Facebook, but the company didn’t take it down because it didn’t violate any of the company’s community standards.

“Which is ridiculous because it was actually calling for violence. But Facebook didn’t see it that way,” she said.

Years later, the lack of moderation caught the attention of the international community. In March 2018, United Nations human rights experts investigating attacks against Myanmar’s Muslim Rohingya minority said Facebook had played a role in spreading hate speech.

When asked about Myanmar a month later during a U.S. Senate hearing, CEO Mark Zuckerberg replied that Facebook planned to hire “dozens” of Burmese speakers to moderate content, work with civil society groups to identify hate figures and develop new technologies to combat hate speech.

“Hate speech is very language specific. It’s hard to do it without people who speak the local language and we need to ramp up our effort there dramatically,” Zuckerberg said.

Internal Facebook documents show that while the company did step up efforts to combat hate speech, the tools and strategies to do so never came to full fruition, and individuals within the company repeatedly sounded the alarm. In one May 2020 document, an employee said a hate speech text classifier that was available wasn’t being used or maintained. Another document from a month later said there were “significant gaps” in misinformation detection in Myanmar.

“Facebook took symbolic actions I think were designed to mollify policymakers that something was being done and didn’t need to look much deeper,” said Ronan Lee, a visiting scholar at Queen Mary University of London’s International State Crime Initiative.

In an emailed statement to the AP, Rafael Frankel, Facebook’s director of policy for APAC Emerging Countries, said the platform “has built a dedicated team of over 100 Burmese speakers,” but declined to state exactly how many were employed. Online marketing company NapoleonCat estimates there are about 28.7 million Facebook users in Myanmar.

During her testimony to the European Union Parliament on Nov. 8, Haugen, the whistleblower, criticized Facebook for a lack of investment in third-party fact-checking and for relying instead on automatic systems to detect harmful content.

“If you focus on these automatic systems, they will not work for the most ethnically diverse places in the world, with linguistically diverse places in the world, which are often the most fragile,” she said while referring to Myanmar.

After Zuckerberg’s 2018 congressional testimony, Facebook developed digital tools to combat hate speech and misinformation and also created a new internal framework to manage crises like Myanmar around the world.

Facebook crafted a list of “at-risk countries” with ranked tiers for a “critical countries team” to focus its energy on, and also rated languages needing more content moderation. Myanmar was listed as a “Tier 1” at-risk country, with Burmese deemed a “priority language” alongside Ethiopian languages, Bengali, Arabic and Urdu.

Facebook engineers taught Burmese slang words for “Muslims” and “Rohingya” to its automated systems. It also trained systems to detect “coordinated inauthentic behavior” such as a single person posting from multiple accounts, or coordination between different accounts to post the same content.
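The documents do not spell out how that detection logic works. As a purely illustrative sketch of the simplest signal described above -- the same content posted verbatim from many different accounts -- the snippet below groups posts by a hash of their text and flags clusters spread across several accounts. The function name and threshold are hypothetical and are not drawn from Facebook's systems.

```python
# Illustrative sketch only: flag clusters of identical posts appearing
# across many distinct accounts, one crude proxy for coordinated posting.
# Names and thresholds are hypothetical, not Facebook's actual parameters.
import hashlib
from collections import defaultdict

def find_coordinated_clusters(posts, min_accounts=5):
    """posts: iterable of (account_id, text) pairs."""
    clusters = defaultdict(set)
    for account_id, text in posts:
        # Normalize and hash the text so identical copies group together.
        digest = hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()
        clusters[digest].add(account_id)
    # Keep only content shared verbatim by many different accounts.
    return {h: accounts for h, accounts in clusters.items()
            if len(accounts) >= min_accounts}
```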

The company also tried “repeat offender demotion,” which lessens the impact of posts from users who frequently violate guidelines. In a test in two of the world’s most volatile countries, demotion worked well in Ethiopia, but poorly in Myanmar -- a difference that flummoxed engineers, according to a 2020 report included in the documents.

“We aren’t sure why … but this information provides a starting point for further analysis and user research,” the report said. Facebook declined to comment on the record about whether the problem had been fixed a year after its detection, or about the success of the two tools in Myanmar.
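The report does not describe how the demotion was computed. As a rough, hypothetical sketch of the general idea -- scaling a post's ranking score down as its author accumulates policy strikes -- the decay rate and floor below are invented for illustration only.

```python
# Hypothetical sketch of "repeat offender demotion": scale a post's ranking
# score down as its author accumulates policy strikes. The decay rate and
# floor are invented for illustration, not Facebook's actual parameters.
def demoted_score(base_score: float, violation_count: int,
                  decay: float = 0.7, floor: float = 0.05) -> float:
    multiplier = max(decay ** violation_count, floor)
    return base_score * multiplier

# Example: a post from an author with three prior violations.
print(demoted_score(1.0, 3))  # 0.343
```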
