Concerns raised after social media giants approve anti-LGBT+ adverts

Global Witness submitted ads that used extreme and violent language to three social media companies for approval.

By Gráinne Ní Aodha
Thursday 23 February 2023 10:38 GMT
A view of Facebook parent company Meta’s headquarters in Dublin. (PA Wire)

Concerns have been raised about the processes used by social media giants to block advertisements containing hateful language towards the LGBT+ community.

NGO Global Witness submitted ads that used extreme and violent language to three social media companies for approval.

Ten ads were submitted to Facebook, TikTok and Google as part of the group’s investigation.

Both YouTube, which is owned by Google, and TikTok approved all ten ads while Facebook rejected two.

Global Witness removed all the ads after they had been approved and before they were published.

Social media companies suggested that processes to screen content are constantly evolving and there are multiple steps to monitor and remove online content.

It is also possible for ads to be removed after they go live, as social media companies have reporting mechanisms that can trigger further scrutiny.

Hate speech has no place on our platforms, and these types of ads should not be approved

Meta spokesperson

A spokesperson for Facebook owners Meta said: “Hate speech has no place on our platforms, and these types of ads should not be approved.

“That said, these ads never went live, and our ads review process has several layers of analysis and detection, both before and after an ad goes live.

“We continue to improve how we detect violating ads and behaviour and make changes based on trends in the ads ecosystem.”

A spokesperson for TikTok said: “Hate has no place on TikTok. Our advertising policies, alongside our community guidelines, prohibit ad content that contains hate speech or hateful behaviour.

“Ad content passes through multiple levels of verification before receiving approval and we remove violative content. We regularly review and improve our enforcement strategies.”

The concern comes as Minister for the Media Catherine Martin signed ministerial orders on Wednesday to establish media regulator Coimisiún na Meán, which it is hoped will reduce harmful content online.

The Department of Tourism, Culture, Arts, Gaeltacht, Sport and Media said in a statement to the PA news agency that the establishment of Coimisiún na Meán and the appointment of an online safety commissioner will mean there will be more pressure on social media companies to reduce hate content.

Coimisiún na Meán will have a range of powers to monitor and enforce compliance with online safety codes

Government spokesperson

The online safety commissioner, along with the other commissioners and the chair of the commission, is expected to be formally appointed on March 15, when the Coimisiún is due to be established.

“Coimisiún na Meán will have a range of powers to monitor and enforce compliance with online safety codes,” the department said.

“For example, if a service is suspected to be non-compliant, An Coimisiún can appoint authorised officers to investigate and this may lead to the imposition of a financial sanction of up to 20 million euro or 10% of turnover.”

The Online Safety and Media Regulation (OSMR) Act provides the legal basis for the online safety commissioner to establish individual complaints schemes for online platforms.

This would allow individuals to submit complaints about the availability of suspected harmful online content.

The department said “it is not envisaged” that an individual complaints scheme would be established until systemic regulation, through online safety codes, has been allowed to “bed-in”.

No timeline has been given on how long this will take.

“The role of the commissioner will be to develop and enforce a regulatory framework for online safety for certain online services which host user-generated content,” it said.

“A key feature of the regulatory framework for online safety is the power of the online safety commissioner to create and apply obligations through binding online safety codes.

“These codes will require designated online services to take measures to tackle the availability of defined categories of harmful online content and can regulate commercial communications (advertising, sponsorship) made available on those services.

“These categories of harmful online content include online content linked to 42 existing offences, including those under the Harassment, Harmful Communications and Related Offences Act 2020 and the Prohibition of Incitement to Hatred Act 1989.”
