Elon Musk disbands Twitter’s Trust and Safety advisory group formed to address hate speech

‘Our work to make Twitter a safe, informative place will be moving faster and more aggressively’

Vishwam Sankaran
Tuesday 13 December 2022 04:39 GMT

Elon Musk has dissolved a key advisory group at Twitter consisting of about 100 independent organisations that the company formed in 2016 to address hate speech, child abuse, and other harmful content on the platform.

Twitter’s Trust and Safety Council was expected to convene on Monday but was instead sent an email, shortly before the scheduled meeting, informing it that the council had been disbanded, the Associated Press reported.

In the email, Twitter reportedly said it was “reevaluating how best to bring external insights” adding that the council is “not the best structure to do this”.

“Our work to make Twitter a safe, informative place will be moving faster and more aggressively than ever before and we will continue to welcome your ideas going forward about how to achieve this goal,” the email, signed “Twitter,” reportedly noted.

The council, formed in 2016, consisted of more than 100 independent civil society, human rights, and other organisations that helped Twitter tackle harmful content on the microblogging platform, such as hate speech, suicide, self-harm, and child exploitation.

After buying Twitter in October for $44bn and taking over as the company’s new boss, Mr Musk said he would be forming a content moderation council with “widely diverse viewpoints”, adding that major decisions and account reinstatements would not happen before this council convenes.

However, the multibillionaire changed his mind, reinstating the accounts of several people who were previously banned from the platform, including that of former US president Donald Trump.

Several research groups have pointed out that hate speech on Twitter has surged since Mr Musk’s takeover of the platform.

Earlier this month, the Center for Countering Digital Hate (CCDH) noted that “Mr Musk’s Twitter has become a safe space for hate”, observing that racial slurs and antisemitic and misogynistic tweets have increased since he took over the company.

The Network Contagion Research Institute (NCRI) also found earlier that the use of the N-word increased by nearly 500 per cent in the 12 hours immediately after Mr Musk’s deal to buy Twitter was finalised.

Twitter’s new boss has also made several sweeping changes to the platform’s content moderation approach.

Following layoffs, in which Twitter slashed its entire workforce from 7,500 to roughly 2,000, including its entire human rights and machine learning ethics teams, as well as outsourced contract workers working on the platform’s safety, the company said it would rely more on artificial intelligence to moderate its content.

The team responsible for removing child sexual abuse content from Twitter has been reportedly cut in half since Mr Musk’s takeover, even though he has claimed that removing child exploitation from the platform is his “priority 1”.

Another report, by Wired, noted that only one person remained on a “key team dedicated to removing child sexual abuse content from the site” in the entire Asia Pacific region, one of Twitter’s busiest markets.

Three members of the Trust and Safety Council – Eirliani Abdul Rahman, Anne Collier, and Lesley Podesta – resigned last week, claiming that “contrary to claims by Elon Musk, the safety and wellbeing of Twitter's users are on the decline.”

“The establishment of the Council represented Twitter's commitment to move away from a US-centric approach to user safety, stronger collaboration across regions, and the importance of having deeply experienced people on the safety team,” they said in a joint statement.

“That last commitment is no longer evident, given Twitter's recent statement that it will rely more heavily on automated content moderation,” the trio said.

The members added that Twitter’s new approach, relying on algorithmic systems, can only protect users from “ever-evolving abuse and hate speech” once significant detectable patterns have emerged.

“We fear a two-tiered Twitter: one for those who can pay and reap the benefits, and another one for those who cannot. This, we fear, will take away the credibility of the system and the beauty of Twitter, the platform where anyone could be heard, regardless of the number of their followers,” they said.
