Twitter ‘flags rockets as intimate content’ due to AI use for image recognition
‘You can imagine how a rocket might be misidentified,’ former Twitter employee says
Twitter is reportedly confusing photos of rockets for “intimate” content due to the platform’s increased use of machine learning tools for image recognition.
Several accounts, including those of journalists covering space news, were suspended from the social media platform earlier this week due to the confusion, Quartz reported, citing a former Twitter employee.
Following a recent SpaceX launch, many accounts that shared video of the rocket returning to Earth were booted off Twitter, including those of space journalist Michael Baylor and the Spaceflight Now blog.
The microblogging platform flagged Spaceflight Now’s tweet as “violating our rules against posting or sharing privately produced/distributed intimate media of someone without their express consent.”
“Our account has been locked by Twitter for violating unspecified rules while covering a [SpaceX] launch,” Spaceflight Now editor Stephen Clark tweeted.
The suspended accounts appear to have been caught by Twitter’s automated content moderation system, the use of which predates Elon Musk’s takeover of the company.
The BBC reported in November last year that an Oxfordshire astronomer’s account was suspended for three months after sharing a video of a meteor that the platform’s automated moderation tool flagged as “intimate content”.
“You can imagine how a rocket might be misidentified,” the former employee said.
“Seems like our image recognition needs some work!” Mr Musk responded on Twitter to one of the suspended accounts.
Since the Tesla and SpaceX chief’s takeover of Twitter, the platform’s approach to content moderation has changed significantly.
Last month, Twitter said it would rely more on artificial intelligence to moderate content rather than on staff conducting manual checks, even as hate speech has reportedly surged on the site.
Ella Irwin, the company’s vice president of Trust and Safety Product, told Reuters in December that the platform was doing away with manual reviews.
“The biggest thing that’s changed is the team is fully empowered to move fast and be as aggressive as possible,” Ms Irwin said.
Twitter’s human rights and machine learning ethics teams, as well as outsourced contract workers handling safety concerns, were reduced to a handful of people or eliminated entirely following layoffs in November that slashed the company’s workforce from 7,500 to roughly 2,000.
A key team at the company dedicated to removing child sexual abuse material across Japan and the Asia-Pacific region was also left with only one person after the layoffs, reported Wired magazine.
“If Twitter wanted to decrease reliance on human moderators while not resulting in a flood of sensitive content, an obvious way to do that is lowering the precision thresholds of machine learning models responsible for detecting sensitive content,” the former employee told Quartz.
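The tradeoff the former employee describes can be illustrated with a toy sketch. This is not Twitter’s actual system; the scores and labels below are hypothetical model confidences, invented purely to show how lowering a decision threshold catches more content at the cost of more false positives (such as rocket photos):

```python
# Illustrative sketch only -- hypothetical classifier confidences that an
# image is "sensitive", paired with whether it truly is (True) or not (False).
scores_and_labels = [
    (0.95, True),   # clearly sensitive, high confidence
    (0.80, True),
    (0.65, False),  # e.g. a rocket photo scored as borderline
    (0.55, False),  # e.g. a meteor video scored as borderline
    (0.20, False),
]

def flagged(threshold):
    """Return (total_flagged, false_positives) at a given threshold."""
    hits = [(score, is_sensitive) for score, is_sensitive in scores_and_labels
            if score >= threshold]
    false_positives = sum(1 for _, is_sensitive in hits if not is_sensitive)
    return len(hits), false_positives

# High threshold: only confident detections are flagged, none mistaken.
print(flagged(0.75))  # (2, 0)

# Lowered threshold: more content is caught, but innocent images too.
print(flagged(0.50))  # (4, 2)
```

Under this toy model, dropping the threshold from 0.75 to 0.50 doubles the amount of flagged content while introducing two false positives, which is consistent with the pattern of suspensions the article describes.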