Porn on Twitter: network builds robots to find and eradicate offensive pictures
The AI systems are likely to spread to other offensive content, and could save humans from having to look at disturbing images
Twitter has built highly intelligent robots that can recognise porn, in an attempt to stop it spreading on the network.
The site has long been said to have a porn problem, one that reportedly scares off marketers worried that their ads will appear next to content that is not safe for work. But Twitter is now putting to work technology it acquired when it bought a startup last year, which can spot NSFW images and other offensive media and automatically hide them in people's feeds.
The porn-spotting robots are part of a broader move towards artificial intelligence at Twitter. But in the short term they could cut out the huge amount of offensive content that appears in some parts of the site.
When the technology is tuned to filter out 99 per cent of NSFW content on the feed, it wrongly flags innocent images only 7 per cent of the time, according to a report in Wired. Identifying images of porn is particularly difficult, Wired notes, since entirely innocent pictures of human flesh or breastfeeding mothers regularly appear on the site and the network must avoid filtering those out.
Neural networks are built to learn in a way that loosely resembles how humans do. Instead of having to be told what porn looks like, the systems can simply be fed huge amounts of it and pick out its identifying features, which they can then use to spot similar media.
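As a rough illustration of the approach the article describes, the sketch below shows how a small neural network might be trained to label images as NSFW or safe. It is a minimal example in PyTorch, not Twitter's actual system: the network shape, the 64x64 input size and the dummy data are all assumptions made for demonstration.

```python
# Minimal sketch (not Twitter's system) of training a binary image
# classifier of the kind described above, using PyTorch.
import torch
import torch.nn as nn

class NSFWClassifier(nn.Module):
    """Small convolutional network mapping an RGB image to a score
    indicating how likely it is to contain NSFW content."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),  # assumes 64x64 input images
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = NSFWClassifier()
loss_fn = nn.BCEWithLogitsLoss()  # binary label: NSFW (1) or safe (0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for a labelled training set of real images.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

# One training step: the network adjusts its own feature detectors
# from labelled examples, rather than following hand-written rules.
logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At serving time the score is compared against a tunable threshold,
# which trades missed porn against wrongly hidden innocent images.
probability = torch.sigmoid(model(images[:1]))
print(probability.item())
```

The threshold in the final step is where the trade-off Wired reports comes from: set it low enough to catch 99 per cent of NSFW images and some innocent pictures will inevitably be flagged along with them.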
The same kind of artificial intelligence powers other, more innocuous image-spotting capabilities. Google's Photos app, for instance, can spot certain characteristics in images and sort them in special ways, and the company has created trippy pictures by setting those networks loose on ordinary photographs.
The plan is part of a move by the social network to develop what it calls Twitter Cortex, which will use artificial intelligence to automatically analyse the hundreds of billions of tweets that are sent.
The Twitter Cortex technology will eventually be used to target ads more effectively. It will be able to read a person's whole Twitter feed, for instance, building up a complete picture of what they like and then targeting ads much more specifically.
The systems focused on NSFW content could also save human employees from having to look through photos to find objectionable ones, a job that involves viewing not just suspected porn but also violent and offensive images, and one that can do lasting damage to those required to do it. That work represents a big economic cost to the companies involved, as well as a huge emotional toll on the people doing it, many of whom work in countries such as the Philippines.