Porn on Twitter: network builds robots to find and eradicate offensive pictures

The AI systems are likely to spread to other offensive content, and could save humans from having to look at disturbing images

Andrew Griffin
Thursday 09 July 2015 11:18 BST
An illustration picture shows the Twitter logo reflected in the eye of a woman in Berlin, November 7, 2013 (Reuters)

Twitter has built highly intelligent robots that can recognise porn, in an attempt to stop it spreading on the network.

The site has long been said to have a porn problem – one that has reportedly scared off marketers afraid their ads will appear next to content that is not safe for work. But it is now putting to work technology it acquired when it bought a startup last year, which can spot NSFW images and other offensive media and help automatically hide them in people’s feeds.

The porn-spotting robots are part of a broader move towards artificial intelligence at Twitter. But in the short term they could help cut out the huge amount of offensive content that appears in some parts of the site.

If the technology is set to filter out 99 per cent of NSFW content on the feed, it only gets it wrong 7 per cent of the time, according to a report in Wired. Identifying images of porn is particularly difficult, Wired notes, since entirely innocent pictures of human flesh or breastfeeding mothers regularly appear on the site and the network must avoid filtering those out.
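For illustration, the trade-off Wired describes comes down to where a confidence threshold is set. The rough sketch below uses entirely invented scores – nothing about Twitter's actual classifier is public beyond the figures reported – to show how picking a threshold that catches 99 per cent of NSFW images also fixes how often innocent ones get wrongly filtered.

```python
# Illustrative sketch only: how a confidence threshold trades detection rate
# against false positives. All scores here are made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical classifier scores (probability an image is NSFW)
nsfw_scores = rng.beta(8, 2, size=10_000)    # scores for genuinely NSFW images
safe_scores = rng.beta(2, 8, size=100_000)   # scores for innocent images

# Choose the threshold that still catches 99 per cent of NSFW images...
threshold = np.quantile(nsfw_scores, 0.01)

# ...then see how often innocent images would be wrongly filtered at that setting.
false_positive_rate = np.mean(safe_scores >= threshold)

print(f"threshold = {threshold:.3f}")
print(f"NSFW images caught: {np.mean(nsfw_scores >= threshold):.1%}")
print(f"innocent images wrongly filtered: {false_positive_rate:.1%}")
```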

Neural networks are built to learn like humans. So instead of having to be told what porn looks like, the robots can simply be fed huge amounts of it and pick out its identifying features – which they can then use to spot similar media.
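A minimal sketch of that idea, in PyTorch, is below: a small convolutional network that learns to separate "safe" from "NSFW" images purely from labelled examples. This is not Twitter's system; the folder layout, image size and network shape are assumptions chosen to keep the example short.

```python
# Illustrative sketch: a tiny image classifier trained from labelled examples.
# Assumes a directory "images/" with two sub-folders, e.g. images/safe and images/nsfw.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

train_data = datasets.ImageFolder("images", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),   # two classes: safe / NSFW
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The network is never told what porn "looks like"; it only sees labelled
# examples and adjusts its weights to tell the two classes apart.
for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```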

The same artificial intelligence networks power other, less NSFW image-spotting capabilities. Google’s Photos app, for instance, can spot certain characteristics in images and filter them in special ways – and the company has created trippy pictures by setting those robots loose on other pictures.

The plan is part of a move by the social network to develop what it calls Twitter Cortex, which will use artificial intelligence to automatically analyse the hundreds of billions of tweets that are sent.

The Twitter Cortex technology will eventually be used to target ads more effectively. It will be able to read a person’s whole Twitter feed, for instance, building up a complete picture of what they like and then targeting ads much more specifically.

The systems focused on NSFW content could also save human employees from having to look through photos to find objectionable ones – a job that requires looking at not just suspected porn but violent and offensive images, and can do lasting damage to those who are required to do it. That work represents a big economic cost to companies, as well as a huge emotional toll on those doing it – many of whom work in countries like the Philippines.
