YouTube hiring thousands of staff to stop disturbing videos aimed at children
The company will use technology developed to detect terrorist content to try to keep children safe
YouTube says it will hire more than 10,000 people in part to address a disturbing trend among its videos.
The new moderators will try to stop the spread of bizarre, potentially damaging videos across its site. In recent weeks, the site has been increasingly criticised for hosting the posts, which appear to be aimed at children but in fact show graphic, extremist and violent content.
The videos often pose as containing footage from children's TV shows, concentrate on well-known characters, or actively claim to be showing material aimed at kids. But when viewers click through, they find content that might not even be suitable for adults – such as Peppa Pig swinging a chainsaw, or children being forced to pretend to be sick.
Many of the videos appear to be generated by bots that pick out heavily searched terms and churn out new footage, much of which turns out to include extreme violence or other disturbing content. Still others are made by people who have been accused of abusing their children on video for clicks.
The site says it is proud of its success in developing software that can identify extremist videos and those linked to terrorism. Now it will use that same technology to search out other problem videos, like those that target children.
Once it finds those videos, it will cut off their uploaders' advertising revenue and may remove the videos from the service entirely.
The extra moderators will help identify such videos when they are posted. Their decisions will then be fed into machine-learning algorithms so that, over time, the systems can pick out the videos on their own.
The goal is to bring the total number of people across Google working to address content that might violate its policies to over 10,000 in 2018, YouTube CEO Susan Wojcicki said in one of a pair of blog posts Monday.
"We need an approach that does a better job determining which channels and videos should be eligible for advertising," she said. "We've heard loud and clear from creators that we have to be more accurate when it comes to reviewing content, so we don't demonetize videos by mistake."
In addition, Wojcicki said the company would take "aggressive action on comments, launching new comment moderation tools and in some cases shutting down comments altogether."
The moves come as advertisers, regulators and advocacy groups express ongoing concern over whether YouTube's policing of its service is sufficient.
YouTube is reviewing its advertising offerings as part of its response, and it hinted that its next step could be to further tighten the requirements creators must meet to share in ad revenue.
YouTube this year updated its recommendation feature to spotlight videos users are likely to find the most gratifying, brushing aside concerns that such an approach can trap people in bubbles of misinformation and like-minded opinions.
Additional reporting by Reuters