YouTube will use artificial intelligence to decide if videos are safe for kids
The company has increased use of artificial intelligence during the coronavirus pandemic
YouTube will use artificial intelligence to automatically age-restrict videos that are inappropriate for children.
The video hosting site currently uses human reviewers to flag videos that it believes should not be watched by viewers under 18 years old, but will soon be using machine learning to make that decision.
“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions”, it wrote in a blog post.
“Uploaders can appeal the decision if they believe it was incorrectly applied,” it continued.
YouTube said that it does not expect these changes to affect revenue for creators who monetise their videos.
Many videos that would be picked up by the new system already violate its advertiser-friendly guidelines, and as such already run limited or no adverts.
YouTube has increased its use of artificial intelligence to detect harmful content in videos as a result of the coronavirus pandemic.
The company removed more videos in the second quarter of 2020 than it ever had before.
Because the site could not rely on human moderators, it increased its use of automated filters to take down videos that might violate its policies.
YouTube acknowledged that this automated system is not necessarily more accurate, saying the company “accepted a lower level of accuracy to make sure that we were removing as many pieces of violative content as possible”.
Other social media companies are also relying on artificial intelligence to keep their platforms safe, but are running into issues with users attempting to subvert their systems.