December 08, 2017
All online platforms with user-generated content struggle with how best to moderate that content. The stakes are especially high when the platform is aimed exclusively at young children.
This is something YouTube, whose parent company Google is part of ADL’s Problem-Solving Lab, learned in a particularly disturbing way: young viewers of its popular YouTube Kids app were suddenly exposed to videos of their favorite characters suffering grisly deaths, torturing each other, and being depicted in explicitly sexualized ways.
According to the New York Times article that helped expose these malicious videos and holes in the YouTube Kids filter, the videos “are another example of the potential for abuse on digital media platforms that rely on computer algorithms, rather than humans, to police the content that appears in front of people — in this case, very young people.”
This revelation came while YouTube was already dealing with another setback: several global brands had pulled their advertising after reports that their ads were appearing alongside “videos of scantily dressed children” accompanied by scores of obscene comments that had slipped past YouTube’s filters.
On December 4, YouTube announced a major step toward doing something about the problem. On its official blog, YouTube’s CEO, Susan Wojcicki, unveiled a plan to bring the number of “people across Google working to address content that might violate our policies to over 10,000 in 2018.” Wojcicki also focused on “violent extremist” content, noting that YouTube has used a combination of machine learning and human review to remove over 150,000 videos for violent extremism since June.
Wojcicki reports that “[m]achine learning is helping our human reviewers remove nearly five times as many videos than they were previously.” Because of the positive results YouTube has observed from machine learning, the technique is now being applied to other areas, including child safety and hate speech.
ADL has long partnered with YouTube and other online platforms to help devise strategies for combating online hate and extremism. For a massive video platform like YouTube, deploying cutting-edge artificial intelligence and machine learning holds great potential, and YouTube is reporting what appear to be positive results. But to ensure that parents can trust that an app like YouTube Kids is safe, and for platforms generally to excise the hate and racism that have made so many of them inhospitable, trained human reviewers must play a significant role in content moderation.
In the wake of Wojcicki’s announcement, it will be worth watching the steps YouTube takes toward finding the right balance.