AI Case Study
Google YouTube identifies 98% of the videos it removes for promoting extremism using machine learning
YouTube is using machine learning to automatically detect content that violates its terms of service. YouTube's goal is to minimise how many views violating videos receive before they are removed, which automated flagging helps with. Machine learning has identified 98% of the videos removed for violent extremism, and more than 50% of them have been taken down before receiving 10 views.
Internet Services | Consumer
From TechCrunch: YouTube's "anti-abuse machine learning algorithm, which it relies on to monitor and handle potential violations at scale, is 'paying off across high-risk, low-volume areas (like violent extremism) and in high-volume areas (like spam).'" YouTube says: "Once potentially problematic content is flagged by our automated systems, human review of that content verifies that the content does indeed violate our policies and allows the content to be used to train our machines for better coverage in the future."
YouTube reportedly deleted 8.2 million videos in Q4 of 2017. According to TechCrunch, "6.7 million were automatically flagged by its anti-abuse algorithms first. At the beginning of 2017, 8% of videos removed for violent extremist content were taken down before clocking 10 views. After YouTube started using its machine-learning algorithms in June 2017, however, it says that percentage increased to more than 50%." As YouTube put it: "As of December 2017, 98% of the videos we removed for violent extremism were identified by our machine learning algorithms."
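Taken together, the figures above imply that roughly four in five removals were machine-flagged before any human saw them. A minimal calculation, using only the numbers quoted in this case study:

```python
# Share of removed videos that were flagged automatically first,
# using the Q4 2017 figures reported by TechCrunch.
removed_total = 8_200_000   # videos YouTube reportedly deleted in Q4 2017
auto_flagged = 6_700_000    # of those, first flagged by anti-abuse algorithms

share = auto_flagged / removed_total
print(f"{share:.0%} of removals were auto-flagged first")  # → 82%
```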
After the machine learning algorithms identify videos as potentially problematic, they are "then escalated to human reviewers, who look at nuance and apply their judgment to identify if the content is intending to glorify violence or is just documenting it." (Financial Times)
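The workflow described above (automated flagging, human verification, and feeding verified labels back into training) can be sketched as a simple loop. This is an illustrative sketch only; the class names, the score threshold, and the in-memory queues are assumptions, not YouTube's actual system:

```python
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    score: float  # hypothetical model score: estimated probability of a policy violation


class ModerationPipeline:
    """Illustrative flag -> human review -> retrain loop (not YouTube's real code)."""

    def __init__(self, threshold: float = 0.8):  # threshold is an assumed value
        self.threshold = threshold
        self.review_queue = []       # videos awaiting human review
        self.training_examples = []  # (video_id, violates) pairs for the next model

    def auto_flag(self, video: Video) -> bool:
        # Step 1: the automated system flags potentially problematic content.
        if video.score >= self.threshold:
            self.review_queue.append(video)
            return True
        return False

    def human_review(self, video: Video, violates: bool) -> None:
        # Step 2: a human reviewer applies judgment (e.g. glorifying violence
        # vs. documenting it); the verified label becomes a training example
        # so future model iterations get better coverage.
        self.review_queue.remove(video)
        self.training_examples.append((video.video_id, violates))


# Usage: one video passes through the full loop.
pipe = ModerationPipeline()
v = Video("abc123", score=0.93)
if pipe.auto_flag(v):
    pipe.human_review(v, violates=True)
print(pipe.training_examples)  # [('abc123', True)]
```

The key design point the quotes emphasise is the feedback arrow: human decisions are not just enforcement actions, they are also labelled data for retraining.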
Legal And Compliance
According to Wired: "YouTube desperately needs the artificial intelligence tools that... MTurk workers train. The platform has failed repeatedly over the last several months to police itself. Since the new year alone, it has had to confront one of its biggest stars for uploading a video featuring a suicide victim’s body, faced criticism for allowing a conspiracy theory about a Parkland shooting victim to trend on the platform, and failed to ban a white supremacist group believed to be connected to five murders until coming under public pressure."
According to YouTube policy, the company "relies on a combination of people and technology to flag inappropriate content... teams from around the world review flagged videos and remove content that violates our terms; restrict videos (e.g., age-restrict content that may not be appropriate for all audiences); or leave the content live when it doesn’t violate our guidelines".
"The algorithms work by crawling YouTube looking for various signals, including tags, titles, images and colour schemes, pulling in content that they think is potentially problematic... teams have manually reviewed over two million videos to provide large volumes of training examples." (Google)
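The signals Google names (tags, titles, images, colour schemes) can be pictured as inputs to a feature extractor feeding the classifier. The sketch below is a loose illustration under stated assumptions: the keyword list, feature names, and thumbnail-colour summary are all placeholders, not Google's actual features:

```python
# Illustrative feature extraction over the signals named above; the
# "risky_terms" vocabulary and every feature name are assumptions.
def extract_features(tags, title, thumbnail_rgb_mean):
    risky_terms = {"extremism", "attack", "violence"}  # placeholder vocabulary
    title_words = title.lower().split()
    return {
        "n_tags": len(tags),
        "risky_tag_hits": sum(t.lower() in risky_terms for t in tags),
        "risky_title_hits": sum(w in risky_terms for w in title_words),
        # Colour-scheme signal reduced to mean RGB of the thumbnail.
        "thumb_red_mean": thumbnail_rgb_mean[0],
        "thumb_green_mean": thumbnail_rgb_mean[1],
        "thumb_blue_mean": thumbnail_rgb_mean[2],
    }


feats = extract_features(["news", "violence"], "Documenting an attack", (120, 80, 60))
print(feats["risky_tag_hits"], feats["risky_title_hits"])  # 1 1
```

In a real system these features would be inputs to a trained model; the two million manually reviewed videos mentioned above would supply the labels for that training.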