AI Case Study
Google's YouTube recommendation system uses human raters to train AI, potentially promoting inappropriate content
YouTube uses human raters to rate its user-generated videos in order to provide a training data set for its recommendation system algorithms. However, the ambiguous guidelines they work from may result in potentially harmful material being promoted.
Consumer Goods And Services
Entertainment And Sports
YouTube uses "search quality raters, who help train Google’s systems to surface the best search results for queries. Google uses a combination of algorithms and human reviewers like these raters to analyze content across its vast suite of products. 'We use search raters to sample and evaluate the quality of search results on YouTube and ensure the most relevant videos are served across different search queries,' a company spokesperson said in an emailed statement to BuzzFeed News. 'These raters, however, do not determine where content on YouTube is ranked in search results, whether content violates our community guidelines and is removed, age-restricted, or made ineligible for ads.' ...according to task screenshots and a copy of the guidelines used to evaluate YouTube videos reviewed by BuzzFeed News, raters make direct assessments about the utility, quality, and appropriateness of videos. From time to time, they are also asked to determine whether videos are offensive, upsetting, or sexual in nature. And all these assessments, along with other input, build out the trove of data used by YouTube’s AI systems to do the same work, with and without human help.
But guidelines and screenshots obtained by BuzzFeed News, as well as interviews with 10 current and former “raters” — contract workers who train YouTube’s search algorithms — offer insight into the flaws in YouTube’s system. These documents and interviews reveal a confusing and sometimes contradictory set of guidelines, according to raters, that asks them to promote “high quality” videos based largely on production values, even when the content is disturbing. This not only allows thousands of potentially exploitative kids videos to remain online, but could also be algorithmically amplifying their reach."
Legal And Compliance
"Since the public backlash against YouTube over unacceptable children’s content on its platform, the company has taken steps to combat the problem. It said it would soon publish a report sharing aggregated data about the actions it took to remove videos and comments that violate its policies. The company also promised to apply its “cutting-edge machine learning” that it already uses on violent extremist content to trickier areas like child safety and, of course, said that it plans to have more than 10,000 human reviewers evaluating videos on the platform in 2018. But YouTube did not comment on how it plans to revise its evaluation guidelines for its expanded workforce."
Videos that meet the criteria for high ratings may nonetheless be misaligned with other business goals, in this case reducing harmful and exploitative videos aimed at children.
Machine learning models in the video recommendation system are ultimately trained on human ratings of videos.
Videos and rating scores used to create a training data set
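The pipeline described above, where per-video rater judgments are aggregated into labels for a recommendation model, can be sketched roughly as follows. This is a minimal illustration, not YouTube's actual system: all field names (`video_id`, `quality_score`, `flagged_upsetting`) and the aggregation scheme are assumptions. The comment in `build_training_set` notes the misalignment risk the case study describes, where a well-produced but disturbing video can still receive a high quality label.

```python
from dataclasses import dataclass

@dataclass
class RaterJudgment:
    """One hypothetical rater's assessment of a single video."""
    video_id: str
    quality_score: float      # e.g. 0.0 (low) to 1.0 (high), per rater guidelines
    flagged_upsetting: bool   # rater marked the video offensive/upsetting

def build_training_set(judgments, metadata):
    """Aggregate multiple rater scores per video into (features, label) pairs.

    Misalignment risk: the label is the mean quality_score, so a model
    trained on it can still learn to promote a video that raters flagged
    as upsetting, if its production values earned high quality scores.
    """
    by_video = {}
    for j in judgments:
        by_video.setdefault(j.video_id, []).append(j)

    examples = []
    for vid, js in by_video.items():
        label = sum(j.quality_score for j in js) / len(js)
        any_flag = any(j.flagged_upsetting for j in js)
        # Merge per-video metadata with the aggregated rater flag.
        features = dict(metadata.get(vid, {}), upsetting=any_flag)
        examples.append((features, label))
    return examples
```

For instance, a video flagged as upsetting by every rater but scored highly for production quality would still emerge with a high training label, which is the core flaw the raters interviewed by BuzzFeed News described.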