
AI Case Study

Facebook is trying to combat misinformation in the site’s news feed with machine learning

Facebook is trying to reduce misinformation in the site’s news feed using machine learning. The company's AI is trained to evaluate the source of an article, along with other signals such as negative comments, and to flag suspicious links for human fact-checkers. If the fact-checkers rate a link as false, Facebook reduces its future views by 80%.

Industry

Consumer Goods And Services

Media And Publishing

Project Overview

"Jim Kleban, a Facebook product manager who works on reducing misinformation in the site’s news feed, explains that Facebook now uses AI to augment human intelligence. The AI goes through the millions of links shared on Facebook every day to identify suspect content, which is then sent to human fact-checkers. “For the foreseeable future, all these systems will require hybrid solutions,” he says.
When fact-checkers rate a piece of content as false, Facebook places it lower in users’ news feeds."
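The hybrid workflow described above can be pictured as a simple triage-then-review loop. The sketch below is an illustration only: the function names, suspicion threshold, and demotion factor are assumptions made for the example, not Facebook's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Link:
    url: str
    suspicion_score: float = 0.0   # assigned by the ML model
    rated_false: bool = False      # assigned by human fact-checkers
    demotion_factor: float = 1.0   # multiplier applied to feed-ranking score


def triage_links(links: List[Link],
                 score_fn: Callable[[Link], float],
                 threshold: float = 0.8) -> List[Link]:
    """Score every shared link and keep only the suspicious ones for human review."""
    suspects = []
    for link in links:
        link.suspicion_score = score_fn(link)
        if link.suspicion_score >= threshold:
            suspects.append(link)
    return suspects


def apply_fact_check(link: Link, rated_false: bool, demotion: float = 0.2) -> None:
    """Record the human verdict; demote links rated false so they earn far fewer views."""
    link.rated_false = rated_false
    if rated_false:
        # 0.2 is an illustrative value corresponding to the reported ~80% reduction
        link.demotion_factor = demotion
```

The key design point the case study highlights is the division of labour: the model only filters the firehose of shared links down to a reviewable queue, while the truth judgement itself stays with human fact-checkers.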

Reported Results

Future views of content rated as false by fact-checkers are reduced by 80%.

Technology

"Facebook’s AI is trained via machine learning. Kleban says the Facebook AI uses a variety of signals to pick out articles that contain misinformation, starting with the source of the content.
As for the text itself, the AI isn’t equipped to evaluate statements for their truthfulness, but it can find signals, such as expressions of disbelief in the comment section."
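A toy illustration of how such signals (source reputation plus disbelief cues in comments) might be blended into a single suspicion score follows. The keyword list, domain set, and weights are invented for the example and are not the signals Facebook describes beyond the two named above.

```python
import re
from typing import List, Set

# Phrases that loosely signal reader disbelief; the list is illustrative only.
DISBELIEF_PATTERNS = [r"\bfake\b", r"\bhoax\b", r"\bnot true\b", r"\bdebunked\b"]


def source_reputation_score(domain: str, known_bad: Set[str]) -> float:
    """Return 1.0 for domains previously flagged as unreliable, else 0.0."""
    return 1.0 if domain in known_bad else 0.0


def disbelief_score(comments: List[str]) -> float:
    """Fraction of comments containing an expression of disbelief."""
    if not comments:
        return 0.0
    hits = sum(
        1 for c in comments
        if any(re.search(p, c, re.IGNORECASE) for p in DISBELIEF_PATTERNS)
    )
    return hits / len(comments)


def suspicion_score(domain: str, comments: List[str], known_bad: Set[str]) -> float:
    """Weighted blend of the two signals; the weights are arbitrary for this sketch."""
    return 0.6 * source_reputation_score(domain, known_bad) + 0.4 * disbelief_score(comments)
```

Note that neither signal attempts to judge whether the article's claims are true; both are proxies that route likely misinformation to human reviewers.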

Function

Risk

Security

Background

"Facebook, which was widely criticized for failing to take action against false content in 2016, says it will use AI to do better in the U.S. midterm elections this November—and in other elections around the world."

Benefits

Data

Web content, articles, Facebook news feed
