AI Case Study

Predictim offers trustworthy background checks for babysitters using natural language processing and computer vision

Berkeley startup Predictim has leveraged natural language processing and computer vision technology to develop a system that generates personality assessments from digital footprints such as social media posts. The platform produces a report of a person's predicted traits, behaviors, and areas of compatibility, along with a risk assessment score. The service is aimed primarily at screening babysitting candidates, offering parents a way to judge whether an individual can be trusted to look after their children. However, Facebook and Twitter have restricted Predictim's access to their users' data, claiming that the platform violates their terms of service and data privacy rules, and as of December 2018 the company said it was pausing its launch in light of the criticism.

Industry

Consumer Goods And Services

Travel And Leisure

Project Overview

According to VentureBeat: "Predictim’s algorithms take into account “billions” of data points dating back years in a person’s online profile, according to Parsa, and within minutes deliver an evaluation with predicted traits, behaviors, and areas of compatibility, and a digest of their digital history. Each report consists of a risk assessment score — a speedometer-like gauge indicating the “overall risk” of the babysitter, from green (“not risky”) to red (“very risky”) — and an activity graph showing the number of posts and images they’ve published over the past decade. Personality attributes are broken out into categories like drug abuse, bullying and harassment, explicit content, and attitude. Parents pony up $24.99 per report."
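For illustration only, the sketch below shows one way the report elements described above (an overall risk score mapped to a green-to-red band, per-category attribute scores, and an activity graph over time) might be modeled in code. The class names, fields, and score thresholds are assumptions made for this sketch, not Predictim's actual schema.

```python
# Illustrative sketch only: a possible shape for the report described above.
# All names, fields, and thresholds are hypothetical, not Predictim's schema.
from dataclasses import dataclass
from enum import Enum


class RiskBand(Enum):
    """Speedometer-style bands from green ("not risky") to red ("very risky")."""
    GREEN = "not risky"
    YELLOW = "moderately risky"
    ORANGE = "risky"
    RED = "very risky"


def band_for_score(score: float) -> RiskBand:
    """Map an assumed 0-5 risk score onto a color band (thresholds are invented)."""
    if score < 2:
        return RiskBand.GREEN
    if score < 3:
        return RiskBand.YELLOW
    if score < 4:
        return RiskBand.ORANGE
    return RiskBand.RED


@dataclass
class SitterReport:
    """Bundles the elements the article lists: an overall risk score,
    per-category attribute scores, and a posting-activity graph."""
    overall_risk: float                 # assumed scale: 0 (low) to 5 (high)
    category_scores: dict[str, float]   # e.g. drug abuse, bullying, explicit content, attitude
    posts_per_year: dict[int, int]      # activity graph: year -> number of posts/images

    @property
    def risk_band(self) -> RiskBand:
        return band_for_score(self.overall_risk)


if __name__ == "__main__":
    report = SitterReport(
        overall_risk=1.4,
        category_scores={"drug abuse": 0.2, "bullying": 0.8, "explicit content": 0.1, "attitude": 1.4},
        posts_per_year={2016: 210, 2017: 180, 2018: 95},
    )
    print(report.risk_band)  # RiskBand.GREEN
```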

However, Facebook and Twitter have recently restricted the platform's access to their users' data, citing violations of their terms of service. From the Seattle Times: "Facebook said it dramatically limited Predictim’s access to users’ information on Instagram and Facebook a few weeks ago for violating a ban on developers’ use of personal data to evaluate a person for decisions on hiring or eligibility. Facebook spokeswoman Katy Dormer said the company also launched an investigation earlier this week into Predictim’s extraction, or “scraping,” of personal data. That investigation is ongoing and could include further penalties. Twitter spokesman Nick Pacilio said the site conducted its own investigation earlier this week and revoked Predictim’s access to important site tools – known as application programming interfaces, or APIs – that would allow the start-up to review and analyze babysitters’ tweets on a massive scale."

It appears that Predictim intends to go ahead, regardless. From the Seattle Times: "Parsa and Joel Simonoff, Predictim’s chief technology officer, said they spoke with Facebook policy officials in recent weeks and received a letter from Twitter on Monday. The changes would not hurt their algorithms’ accuracy, Parsa said, because the company had 'decided to source data from other means.'"

Reported Results

According to Gizmodo, Predictim has put its launch on "pause" after receiving significant public criticism and media attention.

Technology

VentureBeat: "The eponymous Predictim platform, which launches today, uses natural language processing (NLP) and computer vision algorithms to sift through social media posts — including tweets, Facebook posts, and Instagram photos — for warning signs."

Function

Risk

Audit

Background

From VentureBeat: "If you’re a parent with young kids, you probably know how arduous it can be to screen a babysitter. According to a Care.com survey, roughly 51 percent of families opt not to hire a sitter because it’s too stressful to find someone they like. And among those who have hired one, a whopping 62 percent didn’t bother to check their references. 'The current background checks parents generally use don’t uncover everything that is available about a person. Interviews can’t give a complete picture,' Parsa said. 'A seemingly competent and loving caregiver with a ‘clean’ background could still be abusive, aggressive, a bully, or worse.'"

Benefits

Data

Over 6 billion data points drawn from digital footprints, including tweets, Facebook posts, and Instagram photos.