AI Case Study
University of Toronto researchers prevent facial image recognition by creating an adversarial AI program
Researchers at the University of Toronto have developed an adversarial algorithm that subtly alters images to make them unidentifiable to image-recognition AI programs.
Industry
Public And Social Sector
Education And Academia
Project Overview
"Adversarial attacks involve adding, small, often imperceptible, perturbations to inputs with the goal of getting a machine learning model to misclassifying them. We propose a novel attack on a Faster R-CNN based face detector by producing small perturbations that when added to an input face image causes the pretrained face detector to fail. Our approach is fast and scalable, requiring only a forward pass through our trained generator network to craft an adversarial sample. Visually speaking the crafted adversarial samples have largely imperceptible differences."
Reported Results
The program "reduced the proportion of faces that could be identified from 100% to between 0.5% and 5%"
Technology
The adversarial AI generator was used against a trained Faster R-CNN face detector. "To create the adversarial perturbations we propose training a generator against a pretrained Faster R-CNN based face detector. Given an image, the generator produces a small perturbation that can be added to the image to fool the face detector. The face detector is trained offline only on unperturbed images and as such remains oblivious to the generator’s presence. Over time, the generator learns to produce perturbations that can effectively fool the face detector it is trained with."
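A minimal sketch of that training setup, assuming a PyTorch pipeline in which `detector(images)` returns per-image face-confidence scores in [0, 1]; the detector interface, the loss weighting `lam`, and the optimizer settings are assumptions for illustration, not details taken from the paper.

```python
import torch

def train_generator(generator, detector, loader, epochs=10, lam=1.0):
    """Train a perturbation generator against a frozen face detector.

    Assumed interface: `detector(images)` returns per-image face
    confidence scores; the detector's weights stay fixed throughout.
    """
    detector.eval()
    for p in detector.parameters():   # detector is trained offline and
        p.requires_grad_(False)       # remains oblivious to the generator
    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

    for _ in range(epochs):
        for images in loader:
            delta = generator(images)                     # perturbation
            adv = (images + delta).clamp(0.0, 1.0)
            conf = detector(adv)                          # face scores
            # Push detection confidence down while keeping delta small.
            loss = conf.mean() + lam * delta.pow(2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return generator
```

Freezing the detector's parameters matches the description above: gradients flow through the detector to the generator, but only the generator learns, so over time it produces perturbations that effectively fool the fixed detector.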
Function
R And D
Core Research And Development
Background
"If Artificial Intelligence is increasingly able to recognise and classify faces, then perhaps the only way to counter this creeping surveillance is to use another AI to defeat it. We’re in the early years of AI-powered image and face recognition but already researchers at the University of Toronto have come up with a way that this might be possible."
Benefits
Data
The test set was the "300-W face dataset, an industry-standard pool based on 600 faces in a range of lighting conditions".