AI Case Study

University of Toronto researchers have developed an algorithm that causes facial recognition systems to miss over 99% of faces by applying a dynamic filter undetectable to the human eye

University of Toronto researchers have developed an algorithm that reduces the rate of correctly detected faces from nearly 100% to under 0.5% for facial recognition systems based on the Faster R-CNN architecture. The algorithm applies a filter that subtly adjusts pixels around the eyes or lips, tricking Faster R-CNN-based face detectors while remaining undetectable to the human eye.

Industry

Public And Social Sector

Education And Academia

Project Overview

"We propose a novel attack on a Faster R-CNN based face
detector by producing small perturbations that when added
to an input face image causes the pretrained face detector
to fail. To create the adversarial perturbations we propose
training a generator against a pretrained Faster R-CNN based
face detector. Given an image, the generator produces a
small perturbation that can be added to the image to fool
the face detector. The face detector is trained offline only
on unperturbed images and as such remains oblivious to
the generator’s presence. Over time, the generator learns to produce perturbations that can effectively fool the face detector
it is trained with. Generating an adversarial example is fast and
inexpensive, even more so than for FGSM, since creating a
perturbation for an input only requires a forward pass once the
generator is sufficiently well-trained. We validate the efficacy
of our attack on the cropped 300-W test set."

According to Aarabi, the attacking AI found a weakness in the face-detection network: "If you just adjust the pixels at the corner of the eye or the edge of the lips, just the right amount, or just change the colour slightly, then the main detection AI is not able to find a face."

Reported Results

Initial testing on the 300-W face dataset suggests that the system can reduce the proportion of detectable faces from nearly 100 per cent down to 0.5 per cent.

Technology

" A novel adversarial attack on Faster R-CNN based face detectors by way of solving a constrained optimization problem using a generator network."

Function

Risk

Security

Background

As facial recognition systems grow more accurate, personal privacy is increasingly out of the individual's control.

Benefits

Data

"The 300-W dataset, was first introduced for Automatic Facial
Landmark Detection in-the-Wild Challenge and is widely
used as a benchmark for Face Alignment. Landmark annotations
are provided following the Multi-PIE 68 points markup
[27] and the 300-W test set consists of the re-annotated
images from LFPW [11], AFW [12], HELEN [13], XM2VTS
[14] and FRGC [15] datasets. Moreover, the 300-W test set is
split into two categories, indoors and outdoors, of 300 images
per category"