
AI Case Study

Labsix researchers generate images and objects which deceive Google's image classifier 96% and 84% of the time respectively while remaining undetectable to humans

Researchers develop an algorithm which can trick neural network-based image classifiers into misclassifying images of 2D and 3D objects. The perturbations are imperceptible to humans, yet they caused Google's image classifier to miscategorise the 2D images 96% of the time and the 3D objects 84% of the time.

Industry

Technology

Software and IT Services

Project Overview

Targeting Google's InceptionV3 image classifier, researchers developed a new algorithm "for reliably producing adversarial examples that cause targeted misclassification under transformations like blur, rotation, zoom, or translation, and we use it to generate both 2D printouts and 3D models that fool a standard neural network at any angle. Our process works for arbitrary 3D models. The examples still fool the neural network when we put them in front of semantically relevant backgrounds; for example, you’d never see a rifle underwater, or an espresso in a baseball mitt." (labsix.org)

From the arXiv paper: "By introducing EOT, a general-purpose algorithm for the creation of robust examples under any chosen distribution, and modeling 3D rendering and printing within the framework of EOT, we succeed in fabricating three-dimensional adversarial examples. In particular, with access only to low-cost commercially available 3D printing technology, we successfully print physical adversarial objects that are strongly classified as a desired target class over a variety of angles, viewpoints, and lighting conditions by a standard ImageNet classifier."
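The key idea behind EOT (Expectation Over Transformation) is to optimise the perturbation against the classifier's expected loss over a whole distribution of transformations, rather than against a single static image. The following is a minimal PyTorch-style sketch of the 2D case, not the authors' code: it assumes a pretrained classifier model returning logits, a target class index tensor, and a helper sample_transform() that draws a random differentiable transformation (rotation, zoom, translation, blur). The simple L-infinity clamp is an illustrative stand-in for the paper's expected perceptual-distance constraint.

    # Hedged sketch of EOT for 2D images (illustrative, not the authors' implementation).
    import torch
    import torch.nn.functional as F

    def eot_attack(model, x, target, sample_transform,
                   steps=1000, lr=0.01, epsilon=0.05, n_samples=10):
        delta = torch.zeros_like(x, requires_grad=True)      # perturbation to optimise
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            loss = 0.0
            for _ in range(n_samples):                        # Monte Carlo estimate of the expectation over transformations
                t = sample_transform()
                logits = model(t(torch.clamp(x + delta, 0, 1)))
                loss = loss + F.cross_entropy(logits, target) # push the transformed image towards the target class
            (loss / n_samples).backward()
            opt.step()
            opt.zero_grad()
            with torch.no_grad():
                delta.clamp_(-epsilon, epsilon)               # crude imperceptibility constraint (assumption)
        return torch.clamp(x + delta, 0, 1)

Averaging the loss over sampled transformations is what makes the resulting example robust: a perturbation tuned to a single viewpoint typically stops being adversarial once the image is rotated, zoomed or photographed.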

Reported Results

Labsix.org: "Our work demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought." The generated 2D images were 96.4% adversarial, while the 3D models had "an average adversariality of 84.0% with a long left tail, showing that EOT usually produces highly adversarial objects". For the photos of the 3D printed objects, the adversarial percentage was 82% for the turtle and 59% for the baseball.

Technology

"We produce 3D adversarial examples by modeling the 3D rendering as a transformation under EOT [the algorithm]. Given a textured 3D object, we optimize over the texture such that the rendering is adversarial from any viewpoint. We consider a distribution that incorporates different camera distances, lateral translation, rotation of the object, and solid background colors. We consider 5 complex 3D models, choose 20 random target classes per model, and use EOT to synthesize adversarial textures for the models with minimal parameter search". For the physical object test: "We choose target classes for each of the models at random — “rifle” for the turtle, and “espresso” for the baseball — and we use EOT to synthesize adversarial examples. We evaluate the performance of our two 3D-printed adversarial objects by taking 100 photos of each object over a variety of viewpoints." (arXiv paper)

Function

Information Technology

Security

Background

From labsix's website: "Neural network based classifiers reach near-human performance in many tasks, and they’re used in high risk, real world systems. Yet, these same neural networks are particularly vulnerable to adversarial examples, carefully perturbed inputs that cause targeted misclassification." However, as the researchers state in their arXiv paper, "The existence of adversarial examples for neural networks has until now been largely a theoretical concern. While minute, carefully-crafted perturbations can cause targeted misclassification in a neural network, adversarial examples produced using standard techniques lose adversariality when directly translated to the physical world as they are captured over varying viewpoints and affected by natural phenomena such as lighting and camera noise. This phenomenon suggests that practical systems may not be at risk because adversarial examples generated using standard techniques are not robust in the physical world."

Benefits

Data

RGB images: 2D images from the ImageNet validation set and renderings of 3D models. From the arXiv paper, for the 2D images: "We take the first 1000 images in the ImageNet validation set, randomly choose a target class for each image, and use EOT to synthesize an adversarial example that is robust over the chosen distribution". In the 3D models case: "For each of the 100 adversarial examples, we sample 100 random transformations from the distribution".
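The reported adversariality figures are simply the fraction of sampled transformations (or, for the printed objects, photographed viewpoints) under which an example is classified as the chosen target class. A sketch of that evaluation, with illustrative names, might look like:

    # Hedged sketch of the evaluation loop: sample random transformations and
    # report the fraction classified as the target class ("adversariality").
    import torch

    @torch.no_grad()
    def adversariality(model, adv_example, target, sample_transform, n=100):
        hits = 0
        for _ in range(n):
            t = sample_transform()                            # e.g. a random viewpoint or rotation
            pred = model(t(adv_example)).argmax(dim=1)
            hits += int((pred == target).item())
        return hits / n                                       # e.g. roughly 0.96 on average for the 2D images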
