AI Case Study
Adobe is developing technology to detect manipulation in images using a deep neural network
Adobe revealed that it is working on an AI system that detects whether an image has been artificially altered. The system uses a deep neural network trained on existing datasets of manipulated images.
Software and IT Services
In 2016, Vlad "started applying his talents to the challenge of detecting image manipulation as part of the DARPA Media Forensics program. Building on research he started fourteen years ago and continued as a Ph.D. student in computer science at the University of Maryland, Vlad describes some of these new techniques in a recent paper: 'We focused on three common tampering techniques—splicing, where parts of two different images are combined; copy-move, where objects in a photograph are moved or cloned from one place to another; and removal, where an object is removed from a photograph, and filled-in,' he notes. Every time an image is manipulated, it leaves behind clues that can be studied to understand how it was altered. 'Each of these techniques tend to leave certain artifacts, such as strong contrast edges, deliberately smoothed areas, or different noise patterns,' he says. Although these artifacts are not usually visible to the human eye, they are much more easily detectable through close analysis at the pixel level, or by applying filters that help highlight these changes. Now, what used to take a forensic expert hours to do can be done in seconds."
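The pixel-level analysis described above can be illustrated with a high-pass filter. The sketch below is an assumption for illustration: it uses a well-known SRM kernel from the steganalysis literature, not necessarily one of the filters Adobe's system applies, to make a region with inconsistent noise statistics stand out.

```python
import numpy as np

# A classic SRM high-pass kernel from the steganalysis literature; an
# illustrative choice, not necessarily a filter Adobe's model uses.
SRM_KERNEL = np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
]) / 12.0

def noise_residual(gray):
    """High-pass noise residual of a 2-D grayscale image (valid mode)."""
    windows = np.lib.stride_tricks.sliding_window_view(gray, SRM_KERNEL.shape)
    return np.einsum("ijkl,kl->ij", windows, SRM_KERNEL)

# Toy demo: a flat image with a "pasted" patch carrying different noise.
rng = np.random.default_rng(0)
img = np.full((64, 64), 128.0)
img[20:40, 20:40] += rng.normal(0, 5, (20, 20))  # simulated tampered region

res = noise_residual(img)
print(np.abs(res[:10, :10]).mean())      # flat region: residual near zero
print(np.abs(res[22:34, 22:34]).mean())  # tampered region: large residual
```

Because the kernel's coefficients sum to zero, smooth areas produce residuals near zero while the pasted patch lights up, which is the kind of artifact the paragraph describes as invisible to the eye but easy to detect at the pixel level.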
"The results of this [research] project are that AI can successfully identify which images have been manipulated. AI can identify the type of manipulation used and highlight the specific area of the photograph that was altered."
The researchers proposed a "two-stream Faster R-CNN network and train it end-to-end to detect the tampered regions given a manipulated image. One of the two streams is an RGB stream whose purpose is to extract features from the RGB image input to find tampering artifacts like strong contrast difference, unnatural tampered boundaries, and so on. The other is a noise stream that leverages the noise features extracted from a steganalysis rich model filter layer to discover the noise inconsistency between authentic and tampered regions. We then fuse features from the two streams through a bilinear pooling layer to further incorporate spatial co-occurrence of these two modalities."
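The fusion step in the quoted architecture can be sketched in isolation. Assuming two feature maps on the same spatial grid (the function name, shapes, and post-processing here are illustrative conventions, not taken from the paper's code), bilinear pooling takes the outer product of the two streams' features at each location, sums over the grid, and applies the customary signed-square-root and L2 normalization:

```python
import numpy as np

def bilinear_pool(rgb_feat, noise_feat):
    """Fuse two feature maps by bilinear pooling.

    rgb_feat, noise_feat: arrays of shape (H, W, C1) and (H, W, C2) on
    the same spatial grid. Returns a (C1*C2,) vector that captures the
    spatial co-occurrence of the two modalities.
    """
    # Outer product at every spatial location, summed over the grid.
    fused = np.einsum("hwc,hwd->cd", rgb_feat, noise_feat).reshape(-1)
    # Common post-processing: signed square root, then L2 normalization.
    fused = np.sign(fused) * np.sqrt(np.abs(fused))
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused

rng = np.random.default_rng(1)
v = bilinear_pool(rng.normal(size=(7, 7, 8)), rng.normal(size=(7, 7, 4)))
print(v.shape)  # (32,)
```

The resulting vector pairs every RGB-stream channel with every noise-stream channel, which is why the fusion is sensitive to places where the two modalities disagree, such as a visually seamless splice with mismatched noise.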
According to Vlad Morariu, a senior research scientist at Adobe who has been working on technologies related to computer vision for many years, a variety of tools already exist to help document and trace the digital manipulation of photos: "File formats contain metadata that can be used to store information about how the image was captured and manipulated. Forensic tools can be used to detect manipulation by examining the noise distribution, strong edges, lighting and other pixel values of a photo. Watermarks can be used to establish original creation of an image." Of course, none of these tools provides a complete picture of a photo's authenticity, nor is each practical in every situation: some are easily defeated, some require deep expertise, and some demand lengthy execution and analysis to use properly.
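One of the forensic checks Morariu mentions, examining the noise distribution, can be approximated in a few lines. This is a simplified sketch (the block size and the Laplacian-based noise estimator are illustrative choices): a spliced region captured by a different camera or compressed separately often shows a per-block noise level inconsistent with the rest of the image.

```python
import numpy as np

def blockwise_noise(gray, block=16):
    """Estimate a per-block noise level from the local Laplacian residual."""
    # Discrete Laplacian as a crude high-pass noise estimator.
    lap = (4 * gray[1:-1, 1:-1] - gray[:-2, 1:-1] - gray[2:, 1:-1]
           - gray[1:-1, :-2] - gray[1:-1, 2:])
    h, w = lap.shape[0] // block, lap.shape[1] // block
    blocks = lap[:h * block, :w * block].reshape(h, block, w, block)
    return blocks.std(axis=(1, 3))  # one noise estimate per block

rng = np.random.default_rng(2)
img = rng.normal(0, 1.0, (64, 64))                 # "authentic" sensor noise
img[16:32, 16:32] = rng.normal(0, 6.0, (16, 16))   # noisier pasted region

noise_map = blockwise_noise(img)
print(noise_map.shape)  # (3, 3) grid of per-block noise estimates
```

A block whose estimate is several times the image-wide level is a candidate tampered region; the deep network in this case study learns far subtler versions of the same inconsistency.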
Training required tens of thousands of examples of known, manipulated images. From the research paper: "Previous image manipulation datasets contain only several hundred images, not enough to train a deep network. To overcome this, we created a synthetic tampering dataset based on COCO for pre-training our model and then finetuned the model on different datasets."
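The synthetic tampering data described above can be sketched as a splicing generator. The helper below is a hypothetical illustration (the function and its arguments are not from the authors' code): it cuts an object mask from a source image, pastes it into a target, and records the pixel-level ground truth; in the paper's setup the object masks come from COCO's annotations.

```python
import numpy as np

def make_splice(target, source, mask, top, left):
    """Create one spliced training example and its ground-truth mask.

    target, source: (H, W, 3) uint8 images; mask: (h, w) boolean object
    mask cut from the top-left corner of `source` and pasted into
    `target` at (top, left). Returns the tampered image and a
    pixel-level label map marking the spliced region.
    """
    h, w = mask.shape
    tampered = target.copy()
    region = tampered[top:top + h, left:left + w]
    region[mask] = source[:h, :w][mask]      # paste the masked object
    label = np.zeros(target.shape[:2], dtype=bool)
    label[top:top + h, left:left + w] = mask  # ground truth for training
    return tampered, label

rng = np.random.default_rng(3)
tgt = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
src = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
obj = np.zeros((16, 16), dtype=bool)
obj[4:12, 4:12] = True                       # a square stand-in "object"
img, lbl = make_splice(tgt, src, obj, top=10, left=20)
print(lbl.sum())  # 64 tampered pixels
```

Generating many such pairs over a large image collection yields the tens of thousands of labeled examples needed to pre-train a deep network before fine-tuning on real forensic datasets.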