AI Case Study
Scientists at New York University improve image manipulation detection accuracy from 45% to 90% using machine learning
Researchers from New York University's Tandon School of Engineering are working on adapting the signal processors inside cameras to place watermarks in photos' underlying code, in an attempt to combat digital manipulation, or "deepfakes". Their research focuses on using machine learning to insert watermarks into certain color frequencies as photographs are being created and processed, without affecting their high quality. According to their research, the approach improved image manipulation detection accuracy from about 45 percent to more than 90 percent.
Consumer Goods And Services
Media And Publishing
"Researchers from New York University's Tandon School of Engineering are starting to develop strategies that make it easier to tell if a photo has been altered, opening up a potential new front in the war on fakery.
But what if that tamper-resistant seal originated from the camera itself? The NYU team demonstrates that you could adapt the signal processors inside a camera, whether a high-end DSLR or a regular smartphone camera, so they essentially place watermarks in each photo's code. The researchers propose training a neural network to power the photo development process that happens inside cameras: as the sensors are interpreting the light coming through the lens and turning it into a high-quality image, the neural network is also trained to mark the file with indelible indicators that can be checked later, if needed, by forensic analysts.
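The training described above can be thought of as optimizing two goals at once: the developed image should stay close to a conventionally rendered photo, and the embedded indicators should remain recoverable. The sketch below is a minimal, hypothetical illustration of such a combined objective; the function name, the fidelity and decodability terms, and the weighting `lam` are assumptions for illustration, not the NYU team's actual loss.

```python
import numpy as np

def combined_loss(developed, reference, decoded_bits, true_bits, lam=0.1):
    """Hypothetical two-term objective: image fidelity plus watermark decodability.

    developed    - image produced by the learned development pipeline
    reference    - conventionally developed image (the quality target)
    decoded_bits - watermark bits recovered from `developed`, as probabilities
    true_bits    - the 0/1 bits that were embedded
    """
    fidelity = np.mean((developed - reference) ** 2)          # preserve visual quality
    p = np.clip(decoded_bits, 1e-7, 1 - 1e-7)                  # avoid log(0)
    decodability = -np.mean(true_bits * np.log(p)              # cross-entropy on the
                            + (1 - true_bits) * np.log(1 - p)) # recovered watermark bits
    return fidelity + lam * decodability

# Toy usage: a perfect development with well-recovered bits gives a near-zero loss
ref = np.ones((4, 4))
bits = np.array([1.0, 0.0, 1.0])
loss = combined_loss(ref, ref, np.array([0.99, 0.01, 0.99]), bits)
print(loss)
```

In a real pipeline both terms would be backpropagated through the development network, so the watermark is traded off against image quality explicitly rather than bolted on afterwards.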
"People are still not thinking about security—you have to go close to the source where the image is captured," says Nasir Memon, one of the project researchers from NYU Tandon who specializes in multimedia security and forensics. "So what we’re doing in this work is we are creating an image which is forensics-friendly, which will allow better forensic analysis than a typical image. It's a proactive approach rather than just creating images for their visual quality and then hoping that forensics techniques work after the fact."
The main thing consumers expect from cameras is ever-improving image quality and fidelity. So one main focus of the project was showing that incorporating machine learning into the image signal processing that goes on inside a camera doesn't visibly detract from photo quality as it paves the way for tamper-resistant elements. And adding these features within the image-generation hardware itself means that by the time files are being stored on the camera's SD card or other memory, where they're potentially at risk of manipulation, they are already imbued with their tamper-evident seals.
The researchers mainly insert their watermarks into certain color frequencies, so they will persist through typical post-processing—like compression or brightness adjustments—but show modification if the content of an image is altered.
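To make that behavior concrete, the toy sketch below shows one classic way a semi-fragile frequency-domain watermark works: a key-derived pattern is added to mid-frequency DCT coefficients of each 8x8 block, so a mild global adjustment such as a brightness change leaves the per-block detection score high, while overwriting a region drops that region's score toward zero. This is a generic illustration of the principle; the block size, frequency band, embedding strength, and key handling are assumptions, not the NYU pipeline.

```python
import numpy as np

BLOCK = 8
STRENGTH = 4.0

def dct_matrix(n=BLOCK):
    # Orthonormal DCT-II basis matrix (rows are cosine basis vectors)
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

D = dct_matrix()

def block_pattern(key):
    # Key-derived +/-1 pattern on a mid-frequency band of an 8x8 DCT block
    rng = np.random.default_rng(key)
    mask = np.zeros((BLOCK, BLOCK))
    mask[2:5, 2:5] = rng.choice([-1.0, 1.0], size=(3, 3))
    return mask

def embed(channel, key):
    # Add the pattern to each block's DCT coefficients, then transform back
    mask = block_pattern(key)
    out = channel.astype(float).copy()
    for r in range(0, out.shape[0], BLOCK):
        for c in range(0, out.shape[1], BLOCK):
            blk = out[r:r + BLOCK, c:c + BLOCK]
            coeffs = D @ blk @ D.T + STRENGTH * mask
            out[r:r + BLOCK, c:c + BLOCK] = D.T @ coeffs @ D
    return out

def block_scores(channel, key):
    # Per-block correlation with the expected pattern; high = watermark intact
    mask = block_pattern(key)
    nz = np.count_nonzero(mask)
    rows = []
    for r in range(0, channel.shape[0], BLOCK):
        rows.append([np.sum((D @ channel[r:r + BLOCK, c:c + BLOCK] @ D.T) * mask) / nz
                     for c in range(0, channel.shape[1], BLOCK)])
    return np.array(rows)

# A smooth 32x32 gradient stands in for one color channel of a photo
img = np.add.outer(np.linspace(0.0, 200.0, 32), np.linspace(0.0, 50.0, 32))
marked = embed(img, key=7)

print(block_scores(marked, key=7).round(1))        # every block scores STRENGTH (4.0)
print(block_scores(0.9 * marked, key=7).round(1))  # brightness tweak: still high (3.6)
tampered = marked.copy()
tampered[8:16, 8:16] = 128.0                       # splice over one block
print(block_scores(tampered, key=7).round(1))      # that block's score drops to 0.0
```

A forensic check would flag blocks whose score falls well below the embedding strength: benign post-processing only attenuates the pattern, while replaced content loses it entirely.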
Deepfakes have become a major concern as their use in disinformation campaigns, social media manipulation, and propaganda grows worldwide. And being able to reliably identify them is crucial to combating false narratives. The NYU researchers, who will present their work at the June IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in Long Beach, California, emphasize that there is no panacea for dealing with the problem. They suggest that foundational watermarking techniques like theirs would be most effective when used in combination with other methods for spotting fakes and forgeries."
"One of the most difficult things about detecting manipulated photos, or 'deepfakes', is that digital photo files aren't coded to be tamper-evident.
Forensic analysts have been able to identify some digital characteristics they can use to detect meddling, but these indicators don't always paint a reliable picture of whatever digital manipulations a photo has undergone. And many common types of post-processing, like file compression for uploading and sharing photos online, strip away these clues anyway."
"Overall, the forensic-friendly additions improved image manipulation detection accuracy from about 45 percent to more than 90 percent."