Intel Unveils Real-Time Deepfake Detector, Claims 96% Accuracy Rate (venturebeat.com)
An anonymous reader quotes a report from VentureBeat: On Monday, Intel introduced FakeCatcher, which it says is the first real-time detector of deepfakes -- that is, synthetic media in which a person in an existing image or video is replaced with someone else's likeness. Intel claims the product has a 96% accuracy rate and works by analyzing the subtle "blood flow" in video pixels to return results in milliseconds. Ilke Demir, senior staff research scientist in Intel Labs, designed FakeCatcher in collaboration with Umur Ciftci from the State University of New York at Binghamton. The product uses Intel hardware and software, runs on a server and interfaces through a web-based platform.
Unlike most deep learning-based deepfake detectors, which look at raw data to pinpoint inauthenticity, FakeCatcher focuses on clues within actual videos. It is based on photoplethysmography, or PPG, a method for measuring the amount of light that is absorbed or reflected by blood vessels in living tissue. When the heart pumps blood, it flows through the veins, which subtly change color. With FakeCatcher, PPG signals are collected from 32 locations on the face, Demir explained, and PPG maps are then created from their temporal and spectral components. "We take those maps and train a convolutional neural network on top of the PPG maps to classify them as fake and real," Demir said. "Then, thanks to Intel technologies like [the] Deep Learning Boost framework for inference and Advanced Vector Extensions 512, we can run it in real time and up to 72 concurrent detection streams."
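The pipeline Demir describes can be sketched roughly as follows: per-region intensity traces stand in for PPG signals, and their temporal and spectral components are stacked into a map that a CNN could then classify. This is a minimal illustration only; Intel's actual region layout, signal extraction, and network are not described in the article, and the 8x4 grid and green-channel proxy below are assumptions.

```python
import numpy as np

def ppg_map(frames):
    """Build a crude PPG map from a cropped-face video clip.

    frames: (T, H, W, 3) uint8 array of face frames.
    Each row of the temporal map is one facial region's mean
    green-channel intensity over time -- a rough proxy for the
    blood-flow signal PPG measures. (Illustrative only; not
    Intel's actual method.)
    """
    t, h, w, _ = frames.shape
    rows, cols = 8, 4  # 8x4 grid = 32 regions, per the article's count
    traces = []
    for r in range(rows):
        for c in range(cols):
            patch = frames[:, r * h // rows:(r + 1) * h // rows,
                              c * w // cols:(c + 1) * w // cols, 1]
            trace = patch.reshape(t, -1).mean(axis=1)
            traces.append(trace - trace.mean())  # drop the DC component
    temporal = np.stack(traces)                       # (32, T) temporal map
    spectral = np.abs(np.fft.rfft(temporal, axis=1))  # (32, T//2+1) spectral map
    return temporal, spectral

# Synthetic 64-frame clip standing in for a real face crop.
rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(64, 64, 32, 3), dtype=np.uint8)
tmap, smap = ppg_map(clip)
print(tmap.shape, smap.shape)  # (32, 64) (32, 33)
```

In a full system, the two maps would be fed to a convolutional classifier; here they simply illustrate the temporal/spectral decomposition the quote refers to.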
"FakeCatcher is a part of a bigger research team at Intel called Trusted Media, which is working on manipulated content detection -- deepfakes -- responsible generation and media provenance," she said. "In the shorter term, detection is actually the solution to deepfakes -- and we are developing many different detectors based on different authenticity clues, like gaze detection." The next step after that will be source detection, or finding the GAN model that is behind each deepfake, she said: "The golden point of what we envision is having an ensemble of all of these AI models, so we can provide an algorithmic consensus about what is fake and what is real." Rowan Curran, AI/ML analyst at Forrester Research, told VentureBeat by email that "we are in for a long evolutionary arms race" around the ability to determine whether a piece of text, audio or video is human-generated or not.
"While we're still in the very early stages of this, Intel's deepfake detector could be a significant step forward if it is as accurate as claimed, and specifically if that accuracy does not depend on the human in the video having any specific characteristics (e.g. skin tone, lighting conditions, amount of skin that can be seen in the video)," he said.