Startup Can Identify Deepfake Video In Real Time (wired.com) 13
An anonymous reader quotes a report from Wired: Real-time video deepfakes are a growing threat for governments, businesses, and individuals. Recently, the chairman of the US Senate Committee on Foreign Relations mistakenly took a video call with someone pretending to be a Ukrainian official. An international engineering company lost millions of dollars earlier in 2024 when one employee was tricked by a deepfake video call. Also, romance scams targeting everyday individuals have employed similar techniques. "It's probably only a matter of months before we're going to start seeing an explosion of deepfake video, face-to-face fraud," says Ben Colman, CEO and cofounder at Reality Defender. When it comes to video calls, especially in high-stakes situations, seeing should not be believing.
The startup is laser-focused on partnering with business and government clients to help thwart AI-powered deepfakes. Even with this core mission, Colman doesn't want his company to be seen as more broadly standing against artificial intelligence developments. "We're very pro-AI," he says. "We think that 99.999 percent of use cases are transformational -- for medicine, for productivity, for creativity -- but in these kinds of very, very small edge cases the risks are disproportionately bad." Reality Defender's plan for the real-time detector is to start with a plug-in for Zoom that can make active predictions about whether others on a video call are real or AI-powered impersonations. The company is currently working on benchmarking the tool to determine how accurately it discerns real video participants from fake ones. Unfortunately, it's not something you'll likely be able to try out soon. The new software feature will only be available in beta for some of the startup's clients.
As Reality Defender works to improve the detection accuracy of its models, Colman says that access to more data is a critical challenge to overcome -- a common refrain from the current batch of AI-focused startups. He's hopeful more partnerships will fill in these gaps, and, while declining to share specifics, hints at multiple new deals likely coming next year. After ElevenLabs was tied to a deepfake voice call of US president Joe Biden, the AI-audio startup struck a deal with Reality Defender to mitigate potential misuse. [...] "We don't ask my 80-year-old mother to flag ransomware in an email," says Colman. "Because she's not a computer science expert." In the future, it's possible real-time video authentication -- if AI detection continues to improve and proves reliably accurate -- will be as taken for granted as that malware scanner quietly humming along in the background of your email inbox.
Re:Laser focused? (Score:4, Insightful)
Cool! Do you have a Dockerfile available so I can deploy you into our K8s cluster?
Or are you not scaleable enough to help us meet our compliance requirements?
Just a heads-up, this isn't about you. It's about the other 8.2 billion minus one people in the world
Re: Laser focused? (Score:1)
Re:Laser focused? (Score:4, Interesting)
No, lasers aren't focused.
Yeah, I've seen people claim that too. Doesn't mean it's correct.
https://www.edmundoptics.com/f/laser-focusing-singlet-lenses/39590/ [edmundoptics.com]
Re: (Score:3)
Those who use that phrase are selling something.
Well, no sh!t... People usually make startups to sell something.
It can detect deepfakes? So can I.
Good for you. I, however, cannot, or at least I'm not at all sure that I can. And the majority of the human population is just like me in this regard.
With all that said, I highly doubt that what this startup has can detect deepfakes, either. It's trying to play an unwinnable game of cat and mouse. It might at best be able to detect yesterday's deepfakes, while today's are flooding our lives, and tomorrow it will be the same, and the day
Re: (Score:2)
At worst it's like we are back in the 1900s before we had easy access to video and audio recordings. People managed pretty well then. I think it will be less disruptive than you suggest even if -- just like now -- older folks who aren't used to the new dangers are vulnerable to scams.
But I think we can solve this by just having cameras and audio recording devices sign their output using hardware keys.
Re: (Score:2)
My take as well. Bombastic language, not credible claims. Looks like a scam to me.
Do they sell the countermeasures too? (Score:2)
You know, introduce the detection mechanism, introduce the countermeasures, cash in twice.
More Harm Than Good? (Score:2)
The problem with any technology like this that can be run cheaply by the end user is that the more advanced attackers can just take that software and train models to specifically trick it. Sure, maybe it catches the low effort attacks but at the cost of potentially helping the more advanced attacks seem more legitimate when they don't trigger the fake detection.
The real solution is the same one we used back before photography, audio and video were common and people could pretend to be anyone they wanted in
Better Solution (Score:2)
Actually, it occurs to me that there is a technological solution to this problem. Simply have camera device makers sign their output using some kind of secure hardware key so the receiver can verify that the video was the input as seen by the camera on an X laptop or whatever. Of course, you still need to guard against attacks that stick a screen in front of a camera, but that's doable if the camera has any focusing information or uses IR lines to reconstruct 3D information.
I'm sure there will be all sorts
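The signing scheme the parent proposes could be sketched roughly like this. To keep the example dependency-free it uses stdlib HMAC with a shared secret; a real camera would instead hold an asymmetric key pair (e.g. Ed25519) in a secure element, so receivers only need the device's public key. Everything here (the key, frame bytes, sequence numbers) is illustrative, not any vendor's actual design:

```python
import hmac
import hashlib
import os

# Toy stand-in for a per-device secret provisioned at manufacture.
# A production design would use asymmetric signatures instead of HMAC.
DEVICE_KEY = os.urandom(32)

def sign_frame(frame: bytes, seq: int) -> bytes:
    # Bind the signature to the frame contents AND a sequence number,
    # so an attacker can't replay or reorder previously signed frames.
    msg = seq.to_bytes(8, "big") + frame
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def verify_frame(frame: bytes, seq: int, tag: bytes) -> bool:
    msg = seq.to_bytes(8, "big") + frame
    expected = hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(expected, tag)

# A genuine frame verifies; a tampered (deepfaked) frame does not.
frame = b"\x00" * 1024  # stand-in for raw sensor data
tag = sign_frame(frame, seq=1)
assert verify_frame(frame, 1, tag)
assert not verify_frame(b"\xff" + frame[1:], 1, tag)
```

Note this only proves the bytes came from something holding the key -- it does nothing against the screen-in-front-of-the-camera attack mentioned above, which is why the depth/IR checks would still be needed.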
Re: (Score:2)
Forget about IoT. Seriously. Cameras would need to be secure devices for this. They are not.
Sounds like a scam (Score:2)
Well, they want to sell something in the "AI" space, so "scam" is the normal approach.