
Startup Can Identify Deepfake Video In Real Time (wired.com) 13

An anonymous reader quotes a report from Wired: Real-time video deepfakes are a growing threat for governments, businesses, and individuals. Recently, the chairman of the US Senate Committee on Foreign Relations mistakenly took a video call with someone pretending to be a Ukrainian official. An international engineering company lost millions of dollars earlier in 2024 when one employee was tricked by a deepfake video call. Also, romance scams targeting everyday individuals have employed similar techniques. "It's probably only a matter of months before we're going to start seeing an explosion of deepfake video, face-to-face fraud," says Ben Colman, CEO and cofounder at Reality Defender. When it comes to video calls, especially in high-stakes situations, seeing should not be believing.

The startup is laser-focused on partnering with business and government clients to help thwart AI-powered deepfakes. Even with this core mission, Colman doesn't want his company to be seen as more broadly standing against artificial intelligence developments. "We're very pro-AI," he says. "We think that 99.999 percent of use cases are transformational -- for medicine, for productivity, for creativity -- but in these kinds of very, very small edge cases the risks are disproportionately bad." Reality Defender's plan for the real-time detector is to start with a plug-in for Zoom that can make active predictions about whether others on a video call are real or AI-powered impersonations. The company is currently working on benchmarking the tool to determine how accurately it discerns real video participants from fake ones. Unfortunately, it's not something you'll likely be able to try out soon. The new software feature will only be available in beta for some of the startup's clients.

As Reality Defender works to improve the detection accuracy of its models, Colman says that access to more data is a critical challenge to overcome -- a common refrain from the current batch of AI-focused startups. He's hopeful more partnerships will fill in these gaps, and, without giving specifics, hints at multiple new deals likely coming next year. After ElevenLabs was tied to a deepfake voice call of US president Joe Biden, the AI-audio startup struck a deal with Reality Defender to mitigate potential misuse. [...] "We don't ask my 80-year-old mother to flag ransomware in an email," says Colman. "Because she's not a computer science expert." In the future, it's possible real-time video authentication -- if AI detection continues to improve and proves reliably accurate -- will be as taken for granted as that malware scanner quietly humming along in the background of your email inbox.

Comments Filter:
  • You know, introduce the detection mechanism, introduce the counter measures, cash in twice.

  • The problem with any technology like this that can be run cheaply by the end user is that the more advanced attackers can just take that software and train models to specifically trick it. Sure, maybe it catches the low-effort attacks, but at the cost of potentially making the more advanced attacks seem more legitimate when they don't trigger the fake detection.

    The real solution is the same one we used back before photography, audio and video were common and people could pretend to be anyone they wanted in
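The evasion attack this comment describes can be sketched in a few lines. This is a toy illustration, not any real detector: the "detector" here is a hand-picked linear scorer, and the attacker is assumed to have full access to its weights -- the FGSM-style gradient step is the standard technique for this kind of evasion.

```python
import numpy as np

# Toy "deepfake detector": a fixed linear scorer over a 16-d feature vector.
# (Illustrative only -- real detectors are deep networks, but the principle
# is the same: an attacker with access to the detector can follow its
# gradient to push a fake sample below the detection threshold.)
w = np.linspace(-1.0, 1.0, 16)          # detector weights (known to attacker)

def flagged_as_fake(x: np.ndarray) -> bool:
    return float(w @ x) > 0.0           # positive score -> flagged as fake

fake = 0.5 * np.sign(w)                 # a sample the detector catches
print(flagged_as_fake(fake))            # True: detected

# FGSM-style evasion: step against the score's gradient (which is just w
# for a linear model), leaving the sample mostly unchanged.
epsilon = 1.0
evasive = fake - epsilon * np.sign(w)
print(flagged_as_fake(evasive))         # False: same fake, now undetected
```

The uncomfortable part is exactly what the comment notes: the cheaper and more accessible the detector, the easier it is for an attacker to use it as a training signal.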

  • Actually, it occurs to me that there is a technological solution to this problem. Simply have camera makers sign their output using some kind of secure hardware key, so the receiver can verify that the video is the input as seen by the camera on an X laptop or whatever. Of course, you still need to guard against attacks that stick a screen in front of the camera, but that's doable if the camera has any focusing information or uses IR to reconstruct 3D information.

    I'm sure there will be all sorts

    • by gweihir ( 88907 )

      Forget about IoT. Seriously. Cameras would need to be secure devices for this. They are not.
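The signed-camera idea in the thread above can be sketched as follows. This is a minimal model, not a real attestation scheme: actual hardware would use an asymmetric key held in a secure element (the reply's point is that commodity cameras have no such element), whereas this sketch uses stdlib HMAC with a shared secret purely to show the sign-then-verify flow.

```python
import hmac
import hashlib
import os

# Assumption: the camera holds a secret key that never leaves the device.
# Real designs would use an asymmetric attestation key in secure hardware;
# HMAC stands in here only to keep the sketch in the standard library.
CAMERA_KEY = os.urandom(32)

def camera_capture(frame: bytes) -> tuple[bytes, bytes]:
    """Camera signs each frame as it leaves the sensor."""
    tag = hmac.new(CAMERA_KEY, frame, hashlib.sha256).digest()
    return frame, tag

def receiver_verify(frame: bytes, tag: bytes) -> bool:
    """Receiver checks the frame was not altered after capture."""
    expected = hmac.new(CAMERA_KEY, frame, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

frame, tag = camera_capture(b"raw sensor data for frame 0")
print(receiver_verify(frame, tag))               # True: authentic frame
print(receiver_verify(b"deepfaked frame", tag))  # False: substituted frame
```

Note this only proves the bytes came from the camera, which is why the screen-in-front-of-the-lens attack and the insecurity of the camera itself remain the hard problems.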

  • Well, they want to sell something in the "AI" space, so "scam" is the normal approach.
