
Startup Can Identify Deepfake Video In Real Time (wired.com)

An anonymous reader quotes a report from Wired: Real-time video deepfakes are a growing threat for governments, businesses, and individuals. Recently, the chairman of the US Senate Committee on Foreign Relations mistakenly took a video call with someone pretending to be a Ukrainian official. An international engineering company lost millions of dollars earlier in 2024 when one employee was tricked by a deepfake video call. Also, romance scams targeting everyday individuals have employed similar techniques. "It's probably only a matter of months before we're going to start seeing an explosion of deepfake video, face-to-face fraud," says Ben Colman, CEO and cofounder at Reality Defender. When it comes to video calls, especially in high-stakes situations, seeing should not be believing.

The startup is laser-focused on partnering with business and government clients to help thwart AI-powered deepfakes. Even with this core mission, Colman doesn't want his company to be seen as more broadly standing against artificial intelligence developments. "We're very pro-AI," he says. "We think that 99.999 percent of use cases are transformational -- for medicine, for productivity, for creativity -- but in these kinds of very, very small edge cases the risks are disproportionately bad." Reality Defender's plan for the real-time detector is to start with a plug-in for Zoom that can make active predictions about whether others on a video call are real or AI-powered impersonations. The company is currently working on benchmarking the tool to determine how accurately it discerns real video participants from fake ones. Unfortunately, it's not something you'll likely be able to try out soon. The new software feature will only be available in beta for some of the startup's clients.

As Reality Defender works to improve the detection accuracy of its models, Colman says that access to more data is a critical challenge to overcome -- a common refrain from the current batch of AI-focused startups. He's hopeful more partnerships will fill in these gaps and, without offering specifics, hints at multiple new deals likely coming next year. After ElevenLabs was tied to a deepfake voice call of US president Joe Biden, the AI-audio startup struck a deal with Reality Defender to mitigate potential misuse. [...] "We don't ask my 80-year-old mother to flag ransomware in an email," says Colman. "Because she's not a computer science expert." In the future, it's possible real-time video authentication -- if AI detection continues to improve and proves to be reliably accurate -- will be as taken for granted as that malware scanner quietly humming along in the background of your email inbox.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • You know, introduce the detection mechanism, introduce the countermeasures, cash in twice.

  • More Harm Than Good? (Score:5, Interesting)

    by logicnazi ( 169418 ) <gerdesNO@SPAMinvariant.org> on Thursday October 17, 2024 @05:34AM (#64871461) Homepage

    The problem with any technology like this that can be run cheaply by the end user is that the more advanced attackers can just take that software and train models to specifically trick it. Sure, maybe it catches the low-effort attacks, but at the cost of potentially helping the more advanced attacks seem more legitimate when they don't trigger the fake detection.

    The real solution is the same one we used back before photography, audio, and video were common and people could pretend to be anyone they wanted in a letter. People need to be skeptical and authenticate interactions in other ways -- be it via shared knowledge or cryptography (see the sketch after this comment).

    ---

    Yes, if you only run the detection on a server and limit reporting -- for instance, only reporting a fake/non-fake determination after several minutes of video -- and don't share information about how the technology works, adversarial training might be difficult, but that has its own problems. If researchers can't put the reliability of the security to the test, there is every incentive to just half-ass it, and eventually attackers will figure out the vulnerabilities, like most security through obscurity.
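For illustration, here is a minimal Python sketch of the shared-knowledge authentication the comment above suggests, framed as an HMAC challenge-response: the caller proves possession of a pre-shared secret by answering a fresh challenge. Everything here -- the secret, the function names -- is a hypothetical assumption for the sketch, not anything from the article or Reality Defender's product.

```python
# Illustrative sketch only: challenge-response over a pre-shared secret,
# one concrete form of "shared knowledge or cryptography".
import hashlib
import hmac
import secrets

shared_secret = b"exchanged in person beforehand"   # hypothetical pre-shared key

def make_challenge() -> bytes:
    # Fresh random nonce so a recorded answer can't be replayed later.
    return secrets.token_bytes(16)

def respond(challenge: bytes) -> bytes:
    # The genuine counterpart computes this; a deepfake without the secret cannot.
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
print(verify(challenge, respond(challenge)))   # True only for the real counterpart
```

The fresh nonce is what makes this stronger than a fixed code word: without it, an impersonator could simply replay a response recorded from an earlier genuine call.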

  • Actually, it occurs to me that there is a technological solution to this problem. Simply have camera device makers sign their output using some kind of secure hardware key so the receiver can verify that the video was the input as seen by the camera on an X laptop or whatever. Of course, you still need to guard against attacks that stick a screen in front of a camera, but that's doable if the camera has any focusing information or uses IR lines to reconstruct 3D information. (See the sketch after this thread.)

    I'm sure there will be all sorts [...]

    • by gweihir ( 88907 )

      Forget about IoT. Seriously. Cameras would need to be secure devices for this. They are not.

    • Actually, it occurs to me that there is a technological solution to this problem. Simply have camera device makers sign their output

      Simply? Oh no, definitely not simply. When you have a trusted device, that's the device people will want to attack, and the entities with an interest in it will be nations, so they will have The People's money to spend attacking it. And since we're talking about commercial devices, they will be able to buy their own and mount attacks on them.
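For illustration, a minimal Python sketch of the per-frame signing scheme proposed in this thread, assuming an Ed25519 device key. The key is simulated in software here; in a real camera it would have to live in the secure hardware the replies are rightly skeptical about. The function names and key handling are hypothetical, not any vendor's actual API.

```python
# Illustrative sketch only: a camera signs each frame with a device key,
# and the receiver verifies against the maker's published public key.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

camera_key = ed25519.Ed25519PrivateKey.generate()  # stand-in for a hardware key
public_key = camera_key.public_key()               # published by the camera maker

def sign_frame(frame: bytes, index: int) -> bytes:
    # Bind the signature to the frame's position so frames can't be reordered.
    digest = hashlib.sha256(index.to_bytes(8, "big") + frame).digest()
    return camera_key.sign(digest)

def verify_frame(frame: bytes, index: int, signature: bytes) -> bool:
    # Receiver-side check: any tampering with frame or index fails verification.
    digest = hashlib.sha256(index.to_bytes(8, "big") + frame).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

frame = b"\x00" * 1024                      # stand-in for raw sensor output
sig = sign_frame(frame, 0)
print(verify_frame(frame, 0, sig))          # True
print(verify_frame(b"tampered", 0, sig))    # False
```

Binding the frame index into the signed digest is one way to keep an attacker from replaying or reordering genuinely signed frames, though it does nothing about the screen-in-front-of-the-camera attack the parent comment mentions.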

  • Well, they want to sell something in the "AI" space, so "scam" is the normal approach.

  • Yet another arms race, then.
    • by mjwx ( 966435 )

      Yet another arms race, then.

      Like trying to build a better battleship when the aircraft carrier will make it obsolete.

      I think what will happen is that people will learn to distrust videos from the internet, the same way young people have learned not to believe what's in the papers -- doubly so if said paper is owned by Murdoch.

  • Why this is BS (Score:4, Interesting)

    by CEC-P ( 10248912 ) on Thursday October 17, 2024 @07:41AM (#64871627)
    Just remember, anyone wanting to truly fake a video just needs to hire this company and run it through until it passes. It's called "adversarial training" or something like that (see the toy sketch after this thread). That's why there will never, ever be a true AI detection platform that works at scale, and this is just flashy nothing to scam investors into throwing money at them.
    • by Rinnon ( 1474161 )
      Couldn't you say the same thing about anti-virus software? Let us not make "perfect" the enemy of "good." Perhaps there never will be a perfect AI detection platform, and perhaps what it can detect today will evade it next year, so it will need to update and adapt, but that doesn't mean it's worthless.
      • by CEC-P ( 10248912 )
        That's why Defender was always a joke until it implemented AI, heuristics, and behavioral detection instead of just normal pre-2010 AV methods.
    • "There should never be seat belts because some people will die even if they wear it."

      Standard moron pablum. Yes, some deepfakes will be so sophisticated they avoid detection. But not all.
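To make the objection above concrete, here is a toy Python sketch of the "run it through until it passes" loop. The detector is a trivial stand-in (Reality Defender's actual model isn't public); the point is only that any detector an attacker can query repeatedly can be hill-climbed against.

```python
# Toy illustration only: perturb a fake sample until a stand-in detector
# stops flagging it. Nothing here resembles a real deepfake detector.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(1.0, 0.1, size=64)   # stand-in for a fake clip's features
THRESHOLD = 0.5

def detector_score(x: np.ndarray) -> float:
    # Placeholder "fakeness" score; an attacker would query the real service.
    return float(np.abs(x).mean())

steps = 0
while detector_score(features) > THRESHOLD:
    features = features - np.sign(features) * 0.01   # nudge against the score
    steps += 1

print(f"passed after {steps} tweaks, score={detector_score(features):.3f}")
```

Rate-limiting queries or returning only delayed, coarse verdicts -- as an earlier comment notes -- raises the cost of exactly this loop, at the price of making the system harder to audit.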

  • The problem is when true videos, inconvenient to one side of the spectrum, are automatically/accidentally suppressed across a wide range of platforms.

    Case in point: the Hunter Biden laptop story, suppressed across social media at a crucial moment before an election, which later turned out to be true. They may apologise later, but the damage is done; it can't be undone.

    China probably also has such an automated system for ensuring "social harmony".
  • I'm giving up on modern communications. I'm going back to traveling criers, bards and paintings.

  • As the people who run the chatbots that generate deepfakes are attacked, and the datacenters destroyed, by people who have had their lives screwed by them.

