Free AI Programs Prone To Security Risks, Researchers Say (bloomberg.com)
Companies rushing to adopt hot new types of artificial intelligence should exercise caution when using open-source versions of the technology, some of which may not work as advertised or include flaws that hackers can exploit, security researchers say. From a report: There are few ways to know in advance if a particular AI model -- a program made up of algorithms that can do such things as generate text, images and predictions -- is safe, said Hyrum Anderson, distinguished engineer at Robust Intelligence, a machine learning security company that lists the US Defense Department as a client. Anderson said he found that half the publicly available models for classifying images failed 40% of his tests. The goal was to determine whether a malicious actor could alter the outputs of AI programs in a manner that could constitute a security risk or provide incorrect information. Often, models use file types that are particularly prone to security flaws, Anderson said. It's an issue because so many companies are grabbing models from publicly available sources without fully understanding the underlying technology, rather than creating their own. Ninety percent of the companies Robust Intelligence works with download models from Hugging Face, a repository of AI models, he said.
Oh boy... (Score:5, Funny)
Obligatory XKCD: https://xkcd.com/2228/
In theory (Score:5, Insightful)
With open source you can find out about the bugs at the same time as the bad guys.
With closed source, you'll find out after you're burned - if you can afford the lawsuit against the software vendor.
Re: (Score:2)
In this case, closed source means in the cloud. The security vulnerabilities they're talking about here are malicious code being embedded in python pickle formatted checkpoints.
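For anyone who hasn't seen why pickle checkpoints are dangerous, here's a minimal sketch (my own illustration, not from TFA): unpickling invokes whatever a class's __reduce__ returns, so merely loading a "model" file can execute attacker code. The echo command is a harmless stand-in for an actual payload.

import os
import pickle

# A "checkpoint" that runs code on load: pickle calls the callable
# returned by __reduce__ during deserialization.
class MaliciousCheckpoint:
    def __reduce__(self):
        # Harmless stand-in; a real attack could run anything here.
        return (os.system, ("echo pwned",))

blob = pickle.dumps(MaliciousCheckpoint())
pickle.loads(blob)  # executes `echo pwned` while "loading the model"

This is why tensor-only formats like safetensors, and loaders that refuse arbitrary objects (e.g., torch.load with weights_only=True on recent PyTorch), have been catching on.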
tl;dr: Don't download stable diffusion, pay us, the trustworthy saviors of humanity, OpenAI to use our image generator instead.
Re: (Score:2)
tl;dr: Don't download stable diffusion, pay us, the trustworthy saviors of humanity, OpenAI to use our image generator instead.
Yep. My comment on the other article [slashdot.org] didn't last 12 hours before this article's "warning" showed up....
"Beware of he who would deny you access to information, for in his heart he dreams himself your master." - Commisioner Pravin Lal.
Re: In theory (Score:2)
This is true with source code -- even a relatively large code base. But a large language model can be terabytes upon terabytes of data, and it's unlikely to have been scoured with the diligence of, say, the Linux kernel.
Re: (Score:2)
Can't they run it through an AI to check for security flaws?
And so it begins. (Score:5, Interesting)
AI, such as it is today, takes massive amounts of horsepower, which is barrier to entry number 1. All the yelling about controls from the massive corporations hoping to profit from AI today is, or likely soon will be once governments take it seriously enough, barrier to entry number 2. "Open source is bad, mmmmkay," is barrier to entry number 3, and the real kick to the crotch for those of us who would have liked to see this particular corner of technology not become just one more possibility used to pull yet more money away from people and consolidate it in the hands of the few.
It's not like AI itself isn't filled with scary possibilities. Let's do everything we can to keep it from being open and available without oversight from our corporate overlords, please. We've seen over the centuries that big money interests always do what is right for the overall population, so they're the ones best equipped to decide for us what's right and wrong on this front as well.
I thought even the press grew tired of the "open source = worthless because it might have bugs" rhetoric a long while back. Because closed source has such a stellar track record lately. Glad to see some layers of tech bullshit never go out of style.
Less than 100% accuracy is not a security risk (Score:5, Interesting)
There are no AI models that are 100% accurate. Yet, there are quite a few software systems that incorporate these less than perfect AI models. The key is the realization that the AI models are only part of the pipeline and that succeeding stages of the pipeline are needed to compensate for the expected AI model inaccuracy. For example, for an autonomous vehicle processing camera images at 30 fps, if the AI object detection were 99% accurate (which is really good), then an error would be expected every few seconds. A tracker that compensates for that inaccuracy is indispensable.
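To make the arithmetic concrete (my own sketch, not from the parent): at 30 fps a 99%-per-frame detector errs about once every 1 / (30 * 0.01) = 3.3 seconds, and even a crude majority vote over a short window absorbs those one-frame glitches.

from collections import Counter, deque

class MajorityVoteTracker:
    # Report the majority label over the last `window` frames.
    def __init__(self, window=15):  # half a second at 30 fps
        self.history = deque(maxlen=window)

    def update(self, label):
        self.history.append(label)
        return Counter(self.history).most_common(1)[0][0]

tracker = MajorityVoteTracker()
labels = ["car"] * 10 + ["truck"] + ["car"] * 10  # one-frame glitch
print([tracker.update(l) for l in labels][-1])  # "car" -- glitch absorbed

A real tracker would fuse positions and velocities rather than vote on labels, but the principle is the same: downstream stages exist precisely because the model alone is never 100% accurate.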
Now, there is academic research on adversarial attacks, where the inputs to the perception system are slightly altered to produce security-critical effects. However, the required attack models are usually a bit extreme and arguably not practical, i.e., hard to pull off in the real world or requiring enough system access that the system has already been compromised. But even the vulnerability aspects of these attacks are inherent in the models themselves and have nothing to do with the purchase price or permissiveness of the software license.
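For the curious, the textbook example from that literature is the fast gradient sign method (FGSM); a minimal PyTorch sketch (my illustration, the parent names no specific attack):

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # Nudge the input in the direction that increases the loss,
    # keeping the perturbed image in the valid [0, 1] pixel range.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

Note that the attacker needs gradients, i.e., white-box access to the model, which is exactly the "requires enough system access" caveat above.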
Re: (Score:2)
Well then, that went well. Clearly I wasn't properly caffeinated. Ignore me everyone. They were talking about free, not open source. Funny how my mind went right there. Meh. Long day.
Re: (Score:2)
Actually, it is a security risk when used in security-critical applications. The first problem is that it is hard to find out when exactly it is hallucinating or giving false information, such as not detecting an attack. The attacker-defender asymmetry applies: an attacker needs only one vulnerability to get in; a defender needs to fix all of them (or at least all that an attacker can potentially find). Second, an attacker can introduce obscure or bizarre cases that nobody expects but that the attacker can later exploit.
Typical Office worker prone to security risks... (Score:3)
Research shows!
This just in (Score:3, Interesting)
Corporations are terrified that you may take advantage of new technology and AI without paying th... I mean, without due consideration for your safety.
Of course they are prone to security problems (Score:3)
All your data is kept on the servers of the company whose service you are using. Not only is that a hacking risk, it's a privacy risk as well. Until versions are available that can run and store data locally, companies will be vulnerable to hacks of their vendor, or to plain snooping by their vendor to make sure their use aligns with what the vendor wants to present.
Closed Source company is worried .... (Score:3)
.... and so says open source is unsafe and they can prove it ...
then fails to show that their closed-source system is any safer.
All over again (Score:2)
Straight from Microsoft's 1999 FUD 101 textbook.
And closed commercial is better? I think not. (Score:2)
The only difference is that finding these flaws in closed commercial stuff is harder. The same risks exist there, and because people think they can hide them, they are often worse.