Security AI

Researchers Hid Malware Inside An AI's 'Neurons' And It Worked Scarily Well (vice.com) 44

According to a new study, malware can be embedded directly into the artificial neurons that make up machine learning models in a way that keeps it from being detected. The neural network would even be able to continue performing its set tasks normally. Motherboard reports: "As neural networks become more widely used, this method will be universal in delivering malware in the future," the authors, from the University of the Chinese Academy of Sciences, write. Using real malware samples, their experiments found that replacing up to around 50 percent of the neurons in the AlexNet model -- a benchmark-setting classic in the AI field -- with malware still kept the model's accuracy rate above 93.1 percent. The authors concluded that a 178MB AlexNet model can have up to 36.9MB of malware embedded into its structure, using a technique called steganography, without being detected. Some of the models were tested against 58 common antivirus systems and the malware was not detected.

Other methods of hacking into businesses or organizations, such as attaching malware to documents or files, often cannot deliver malicious software en masse without being detected. The new research, on the other hand, envisions a future where an organization may bring in an off-the-shelf machine learning model for any given task (say, a chat bot, or image detection) that could be loaded with malware while performing its task well enough not to arouse suspicion. According to the study, this is because AlexNet (like many machine learning models) is made up of millions of parameters and many complex layers of neurons including what are known as fully-connected "hidden" layers. By keeping the huge hidden layers in AlexNet completely intact, the researchers found that changing some other neurons had little effect on performance.

According to the paper, in this approach the malware is "disassembled" when embedded into the network's neurons, and assembled into functioning malware by a malicious receiver program that can also be used to download the poisoned model via an update. The malware can still be stopped if the target device verifies the model before launching it, according to the paper. It can also be detected using "traditional methods" like static and dynamic analysis. "Today it would not be simple to detect it by antivirus software, but this is only because nobody is looking in there," cybersecurity researcher and consultant Dr. Lukasz Olejnik told Motherboard. Olejnik also warned that the malware extraction step in the process could also risk detection. Once the malware hidden in the model was compiled into, well, malware, then it could be picked up. It also might just be overkill.
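
For readers wondering what "embedding malware into neurons" looks like mechanically, here is a minimal numpy sketch of the underlying steganographic idea: payload bytes are hidden in the low-order byte of each float32 weight, where a small perturbation barely changes the model's behavior. This is an illustration of the principle only, not the paper's exact neuron-replacement scheme, and the tensor and payload below are made up.

    import numpy as np

    def embed(weights, payload):
        """Hide payload bytes in the least-significant byte of each float32 weight
        (assumes a little-endian host). Returns a perturbed copy of the tensor."""
        out = weights.astype(np.float32).flatten()
        raw = out.view(np.uint8).reshape(-1, 4)          # 4 bytes per float32
        if len(payload) > raw.shape[0]:
            raise ValueError("payload too large for this tensor")
        raw[:len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
        return out.reshape(weights.shape)

    def extract(weights, length):
        """Read the hidden bytes back out of a tensor produced by embed()."""
        raw = weights.astype(np.float32).flatten().view(np.uint8).reshape(-1, 4)
        return raw[:length, 0].tobytes()

    w = np.random.randn(1000).astype(np.float32)   # stand-in for a weight tensor
    secret = b"pretend this is a malware blob"
    w2 = embed(w, secret)
    print(extract(w2, len(secret)))                # recovers the payload
    print(np.abs(w2 - w).max())                    # the weights barely moved

The paper's actual approach swaps in whole neurons' worth of weights rather than single low-order bytes, which buys more capacity (36.9MB in a 178MB model) at the cost of a small accuracy drop, but the carrier principle is the same.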

This discussion has been archived. No new comments can be posted.

  • Scarily (Score:5, Funny)

    by phantomfive ( 622387 ) on Friday July 23, 2021 @02:15AM (#61610659) Journal

    When "Scarily" is in the headline, the whole story can be disregarded. -Grisham's law

    • Re:Scarily (Score:4, Informative)

      by ShanghaiBill ( 739463 ) on Friday July 23, 2021 @02:18AM (#61610661)

      Indeed. This isn't very scary. TFA says that if you download and run untrusted software, it may do untrustworthy things.

      • by vivian ( 156520 )

        So this technique is storing encoded malware in a neural network which is then extracted by some process and reassembled into functioning malware.

        How is this any different from encoding malware using say, tabs and spaces as whitespace in a bunch of emails and doing the same thing?
        More importantly, if you already have a vector for getting the malware-reconstruction software / compiler into your target's network, why the hell do you need the Rube Goldberg technique of steganographically encoding the malware in a model in the first place?

        • by jythie ( 914043 )
          So one thing that actually would concern me here: I am still pretty new to the various tricks and language abuses that packages like TensorFlow go through to work, but full models can include more than simply weights and topology; they can also include custom code blocks for how to handle things within nodes. Meaning that there is a possible vector for including executable code in the model itself. It doesn't sound like they did this, and it might not even be possible in every format, but as long as the models can include blocks like that, the vector is there.
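
          The concern above is real in at least one common setup: Keras Lambda layers (and pickle-based formats such as PyTorch's default serialization) let a model graph carry Python that runs whenever the model is built or called, which is why the Keras documentation warns against loading Lambda-bearing models from untrusted sources. A harmless sketch of the idea, assuming TensorFlow is installed (the layer and its side effect are made up for illustration):

            import tensorflow as tf

            def looks_like_identity(x):
                # Arbitrary Python runs here on every forward pass.
                # A hostile model could do far worse than print.
                print("side effect from inside a model node")
                return x

            model = tf.keras.Sequential([
                tf.keras.Input(shape=(4,)),
                tf.keras.layers.Lambda(looks_like_identity),
                tf.keras.layers.Dense(2, activation="softmax"),
            ])

            model(tf.zeros((1, 4)))   # normal-looking inference triggers the side effect
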
      • Indeed. This isn't very scary.

        Whoosh.

    • Thaaat's an easy-to-abuse rule... ;)

      Wanna make phantomfive ignore something, no matter what? Put "Scarily" in the headline.
      Like when phantomfive's home is about to be struck by a deadly natural disaster that actually *is* scary. ^^

      Who makes up such stupid blind rules anyway?

      • Who makes up such stupid blind rules anyway?

        You?

      • Wanna make phantomfive ignore something, no matter what? Put "Scarily" in the headline.

        What did you say here? I missed it.

        Like when phantomfive's home is about to be struck by a deadly natural disaster that actually *is* scary. ^^

        Woah, scary natural disasters, I see what you're saying.

  • Meh, so you need to already have undetected malicious software running, and this is just a delivery method for new code that is likely to be detected when it runs. It doesn't sound particularly useful unless your malware needs regular updates, piggybacking on some other software's updates which you compromised.

    It made me think though, is there a way to detect malware actually embedded in the functionality of a neural net, since neural nets are essentially black boxes?
    You could create a brainwashed Manchurian Candidate neural net that does something useful like recognize people in photos, but then once in a while if something like a credit card number comes up, it would send that to its creator.

    • You could create a brainwashed Manchurian Candidate neural net that does something useful like recognize people in photos, but then once in a while if something like a credit card number comes up, it would send that to its creator.

      No. This would not work. A NN is configured as a classifier, or probability predictor, or whatever. So if you have 100 employees and you show it a photograph, it will return a "softmax" array with a probability that the photograph is the image of each employee.

      There would be no mechanism for it to do anything different. If it detected a CC number, there is no plausible way to represent that in a softmax output. Even if there was, there would be no way for a NN to send an email, or connect to the Internet, or to run untrusted software.
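
      For the record, the "softmax" array in question is nothing more than a vector of class probabilities; a toy version with five made-up employees and made-up scores:

        import numpy as np

        def softmax(scores):
            scores = scores - scores.max()   # for numerical stability
            e = np.exp(scores)
            return e / e.sum()

        logits = np.array([2.1, 0.3, -1.0, 4.2, 0.5])   # final-layer scores, one per employee
        probs = softmax(logits)
        print(probs, probs.sum())                       # probabilities that sum to 1 -- and nothing else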

      • Yeah, TFA isn't scary. The scary thing is AI controlling something important. What about a self-driving car that has been trained to drive into a wall when it sees a special predetermined object? That could be hidden for years with no one knowing, just waiting for activation. It would be the perfect sleeper agent. You can't really audit a neural net to detect such malicious intent.

        • by gweihir ( 88907 )

          Yeah, TFA isn't scary. The scary thing is AI controlling something important. What about a self-driving car that has been trained to drive into a wall when it sees a special predetermined object? That could be hidden for years with no one knowing, just waiting for activation. It would be the perfect sleeper agent. You can't really audit a neural net to detect such malicious intent.

          That is why in a good design, the Artificial Idiot is only one of the things contributing to the driving decisions. There are other systems, safety interlocks and plausibility checks; the AI is not able to simply drive things into a wall. Example: Use AI to identify a street sign. Once you have identified it, normalize position and size and verify using cross-correlation with a reference image. Verification is usually much, much simpler and much more reliable than identification.

          Of course, if this redundancy is missing, all bets are off.
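
          For what it's worth, the verification step described above can be as simple as zero-mean normalized cross-correlation against a reference template; a rough numpy sketch (the 0.8 threshold and the template name are placeholders, not anything from an actual system):

            import numpy as np

            def ncc(candidate, reference):
                """Zero-mean normalized cross-correlation of two equal-sized grayscale patches, in [-1, 1]."""
                a = candidate.astype(np.float64) - candidate.mean()
                b = reference.astype(np.float64) - reference.mean()
                denom = np.sqrt((a * a).sum() * (b * b).sum())
                return float((a * b).sum() / denom) if denom else 0.0

            # hypothetical usage: `crop` is the sign region the classifier found,
            # already warped to the reference template's size and orientation
            # accept = ncc(crop, stop_sign_template) > 0.8
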

          • To verify you need another neural net; you simply can't code that manually.
            • by gweihir ( 88907 )

              Nonsense. Please retake "Algorithms 101". Verification given a result candidate is much, much easier for a lot of questions than finding that result candidate in the first place.

      • Even if there was, there would be no way for a NN to send an email, or connect to the Internet, or to run untrusted software.

        That's exactly what the NN wants you to believe.

      • by jythie ( 914043 )
        In a general sense you are right, but in a more targeted sense this might not be a safe assumption. ML frameworks can include the ability to execute arbitrary (if limited) user-supplied code within individual nodes, and someone could potentially craft a node that gains access to the rest of the library and thus the interpreter itself (so yeah, I am thinking Python+TensorFlow or something). So you would need a separate softmax looking for the trigger condition and tie its output to a node that executes a custom code block.
    • It made me think though, is there a way to detect malware actually embedded in the functionality of a neural net, since neural nets are essentially black boxes?

      Not really, but there's also no way to deliberately hide malware functionality there that's unique to neural nets, either. If you hide it in the code, it's no better hidden than it would be in any other code. If you hide it in the training data, it's no more reliably triggered than any other behavior: the network might well learn to do something else when your trigger is invoked.

  • How is this different from steganographic concealment in media like images or videos, besides being more difficult? Other than containing cool words like "neural network", that is. I somehow doubt that it is more practical.
  • by k2r ( 255754 ) on Friday July 23, 2021 @02:53AM (#61610729)

    So one can store data in a neural network without disturbing it too much, and some external software can read the data and make use of it?
    Basically it’s steganography in neural networks, and something needs to extract and execute the hidden payload - it would be amazing if this wasn’t possible, but it’s not scary that it is.

  • So much for "works scarily well"
  • by mysidia ( 191772 ) on Friday July 23, 2021 @04:37AM (#61610833)

    ... , and assembled into functioning malware by a malicious receiver program that can also be used to download the poisoned model via an update.

    OK... So wouldn't you say that writing executable-file or script content specified by some random bit of user data is inherently suspicious, if not malicious, in itself?

    The AI model is not special in this case; it's just a cute way of concealing data in transit. But you still require a malicious payload to trigger an automatic extraction and execution of that data.

    By the same token, you could use steganography to conceal malware in the pixels of a folder of JPEGs, or in PDF documents or other data files or program files.

  • Sadly, the headline seems misleading. I assumed the malware would actually be *running* inside the neural net.
    But this just seems to be a case of Johnny Mnemonic: transporting data inside a neural net at the expense of some resolution. (Like going from 4K to 2K might seem like only a marginal change to the viewer, even though 75% of the pixels are removed.)

    • by gweihir ( 88907 )

      Well, there are two aspects to get malware running on a system:
      1) getting it in there
      2) getting it started

      Somewhat surprisingly, it seems 1) is harder, because AV software looks primarily for this. For 2) it is often enough to make any code on the target machine or process do a somewhat random jump if you do not need 100% success. There was an attack on JavaCards some years ago, where the attacker simply stored the malware code on the card and then heated it up until it started to perform a bit erratically.

  • I guess it was only a question of time. Since Artificial Ignorance is characterized by "teaching" and nobody knows what is actually in there, this one will be a classic soon. Next step: Manipulate the training data so that it creates that malware when used in training!

    This can also be used to hide other things. For example, hide a copy of something illegal to have in there and then later threaten those that use it with revealing its presence. Or even better, put in a copy of Mickey Mouse. Then everybody using that model is suddenly infringing Disney's copyright.

  • You can use steganography to encode arbitrary messages into arbitrary datasets, as long as the format is somewhat forgiving about the precise bit values.

    Neural networks can tolerate low-order-data-bits being flipped, no problem.

    The article makes it sound as if there is a real problem, a real new finding. People who are not as tech-savvy as most people on Slashdot will think downloading a neural-net-based application carries the risk of "getting infected" with something.

    I first thought: wow, interesting, did they make the network itself execute the payload? But no.

  • Actually, this opens the door to completely new kinds of attacks. Take biometrics, for example: a phone unlocks to its owner's face, but all phones open to Dr. Evil.
    • How about putting fake information in a widely circulated book? Would that be a similar kind of attack, on humans? Especially if it's not clear from the immediate context that the information is malicious.
  • I want malware that is designed to corrupt the model to give advantage to some favored group. For example, an insurance customer rating program that had malware which recognized Methodists and gave them favorable rates.
    Or there is the AI for criminal sentencing guidance.
    https://www.technologyreview.c... [technologyreview.com]

    Perhaps the AI could be corrupted in such a way that violators from Wall Street always get no jail time, while San Francisco's financial criminals always get the max sentence.

  • This sounds suspiciously like gain-of-function research as is used in scientific study of viruses. There is some questioning as to whether or not COVID-19 is the result of a gain-of-function experiment that escaped the lab.
  • Most people treat these models as black boxes and don't even think to, or know how to, look under the hood.

    Of course you can hide anything you want in there.

    It's the stupid version of that self-replicating compiler backdoor from the 70s.

  • ...if it hears voices in its processor?

  • Data poisoning attacks work on a variety of models, not just NNs. There are some excellent papers out there showing how to poison a linear model. The scenario of encoding a malware application is only the beginning. A good attack can manipulate the outcome of the model in favor of a given solution. So it's about time people started paying more attention to model security.
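
    A tiny illustration of the point about linear models, with made-up numbers: a handful of high-leverage poison points is enough to drag a least-squares fit well away from the true slope.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(-1, 1, size=200)
      y = 2.0 * x + rng.normal(scale=0.1, size=200)      # clean data: y is roughly 2*x

      def fit_slope(x, y):
          A = np.column_stack([x, np.ones_like(x)])
          slope, _intercept = np.linalg.lstsq(A, y, rcond=None)[0]
          return slope

      # five adversarial points far from the data, with wildly wrong labels
      x_bad = np.concatenate([x, np.full(5, 3.0)])
      y_bad = np.concatenate([y, np.full(5, -30.0)])

      print(fit_slope(x, y))          # close to 2
      print(fit_slope(x_bad, y_bad))  # dragged far from 2 by just five points
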
  • This only demonstrates that writing catchy headlines brings attention. I could easily counter by saying "nothing to see here", but technically there is some real finding. It is just small, and not as earth-shattering as the title suggests.

    Yes, we can embed data in pretty much any container. We had steganography in JPEGs, and we had real live viruses in Word documents. (For contrast, this "hidden malware" is nothing but data in a carrier medium; the neural network will not execute it.) We can do it in pretty much all file formats.

    • I am reminded of the early 1990s. Quick infodump:

      Back in the days of classic Mac OS (System 1.0 through 7.x), when a SCSI hard drive was detected, the OS would go to a part of the disk and load in a chunk of code to figure out what to do next with the drive. Initially, only Apple's drives shipped with a driver, but as newer external drives came to market, each drive maker had to write code to make their drive mountable and usable. Some standard hard disk driver utilities emerged, like La Cie's Silverlining and FWB's Hard Disk Toolkit.

  • 25 years ago we declared success if our Internet technology worked at all. Now we sneer at old insecure code as if we should have known better (we really couldn't).

    Neural networks and machine learning are at that stage of development today. Poisoning neural nets and ML systems with bad training examples can be an intractable problem.

    Unlike decades before, feigning ignorance about the security implications of these systems isn't just ignorant - it's dangerous.
  • This sounds like the artificial intelligence version of The Manchurian Candidate....

    Welcome to the sleeper hit of 2021.

  • Filling AIs with malware. QAnon already beat them to the punch.

  • 1. AlexNet learns a particularly inefficient representation of the training data. Do more modern models, in particular ones that are fully convolutional (e.g. no fully connected layers), have similar problems?

    2. How does this steganography technique work in the face of weight quantization and pruning?
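
    On question 2, the intuition is that anything hidden in the low-order bits should not survive aggressive quantization. A quick numpy sanity check of that intuition (generic 8-bit symmetric quantization with a made-up scale, not any particular framework's scheme):

      import numpy as np

      rng = np.random.default_rng(0)
      w = rng.normal(scale=0.05, size=4096).astype(np.float32)

      # hide a payload in the least-significant byte of each weight (little-endian host)
      payload = np.frombuffer(b"pretend this is a payload", dtype=np.uint8)
      w.view(np.uint8).reshape(-1, 4)[:len(payload), 0] = payload

      # 8-bit symmetric quantize / dequantize
      scale = float(np.abs(w).max()) / 127.0
      w_dq = (np.round(w / scale).astype(np.int8).astype(np.float32) * scale).astype(np.float32)

      recovered = w_dq.view(np.uint8).reshape(-1, 4)[:len(payload), 0].tobytes()
      print(recovered == payload.tobytes())   # almost certainly False: the hidden bytes are gone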
