
Scientists Help Artificial Intelligence Outsmart Hackers (sciencemag.org) 61

sciencehabit shares a report from Science Magazine: A hacked message in a streamed song makes Alexa send money to a foreign entity. A self-driving car crashes after a prankster strategically places stickers on a stop sign so the car misinterprets it as a speed limit sign. Fortunately, these haven't happened yet, but hacks like this, sometimes called adversarial attacks, could become commonplace -- unless artificial intelligence (AI) finds a way to outsmart them. Now, researchers have found a new way to give AI a defensive edge. The work could not only protect the public; it also helps reveal why AI, notoriously difficult to understand, falls victim to such attacks in the first place. The research suggests that because some AIs are too smart for their own good, spotting patterns in images that humans can't, they are vulnerable to attacks that exploit those patterns and need to be trained with that in mind.

To identify this vulnerability, researchers created a special set of training data: images that look to us like one thing but look to AI like another -- a picture of a dog, for example, that, on close examination by a computer, has catlike fur. Then the team mislabeled the pictures -- calling the dog picture an image of a cat, for example -- and trained an algorithm to learn the labels. Once the AI had learned to see dogs with subtle cat features as cats, the researchers tested it by asking it to recognize fresh, unmodified images. Even though the AI had been trained in this odd way, it could correctly identify actual dogs, cats, and so on nearly half the time. In essence, it had learned to match the subtle features with labels, regardless of the obvious features.

The training experiment suggests AIs use two types of features: obvious, macro ones like ears and tails that people recognize, and micro ones that we can only guess at. It further suggests adversarial attacks aren't just confusing an AI with meaningless tweaks to an image. In those tweaks, the AI is smartly seeing traces of something else. An AI might see a stop sign as a speed limit sign, for example, because something about the stickers actually makes it subtly resemble a speed limit sign in a way that humans simply cannot perceive.
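
As a rough illustration of the relabeling experiment above, the sketch below (PyTorch, with hypothetical names; this is not the paper's code) nudges an image toward a chosen wrong label with a small, visually subtle perturbation, then pairs the perturbed image with that wrong label for training:

    import torch
    import torch.nn.functional as F

    def make_relabeled_example(model, image, target_label, eps=0.03, steps=10, step_size=0.007):
        # Perturb `image` within a small budget (eps) so the classifier leans toward
        # `target_label`, then return it paired with that label instead of the true one.
        x = image.clone().detach()
        target = torch.tensor([target_label])
        for _ in range(steps):
            x.requires_grad_(True)
            loss = F.cross_entropy(model(x.unsqueeze(0)), target)
            grad, = torch.autograd.grad(loss, x)
            # Step *down* the loss so the model grows more confident in the target label.
            x = x.detach() - step_size * grad.sign()
            x = image + (x - image).clamp(-eps, eps)  # keep the change visually subtle
            x = x.clamp(0.0, 1.0)
        return x.detach(), target_label  # the "wrong" label becomes the training label

If a classifier trained only on such mislabeled pairs still recognizes clean images well above chance, that is evidence the subtle perturbations carry genuinely predictive features rather than meaningless noise.
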
Engineers could change the way they train AI to help outsmart adversarial attacks. When the researchers trained an algorithm on images without the subtle features, "their image recognition software was fooled by adversarial attacks only 50% of the time," reports Science Magazine. "That compares with a 95% rate of vulnerability when the AI was trained on images with both obvious and subtle patterns."
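
One common way to approximate "training without the subtle features" is adversarial training: perturb each training batch within a small budget but keep the true labels, so the subtle features stop being reliable predictors. A minimal sketch under that assumption (the batched attack helper is hypothetical, and this is not necessarily the researchers' pipeline):

    import torch
    import torch.nn.functional as F

    def train_robust(model, loader, attack, epochs=10, lr=0.01):
        # `attack(model, images, labels)` is assumed to return adversarially
        # perturbed copies of `images` within a small perturbation budget.
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        model.train()
        for _ in range(epochs):
            for images, labels in loader:
                adv = attack(model, images, labels)  # perturb, but keep the TRUE labels
                opt.zero_grad()
                loss = F.cross_entropy(model(adv), labels)
                loss.backward()
                opt.step()
        return model

A model trained this way has to lean on features the attack cannot flip within its budget, which is consistent with the drop in attack success rate reported above.
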


Comments Filter:
  • Okay then (Score:2, Informative)

    by Anonymous Coward

    So, made-up attacks that are not happening can be prevented by made-up AI that doesn't exist yet.

    Great article, /., excellent work

    • by Anonymous Coward

      It does correctly identify things nearly half the time, so it will be almost as accurate as flipping a coin.

      Pretty cool huh? Almost 50/50 on CAT OR DOG. That seems fairly scientific.

      • If it is almost as good as a coin toss, why don't we just never believe it, and we'll be doing slightly better than a coin toss?

      • 50% is good; you'd expect performance around 0%.
      • Sorry if the Science story wasn't clear (I wrote it). There were multiple categories of objects. And if you'd been trained to see cats as something else, one might expect cat recognition performance of 0%.
  • Flawed By Design (Score:2, Interesting)

    by Anonymous Coward

    If a self-driving car needs a stop sign to prevent it from driving into another car or person, it's a deathtrap that shouldn't be allowed on the road. A functional self-driving car shouldn't need any road signs. It should 'simply' not hit anything that crosses its path, nor cross the path of anything that would hit it. You don't need traffic signs for that.

    And anyway, these attacks won't be commonplace. No one goes around injecting glue into everyone's keyhole in the parking lot. In terms of the tra

    • I can run stop signs and red lights based on timing it so I don't hit anything, too. There are good reasons why, if I actually did that, I'd get a ticket. Can you really not figure out why self-driving cars need to obey traffic control devices? When there are no more human drivers at all, then we can remove them to some extent, but you're still going to need some means of controlling traffic flow beyond "don't hit anything".
  • It's not just theoretical. Over a half dozen self-driving Teslas have crashed into parked fire trucks after prankster firefighters parked a fire truck on the side of the road.
  • Maybe you shouldn't give your banking details to Alexa, et al.
    • My landlord thinks I'm old-fashioned because I set up auto-bill-pay at my bank to mail them a check every month.

      They tried to talk me into using an app, but no way in hell am I putting my banking info into an app! That's between me and my bank. Need-to-know information.

      I've never wanted losing money to be more convenient. Never.

  • by geschild ( 43455 ) on Wednesday May 15, 2019 @01:35AM (#58594656) Homepage

    Humans make the same mistake when the visual differences are confusing or contradictory, and we usually avoid it because our way of judging is rooted in the way we learn in the first place: large features first, only considering finer details if further classification is needed or if the larger features give too little information. Visual tricks that play into this can confuse humans even where they may not confuse an AI.

    Examples: sometimes it is hard to see whether someone is male or female. We then call that person androgynous, but factually it is a failure of our classification system. Some optical illusions are the same: they deliberately take advantage of the way people (fail to) discern between 'objects'.

    I think it is time to admit that our brains aren't that special. They really are nothing more than weighted networks that take inputs, filter them, and classify them. Ideas may be nothing more than a way for the brain to encode and store the results. If that is true, then the 'human advantage' may lie in that specific area: a greater ability to store and use those results as further input.

    • by swilver ( 617741 )

      ...or to decide to *not* use something as input.

      If I see a sign that's out of place, I'll ignore it. Train AI on that.

    • Not only that, but when we doubt a sign, we can consider what it could have said and apply some basic "what if?" scenarios. We also look at the context, and we decide based on that.

      We rely on context so much that we can and do ignore signs when they are obviously bogus. Interpreting the entire context is much harder for programs to do.

      As a side note, even our vision works by ignoring warnings. We see depth before we interpret the image from our eyes, and see things coming at us before we know what they are. Only when we inte

    • by f00zbll ( 526151 )

      Agreed, human brains aren't special, but let's not jump to conclusions. If you ask the "godfather" of DNNs, G. Hinton, or any cognitive scientist, we still have zero conclusive proof of how the human brain works. We don't even understand how our brains distribute memory storage. Go back 30 years and scientists thought it was mostly the connections between neurons. With better technology and techniques, scientists are starting to see that the neuron itself may also store memory. If you look at the amount of data it takes fo

      • by geschild ( 43455 )

        Indeed, let's wait and see. For now, even though the mechanism underneath may be unclear, I don't see proof of a more complex mechanism at work. Many complex human cognitive skills (chess, Go, visual recognition, language recognition, etc.) have been replicated by machines to varying degrees, and I dare wager that all of them will be at some point, with nothing more than 'neural networks'. If some quantum effect or other esoteric mechanism is indeed involved in memory storage, I doubt it will chan

    • by gtvr ( 1702650 )
      Humans get fooled by stuff all the time. Chemtrail conspiracy theories, anyone?
  • It's just some fancy math, statistics and probability.
  • From the summary: "A self-driving car crashes after a prankster strategically places stickers on a stop sign so the car misinterprets it as a speed limit sign."

    I would cut down the sign rather than trying some elaborate trickery.

    • Sometimes adversarial perturbations are imperceptible to humans. To be safe you could cut down all traffic signs ;-)
  • by omfglearntoplay ( 1163771 ) on Wednesday May 15, 2019 @09:45AM (#58596012)

    I hate sales lies. This sales pitch is trying to make the current hyped "AI" equivalent to our science fiction dreams (or nightmares) of sentient robots and computers. Let's take these statements and translate:

    "... AI, notoriously difficult to understand, falls victim to such attacks in the first place. Because some AIs are too smart for their own good, spotting patterns in images that humans can't,"

    AI making mistakes that can kill or maim, also known as:
    Stupid, mindless machines without brains that do stupid-as-fuck stuff because they aren't able to grasp concepts that a day-old rat would automatically understand.

    Now to take a step back, I'm all for AI that improves things. But here, let's talk facts and figures, not bullshit.
