Scientists Help Artificial Intelligence Outsmart Hackers (sciencemag.org) 61
sciencehabit shares a report from Science Magazine: A hacked message in a streamed song makes Alexa send money to a foreign entity. A self-driving car crashes after a prankster strategically places stickers on a stop sign so the car misinterprets it as a speed limit sign. Fortunately, these haven't happened yet, but hacks like this, sometimes called adversarial attacks, could become commonplace -- unless artificial intelligence (AI) finds a way to outsmart them. Now, researchers have found a new way to give AI a defensive edge. The work could not only protect the public but also help reveal why AI, notoriously difficult to understand, falls victim to such attacks in the first place. The research suggests that because some AIs are too smart for their own good, spotting patterns in images that humans can't, they are vulnerable to those patterns and need to be trained with that in mind.
To identify this vulnerability, researchers created a special set of training data: images that look to us like one thing, but look to AI like another -- a picture of a dog, for example, that, on close examination by a computer, has catlike fur. Then the team mislabeled the pictures -- calling the dog picture an image of a cat, for example -- and trained an algorithm to learn the labels. Once the AI had learned to see dogs with subtle cat features as cats, they tested it by asking it to recognize fresh, unmodified images. Even though the AI had been trained in this odd way, it could correctly identify actual dogs, cats, and so on nearly half the time. In essence, it had learned to match the subtle features with labels, whatever the obvious features. The training experiment suggests AIs use two types of features: obvious, macro ones like ears and tails that people recognize, and micro ones that we can only guess at. It further suggests adversarial attacks aren't just confusing an AI with meaningless tweaks to an image. In those tweaks, the AI is smartly seeing traces of something else. An AI might see a stop sign as a speed limit sign, for example, because something about the stickers actually makes it subtly resemble a speed limit sign in a way that humans are too oblivious to comprehend. Engineers could change the way they train AI to help outsmart adversarial attacks. When the researchers trained an algorithm on images without the subtle features, "their image recognition software was fooled by adversarial attacks only 50% of the time," reports Science Magazine. "That compares with a 95% rate of vulnerability when the AI was trained on images with both obvious and subtle patterns."
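To make the idea of "subtle tweaks" concrete, here is a minimal sketch of a classic adversarial perturbation (the fast gradient sign method) against a toy linear classifier. The model, weights, and two-class setup are hypothetical illustrations, not the experiment from the paper the article describes.

```python
# Hypothetical toy example: FGSM-style perturbation of a linear model.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=64)          # toy model weights
x = rng.normal(size=64)          # a "correctly classified" toy input
b = 0.0

def score(x):
    return float(w @ x + b)      # >0 means, say, "dog"; <0 means "cat"

# FGSM: nudge every pixel a tiny amount (at most eps) in the direction
# that most decreases the correct class's score. For a linear model the
# gradient of the score w.r.t. x is simply w.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(score(x), score(x_adv))    # the score drops by eps * sum(|w|)
```

The perturbation is bounded by `eps` per pixel, so it can be visually negligible while still shifting the score substantially; deep networks behave similarly because they are locally close to linear.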
Re: (Score:2)
Software can't do anything of its own volition, of which it has none.
Actually, the entire point of AI is about changing that.
Re: This should be the other way round (Score:1)
Looks like they're failing so far.
Re:This should be the other way round (Score:5, Insightful)
I agree, that's the point of AI, so why are they using advanced pattern recognition for this instead and then call it "intelligence" ? Perhaps because nobody understands how these things work after they've been trained, so it must be intelligent?
Re: (Score:3)
I agree, that's the point of AI, so why are they using advanced pattern recognition for this instead and then call it "intelligence" ? Perhaps because nobody understands how these things work after they've been trained, so it must be intelligent?
That's exactly it, and I think perhaps you've said something very smart that you didn't even realize or mean to say. And I can't see how it could possibly be any other way, given that our understanding of even our own neurological process, (never mind the 'software' of the thought processes that runs on that neurological 'hardware'), is so imperfect.
All the nay-saying here on Slashdot about AI, and all the pedantic insistence that it's not really intelligence because it's not self aware, is missing the point.
Re: (Score:2)
Because the premise is bullshit. We do know how "these things work". You guys keep parroting that like it is a fact. It isn't. We know exactly how neural nets work and if we cared to, we could examine how any particular one "works". We don't bother, because it would be pointless.
Re: (Score:2)
> All the nay-saying here on Slashdot about AI, and all the pedantic insistence that it's not really intelligence because it's not self aware, is missing the point.
A glorified table lookup is NOT fucking intelligence.
STOP hijacking terms because you are fucking ignorant about how it works.
Re: (Score:1)
In image recognition,
Nonlinearity prevents it from being a table lookup; it's more like multiple layers of lookup tables, hooked together through lossy convolutions that reduce the data size.
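The point about nonlinearity can be shown in a few lines. The weights below are hand-picked toy values, not from any real model; the point is that with a ReLU between the layers, the two weight matrices cannot be collapsed into one linear map, so the network is not equivalent to a single lookup.

```python
# Toy two-layer network with a ReLU; weights are arbitrary examples.
import numpy as np

W1 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])   # layer 1: 2 inputs -> 2 hidden units
W2 = np.array([[1.0, 1.0]])    # layer 2: 2 hidden -> 1 output score

def relu(z):
    return np.maximum(z, 0.0)  # the nonlinearity between the layers

def forward(x):
    return W2 @ relu(W1 @ x)

x = np.array([2.0, 0.0])
collapsed = (W2 @ W1) @ x      # what a single linear map would compute

print(forward(x), collapsed)   # differ: ReLU zeroed one hidden unit
```

Without `relu`, `forward` would equal `collapsed` for every input and the whole network really would be one linear transformation.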
Okay then (Score:2, Informative)
So, made-up attacks that aren't happening yet can be prevented by made-up AI that doesn't exist yet.
Great article, /., excellent work
Re: (Score:1)
It does correctly identify things nearly half the time, so it will be almost as accurate as flipping a coin.
Pretty cool huh? Almost 50/50 on CAT OR DOG. That seems fairly scientific.
Re: (Score:3)
If it is almost as good as a coin toss, why don't we just never believe it, and we'll be doing slightly better than a coin toss?
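The coin-toss comparison only holds for two classes. For a k-class task, random guessing gets roughly 1/k accuracy, so ~50% on a many-class image benchmark is well above chance. A quick simulated-guessing check (class counts here are arbitrary examples):

```python
# Chance accuracy of uniform random guessing over k classes.
import random

random.seed(0)

def chance_accuracy(k, trials=100_000):
    """Fraction of trials where a random guess matches a random label."""
    hits = sum(random.randrange(k) == random.randrange(k)
               for _ in range(trials))
    return hits / trials

print(chance_accuracy(2))    # ~0.5: a coin toss
print(chance_accuracy(10))   # ~0.1: far below the ~50% in the summary
```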
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
When your only tool is hatred, every problem starts to look like it was written by a slashdot editor.
Flawed By Design (Score:2, Interesting)
If a self-driving car needs a stop sign to prevent it from driving into another car or person, it's a deathtrap that shouldn't be allowed on the road. A functional self-driving car shouldn't need any road signs. It should 'simply' not hit anything that'll cross its path nor cross the path of something that would hit it. You don't need traffic signs for that.
And anyway, these attacks won't be commonplace. No one goes around injecting glue into everyone's keyhole in the parking lot. In terms of the tra
Re: (Score:3)
Re: (Score:2)
[NO CARRIER]
You forgot the [NO CARRI¥€€÷
Re: (Score:2)
You need not like anything _about_ them. You just need to like _them_.
As people... humans. Ignoring their hair/talk/looks. Just as every other human strain owes to each other.
It's already happened (Score:2)
Here's a thought (Score:2)
Re: (Score:2)
My landlord thinks I'm old-fashioned because I set up auto-bill-pay at my bank to mail them a check every month.
They tried to talk me into using an app, but no way in hell I'm putting my banking info into an app! That's between me, and my bank. Need-to-know information.
I've never wanted losing money to be more convenient. Never.
Re: (Score:2)
Have found a new way to give AI a defensive edge (Score:2)
Early brainstorming idea. [bash.org]
Take a cue from human learning? (Score:3)
Humans make the same mistake when the visual differences are confusing or contradictory, and we usually avoid it because our way of judging follows the way we learn in the first place: large features first, considering finer details only if further classification is needed or if the larger features give too little information. Visual tricks that play into this can confuse humans even where they may not confuse AI.
Examples: sometimes it is hard to tell whether someone is male or female. We then call that person androgynous, but factually it is a failure of our classification system. Some optical illusions are the same: they deliberately take advantage of the way people (fail to) discern between 'objects'.
I think it is time to admit that our brains aren't too special. They really are nothing more than weighted networks that take inputs, filter them, and classify them. Ideas may be nothing more than a way for the brain to encode and store the results. If that is true, then the 'human advantage' may lie in that specific area: the greater ability to store and use these results as further input.
Re: (Score:3)
...or to decide to *not* use something as input.
If I see a sign that's out of place, I'll ignore it. Train AI on that.
Re: (Score:3)
Not only that, but when we doubt a sign we can consider what it could have said and apply some basic "what if?" scenarios. We also look at the context, and we decide based on that.
We see context so much that we can and do ignore signs when they are obviously bogus. Interpreting the entire context is much harder to do for programs.
As a side note, even our vision works by ignoring warnings. We see depth before we interpret the image from our eyes, and see things coming at us before we know what it is. Only when we inte
Re: (Score:1)
Agree human brains aren't special, but let's not jump to conclusions. If you ask the "godfather" of DNNs, G. Hinton, or any cognitive scientist, we still have zero conclusive proof of how the human brain works. We don't even understand how our brains distribute memory storage. Go back 30 years and scientists thought it was mostly the connections between neurons. With better technology and techniques, scientists are starting to see that the neuron itself may also store memory. If you look at the amount of data it takes fo
Re: (Score:2)
Indeed, let's wait and see. For now, even though the mechanism underneath may be unclear, I don't see proof of a more complex mechanism at work. Many complex human cognitive skills (Chess, Go, visual recognition, language recognition, etc.) have been replicated with machines to a varying degree, and I dare wager that all of them will be at some point in time with nothing more than 'neural networks'. If some quantum effect or other esoteric mechanism is indeed involved in memory storage, I doubt it will chan
Re: (Score:2)
There's no AI damn it!!!!! (Score:2)
Re: (Score:1)
Cut the sign down (Score:1)
From the summary "A self-driving car crashes after a prankster strategically places stickers on a stop sign so the car misinterprets it as a speed limit sign."
I would cut down the sign rather than trying some elaborate trickery.
Re: (Score:1)
AI smarter than human? Very funny. (Score:3)
I hate sales lies. This sales pitch is trying to make the current hyped "AI" equivalent to our science fiction dreams (or nightmares) of sentient robots and computers. Let's take these statements and translate:
"... AI, notoriously difficult to understand, falls victim to such attacks in the first place. Because some AIs are too smart for their own good, spotting patterns in images that humans can't,"
AI making mistakes that can kill or maim, also known as:
Stupid, mindless machines without brains that do stupid as fuck stuff because they aren't able to grasp concepts that a day old rat would automatically understand.
Now to take a step back, I'm all for AI that improves things. But here, let's talk facts and figures, not bullshit.