New (Deep Learning-Enhanced) Acoustic Attack Steals Data from Keystrokes With 95% Accuracy (bleepingcomputer.com)
Long-time Slashdot reader SonicSpike quotes this article from BleepingComputer:
A team of researchers from British universities has trained a deep learning model that can steal data from keyboard keystrokes recorded using a microphone with an accuracy of 95%...
Such an attack severely affects the target's data security, as it could leak people's passwords, discussions, messages, or other sensitive information to malicious third parties. Moreover, contrary to other side-channel attacks that require special conditions and are subject to data rate and distance limitations, acoustic attacks have become much simpler due to the abundance of microphone-bearing devices that can achieve high-quality audio captures. This, combined with the rapid advancements in machine learning, makes sound-based side-channel attacks feasible and a lot more dangerous than previously anticipated.
The researchers achieved 95% accuracy from the smartphone recordings, 93% from Zoom recordings, and 91.7% from Skype.
The article suggests potential defenses against the attack might include white noise, "software-based keystroke audio filters," switching to password managers — and using biometric authentication.
No no no (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Make the F keys non-functional when the cursor is in a password field and then incorporate an arbitrary number of F keystrokes into your passwords.
Which they’d simply recognize and filter out.
Also, given that this is AI-trained, would it need to be trained on a recording of you typing known text first? Otherwise they’d be using key frequency and other techniques to try and guess which letters were which, but even then it feels like they’d need a sample of non-password typing.
Re: (Score:2)
Which they’d simply recognize and filter out.
How? You think that the AI is identifying different keys based on pitch?
Actually, I was dismissive of that idea, but looking at the paper it does seem as though that's their method. I didn't think that would be possible without significant data from each individual keyboard, because every keyboard sounds so different, whereas typing rhythm has some commonalities between different keyboards and people.
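For what it's worth, here's a toy sketch of what "identifying keys by their sound" could look like in the simplest case. This is not the paper's actual pipeline (which uses spectrogram features and a deep model); the sample rate and synthetic tones are made-up stand-ins for recorded key presses:

```python
import numpy as np

# Toy idea: fingerprint each keystroke by its dominant FFT frequency,
# then classify a new press by the nearest known fingerprint. Real
# attacks use much richer features, but the premise is the same:
# different keys excite slightly different resonances.
SR = 8000  # sample rate in Hz (assumed)

def dominant_freq(clip):
    """Return the frequency (Hz) of the strongest FFT bin in a clip."""
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), 1 / SR)
    return freqs[np.argmax(spectrum)]

def classify(clip, fingerprints):
    """Label a clip with the key whose fingerprint is closest in pitch."""
    f = dominant_freq(clip)
    return min(fingerprints, key=lambda k: abs(fingerprints[k] - f))

# Synthetic "keystrokes": pure tones standing in for key resonances.
t = np.arange(SR // 10) / SR
fingerprints = {"a": dominant_freq(np.sin(2 * np.pi * 440 * t)),
                "b": dominant_freq(np.sin(2 * np.pi * 880 * t))}
unknown = np.sin(2 * np.pi * 445 * t)  # slightly detuned "a"-like press
```

On clean synthetic tones this trivially works; the thread's real question is whether such fingerprints transfer across keyboards, which this sketch says nothing about.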
Re: (Score:3)
Yeah, I had assumed—thankfully not wrongly, based on your quick glance at the paper, though I could have been—that it was acoustically fingerprinting them in some manner, hence why higher-fidelity recordings yielded better results. And yeah, I'm with you in assuming that it has to be trained on each keyboard, perhaps even for each typist, which to me would significantly limit the danger posed by this attack. Unless it can learn without a curated training data set, in which case this is terrifying.
Re:No no no (Score:4, Funny)
F... that!
Re: (Score:2)
Here's another solution, only marginally better [wikimedia.org]...
Re: (Score:3)
Re: rofl (Score:4)
Re: (Score:1)
My Model M keyboard won't fall for this.
Re: (Score:2)
Not worried (Score:2)
All my login/password information is pasted in using Bitwarden. They can have everything else.
Re: (Score:2)
Exactly this.
The only danger with this approach is that they might grab your master password, and thus get access to the whole lot of your secret store.
However, Bitwarden, and several competitors, support OTP integrations to unlock the vault. So, even if they catch all of your keystroke audio, replaying a recording thru an AI model would not be very helpful, as the OTP invalidates rather quickly.
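To illustrate why a replayed recording goes stale so fast, here's a minimal TOTP sketch in the style of RFC 6238 (SHA-1, 30-second steps). This is a generic illustration of time-based OTP, not Bitwarden's or any vendor's actual implementation:

```python
import hmac
import struct
import time
from hashlib import sha1

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style TOTP: HMAC the time-step counter, then truncate."""
    counter = struct.pack(">Q", for_time // step)          # 8-byte big-endian
    mac = hmac.new(secret, counter, sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # RFC 6238's published test secret
now = totp(secret, int(time.time()))
stale = totp(secret, int(time.time()) - 3600)  # an hour-old recording
```

Since the counter changes every 30 seconds, a code recovered from keystroke audio after the fact is almost certainly already invalid by the time it's decoded.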
Re: (Score:3)
All my login/password information is pasted in using Bitwarden. They can have everything else.
Yep. All my online passwords are generated by randomly bashing the keyboard, so I could never remember them; they have to be copy-pasted. Good luck listening to CTRL-V and figuring out what my password is.
Plus: This probably needs training for every keyboard. It probably only works on macbooks at the moment, not my Model M.
Teams is secure (Score:4, Funny)
The researchers achieved 95% accuracy from the smartphone recordings, 93% from Zoom recordings, and 91.7% from Skype
I checked the aforementioned document and the word 'Teams' doesn't appear. Which isn't surprising, considering the poor audio quality of Teams.
Re: (Score:2)
Breaking news (Score:3)
Building on work from the 1950s [wikipedia.org]. Good job, guys.
Re: (Score:3)
THANK YOU.
I clearly remember reading, decades ago, about basically the same attack: no deep learning, but still quite effective. Radio noise also gives off a lot of data, and I've read a few papers on that topic... I suspect we'll see another paper on that, too, but with deep learning applied to the radio signal. It doesn't appear the AI is adding much improvement here either.
Re: (Score:2)
When there's grant money, you use it.
80s television, too (Score:2)
or maybe it was the 90s.
Some show with a mountie down in the US for some reason or another.
He heard a password typed in, and then repeated the sequence/tempo of clicks while he tried finger strokes in the air, and then entered the password.
And that's *all* I remember about that apparently unmemorable show . . .
And nowe... (Score:4, Funny)
Now what I'd really like to see... (Score:1)
People still use keyboards? (Score:2)
My employer requires me to login with a smart card, a personal PIN followed by random digits from an RSA fob. Digits change every 15 seconds. Laptop doesn't work without the card inserted.
Re:People still use keyboards? (Score:4, Funny)
I believe you mean your personal PIN identification number.
For this attack to work... (Score:5, Informative)
The attacker would first have to get you to record specific keystrokes on command, at different pressures and speeds, so they have a baseline to compare to. Only with this kind of data up front is the attack able to work.
Nobody is going to do this.
See page 5 of the linked research document, describing the process they used to train their system. There is no reason to think the training would be effective on other keyboards, or other people using them.
Re: For this attack to work... (Score:1)
Re: (Score:3)
Let me put on my 'evil hat' for a moment. Generally I'm adequately paranoid but insufficiently devious, but I think I got this one...
What if you can get a listening device into a secure location, but not a key logger? What if you know where the person lives... and you go to their home and put in a combo keylogger / listening device to get your training data?
It's nothing you or I will ever come up against, but maybe the high security folks should give it a think.
Re: (Score:3)
Agreed. If you are the specific target of a well-funded and highly-motivated attacker, they can use this to get you.
Most of us will never be such a high-value target.
If you *are* such a high-value target, you'd better be using a password manager and 2FA.
Re: (Score:2)
Re: (Score:3)
The attacker would first have to get you to record specific keystrokes on command, at different pressures and speeds, so they have a baseline to compare to.
For now, perhaps. But figuring out the 'space' key should be pretty easy. That gives you word breaks, which should allow various analysis aids - including LLMs and a word-frequency lookup table - to start figuring out words, and therefore keys pressed. I suspect stringent application of already-existing tech could solve the problem given an hour or two - perhaps much less - of the same person typing on the same keyboard.
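A toy sketch of the frequency-analysis idea, assuming the keystroke sounds have already been clustered into per-key groups (the cluster IDs and observed sequence here are hypothetical). It's classic substitution-cipher reasoning: rank unknown keys by how often they occur and pair them with letters by English frequency:

```python
from collections import Counter

# English letters in rough descending order of frequency.
ENGLISH_FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def guess_mapping(cluster_ids):
    """Pair each cluster (unknown key) with a letter by frequency rank.

    The most common cluster is guessed to be 'e', the next 't', and
    so on. Spaces are assumed to have been identified separately.
    """
    ranked = [cid for cid, _ in Counter(cluster_ids).most_common()]
    return {cid: ENGLISH_FREQ_ORDER[i] for i, cid in enumerate(ranked)}

# Hypothetical recording: IDs 0-3 stand for four unknown keys.
observed = [0, 1, 2, 0, 1, 3, 0]
mapping = guess_mapping(observed)
decoded = "".join(mapping[cid] for cid in observed)
```

On a real corpus you'd refine the initial guess with a dictionary and word-length patterns from the recovered spaces, but even this crude first pass narrows the search space considerably. It does nothing for random passwords, though, which is the catch the reply below points out.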
Re: (Score:3)
No doubt such tools could be developed, but an hour or two? I don't think so. And when it comes to passwords, these are generally subject to complexity requirements, which will thwart predictions made by LLMs.
In any case, even if my time and cost estimate isn't correct, most of us don't have to worry about a well-funded attack specifically targeted at *us*. The bigger concern is malware that is widely distributed and depends on low-hanging security vulnerabilities, like people's willingness to click on links.
Hey Google: (Score:3)
Re: (Score:2)
This is Slashdot - I'm not fucking anybody.
Does your first name happen to be Larry or Sergey?
Re: (Score:2)
Make some noise (Score:4, Informative)
I read the whole paper (yeah I know, weird right ?!) and I'm pretty skeptical of how well this would work outside an experimental setup.
The authors briefly acknowledge that desk vibrations, white noise, audio filters and fake key noise seriously affect accuracy. They use a cloth to reduce desk vibrations and adjust the call settings to reduce distortion. They discuss simultaneous key presses reducing accuracy. They discuss potential white-noise filters. They're clearly aware of real-world problems/mitigations; however they make absolutely no mention of how accuracy is affected by regular background noise or speech (you know, the things most people are doing in meetings - in the kinds of places most people are doing them).
The authors specifically call out public areas as places you might be vulnerable to a listening device (libraries, coffee shops and study spaces) but didn't actually test their method in any of those places. They use an experiment where the keyboard is the ONLY source of noise.
I get the feeling the authors went out of their way to avoid testing real world scenarios where the keyboard would be regularly drowned out by other sounds, including obvious things like people talking during a video call.
So, in my opinion: if you sit silently in a silent room, doing nothing but typing sensitive text during a video call with untrusted parties, on a common/known device, then maybe there's a real threat here, but I see no reason for most people to panic. Not that there is zero risk, but I think it's a bit much for the authors to call it "practical" without showing it works outside of a contrived experiment.
Maybe an IBM ZTIC-like device? (Score:2)
What might be useful are USB devices that not just have the functionality of a YubiKey, but have a touch screen on them. The closest to this would be a Trezor Model T.
When authenticating with the Trezor Model T's FIDO functionality, it presents a PIN, and randomly places the digits. That way, smudges due to typing in "12345" will never be in the same place twice. From there, once the device is unlocked, it is just tapping "allow" or "deny". What might be a useful addition would be a fingerprint scanner,
Just keep ... (Score:3)
Casual observations (Score:3)
Anyone avoiding the sin of password re-use will already have one. In addition, it allows the use of OTP (2FA), making all casual observations worthless.
Wait till they read our minds... (Score:2)
Soon they'll be able to read our freaking minds. Then I'm moving to a cabin in the woods.
Re: (Score:3)
Semantic reconstruction of continuous language from non-invasive brain recordings [nature.com]
Re: (Score:2)
Ima fuck off now.
That's a pretty big deal... (Score:2)
First of all, people type passwords and other sensitive things while on Teams calls and such all the time.
But an even bigger issue, if you have a microphone enabled device nearby while you work it can be scraping content all day. If it can hear you say "hey google" or "hey siri" then it *is* listening. And thanks to ubiquitous 2FA everyone is sure to have such a device by their machine.
hifi to the rescue (Score:2)
Ok then (Score:1)