
A New Vulnerability in Intel and AMD CPUs Lets Hackers Steal Encryption Keys (arstechnica.com) 30

Microprocessors from Intel, AMD, and other companies contain a newly discovered weakness that remote attackers can exploit to obtain cryptographic keys and other secret data traveling through the hardware, researchers said on Tuesday. From a report: Hardware manufacturers have long known that hackers can extract secret cryptographic data from a chip by measuring the power it consumes while processing those values. Fortunately, the means for exploiting power-analysis attacks against microprocessors is limited because the threat actor has few viable ways to remotely measure power consumption while processing the secret material. Now, a team of researchers has figured out how to turn power-analysis attacks into a different class of side-channel exploit that's considerably less demanding.
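
For intuition on why power tracks data at all: dynamic power in CMOS logic scales with how many bits switch, which is why classic differential power analysis often assumes a simple Hamming-weight leakage model. A toy Python sketch of that model (illustrative only; it is not from the article and does not measure real hardware):

    # Hypothetical Hamming-weight leakage model: "power" is approximated by
    # the number of set bits in the value being processed, so different
    # secret-dependent intermediate values produce different simulated leakage.
    def hamming_weight(x: int) -> int:
        return bin(x).count("1")

    secret_dependent_values = [0x00, 0x0F, 0xFF, 0x81]
    trace = [hamming_weight(v) for v in secret_dependent_values]
    print(trace)  # [0, 4, 8, 2] -- data-dependent, which is what an attacker correlates against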

The team discovered that dynamic voltage and frequency scaling (DVFS) -- a power and thermal management feature added to every modern CPU -- allows attackers to deduce the changes in power consumption by monitoring the time it takes for a server to respond to specific carefully made queries. The discovery greatly reduces what's required. With an understanding of how the DVFS feature works, power side-channel attacks become much simpler timing attacks that can be done remotely. The researchers have dubbed their attack Hertzbleed because it uses the insights into DVFS to expose -- or bleed out -- data that's expected to remain private. The vulnerability is tracked as CVE-2022-24436 for Intel chips and CVE-2022-23823 for AMD CPUs. The researchers have already shown how the exploit technique they developed can be used to extract an encryption key from a server running SIKE, a cryptographic algorithm used to establish a secret key between two parties over an otherwise insecure communications channel.
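
At a very hand-wavy level, the remote measurement side reduces to timing statistics. The Python sketch below shows only that measurement pattern: time many responses to two crafted inputs and compare the averages. The endpoint URL and payloads are placeholders, and this is nowhere near the actual Hertzbleed exploit against SIKE described in the paper:

    # Sketch of a remote timing measurement against a hypothetical HTTP
    # endpoint. DVFS makes the CPU's frequency depend on what it is computing,
    # so data-dependent power differences surface as small response-time
    # differences that averaging over many trials can pull out of the noise.
    import statistics
    import time
    import urllib.request

    def time_request(url: str, payload: bytes) -> float:
        start = time.perf_counter()
        urllib.request.urlopen(url, data=payload).read()  # POST and wait for the reply
        return time.perf_counter() - start

    def mean_response_time(url: str, payload: bytes, trials: int = 1000) -> float:
        return statistics.mean(time_request(url, payload) for _ in range(trials))

    URL = "https://example.com/decap"  # placeholder endpoint
    # The two payloads would be crafted so the server's secret-dependent
    # computation draws measurably different amounts of power for each.
    gap = mean_response_time(URL, b"guess_a") - mean_response_time(URL, b"guess_b")
    print("timing gap:", gap)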


Comments Filter:
  • Are there any practical attacks here?

    • by waspleg ( 316038 ) on Wednesday June 15, 2022 @03:40PM (#62622766) Journal

      Interview with the researchers who found it. [intel.com]

      This is linked along with the PDF [hertzbleed.com] to be presented at USENIX. Both are linked in the BleepingComputer article [bleepingcomputer.com] I submitted this morning, which was ignored.

      • by Junta ( 36770 ) on Wednesday June 15, 2022 @03:52PM (#62622818)

        In short, assuming they have an ideal scenario, crypto implementations that do throw-away computations to muddy the waters as part of their constant-time approach are still well protected.

        You pretty much need to know what's happening, and you need a crypto implementation that understands the need to go constant-time yet doesn't understand the need to noise up the load. If an implementation doesn't do constant time at all, you don't need this trick to get this far, so the attack is only useful against constant-time code that just sleeps to pad out the runtime. That would seem to be a very narrow slice of software, since constant time is pretty much always considered alongside this sort of risk already.

        • by sjames ( 1099 ) on Wednesday June 15, 2022 @10:30PM (#62623750) Homepage Journal

          This. Well designed crypto is careful to implement constant time.

          For example, a common way to compute large powers is to take the exponent as a series of booleans, maintain an accumulator, and repeatedly multiply a temp value by itself, then multiply that into the accumulator if the next bit of the exponent is a 1. GOOD crypto maintains a second junk accumulator that gets multiplied when the next bit is zero so that each branch takes the same amount of time/power.
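
          In Python-flavoured terms that might look something like the sketch below (illustrative only: real constant-time code also avoids the secret-dependent branch itself and secret-dependent memory access, which Python cannot guarantee):

            # Right-to-left square-and-multiply with a junk accumulator, so a 0 bit
            # and a 1 bit of the exponent each cost one multiply and one square.
            def modexp_with_dummy(base: int, exponent: int, modulus: int) -> int:
                acc = 1      # real accumulator
                dummy = 1    # junk accumulator, multiplied when the bit is 0
                temp = base % modulus
                for i in range(exponent.bit_length()):
                    if (exponent >> i) & 1:
                        acc = (acc * temp) % modulus
                    else:
                        dummy = (dummy * temp) % modulus  # same work, result discarded
                    temp = (temp * temp) % modulus        # square every iteration
                return acc

            assert modexp_with_dummy(5, 117, 19) == pow(5, 117, 19)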

        • by gweihir ( 88907 )

          That is pretty much what I expected. It is one of the reasons my IT Sec students get warned very explicitly that doing a secure implementation is much, much harder than just getting it to work. Like basically everything in secure coding, this is not restricted to secure crypto code.

    • Intelligence services placing surveillance VMs on the same physical hardware as targets?

      • by gweihir ( 88907 )

        No. Intelligence services just demand and get root on the hypervisor. Or they demand the encryption keys in the first place and get them.

  • by BardBollocks ( 1231500 ) on Wednesday June 15, 2022 @03:15PM (#62622698)

    Over, and over, and over again we keep seeing 'vulnerabilities' that screw with end users' ability to secure their data.

    call me a skeptic...

    • "While this issue is interesting from a research perspective, we do not believe this attack to be practical outside of a lab environment. "
      • Like all side-channel attacks: useful to state intelligence services that can place a VM on the same physical hardware as another hosted VM to go after a target's hosted infrastructure... like, say, anyone who doesn't maintain physical control of their own servers.

        • by gweihir ( 88907 )

          They do not need that. They will simply ask for the keys or VM dumps or admin privileges on the hypervisor. And they will get these things.

      • by gweihir ( 88907 )

        I.e. good research, but not a current threat. Something to keep in mind for the case that something fundamental changes and makes this attack a lot easier.

    • That's because we have people trying to cash-in on finding vulnerabilities no matter how esoteric they may seem.

    • by gweihir ( 88907 )

      This will eventually stop, but only when IT becomes stable and progress becomes very slow. At the moment things are still moving too fast for them to be called anything but "experimental".

  • Being able to guess an encryption key by watching the power consumption?!? On a Linux system running 300 other power-hungry threads at the same time? And remotely too? Color me skeptical. Might be theoretically possible on a microcontroller running nothing else, but wake me up when it is a practical attack vector.
    • by Echoez ( 562950 ) *

      Seriously. Considering that I currently have 15 Brave tabs open, along with email, antivirus (plus probably Windows Update downloading some 1gb patch), color me extremely skeptical that this is worth worrying about. Sometimes these are interesting in a lab with some real-time operating system running a single foreground process, but until someone shows me a POC running on real-world multiprocess hardware? No way.

      Especially server-side, where everything has multiple levels of virtualization and containerization.

    • Everyone knows that adding (4 + 4) takes twice as much power as adding (2 + 2). The real trick is knowing it was (4 + 4) when the power doubled instead of (3 + 5) or (6 + 2)? /s
    • The threat is real. Read these pages, then see if you're still skeptical:

      Hertzbleed Vulnerability [hertzbleed.com]
      A Beginner's Guide to Constant-Time Cryptography [chosenplaintext.ca]

      • Whose responsibility is that? The job of the chip is to do the calculation using a minimal amount of energy. If it can do that by reducing voltage or frequency or whatever, it is doing exactly what it is designed to do. The chip has no idea whether you are doing a cryptographic computation or using the last joules of the battery to do some critical stuff. In fact, the chip should not know. It should not be told which thread or which process is doing cryptographic computations.

        It is the job of the threads and codes do

        • by Junta ( 36770 )

          Agree, except explicit sleep doesn't help. 'Constant' time and energy together squash this sort of analysis, but the point is that you have to make your filler look as much like the real work as possible, not just match the runtime.
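
          Roughly, in a hedged Python sketch (do_real_work and the toy constant are placeholders, not any real crypto library):

            import time

            # Stand-in for the secret-dependent computation.
            def do_real_work(x: int) -> int:
                return pow(x, 2**127 - 1, 2**255 - 19)

            REAL_WORK_SECONDS = 0.001  # assumed duration of do_real_work; a toy constant

            # Anti-pattern: the zero branch just sleeps. Wall-clock time matches, but
            # the CPU idles, draws less power, and clocks differently under DVFS.
            def process_bit_sleep_padded(bit: int, x: int) -> int:
                if bit:
                    return do_real_work(x)
                time.sleep(REAL_WORK_SECONDS)
                return 0

            # Better: the zero branch performs look-alike work and throws it away, so
            # both paths execute the same instructions with a similar power profile.
            def process_bit_dummy_padded(bit: int, x: int) -> int:
                if bit:
                    return do_real_work(x)
                _ = do_real_work(x)
                return 0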

          • by Anonymous Coward

            Opportunistic power management is an optimisation of the same kind as "oh hey, this doesn't do anything, so we can cut it out entirely." The latter is infamously found in compilers; the former is in the CPU. The effect is the same: loss of confidentiality.

            So yeah, the chip does in fact need to know which threads not to cut short with power optimisations.

            The announcement, snazzy name + domain name, cute logo, and of course the breathless reporting, lazy copy/paste "summary" and execrable headlines all don't help.

    • by vux984 ( 928602 )

      There are some potentially interesting practical uses for such an attack vector:

      - cracking game consoles and other locked hardware
      - cracking BitLocker and other disk-encryption systems

      During boot up of either you'd be running pretty limited and predictable code; not 300 random threads.

      • by bws111 ( 1216812 )

        Huh? Consoles and such are protected with public key encryption. The 'key' you would need is at the manufacturer, not on your device.

        Disks are encrypted using symmetric encryption. You need the key to decrypt. This does not make the key magically appear.

        • by vux984 ( 928602 )

          The 'key' you would need is at the manufacturer, not on your device.

          If a software image is signed by the manufacturer using the private key, then the console only needs the public key to verify the signature. Is that what you mean?

          If so, you'd be right. But consoles use all kinds of encryption all over the place. I am not going to pretend to be a console hacking expert, because I'm not.

          But fundamentally, consoles haven't ever been cracked by defeating public-key-encryption; they're cracked by modifying the console so you don't have to. These days a lot of the communication on

  • by nospam007 ( 722110 ) * on Wednesday June 15, 2022 @03:26PM (#62622732)

    The word you're looking for is 'everybody'.

  • by nuckfuts ( 690967 ) on Wednesday June 15, 2022 @03:49PM (#62622800)
    There is a very interesting article [chosenplaintext.ca] on how to mitigate this type of attack.
    • by gweihir ( 88907 )

      Anybody competent is already doing this. I learned about it more than 30 years ago. The issue is that there are so few competent people in the software space, and that you can still get a degree in software without ever having had any lectures on cryptography or security. Nice reference though, I think I am going to use it in my teaching.
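
      One of the most basic habits from that guide, sketched in Python: compare secrets with the standard library's constant-time comparison rather than ==, since a naive comparison can bail out at the first mismatching byte and leak how much of a guess was correct. The token-checking functions here are hypothetical examples, not anything from the linked article:

        import hmac

        def check_token_leaky(candidate: bytes, secret: bytes) -> bool:
            return candidate == secret  # not guaranteed constant-time; timing can leak the matching prefix

        def check_token_constant_time(candidate: bytes, secret: bytes) -> bool:
            return hmac.compare_digest(candidate, secret)  # stdlib constant-time comparison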

  • I call BS (Score:4, Interesting)

    by RitchCraft ( 6454710 ) on Wednesday June 15, 2022 @03:50PM (#62622812)
    This reminds me of the technique to steal passwords (reported in the 90's?) by using a microphone to listen to keystrokes and deduce the keys being pressed. I've yet to see Crazy Ivan standing somewhere in a room listening for passwords. It was researchers in perfect conditions getting their algorithms and kit to work some of the time, but I remember hearing a lot of press about it. Keyboards have not changed and the world moved on. Watching power fluctuations with the hundreds of threads in a modern processor? Yeah, BS.
    • More like Ivan bugs the room and exfiltrates the data that way. Didn't they hack the Iranians' air-gapped centrifuge system with some kind of attack based off sounds the computer makes, or was that an inside job done by someone the CIA got to? I forget what the papers said.

      Still, these kinds of really-hard-to-pull-off attacks are likely more in the realm of espionage and nation-state spies. Not too many people are sophisticated enough to pull this attack off on their neighbor or the competing business down the street.