Security Bug

Private Keys Stolen Within Hours From Heartbleed OpenSSL Site 151

Billly Gates (198444) writes "When Heartbleed was discovered, it was reported that only passwords were at risk and that private keys were still safe. Not anymore. CloudFlare launched the Heartbleed Challenge on a new server running the vulnerable OpenSSL and offered a prize to whoever could obtain the private keys. Within hours, several researchers and a hacker got in and retrieved the private signing keys. Expect many forged certificates and fraudulent login attempts at banks and other popular websites in the coming weeks unless the browser makers and CAs revoke all the old keys and certificates."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Z00L00K ( 682162 ) on Sunday April 13, 2014 @01:05PM (#46741139) Homepage Journal

    Be aware that even the root CA certificates can be at risk right now, and that can really cause problems.

    • by mlts ( 1038732 ) on Sunday April 13, 2014 @01:13PM (#46741187)

      Depends. A website's SSL key may be slurped up. However, a root CA key should be kept either on an offline machine or in a hardware security module where the key is never divulged... the module will sign a key, and that's it.

      I'm sure some places will have their root CA on an externally connected machine, then try to place blame, likely saying how insecure UNIX is (when it isn't any particular flavor of UNIX that is at fault.)

      • by mysidia ( 191772 )

        I'm sure some places will have their root CA on an externally connected machine, then try to place blame, likely saying how insecure UNIX is (when it isn't any particular flavor of UNIX that is at fault.)

        Since that would violate the CA/Browser Forum rules and the Mozilla policies that apply to trusted CA certificates, any CA that has ever loaded a root private key into an externally connected machine is either lying, grossly negligent, or both.

        In fact.... a CA root certificate itself, is

        • You would not believe what VPs will force you to do to get their $20 million flagship project out the door; the whole thing is quickly forgotten after the guy who was forced to do it quits in disgust. There was a time when I'd be surprised by insanely stupid security vulnerabilities, but after a few years in the trade I've learned never to be surprised by anything.

          • by mysidia ( 191772 )

            You would not believe what VPs will force you to do to get their $20 million flagship project out the door; the whole thing is quickly forgotten after the guy who was forced to do it quits in disgust.

            Fraud that can land you in jail is not one of those things a VP can force you to do.

            A CA has to be validated by third-party auditors before it can even be trusted. One of the aspects that must be audited is the governance of the CA and the policies and controls designed to ensure the CA

            • A CA has to be validated by third-party auditors before it can even be trusted.

              All those big-box stores that got their credit card numbers hacked....
              Validated by third-party auditors.

              IIRC, one of the stores was actively being exploited throughout the audit process and still passed.

    • by gweihir ( 88907 ) on Sunday April 13, 2014 @01:19PM (#46741229)

      That is BS. Nobody sane installs a root certificate on production, network-connected hardware, unless they are terminally stupid. Oh, wait...

  • by lougarou ( 34028 ) on Sunday April 13, 2014 @01:06PM (#46741143) Homepage

    For all practical purposes, HTTPS is dead. There is no way browsers will carry around the hundreds of thousands of fingerprints of possibly-stolen, and therefore unsafe, certificates (to treat them as tainted/revoked). The only way forward is probably to move to an incompatible protocol and, if possible, cure some of what X.509 got wrong.

    • Re:https is dead (Score:5, Insightful)

      by Anonymous Coward on Sunday April 13, 2014 @01:30PM (#46741317)

      Nah, the browsers will just reset the 'zero epoch' for the SSL certificates they'll accept, accepting ONLY certificates issued after some post-exploit date, and all the major SSL vendors will likely reboot their intermediate keychains, so only a handful of 'revocation' certificates will actually be needed thanks to the tree-of-trust model: revoke anything in the chain, and everything below it goes away.
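
      That date cutoff is easy to express in code. A minimal sketch, assuming the pyca/cryptography package and a PEM certificate on disk (the path and the cutoff date are placeholders, not anything the browser vendors have announced):

        import datetime
        from cryptography import x509

        # Hypothetical post-disclosure "zero epoch": distrust anything issued before it.
        CUTOFF = datetime.datetime(2014, 4, 8)

        def issued_after_cutoff(pem_path):
            with open(pem_path, "rb") as f:
                cert = x509.load_pem_x509_certificate(f.read())
            # not_valid_before marks the start of the cert's validity period.
            return cert.not_valid_before >= CUTOFF

        print(issued_after_cutoff("server.pem"))  # placeholder path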

      And yes, this means the folks who were Johnny-on-the-spot about reissuing their certs might have to reissue them AGAIN for having fixed their issue so quickly, but that's honestly pretty minor compared to the huge swaths of forever-vulnerable sites that need to have their SSL status effectively revoked regardless of what they do or don't do.

      WolfWings, who hasn't logged into SlashDot in YEARS.

      • Just like Apple stopped using certs signed by the compromised Comodo Root certificate to sign patches?

        Oh wait... they kept using it for years. They might still be using it, even.

        Certs are window dressing. Companies only care enough about them to make sure they don't throw warnings. If they can get away with it, they will invariably "solve" the issue by telling the users to reduce security settings.

    • Re:https is dead (Score:4, Interesting)

      by jonwil ( 467024 ) on Sunday April 13, 2014 @03:07PM (#46741959)

      The problem with replacing HTTPS is that you would need to maintain regular HTTPS for all those clients that can't upgrade to a newer browser (which keeps web sites exposed to these threats). And you would have to convince browser and web server vendors to support the new HTTPS replacement.

      Google would probably do it (on desktop, ChromeOS, Android and its custom web/SSL server software), especially if it made it harder to mount the kind of man-in-the-middle-with-fake-certificates attacks the NSA has been using (the ones that let the NSA serve up fake copies of popular web sites as a vector for infecting other machines). Opera and others that use the Google rendering engine would probably pick up the Google support.

      Mozilla would probably do it if you could convince them that it's not just going to be bloat that never gets used.

      Apache would probably support it via a mod_blah, and if they don't, someone else would probably write one.

      Other FOSS browsers and servers (those that do HTTPS) would probably support it if someone wrote good patches.

      But good luck convincing commercial vendors like Microsoft and Apple to support a new protocol. And the Certificate Authorities would fight hard against anything that made them obsolete (which any new protocol really needs to do)

      • by AHuxley ( 892839 )
        Hi jon,
        How safe are Perfect Forward Secrecy (PFS) and other "per-session" encryption keys from this mess? Thanks
        • If the server (or the client, for that matter) was hit with Heartbleed *during* (or shortly after) the session, the symmetric encryption key may have been retrieved, and an attacker who had recorded the whole session could then decrypt it. If the session was still ongoing and they were in a position to do so, they could MitM it.

          Similarly, if the attacker used Heartbleed during the key exchange, the private information (from either endpoint) needed to derive the symmetric key might have leaked, even if for some reas
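
          For what it's worth, you can at least see what any given connection negotiated by asking for the chosen cipher; a minimal sketch using Python's standard ssl module (the host name is a placeholder):

            import socket
            import ssl

            HOST = "example.com"  # placeholder

            ctx = ssl.create_default_context()
            with socket.create_connection((HOST, 443)) as sock:
                with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                    name, version, bits = tls.cipher()
                    # An (EC)DHE key exchange is what makes the session key forward-secret.
                    pfs = name.startswith(("ECDHE", "DHE"))
                    print(name, version, bits, "PFS" if pfs else "no PFS")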

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Time for SHTTP. The useless certificate authorities can go and die in a dark hole. Your bank can send you their public key. For most places you wouldn't even need a password.
      OpenSSH is developed by competent people and it has virtually no preauthentication attack surface.
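
      And checking that the key your bank mailed you matches what the server presents is only a few lines. A minimal sketch using Python's standard ssl and hashlib modules, pinning the whole certificate rather than just the key for brevity (the host name and expected digest are placeholders):

        import hashlib
        import socket
        import ssl

        HOST = "bank.example"  # placeholder
        EXPECTED = "ab12..."   # hex SHA-256 fingerprint received out of band (placeholder)

        ctx = ssl.create_default_context()
        with socket.create_connection((HOST, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                # Fingerprint the DER-encoded leaf certificate the server sent.
                der = tls.getpeercert(binary_form=True)
                fp = hashlib.sha256(der).hexdigest()
                print("match" if fp == EXPECTED else "MISMATCH", fp)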

      • Your bank can send you their public key.

        That is the key problem with schemes that don't involve a CA. A bank will be sending me bits of paper anyway when I open a new account (the better ones will even send a fob for two-factor auth), so sending an extra bit of paper saying "this is the fingerprint of our signing key; when your browser asks you to confirm a certificate, make sure the signer fingerprint matches this one" is no hardship. But what about sites that don't have any other comms channel with their users? How do they prove th

        • > But what about sites that don't have any other comms channel with their users?

          They never have been secure and they never will be, because an early enough MITM attack renders checksums, certificates, and certificate authorities potentially irrelevant. You got your certificates from the internet or from a preinstalled OS that has likely been vetted by some agency. Your packets travel along thanks to routers running closed-source OSes. Your cellphone is designed to permit the modem to do what heartbleed di

      • How do you know it's your bank's public key and not someone else's?

        That's the ENTIRE problem that CAs attempt to solve.

    • by sjames ( 1099 )

      It'll never happen. Look at how hard it was to finally kill IE6. Then add all those embedded web servers in APs, switches, routers, NAS boxes, etc., and imagine getting their firmware updated (not to mention the devices that are no longer updated by the manufacturer at all).

  • Oh, man, what a mess (Score:4, Interesting)

    by 93 Escort Wagon ( 326346 ) on Sunday April 13, 2014 @01:12PM (#46741175)

    I do have to wonder if the task was made easier given the purpose of the server. After all, I'd think it wouldn't get traffic at all except for those people responding to the challenge. But still, this proved it's possible.

    So not only do those of us responsible for web servers need to generate new server certs for all of our servers... pretty much every current web server cert in existence also needs to be revoked. Are the CAs even willing/able to do something on that scale in a short amount of time?

    • by sphealey ( 2855 ) on Sunday April 13, 2014 @01:50PM (#46741421)

      From the linked site: "He sent at least 2.5 million requests over the course of the day." So: no rate limiters, anti-DDoS protection, or other active countermeasures in operation. Reasonable for this challenge, but not overly realistic.
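
      For what it's worth, the first-line countermeasure is dead simple; a toy token-bucket rate limiter in Python (the rate and capacity are made-up numbers, and a real deployment would keep one bucket per client IP):

        import time

        class TokenBucket:
            """Allow a burst of `capacity` requests, refilled at `rate` per second."""

            def __init__(self, rate=10.0, capacity=100):
                self.rate, self.capacity = rate, capacity
                self.tokens = float(capacity)
                self.last = time.monotonic()

            def allow(self):
                now = time.monotonic()
                # Refill in proportion to elapsed time, capped at capacity.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1.0:
                    self.tokens -= 1.0
                    return True
                return False

        bucket = TokenBucket()
        print(sum(bucket.allow() for _ in range(10000)))  # only the initial burst passes

      Of course, as the reply below notes, a rented botnet spreads the requests across thousands of sources and walks right past per-IP limits.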

      sPh

      • With the number of botnets out there costing peanuts to rent, it would be pretty tough to protect against this in any meaningful manner without prior knowledge of the exploit.
    • by heypete ( 60671 ) <pete@heypete.com> on Sunday April 13, 2014 @01:52PM (#46741445) Homepage

      So not only do those of us responsible for web servers need to generate new server certs for all of our servers... pretty much every current web server cert in existence also needs to be revoked. Are the CAs even willing/able to do something on that scale in a short amount of time?

      Netcraft actually has an interesting article [netcraft.com] about that very situation.

      Obviously, the CAs don't really have a choice in the matter, but I can't imagine they really have capacity issues with the actual revoking/signing, as that's all automated. If things get crazy busy, they can always queue things -- for most admins it doesn't really matter if the new cert is issued immediately or after 15 minutes.

      Human-verified certs like org-validated and EV certs might see some delays, but domain-validated certs should be quick to reissue.

      Of course, revocation checking in browsers is really bad [netcraft.com]. Ideally, all browsers would handle revocation checking in real time using OCSP, and all servers would have OCSP stapling [wikipedia.org] enabled (this way the number of OCSP checks scales with the number of certs issued, not the number of end-users). Stapling would reduce the load on CA OCSP servers and let certs be verified even on a network that blocks OCSP queries (e.g. you connect to a WiFi hotspot with an HTTPS-enabled captive portal that blocks internet traffic until you authenticate; without stapling there'd be no way to check the revocation status of the portal).
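
      For the curious, a raw OCSP check is not much code. A minimal sketch, assuming the pyca/cryptography package and PEM copies of the certificate and its issuer (stapling just means the server runs the equivalent of this query itself and attaches the signed response to the handshake):

        import urllib.request
        from cryptography import x509
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.x509 import ocsp
        from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

        def ocsp_status(cert_pem, issuer_pem):
            cert = x509.load_pem_x509_certificate(cert_pem)
            issuer = x509.load_pem_x509_certificate(issuer_pem)
            # The responder URL is published in the cert's AIA extension.
            aia = cert.extensions.get_extension_for_oid(
                ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
            url = next(d.access_location.value for d in aia
                       if d.access_method == AuthorityInformationAccessOID.OCSP)
            req = ocsp.OCSPRequestBuilder().add_certificate(
                cert, issuer, hashes.SHA1()).build()
            http_req = urllib.request.Request(
                url, data=req.public_bytes(serialization.Encoding.DER),
                headers={"Content-Type": "application/ocsp-request"})
            with urllib.request.urlopen(http_req) as resp:
                return ocsp.load_der_ocsp_response(resp.read()).certificate_status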

      Also, browsers should treat an OCSP failure as a show-stopper (though with the option for advanced users to continue anyway, similar to what happens with self-signed certificates).

      Sadly, that's basically the opposite of how things work now. Hopefully things will change in response to Heartbleed.

    • by mysidia ( 191772 ) on Sunday April 13, 2014 @02:01PM (#46741495)

      pretty much every current web server cert in existence also needs to be revoked. Are the CAs even willing/able to do something on that scale in a short amount of time?

      Calm down. The majority of web servers are not vulnerable and never were. All in all, less than 30% of SSL sites need to revoke any keys.

      Some websites run with their SSL crypto operations performed by a FIPS 140-2 hardware security module; these are not vulnerable, since OpenSSL doesn't have access to the private key stored in the server's hardware crypto token.

      Many web sites are running on Windows IIS. None of these servers are vulnerable.

      Plenty of web sites are running Apache with mod_nss instead of mod_ssl. None of the websites using the LibNSS implementation of SSL are vulnerable.

      Many web sites are running on CentOS 5 servers with Red Hat's OpenSSL 0.9.x packages. None of these servers were ever vulnerable.

      Many web sites are running on CentOS 6 servers that had not updated OpenSSL beyond 1.0.0. These websites weren't vulnerable.

      Many websites are running behind an SSL-offload load balancer instead of using OpenSSL. Many of these sites were not vulnerable.
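
      For everything else, the triage reduces to a version check -- with the big caveat that distro packages (Red Hat's, for instance) backport fixes without bumping the version string, so the check only applies to upstream builds. A minimal sketch against the upstream numbering (1.0.1 through 1.0.1f shipped the bug; 1.0.1g fixed it):

        def heartbleed_vulnerable(version):
            """True if an *upstream* OpenSSL version falls in the affected range."""
            if not version.startswith("1.0.1"):
                # The 0.9.8 and 1.0.0 branches never had the TLS heartbeat code.
                return False
            suffix = version[len("1.0.1"):]
            return suffix == "" or suffix <= "f"

        for v in ("0.9.8y", "1.0.0k", "1.0.1f", "1.0.1g"):
            print(v, heartbleed_vulnerable(v))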

      • by Vairon ( 17314 )

        All websites running any publicly released version of SUSE Linux Enterprise Server with the distribution's OpenSSL package were not vulnerable to Heartbleed.

    • Comment removed based on user account deletion
      • by pnutjam ( 523990 )
        I don't see what the point of that article is. Sure, lots of people were requesting certs. I saw their site slow to a crawl, going from 30 seconds to 5 or 10 minutes to load the page so I could request certs.

        They did not, however, change any of their intermediate trust-chain certs. I don't see that Comodo really did anything except issue certs as requested, like always.
    • I do have to wonder if the task was made easier given the purpose of the server. After all, I'd think it wouldn't get traffic at all except for those people responding to the challenge.

      On the contrary, it may have made things harder.

      Reading the private key relies on forcing malloc() to reuse some small block from the free list with a lower address than the block containing the key, instead of simply carving a new block out of free memory (at an address higher than the key).

      That may be easi
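
      Allocator recycling is easy to observe even from a high-level language; a toy illustration in Python (this is CPython-specific behavior and nothing guarantees it, but it shows the same recycling that the attack leans on):

        # In CPython, id() is the object's memory address, and a freshly freed
        # small object's slot is often handed right back out on the next allocation.
        a = object()
        addr = id(a)
        del a
        b = object()
        print(addr == id(b))  # frequently True on CPython, never guaranteed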

  • Interestingly enough, my browser (Firefox) doesn't let me access https://www.cloudflarechallenge.com/ [cloudflarechallenge.com], complaining about the security certificate...?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Be glad...

      I can see it with my browser, and this is what I can read (among other things):
      "Can you see this site? You shouldn't be able to, we have revoked the certificate. If you can still see this message, Certificate Revocation may be broken in your browser. See this post for more details."

      • by _Shad0w_ ( 127912 ) on Sunday April 13, 2014 @01:32PM (#46741323)

        Chrome turns the "check for revocation" option off by default, it seems.

          • You're right! But why!?!?

            Why would it turn off revocation checking by default? Is there any possible reason that this is anything but grossly irresponsible?

          • Because it can massively slow down access to SSL sites.

    • Cloudflare said on their wall on Facebook that they were going to leave the site up, as a means of checking how browsers deal with revoked certs: https://www.facebook.com/Cloud... [facebook.com] So Firefox is probably doing the right thing here. Last time I looked, Chrome didn't display any warning. Which is nice.
      • Re: (Score:2, Informative)

        by Anonymous Coward

        Chrome has online revocation checking turned off by default -- you can go to Settings -> Advanced and switch on "Check for server certificate revocation" under the HTTPS/SSL section.

      • Not sure about the branded Chrome, but Chromium on my Arch machine shows one hell of a scary message, with the browser downright refusing to connect due to the certificate.
  • by Anonymous Coward

    Until they revoke all potentially compromised CAs and roll out a brand new set, I don't see how they can consider the breach closed, even if the vuln itself is fixed.

  • by Anonymous Coward

    Most sites don't have PFS enabled, and that means anyone who recorded a site's traffic prior to the publication of the bug needs only a short time to get the key and can then decrypt all the recorded sessions. The Heartbleed exploit doesn't just jeopardize the data currently flowing through OpenSSL while the attacker is reading server memory through malicious heartbeat requests. If you used a vulnerable server via a public Wi-Fi hotspot in the past two years and someone else recorded your session, th

    • by ledow ( 319597 )

      When I looked into my server, I found out:

      The OpenSSL library I'm using wasn't vulnerable.
      Thus, my keys are as "safe" as they were before.

      Also, to enable PFS, I would have to upgrade - to one of those OpenSSL versions that is vulnerable (but obviously there are "fixed" ones now).

      I would also only be able to get PFS with EC cryptography under OpenSSL. I don't trust EC personally, yet. It just hasn't been around long enough for me. And I find it suspicious that every time something happens, the answer is "Le

      • by jonwil ( 467024 )

        https://www.openssl.org/docs/a... [openssl.org] suggests that OpenSSL (the official upstream version at least) does in fact support DHE and PFS without EC.
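
        Wiring that up doesn't need EC anywhere. A minimal sketch using Python's standard ssl module (the file paths are placeholders, and the DH parameters would be generated once with the openssl dhparam command):

          import ssl

          # Restrict the server to non-EC ephemeral Diffie-Hellman suites, so
          # session keys are forward-secret without relying on EC at all.
          ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
          ctx.load_cert_chain("server.crt", "server.key")  # placeholder paths
          ctx.load_dh_params("dhparams.pem")               # placeholder path
          ctx.set_ciphers("DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256")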

      • by mysidia ( 191772 )

        I would also only be able to get PFS with EC cryptography under OpenSSL. I don't trust EC personally, yet. It just hasn't been around long enough for me.

        The promise of PFS is that a private key compromised or lost after the fact does not compromise the contents of past sessions. That means it's useless for an attacker to intercept thousands of SSL sessions and then later attempt to break into the server; they need the private key at the time of the attack.

        Your argument is the equivalent of say

  • by gweihir ( 88907 ) on Sunday April 13, 2014 @01:17PM (#46741213)

    Seriously, how out-of-touch can you get? That the X.509 global certificate system has been fundamentally compromised has been well known for quite a few years to everybody who follows the news in even a cursory fashion.

    • by fuzzyfuzzyfungus ( 1223518 ) on Sunday April 13, 2014 @01:28PM (#46741303) Journal
      The bigger issue is that even the people who don't trust the (braindead, but too convenient to die) "Hey! Let's just trust about 150 zillion different 'secure' Certificate Authorities, and if they signed the cert and it matches the domain everything must be OK!" model are still pretty screwed if whatever specific certificate or certificates they are using are now also in the hands of some unknown and probably malicious 3rd party...

      There's a pretty big difference between 'because the system is pretty stupid, you can generate a valid certificate for any domain by knocking over any one of an alarming number of shoddy and/or institutionally captured CAs' and 'your private key, yours specifically, can be remotely slurped out of your system and used to impersonate it exactly'.
      • by gweihir ( 88907 ) on Sunday April 13, 2014 @03:05PM (#46741941)

        I am not sure it is a bigger issue, since many of these sites will not be publicly reachable. But it definitely is an issue, for example, for large corporations that use SSL on their intranet with self-signed certificates. They now have to wonder whether some of their staff have attacked their servers this way.

        • I agree that it isn't a bigger issue in terms of expected ongoing pain/users affected, since the issue with trusting too many shady/incompetent CAs is showing no signs of real solution ('pinning' is an OK hack, so far as it goes; but it doesn't go very far on most users' systems and nobody seems to have an actual ready-for-prime-time solution that shows signs of making it out the door).

          I was thinking 'bigger' in that only SSLed stuff accessed by excessively-trusting systems can be compromised by a rogue
  • by Anonymous Coward

    Like this would come as a surprise?
    Like Heartbleed in itself is a surprise? For every major zero-day like this that comes out of the dark, there are ten more in the forest.
    People in general have no clue. Most organizations have no clue. The media have no clue. Only the people in the dark know.
    Anyone working in the industry knows better than to trust anything online or supposedly safe.
    Secret stuff fares far better offline, or in general "snail mail", than online.
    No software is safe, ever has been or ever will be.
    The chai

  • by SuricouRaven ( 1897204 ) on Sunday April 13, 2014 @01:36PM (#46741343)

    Fuck.

    (Except here in the UK, we are more creative with our profanity.)

  • Well yeah, considering the severity and the size of the attack vector. I'm sure the NSA are having a field day over at HQ, too (Hi, BTW).
  • Tools for checking (Score:4, Informative)

    by bobstreo ( 1320787 ) on Sunday April 13, 2014 @02:08PM (#46741531)

    There are a couple of tools available at:

    https://github.com/Lekensteyn/... [github.com]

    It's Python-based, so YMMV.

    They will tell you if you are vulnerable (see the README.md file).

    • The cool feature of Pacemaker is that it checks TLS *clients*, actually. There are other tools for server checks (one of which is included with Pacemaker) but it's actually very important to make sure any clients you have are invulnerable to Heartbleed as well. Software that ships with bundled or integrated OpenSSL libraries - and I've seen quite a few - could be vulnerable to this.

  • by Anonymous Coward on Sunday April 13, 2014 @02:39PM (#46741753)

    Coverity is a static analysis tool. It was run on the source code containing the Heartbleed vulnerability and did not find it. The developers of Coverity made a proof-of-concept modification to treat variables as tainted if they're subjected to endianness conversion, based on the assumption that such variables contain external and thus potentially hostile data. With this modification, Coverity finds the Heartbleed bug, as described in this blog post [regehr.org]. Note the comment below the screenshot: "As you might guess, additional locations in OpenSSL are also flagged by this analysis, but it isn't my place to share those here." This may just be a consequence of not detecting all the ways in which a tainted variable is sanitized, or it may point to more problems.
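
    The heuristic is simple enough to caricature in a few lines; a toy sketch over raw source text (regex-level pattern matching, nothing like a real static analyzer):

      import re

      # Treat any variable assigned from a byte-swap as tainted attacker data,
      # then flag memcpy calls that use a tainted variable as the length.
      ASSIGN = re.compile(r"(\w+)\s*=\s*ntoh[sl]\s*\(")
      MEMCPY = re.compile(r"memcpy\s*\([^;]*,\s*(\w+)\s*\)")

      def flag(source):
          tainted = set(ASSIGN.findall(source))
          findings = []
          for lineno, line in enumerate(source.splitlines(), 1):
              m = MEMCPY.search(line)
              if m and m.group(1) in tainted:
                  findings.append((lineno, m.group(1)))
          return findings

      # The Heartbleed shape: a length read off the wire, then trusted by memcpy.
      print(flag("payload = ntohs(hb->length);\nmemcpy(bp, pl, payload);"))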

  • How do I become a trusted root certificate authority?
    • by dkf ( 304284 )

      How do I become a trusted root certificate authority?

      You ask the browser vendors, who respond by asking some very pointed questions about how trusted you are. These sorts of questions include "do you have regular audits to ensure that you're managing your keys correctly?" and "what policies do you have in place for dealing with a security breach that compromises one of the keys you've signed?" Convince enough people that you're really trustworthy, and congratulations, you're a root CA. At least until the next time they ask those questions. It's only really re
