
SSL/TLS Vulnerability Widely Unpatched

kaiengert writes "In November 2009 a Man-In-the-Middle vulnerability for SSL/TLS/https was made public (CVE-2009-3555), and shortly afterwards demonstrated to be exploitable. In February 2010 researchers published RFC 5746, which described how servers and clients can be made immune. Software that implements the TLS protocol enhancements became available shortly afterwards. Most modern web browsers are patched, but the solution requires that both browser developers and website operators take action. Unfortunately, 16 months later, many major websites, including several that deal with real-world transactions of goods and money, still haven't upgraded their systems. Even worse, for a large portion of those sites it can be shown that their operators failed to apply the essential configuration hotfix. Here is an illustrative list of patched and unpatched sites, along with more background information. The patched sites demonstrate that patching is indeed possible."
  • by roguegramma ( 982660 ) on Monday June 20, 2011 @02:54PM (#36504932) Journal
    I recall that firefox detects this, so using firefox + firebug and checking the console might tell you if a site is vulnerable.
    • by nzac ( 1822298 )

      According to the firebug console my bank is unpatched.

      [bank].co.nz : server does not support RFC 5746, see CVE-2009-3555

      Method:
      Install Firebug and open its Console panel.
      Enable and check all output categories except JavaScript warnings (not sure which is the right one) and reload the page.
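
      If you'd rather script the check than click through Firebug, here's a rough sketch that shells out to the stock openssl binary and greps its output. It assumes openssl s_client from OpenSSL 0.9.8m or later is on your PATH; patched servers make it print "Secure Renegotiation IS supported":

      #!/usr/bin/env python
      # Rough RFC 5746 checker: runs `openssl s_client` against each host
      # given on the command line and looks for the secure-renegotiation
      # line in its output. A sketch, not a hardened tool.
      import subprocess
      import sys

      def check_host(host, port=443):
          proc = subprocess.Popen(
              ["openssl", "s_client", "-connect", "%s:%d" % (host, port)],
              stdin=subprocess.PIPE, stdout=subprocess.PIPE,
              stderr=subprocess.PIPE)
          out, _ = proc.communicate(input=b"QUIT\n")
          text = out.decode("utf-8", "replace")
          if "Secure Renegotiation IS supported" in text:
              return True
          if "Secure Renegotiation IS NOT supported" in text:
              return False
          return None  # handshake failed or output not recognised

      if __name__ == "__main__":
          for host in sys.argv[1:]:
              status = check_host(host)
              label = {True: "patched (RFC 5746)",
                       False: "UNPATCHED, see CVE-2009-3555",
                       None: "could not determine"}[status]
              print("%s: %s" % (host, label))

      Run it as "python checkreneg.py yourbank.co.nz example.com" and it prints one verdict per host.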

  • Anyone have a link to a simple test script that I could use to check the sites of our suppliers (and to verify our own servers' configurations first)? None of the linked articles mention one that is readily available.
    • Re:Self test? (Score:5, Informative)

      by Mysteray ( 713473 ) on Monday June 20, 2011 @03:00PM (#36505040)
      I like Qualys' SSL Labs [ssllabs.com]
      • And if you have a self-signed certificate, you fail with an F no matter how good everything else is.

        So once again it's a site implying that unencrypted plain http is somehow better than a TLS connection that happens to be using an unsigned certificate.

        And firefox is still raising these big scary alerts about unsigned certs encouraging people to use unencrypted http instead of https. Just goddamned brilliant.

        It's the doorlock equivalent of raising a stink that since you don't have a triple deadbolt, just

        • And if you have a self-signed certificate, you fail with an F no matter how good everything else is.

          Because with a self-signed certificate you are vulnerable to MiTM attacks.

          So once again it's a site implying that unencrypted plain http is somehow better than a TLS connection that happens to be using an unsigned certificate.

          A self-signed certificate gives a false sense of security, so plain http is better. Here is the breakdown:
          * Proper signed certificate with verifiable trust chain: protects against active MiTM attacks and passive eavesdropping so the user has all the protection that HTTPS can offer
          * Self-signed certificate: will protect from casual eavesdropping but does not protect against MiTM attacks, because a MiTM attacker can fake the cert easily

          • > A self-signed certificate gives a false sense of security, so plain http
            > is better

            [...]

            Sorry bro...1995 called and wants its arguments back.

            On the surface you sound correct and reasonable. And yet, it's still total nonsense.
            Not to be too personal....most of the nonsense is really in the SSL model as used (like trusting people you have no reason to trust) and browsers by extension implementing that messed up model.

            To make it short:
            a CA-signed certificate does not protect you from MITM attacks. Why

            • Not to be too personal....most of the nonsense is really in the SSL model as used (like trusting people you have no reason to trust) and browsers by extension implementing that messed up model.

              So your argument is essentially that, since browsers by default trust a number of people you have no particular reason to trust (the current holders of generally accepted CA certs), we should just let go and trust everybody in the whole world? The trust model might be a bit broken, but the answer is not to break it completely and just give up on it without having a suitable replacement.

              Whereas a self-signed certificate does not automatically make you vulnerable to an MITM. It depends entirely on whether the fingerprint of the presented cert is distributed through some other, out-of-band channel. In fact, it can even be more secure if you use it right.

              You seem to be proposing the model used by SSH. This falls down on the key exchange issue though. While the browser can ve
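
              FWIW, that out-of-band fingerprint check is easy to script. A sketch using only the Python standard library, assuming you already obtained the expected SHA-256 fingerprint over some trusted channel (the hostname and expected value below are placeholders):

              # Fetch a site's certificate and compare its SHA-256 fingerprint
              # against one obtained out-of-band (read over the phone, printed
              # on paper, etc.). Note: get_server_certificate() here does NOT
              # validate the cert; the fingerprint comparison is the check.
              import hashlib
              import ssl

              def cert_fingerprint(host, port=443):
                  pem = ssl.get_server_certificate((host, port))
                  der = ssl.PEM_cert_to_DER_cert(pem)
                  return hashlib.sha256(der).hexdigest()

              expected = "replace-with-fingerprint-from-trusted-channel"
              actual = cert_fingerprint("example.com")
              print("MATCH" if actual == expected else "MISMATCH: possible MITM")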

              • The MitM vulnerability is only there the first time you connect to a self-signed certificate site. After that you're fine.

                With plain http, you are vulnerable every time you connect to the site. They don't even need MitM; they can just sniff the session. This is absolutely terrible.

                Folks like me who run small sites use self-signed certs all the time, because we don't want to pay the extortion fees of the CA keepers, who seem to dole out certificates without really checking anyway.

                Self signed certs are

                • The MitM vulnerability is only there the first time you connect to a self-signed certificate site. After that you're fine.

                  Which is why Firefox's behaviour is correct. It cannot verify the certificate against any of the CA certs it has been told to trust the first time it sees it, so it warns you that it has no idea whether the cert is valid and that you may be about to connect to EvilCorp pretending to be GoodieTwoShoes Inc. You can then inspect the certificate, confirm it is genuine using information gleaned from a known secure source, and add a permanent exception for it so you won't be bothered when connecting to that site in the future.

                  • Well I suppose I should look on the bright side.

                    Since browsers make such a fuss about self-signed certs, all webserver installations default to plain http. They could generate a self-signed cert on install and serve https by default (redirecting http to https), but this is unworkable thanks to browsers treating self-signed worse than unencrypted.

                    Thanks to this, I can sniff my LAN and haul in gobs of login/password combinations (like from slashdot.org, which doesn't support https), since the vast majority o

                    • Since browsers make such a fuss about self-signed certs, all webserver installations default to plain http. They could generate a self-signed cert on install and serve https by default (redirecting http to https), but this is unworkable thanks to browsers treating self-signed worse than unencrypted.

                      Web servers do not default to HTTPS because self-signed certs are (rightly) not convenient to use in the wild. Until about five or six years ago (before people realised how easily properly certificated sites could be MitMed that way if you manage a DNS poisoning attack first), browsers didn't make the big fuss, and HTTPS-with-self-signed-certs wasn't done by default then either. They default to HTTP because HTTPS is not needed for most content. Back in the day when I were nowt but knee 'igh t' grass'opper the extra p

                      Gah, what a lot of verbiage. Please let's keep this brief.

                      Here is a typical and realistic scenario instead: You connect to my site, and being the paranoid type you prefer https. You get a self-signed cert. Would you grumpily accept it or would you go to http? I'd imagine the former.

                      Next, you bring your laptop to a Starbucks, where for all you know some bastard has stuck a hub and plug computer on the router when it was all installed and the proprietor has no idea. You want to use my site again. Would you

                    • Here is a typical and realistic scenario instead: You connect to my site, and being the paranoid type you prefer https. You get a self-signed cert. Would you grumpily accept it or would you go to http? I'd imagine the former.

                      Neither. If I didn't care who might snoop what I'm about to read/post, I might either go with http or let the self-signed cert in (though just this once, not as a permanent exception), though obviously not for the webmail example you quoted above, as that would involve a username+password and other content I might care about. If I did care about the info, and if (and only if) I was expecting a self-signed certificate and had a fingerprint from a secure source to check against or some other way to verify the cert, t

                    • I will repeat myself:

                      Self-signed certs are not ideal, but they are definitely better than plain http. All I ask is that the browsers don't make quite such a big fuss about them. They could simply say "This connection is using weak encryption. No bank or large institution would do this. [Ok this time] [Cancel] [Ok forever]" instead of the big fuss that Firefox currently makes. That message isn't entirely accurate, but it more or less explains it to a normal user. If it's just some free webmail site (like mine), that's fine.

                      The above is all I want. To summarize:

                      Signed TLS is terrific.

                      Self-signed TLS is less so.

                      Plain http is terrible.

                      My entire complaint is that browsers are currently not reflecting this. They are reversing the last two. If you maintain that plain http is equal to or better than self-signed TLS, then we have nothing more to discuss.

                    • The above is all I want. To summarize:
                      Signed TLS is terrific.
                      Self-signed TLS is less so.
                      Plain http is terrible.
                      My entire complaint is that browsers are currently not reflecting this. They are reversing the last two.

                      That is not how I (or people in the security community) see it.

                      * http is fine for things that need no transport security; if your service should have transport-level security, then do not provide the service over http

                      * https with a properly signed certificate is for things that do need transport security

                      * self-signed certificates are intended for testing only. If they are used in live environments, then users must be warned about the possible dangers of accepting one. If you provide your users with a

                    • Why on earth should I have to pay some random extortionist bloke just to have an encrypted connection?

                      Seriously, your whole gambit is all about that. Why can't I just have a D-H exchange and be done with it? Why should I have to involve and pay some random corporation just to have an encrypted connection?

                      I honestly don't understand your hatred of self-signed certs. Obviously it's not a lot of money, but I see no reason to pay $10 yearly to some random company for encryption that I can do myself. Furthermo

                    • Why can't I just have a D-H exchange and be done with it?

                      That is what you have with a self-signed certificate. DH key exchange ensures that communication between A and B cannot be read by E in the middle without needing a pre-determined key (i.e. it solves the distribution part of the key distribution problem), but it offers no assurance as to the identity of either party, which is the other half of what TLS is intended to provide. So if E is in the middle as the handshake starts, that person can mediate by setting keys between A & E and E & B with
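
                      To make that concrete, here's a toy DH exchange with laughably small numbers (real TLS uses large primes or curves; these values are purely illustrative):

                      # Toy Diffie-Hellman, tiny numbers, purely to illustrate:
                      # both sides derive the same secret, yet neither learns WHO
                      # the other side is. Never use numbers this small for real.
                      import random

                      p, g = 2087, 5                  # public toy prime and generator

                      a = random.randrange(2, p - 1)  # A's private exponent
                      b = random.randrange(2, p - 1)  # B's private exponent

                      A = pow(g, a, p)                # sent in the clear
                      B = pow(g, b, p)                # sent in the clear

                      assert pow(B, a, p) == pow(A, b, p)  # same secret on both ends

                      # ...but if E intercepts and substitutes her own public values,
                      # A unknowingly keys with E and E keys with B: encryption
                      # without authentication, exactly the identity gap that
                      # certificates are supposed to close.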

      • Interesting... My bank is marked down because it supports both patched and unpatched renegotiation. Really, there's no need to support the unpatched one.
    • Re:Self test? (Score:4, Informative)

      by caljorden ( 166413 ) on Monday June 20, 2011 @03:28PM (#36505370)

      I spent a few minutes looking for the same thing, and found that Firefox includes a check. If you visit an HTTPS site that is not secure, you will get a message in the Error Console under Messages saying something like this:

      site.example.com : server does not support RFC 5746, see CVE-2009-3555

      For more information, see https://wiki.mozilla.org/Security:Renegotiation [mozilla.org]

  • by Tarlus ( 1000874 ) on Monday June 20, 2011 @02:57PM (#36505000)

    Unfortunately, 16 months later, many major websites, including several that deal with real-world transactions of goods and money, still haven't upgraded their systems. Even worse, for a large portion of those sites it can be shown that their operators failed to apply the essential configuration hotfix.

    Lately we've also been finding out that many major websites are storing passwords as plain text and are untested against SQL injection. So it's unsurprising that they're also unpatched.

    Web servers need to be actively watched, maintained and scanned for vulnerabilities. Just because it's a LAMP server doesn't mean it's rock-solid. The fire-and-forget philosophy does not apply.

    • by Hyppy ( 74366 )

      Lately we've also been finding out that many major websites are storing passwords as plain text and are untested against SQL injection. So it's unsurprising that they're also unpatched.

      Web servers need to be actively watched, maintained and scanned for vulnerabilities. Just because it's a LAMP server doesn't mean it's rock-solid. The fire-and-forget philosophy does not apply.

      The problem is generally far beyond the necessary LAMP or IIS patching: The vulnerabilities you describe are flaws in the site's design and code. You can't patch a stupid divaloper.

      • by Anonymous Coward on Monday June 20, 2011 @03:13PM (#36505196)

        You can't patch a stupid divaloper.

        Diva-loper:
        1. (n) A portmanteau [wikipedia.org] of diva and interloper. It describes a software developer (or programmer) who believes themselves to be excellent at their craft, while an independent review of their developed code will demonstrate that the person has no business touching a computer.
        2. (n) A singer (diva) who gets married in Vegas (elopes).

        • Re: (Score:3, Insightful)

          by Anonymous Coward

          Well, that depends. In some orgs the developers show up, build the system, and then it's up to operations to keep it going. Or it's contracted out. Or some 3rd party 'handles it'. Some are afraid to change anything. Others have stacks of legal hurdles to jump through. Others have SLAs they have to keep up with...

          A developer who saw it would say 'yeah, just fix and reboot'. But the IT org would say: uh, we have about 2 mins of downtime budgeted per year; it will happen in six months.

          • by dgatwood ( 11270 )

            The competent IT guy would say, "Okay. Let's take one machine temporarily off of the load balancer's list. Patch that one up. We'll reintroduce it into the farm when you're done. If there are no problems with it, we'll update the remaining machines a few at a time."

            Don't get me wrong, that doesn't always work. For example, it's a lot harder if you're trying to introduce changes that affect the actual client content (e.g. JavaScript fixes). For everything else, there's MasterCard... or something....

            • by Sproggit ( 18426 )

              Actually, you're referring to an IT guy with a competent Manager / Director, who would have ensured that a decent load balancing / ADC solution was in place...
              Now that I think about it, if they used a decent load-balancer / ADC, they would probably be doing SSL termination on the device, so the higher-up would have to be even more competent, and have ensured the devices were purchased / installed in a fail-over pair, with connection mirroring and persistence mirroring enabled, meaning that the standby device can

      • by morcego ( 260031 ) on Monday June 20, 2011 @03:47PM (#36505596)

        Besides the obvious "stupid divaloper" joke, which I will refrain from making, I agree the problem is much bigger.

        It is what I call the "Windows Mentality". "So simple anyone can do it" is another way of stating it.

        Companies (Microsoft is a leader in this) sell their software as something extremely simple, that anyone can install, run and maintain. And, to someone who doesn't understand it (a good number of managers, CEOs and directors), it actually looks that simple. Well, guess again: it is not. I'm sorry, but your 17-year-old intern (hourly rate = 1 candle bar) can't install and maintain a good server. You need someone who actually knows what he is doing, and has the experience and the knowledge to do it well. Oh? Too expensive, is it? Really? I suppose you take your Porsche to be serviced by that tattooed guy at the gas station too?

        Nope, sorry. There are no simple servers. Windows, Linux (LAMP or otherwise): they all require skilled admins, skilled coders and skilled designers. And those cost money. They require regular and constant maintenance. In other words: money.

        That is the real problem. Most companies are just cheap.

        • For the record, Microsoft pushed out (via Windows Update) a patch fully implementing the fix for this well before many other vendors (including some popular Linux distros) did, even though their server (IIS) wasn't nearly as vulnerable in its default configuration as Apache+OpenSSL.
          • by Hyppy ( 74366 )
            You have obviously not come across the special breed of divalopers that we like to call Updatus Avoidus. Above and beyond the lovable characteristics of your run-of-the-mill divaloper, the Updatus Avoidus can be identified by its shrill cries, which often sound like "Don't patch! *squaaaak* My code will break! *squaaaaak*"
        • your 17-year-old intern (hourly rate = 1 candle bar)

          I think that's your problem; we pay our divalopers 3 candles an hour and we never have any problems (or lack of illumination, for that matter).

        • stupid divalopers only eat candle bars.
      • by jd ( 1658 )

        Sufficient duct tape should patch the developers just fine.

    • I hear that's also how many companies deal with the developers of unpatched code... fire them, and forget about the code they wrote. I hear NASA even has that problem, often not even having the code or design of outdated systems. I wonder what the ratio of unpatched-but-fixable to unpatched-and-unknown is.

    • Doesn't this all eerily fit in with the slashdot story about the hating of IT and all the Anon 'hacking' activity lately? Or should I just keep my mouth shut?
      • Yes, the overall security research community has greatly benefited from some of these large password database disclosures. We've learned a lot about password handling practices, both on the back-end (unsalted MD5, or bcrypt?) and among users (password crackability). In fact, there has been enough overlap in the user bases of the breached sites that we can start to look at things like how common password re-use is across multiple sites.
        • My point is that if you walked into Mr Pointyhead's office and started rattling things off like MD5, plain text or ROT-13, eyes would just roll back and he would grumble how much he hates IT and that it should just be stuffed into the Cloud, because the Cloud just works and is safe and makes toast and answers help desk support calls and installs the latest patches and and and... All while saving "8 billion dollars" of salary of those horrible IT people who just sit there and complain all the time and put up
          • "And this is the same guy who will have moved on to some other department when the DB containing all the credit cards and passwords ends up on a tweet and the fingers get pointed at the surly cave dwllers who have been telling Pointyhead to prioritize the patch over his ill thought through bullet points."

            Not even that. It might even happen that the PHB is still there, but then it will have been the turrists and those bad-smelling freaks from IT, and, after all, an unavoidable stroke of bad luck, not his fault. O

    • Comment removed based on user account deletion
      • "or are they just incompetent?"

        Usually admins don't admin but operate (and the bigger the company, the truer this is). You'll have to look higher up the food chain to find the culprit.

      • by L-four ( 2071120 )
        Also, a lot of the time you're unable to "restart the server" because it's always in use, and if you were to restart it, even with notice, you would get like 50 phone calls saying the server has gone down. So it's easier and less time-consuming to just let the servers keep their 600+ day uptime.
  • Is this why StartSSL is down as seen here? [startssl.com]

    I am wondering if this is why one of my sites is now showing the "untrusted site" screen in firefox?

    Error code:
    blah.com uses an invalid security certificate.
    The certificate is not trusted because no issuer chain was provided.
    (Error code: sec_error_unknown_issuer)

    • Note: in Chrome I get the green https, so I am confident that it is installed correctly.
      (and it was fine in firefox a few weeks ago)

    • Not likely, and no.

      StartSSL says they're down due to a security breach, not an inability to patch their servers. It's possible the attackers used the vulnerability mentioned here to breach the site, but that's a stretch. It could have been one of many other vulnerabilities.

      And the untrusted site error you're receiving is due to a certificate problem at the site you're visiting. Blah.com's certificate is possibly self-signed or the issuer's certificate was revoked. Or their certificate is simply cr
    • by heypete ( 60671 )

      Is your server configured properly to send both the server certificate *and* the intermediate certificate?

      Some browsers are more tolerant of such misconfigurations, and may be able to acquire the appropriate intermediate through a separate channel (e.g. Chrome and IE on Windows can often get certs from Microsoft Update in the background), while others are less tolerant.
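
      You can see exactly what your server hands out with openssl's -showcerts option. Here's a quick sketch that just counts the certificates presented in the handshake; one certificate and no intermediates is the classic symptom of this misconfiguration (unless your cert is signed directly by a root). Assumes the openssl binary is on PATH:

      # Count certificates the server presents during the TLS handshake by
      # parsing `openssl s_client -showcerts` output. If only the leaf comes
      # back and its issuer is an intermediate CA, browsers that don't have
      # that intermediate cached will throw sec_error_unknown_issuer.
      import subprocess
      import sys

      def certs_sent(host, port=443):
          proc = subprocess.Popen(
              ["openssl", "s_client", "-showcerts",
               "-connect", "%s:%d" % (host, port)],
              stdin=subprocess.PIPE, stdout=subprocess.PIPE,
              stderr=subprocess.PIPE)
          out, _ = proc.communicate(input=b"QUIT\n")
          return out.decode("utf-8", "replace").count("BEGIN CERTIFICATE")

      host = sys.argv[1]
      n = certs_sent(host)
      print("%s sent %d certificate(s)%s" % (
          host, n, " - intermediate chain probably missing" if n < 2 else ""))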

    • by Nursie ( 632944 )

      They've been hacked. Not through this (probably).

      It's another example of the broken PKI that we have on the web. There aren't many real bugs in the SSL/TLS protocol family any more (clearly the one in TFA should be patched in more places) but the infrastructure of "trusted" authorities used by the web is b0rked, IMHO.

      Too many authorities, too many unknowns.

  • In November 2009 a Man-In-the-Middle vulnerability for SSL/TLS/https was made public (CVE-2009-3555), and shortly afterwards demonstrated to be exploitable.

    Isn't a vulnerability, by definition, exploitable?

    • by roothog ( 635998 )

      In November 2009 a Man-In-the-Middle vulnerability for SSL/TLS/https was made public (CVE-2009-3555), and shortly afterwards demonstrated to be exploitable.

      Isn't a vulnerability, by definition, exploitable?

      "Demonstrated to be exploitable" means "actually wrote the exploit, and it worked". It's a step beyond simply thinking about it hypothetically.

      • I'd published packet captures of the exploit in action as part of the initial disclosure. Someone else had working exploit code posted to [Full-Disclosure] within hours.
    • Re: (Score:3, Interesting)

      by Calos ( 2281322 )

      Interesting question. I guess you could argue that a theoretical shortcoming isn't a vulnerability if there's no practical exploit.

      But that ignores the temporal part of it. It is only not a vulnerability because it's not practically exploitable right now. Things change, technology changes, new avenues for attacking the shortcoming open up.

      It's like the recent proven exploit we saw a few days ago on a quantum message transfer. The method had been theorized, but never shown. Now that it's been shown

        It is only not a vulnerability because it's not practically exploitable right now

        Yea it is, a guy already did a PoC with Twitter.

    • The blind plaintext injection capability that an exploit gives to the attacker was uncommon at the time and the initial reaction among experts was that it looked a lot like a CSRF attack. Most important sites had built in some protections against that.

      It wasn't until a few days later when it was demonstrated against a social networking site (Twitter) that the problem was declared "real" (by Slashdot).

      So it's a complex exploit and it did take a few days for a consensus to emerge about the actual severity
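
      For anyone who hasn't seen what "blind plaintext injection" buys the attacker: the MITM opens his own TLS session to the server, transmits a request prefix, then splices the victim's handshake in via renegotiation, so the victim's first request (cookies and all) gets glued onto the attacker's bytes. Roughly like this (the URL and headers are illustrative, not the literal Twitter exploit):

      # What the server's HTTP layer sees after the splice: the attacker's
      # pre-renegotiation bytes followed by the victim's own request. The
      # attacker never decrypts anything; he only prepends chosen plaintext.

      attacker_prefix = (
          b"POST /statuses/update.xml HTTP/1.1\r\n"
          b"Host: twitter.com\r\n"
          b"Content-Length: 700\r\n"  # sized to swallow the victim's request
          b"\r\n"
          b"status="                  # victim's bytes become the POST body
      )

      victim_request = (
          b"GET /home HTTP/1.1\r\n"
          b"Host: twitter.com\r\n"
          b"Cookie: session=SECRET-SESSION-TOKEN\r\n"  # now posted as a tweet
          b"\r\n"
      )

      # The victim's authenticated request ends up executing the attacker's
      # chosen action under the victim's own cookies.
      print((attacker_prefix + victim_request).decode())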

    • Depends on your definition of "vulnerability". For example, there's a vulnerability in AES's key schedule that weakens the 256- and 192-bit versions down to roughly 99.5 bits of security. However, this is not an exploitable vulnerability, as the stars will all have gone cold and dark by the time such an attack completes.

      • It may be that 2^100 computation will never be practical for any plausible attacker, but it's not the truly cosmic level of work you make it out to be.
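
        Back-of-envelope: 2^100 is about 1.3 x 10^30 operations. A billion machines each doing a billion AES operations per second (10^18/s combined) would need about 1.3 x 10^12 seconds, or roughly 40,000 years. Out of reach for any plausible attacker, sure, but the Sun has around 5 billion years left, so "stars gone cold" oversells it by several orders of magnitude.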
    • Before an exploit is demonstrated, a vulnerability is only theoretically exploitable.
      • Perhaps, but who's going to pay for the development of the first exploit? The attacker or the defender?
    • by Spad ( 470073 )

      Exploitable in theory is not the same as exploitable against a "live" target. It's still a vulnerability either way.

  • http://pastebin.com/uRsvDd82 [pastebin.com]

    I still have the html. If anyone has an idea where to host it let me know.

  • by Anonymous Coward on Monday June 20, 2011 @03:30PM (#36505406)

    After reading this article I ran a quick audit on all of our server farms and noticed that KB980436 was dutifully installed in Sept 2010... however, upon closer scrutiny I noticed that this security patch from Microsoft doesn't prevent this vulnerability by default (it keeps the server in "Compatibility Mode" instead). Windows sysadmins need to take care to read MS10-049 and add the appropriate registry keys to enforce "Strict Mode" to keep their servers from being vulnerable to this exploit. FYI, downloading and installing KB980436 is not enough. (A sketch of the registry change follows below.)
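
    For reference, here's a sketch of flipping those switches from Python via winreg. The value names (AllowInsecureRenegoClients / AllowInsecureRenegoServers) are the ones KB980436 documents for the SCHANNEL key, but verify the names and their exact semantics against the KB article for your Windows version before touching production; treat this as an assumption-laden illustration, not a tested deployment script:

    # Enforce SChannel "Strict Mode" per MS10-049 / KB980436 by setting the
    # SCHANNEL DWORDs to 0. VERIFY value names and semantics against the KB
    # article for your OS version first. Run as Administrator; SChannel only
    # picks up the change after a reboot.
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        # 0 = refuse insecure renegotiation with unpatched peers
        winreg.SetValueEx(key, "AllowInsecureRenegoClients", 0,
                          winreg.REG_DWORD, 0)
        winreg.SetValueEx(key, "AllowInsecureRenegoServers", 0,
                          winreg.REG_DWORD, 0)

    print("Strict mode values set; reboot for SChannel to apply them.")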

  • I tried reading that document and glazed over. Is there a site that gives you some practical procedures for making sure your site is secure? Because based on what I've read I only vaguely understand the problem and don't know how to determine if my site has it. I'd prefer not to find out the hard way...

    • I mentioned Qualys' SSL Labs nice test utility [ssllabs.com] in another comment.

      The fix is to ask your vendor for a patch for CVE-2009-3555 which implements RFC 5746 [ietf.org], Transport Layer Security (TLS) Renegotiation Indication Extension. Responsible vendors will have implemented support for RFC 5746 by now, so you may already be patched.

  • There is a simple explanation why major sites are not supporting RFC 5746. A lot of these sites are probably sitting behind F5 hardware. The SSL is probably implemented just on the F5. F5 hasn't implemented the RFC in any version of their software yet.

    http://support.f5.com/kb/en-us/solutions/public/10000/700/sol10737.html

    Smaller sites, of course, are probably just a single HTTP server running Apache. They or their hosting provider update their OpenSSL and Apache httpd versions. So the smaller sites get fixed

    • by DarkFyre ( 23233 )

      Exactly so. Citrix NetScalers have the same issue. Those people claiming this is due to incompetent, stupid or lazy coders or admins have merely never seen the business end of a website big enough to need hardware load balancers with SSL offload.

  • I see that both Apple and Microsoft fail the test.

    If their own websites fail, does that mean that we cannot expect patches to their server software to fix this?

    (My specialties do not include web servers.)

  • You are only vulnerable if you combine (for example) client-certificate authentication and unprotected-but-SSL-secured content on the same webserver process. Only then does the server need to do a renegotiation; otherwise you don't need to enable renegotiation at all (it's disabled by default in Apache). So I'm pretty sure very few sites actually need it, and the rest are perfectly fine/secure with what they have. (A sketch of the Apache pattern that does trigger renegotiation follows below.)
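
    To illustrate the pattern meant above, this is the sort of Apache mod_ssl setup that forces a mid-connection renegotiation (the directives are real mod_ssl; the path is illustrative): the vhost handshakes without a client certificate, but one subtree demands one, so Apache has to renegotiate the moment a browser first touches it.

    # vhost default: no client certificate requested at the initial handshake
    SSLVerifyClient none

    # this subtree requires a client cert, so Apache must renegotiate
    # mid-connection the first time a browser requests anything under it
    <Location /secure-area>
        SSLVerifyClient require
        SSLVerifyDepth 2
    </Location>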
    • Maybe! But is "maybe" good enough when dealing with security?

      All it takes is an IT person who is unaware that renegotiation was disabled for a reason, and now reenables it because of a new business need: the site is vulnerable again.

      I don't like the idea that customers have to trust website operators to run with a hotfix and will never reenable the vulnerability accidentally. It's better to know for sure that a site is fixed. Only patched sites can give us this certainty.

      The fewer unpatched servers there are

        It's not maybe; either you need it or you don't! And for the vast majority it's the latter.

        It's disabled by default, so it's not a matter of re-enabling it. Besides, there is no patch for stupidity.

        I could turn that around and say those who blindly always install the latest stuff probably don't know what they really need or do.
        New code often means new bugs.

        But in this case it only makes sense to warn if the server asks for renegotiation and only accepts an insecure renegotiation. Browsers can/should
          • If I understand correctly, your proposal is that when a browser connects to an unpatched server, the browser shouldn't worry until the server requests a renegotiation. You propose that it's fine to wait until this happens, and to warn in that situation.

          Unfortunately your plan doesn't work. The issue is that a browser cannot detect whether a MITM renegotiated with the server!

          Even if a server is configured to allow renegotiation, the ordinary browser visitor might never trigger it, and therefore the browser might never s

          • Quoting myself:

            This is why we must aim to get servers upgraded, so that browsers can start to warn about this risk when dealing with unpatched servers.

            Clarification: The reason browsers don't warn yet is that they would have to warn very often, because currently there are too few patched servers. According to numbers provided by Yngve Pettersen [opera.com], in May 2011 about 45% of all servers were still unpatched. A frequent warning would likely have the undesired effect of training users to ignore the warning.

          • Ah, yes, you're right about that! In that case it does indeed make sense to push for servers to be updated even if they are not directly impacted.
            Thanks for laying it out again!
  • I see that Microsoft is in the list. Of all people, you would think the ones pushing out the update could at least use the update themselves...!
