Hackers May Have Nabbed Over 200 SSL Certificates

CWmike writes "Hackers may have obtained more than 200 digital certificates from a Dutch company after breaking into its network, including ones for Mozilla, Yahoo and the Tor project — a considerably higher number than DigiNotar acknowledged earlier this week, when it said 'several dozen' certificates had been acquired by attackers. Among the certificates acquired in the mid-July hack of DigiNotar, Van de Looy's source said, were ones valid for mozilla.com, yahoo.com and torproject.org, the website of a system that lets people connect to the Web anonymously. Mozilla confirmed that a certificate for its add-on site had been obtained by the DigiNotar attackers. 'DigiNotar informed us that they issued fraudulent certs for addons.mozilla.org in July, and revoked them within a few days of issue,' Johnathan Nightingale, director of Firefox development, said Wednesday. Van de Looy's number is similar to the tally of certificates that Google has blacklisted in Chrome."
  • Boring (Score:5, Informative)

    by Mensa Babe ( 675349 ) * on Wednesday August 31, 2011 @06:13PM (#37270138) Homepage Journal
    All of the news about SSL security flaws is starting to get boring. We had a related scandal just yesterday [slashdot.org]. The problem with SSL (or TLS, actually) is that it uses X.509 with all of its problems, like the mixed scope of certification authorities. It's like using global variables in your program - it is never a good idea. I can only agree with Bruce Schneier, Dan Kaminsky and virtually all of the competent security experts that we have to completely abandon the inherently flawed security model of X.509 certificates and finally fully embrace DNSSEC as specified by the IETF. It is both stupid and irresponsible to have a trust system used to verify domain names in 2011 that is completely DNS-agnostic - one that was in fact designed in the 1980s, when people were still manually sending /etc/hosts files around! There could be a lot of better solutions than good old X.509, but in reality the only reasonable direction that we can choose today is to use the Domain Name System Security Extensions. Use 8.8.8.8 and 8.8.4.4 exclusively as your recursive resolvers. Configure your servers and clients. Define and use the RRSIG, DNSKEY, DS, NSEC, NSEC3 and NSEC3PARAM records in all of your zones. Use and verify them on every resolution. Educate people to do the same. This problem will not solve itself. We have to start acting.
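    As a concrete illustration of the verification step described above, here is a minimal sketch using the third-party dnspython library (the zone name is hypothetical; assume dnspython is installed). It fetches a zone's DNSKEY RRset through one of the resolvers mentioned above and checks the RRset against its own RRSIG. A full validator would also chase the DS chain up to the root trust anchor; this checks only one link.

        import dns.dnssec
        import dns.message
        import dns.name
        import dns.query
        import dns.rdataclass
        import dns.rdatatype

        zone = dns.name.from_text('example.com.')   # hypothetical signed zone
        resolver = '8.8.8.8'                        # resolver suggested in the comment

        # Ask for DNSKEY with DNSSEC data (DO bit) so the RRSIGs come back too;
        # TCP avoids truncation of the large DNSKEY answer.
        req = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
        resp = dns.query.tcp(req, resolver, timeout=5)

        dnskey = resp.get_rrset(resp.answer, zone, dns.rdataclass.IN,
                                dns.rdatatype.DNSKEY)
        rrsig = resp.get_rrset(resp.answer, zone, dns.rdataclass.IN,
                               dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)

        # Raises dns.dnssec.ValidationFailure if the signature does not verify.
        dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
        print('DNSKEY RRset verified')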
    • Re:Boring (Score:5, Interesting)

      by Gerald ( 9696 ) on Wednesday August 31, 2011 @06:27PM (#37270242) Homepage

      "If you think it's nice that you can remove the DigiNotar CA, imagine a world where you couldn't, and they knew you couldn't. That's DNSSEC." -- Moxie Marlinspike [twitter.com]

      • by 0123456 ( 636235 )

        "If you think it's nice that you can remove the DigiNotar CA, imagine a world where you couldn't, and they knew you couldn't. That's DNSSEC."

        Is it just me, or does this make no sense to anyone else either?

        • Re:Boring (Score:4, Informative)

          by the_enigma_1983 ( 742079 ) <enigma@@@strudel-hound...com> on Wednesday August 31, 2011 @07:05PM (#37270532) Homepage

          In response to the DigiNotar incident, some people are removing the root CA for DigiNotar from their computers. This way your computer will not trust _anything_ signed by DigiNotar.

          With DNSSEC, if the people in charge of your DNS have an incident (hackers, malpractice or otherwise) which changes the "certificate" (for lack of a better word) for your website, you are stuck. There is no "root" certificate that you can remove.

          • It's true that in DNSSEC it is potentially a huge logistical nightmare if a party entitled to 'bless' public keys as yours is persistently compromised (basically, it would take an unprecedented display of ongoing incompetence to remain open to hijack indefinitely).

            However, on the flip side, the relatively long lifetime of certificates incurs the nastiness of CRLs and some amount of faith that CRLs make it to the right place at the right time. If a DNSSEC authority is compromised and fixes it in an hour, all signs of the compromi

            • by Junta ( 36770 )

              I have to retract a bit: OCSP actually does a fair amount to address the issue of revocation, it's just that it isn't universal. Someone will have to explain how DNSSEC would be fundamentally any better than X.509 with ubiquitous OCSP.

              • DNSSEC has its place, even for key distribution. But it does not provide a basis for trust, because mere possession of a DNS domain does not mean you are trustworthy.

                The big win for DNSSEC is to distribute security policy in a scalable fashion. See my CAA and ESRV Internet drafts.

                Imagine that you are visiting slashdot. Wouldn't it be better to use SSL than to send traffic en clair if the site supports it? Wouldn't it be better to have encryption with a duff cert than no encryption at all? [*]

                DNSSEC allows a site to pu

                • But it does not provide a basis for trust, because mere possession of a DNS domain does not mean you are trustworthy.

                  It does not provide a cure for cancer either...

                  See my CAA and ESRV Internet drafts.

                  Maybe you should try submitting a paper to the Lancet too. With a little bit of luck you might catch a peer-reviewer off guard amazed to notice that cancer starts with CA.

                  No seriously, the "trust" in "trusted third party" has nothing to do with the trust that you put in the second party (i.e. the server or business with which you are communicating). It has everything to do with the trust you put in the third party (the certification agency), that it correctly does its job (only giving certificates to properly identified entities and appropriately securing their infrastructure so that hackers and spies can't just "help themselves"). The threat that SSL certificates are supposed to protect against is wiretapping, not rogue businesses. I'm sure all of those shady banks that failed in the 2008/2009 crisis had valid SSL certificates, and rightly so!

                  • No seriously, the "trust" in "trusted third party" has nothing to do with the trust that you put in the second party (i.e. the server or business with which you are communicating). It has everything to do with the trust you put in the third party (the certification agency), that it correctly does its job (only giving certificates to properly identified entities and appropriately securing their infrastructure so that hackers and spies can't just "help themselves"). The threat that SSL certificates are supposed to protect against is wiretapping, not rogue businesses. I'm sure all of those shady banks that failed in the 2008/2009 crisis had valid SSL certificates, and rightly so!

                    Let me explain. I have been working on Web security now for 19 years. I was present at the original meetings at which the SSL system was proposed, I convened several of the relevant meetings.

                    At no time was government wiretap a design consideration for SSL. NEVER. In fact, to claim this is totally ridiculous, since at the time we were fighting a running battle with the FBI and the NSA, who were trying to stop us using strong cryptography at all. The original SSL design was limited to 40 bits and was very cle

                    • Let me explain. I have been working on Web security now for 19 years.

                      How cute. I have been working on Web security for 19 years and one day.

                      Uh, you do realise who you're making fun of there, right? He was helping create the web while you were still in nappies.

                    • Uh, you do realise who you're making fun of there, right?

                      Yes, I do realize that I'm making fun of somebody who believes that the purpose of SSL is to give you warm and fuzzy feelings about online shopping, that CAs certify business' good standing and integrity and then acts all astonished when somebody dares to point out that SSL is supposed to protect against interception/manipulation of data in transit. And then bolsters his position by pointing out his presence at some conferences where SSL was a subject.

                      He was helping create the web while you were still in nappies.

                      Well, as long as he enjoyed the nibbles and the drinks.

                • by PybusJ ( 30549 )

                  Imagine that you are visiting slashdot. Wouldn't it be better to use SSL than to send traffic en clair if the site supports it? Wouldn't it be better to have encryption with a duff cert than no encryption at all?

                  Why do you think it would be better to use SSL with a 'duff' cert than an unencrypted transport? What does it protect against, given that most of those in a position to read your traffic would also be in a position to mount a MITM attack?

              • by makomk ( 752139 )

                Anyone who can launch a man-in-the-middle attack can block OCSP verification requests, and for non-EV certificates they can do so in a way that causes all browsers to accept the certificate as valid with no kind of warning whatsoever.

                • by Junta ( 36770 )

                  But from what I've read, I'd consider that a failing of how OCSP is *implemented*, not how it is architected. First pass was returning 'tryLater', which looked innocuous enough, and lo and behold it was treated as innocuous (I would think that should count as a validation error if implemented properly). Second time around it was shown that most browsers would even treat a 500 error as 'close enough'. In all these cases, the problem is not that OCSP is incapable, it's that the browsers erred on the side of

          • by qubezz ( 520511 )
            I just did that. If a certificate authority has been compromised and arbitrary signed certificates are being shown in the wild, it's probably best to deauthorize them and let them issue a new root CA and certificates to everyone once they have the person who leaked private info beaten within an inch of their life, and then 2.54cm more.
            Firefox: Tools -> Options -> Advanced pane -> Encryption tab. "View Certificates". "Authorities" tab. Select DigiNotar Root CA, Edit Trust, De-select check boxes.
        • No idea what he's talking about... a cursory Google search [google.com] reveals that provision has been made to revoke certificates, so presumably he's making some larger point about something else. ...Damned if I know what that is, though. But I do follow the Convergence project and am testing out the browser plug-in... If Moxie reads Slashdot and sees this: Would you care to expound on the quoted Tweet?

          • You can revoke your own certificate. You cannot revoke someone else's certificate. With a web browser you can remove someone else's root certificate which means that your trust problems with that person go away.
          • by Lennie ( 16154 )

            Moxie means that with the current CA system, you have several CAs. With DNSSEC you in a way have just one CA. So if one CA messes up, with the current system, you can remove that one CA. But with DNSSEC you can't remove that one CA, because it is the only one.

            It is all more complicated of course, but that is his message.

        • Re:Boring (Score:4, Insightful)

          by Zeinfeld ( 263942 ) on Wednesday August 31, 2011 @09:21PM (#37271470) Homepage
          Oh, I know what he is trying to say, but he has no clue what the threat model is.

          The threat model in this case is a well funded state actor that might well be facing a full on revolution within the next 12 months. It does not matter how convergence might perform, there is not going to be time to deploy it before we need to reinforce the CA system. [Yes I work for a CA]

          I think it most likely we will be seeing the Arab Spring spreading to Syria with the fall of Gaddafi. We are certainly going to be seeing a major ratcheting up of repressive measures in Syria and Iran. Iran knows that if Syria falls their regime will be the next to come under pressure. In many ways the Iranian regime is less stable than some that have already fallen. There are multiple power centers in the system. One of the ways the system can collapse is the Polish model: the people of Poland didn't have a revolution, they just voted the Communist party out of existence. If the Iranian regime ever allows a fair vote, the same will happen there.

          Anyone think that we will have DNSSEC deployed on a widespread scale in the next 12 months? I don't, and I am one of the biggest supporters of DNSSEC in the industry. DNSSEC is going to be the biggest new commercial opportunity for CAs since EV. Running DNSSEC is not trivial, and running it badly has bad consequences; the cost of outsourced management of DNSSEC is going to be much less than a DNS training course ($1000/day plus travel) but rather more than a DV SSL certificate ($6 for the cheapest).

          The other issue I see with Convergence is that it falls into the category of 'security schemes that work if we can trust everyone in a peer-to-peer network'.

          Wikipedia manages a fair degree of accuracy, but does anyone think that they really get up to 99% accurate? Until this year the CA system had had three major breaches, all of which were trapped and closed really quickly, plus about the same number of probes by security researchers kicking the tires. Until the DigiNotar incident, anyone who had revocation checking in place was 100% safe as far as we are aware; not a bad record, really.

          There is a population of about 1 million certs out there; even 200 bad ones would mean 99.98% accuracy.

          Running a CA is really boring work. Not something I would actually do personally. To check someone's business credentials etc takes some time and effort. It is definitely the sort of thing that you want a completer-finisher type to be doing. Definitely not someone like me and for 95% of slashdot readers, probably not someone like you either.

          The weak points in the SSL system are not the validation of certs by CAs; they are (in order) (1) the fact that SSL is optional, (2) the fact that the user is left to check for use of SSL, and (3) the fact that low assurance certificates that have a minimal degree of validation result in the padlock display.

          The weak point being exploited by Iran is the braindead fact that the Web requires users to provide their passwords to the Web site every time they log in. I proposed a mechanism in 1993 that does not require a CA at all and avoids that. Had RSA been unencumbered I would have adopted an approach similar to EKE that was stronger than DIGEST but again did not require a cert.

          Certs are designed to allow users to decide who they can share their credit card numbers with. That is a LOW degree of risk because the transaction is insured. Certs are not intended to tell people it is safe to share their password with a site because it is NEVER safe to do that.

          • by Lennie ( 16154 )

            1. Actually, revocation checking does not solve the problem; at least, if someone had the CA private key, they could generate certificates with the same IDs as existing ones. OCSP/revocation lists only check IDs, not names, which makes them not useful for all possible problems.

            2. I also think DNSSEC can be useful; it would be really helpful for the domain-owner to be able to make it clear that his website uses cert X and cert Y (which implies CA A and CA B). And not any other cert or CA. Deployment of DNSSEC is very slow though at the moment.
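            What this describes is essentially the idea behind the IETF's DANE/TLSA drafts: publish the certificate, or a digest of it, in the signed zone itself. A minimal Python sketch of the domain-owner side, with a hypothetical hostname, computing the digest that would be published:

                import hashlib
                import ssl

                # Fetch the site's current certificate and hash it (SHA-256 of the DER).
                pem = ssl.get_server_certificate(('example.com', 443))  # hypothetical host
                der = ssl.PEM_cert_to_DER_cert(pem)
                digest = hashlib.sha256(der).hexdigest()

                # The owner would publish this in the signed zone, along the lines of:
                #   _443._tcp.example.com. IN TLSA 3 0 1 <digest>
                print('_443._tcp.example.com. IN TLSA 3 0 1', digest)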

            • 1. Actually, revocation checking does not solve the problem; at least, if someone had the CA private key, they could generate certificates with the same IDs as existing ones. OCSP/revocation lists only check IDs, not names, which makes them not useful for all possible problems.

              Neither CRLs nor OCSP are intended to mitigate a CA private key breach.

              The only control in the system is to revoke the CA root, and that can be effected on Windows by issuing a new CTL (as happened to revoke the DigiNotar root) that drops the compromised root. The other browsers have similar mechanisms.

              2. I also think DNSSEC can be useful; it would be really helpful for the domain-owner to be able to make it clear that his website uses cert X and cert Y (which implies CA A and CA B). And not any other cert or CA. Deployment of DNSSEC is very slow though at the moment.

              The war could well be over by the time DNSSEC is deployed. The Iranian group has developed new attacks and dramatically escalated their sophistication. The time between attacks has been weeks

            • 3. Can you be a bit more specific about what you proposed in 1993?

              There is a Secure Remote Password protocol [wikipedia.org] that allows you to authenticate both the server to you and yourself to the server at the same time. There's also an RFC 5054 [ietf.org] aimed at incorporating it into TLS, unfortunately without any client support AFAIK.

        • With HTTPS, the people you trust are the few hundred CAs your browser is configured to trust. That's way too many, and your vulnerability to them is a logical OR -- any one CA fails and you are vulnerable. It's a fucked up system. However, at least you can remove DigiNotar from your browser's trusted list.

          With DNSSEC, you trust the root. They are your "trust anchor". And you get no choice about it.

          Each system is fucked up.

          This relates to the concept of "trust agility" that Marlinspike discussed. He wr

          • by Onymous Coward ( 97719 ) on Wednesday August 31, 2011 @09:59PM (#37271648) Homepage

            SSL And The Future Of Authenticity, Moxie Marlinspike [thoughtcrime.org]:

            Worse, far from providing increased trust agility, DNSSEC-based systems actually provide reduced trust agility. As unrealistic as it might be, I or a browser vendor do at least have the option of removing VeriSign from the trusted CA database, even if it would break authenticity with some large percentage of sites. With DNSSEC, there is no action that I or a browser vendor could take which would change the fact that VeriSign controls the .com TLD.

            If we sign up to trust these people, we're expecting them to willfully behave forever, without any incentives at all to keep them from misbehaving. The closer you look at this process, the more reminiscent it becomes. Sites create certificates, those certificates are signed by some marginal third party, and then clients have to accept those signatures without ever having the option to choose or revise who we trust. Sound familiar?

            The browser CA model is screwed up. DNSSEC is screwed up. What's the answer?

            I think Marlinspike was smart to start with defining the problem. And now, with Convergence, he's also trying to address it. Check it out. (And check out Perspectives. Perspectives is the project he based Convergence on.)

          • by bytesex ( 112972 )

            In the light of the realisation that security can never be absolute - can we not have some sort of 'trust-voting'? You pick a random set of trust mechanisms from a fixed pool available on your machine and the internet, and you make them decide whether or not something or someone can be trusted, and to what degree. You could even have a 'slide-bar' in the bottom of your browser.

            • I get the impression that how we choose is going to be one of the primary issues going forward.

              Currently, Perspectives allows you to specify which notary servers you'd like to use (and what percentage of them must agree (and for how long)).

              But how convenient is that? I imagine people might choose notary configurations much like how they subscribe to DNSBLs or choose Ad Block filter subscriptions.

      • by dgatwood ( 11270 )

        "If you think it's nice that you can remove the DigiNotar CA, imagine a world where you couldn't, and they knew you couldn't. That's DNSSEC." -- Moxie Marlinspike

        That's a fundamental mischaracterization of DNSSEC. You can't realistically remove individual DNS registrars now, but they all feed into registries, and you generally either trust those registries or you don't. If you don't, then you don't go to those TLDs. More to the point, this argument incorrectly tries to model the security of all websites

        • Re:Boring (Score:4, Interesting)

          by Zeinfeld ( 263942 ) on Wednesday August 31, 2011 @09:58PM (#37271642) Homepage
          Unfortunately the registrar system is rather less trustworthy than you imagine. We have not to date encountered an outright criminal CA. We do however know of several ICANN registrars that are run by criminal gangs.

          The back end security model of the DNS system is not at all good. While in theory a domain can be 'locked' there is no document that explains how locking is achieved at the various registry back ends. A domain that is not locked or one that is fraudulently unlocked is easily compromised.

          The part of the CA system that has been the target of recent attacks is the reseller networks and smaller CAs. These are exactly the same sort of company that runs a registrar. In fact many registrars are turning to CAs to run their DNSSEC infrastructure since the smaller ones do not have the technical ability to do it in house. In fact a typical registrar is a pure marketing organization with all technical functions outsourced.

          There are today about 20 active CAs and another 100 or so affiliates with separate brands. In contrast there are over a thousand ICANN registrars.

          Sure, there are some advantages to incorporating DNSSEC into the security model. But to improve security it should be an additional check, not a replacement. Today DNSSEC is an untried infrastructure; it is grafted onto a legacy infrastructure that is very old and complex, where security is an afterthought.

          The current breach is not even an SSL validation failure. The attacker obtained the certificate by bypassing the SSL validation system entirely and applying for an S/MIME certificate that did not have an EKU (which it should). That makes it a technical exploit rather than a validation issue. DNSSEC is a new code base and a very complicated one. Anyone who tells you that it is not going to have similar technical issues is a snake-oil salesman.

          • by dgatwood ( 11270 )

            We do however know of several ICANN registrars that are run by criminal gangs.

            Ultimately, it doesn't matter. Bugs notwithstanding, DNSSEC is still provably no less secure than CA-based certs because if you can compromise DNSSEC, you can also change the contact info on a domain and get any CA to give you a cert for the domain. Therefore, even if every CA were above board, you still cannot trust the CAs (even the best CAs) to protect you from someone compromising the domain itself.

            Therefore, your domain, by

      • by Morty ( 32057 )

        Both the current CA model and Moxie Marlinspike's proposed notary system already implicitly trust DNS registration data. When someone requests example.com, how does the CA (or notary) know that the requestor owns it? In a few cases, the CA (or notary) knows the requestor personally, but that's rare and doesn't scale to the Internet. In the normal case, the CA (or notary) has no information other than DNS. The CA (or notary) will either check that the requestor's contact data matches the DNS whois

        • by Morty ( 32057 )

          . . . and I appear to have misunderstood Moxie's system. It does not implicitly trust DNS at all. It does rely on SSL certs not to change, which I find odd, given that SSL certs tend to be replaced (either shortly before expiration or after a private key compromise).

          • by TheLink ( 130905 )

            It does rely on SSL certs not to change, which I find odd, given that SSL certs tend to be replaced

            Cert expiration has little to do with security. The main reason why SSL certs expire is so that CAs can make money (which many think they don't deserve to make ;) ).

            IMO having to issue and reinstall certs regularly causes more security problems.

            If a hacker can get hold of a webserver's SSL private keys, the hacker can likely get whatever else that webserver has or can access. Changing the SSL cert regularly won't help.

            Most ssh servers never have their keys changed. If one day they change, it usually means som

            • by Morty ( 32057 )

              Any new SSL cert validation scheme needs to interoperate with the CA-based SSL cert validation scheme. The existing SSL cert validation scheme does have cert expiration, needed or not. Your bank is not going to switch to a self-signed perpetual cert when the overwhelming majority of its customers are relying on CA-based schemes that will claim the bank's site is unsafe. So certs are going to keep changing. For a new cert validation scheme to succeed, it must be able to accommodate this during the transi

      • Re: (Score:2, Insightful)

        by QuantumRiff ( 120817 )

        add to /etc/hosts
        127.0.0.1 diginotar.nl

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Those are Google's nameservers.

      As long as we're distrusting authority you might want to mention that.

      Using DNS provided by an advertising firm isn't exactly the healthiest thing for your privacy. Maybe not now, but it will be a problem once those become the new 4.2.2.[1-3] and Google can monetize them.

      Anyone who cares about his privacy should never rely on a Google product.

    • by Anonymous Coward

      All of the news about SSL security flaws is starting to get boring. We had a related scandal just yesterday [slashdot.org]. The problem with SSL (or TLS, actually) is that it uses X.509 with all of its problems, like the mixed scope of certification authorities. It's like using global variables in your program - it is never a good idea. I can only agree with Bruce Schneier, Dan Kaminsky and virtually all of the competent security experts that we have to completely abandon the inherently flawed security model of X.509 certificates and finally fully embrace DNSSEC as specified by the IETF. It is both stupid and irresponsible to have a trust system used to verify domain names in 2011 that is completely DNS-agnostic - one that was in fact designed in the 1980s, when people were still manually sending /etc/hosts files around! There could be a lot of better solutions than good old X.509, but in reality the only reasonable direction that we can choose today is to use the Domain Name System Security Extensions. Use 8.8.8.8 and 8.8.4.4 exclusively as your recursive resolvers. Configure your servers and clients. Define and use the RRSIG, DNSKEY, DS, NSEC, NSEC3 and NSEC3PARAM records in all of your zones. Use and verify them on every resolution. Educate people to do the same. This problem will not solve itself. We have to start acting.

      Uh, right, because cryptographic operations are free and don't represent a DNS DOS opportunity, right? Oh wait...

      • All of the news about SSL security flaws is starting to get boring. We had a related scandal just yesterday [slashdot.org]. The problem with SSL (or TLS, actually) is that it uses X.509 with all of its problems, like the mixed scope of certification authorities. It's like using global variables in your program - it is never a good idea. I can only agree with Bruce Schneier, Dan Kaminsky and virtually all of the competent security experts that we have to completely abandon the inherently flawed security model of X.509 certificates and finally fully embrace DNSSEC as specified by the IETF. It is both stupid and irresponsible to have a trust system used to verify domain names in 2011 that is completely DNS-agnostic - one that was in fact designed in the 1980s, when people were still manually sending /etc/hosts files around! There could be a lot of better solutions than good old X.509, but in reality the only reasonable direction that we can choose today is to use the Domain Name System Security Extensions. Use 8.8.8.8 and 8.8.4.4 exclusively as your recursive resolvers. Configure your servers and clients. Define and use the RRSIG, DNSKEY, DS, NSEC, NSEC3 and NSEC3PARAM records in all of your zones. Use and verify them on every resolution. Educate people to do the same. This problem will not solve itself. We have to start acting.

        Uh, right, because cryptographic operations are free and don't represent a DNS DOS opportunity, right? Oh wait...

        What he said.

    • Screw that, I moved to Convergence [convergence.io].

    • While a complete re-work of the certificate signers is a good idea - and implementing DNSSEC widely is also a good idea - we're still going to need TLS, and that means certificates. DNSSEC doesn't provide any mechanism for encrypting the data stream after you've securely established you're talking to the right server, nor should it, that's not its job.

      So DNSSEC protects against DNS poisoning and some MITM attacks; but there are plenty of other ways such as fake gateway, passive listening of wifi traffic, AR

    • by Lennie ( 16154 )

      While I agree about DNSSEC as a possible solution, a lot of people probably don't, because DNSSEC is too much like a single-CA model, and many don't like that. I personally probably do trust the root to get it right, I just don't trust all the TLDs.

      Also, you mention 8.8.8.8 and 8.8.4.4, but they don't have support for some of the basic parts of DNSSEC yet.

      Which means if I have a working DNSSEC-setup on my end that can verify the DNSSEC key material I can't use them to check what Google gives me.

      So it is c

  • So, I still say that if trust is lost once, nothing that Diginotard touches can ever be trusted.
    • Except that most people don't know anything about certificates, and don't know why they should care.

      And adding/removing certificate authorities isn't an easy task you'd give to anyone.

      So unless the higher-ups (site owners / browser vendors) kill this company, there's nothing much the rest of us can do.

      • by sjames ( 1099 )

        It's quite easy to do, actually, but in this case the vendors are taking care of it. The update went out on debian-security today. IIRC, Mozilla is planning an update as well.

        • I fear we may be missing the point. Maybe.

          There are indicators that the number is a lot more than just 200 certs - some speculate that there were log wipes involved, which means we can expect a very, very large number.

          If that's true, it's wonderful that some browsers are blocking a bogus *.google.com cert. It'll be useless, however, if the attackers generated 50,000 OTHER *.google.com certs, along with multiple certs for world+dog.com.

          As to the impact of this CA's incompetence, it's pretty evil when you con

          • by sjames ( 1099 )

            You misunderstand: the updates aren't merely blacklisting a few known bad certs, they are invalidating any cert that ever has been or ever will be signed by this CA. Effectively it makes them not exist.

            Any legitimate holder of a cert signed by them will need to go get a new one from someone else.

            • No, I don't think I do.

              http://www.theregister.co.uk/2011/08/30/google_chrome_certificate_blacklist/ [theregister.co.uk]

              "...A side-by-side review comparing code contained in an upcoming version of Chrome increased the number of secure sockets layer certificates hardcoded in the browser's blacklist by 247. A comment accompanying the additions said: “Bad DigiNotar leaf certificates for non-Google sites.”

              Regardless of what's happening now, some reactions were to kill distinct certs. And likely some still are.

              • by sjames ( 1099 )

                Perhaps not chrome, but it is certainly true for Debian and Mozilla.

                If your vendor has let you down, delete the offending keys yourself and show friends how.
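        For scripts, at least, the same effect can be had without touching the browser. A minimal Python sketch, assuming a hand-maintained bundle file from which the DigiNotar root has been omitted (the file name and host are hypothetical):

            import socket
            import ssl

            # Trust only the CAs in our own bundle; a root absent from the file
            # (e.g. DigiNotar's) cannot anchor any chain, so its certs all fail.
            ctx = ssl.create_default_context(cafile='trusted_cas_minus_diginotar.pem')

            with socket.create_connection(('www.example.com', 443)) as sock:
                with ctx.wrap_socket(sock, server_hostname='www.example.com') as tls:
                    print(tls.getpeercert()['subject'])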

  • by GameboyRMH ( 1153867 ) <<moc.liamg> <ta> <hmryobemag>> on Wednesday August 31, 2011 @06:25PM (#37270230) Journal

    CAs are done, stick a fork in 'em. Just generate your own certs. A CA cert only increases your chance of getting MITM'ed (since you don't have sole control over distribution), and without a big store of certs in one place, they'll be harder to steal.

    Fuck CAs, install Convergence / Perspectives, call it a day.
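    For the "generate your own certs" part, here is a minimal sketch using the third-party cryptography package (the key size, lifetime and hostname are arbitrary illustrative choices, not recommendations):

        import datetime

        from cryptography import x509
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import rsa
        from cryptography.x509.oid import NameOID

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, 'example.com')])

        cert = (
            x509.CertificateBuilder()
            .subject_name(name)          # self-signed: subject and issuer match
            .issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
            .sign(key, hashes.SHA256())
        )

        # Write the cert and (unencrypted, for brevity) key to one PEM file.
        with open('server.pem', 'wb') as f:
            f.write(cert.public_bytes(serialization.Encoding.PEM))
            f.write(key.private_bytes(
                serialization.Encoding.PEM,
                serialization.PrivateFormat.TraditionalOpenSSL,
                serialization.NoEncryption(),
            ))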

    • by Karl Cocknozzle ( 514413 ) <kcocknozzle AT hotmail DOT com> on Wednesday August 31, 2011 @07:29PM (#37270702) Homepage

      Couldn't agree more. Links for the lazy: Convergence [convergence.io] and Perspectives [perspectives-project.org].

      Enjoy.

      • BTW, after giving Convergence a try, I still prefer Perspectives. Convergence's anonymization feature is nice but it uses a mechanism that installs a local CA, causing CertPatrol to go nuts, and it doesn't offer anywhere near the level of customization of Perspectives.

      • by seyyah ( 986027 )

        It's not Convergence. It's "Convergence Beta". And I'm not interested in beta software protecting my security.

        Wait, you're saying that they use "Beta" to market their product because it sounds cool? Yeah, not interested in that either.

    • No one can "steal" your existing certificate unless they also steal your web server's private key. A CA can issue a fraudulent certificate for your site, but anyone can generate a self-signed certificate for your site as well. How does a CA make MITM attacks more likely? How many users visit your web site for the first time on an untrusted wireless network or in a country where the government may want to feed them a fake certificate anyway? Propagation and widespread trust of self-signed certs is what w
      • No one can "steal" your existing certificate unless they also steal your web server's private key. A CA can issue a fraudulent certificate for your site, but anyone can generate a self-signed certificate for your site as well. How does a CA make MITM attacks more likely?

        Because the CA issues certs that the browser trusts. That fraudulent cert will work A-OK and give no warnings to users. It's as good as the one already installed on the web server. Because the CA will cave to government requests and is a nice juicy target for black hats, this cert is more likely to be issued fraudulently than if the keys are stored on a flash drive in your desk drawer.

        How many users visit your web site for the first time on an untrusted wireless network or in a country where the government may want to feed them a fake certificate anyway?

        AKA the "prayer method" - pray you don't get MITMed the first time. It would be very shortsighted, at best, to rely on this.

        • This is what the network notary system (Perspectives / Convergence plugin) is for, take a look at it. When you visit a site, it compares the cert your browser receives with what other computers around the world are seeing at the same time.

          From the Perspectives project: Perspectives is a new approach to helping computers communicate securely on the Internet. With Perspectives, public “network notary” servers regularly monitor the SSL certificates used by 100,000s+ websites to help your browse
          • In the future notaries will run on a darknet-like system, making IP-specific interception and identification impossible. Convergence already offers this capability.
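          The core of the notary check reduces to comparing fingerprints across vantage points. A toy Python sketch (the host is hypothetical, and the "reports" would really come from independent notary servers over the network):

              import hashlib
              import ssl

              def cert_fingerprint(host, port=443):
                  # What *this* vantage point sees when it connects to the site.
                  pem = ssl.get_server_certificate((host, port))
                  return hashlib.sha1(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

              local = cert_fingerprint('www.example.com')
              notary_reports = [local, local, 'deadbeef' * 5]  # stand-in observations

              agreement = sum(fp == local for fp in notary_reports) / len(notary_reports)
              print('notary agreement: %.0f%%' % (agreement * 100))
              # A client would accept the cert only above a configured threshold.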

  • ...wouldn't the certs be useless without the associated private keys?
  • Seriously, I wonder what percentage of software actually checks the CRLs. It's extra steps that are annoying to code, and I bet a lot of programmers just skipped it.

    So even though these certs have been or will be revoked, that doesn't mean you're safe. If the programmers of the software you're using were lazy and didn't code the extra steps to get the CRLs (or maybe the CRL itself is inaccessible for some reason), then you're screwed.

    This is one of those things that programmers would have never considered unt
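    For reference, the check itself is not much code. A minimal sketch with the third-party cryptography package (the CRL URL and serial number are made up; a real client takes both from the certificate being checked):

        import urllib.request

        from cryptography import x509

        # Fetch the CRL the CA publishes (the URL normally comes from the
        # cert's CRL distribution points extension).
        data = urllib.request.urlopen('http://crl.example-ca.test/ca.crl').read()
        crl = x509.load_der_x509_crl(data)

        serial = 0x0123456789ABCDEF   # serial of the cert under test (made up)
        if crl.get_revoked_certificate_by_serial_number(serial) is not None:
            print('certificate is revoked')
        else:
            print('not on this CRL (an unreachable CRL is not the same as valid!)')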

    • I don't know what percentage does, but you can check if your software does by attempting to connect to this site: https://test-sspev.verisign.com:2443/test-SSPEV-revoked-verisign.html [verisign.com]

    • Most check CRLs and OCSP.

      The problem is what they do when they can't reach that data. All the browsers out there now simply fail silently and go to the site anyway.

      For some reason this is seen as a problem with CAs and not the broken browsers. But from the browser providers' perspective, 99% of their customers are really interested in getting to sites reliably and without fuss, and less than 1% are dissidents whose lives might be threatened.

      This is not the fault of the guy who writes the code. They only
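      To make the soft-fail point concrete, here is a sketch of a hard-failing OCSP check using the third-party cryptography package (file names and the responder URL are hypothetical; a real client reads the responder URL from the cert's AIA extension):

          import urllib.request

          from cryptography import x509
          from cryptography.hazmat.primitives import hashes
          from cryptography.hazmat.primitives.serialization import Encoding
          from cryptography.x509 import ocsp

          cert = x509.load_pem_x509_certificate(open('site.pem', 'rb').read())
          issuer = x509.load_pem_x509_certificate(open('issuer.pem', 'rb').read())

          req = (ocsp.OCSPRequestBuilder()
                 .add_certificate(cert, issuer, hashes.SHA1())
                 .build())

          try:
              raw = urllib.request.urlopen(urllib.request.Request(
                  'http://ocsp.example-ca.test',
                  data=req.public_bytes(Encoding.DER),
                  headers={'Content-Type': 'application/ocsp-request'},
              )).read()
          except OSError:
              # This is the choice browsers get wrong: unreachable means reject.
              raise SystemExit('OCSP responder unreachable: hard fail')

          resp = ocsp.load_der_ocsp_response(raw)
          print(resp.certificate_status)   # GOOD, REVOKED or UNKNOWN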

  • by subreality ( 157447 ) on Wednesday August 31, 2011 @06:38PM (#37270336)

    How long until we collectively admit that centralized SSL certs are actually causing more problems than they solve?

    The SSH model works great: connect to a site once; verify the fingerprint once if you consider a MITM to be a reasonable concern; cache the key and know that forever after you're connecting to the same site as you did the first time. That narrows the attack vector to active MITM attacks where Mallory can intercept your first connection (if they want to actually get your data) and every connection thereafter (if they don't want to be noticed). It makes widespread surveillance impossible (they'd be noticed) and targeted attacks very unlikely to succeed.

    You can even add a CA to that model: have the first-time dialog be "[ nobody | <CA>] certifies that <fingerprint> is <domain>. Does that sound OK to you? (looks good) (hell no)". In other words, just make self-signed certs less scary, and CA-signed certs more scary... which would accurately reflect the actual level of security you're getting: both are probably OK, and one is a little more certified but certainly not golden. Only pop up the BIG SCARY WARNING when the cert changes, even if it's signed by the CA.
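    A trust-on-first-use cache of the kind described is only a few lines. A toy Python sketch (cache path and host hypothetical, SHA-256 an arbitrary choice):

        import hashlib
        import json
        import os
        import ssl

        CACHE = os.path.expanduser('~/.tofu_pins.json')

        def check(host, port=443):
            pem = ssl.get_server_certificate((host, port))
            fp = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()
            pins = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
            if host not in pins:
                # First visit: this is where the one-time dialog would go.
                pins[host] = fp
                with open(CACHE, 'w') as f:
                    json.dump(pins, f)
            elif pins[host] != fp:
                # This is where the BIG SCARY WARNING belongs.
                raise RuntimeError('certificate changed for %s: possible MITM' % host)
            return fp

        check('www.example.com')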

    • by J0nne ( 924579 )

      Except in the case of countries like Iran and China, where they could easily mount a permanent MITM attack on webmail providers, covering the first and all subsequent connections. I'm not saying the current system is perfect or even good, but your alternative is worse in many respects.

      • 1) You have an optional CA. Sites like Gmail will get a cert. That (usually) covers the initial connection.

        2) Pop a huge warning if the cert changes, even if the CA signs the new one. This is the really important part.

        3) Even if all of the network is subverted AND all of the CAs are subverted, the MITM is still detected when people VPN to another country, or dial out, or travel, or the fingerprints are manually verified.... you can't guarantee the availability of encryption, but you can always detect wid

        • 2) Pop a huge warning if the cert changes, even if the CA signs the new one. This is the really important part.

          There are firefox extensions which do just that: Certificate Patrol [mozilla.org]. If a certificate changes without reason (i.e. while still being far from expiration), a warning pops up.

          However, the problem with this approach is again the stupidity of the webmail operators and ignorance of how certificates work.

          Some large webmail providers (yahoo, google, ...) who have load-balanced banks of servers sometimes have half of their servers with one certificate and the other half with another (possibly even signed by another CA).

      • The only way this would go unnoticed is if they had the MITM already in place before hotmail or gmail existed.

        ... because otherwise the early adopters will suddenly see that the certificate changed at the moment they introduced this surveillance.

        And because it is impossible to probe remotely whether a browser already has the certificate cached or not, these countries can't even selectively switch on the MITM for the "new" users.

    • by Junta ( 36770 )

      How long until we collectively admit that centralized SSL certs are actually causing more problems than they solve?

      A bit harsh, but the model has some issues due to obsolete objectives.

      The SSH model works great

      Only if you habitually visit the same place does it provide any significant reduction in risk, so if you see a product you want on an as-yet unvisited storefront, you have zero protection against MITM. Maybe they can't keep it up for days, but a single visit is sufficient to mess you up. If the server's key is compromised? You are pretty well screwed, as not fixing the problem *looks* more secure than if they fixed it (e.g. the big deb

      • by Junta ( 36770 )

        I have to add that OCSP really does a lot to address the X.509 issues...

      • Only if you habitually visit the same place does it provide any significant reduction in risk, so if you see a product you want on an as-yet unvisited storefront, you have zero protection against MITM.

        Your home ISP isn't going to MITM you. They want to keep you as a customer. The coffee shop you visit isn't going to. They don't want to get prosecuted for credit card fraud. Same thing with a hotel network.

        I'd expect it from random TOR exit nodes, but why would you use an anonymity network to shop with a credit card?

        Passive eavesdropping is a real concern, but what's an example of a network where people would engage in active MITM attacks *hoping* that someone will try to send secret information on the

    • /. ate my angle brackets. Here's what I meant:

      "[ nobody | <CA>] certifies that <fingerprint> is <domain>. Does that sound OK to you? (looks good) (hell no)"

    • The SSH model works great: connect to a site once; verify the fingerprint once if you consider a MITM to be a reasonable concern; cache the key and know that forever after you're connecting to the same site as you did the first time.

      You can (theoretically) do that too with SSL. Connect to the site, and you get a certificate warning. Instead of blindly accepting the certificate, read the SHA1 fingerprint (which is displayed in the dialog box asking for acceptance), and call the helpdesk of the business with which you're interacting to verify that it is the correct one. After accepting the certificate once, your browser now has it in its cache, and it knows forever (or rather: until expiration) that you're connecting to the same site as you did the first time.

    • > The SSH model works great: connect to a site once; verify the
      > fingerprint once if you consider a MITM to be a reasonable
      > concern; cache the key and know that forever after you're
      > connecting to the same site as you did the first time.

      It works great for sites with one up to a few certs. There are distributed (Akamai-style) sites out there that will present you a different cert with almost every page refresh! PITA... Normally hidden, since your browser will "trust" all of them anyway, but

  • by 93 Escort Wagon ( 326346 ) on Wednesday August 31, 2011 @06:42PM (#37270384)

    Let's say you were hoping to insinuate yourself unnoticed into traffic destined for a particular site - for the sake of argument, let's use the Tor project. What would be the best way to do this without someone suspecting you had a specific target in mind? Stealing a couple hundred certs all at once, only one of which is related to your project, comes immediately to mind.

    It's not like similar approaches haven't been taken before, even in the non-digital world. I seem to recall that was one explanation John Muhammad gave for the DC Sniper attacks - he really wanted to kill his ex-wife, and hoped killing a bunch of other people would keep suspicion from him.

  • by account_deleted ( 4530225 ) on Wednesday August 31, 2011 @06:55PM (#37270462)
    Comment removed based on user account deletion
    • Just delete DigiNotar from your trusted CAs. Honestly I was just going to wait for the revocation lists like everybody else but seeing the scope of this now I think they've earned the right to be fired from the Internet forever.
      • This is still a manual process, which is great now, a month after 200 certificates were actively used in the wild, and also great for those who read Slashdot. I've removed it too, but what about the rest of the family, who don't read Slashdot?

    • by BZ ( 40346 )

      Yes, this is why browsers are also shipping updates with certs explicitly distrusted.... and why the fact that DigiNotar did not tell browsers about the problem a month and a half ago when it happened is such a huge issue.

  • That's not several dozen, that's a few gross.
  • by trawg ( 308495 ) on Wednesday August 31, 2011 @09:43PM (#37271572) Homepage

    Can't see anyone having posted this, but Mozilla have instructions [mozilla.com] on how to remove DigiNotar as a trusted CA in your Firefox. I'm sure other browsers have similar processes.

    I also note they've just released [mozilla.com] a new Firefox (and Thunderbird) version that has removed the CA entirely - good response:

    Because the extent of the mis-issuance is not clear, we are releasing new versions of Firefox for desktop (3.6.21, 6.0.1, 7, 8, and 9) and mobile (6.0.1, 7, 8, and 9), Thunderbird (3.1.13, and 6.0.1) and SeaMonkey (2.3.2) shortly that will revoke trust in the DigiNotar root and protect users from this attack. We encourage all users to keep their software up-to-date by regularly applying security updates. Users can also manually disable the DigiNotar root through the Firefox preferences.

  • May the next time someone in that company tries to "google" something be a very unpleasant experience.
    Google death sentence.

  • Over 200 can be expressed as a multiple of 12! In fact, more than 200 can potentially be 17 dozen!
  • by Torodung ( 31985 ) on Wednesday August 31, 2011 @11:00PM (#37271956) Journal

    http://www.mozilla.org/en-US/firefox/6.0.1/releasenotes/ [mozilla.org]

    Expand "what's new" to see the change.

    Update immediately if this is worrisome to you.

    These certs were revoked yesterday in an out-of-band patch.

  • The problem here is that any CA that is in my list of root certificates is able to create a valid certificate for say www.google.com, and that some CA can be tricked into giving someone other than Google such a certificate. That is not enough, though: the attacker also has to redirect traffic that should go to www.google.com to their own server. The whole thing is mostly dangerous because _many_ people go to www.google.com in the first place; the same attack against say my homepage would have very little potenti
