Security Google

Ask Slashdot: Has Gmail's SSL Certificate Changed, How Would We Know? 233

Posted by samzenpus
from the protect-ya-neck dept.
An anonymous reader writes "Recent reports from around the net suggest that the SSL certificate chain for Gmail has either changed this week or been widely compromised. Even less-than-obvious places to look for information, such as Google's Online Security Blog, are silent. The problem isn't specific to Gmail, of course, which leads me to ask: What is the canonically accepted out-of-band means by which a new SSL certificate's fingerprint may be communicated and/or verified by end users?"
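For readers who want to attempt the comparison the submitter asks about, here is one possible sketch, using only Python's standard `ssl`, `socket`, and `hashlib` modules, of pulling a server's leaf certificate and computing a SHA-256 fingerprint that can be checked against a value obtained out-of-band (the `gmail.com` host is just an example):

```python
import hashlib
import socket
import ssl

def fingerprint_sha256(der_bytes: bytes) -> str:
    """Return the colon-separated SHA-256 fingerprint of a DER-encoded cert."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def fetch_fingerprint(host: str, port: int = 443) -> str:
    """Connect to a live server and fingerprint the leaf certificate it serves."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return fingerprint_sha256(tls.getpeercert(binary_form=True))

# Compare against a fingerprint obtained over a separate, trusted channel:
# print(fetch_fingerprint("gmail.com"))
```

Of course, whatever channel supplies the reference fingerprint must itself be trustworthy, which is exactly the hard part the submitter is asking about.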
  • Revocation (Score:4, Insightful)

    by Anonymous Coward on Friday September 27, 2013 @01:52AM (#44967853)

    Google can easily revoke certificates. They can even install whatever they want on my computer as a replacement for Google Chrome with their automatic background updates. Don't worry about it, they own your computer and will take care of it for you.

    • by ron_ivi (607351) <sdotno AT cheapcomplexdevices DOT com> on Friday September 27, 2013 @03:40AM (#44968221)
      I wonder why HTTPS stuff can't require *two* certificates that validate. That way unless both CAs are compromised, the traffic's safe.

      It's just like any other single point of failure in your network. You probably work with two telecom companies to make sure your website and/or company has network access. Why shouldn't you do the same for certificates? Buy one from a US CA, one from a Russian one, and one from a Chinese one, and if browsers could check that *all* of them (or two out of the three, whatever) validate, then unless the CAs collude you should be pretty safe.

      Even better if one of those can be a self-signed one. You can even exchange those keys over normal boring https, and then unless your commercial CA was already hacked at the time you distribute your self-signed one, your self-signed one will protect against your commercial CA being hacked in the future.
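The "two out of three CAs" idea above can be sketched as a simple quorum check; the verifier functions here are hypothetical stand-ins for independent chain validations against different root stores:

```python
def enough_cas_agree(cert_der, verifiers, required=2):
    """Quorum check for the hypothetical multi-CA scheme: each verifier
    independently validates the certificate chain (e.g. against a US,
    Russian, or Chinese root store). Accept only if at least `required`
    of them pass."""
    passed = sum(1 for verify in verifiers if verify(cert_der))
    return passed >= required
```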

      • The problem is that there's no well-supported way, for example in the DNS, for a server to say which certificate it will use. If HTTPS required two certificates, then that would just mean that you'd need to compromise two CAs (or one CA and get them to issue two certs), which given what we already know about the NSA program, has already happened. This is something that people like Ben Laurie at Google are working on with Certificate Transparency: trying to ensure that there is a recorded and verifiable ch
        • "DANE" is the in-progress RFC for storing SSL certificates inside DNSSEC.

          The main advantage is that even if ".com" is compromised, the attacker still can't sign DNSSEC responses for ".biz". So damage from a bad actor is a bit more localized.
      • by Ash-Fox (726320)

        I wonder why HTTPS stuff can't require *two* certificates that validate.

        Actually it can; it's called a client-side certificate, and I have done it in Apache. It requires that you generate a certificate for the client, who then has to import it into their browser.

        Even better if one of those can be a self-signed one.

        You can choose whatever certificate trust chain you want.

        • Hmmm. Client certs are used for authentication of the client to the server (the server also has to import the client's public cert). If there is a successful MITM attack, the client will simply authenticate to the wrong server.

      • Don't let the fool trolls get to you - you have a good post.

        For instance, a trivial browser-side implementation could simply check whether bytes flowing in on an SSL connection (say, to https://abc.com:443/) matched bytes coming in through a secondary persistent HTTPS connection (say, https://verify.abc.com:443/), and that both HTTPS connections use different CAs.

        Sure, this could be defeated if abc.com is compromised. However, an MITM attack would require two separate CA authorities to be fooled or comprom

      • by mlts (1038732) *

        We could assign CAs a trust factor and have multiple CAs in different geographic locations (preferably different countries) sign a key. However, this turns SSL from a PKI into a WoT (web of trust.)

        Without a doubt, a well maintained web of trust is more secure than the SSL/TLS principle of "anything signed by these root certs is 100% trustworthy." However, there is the user issue. Joe Sixpack wants a green lock icon. He doesn't want to worry that CA #1 is more trustworthy than CA #2. He just wants to en

      • by Shatrat (855151)

        You probably work with two telecom companies to make sure your website and/or company has network access

        As someone who works in telecom, this is not as good an idea as you think it is. You're almost always better off buying diverse/protected service from one company than trying to use 'carrier diversity' to save your butt. Very often both telecom companies will be using the same fiber, or leasing transport capacity from one another. Example: You buy an unprotected Ethernet Private Line from Level 3, and then turn around and buy another unprotected EPL from XO. They're both unprotected linear transport, bu

      • by mellon (7048) on Friday September 27, 2013 @01:17PM (#44972629) Homepage

        In fact something like this exists and may even be supported by your browser, but isn't in wide deployment at the moment. The way it works is that example.com goes out and gets an SSL cert for example.com, signed by some reasonable CA. example.com also configures dnssec for their domain. When you go to https://example.com/ [example.com] your web browser does a DNS query against _443._tcp.example.com for TLSA records. If it finds any, it validates the cert it gets via TLS against the TLSA record; the TLSA record can specify what certs are valid, or it can specify what certificate authority key (trust anchor) is valid, and there are a few other modes. The basic principle is that you now have two paths for validating the TLS cert: the CA _and_ DNSSEC. If both validate, use the cert. If either fails, don't use it. You can read all about it here [ietf.org].

        In addition, TLS provides for certificate revocation, so if someone generates a bogus cert and it is _detected_, the cert can be revoked, or if a key is compromised, the cert for that key can be revoked.

        These mechanisms seem more likely to be useful than just requiring certs from two different CAs.
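The TLSA matching step described above can be sketched roughly as follows, per RFC 6698's matching types (0 = exact bytes, 1 = SHA-256, 2 = SHA-512); selector handling (full certificate vs. SubjectPublicKeyInfo) and the DNSSEC lookup itself are omitted for brevity:

```python
import hashlib

def tlsa_matches(cert_der: bytes, assoc_data_hex: str, matching_type: int = 1) -> bool:
    """Compare a DER-encoded certificate against TLSA association data.
    RFC 6698 matching types: 0 = exact bytes, 1 = SHA-256, 2 = SHA-512."""
    if matching_type == 0:
        return cert_der.hex() == assoc_data_hex.lower()
    algo = {1: hashlib.sha256, 2: hashlib.sha512}[matching_type]
    return algo(cert_der).hexdigest() == assoc_data_hex.lower()
```

As the comment says, this only buys you a second validation path if the DNS answer itself arrived via validated DNSSEC.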

    • by houghi (78078) on Friday September 27, 2013 @06:11AM (#44968787)

      Google Chrome with their automatic background updates.

      That is why I use Firefox. I installed it when it was version 7 and it still is version ... (Checks version) ... How did it get to this version 23?

      • by bsane (148894)

        Firefox checks and updates when you run it. On OSX Chrome creates a launch service that keeps a daemon running that constantly communicates back with goog- even if you never open chrome again. Delete chrome? Background daemon continues to talk to goog- it doesn't go away until you use the CLI to remove it.

      • by hodet (620484)

        23, pfffft. Welcome to last week.

      • Re: (Score:3, Informative)

        by Mr. Slippery (47854)

        I installed it when it was version 7 and it still is version ... (Checks version) ... How did it get to this version 23?

        In case anyone doesn't know, you can turn that off. Also, I advise getting on the "extended service release" (ESR) track.

  • by supersat (639745) on Friday September 27, 2013 @01:56AM (#44967865)

    Back in May, Google announced that they would be making changes to their SSL/TLS certificates in the coming months: http://googleonlinesecurity.blogspot.com/2013/05/changes-to-our-ssl-certificates.html [blogspot.com]

    If you use Chrome, Google's SSL certificates are pinned, so that gives you some additional assurance.

  • Expiry (Score:3, Informative)

    by jamesh (87723) on Friday September 27, 2013 @02:05AM (#44967895)

    Was the old cert due to expire? I have thought before that it would be nice if my browser etc gave me a warning like "Certificate has changed but wasn't due to expire for another 3 months". This still gives the bad guys a window where a subverted certificate could be slipped in without notice, but it closes the window a bit.

    Also is it common to revoke the old certificate when replacing it, even if there is no reason to suspect the old certificate was compromised? If so that would be another warning that could be presented
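The warning jamesh proposes — "certificate changed but wasn't due to expire" — could be sketched as a small pin store like the one below; the 90-day slack window is an arbitrary illustration, not any standard:

```python
import time

pins = {}  # hostname -> (fingerprint, not_after_epoch), recorded on first visit

def check_pin(host, fingerprint, not_after, now=None, slack=90 * 24 * 3600):
    """Flag a certificate change that happens long before the pinned
    certificate was due to expire (the 90-day slack is arbitrary)."""
    now = time.time() if now is None else now
    if host not in pins:
        pins[host] = (fingerprint, not_after)
        return "first-seen"
    old_fp, old_not_after = pins[host]
    if fingerprint == old_fp:
        return "unchanged"
    pins[host] = (fingerprint, not_after)  # accept the new cert, but maybe warn
    if old_not_after - now > slack:
        days = int((old_not_after - now) // 86400)
        return "warn: cert changed %d days before expiry" % days
    return "changed-near-expiry"
```

As the comment notes, this narrows the window rather than closing it: an attacker who strikes near the legitimate expiry date slips through.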

    • Re:Expiry (Score:4, Informative)

      by Anonymous Coward on Friday September 27, 2013 @03:59AM (#44968303)

      With Firefox, I use the Certificate Patrol add-on to detect when certificates are changed. At least then you know when the certificate has changed.

      • by pe1chl (90186)

        Unfortunately it issues warnings all the time, especially for Google and Twitter.
        They occur so often that you (or at least I) get into the habit of accepting them without further checking, just to be able to keep working.
        This largely defeats the usefulness of the add-on.

        It appears that Google and Twitter use different certificates on different servers around the world, and you get those warnings when
        the load-balancing mechanisms direct you to a different server than the one you used last time (for the same domain name).
        Either

  • by JWSmythe (446288) <jwsmythe&jwsmythe,com> on Friday September 27, 2013 @02:11AM (#44967921) Homepage Journal

        That made me wonder about something at work recently. All the machines at work are owned by the organization. It would be trivial for them to add their own trusted signing authority, so they could MITM every SSL web site. It wouldn't be terribly hard to auto-generate "valid" SSL certs, and have it tagged as whoever you want the signing authority to be. All they'd have to do is add their own cert, in this case named "GeoTrust Global CA", and they'd have perfect control. To do it perfectly, they'd just need to query the site you're going to, and match up the signer's CN and sign the new fake cert, and you wouldn't know the difference. Who tracks the fingerprint of every cert for every site they go to? Well, I'm sure in this crowd, a few do.

        It's good for network security, as they can pump everything out through a common proxy (or cluster of proxies) and inspect all the traffic for malicious intent (malware inbound, or organization secrets outbound). It's not good for privacy, if you were to visit your bank, gmail, etc.

        As far as that goes, there are an awful lot of "trusted" signing authorities that come with any browser. I know we should probably trust them, because the authors of the browsers trust them. There's no really good reason to do so, other than if you don't, all SSL sites will warn that they may not be trustworthy.

        I was considering a while back how *I* would become my own signing authority, trusted by all browsers. I didn't find a good answer. An intermediary cert would solve it, but I didn't find out how to accomplish that. Like, who do I throw money at to get one? Getting added to all browsers would be another, even larger headache.

        My thought on it was, technically it isn't hard to do. I could spend a day writing a very nice site, that would verify ownership and make whatever cert for the domain. Why can't I (or whoever) offer $5/yr, or $50/10yr single domain or wildcard cert? The code and infrastructure isn't very heavy.

        Needless to say, since you haven't seen JWSmythe's Cheap Certs available, it never happened.
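One crude way to notice the interception JWSmythe describes is to compare the root anchoring the chain you actually received against the roots the site is publicly known to use. This is only a naive sketch (a serious deployment would use pinning or external notaries), and the fingerprint values are made up for illustration:

```python
def chain_looks_intercepted(observed_root_fp, known_public_roots):
    """Naive check: if the root anchoring the served chain isn't one of the
    roots this site is publicly known to chain to, a locally installed
    interception CA may have re-signed the traffic. A smart proxy that
    copies the real issuer's CN (as described above) still shows up here,
    because its forged root has a different fingerprint."""
    return observed_root_fp not in known_public_roots
```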

    • Forge the CA, and the signing CA's thumbprint on the cert won't match. That would be sort of obvious and easy to see, sort of like how these guys are spotting this now.

      It's possible this IS an NSA (or whoever) MITM, but I sort of doubt it, precisely because of how easy it is to spot.

      • by smash (1351)
        Why forge the CA when you can just buy them?
      • Re: (Score:3, Interesting)

        by BitZtream (692029)

        Forge the CA so you can forge the certificates to do a man in the middle; it's trivial. I've done it on multiple occasions at work in order to facilitate sniffing passwords to migrate users to a new service (say, from Office 365 to Gmail) without getting everyone's passwords by asking.

        You only know the thumbprint doesn't match if you check it and manually record it. Your browser's checks are being processed correctly via the forged certs.

        I sign our MITM certs with our domain CA; it's clear that we're

    • If you look hard you can detect that very easily. Both the additional CA on your computer and patterns in the generated certificates.

      Yes you do have to be looking for it, but once you look it is dead simple.

      • I operate a webserver, and I'd like to protect my users against SSL proxying. At the moment, all I can do is tell them to check my key's fingerprint against what the browser shows. But I'd really like to do this in JS. Is there any way to use JS to get the fingerprint string (that I can see by clicking on the padlock icon)? Then I could send that back to the server (from JS), and check if it's been tampered.

        (A really effective evil proxy would be able to defeat this, but most corporate proxys aren't going

          • No, JS cannot get any SSL information.

            Perhaps hash the page you send and check the hash in JS? If they are sending sensitive information, then use a strong JS encryption library, which adds an extra layer in the browser.

          Again dead simple to break if someone is targeting you specifically, but it would have to be a custom hack for your site.

          • Hashing it won't help - I want to inform the user that their data is being intercepted, not that it's being corrupted.

            • by Albanach (527650)

              Surely if it were being intercepted, it could also be modified? If it can be modified, then they can change the JavaScript so as to never report a problem. Similarly, for a manual check, they can simply change the fingerprint you send to match the one they are using.

              If they have a valid certificate and are using it to perform a MITM attack, nothing you send or receive can be trusted.

              • If we're talking about the great firewall of china, you're right. BUT most corporate proxies run fairly standard software, and only update it every few months (if that). So, there's a pretty good chance of my getting the JS through the first time, and of the vendor taking a long time to work around it (if they ever do). Yes it's cat and mouse, but there are a lot of mice with different strategies, the cat isn't very quick, and as long as the mouse gets through once, it's enough to let the user know he's bei

    • by forkazoo (138186) <wrosecrans.gmail@com> on Friday September 27, 2013 @02:37AM (#44968025) Homepage

      That made me wonder about something at work recently. All the machines at work are owned by the organization. It would be trivial for them to add their own trusted signing authority, so they could MITM every SSL web site. It wouldn't be terribly hard to auto-generate "valid" SSL certs, and have it tagged as whoever you want the signing authority to be. All they'd have to do is add their own cert, in this case named "GeoTrust Global CA", and they'd have perfect control. To do it perfectly, they'd just need to query the site you're going to, and match up the signer's CN and sign the new fake cert, and you wouldn't know the difference. Who tracks the fingerprint of every cert for every site they go to? Well, I'm sure in this crowd, a few do.

      It's not merely possible. It's deployed, off-the-shelf technology. Not necessarily common, but many companies that do it see it as a cost reduction through more effective proxy usage, rather than anything nefarious.

      That said, the way SSL is handled by the browsers is absurd. Not notifying on changes compared to a cached fingerprint, and giving huge warnings on self certification are blatantly obvious errors in judgement. Conflating encryption and identity in one awkward mess has probably done more harm than good. IMHO, it should work a bit like SSH, where the first time you go to a website, you see a little unobtrusive popup saying, "This connection is encrypted. The site claims to be "Foo corp." The identity is (not verified || vouched for by the following : CA Bar, CA Baz). " Adding certs for CA's should be really obvious, not obscure black magic. So, if you attend University of Foo, you can add their self signed cert and all the servers on campus that you access over https will show up as signed by U of Foo. Untrusting certs should also be obvious in the UI. Some web of trust model should be available. If you ever get something other than what was cached, you should see the details side by side.

      As is, the system is mostly useless. It fails utterly at identification. And, it scares people away from using encryption on self signed certs. (As if that were somehow worse than operating entirely in plain text...)

      • Re: (Score:2, Informative)

        by AK Marc (707885)
        And most commercial sites do the same. They call it a "reverse proxy", and it is done because web server software sucks at encryption. So if you are moving 10 Gbps of encrypted web traffic, you put an encrypting proxy 1RU above the server; the server serves pages and the proxy encrypts them. Well, it's usually a little more complicated than that, but that's the general idea. I've done it. It is that easy.
      • by steelfood (895457) on Friday September 27, 2013 @03:22AM (#44968147)

        The current implementation in web browsers was designed by people who couldn't tell the difference between authentication and authorization.

        The reason why this paradigm has persisted is unknown, but the answer for you may vary depending on which end of the paranoia spectrum you're on. If you're on the Hanlon side, you'd say that the code is too old, and trying to change it would require too much work, so nobody really bothered. If you're on the conspiracy nut side, you'd say that the NSA and their agents are actively trying to keep these types of changes from going in.

        This problem with SSL certs has been known for the better part of 10 years now, and has been in focus for at least the past 5-7 years. Why Firefox could go through 30 revisions in that time and keep this behavior while changing practically everything else is quite the mystery. I'd say the same about Opera or IE, but they're closed-source and hence could not be subjected to the same standards of scrutiny. In fact, if there ever was a failure to the OSS model security-wise, Firefox's 1990's method of handling certs would be a prime example.

        • Re: (Score:3, Interesting)

          by BitZtream (692029)

          SSL has absolutely nothing at all to do with authorization. It carries no authorization information.

          You are confused and clearly don't understand how SSL works and what it does.

          SSL works by generating a new password for a symmetrical encryption algorithm on each session. Neither end knows the password before that point, they actually generate it together based on a communication method that ensures only the end points know the password.

          Because both sides generate a password for each session, if you did no
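The "both ends generate the password together" step BitZtream describes is key agreement; a toy finite-field Diffie-Hellman sketch shows the shape. A deliberately small Mersenne prime stands in for the large groups or elliptic curves real TLS uses — this illustrates the exchange only, and must not be used for actual security:

```python
import secrets

# Toy parameters: 2**127 - 1 is a Mersenne prime. Real TLS uses 2048-bit+
# groups or elliptic curves; this only illustrates the shape of the exchange.
P = 2**127 - 1
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1  # secret exponent, never transmitted
    return priv, pow(G, priv, P)         # (private, public) pair

# Only the public values cross the wire; the shared secret never does.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
a_secret = pow(b_pub, a_priv, P)
b_secret = pow(a_pub, b_priv, P)
assert a_secret == b_secret  # both ends derived the same session secret
```

Note this is also where the certificate matters: without authenticating the public values, a man in the middle can simply run one exchange with each side.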

      • by mvdwege (243851)

        Conflating encryption and identity in one awkward mess has probably done more harm than good.

        I suggest you go back to school. Encryption without authentication is useless.

        • by swilver (617741)

          Spoken like a true short-sighted authentication nut.

        • I disagree. Yes there is a risk that you will end up talking to a man in the middle if you use encryption without authentication. However

          1: The attacker does not know if you are performing authentication on a given connection or not. Therefore by attempting to MITM your connections he risks being noticed.
          2: MITM is a lot more effort than passive sniffing.

        • by Lumpy (12016)

          So all the numbers stations out there on shortwave are all useless? That is a perfect example of Encryption without authentication.

      • by VortexCortex (1117377) <VortexCortex.project-retrograde@com> on Friday September 27, 2013 @03:54AM (#44968285)

        That said, the way SSL is handled by the browsers is absurd. Not notifying on changes compared to a cached fingerprint, and giving huge warnings on self certification are blatantly obvious errors in judgement. Conflating encryption and identity in one awkward mess has probably done more harm than good. IMHO, it should work a bit like SSH, where the first time you go to a website, you see a little unobtrusive popup saying, "This connection is encrypted.

        Yep, completely absurd. Go into your browser security certs and notice that the Chinese root cert "CNNIC" is installed. That means any of those trusted roots can simply create an SSL cert for Google.com and unless you're manually verifying the cert chain every time you connect, you won't know you've been MITM'd -- Big green bar and everything... I like your idea about making things more like SSH, but I'm afraid users will just click through it without reading any warnings anyway. Oh, if only PKI hadn't been invented! Why, then we could just use some session salt nonce HMAC'd with our pre-shared key (password) to set up a connection that no man in the middle can intercept (since they don't have our password, or password hash, etc pre-shared secret). I can do this in JavaScript, (or more favorably with a plugin), but we really need the browsers to just prompt us for the credential to our bank or email BEFORE it ever makes the request or displays the password entry form -- The request comes in, says : "I'm user X, here's my nonce, gimme your nonce server, and we can start encrypting data with HMAC( PW, N1 ) as the key". Public key crypto should have only ever been used at account creation (the only time you need to send the pre-shared secret). I've always known the entire security community was full of morons since they didn't bitch about the foolishness of SSL PKI loudly enough -- Oh, and for the "but muh passwerds!" folks: Built in password manager. Different random password for every site, master password to unlock the keystore. This is 2013 and since it's not standard addition to browsers, I'm not sure folks like you or me CAN do anything about it if we haven't already.

        Additionally: People who searched for "Tinfoil Origami" also clicked on Convergence. [convergence.io]
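VortexCortex's pre-shared-secret scheme amounts to deriving a session key from the password and both nonces. A minimal sketch follows; note that real designs use PAKEs such as SRP for this instead, because a plain HMAC of a low-entropy password is open to offline guessing by anyone who records the traffic:

```python
import hashlib
import hmac

def session_key(pre_shared: bytes, client_nonce: bytes, server_nonce: bytes) -> bytes:
    """Derive a per-session key from the pre-shared secret and both nonces,
    roughly HMAC(PW, N1 || N2) as the comment proposes. A man in the middle
    who lacks the pre-shared secret cannot compute the key."""
    return hmac.new(pre_shared, client_nonce + server_nonce, hashlib.sha256).digest()
```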

    • by citizenr (871508) on Friday September 27, 2013 @03:25AM (#44968163) Homepage

          That made me wonder about something at work recently. All the machines at work are owned by the organization. It would be trivial for them to add their own trusted signing authority, so they could MITM every SSL web site.

      You just described how every enterprise firewall/scanner solution works

    • Where I recently worked I was asked to build a proxy server to proxy and filter HTTPS traffic, and to use group policy to distribute the wildcard certificate to the workstations. It was really easy, except for Firefox, which was a pain in the ass. I used Squid for the proxy server and SSL Bump. It was very cool and very evil. I wanted to make sure that the CEO, who was asking for this, understood that I could use this to do whatever I liked to web pages, including his internet banking, so I gave a demo where I

      • by smash (1351)
        There are off the shelf devices for network acceleration designed for this exact thing: Riverbed Steelhead. Not nefarious, but for acceleration of traffic over slow network connections.
    • by Nkwe (604125) on Friday September 27, 2013 @03:26AM (#44968169)

      All the machines at work are owned by the organization.

      You can stop right there. If the organization owns the machine, you have no expectation of privacy, legally or otherwise. From a technical perspective, if anyone other than you has administrative access to the machine, you should have no expectation of privacy. If you let malware gain administrative access, same story.

      • by AmiMoJo (196126) *

        Actually it is illegal for companies to read personal email received at work in much of the EU. If you open gmail.com in a browser and check your mail at lunch time they are not supposed to spy on it. People have gone to jail for doing so.

    • by wvmarle (1070040) on Friday September 27, 2013 @03:51AM (#44968261)

      As far as that goes, there are an awful lot of "trusted" signing authorities that come with any browser. I know we should probably trust them, because the authors of the browsers trust them. There's no really good reason to do so, other than if you don't, all SSL sites will warn that they may not be trustworthy.

      The one and only reason you can trust them, is because if their trust is broken, the company is out of business really soon. Prime example of course is DigiNotar [wikipedia.org] which was declared bankrupt a month after a breach of its certificates came to light.

      As soon as such a breach happens, browser vendors very quickly remove the offending certificate and push out a new update. Anyone using certificates from that vendor is forced to change almost instantly or people have issues accessing their web sites.

      And that's the one and only reason you can trust them - and why that trust is fairly worthwhile.

    • by the_B0fh (208483)

      10 years ago, it cost $150k to put your own root into the browsers, each. I'm sure the price has gone up.

      The reason wildcard certs aren't offered (actually, you could get them) is because each time you waste a couple of cpu cycles to get another cert signed, they get $$. Why kill the goose that lays dem elegant golden eggs?

    • This is a very good reason not to trust any closed-source browser, actually. If the source is closed, how the heck do you know what it's doing to show you that nice, safe green padlock icon? Of course, actually ploughing through and understanding every line of the SSL implementation code in your browser is a lot of work and 99.9% of us haven't done it, but if there were anything dodgy going on in an open-source browser it would pretty quickly hit the headlines on Slashdot and we'd all know. Of course again

  • by magic maverick (2615475) on Friday September 27, 2013 @02:19AM (#44967949) Homepage Journal

    From https://en.wikipedia.org/wiki/Convergence_%28SSL%29 [wikipedia.org]:

    With Convergence, however, there is a level of redundancy, and no single point of failure. Several notaries can vouch for a single site. A user can choose to trust several notaries, most of which will vouch for the same sites. If the notaries disagree on whether a site's identity is correct, the user can choose to go with the majority vote, or err on the side of caution and demand that all notaries agree, or be content with a single notary (the voting method is controlled with a setting in the browser addon). If a user chooses to distrust a certain notary, a non-malicious site can still be trusted as long as the remaining trusted notaries trust it; thus there is no longer a single point of failure.

    The Monkeysphere Project tries to solve the same problem by using the PGP web of trust model to assess the authenticity of https certificates.[8] [monkeysphere.info]

    Now, everyone, let's use the tools available!
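The voting modes the quote describes (majority, unanimous agreement, or a single notary) can be sketched as:

```python
from collections import Counter

def notary_verdict(reported_fingerprints, mode="majority"):
    """Each notary reports the certificate fingerprint it sees for a site.
    Modes mirror the addon setting quoted above: 'majority', 'consensus'
    (all notaries must agree), or 'any' (a single notary suffices).
    Returns the accepted fingerprint, or None if the check fails."""
    counts = Counter(reported_fingerprints)
    top_fp, top_n = counts.most_common(1)[0]
    n = len(reported_fingerprints)
    if mode == "consensus":
        return top_fp if top_n == n else None
    if mode == "majority":
        return top_fp if top_n > n / 2 else None
    return top_fp  # 'any': go with the most commonly seen fingerprint
```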

    • by BitZtream (692029)

      The Monkeysphere Project tries to solve the same problem by using the PGP web of trust model

      No, lets not.

      This is a horrible model to try to use on a global scale. Crowdsourcing is not an authentication solution; it's a stupid idea. There's a reason PGP has never taken off outside geeks validating their Linux binaries ... because someone cares about the Linux machine running in your basement.

      When will you guys get it through your heads that 'distributed everything' doesn't work. Central authorities are needed to mediate and ensure everyone is on the same page. Central authorities also come with the risk that they can be compromised, but it's far easier to deal with one compromised CA than several billion.

      • by Sloppy (14984) on Friday September 27, 2013 @12:43PM (#44972275) Homepage Journal

        When will you guys get it through your heads that 'distributed everything' doesn't work. Central authorities are needed to mediate and ensure everyone is on the same page.

        Those central authorities are welcome to join in, and become highly valued nodes in the WoT.

        Central authorities also come with the risk that they can be compromised, but its far easier to deal with one compromised CA than several billion.

        Aha, now I get it... could it really be this simple? Are X.509 advocates merely bad at math? The terms in your risk assessment formula are wrong.

        If a signer has a probability p of being accurate/trustworthy, then the chance of its attestation being correct, is p. That's how X.509 certs work and of course you understand that very well. Cool. With PGP, if signer1's probability of being accurate is p1, and signer2's probability of being accurate is p2, then the chances their joint attestation of an identity is accurate, is 1-((1-p1)*(1-p2)). Dude, that's a number which is greater than either p1 or p2.

        For example, say you think it's 90% likely that Verisign is telling you the truth about a key belonging to a certain website. They're the one and only signer for some website (because one signature is all this shitty tech can handle), so you think it's about 90% likely you're talking to that site, and 10% likely you're talking to the NSA. If that's your estimate of Verisign's reliability/trustworthiness, then 90% is the best you can do with that tech.

        Now let's say we upgrade from that garbage to 1991 technology: the PGP WoT. Suppose Verisign and CNNIC have both signed something, and you think Verisign is 90% reliable and CNNIC is 60% reliable. (Those sneaky Chinese bastards!)

        You're 1-( (1-0.9)*(1-0.6) ) = 0.96, that is, 96% confident that you're talking to the website you wanted to, and 4% worried that you're talking to someone who is involved in a joint US-China conspiracy (which, now that you think of it, is less than 4% likely to really occur). You have just wiped the floor with X.509's security performance.

        Suppose I signed it too. You don't know me. While it seems absurd at first that I'm less trustworthy than the Chinese government (they're known badguys; I'm merely some internet asshole) at least you know something of their loyalties or lack thereof, and very little of my competence and motivations. It's reasonable to assume I am probably more likely to conspire with your adversaries than they are. Some guy with US government might be holding a gun to my head, right now! So you decide to only trust me 1%. Ok. Guess what? You can work with that!

        Now my super-weak signature is on there. You trust the identity 1-( (1-0.9)*(1-0.6)*(1-0.01) ) = 96.04%. My super-weak nearly-completely-untrusted attestation made it stronger.

        This is why you were totally wrong when you said one compromised CA is easier to deal with than a billion. A billion compromised CAs are easier to deal with than one. Distributed authentication is more fault-tolerant, and we're now in a situation where the mainstream finally "gets it" that the faults really do occur, rather than it simply being a tinfoil-hat thing that cypherpunk SciFi authors pretend to worry about. X.509 is based on the idea that Verisign is telling you the truth 100% of the time, and cannot model the idea that you think they sometimes fail. PGP, on the other hand, is based on reality: that grey world where sometimes things work and sometimes they don't, where you sort of trust some people some of the time, etc. You know, the world that you actually live in.
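Sloppy's arithmetic checks out: assuming the signers fail independently, the joint confidence is 1 minus the product of the individual failure probabilities, and adding even a weakly trusted signer can only raise it:

```python
def joint_confidence(probs):
    """Chance that at least one of several independent signers is honest:
    1 - product over i of (1 - p_i). Adding any signer with p > 0 can only
    increase the result, which is the comment's point."""
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)
    return 1.0 - miss

assert abs(joint_confidence([0.9, 0.6]) - 0.96) < 1e-9          # Verisign + CNNIC
assert abs(joint_confidence([0.9, 0.6, 0.01]) - 0.9604) < 1e-9  # plus the 1% stranger
```

The independence assumption is doing real work here: colluding or jointly compromised signers are exactly the case the formula does not cover.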

  • by seawall (549985) on Friday September 27, 2013 @02:22AM (#44967963)

    Addons for web browsers (e.g. Certificate Patrol in Firefox; there are others) can clue you in to certificate changes. Rather like Ghostery (which shows where stuff is loading from in a web page), it is an eye-opener.

    • While we're at it, some other addons like Perspectives [perspectives-project.org] or Convergence [convergence.io] allow you to compare the cert you're receiving to the certs for the same site as seen from multiple 3rd-party servers across the world. This would let you confirm not only that the certificate hasn't changed, but also that the certificate you're seeing is the same one as *insert 3rd party unlikely to be MITMed at the same time as you* is seeing.

    • by wvmarle (1070040)

      It is pretty bad that the web browsers themselves don't do that.

      If I change the key on my server, ssh refuses to connect until I remove the old key from my configuration. That's proper behaviour. That browsers don't even warn that a certificate has changed is of course pretty bad.
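      That trust-on-first-use behaviour is easy to sketch for HTTPS too. A toy, not a real browser feature; the pin-file path is hypothetical:

```python
import hashlib
import json
import os
import ssl

# Hypothetical pin file, playing the role of ssh's ~/.ssh/known_hosts.
PIN_FILE = os.path.expanduser("~/.https_known_hosts")

def fingerprint(host, port=443):
    # Fetch the certificate the server presents and hash it,
    # much like ssh hashes host keys into known_hosts.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def check(host):
    pins = {}
    if os.path.exists(PIN_FILE):
        with open(PIN_FILE) as f:
            pins = json.load(f)
    fp = fingerprint(host)
    if host not in pins:
        pins[host] = fp  # trust on first use, like ssh's first connect
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
        return "new host, pinned"
    if pins[host] != fp:
        # ssh would refuse to connect at this point
        return "WARNING: certificate changed since last visit"
    return "ok"
```

      As the reply below this comment points out, big sites rotate among many valid certs, so a naive pin like this one raises constant false alarms.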

    • by Foresto (127767)

      Sadly, warning about certificate changes is practically useless today, since so many of the major sites have a bazillion different certificates, any of which might be the one you get at any given time. I stopped using Certificate Patrol for google sites because it was raising alarms almost every time I visited one.

  • As the NSA's internal history (the "Wood Study") put it in the 1970s: "SIGINT sites were generally acceptable as long as they were invisible to the local population" (page 393):
    http://www2.gwu.edu/~nsarchiv/NSAEBB/NSAEBB441/docs/doc%201%202008-021%20Burr%20Release%20Document%201%20-%20Part%20A2.pdf [gwu.edu]
    Welcome to the gift of free ENIGMA encoding with another rotor?
  • by goddidit (988396) on Friday September 27, 2013 @02:54AM (#44968057)

    Certificate Transparency is a new project, initiated at least partly by Google engineers, that intends to solve this problem with the SSL trust model: http://www.certificate-transparency.org/ [certificat...arency.org]
    It uses an append-only public log, similar to the Bitcoin transaction log, to make certificate issuance public.

  • Certificates have an expiry date. They are supposed to be changed before the expiry date is reached. On a well-managed system, you'll never see a certificate with less than a week left of its validity period. When the certificate is changed, it should be considered best practice to rotate the server key as well, so the new certificate always certifies a different key than the previous one did.

    It would be nice to have more information to verify the correctness of the new certificate than
  • I always wondered why SSH makes such a fuss about storing a site's key and warning of changes (even though it doesn't put much emphasis on verifying host names or certification chains), while almost every other channel will just happily and silently accept a modified certificate.

    Replacing that "This certificate is self-signed!" pop-up with a "This certificate is new or changed, please verify this MD5 hash on a trusted website: XX-YY-ZZ!" would probably increase security by an order of magnitude.

    Also do this fo

    • by gweihir (88907)

      Simple: SSH has never had its certificate system compromised, because it is de-central and requires individual approval anyways. The X.509 certificate system SSL uses is centralized and was likely compromised from the very start. So why warn users?

    • by MrMickS (568778)

      I always wondered why SSH makes such a fuss about storing a site's key and warning of changes (even though it doesn't put much emphasis on verifying host names or certification chains), while almost every other channel will just happily and silently accept a modified certificate.

      Replacing that "This certificate is self-signed!" pop-up with a "This certificate is new or changed, please verify this MD5 hash on a trusted website: XX-YY-ZZ!" would probably increase security by an order of magnitude.

      Also do this for background operations, like operating system fixes, virus scanner updates and maybe even MD5 downloads.

      Given that certificates expire, often yearly, do you really think that this would be a useful thing to do? Think about it for a minute...

      The majority of people don't know much about certificates other than the nice little GUI change that shows a site validated OK. If you start popping up dialog boxes telling them that a certificate has been updated at fairly regular intervals, what are they going to do? Check the certificate to make sure it's valid, or just click the box away? If people get used to getting a

  • by diamondmagic (877411) on Friday September 27, 2013 @03:36AM (#44968205) Homepage

    A few months ago, Google removed the ability in Chrome to staple a TLS/SSL certificate to your DNSSEC-signed DNS records: https://www.imperialviolet.org/2011/06/16/dnssecchrome.html [imperialviolet.org]

    It was finally a way to get an HTTPS secured website without needing to go to a CA. And they removed it.

    I just thought they were being incompetent as they usually were, but now I can't help but wonder if the NSA got on their backs about not being able to sign their own replacement certificate...

  • Twitter, too (Score:5, Interesting)

    by Lincolnshire Poacher (1205798) on Friday September 27, 2013 @03:43AM (#44968227)

    Another one that Certificate Patrol has flagged in the past week is *.twimg.com, which appears to be a mess of certs from different CAs.

    One subdomain (s0) has switched from a DigiCert EV wildcard cert to a Verisign per-subdomain cert.

    Another has gone from Verisign to Comodo.

    Annoyingly twimg.com seems to be embedded across the Web...

    I've been rejecting them all, given that Twitter provide no information on their site as to whether this was a planned change.

  • by candlebar (2042064) on Friday September 27, 2013 @05:18AM (#44968565)

    I can still read your email. It hasn't changed.

  • Little if any talk about http://convergence.io/ [convergence.io]. Watch the linked YouTube talk about why SSL certs are so broken and why this package would be so awesome (if it became popular).

  • If you're paranoid about Man-In-The-Middle attacks or would just like to know whether your own corporation surveils your HTTPS browsing, you can use this checker: https://www.grc.com/fingerprints.htm [grc.com] to confirm whether your certificate fingerprints are the same.
  • I don't believe it; what could possibly be gained by Google compromising their own email security?
    We already know that all the powers that be have access to everyone's gmail account through a Google-made interface, making compromising the security irrelevant.

  • It's worse than that: Google and many other big sites have multiple certificates, so when you go to one of those sites you never know which certificate you will get...
    Install Certificate Patrol on Firefox (and HTTPS Everywhere) and you will see how easily certificates change at Google.
