Encryption Networking Security The Internet Upgrades

HTTP 2.0 May Be SSL-Only 320

An anonymous reader writes "In an email to the HTTP working group, Mark Nottingham laid out the three top proposals about how HTTP 2.0 will handle encryption. The frontrunner right now is this: 'HTTP/2 to only be used with https:// URIs on the "open" Internet. http:// URIs would continue to use HTTP/1.' This isn't set in stone yet, but Nottingham said they will 'discuss formalising this with suitable requirements to encourage interoperability.' There appears to be support from browser vendors; he says they have been 'among those most strongly advocating more use of encryption.' The big goal here is to increase the use of encryption on the open web. One big point in favor of this plan is that if it doesn't work well (i.e., if adoption is poor), then they can add support for opportunistic encryption later. Going from opportunistic to mandatory encryption would be a much harder task. Nottingham adds, 'To be clear — we will still define how to use HTTP/2.0 with http:// URIs, because in some use cases, an implementer may make an informed choice to use the protocol without encryption. However, for the common case — browsing the open Web — you'll need to use https:// URIs and if you want to use the newest version of HTTP.'"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Wednesday November 13, 2013 @03:51PM (#45415941)

    otherwise this sounds like extortion from CAs

    • by geekboybt ( 866398 ) on Wednesday November 13, 2013 @03:54PM (#45415965)

      In the similar thread on Reddit, someone mentioned RFC 6698, which uses DNS (with DNSSEC) to validate certificates, rather than CAs. If we could make both of them a requirement, that'd fit the bill and get rid of the extortion.

      • You are still relying upon a third party to attest to your identity. In the DNS case, your DNS provider takes over the role of signing your stuff, and they can 'extort' you too.

        • Unless I run my own DNS, which is far easier than running a CA.

          • by Shoten ( 260439 )

            Unless I run my own DNS, which is far easier than running a CA.

            Not if you are using DNSSEC, it isn't. You talk about running your own DNS under those conditions as though a self-signed cert doesn't require a CA; it does. There's no such thing as certs without a CA. So, we're back to the "extortion" challenge again...either you run your own CA or are forced to pay someone else so that you can have a cert from theirs. I will say that at least the DNSSEC approach gets rid of the situation where any CA can give out a cert that would impersonate the valid one. In oth

            • by syzler ( 748241 ) <david@syzde[ ]et ['k.n' in gap]> on Wednesday November 13, 2013 @05:10PM (#45416953)

              Unless I run my own DNS, which is far easier than running a CA.

              Not if you are using DNSSEC, it isn't. You talk about running your own DNS under those conditions as though a self-signed cert doesn't require a CA; it does. There's no such thing as certs without a CA...

              DANE (DNS-based Authentication of Named Entities), RFC 6698, does NOT require the use of a recognized CA, although it does not disallow it. There are four "usage" types for certificates (excerpts from the RFC follow):

              1. Certificate usage 0 is used to specify a CA certificate, or the public key of such a certificate, that MUST be found in any of the PKIX certification paths for the end entity certificate given by the server in TLS.
              2. Certificate usage 1 is used to specify an end entity certificate, or the public key of such a certificate, that MUST be matched with the end entity certificate given by the server in TLS.
              3. Certificate usage 2 is used to specify a certificate, or the public key of such a certificate, that MUST be used as the trust anchor when validating the end entity certificate given by the server in TLS.
              4. Certificate usage 3 is used to specify a certificate, or the public key of such a certificate, that MUST match the end entity certificate given by the server in TLS.

              Both Certificate usage 2 and Certificate usage 3 allow a domain's administrator to issue a certificate without requiring the involvement of a third-party CA. For more information on DANE, refer to either RFC 6698 [ietf.org] or the Wikipedia article [wikipedia.org].
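
              As a rough illustration of usage 3 (a domain-issued certificate with no third-party CA involved), the sketch below computes the payload of a "3 0 1" TLSA record -- selector 0 (full certificate), matching type 1 (SHA-256) -- for a server's current certificate. The hostname is a placeholder and this is not a complete RFC 6698 implementation (selector 1 would require parsing out the SubjectPublicKeyInfo):

```python
import hashlib
import ssl

# Sketch: build the data a "3 0 1" TLSA record would publish for a host,
# i.e. DANE-EE usage (3), full certificate (selector 0), SHA-256 (type 1).
# "example.org" is a placeholder hostname.
HOST, PORT = "example.org", 443

pem = ssl.get_server_certificate((HOST, PORT))   # fetch the live certificate
der = ssl.PEM_cert_to_DER_cert(pem)              # TLSA matches against DER bytes
digest = hashlib.sha256(der).hexdigest()

# The record the zone administrator would publish (and sign with DNSSEC):
print(f"_{PORT}._tcp.{HOST}. IN TLSA 3 0 1 {digest}")
```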

      • by Anonymous Coward on Wednesday November 13, 2013 @04:49PM (#45416663)

        My predictions:
        - TLS will be required. No bullshit, no compromise, no whining. On by default, no turning it off.
        - RFC6698 DNSSEC/DANE (or similar) certificate pinning support required.
        - DANE (or similar) to authenticate the TLS endpoint hostname via DNSSEC - that is the MITM defense, not a CA
        - CA not required - if present, authenticates ownership of the domain, but not whether there's an MITM

        We have a shot at a potentially clearer hierarchy with DNSSEC than we do with CAs, where it's anything-goes and anyone can sign anything - and, for state-sponsored attackers, CAs clearly have, and do (whether they know it or not). We might need to fix DNSSEC a bit first, along the lines of TLS 1.3 (see below).

        Also, TLS 1.3 ideas are being thrown around. It will be considerably revised (might even end up being TLS 2.0, if not in version number then at least in intent). Here are my predictions based on what's currently being plotted:
        - Handshake changed, possibly for one with fewer round-trips/bullshit
        - Cipher suites separated into an authentication/key exchange technique and an authenticated encryption technique (rather than the unique combination of the two)
        - Renegotiation might not stay in, or might be redesigned?
        - Channel multiplexing?
        - MD5, SHA-1, and hopefully also SHA-2 removed, replaced with SHA-3 finalists: Skein-512-512? Keccak-512 (as in final competition round 3 winner, but hopefully NOT as specified in the weakened SHA-3 draft)?
        - Curve25519 / Ed25519 (and possibly also Curve3617) for key exchange and authentication: replace NIST curves with safecurves
        - RSA still available (some, such as Schneier, are wary of ECC techniques as NSA have a head start thanks to Certicom involvement and almost definitely know at least one cryptanalysis result we don't), but hardened - blinding mandated, minimum key size changed (2048 bit? 3072 bit? 4096 bit?)
        - PFS required; all non-PFS key exchanges removed; Curve25519 provides very very fast PFS, there's very little/no excuse to not have it
        - All known or believed insecure cipher suites removed. (Not merely deprecated; completely removed and unavailable.)
        - Most definitely RC4 gone, beyond a doubt, that's 100% toast. There may be strong pushes for client support for RC4-SHA/RC4-MD5 to be disabled THIS YEAR in security patches if at all possible! It is DEFINITELY broken in the wild! (This is probably BULLRUN)
        - Possibly some more stream cipher suites to replace it; notably ChaCha20_Poly1305 (this is live in TLS 1.2 on Google with Adam Langley's draft and in Chrome 33 right now and will more than likely replace RC4 in TLS 1.2 as an emergency patch)
        - AES-CBC either removed or completely redesigned with true-random IVs but using a stronger authenticator
        - Probably counter-based modes replacing CBC modes (CCM? GCM? Other? Later, output of CAESAR competition?)
        - The NPN->ALPN movement has pretty much done a complete 180 flip. We will likely revert to NPN, or a new variant similar to NPN, because any plaintext metadata should be avoided where possible, and eliminated entirely where practical - ALPN is a backwards step. IE11 supports it in the wild right now, but that can always be security-patched (and it WILL be a security patch).

        Of course, discussions are ongoing. This is only my impression of what would be good ideas, and what I have already seen suggested and discussed, and where I think the discussions are going to end up - but a lot of people have their own shopping lists, and we're a long way from consensus - we need to bash this one out pretty thoroughly.

        Last week's technical plenary was a tremendous kick up the ass (thanks Bruce! - honestly, FERRETCANNON... someone there reads Sluggy Freelance...). We all knew something had to be done. Now we're scratching our heads to figure out what we CAN do; as soon as we've done that, we'll be rolling our sleeves up to actually DO it. After consensus is achieved, and when the standard is finished, we'll be pushing very, very strongly to get it deployed forthwith - delays for compatibility because of client bugs will not be an acceptable excuse.

        • My predictions:
          It don't matter. The NSA will simply require the root certs of all CAs and make all this work moot.

          • Having access to root CA private keys doesn't help the NSA much if PFS is employed.

          • by Anonymous Coward on Wednesday November 13, 2013 @05:52PM (#45417481)

            If we use DNSSEC to pin a chain of trust all the way to the root, we have, hopefully, authenticated that certificate to its hostname. That, alone, mitigates an MITM, and it is even within the scope of the DNS hierarchy.

            Contrast: CAs have a flat, global authority to authenticate by fiat - therefore, they are fundamentally less secure, as instead of multiplying with the depth of the hierarchy, the potential weaknesses multiply with the number of commonly-accepted CAs.

            Of course, you can then (optionally) also choose to use a CA to countersign your certificate in a chain, if you want; presumably you pay them to certify who owns the domain.

            In the event of any disagreement, you just detected (and therefore prevented) an active, real MITM attack. An attacker would have to subvert not just a random CA, but hierarchically the correct DNS registry and/or registrar -and- a CA used to countersign, if present. We can make that a harder problem for them. It increases the cost. We like solutions that make a panopticon of surveillance harder - anything that makes it more expensive is good, because hopefully we can make it prohibitively expensive again, or at least, noisy enough to be globally detectable.

            There's also the separate question of the US Government holding the DNS root. Someone has to. Obviously it can't be them anymore. Even if we saw it, we can no longer trust it.

            We would also welcome more distributed solutions. If you happen to have a really nice, secure protocol for a distributed namespace (although please something that works better than, say, NameCoin), please publish.

          • It don't matter. The NSA will simply require the root certs of all CAs and make all this work moot.

            Actually, it would still help. There's a huge difference in resource requirements between listening in on a connection and doing a MITM on it. Mass surveillance is only possible because modern technology makes it cheap; every speedbump, no matter how trivial, helps when it applies to every single connection.

        • - MD5, SHA-1, and hopefully also SHA-2 removed, replaced with SHA-3 finalists: Skein-512-512? Keccak-512 (as in final competition round 3 winner, but hopefully NOT as specified in the weakened SHA-3 draft)?

          The SHA-2 family are still unbroken, and I don't believe there are even any substantial weaknesses known. No reason to drop support for them at all.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Downside with self signed certs is that you get that pop up warning about trusting the cert. For HTTP 2.0 to take off and require SSL (which is a good idea) there needs to be cheap access to valid certificates. Right now, the most reputable SSL vendors are way too expensive.

      • StartSSL (Score:5, Informative)

        by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Wednesday November 13, 2013 @04:17PM (#45416251) Homepage Journal
        StartSSL was without charge for individuals the last time I checked. The most expensive part of SSL nowadays for a small site operated as a hobby is the dedicated IP address that you need if you want users of IE on Windows XP and Android Browser on Android 2.x to be able to access your site. These browsers don't support SNI, which allows name-based virtual hosting to work on SSL.
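
        A small sketch of the SNI point above, assuming a name-based TLS virtual host (the hostname is just a placeholder): fetch the server certificate with and without SNI and compare fingerprints; a non-SNI client such as IE on XP typically receives the server's default certificate instead of the site's own.

```python
import hashlib
import socket
import ssl

# Sketch: compare the certificate presented with and without SNI.
# "example.org" is a placeholder; verification is disabled because we
# only want the raw certificate bytes (illustration only, not safe).
HOST, PORT = "example.org", 443

def cert_fingerprint(use_sni: bool) -> str:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    server_hostname = HOST if use_sni else None   # None = no SNI extension sent
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=server_hostname) as tls:
            der = tls.getpeercert(binary_form=True)
            return hashlib.sha256(der).hexdigest()

print("with SNI:   ", cert_fingerprint(True))
print("without SNI:", cert_fingerprint(False))
```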
      • by Junta ( 36770 ) on Wednesday November 13, 2013 @04:19PM (#45416275)

        The problem is that all http clients see 'https' as meaning the client has a level of expectation about 'security'. Browsers have long done things to make it very obvious which connections are 'good ssl' and which are 'bad ssl', but the expectation remains that 's' means 'meaningfully secure'.

        So how best to convey 'encrypted, but don't really care about third-party cert validation', which would be a must-have in a world where *every* public-facing site has a TLS-protected socket? Maybe a different URI scheme like 'httpe://', complete with the scare strikethroughs and such, but not with the 'are you sure, are you really really sure' that https does today...

        • by xous ( 1009057 )

          This is just a matter of changing the way the URI is displayed:

          NOT ENCRYPTED: http://
          ENCRYPTED, NOT VERIFIED: https://
          ENCRYPTED, VERIFIED: https://

          With a helpful little blurb that explains (with pictures) the differences. I'm sure with some thought and a few user trials you could come up with a UI solution that most people will be able to understand well enough to know that credit card stuff should be green and that pink is better than red.

        • by dgatwood ( 11270 )

          I think the green versus not green covers that. IMO, anybody who doesn't have an EV certificate probably doesn't really need a TLS certificate. For everybody else, storing the public key (preferably not a fingerprint—disk space is too cheap to cut corners like that) upon the user's first visit is sufficient security.

          Of course, such a scheme would bankrupt the bloated SSL cert industry and their extortion racket, but I would argue that such an outcome would be progress.

        • by TheNastyInThePasty ( 2382648 ) on Wednesday November 13, 2013 @04:48PM (#45416653)

          Normal people know that there's a difference between HTTP and HTTPS?

    • by GameboyRMH ( 1153867 ) <`gameboyrmh' `at' `gmail.com'> on Wednesday November 13, 2013 @04:00PM (#45416039) Journal

      This. If so, it will be a MASSIVE improvement.

      A connection with a self-signed cert is, in a very-worst-case and highly unlikely scenario, only as insecure as the plaintext HTTP connections we use every day without batting an eye. Let's start treating them that way.

      SSL-only would be a great first step in making life miserable for those NSA peeping toms.

      • Only if the standard is actually used. Look how long it's taking IPv6 to get implemented. And there's a very real reason why we need to upgrade. I'm definitely not holding my breath until HTTP 2.0 gets significant usage.
      • The problem with self-signed certs is that there isn't a good way to determine whether a site is legitimately using a self-signed cert, or if an attacker is successfully accomplishing a man-in-the-middle attack using a self-signed cert. If I use HTTPS, I want an assurance of security, and if there's a possible MITM attack in progress, I surely want to know immediately.
    • by girlintraining ( 1395911 ) on Wednesday November 13, 2013 @04:12PM (#45416195)

      otherwise this sounds like extortion from CAs

      You are so close. Eliminating plain-http would destroy the internet as we know it, because the only alternative then is forking cash over to an easily-manipulated corporation for the privilege of being able to talk on the internet. It's an attack on its very soul.

      It would kill things like Tor and hidden services. It would obliterate people being able to run their own servers off their own internet connection. It would irrevocably place free speech on the web at the mercy of corporations and governments.

      • by i kan reed ( 749298 ) on Wednesday November 13, 2013 @04:17PM (#45416257) Homepage Journal

        To be fair, no one is telling you to run your server on http 2.0. You can still run a gopher or ftp server, if the outdated technologies appeal to you enough.

        (Please don't dog pile me for saying ftp is outdated, I know you're old and cranky, you don't have to alert me)

        • To be fair, no one is telling you to run your server on http 2.0.

          The same can be said whenever a new version of a protocol comes out. But invariably, people adopt the new one, and eventually nobody wants to support the old one... and so while nobody is "telling you" anything... eventually you just can't do it anymore, because all protocols depend on the same thing: people using them. If nobody's serving content on them, then nobody's supporting the ability to read that content either.

          (Please don't dog pile me for saying ftp is outdated, I know you're old and cranky, you don't have to alert me)

          I am neither old, nor cranky. I am however an experienced IT professional who's been her

      • In theory, yes. If tomorrow Chrome, Safari, IE, FireFox, etc all upgraded their browsers to require HTTP 2.0/HTTPS-Only, tons of websites would be faced with the prospect of paying hundreds of dollars for a SSL certificate. These hobby sites with no actual income wouldn't be able to afford it. Presently, to host a website I can pay for a $10 a month shared hosting plan and $15 a year for a domain name. (My registrar is a bit more expensive but I've had a lot of good experience with them so I'm not likel

        • You can buy a properly signed SSL cert for as little as $9/year (possibly lower); if that is too much, then self-signed is always available.

          James

    • Validating the certificate is the problem. If it can't be validated, then SSL is useless; it is open to a man-in-the-middle [wikipedia.org] attack. The trouble with the current SSL system is that even a certificate signed by a CA is spoofable; a CA can sign a certificate for any domain, so if you can twist arms strongly enough (think: NSA or well-funded crooks) then you can still do a MitM attack. So: the first thing that is needed is a robust way of validating certificates -- I don't know how that could be done.

      The other pr

      • by spitzak ( 4019 ) on Wednesday November 13, 2013 @06:27PM (#45417815) Homepage

        Check if it is the same certificate you saw when you visited the site the last time.

        Go through a third party (perhaps one using a signed certificate) and check that you both see the same certificate.

        Both of these will defeat a lot of man-in-the-middle attacks.
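
        A minimal "trust on first use" sketch of the first idea -- remember each host's certificate fingerprint and warn when it changes on a later visit. The file name and storage format here are arbitrary choices, not any browser's actual mechanism:

```python
import hashlib
import json
import os
import ssl

PIN_FILE = os.path.expanduser("~/.cert_pins.json")   # arbitrary location

def load_pins() -> dict:
    try:
        with open(PIN_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def check_host(host: str, port: int = 443) -> str:
    pins = load_pins()
    der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((host, port)))
    fingerprint = hashlib.sha256(der).hexdigest()
    if host not in pins:
        pins[host] = fingerprint              # first visit: pin it
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f, indent=2)
        return f"first visit: pinned {fingerprint}"
    if pins[host] != fingerprint:
        return "WARNING: certificate changed (possible MITM, or legitimate rotation)"
    return "certificate matches the one seen on the previous visit"

print(check_host("example.org"))   # placeholder hostname
```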

      • You've fallen into the trap of seeing security as either black or white. Self-signed certs are dark gray enough that for you they're black, and thus the same as unencrypted HTTP. And a CA-verified cert is light gray enough to be white, and thus secure.

        The truth is that self-signed is much better than unencrypted, and CA-verified - while even better - still has enough problems to drive a truck through. Sideways.

        Which is why people are talking about DANE, which is even lighter gray than CA validated, but still fa

  • by fph il quozientatore ( 971015 ) on Wednesday November 13, 2013 @03:56PM (#45415987)
    I predict that the Unknown Powers will convince the committee to bail out and either (a) drop this idea overall (b) default to some old broken/flawed crypto protocol. Check back in 1 year.
    • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Wednesday November 13, 2013 @04:11PM (#45416189)

      You can laugh at the IETF as much as you want, there are lots of things to laugh at. However, there are still a lot of very technical people involved in the IETF, and a large subset of them are finding it unpleasant that the Internet they helped create has become something very different. They are fighting the hard fight right now, and we should all support them when we can.

      It is possible that the NSA or other similar dark forces may manage to subvert their intentions once more, but so far it looks like there is still hope for the good guys.

      Or I may be hopelessly naive.

  • ...or, you know, https everything now for starters, since the processing increase is negligible.

    A lot of sites are already on board.

    • Processing is negligible, but cost is not. If you are an enterprise-level organization, you should definitely go HTTPS or at least offer that as an option. If you are a hobby site running on a shared server for $10 a month, you aren't going to be able to invest in a $150 SSL certificate.

  • by girlintraining ( 1395911 ) on Wednesday November 13, 2013 @03:59PM (#45416027)

    People think that adding encryption to something makes it more secure. No, it does not. Encryption is worthless without secure key exchange, and no matter how you dress it up, our existing SSL infrastructure doesn't cut it. It never has. It was built insecure. All you're doing is adding a middle man, the certificate authority, that somehow you're supposed to blindly trust to never, not even once, fuck it up and issue a certificate that is later used to fuck you with. www.microsoft.com can be signed by any of the over one hundred certificate authorities in your browser. The SSL protocol doesn't tell the browser to check all hundred plus for duplicates; it just goes to the one that signed it and asks: Are you valid?

    The CA system is broken. It is so broken it needs to be put on a giant thousand-mile-wide sign and hoisted into orbit so it can be seen at night saying: "This system is fucked." Mandating a fucked system isn't improving security!

    Show me where and how you plan on making key exchange secure over a badly compromised and inherently insecure medium, aka the internet, using the internet. It can't be done. No matter how you cut it, you need another medium through which to do the initial key exchange. And everything about SSL comes down to one simple question: Who do you trust? And who does the person you trusted, in turn, trust? Because that's all SSL is: It's a trust chain. And chains are only as strong as the weakest link.

    Break the chain, people. Let the browser user take control over who, how, and when, to trust. Establish places in the real world, in meat space, in bricks and mortar land, where people can go to obtain and validate keys from multiple trusted parties. That's the only way you're going to get real security... otherwise you're going to be taking a butt torpedo stamped Made At NSA Headquarters up your browser backside. And pardon me for being so blunt, but explaining the technical ins and outs is frankly beyond this crowd today. Most of you don't have the technical comprehension skills you think you do -- so I'm breaking it down for you in everyday english: Do not trust certificate authorities. Period. The end. No debate, no bullshit, no anti-government or pro-government or any politics. The system is inherently flawed, at the atomic level. It cannot be fixed with a patch. It cannot be sufficiently altered to make it safe. It is not about who we allow to be certificate authorities, or whether this organization or that organization can be trusted. We're talking hundreds of billions of dollars in revenue riding on someone's word. You would have to be weapons grade stupid to think they will never be tempted to abuse that power -- and it does not matter who you put in that position. Does. Not. Matter.

    • by compro01 ( 777531 ) on Wednesday November 13, 2013 @04:03PM (#45416081)

      What are your thoughts on RFC 6698 [ietf.org] as a possible solution to the CA problem?

      • by girlintraining ( 1395911 ) on Wednesday November 13, 2013 @04:14PM (#45416213)

        What are your thoughts on RFC 6698 as a possible solution to the CA problem?

        I think it's already been proven that centralizing anything leads to corruption and manipulation. Whether you put it in DNS or put it in a CA, the result is the same: centralized control under the auspices of a third party. Any solution that doesn't allow all the power to stay in the hands of the server operator must be rejected.

        • So how would you recommend instead that the server operator prove his identity to members of the public? Meetings in person cannot scale past a heavily local-interest web site or the web site of a business that already has thousands of brick-and-mortar locations.
          • by EvilSS ( 557649 )
            Why not separate the "here is proof I am who I say I am" from "let's encrypt our conversation"? Web servers could easily use self-generated random certs to encrypt traffic. If you want to validate that you are who you say you are, then you add a cert from a "trusted" CA to do just that. It's no different than what we have today with HTTP vs HTTPS. If I go HTTP to a site I have no assurance that they are who they say they are. At least this way the traffic is encrypted even at the lowest trust tier.
            • Why not separate the "here is proof I am who I say I am" from "let's encrypt our conversation"?

              Because if you aren't authenticating, you're encrypting to the man in the middle.

              If I go HTTP to a site I have no assurance that they are who they say they are.

              When the scheme is HTTP, you have no expectation of assurance.

        • So your solution is?
          not using anything 'cause the NSA is over you?

          Saying that the CA system and the DNS(SEC) infrastructure are the same is retarded.
          The CA system is managed by hundreds of companies, and you cannot possibly know if some company has issued an unauthorized certificate.
          Want to know if someone is giving false information in DNSSEC? "dig domainname +dnssec" should be enough....

          The current (DNSSEC) system has problems, but it is not as rotten as not having anything, so it's better than nothing. Plea

      • Why is 6698 funny? The author was serious. Now 1149 [ietf.org]? That's funny.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      SSL has great key exchange mechanisms. Diffie-Hellman is the most common one, and with large enough groups and large enough keys it works very well. What works less well is the authentication bit, which is what the CAs are doing. But encryption with bad authentication, or even without authentication at all, is not worthless. It prevents passive surveillance, such as the one NSA, GCHQ and their ilk are perpetrating on hundreds of millions of internet users. Yes, you are vulnerable to man-in-the-middle attack
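
      A toy (deliberately unauthenticated) Diffie-Hellman exchange that illustrates the point: both ends derive the same secret over an open channel, but nothing here proves who the other end is, so an active man in the middle could simply run the exchange with each side. The parameters are far too small for real use; TLS uses 2048-bit+ groups or elliptic curves.

```python
import secrets

# Toy finite-field Diffie-Hellman. p is a small Mersenne prime (2**127 - 1),
# fine for illustration but nowhere near real-world sizes.
p = 2**127 - 1
g = 5

a = secrets.randbelow(p - 2) + 2    # Alice's private exponent
b = secrets.randbelow(p - 2) + 2    # Bob's private exponent
A = pow(g, a, p)                    # Alice -> Bob, sent in the clear
B = pow(g, b, p)                    # Bob -> Alice, sent in the clear

# Both sides compute g^(ab) mod p; an eavesdropper seeing only A and B cannot.
assert pow(B, a, p) == pow(A, b, p)
print("shared secret:", hex(pow(B, a, p)))
```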

    • I (the user) am the one who should be purchasing trust from a vendor. A vast network of obviously compromised cert vendors should not be in my browser. They don't work for me, so why are they even mentioned in my browser? There should be just one: the one I have chosen to trust, the one I pay to trust and that will go out of business if they ever break the trust of their customers.
      Frankly, I would prefer the trust vendor to be the same as the browser vendor. I am trusting them both, so I would rather t

      • Frankly, I would prefer the trust vendor to be the same as the browser vendor. I am trusting them both, so I would rather they be the same thing.

        No you don't want that. The browser developers should concern themselves with rendering the content correctly and standards-compliant, and the user-experience/interface. Separation of duties when it comes to money is paramount -- that's why you have outside accountants review your books, not internal ones. Otherwise you have the situation of creating a financial incentive to break or manipulate the system. Even one character, on one line of code, can change something from secure to exploitable. Don't tempt

        • You are right. The two should remain separate, if for no other reason, then it will be easier to deprecate them individually if they screw up.

    • by Anonymous Coward on Wednesday November 13, 2013 @04:28PM (#45416375)
      Posting anonymously for my reasons...

      People think that adding encryption to something makes it more secure. Encryption is worthless without secure key exchange, and no matter how you dress it up, our existing SSL infrastructure doesn't cut it. It never has. It was built insecure.

      Technically wrong. The key exchange is secure; the infrastructure managing the key trust is not trustworthy. Please do not mix the two.

      Show me where and how you plan on making key exchange secure over a badly compromised and inherently insecure medium, aka the internet, using the internet. It can't be done.

      Lolwut? It *CAN* be done. Personally I'm working on an alternative to TLS with federated authentication, and no X.509. The base trust will be the DNSSEC infrastructure. Sorry, but you have got to trust someone. I know it's not perfect; in fact, as soon as I'm finished with this project (Fenrir -- will be giving a lightning talk @CCC) I think I'll start working on fixing the mess that DNSSEC is. I already have some ideas. It will take some time, but it is doable.

      No matter how you cut it, you need another medium through which to do the initial key exchange.

      You keep confusing the key exchange algorithm and the base of the trust for your system... are you doing this on purpose or are you just trolling?

      Let the browser user take control over who, how, and when, to trust. Establish places in the real world, in meat space, in bricks and mortar land, where people can go to obtain and validate keys from multiple trusted parties.

      And give each other trust in a hippie fashion... No, you need more than that. And why does it need to be the browser? If you have a way to manage trust, why only at the browser level?

      And pardon me for being so blunt, but explaining the technical ins and outs is frankly beyond this crowd today. Most of you don't have the technical comprehension skills you think you do -- so I'm breaking it down for you in everyday english: Do not trust certificate authorities. Period. The end. No debate, no bullshit, no anti-government or pro-government or any politics.

      You keep talking about technical stuff and then you keep pointing at political and management problems... get your facts straight. You always make it seem as if the SSL protocol is broken per se. It is not.

      The system is inherently flawed, at the atomic level. It cannot be fixed with a patch. It cannot be sufficiently altered to make it safe. It is not about who we allow to be certificate authorities, or whether this organization or that organization can be trusted.

      Well, I'm working on a solution! NO X.509, trust from the DNSSEC chain. That's as trustworthy as you can get. I'll work on a replacement for the DNS system after this project is finished. Keep your ears open for the "Fenrir" project; I'll be presenting it at the European CCC, although it won't be finished by then. Come and discuss if you are interested. Otherwise stop spreading useless fear. -- Luca

    • All you're doing is adding a middle man, the certificate authority, that somehow you're supposed to blindly trust to never, not even once, fuck it up and issue a certificate that is later used to fuck you with.

      I would also add that, in practice, it's the third middle man. The second is the ISP.

      You likely have downloaded the browser with the CA list from the net; has it been tampered with? You validate the download with a checksum, gotten from the net; has it been tampered with? What about the package manager

    • So true, yet so moot. Let me use numbering to address some key discrepancies on your otherwise totally intuitive, yet irrelevant arguments:
      1. You just discredited a system that has been successfully protecting the majority of internet-bound critical use cases for years now. So I'm assuming you do absolutely no banking or social networking tasks on the WWW, as you do not deem them safe enough?
      2. You now tell me you perform such use cases through TOR. Too bad you just discredited TOR with that weakest link of
    • You're right that it only takes one compromised Certificate Authority - whether it's compromised by the NSA, hackers, or a corrupt owner - to issue what would be considered legitimate certificates for hundreds of websites like Microsoft or a bank.

      But physical key exchange doesn't scale. You can't get a billion people that use desktop, laptops, tablets, or smart phones to understand how to do a physical key exchange or why it matters, let alone organize a practical way to get it done.

      I don't know what
    • While you're right that there are issues with the certificate-based approach, it's not nearly as bad as you describe. There are solutions already implemented that mitigate much of the risk.

      One example is certificate pinning. Google already pins all Google-owned sites in Chrome. This is actually how the DigiNotar compromise was discovered; Chrome users got an error when trying to get to a Google site.

      Another example is SSH-style change monitoring, alerting users when certificates change unexpectedly. Th

    • Oh, and as Anonymous Coward pointed out below - key exchange over the internet is secure; you can exchange keys with someone with certainty that the key has not been modified. What key exchange over the internet does not do is guarantee the identity of the other person.

      You might call that a distinction without a difference, but I think it does matter.
  • by simpz ( 978228 ) on Wednesday November 13, 2013 @04:09PM (#45416167)
    We save about 10% of our Internet bandwidth by running all http traffic through a caching proxy. This would seem to prevent this bandwidth saving for things that just don't need encryption. This would be any public site that is largely consumable content. All in favour of it for anything more personal.

    Also, how are companies supposed to effectively web filter if everything is HTTPS? DNS filtering is, in general, too broad a brush. We may not like our web filtered, but companies have a legal duty to ensure employees aren't seeing questionable material, even if on someone else's computer. Companies have been sued for allowing this to happen.
    • This would seem to prevent this bandwidth saving for things that just don't need encryption. This would be any public site that is largely consumable content.

      Please define what you mean by "consumable content" [gnu.org]. If a user is logged in, the user needs to be on HTTPS so that other users cannot copy the session cookie that identifies him. This danger of copying the session cookie is why Facebook and Twitter have switched to all HTTPS all the time. Even sites that allow viewing everything without logging in often have social recommendation buttons that connect to Facebook, Twitter, Google+, or other services that may have an ongoing session.
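
      A small sketch of the standard mitigation implied here: mark the session cookie Secure (never sent over plain http://) and HttpOnly (not readable from script), so it cannot be sniffed off an unencrypted request. The cookie name and value are placeholders.

```python
from http.cookies import SimpleCookie

# Sketch: the Set-Cookie header a server would emit for a session cookie
# that is confined to HTTPS. Name and value are placeholders.
cookie = SimpleCookie()
cookie["session_id"] = "d41d8cd98f00b204"
cookie["session_id"]["secure"] = True      # only sent over HTTPS
cookie["session_id"]["httponly"] = True    # not exposed to JavaScript
cookie["session_id"]["path"] = "/"

print(cookie.output())
```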

      • You would need a different set of cookies for http and https domains. You would always set the cookies in https, and static things like the Facebook logo could go over http with only giving away that you are loading a new page with that logo on it. This gives away information on what you are doing, but much of it can already be assumed if your computer opens a connection to Facebook's servers.

        This will require a lot of micromanagement by web developers, but could be possible for large sites. Also set
    • by iroll ( 717924 )

      Also, how are companies supposed to effectively web filter if everything is HTTPS?

      They shouldn't. They should cut off web access for the grunts that don't need it (e.g. telecenter people who are using managed applications) and blow it wide open for those that do (engineers, creative professionals, managers). Hell, the managers are already opting out of the filtering and infecting their networks with porn viruses, so what would this change?

      We may not like our web filtered, but companies have a legal duty to ensure employees aren't seeing questionable material, even if on someone else's computer. Companies have been sued for allowing this to happen.

      Citation needed. Speaking of too broad a brush; this comment paints with a roller. I've only worked for one company that used WebSense, and that was a

  • by gweihir ( 88907 ) on Wednesday November 13, 2013 @04:09PM (#45416173)

    If there is, I do not see it. Strikes me as people that cannot stop standardizing when the problem is solved. Pretty bad. Usually ends in the "Second System Effect" (Brooks), where the second system has all the features missing in the first one and becomes unusable due to complexity.

    • Since web == http + html, I have a hard time understanding how Web 2.0 can operate with HTTP 1.x.
    • by tepples ( 727027 )
      HTTP 2.0 is a rebadge of Google's SPDY protocol, which uses several methods to hide Internet latency in web connections.
    • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday November 13, 2013 @05:59PM (#45417553) Journal

      Yes, there is a lot of need for HTTP 2.

      HTTP was great when pages were largely just a single block of HTML, but modern pages include many separate resources which are all downloaded over separate connections, and have for a long time, actually. HTTP pipelining helps by reducing the number of TCP connections that have to be established, but they still have to be sequential per connection, and you still need a substantial number of connections to parallelize download. This all gets much worse for HTTPS connections because each connection is more expensive to establish.

      HTTP 2 is actually just Google's SPDY protocol (though there may be minor tweaks in standardization, the draft was unmodified SPDY), which fixes these issues by multiplexing many requests in a single TCP connection. The result is a significant performance improvement, especially on mobile networks. HTTP 2 / SPDY adds another powerful tool as well: it allows servers to proactively deliver content that the client hasn't yet requested. It's necessary for the browser to parse the page to learn what other resources are required, and in some cases there may be multiple levels of "get A, discover B is needed, get B, discover C is needed...". With SPDY, servers that know that B, C and D are going to be needed by requesters of A can go ahead and start delivering them without waiting.
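
      A rough sketch of the multiplexing idea only -- a hypothetical frame layout, not the actual SPDY/HTTP 2 framing: several logical streams are chopped into frames tagged with a stream id, interleaved onto one connection, and reassembled on the other end, so one slow response no longer holds up the rest.

```python
from collections import defaultdict
from dataclasses import dataclass
from itertools import zip_longest

@dataclass
class Frame:
    stream_id: int
    payload: bytes

def multiplex(streams: dict) -> list:
    """Interleave small chunks of each stream onto one 'connection'."""
    chunked = {
        sid: [data[i:i + 4] for i in range(0, len(data), 4)]
        for sid, data in streams.items()
    }
    wire = []
    for row in zip_longest(*chunked.values()):
        for sid, chunk in zip(chunked.keys(), row):
            if chunk is not None:
                wire.append(Frame(sid, chunk))
    return wire

def demultiplex(frames: list) -> dict:
    out = defaultdict(bytes)
    for frame in frames:
        out[frame.stream_id] += frame.payload   # reassemble per stream
    return dict(out)

streams = {1: b"<html>...</html>", 3: b"body { }", 5: b"console.log(1)"}
wire = multiplex(streams)                # what actually crosses the single TCP link
assert demultiplex(wire) == streams      # every response arrives intact
```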

      The result can be a significant increase in performance. It doesn't benefit sites that pull resources from many different domains, because each of those must be a separate connection, and it tends to provide a lot more value for sites that are already heavily optimized for performance, because those that aren't tend to have lots of non-network bottlenecks that dominate latency. Obviously it will be well-optimized sites that can make use of the server-initiated resource delivery, too.

      Actually, though, Google has decided that SPDY isn't fast enough, and has built and deployed yet another protocol, called QUIC, which addresses a major remaining problem of SPDY: it's built on TCP. TCP is awesome, make no mistake, it's amazing how well it adapts to so many different network environments. But it's not perfect, and we've learned a lot about networking in the last 30 years. One specific problem that TCP has is that one lost packet will stall the whole stream until that packet loss is discovered and resent. QUIC is built on top of UDP and implements all of the TCP-analogue congestion management, flow control and reliability itself, and does it with more intelligence than can be implemented in a simple sliding window protocol.

      I believe QUIC is already deployed and is available on nearly all Google sites, but I think you have to get a developer version of Chrome to see it in action. Eventually Google will (as they did with SPDY) start working with other browser makers to get QUIC implemented, and with standards bodies to get it formalized as a standard.

      Relevant to the topic of the article, neither SPDY nor QUIC even have an unencrypted mode. If the committee decides that they want an unencrypted mode they'll have to modify SPDY to do it. It won't require rethinking any of how the protocol works, because SPDY security all comes from an SSL connection, so it's just a matter of removing the tunnel. QUIC is different; because SSL runs on TCP, QUIC had to do something entirely different. So QUIC has its own encryption protocol baked into it from the ground up.

  • If http:// will fall back to HTTP 1.0, how does that make the Internet a more secure place? Will the users actually care that the page is being served by an older protocol, enough to type it again with https? Will they even notice?

    • Well, if the browser puts up a big warning message saying "anybody can see this, are you sure?" people might understand.

      In the last few months, it's been far easier to explain such things to people, because it's suddenly a real thing and is tangible.

  • by fahrbot-bot ( 874524 ) on Wednesday November 13, 2013 @04:29PM (#45416395)

    This is already a pain in the ass for me. I use a local proxy/filter (Proxomitron) to allow me to filter my HTTP streams for any application w/o having to configure them individually - or in cases where something like AdBlock would be browser specific. This doesn't work for HTTPS.

    For example, Google's switch to HTTPS-only annoys me to no end, as I use Proxomitron to clean up and/or disable their various shenanigans - like the damn sidebar, URL redirects, suggestions, etc... At this time, using "nosslsearch.google.com" with a CNAME of "www.google.com" works to get a non-encrypted page (in Proxomitron, I simply edit the "Host:" header for nosslsearch.google.com hits to be www.google.com), but who knows how much longer that will last.

    Thankfully, DuckDuckGo and StartPage allow me to customize their display to my liking w/o having to edit it on the fly. If only Google would get their head out of their ass and support the same, rather than only allowing their definition of an "enhanced user experience".

    Seriously, do we really *need* HTTPS for everything - like *most* web searches, browsing Wikipedia or News? I think not.

    • There's no reason you couldn't still use a filtering proxy. What you'll need to do is configure that filtering proxy with its own CA cert which you'll have to add to your browser's trusted CAs. The filtering proxy can terminate the outside SSL connection to the remote site (with its own list of trusted external CAs) and generate and sign its own certificates (with its configured CA cert) for the internal SSL connection to your browser. This is how BurpSuite works, for example.
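
      A sketch of the client side of that setup, assuming the proxy listens on 127.0.0.1:8080 and its CA certificate has been exported to a local file (both, like the path, are assumptions): trust only that CA for outbound HTTPS so the proxy's re-signed certificates validate.

```python
import ssl
import urllib.request

# Sketch: route HTTPS through a local intercepting/filtering proxy and trust
# its CA certificate. Proxy address and CA path are assumptions.
ctx = ssl.create_default_context(cafile="/etc/ssl/filtering-proxy-ca.pem")

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"https": "http://127.0.0.1:8080"}),
    urllib.request.HTTPSHandler(context=ctx),
)

with opener.open("https://example.org/") as resp:   # placeholder URL
    print(resp.status, len(resp.read()), "bytes via the intercepting proxy")
```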

  • by Shoten ( 260439 )

    One big point in favor of this plan is that if it doesn't work well (i.e., if adoption is poor), then they can add support for opportunistic encryption later. Going from opportunistic to mandatory encryption would be a much harder task.

    Err...isn't going from opportunistic to mandatory encryption what they're trying to do now? Last I saw, HTTP was seeing a little bit of use already. The addition of a version number to it doesn't change the fact that they're already faced with existing behavior. It seems

  • by Animats ( 122034 ) on Wednesday November 13, 2013 @04:53PM (#45416729) Homepage

    If everything is to go SSL, we now need widespread "man-in-the-middle" intercept detection. This requires a few things:

    • SSL certs need to be published publicly and widely, so tampering will be detected.
    • Any CA issuing a bogus or wildcard cert needs to be downgraded immediately, even if it invalidates every cert they've issued. Browsers should be equipped to raise warning messages when this happens.
    • MITM detection needs to be implemented within the protocol. This is tricky, but possible. A MITM attack results in the crypto bits changing while the plaintext doesn't. If the browser can see both, there are ways to detect attacks. Some secure phones have a numeric display where they show you two or three digits derived from the crypto stream. The two parties then verbally compare the numbers displayed. If they're different, someone is decrypting and reencrypting the bit stream.
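
    A hedged sketch of that "compare a few digits" idea (in the spirit of ZRTP's short authentication string, not an actual TLS mechanism): each endpoint derives a short code from keying material it believes it shares with the other side; a re-encrypting man in the middle holds different keys with each party, so the spoken codes would not match. The secret below is a placeholder.

```python
import hashlib
import hmac

def short_auth_string(session_secret: bytes, digits: int = 3) -> str:
    """Derive a short human-comparable code from shared keying material."""
    digest = hmac.new(session_secret, b"sas-demo", hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 10**digits:0{digits}d}"

shared = b"keying material both endpoints derived"   # placeholder
print("Both parties read out:", short_auth_string(shared))
# Under an active MITM, each party shares a *different* secret with the
# attacker, so the codes they read to each other would disagree.
```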
    • "The two parties then verbally compare the numbers displayed"

      But if there is a man in the middle, what's to stop them from changing the numbers in the voice?

      It would be easy for a computer to change some checksum bits. If I have to manually talk to every customer who visits my eCommerce site to confirm their numbers each page load, I would probably walk out.
  • by WaffleMonster ( 969671 ) on Wednesday November 13, 2013 @05:05PM (#45416893)

    I think HTTP/2 should resist the temptation to concern itself with security, concentrate on efficiently spewing hypertext and address security abstractly and cleanly at a separate layer from HTTP/2. Obviously as with HTTP/1 there are some layer dependencies but that is no excuse for even having a conversation about http vs https or opportunistic encryption. It is the wrong place.

    Things change; who knows, some day someone may want to use something other than TLS with HTTP/2. They should be able to do so without having to suffer. Not respecting layers is how you end up with unnecessarily complex and rigid garbage.

    For example who is to say there is no value in HTTP/2 over an IPsec protected link without TLS?

    • by 0123456 ( 636235 )

      For example who is to say there is no value in HTTP/2 over an IPsec protected link without TLS?

      That is precisely the attitude that made IPSEC a bloated pig that's almost impossible for mortal man to configure: 'Who is to say there is no value in an IPSEC connection with no encryption?'

      • That is precisely the attitude that made IPSEC a bloated pig that's almost impossible for mortal man to configure: 'Who is to say there is no value in an IPSEC connection with no encryption?'

        I'll see your IPSEC and raise you an H323.
