HTTP 2.0 May Be SSL-Only

An anonymous reader writes "In an email to the HTTP working group, Mark Nottingham laid out the three top proposals about how HTTP 2.0 will handle encryption. The frontrunner right now is this: 'HTTP/2 to only be used with https:// URIs on the "open" Internet. http:// URIs would continue to use HTTP/1.' This isn't set in stone yet, but Nottingham said they will 'discuss formalising this with suitable requirements to encourage interoperability.' There appears to be support from browser vendors; he says they have been 'among those most strongly advocating more use of encryption.' The big goal here is to increase the use of encryption on the open web. One big point in favor of this plan is that if it doesn't work well (i.e., if adoption is poor), then they can add support for opportunistic encryption later. Going from opportunistic to mandatory encryption would be a much harder task. Nottingham adds, 'To be clear — we will still define how to use HTTP/2.0 with http:// URIs, because in some use cases, an implementer may make an informed choice to use the protocol without encryption. However, for the common case — browsing the open Web — you'll need to use https:// URIs if you want to use the newest version of HTTP.'"
  • by Anonymous Coward on Wednesday November 13, 2013 @03:51PM (#45415941)

    otherwise this sounds like extortion from CAs

    • by geekboybt ( 866398 ) on Wednesday November 13, 2013 @03:54PM (#45415965)

      In the similar thread on Reddit, someone mentioned RFC 6698, which uses DNS (with DNSSEC) to validate certificates, rather than CAs. If we could make both of them a requirement, that'd fit the bill and get rid of the extortion.

      • by Junta ( 36770 ) on Wednesday November 13, 2013 @04:23PM (#45416321)

        You are still relying upon a third party to attest to your identity. In the DNS case, your DNS provider takes over the role of signing your stuff, and they can 'extort' you too.

        • by kthreadd ( 1558445 ) on Wednesday November 13, 2013 @04:30PM (#45416407)

          Unless I run my own DNS, which is far easier than running a CA.

          • by Shoten ( 260439 ) on Wednesday November 13, 2013 @04:45PM (#45416611)

            Unless I run my own DNS, which is far easier than running a CA.

            Not if you are using DNSSEC, it isn't. You talk about running your own DNS under those conditions as though a self-signed cert doesn't require a CA; it does. There's no such thing as certs without a CA. So, we're back to the "extortion" challenge again...either you run your own CA or are forced to pay someone else so that you can have a cert from theirs. I will say that at least the DNSSEC approach gets rid of the situation where any CA can give out a cert that would impersonate the valid one. In other words, the cert for www.google.com would actually be tied to www.google.com instead of having to just come from any one of the dozens of accepted CAs out in the world.

            • by syzler ( 748241 ) <{david} {at} {syzdek.net}> on Wednesday November 13, 2013 @05:10PM (#45416953)

              Unless I run my own DNS, which is far easier than running a CA.

              Not if you are using DNSSEC, it isn't. You talk about running your own DNS under those conditions as though a self-signed cert doesn't require a CA; it does. There's no such thing as certs without a CA...

              DANE (DNS-based Authentication of Named Entities), RFC 6698, does NOT require the use of a recognized CA, although it does not disallow it. There are four "usage" types for certificates (excerpts from the RFC follow):

              1. Certificate usage 0 is used to specify a CA certificate, or the public key of such a certificate, that MUST be found in any of the PKIX certification paths for the end entity certificate given by the server in TLS.
              2. Certificate usage 1 is used to specify an end entity certificate, or the public key of such a certificate, that MUST be matched with the end entity certificate given by the server in TLS.
              3. Certificate usage 2 is used to specify a certificate, or the public key of such a certificate, that MUST be used as the trust anchor when validating the end entity certificate given by the server in TLS.
              4. Certificate usage 3 is used to specify a certificate, or the public key of such a certificate, that MUST match the end entity certificate given by the server in TLS.

              Both Certificate usage 2 and Certificate usage 3 allow a domain's administrator to issue a certificate without requiring the involvement of a third party CA. For more information on DANE, refer to either RFC 6698 [ietf.org] or the Wikipedia article [wikipedia.org].
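
              As a concrete illustration of usage 3 ("domain-issued certificate"), here is a minimal sketch, using only the Python standard library, of computing the data for a TLSA record from a server's certificate. The host name and the "3 0 1" parameters (full certificate, SHA-256) are example choices, not anything the RFC mandates:

                import hashlib
                import ssl

                # Fetch the server's certificate (PEM) and convert it to DER form.
                pem = ssl.get_server_certificate(("www.example.org", 443))
                der = ssl.PEM_cert_to_DER_cert(pem)

                # TLSA usage 3 (domain-issued cert), selector 0 (full certificate),
                # matching type 1 (SHA-256 of the DER encoding).
                digest = hashlib.sha256(der).hexdigest()

                # The zone would then publish something like:
                # _443._tcp.www.example.org. IN TLSA 3 0 1 <digest>
                print("_443._tcp.www.example.org. IN TLSA 3 0 1", digest)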

      • by Anonymous Coward on Wednesday November 13, 2013 @04:49PM (#45416663)

        My predictions:
        - TLS will be required. No bullshit, no compromise, no whining. On by default, no turning it off.
        - RFC6698 DNSSEC/DANE (or similar) certificate pinning support required.
        - DANE (or similar) to authenticate the TLS endpoint hostname via DNSSEC - that is the MITM defense, not a CA
        - CA not required - if present, authenticates ownership of the domain, but not whether there's an MITM

        We have a shot at a potentially clearer hierarchy with DNSSEC than we do with CAs, where it's anything-goes, anyone can sign anything - and state-sponsored attackers clearly have, and do (whether the CAs know it or not). We might need to fix DNSSEC a bit first, along the lines of TLS 1.3 (see below).

        Also, TLS 1.3 ideas are being thrown around. It will be considerably revised (might even end up being TLS 2.0, if not in version number then at least in intent). Here are my predictions based on what's currently being plotted:
        - Handshake changed, possibly for one with fewer round trips/bullshit
        - Cipher suites separated into an authentication/key exchange technique and an authenticated encryption technique (rather than the unique combination of the two)
        - Renegotiation might not stay in, or might be redesigned?
        - Channel multiplexing?
        - MD5, SHA-1, and hopefully also SHA-2 removed, replaced with SHA-3 finalists: Skein-512-512? Keccak-512 (as in final competition round 3 winner, but hopefully NOT as specified in the weakened SHA-3 draft)?
        - Curve25519 / Ed25519 (and possibly also Curve3617) for key exchange and authentication: replace NIST curves with safecurves
        - RSA still available (some, such as Schneier, are wary of ECC techniques as NSA have a head start thanks to Certicom involvement and almost definitely know at least one cryptanalysis result we don't), but hardened - blinding mandated, minimum key size changed (2048 bit? 3072 bit? 4096 bit?)
        - PFS required; all non-PFS key exchanges removed; Curve25519 provides very very fast PFS, there's very little/no excuse to not have it
        - All known or believed insecure cipher suites removed. (Not merely deprecated; completely removed and unavailable.)
        - Most definitely RC4 gone, beyond a doubt, that's 100% toast. There may be strong pushes for client support for RC4-SHA/RC4-MD5 to be disabled THIS YEAR in security patches if at all possible! It is DEFINITELY broken in the wild! (This is probably BULLRUN)
        - Possibly some more stream cipher suites to replace it; notably ChaCha20_Poly1305 (this is live in TLS 1.2 on Google with Adam Langley's draft and in Chrome 33 right now and will more than likely replace RC4 in TLS 1.2 as an emergency patch)
        - AES-CBC either removed or completely redesigned with true-random IVs but using a stronger authenticator
        - Probably counter-based modes replacing CBC modes (CCM? GCM? Other? Later, output of CAESAR competition?)
        - The NPN->ALPN movement has pretty much done a complete 180 flip. We will likely revert to NPN, or a new variant similar to NPN, because any plaintext metadata should be avoided where possible, and eliminated entirely where practical - ALPN is a backwards step. IE11 supports it in the wild right now, but that can always be security-patched (and it WILL be a security patch).

        Of course, discussions are ongoing. This is only my impression of what would be good ideas, and what I have already seen suggested and discussed, and where I think the discussions are going to end up - but a lot of people have their own shopping lists, and we're a long way from consensus - we need to bash this one out pretty thoroughly.

        Last week's technical plenary was a tremendous kick up the ass (thanks Bruce! - honestly, FERRETCANNON... someone there reads Sluggy Freelance...). We all knew something had to be done. Now we're scratching our heads to figure out what we CAN do; as soon as we've done that, we'll be rolling our sleeves up to actually DO it. After consensus is achieved, and when the standard is finished, we'll be pushing very, very strongly to get it deployed forthwith - delays for compatibility because of client bugs will not be an acceptable excuse.
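
        As a concrete illustration of the "PFS required" point in the list above: with today's stacks a server can already refuse every non-ephemeral key exchange. A minimal sketch, assuming a modern Python ssl module and hypothetical certificate/key file names:

          import ssl

          # Example server context restricted to forward-secret key exchange.
          ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
          ctx.load_cert_chain("server.crt", "server.key")  # hypothetical paths

          # Only ECDHE/DHE suites: no static RSA key exchange, so a later
          # private-key compromise does not expose previously recorded traffic.
          ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:!aNULL:!RC4:!MD5")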

        • by denis-The-menace ( 471988 ) on Wednesday November 13, 2013 @05:05PM (#45416887)

          My predictions:
          It don't matter. The NSA will simply require the root certs of all CAs and make all this work moot.

          • by GoogleShill ( 2732413 ) on Wednesday November 13, 2013 @05:38PM (#45417293)

            Having access to root CA private keys doesn't help the NSA much if PFS is employed.

          • by Anonymous Coward on Wednesday November 13, 2013 @05:52PM (#45417481)

            If we use DNSSEC to pin a chain of trust all the way to the root, we have, hopefully, authenticated that certificate to its hostname. That, alone, mitigates an MITM, and it is even within the scope of the DNS hierarchy.

            Contrast: CAs have a flat, global authority to certify by fiat - therefore, they are fundamentally less secure: instead of multiplying with the depth of the hierarchy, the potential weaknesses multiply with the number of commonly-accepted CAs.

            Of course, you can then (optionally) also choose to use a CA to countersign your certificate in a chain, if you want; presumably you pay them to certify who owns the domain.

            In the event of any disagreement, you just detected (and therefore prevented) an active, real MITM attack. An attacker would have to subvert not just a random CA, but hierarchically the correct DNS registry and/or registrar -and- a CA used to countersign, if present. We can make that a harder problem for them. It increases the cost. We like solutions that make a panopticon of surveillance harder - anything that makes it more expensive is good, because hopefully we can make it prohibitively expensive again, or at least, noisy enough to be globally detectable.

            There's also the separate question of the US Government holding the DNS root. Someone has to. Obviously it can't be them anymore. Even if we saw it, we can no longer trust it.

            We would also welcome more distributed solutions. If you happen to have a really nice, secure protocol for a distributed namespace (although please something that works better than, say, NameCoin), please publish.

          • by ultranova ( 717540 ) on Wednesday November 13, 2013 @07:45PM (#45418455)

            It don't matter. The NSA will simply require the root certs of all CAs and make all this work moot.

            Actually, it would still help. There's a huge difference in resource requirements between listening in on a connection and doing a MITM for it. Mass surveillance is only possible because modern technology makes it cheap; every speed bump, no matter how trivial, helps when it applies to every single connection.

        • by broken_chaos ( 1188549 ) on Wednesday November 13, 2013 @08:09PM (#45418601)

          - MD5, SHA-1, and hopefully also SHA-2 removed, replaced with SHA-3 finalists: Skein-512-512? Keccak-512 (as in final competition round 3 winner, but hopefully NOT as specified in the weakened SHA-3 draft)?

          The SHA-2 family are still unbroken, and I don't believe there are even any substantial weaknesses known. No reason to drop support for them at all.

    • by Anonymous Coward on Wednesday November 13, 2013 @03:58PM (#45416015)

      The downside with self-signed certs is that you get that pop-up warning about trusting the cert. For HTTP 2.0 to take off and require SSL (which is a good idea) there needs to be cheap access to valid certificates. Right now, the most reputable SSL vendors are way too expensive.

    • by GameboyRMH ( 1153867 ) <gameboyrmh@gma i l . com> on Wednesday November 13, 2013 @04:00PM (#45416039) Journal

      This. If so, it will be a MASSIVE improvement.

      A connection with a self-signed cert is, in a very-worst-case and highly unlikely scenario, only as insecure as the plaintext HTTP connections we use every day without batting an eye. Let's start treating them that way.

      SSL-only would be a great first step in making life miserable for those NSA peeping toms.

    • by girlintraining ( 1395911 ) on Wednesday November 13, 2013 @04:12PM (#45416195)

      otherwise this sounds like extortion from CAs

      You are so close. Eliminating plain-http would destroy the internet as we know it because the only alternative then is forking cash over to an easily-manipulated corporation for the privilege of then being able to talk on the internet. It's an attack on its very soul.

      It would kill things like Tor and hidden services. It would obliterate people being able to run their own servers off their own internet connection. It would irrevocably place free speech on the web at the mercy of corporations and governments.

      • by i kan reed ( 749298 ) on Wednesday November 13, 2013 @04:17PM (#45416257) Homepage Journal

        To be fair, no one is telling you to run your server on http 2.0. You can still run a gopher or ftp server, if the outdated technologies appeal to you enough.

        (Please don't dog pile me for saying ftp is outdated, I know you're old and cranky, you don't have to alert me)

        • by girlintraining ( 1395911 ) on Wednesday November 13, 2013 @04:28PM (#45416381)

          To be fair, no one is telling you to run your server on http 2.0.

          The same can be said whenever a new version of a protocol comes out. But invariably, people adopt the new one, and eventually nobody wants to support the old one... and so while nobody is "telling you" anything... eventually you just can't do it anymore, because all protocols depend on the same thing: people using them. If nobody's serving content on them, then nobody's supporting the ability to read that content either.

          (Please don't dog pile me for saying ftp is outdated, I know you're old and cranky, you don't have to alert me)

          I am neither old, nor cranky. I am, however, an experienced IT professional who's been here since the beginning. And experience is more valuable than book smarts or version numbers. Knowledge is what tells you how to bring the dinosaurs back... Wisdom is knowing why that's a bad idea.

      • by Jason Levine ( 196982 ) on Wednesday November 13, 2013 @05:15PM (#45417011)

        In theory, yes. If tomorrow Chrome, Safari, IE, Firefox, etc. all upgraded their browsers to require HTTP 2.0/HTTPS-Only, tons of websites would be faced with the prospect of paying hundreds of dollars for an SSL certificate. These hobby sites with no actual income wouldn't be able to afford it. Presently, to host a website I can pay for a $10 a month shared hosting plan and $15 a year for a domain name. (My registrar is a bit more expensive but I've had a lot of good experience with them so I'm not likely to flee to save $5 a year.) This means I can run a site for $135 a year. (Yes, if it becomes even remotely popular, I'll need to ditch the shared hosting for dedicated, but let's assume this is a small-time site to start with.) If you add in an SSL certificate requirement, you're adding about $149 more every year (based on GeoTrust pricing - other CAs might be more or less). That roughly doubles the cost.

        In practice, of course, you'll get an IPv6 type of situation. Tons of sites will cling to the old version (HTTP 1.0/HTTPS-Optional) and the browsers will need to support this. Any browser that makes HTTP 2.0/HTTPS-Only a requirement will be committing marketshare suicide. (Side note: Is it bad if I'm hoping IE does this?) So while the "official spec" will say that all websites should go HTTPS-Only, people will ignore this and keep on the old spec until either HTTP 3 comes out (which goes back to HTTPS-Optional) or until the SSL situation is sorted out to bring the price radically down.

  • Validating the certificate is the problem. If it can't be validated then SSL is useless; it is open to a man in the middle [wikipedia.org] attack. The trouble with the current SSL system is that even a certificate signed by a CA is spoofable; any CA can sign a certificate for any domain, so if you can twist arms strongly enough (think: NSA or well-funded crooks) then you can still do a MitM attack. So: the first thing that is needed is a robust way of validating certificates -- I don't know how that could be done.

      The other problem with SSL everywhere is that multiple virtual hosts on the same IP address would break on a large number of current machines. There is the SNI [wikipedia.org] mechanism that allows that, but it is not supported by IE on MS XP (& others), so we cannot use SNI until MS XP use is insignificant -- which will be a few years away, even if MS does declare it dead.
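
      For reference, this is all SNI amounts to on the client side: the desired host name is sent in the ClientHello so the server can pick the matching certificate. A minimal sketch with Python's standard ssl module (the host name is just an example):

        import socket
        import ssl

        ctx = ssl.create_default_context()
        with socket.create_connection(("www.example.org", 443)) as sock:
            # server_hostname is what ends up in the SNI extension of the
            # ClientHello, letting one IP address serve many certificates.
            with ctx.wrap_socket(sock, server_hostname="www.example.org") as tls:
                print(tls.version(), tls.getpeercert()["subject"])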

  • by fph il quozientatore ( 971015 ) on Wednesday November 13, 2013 @03:56PM (#45415987) Homepage
    I predict that the Unknown Powers will convince the committee to bail out and either (a) drop this idea overall or (b) default to some old broken/flawed crypto protocol. Check back in 1 year.
    • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Wednesday November 13, 2013 @04:11PM (#45416189)

      You can laugh at the IETF as much as you want, there are lots of things to laugh at. However, there are still a lot of very technical people involved in the IETF, and a large subset of them are finding it unpleasant that the Internet they helped create has become something very different. They are fighting the hard fight right now, and we should all support them when we can.

      It is possible that the NSA or other similar dark forces may manage to subvert their intentions once more, but so far it looks like there is still hope for the good guys.

      Or I may be hopelessly naive.

  • by mythosaz ( 572040 ) on Wednesday November 13, 2013 @03:57PM (#45416001)

    ...or, you know, https everything now for starters, since the processing increase is negligible.

    A lot of sites are already on board.

  • by girlintraining ( 1395911 ) on Wednesday November 13, 2013 @03:59PM (#45416027)

    People think that adding encryption to something makes it more secure. No, it does not. Encryption is worthless without secure key exchange, and no matter how you dress it up, our existing SSL infrastructure doesn't cut it. It never has. It was built insecure. All you're doing is adding a middle man, the certificate authority, that somehow you're supposed to blindly trust to never, not even once, fuck it up and issue a certificate that is later used to fuck you with. www.microsoft.com can be signed by any of the over one hundred certificate authorities in your browser. The SSL protocol doesn't tell the browser to check all hundred plus for duplicates; it just goes to the one that signed it and asks: Are you valid?

    The CA system is broken. It is so broken it needs to be put on a giant thousand-mile-wide sign and hoisted into orbit so it can be seen at night saying: "This system is fucked." Mandating a fucked system isn't improving security!

    Show me where and how you plan on making key exchange secure over a badly compromised and inherently insecure medium, aka the internet, using the internet. It can't be done. No matter how you cut it, you need another medium through which to do the initial key exchange. And everything about SSL comes down to one simple question: Who do you trust? And who does the person you trusted, in turn, trust? Because that's all SSL is: It's a trust chain. And chains are only as strong as the weakest link.

    Break the chain, people. Let the browser user take control over who, how, and when, to trust. Establish places in the real world, in meat space, in bricks and mortar land, where people can go to obtain and validate keys from multiple trusted parties. That's the only way you're going to get real security... otherwise you're going to be taking a butt torpedo stamped Made At NSA Headquarters up your browser backside. And pardon me for being so blunt, but explaining the technical ins and outs is frankly beyond this crowd today. Most of you don't have the technical comprehension skills you think you do -- so I'm breaking it down for you in everyday english: Do not trust certificate authorities. Period. The end. No debate, no bullshit, no anti-government or pro-government or any politics. The system is inherently flawed, at the atomic level. It cannot be fixed with a patch. It cannot be sufficiently altered to make it safe. It is not about who we allow to be certificate authorities, or whether this organization or that organization can be trusted. We're talking hundreds of billions of dollars in revenue riding on someone's word. You would have to be weapons grade stupid to think they will never be tempted to abuse that power -- and it does not matter who you put in that position. Does. Not. Matter.

    • by compro01 ( 777531 ) on Wednesday November 13, 2013 @04:03PM (#45416081)

      What are your thoughts on RFC 6698 [ietf.org] as a possible solution to the CA problem?

      • by girlintraining ( 1395911 ) on Wednesday November 13, 2013 @04:14PM (#45416213)

        What are your thoughts on RFC 6698 as a possible solution to the CA problem?

        I think it's already been proven that centralizing anything leads to corruption and manipulation. Whether you put it in DNS, or put it in a CA, the result is the same: centralized control under the auspices of a third party. Any solution that doesn't allow all the power to stay in the hands of the server operator must be rejected.

        • by tepples ( 727027 ) <tepples@ g m a i l .com> on Wednesday November 13, 2013 @04:30PM (#45416411) Homepage Journal
          So how would you recommend instead that the server operator prove his identity to members of the public? Meetings in person cannot scale past a heavily local-interest web site or the web site of a business that already has thousands of brick-and-mortar locations.
          • by EvilSS ( 557649 ) on Wednesday November 13, 2013 @05:25PM (#45417121)
            Why not separate the "here is proof I am who I say I am" from "let's encrypt our conversation"? Web servers could easily use self-generated random certs to encrypt traffic. If you want to validate you are who you say you are, then you add a cert from a "trusted" CA to do just that. It's no different from what we have today with HTTP vs HTTPS. If I go HTTP to a site I have no assurance that they are who they say they are. At least this way the traffic is encrypted even at the lowest trust tier.
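
            A minimal sketch of the "self-generated random cert" half of that idea, assuming the third-party Python cryptography package; the name and validity period are arbitrary:

              import datetime
              from cryptography import x509
              from cryptography.hazmat.primitives import hashes, serialization
              from cryptography.hazmat.primitives.asymmetric import rsa
              from cryptography.x509.oid import NameOID

              # Self-signed certificate: it encrypts the connection but proves nothing
              # about identity; a CA-issued cert could be layered on top for that.
              key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
              name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.org")])
              cert = (
                  x509.CertificateBuilder()
                  .subject_name(name)
                  .issuer_name(name)  # issuer == subject: self-signed
                  .public_key(key.public_key())
                  .serial_number(x509.random_serial_number())
                  .not_valid_before(datetime.datetime.utcnow())
                  .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
                  .sign(key, hashes.SHA256())
              )
              open("selfsigned.pem", "wb").write(cert.public_bytes(serialization.Encoding.PEM))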
        • by Luke_22 ( 1296823 ) on Wednesday November 13, 2013 @04:47PM (#45416639)

          So your solution is?
          not using anything 'cause the NSA is over you?

          Saying that the CA system and the DNS(SEC) infrastructure are the same is retarded.
          The CA system is managed by hundreds of companies, and you cannot possibly know if some company has issued an unauthorized certificate.
          Want to know if someone is giving false information on DNSSEC? "dig domainname +dnssec" should be enough....

          The current (DNSSEC) system has problems, but it is nowhere near as rotten, and it's better than nothing. Please stop denigrating it to such an extent.

      • by girlintraining ( 1395911 ) on Wednesday November 13, 2013 @04:31PM (#45416419)

        Why is 6698 funny? The author was serious. Now 1149 [ietf.org]? That's funny.

    • by Anonymous Coward on Wednesday November 13, 2013 @04:17PM (#45416247)

      SSL has great key exchange mechanisms. Diffie-Hellman is the most common one, and with large enough groups and large enough keys it works very well. What works less well is the authentication bit, which is what the CAs are doing. But encryption with bad authentication, or even without authentication at all, is not worthless. It prevents passive surveillance, such as the kind the NSA, GCHQ and their ilk are perpetrating on hundreds of millions of internet users. Yes, you are vulnerable to man-in-the-middle attacks, but from what we have learnt from projects like the SSL Observatory, those are exceedingly rare. Always-on HTTPS would be a huge step forward and would make huge swathes of the data that semi-legal organisations like GCHQ record unusable. If you want better authentication it can be added on top of that - see e.g. Monkeysphere - but complaining when someone is trying to add a huge improvement because you don't think it's perfect is stupid.
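
      To make that split concrete, here is a minimal sketch of an unauthenticated ephemeral key agreement (X25519), assuming the third-party Python cryptography package. A passive eavesdropper learns nothing; ruling out an active man in the middle is the separate, authentication problem:

        from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

        # Each side generates an ephemeral key pair and sends only its public key.
        alice_priv = X25519PrivateKey.generate()
        bob_priv = X25519PrivateKey.generate()

        # Each side combines its private key with the other side's public key.
        alice_secret = alice_priv.exchange(bob_priv.public_key())
        bob_secret = bob_priv.exchange(alice_priv.public_key())

        assert alice_secret == bob_secret  # shared secret, no CA involved
        # An active man in the middle could still sit between the two exchanges;
        # authentication (CA, DANE, pinning...) is what rules that out.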

    • by Marrow ( 195242 ) on Wednesday November 13, 2013 @04:17PM (#45416259)

      (I the user) am the one who should be purchasing trust from a vendor. A vast network of obviously compromised cert vendors should not be in my browser. They don't work for me so why are they even mentioned in my browser? There should be just one: The one I have chosen to trust. The one I pay to trust and will go out of business if they ever break the trust of their customers.
      Frankly, I would prefer the trust vendor to be the same as the browser vendor. I am trusting them both, so I would rather they be the same thing.

      • by girlintraining ( 1395911 ) on Wednesday November 13, 2013 @04:23PM (#45416313)

        Frankly, I would prefer the trust vendor to be the same as the browser vendor. I am trusting them both, so I would rather they be the same thing.

        No you don't want that. The browser developers should concern themselves with rendering the content correctly and standards-compliant, and the user-experience/interface. Separation of duties when it comes to money is paramount -- that's why you have outside accountants review your books, not internal ones. Otherwise you have the situation of creating a financial incentive to break or manipulate the system. Even one character, on one line of code, can change something from secure to exploitable. Don't tempt fate; Keep the trust chain separate from the people maintaining the protocols and infrastructure it sits on.

    • by Anonymous Coward on Wednesday November 13, 2013 @04:28PM (#45416375)
      Posting anonymously for my reasons...

      People think that adding encryption to something makes it more secure. Encryption is worthless without secure key exchange, and no matter how you dress it up, our existing SSL infrastructure doesn't cut it. It never has. It was built insecure.

      Technically wrong. The key exchange is secure; the infrastructure managing the key trust is not trustworthy. Please do not mix the two.

      Show me where and how you plan on making key exchange secure over a badly compromised and inherently insecure medium, aka the internet, using the internet. It can't be done.

      Lolwut? it *CAN* be done. Personally I'm working on an alternative of TLS with federated authentication, and no X.509. The base trust will be the DNSSEC infrastructure. Sorry but you got to trust someone. I know it's not perfect, in fact as soon as I'm finished with this project (Fenrir -- will be giving a lightning talk @CCC) I think I'll start working on fixing the mess that DNSSEC is. I already have some ideas. It will take some time, but it is doable.

      No matter how you cut it, you need another medium through which to do the initial key exchange.

      You keep confusing the key exchange algorithm and the base of the trust for your system... are you doing this on purpose or are you just trolling?

      Let the browser user take control over who, how, and when, to trust. Establish places in the real world, in meat space, in bricks and mortar land, where people can go to obtain and validate keys from multiple trusted parties.

      And give each other trust in a hippie fashion... No, you need more than that. And why does it need to be the browser? If you have a way to manage trust, why only at the browser level?

      And pardon me for being so blunt, but explaining the technical ins and outs is frankly beyond this crowd today. Most of you don't have the technical comprehension skills you think you do -- so I'm breaking it down for you in everyday english: Do not trust certificate authorities. Period. The end. No debate, no bullshit, no anti-government or pro-government or any politics.

      You keep talking about technical stuff and then you keep pointing at political and management problems... get your facts straight. You always make it seem as if the SSL protocol is broken per se. It is not.

      The system is inherently flawed, at the atomic level. It cannot be fixed with a patch. It cannot be sufficiently altered to make it safe. It is not about who we allow to be certificate authorities, or whether this organization or that organization can be trusted.

      Well, I'm working on a solution! NO X.509, trust from the DNSSEC chain. That's as trustworthy as you can get. I'll work on a replacement for the DNS system after this project is finished. Keep your ears open for the "Fenrir" project; I'll be presenting it at the European CCC, although it won't be finished by then. Come and discuss if you are interested. Otherwise stop spreading useless fear. -- Luca

    • by marcello_dl ( 667940 ) on Wednesday November 13, 2013 @04:43PM (#45416587) Homepage Journal

      All you're doing is adding a middle man, the certificate authority, that somehow you're supposed to blindly trust to never, not even once, fuck it up and issue a certificate that is later used to fuck you with.

      I would also add that, in practice, it's the third middle man. The second is the ISP.

      You likely have downloaded the browser with the CA list from the net; has it been tampered with? You validate the download with a checksum, gotten from the net; has it been tampered with? What about the package manager keys, are they safe?
      Possibly if you have a preinstalled OS with secure credentials to communicate with a package repository you are OK, but currently those OSes come from Apple or MS, which managed to happily stay in the market while services trying to offer encrypted services have had to shut down without being allowed to disclose the details. Are you going to trust them?

      And the connections through ISPs likely have some proprietary OS running BGP or whatever inter-network protocols.

      The first man in the middle is the hardware, keys are somewhere in ram or proc cache, hardware running malicious supervisors might just get them, while you're happy with your open source stack. (BTW I am happy with open source stacks because they tend to be written to help humans instead of articulated money traps like most commercial stuff)

    • by cloud.pt ( 3412475 ) on Wednesday November 13, 2013 @05:00PM (#45416813)
      So true, yet so moot. Let me use numbering to address some key discrepancies on your otherwise totally intuitive, yet irrelevant arguments:
      1. You just discredited a system that has been successful at protecting the majority of internet-bound critical use cases for years now. So I'm assuming you do absolutely no banking or social networking tasks on the WWW, as you do not deem them safe enough?
      2. You now tell me you perform such use cases through TOR. Too bad you just discredited TOR with that weakest-link-of-trust argument. But you are right on that one; according to the Snowden leaks and Silk Road events, it is also unsafe. So will be Bitcoin eventually.
      3. A physical, "meat space" CA agreement is not going to solve all our problems, cuz' every sysop did not just gain immunity to social engineering. amiwrong? Physical CAs are definitely trustworthy because they don't abuse power either. Sure thing...

      There is no such thing as perfect security, or perfect anything. There is good enough (for as long as it is deemed like that), and HTTPS enforcing is definitely BETTER than plain-text for now and forever. This is not like the Chrome "no keyring/no master password" argument: you do get bullet-proof security by mediated access, not nuke-proof, yet not everyone has nukes. That's why people bought kevlar+helmet on Counter-Strike.

      So, without further metaphors: STOP CRITICIZING AN IMPROVEMENT

    • by DuckDodgers ( 541817 ) <keeper_of_the_wolf&yahoo,com> on Wednesday November 13, 2013 @05:12PM (#45416971)
      You're right that it only takes one compromised Certificate Authority - whether it's compromised by the NSA, hackers, or a corrupt owner - to issue what would be considered legitimate certificates for hundreds of websites like Microsoft or a bank.

      But physical key exchange doesn't scale. You can't get a billion people that use desktops, laptops, tablets, or smart phones to understand how to do a physical key exchange or why it matters, let alone organize a practical way to get it done.

      I don't know what the solution is, but this is a fundamental problem. A centralized authority of any kind managing key components of internet security is a very bad idea, because it's exactly where bad guys will go to poke holes in that security. But the average person isn't nearly educated enough for a distributed, user-owned alternative.

      What we need is a distributed, user-owned, extraordinarily simple and easy system. I have no idea what that could be.
    • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday November 13, 2013 @05:21PM (#45417067) Homepage Journal

      While you're right that there are issues with the certificate-based approach, it's not nearly as bad as you describe. There are solutions already implemented that mitigate much of the risk.

      One example is certificate pinning. Google already pins all Google-owned sites in Chrome. This is actually how the DigiNotar compromise was discovered; Chrome users got an error when trying to get to a Google site.

      Another example is SSH-style change monitoring, alerting users when certificates change unexpectedly. That doesn't help if the MITM attack is done from the beginning or during time periods around legitimate key changes, but it does narrow the window of opportunity significantly.
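
      A minimal sketch of that SSH-style, trust-on-first-use monitoring, using only the Python standard library; the cache file location and host are illustrative:

        import hashlib
        import json
        import os
        import ssl

        CACHE = os.path.expanduser("~/.cert_fingerprints.json")

        def check_fingerprint(host, port=443):
            """Warn if a host's certificate differs from the one seen previously."""
            der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((host, port)))
            seen = hashlib.sha256(der).hexdigest()

            known = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
            if host in known and known[host] != seen:
                print("WARNING: certificate for %s changed (possible MITM or rekey)" % host)
            known[host] = seen
            json.dump(known, open(CACHE, "w"))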

      Another example is multi-perspective observation... does the same certificate get returned to multiple requesters from different places on the Internet, and is it the same one the server is actually serving (the last must be periodically verified by the site owner).

      There are other possibilities as well.

      Let the browser user take control over who, how, and when, to trust. Establish places in the real world, in meat space, in bricks and mortar land, where people can go to obtain and validate keys from multiple trusted parties.

      Good luck with that. You can build it, but they won't come.

    • by DuckDodgers ( 541817 ) <keeper_of_the_wolf&yahoo,com> on Wednesday November 13, 2013 @05:33PM (#45417213)
      Oh, and as Anonymous Coward pointed out below - key exchange over the internet is secure; you can exchange keys with someone with certainty that the key has not been modified. What key exchange over the internet does not do is guarantee the identity of the other person.

      You might call that a distinction without a difference, but I think it does matter.
  • by simpz ( 978228 ) on Wednesday November 13, 2013 @04:09PM (#45416167)
    We save about 10% of our Internet bandwidth by running all http traffic through a caching proxy. This would seem to prevent this bandwidth saving for things that just don't need encryption. This would be any public site that is largely consumable content. All in favour of it for anything more personal.

    Also, how are companies supposed to effectively web-filter if everything is HTTPS? DNS filtering is, in general, too broad a brush. We may not like our web filtered, but companies have a legal duty that employees shouldn't see questionable material, even if on someone else's computer. Companies have been sued for allowing this to happen.
    • by tepples ( 727027 ) <tepples@ g m a i l .com> on Wednesday November 13, 2013 @04:41PM (#45416545) Homepage Journal

      This would seem to prevent this bandwidth saving for things that just don't need encryption. This would be any public site that is largely consumable content.

      Please define what you mean by "consumable content" [gnu.org]. If a user is logged in, the user needs to be on HTTPS so that other users cannot copy the session cookie that identifies him. This danger of copying the session cookie is why Facebook and Twitter have switched to all HTTPS all the time. Even sites that allow viewing everything without logging in often have social recommendation buttons that connect to Facebook, Twitter, Google+, or other services that may have an ongoing session.
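
      As a small illustration of keeping session cookies off plain HTTP, here is a minimal sketch using Python's standard http.cookies module; the cookie name and value are arbitrary:

        from http.cookies import SimpleCookie

        cookie = SimpleCookie()
        cookie["session"] = "opaque-random-token"
        cookie["session"]["secure"] = True    # never sent over plain http://
        cookie["session"]["httponly"] = True  # not readable from page JavaScript

        # Emits something like: Set-Cookie: session=opaque-random-token; Secure; HttpOnly
        print(cookie.output())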

      • by stewsters ( 1406737 ) on Wednesday November 13, 2013 @05:45PM (#45417401)
        You would need a different set of cookies for http and https domains. You would always set the cookies over https, and static things like the Facebook logo could go over http, only giving away that you are loading a new page with that logo on it. This gives away information on what you are doing, but much of it can already be assumed if your computer opens a connection to Facebook's servers.

        This will require a lot of micromanagement by web developers, but could be possible for large sites. Also, set Expires headers to not expire the logo, and you don't need to send it every time, which is far more efficient than caching it in a proxy.
    • by iroll ( 717924 ) on Wednesday November 13, 2013 @07:07PM (#45418147) Homepage

      Also, how are companies supposed to effectively web-filter if everything is HTTPS?

      They shouldn't. They should cut off web access for the grunts that don't need it (e.g. telecenter people who are using managed applications) and blow it wide open for those that do (engineers, creative professionals, managers). Hell, the managers are already opting out of the filtering and infecting their networks with porn viruses, so what would this change?

      We may not like our web filtered, but companies have a legal duty that employees shouldn't see questionable material, even if on someone else's computer. Companies have been sued for allowing this to happen.

      Citation needed. Speaking of too broad a brush; this comment paints with a roller. I've only worked for one company that used WebSense, and that was a school. Is my anecdote more valid than yours?

      Filtering is not practical for a lot of jobs, and is certainly not the gold-standard for covering-your-ass against harassment; effective management is. Let's put it this way: if a coworker creates a hostile workplace for you and the boss does everything in their power to deal with it (reprimanding and ultimately firing the offender), you can't sue the boss. If your boss doesn't do anything about it, they're gonna get sued.

  • by gweihir ( 88907 ) on Wednesday November 13, 2013 @04:09PM (#45416173)

    If there is, I do not see it. Strikes me as people that cannot stop standardizing when the problem is solved. Pretty bad. Usually ends in the "Second System Effect" (Brooks), where the second system has all the features missing in the first one and becomes unusable due to complexity.

    • by TeknoHog ( 164938 ) on Wednesday November 13, 2013 @04:45PM (#45416615) Homepage Journal
      Since web == http + html, I have a hard time understanding how Web 2.0 can operate with HTTP 1.x.
    • HTTP 2.0 is a rebadge of Google's SPDY protocol, which uses several methods to hide Internet latency in web connections.
    • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday November 13, 2013 @05:59PM (#45417553) Homepage Journal

      Yes, there is a lot of need for HTTP 2.

      HTTP was great when pages were largely just a single block of HTML, but modern pages include many separate resources which are all downloaded over separate connections, and have been for a long time, actually. HTTP pipelining helps by reducing the number of TCP connections that have to be established, but requests still have to be handled sequentially per connection, and you still need a substantial number of connections to parallelize downloads. This all gets much worse for HTTPS connections, because each connection is more expensive to establish.

      HTTP 2 is actually just Google's SPDY protocol (though there may be minor tweaks in standardization, the draft was unmodified SPDY), which fixes these issues by multiplexing many requests in a single TCP connection. The result is a significant performance improvement, especially on mobile networks. HTTP 2 / SPDY adds another powerful tool as well: it allows servers to proactively deliver content that the client hasn't yet requested. It's necessary for the browser to parse the page to learn what other resources are required, and in some cases there may be multiple levels of "get A, discover B is needed, get B, discover C is needed...". With SPDY, servers that know that B, C and D are going to be needed by requesters of A can go ahead and start delivering them without waiting.

      The result can be a significant increase in performance. It doesn't benefit sites that pull resources from many different domains, because each of those must be a separate connection, and it tends to provide a lot more value for sites that are already heavily optimized for performance, because those that aren't tend to have lots of non-network bottlenecks that dominate latency. Obviously it will be well-optimized sites that can make use of the server-initiated resource delivery, too.
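
      To see that multiplexing from the client side, here is a minimal sketch assuming the third-party httpx package with its optional HTTP/2 support (the h2 extra) installed; the URL is just an example of a server that negotiates HTTP/2 today:

        import httpx

        # All requests below reuse one TLS connection; with httpx.AsyncClient they
        # could be in flight concurrently, multiplexed over that single connection.
        with httpx.Client(http2=True) as client:
            responses = [client.get("https://www.google.com/") for _ in range(5)]
            for r in responses:
                print(r.http_version, r.status_code)  # "HTTP/2" if negotiated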

      Actually, though, Google has decided that SPDY isn't fast enough, and has built and deployed yet another protocol, called QUIC, which addresses a major remaining problem of SPDY: it's built on TCP. TCP is awesome, make no mistake, it's amazing how well it adapts to so many different network environments. But it's not perfect, and we've learned a lot about networking in the last 30 years. One specific problem that TCP has is that one lost packet will stall the whole stream until that packet loss is discovered and resent. QUIC is built on top of UDP and implements all of the TCP-analogue congestion management, flow control and reliability itself, and does it with more intelligence than can be implemented in a simple sliding window protocol.

      I believe QUIC is already deployed and is available on nearly all Google sites, but I think you have to get a developer version of Chrome to see it in action. Eventually Google will (as they did with SPDY) start working with other browser makers to get QUIC implemented, and with standards bodies to get it formalized as a standard.

      Relevant to the topic of the article, neither SPDY nor QUIC even have an unencrypted mode. If the committee decides that they want an unencrypted mode they'll have to modify SPDY to do it. It won't require rethinking any of how the protocol works, because SPDY security all comes from an SSL connection, so it's just a matter of removing the tunnel. QUIC is different; because SSL runs on TCP, QUIC had to do something entirely different. So QUIC has its own encryption protocol baked into it from the ground up.

  • by aap ( 108982 ) on Wednesday November 13, 2013 @04:24PM (#45416333)

    If http:// will fall back to HTTP 1.0, how does that make the Internet a more secure place? Will the users actually care that the page is being served by an older protocol, enough to type it again with https? Will they even notice?

  • by fahrbot-bot ( 874524 ) on Wednesday November 13, 2013 @04:29PM (#45416395)

    This is already a pain in the ass for me. I use a local proxy/filter (Proxomitron) to allow me to filter my HTTP streams for any application w/o having to configure them individually - or in cases where something like AdBlock would be browser specific. This doesn't work for HTTPS.

    For example, Google's switch to HTTPS-only annoys me to no end, as I use Proxomitron to clean up and/or disable their various shenanigans - like the damn sidebar, URL redirects, suggestions, etc. At this time, using "nosslsearch.google.com" with a CNAME of "www.google.com" works to get a non-encrypted page (in Proxomitron, I simply edit the "Host:" header for nosslsearch.google.com hits to be www.google.com), but who knows how much longer that will last.

    Thankfully, DuckDuckGo and StartPage allow me to customize their display to my liking w/o having to edit it on the fly. If only Google would get their head out of their ass and support the same, rather than only allowing their definition of an "enhanced user experience".

    Seriously, do we really *need* HTTPS for everything - like *most* web searches, browsing Wikipedia or News? I think not.

    • by Chris Pimlott ( 16212 ) on Wednesday November 13, 2013 @04:42PM (#45416567)

      There's no reason you couldn't still use a filtering proxy. What you'll need to do is configure that filtering proxy with its own CA cert which you'll have to add to your browser's trusted CAs. The filtering proxy can terminate the outside SSL connection to the remote site (with its own list of trusted external CAs) and generate and sign its own certificates (with its configured CA cert) for the internal SSL connection to your browser. This is how BurpSuite works, for example.
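
      The client-side half is just trusting the filtering proxy's CA like any other private CA. A minimal sketch in Python (the file name is hypothetical; browsers have an equivalent "import trusted root certificate" step):

        import ssl

        # Trust the filtering proxy's CA so the certificates it generates
        # on the fly for each site validate cleanly.
        ctx = ssl.create_default_context(cafile="filtering-proxy-ca.pem")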

  • by Shoten ( 260439 ) on Wednesday November 13, 2013 @04:48PM (#45416643)

    One big point in favor of this plan is that if it doesn't work well (i.e., if adoption is poor), then they can add support for opportunistic encryption later. Going from opportunistic to mandatory encryption would be a much harder task.

    Err...isn't going from opportunistic to mandatory encryption what they're trying to do now? Last I saw, HTTP was seeing a little bit of use already. The addition of a version number to it doesn't change the fact that they're already faced with existing behavior. It seems to me that adoption is already poor...they're just trying to force the issue now instead.

  • by Animats ( 122034 ) on Wednesday November 13, 2013 @04:53PM (#45416729) Homepage

    If everything is to go SSL, we now need widespread "man-in-the-middle" intercept detection. This requires a few things:

    • SSL certs need to be published publicly and widely, so tampering will be detected.
    • Any CA issuing a bogus or wildcard cert needs to be downgraded immediately, even if it invalidates every cert they've issued. Browsers should be equipped to raise warning messages when this happens.
    • MITM detection needs to be implemented within the protocol. This is tricky, but possible. A MITM attack results in the crypto bits changing while the plaintext doesn't. If the browser can see both, there are ways to detect attacks. Some secure phones have a numeric display where they show you two or three digits derived from the crypto stream. The two parties then verbally compare the numbers displayed. If they're different, someone is decrypting and reencrypting the bit stream.
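
    A minimal sketch of the numeric-comparison idea in the last bullet, assuming a shared secret and handshake transcript are already available from the crypto layer:

      import hashlib
      import hmac

      def short_auth_string(shared_secret: bytes, transcript: bytes) -> str:
          """Derive a 3-digit code both endpoints can read out and compare."""
          mac = hmac.new(shared_secret, b"SAS" + transcript, hashlib.sha256).digest()
          return "%03d" % (int.from_bytes(mac[:4], "big") % 1000)

      # If a man in the middle decrypts and re-encrypts the stream, the two sides
      # hold different secrets/transcripts and the spoken digits won't match.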
    • by stewsters ( 1406737 ) on Wednesday November 13, 2013 @05:35PM (#45417249)
      "The two parties then verbally compare the numbers displayed"

      But if there is a man in the middle, what's to stop them from changing the numbers in the voice?

      It would be easy for a computer to change some checksum bits. If I have to manually talk to every customer who visits my eCommerce site to confirm their numbers each page load, I would probably walk out.
  • by WaffleMonster ( 969671 ) on Wednesday November 13, 2013 @05:05PM (#45416893)

    I think HTTP/2 should resist the temptation to concern itself with security, concentrate on efficiently spewing hypertext and address security abstractly and cleanly at a separate layer from HTTP/2. Obviously as with HTTP/1 there are some layer dependencies but that is no excuse for even having a conversation about http vs https or opportunistic encryption. It is the wrong place.

    Things change; who knows, some day someone may want to use something other than TLS with HTTP/2. They should be able to do so without having to suffer. Not respecting layers is how you end up with unnecessarily complex and rigid garbage.

    For example who is to say there is no value in HTTP/2 over an IPsec protected link without TLS?

"If truth is beauty, how come no one has their hair done in the library?" -- Lily Tomlin

Working...