Topics: Encryption, Networking, Security, The Internet, Upgrades

HTTP 2.0 May Be SSL-Only

An anonymous reader writes "In an email to the HTTP working group, Mark Nottingham laid out the three top proposals about how HTTP 2.0 will handle encryption. The frontrunner right now is this: 'HTTP/2 to only be used with https:// URIs on the "open" Internet. http:// URIs would continue to use HTTP/1.' This isn't set in stone yet, but Nottingham said they will 'discuss formalising this with suitable requirements to encourage interoperability.' There appears to be support from browser vendors; he says they have been 'among those most strongly advocating more use of encryption.' The big goal here is to increase the use of encryption on the open web. One big point in favor of this plan is that if it doesn't work well (i.e., if adoption is poor), then they can add support for opportunistic encryption later. Going from opportunistic to mandatory encryption would be a much harder task. Nottingham adds, 'To be clear — we will still define how to use HTTP/2.0 with http:// URIs, because in some use cases, an implementer may make an informed choice to use the protocol without encryption. However, for the common case — browsing the open Web — you'll need to use https:// URIs if you want to use the newest version of HTTP.'"
  • by geekboybt ( 866398 ) on Wednesday November 13, 2013 @03:54PM (#45415965)

    In the similar thread on Reddit, someone mentioned RFC 6698, which uses DNS (with DNSSEC) to validate certificates, rather than CAs. If we could make both of them a requirement, that'd fit the bill and get rid of the extortion.
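
    A rough sketch of what such an RFC 6698 (DANE/TLSA) check might look like, assuming the dnspython package and Python's standard ssl module. A real client must also validate the DNSSEC signatures on the TLSA answer itself (omitted here), and example.org is just a placeholder host.

    import hashlib
    import socket
    import ssl

    import dns.resolver  # pip install dnspython (2.x)


    def tlsa_matches(host: str, port: int = 443) -> bool:
        """Compare the certificate a server presents against its published TLSA records."""
        # TLSA records live at _<port>._tcp.<host>, e.g. _443._tcp.example.org.
        answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")

        # Grab the certificate the server actually presents. CA validation is
        # deliberately skipped here because DANE is doing the authentication.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)

        for rr in answers:
            # Only handle selector 0 (full certificate) with matching type 1 (SHA-256).
            if rr.selector == 0 and rr.mtype == 1:
                if hashlib.sha256(der_cert).digest() == rr.cert:
                    return True
        return False


    print(tlsa_matches("example.org"))  # placeholder host with a TLSA record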

  • by gweihir ( 88907 ) on Wednesday November 13, 2013 @04:09PM (#45416173)

    If there is [a need for HTTP 2.0], I do not see it. Strikes me as people who cannot stop standardizing once the problem is solved. Pretty bad. Usually ends in the "Second System Effect" (Brooks), where the second system has all the features missing from the first one and becomes unusable due to complexity.

  • by girlintraining ( 1395911 ) on Wednesday November 13, 2013 @04:14PM (#45416213)

    > What are your thoughts on RFC 6698 as a possible solution to the CA problem?

    I think that it's already been proven that centralizing anything leads to corruption and manipulation. Whether you put it in DNS or put it in a CA, the result is the same: centralized control under the auspices of a third party. Any solution that doesn't allow all the power to stay in the hands of the server operator must be rejected.

  • by i kan reed ( 749298 ) on Wednesday November 13, 2013 @04:17PM (#45416257) Homepage Journal

    To be fair, no one is telling you to run your server on http 2.0. You can still run a gopher or ftp server, if the outdated technologies appeal to you enough.

    (Please don't dog pile me for saying ftp is outdated, I know you're old and cranky, you don't have to alert me)

  • by Anonymous Coward on Wednesday November 13, 2013 @04:28PM (#45416375)
    Posting anonymously for my reasons...

    > People think that adding encryption to something makes it more secure. Encryption is worthless without secure key exchange, and no matter how you dress it up, our existing SSL infrastructure doesn't cut it. It never has. It was built insecure.

    Technically wrong. The key exchange is secure; the infrastructure managing the key trust is not trustworthy. Please do not mix the two.

    > Show me where and how you plan on making key exchange secure over a badly compromised and inherently insecure medium, aka the internet, using the internet. It can't be done.

    Lolwut? It *CAN* be done. Personally I'm working on an alternative to TLS with federated authentication, and no X.509. The base trust will be the DNSSEC infrastructure. Sorry, but you've got to trust someone. I know it's not perfect; in fact, as soon as I'm finished with this project (Fenrir -- will be giving a lightning talk @CCC) I think I'll start working on fixing the mess that DNSSEC is. I already have some ideas. It will take some time, but it is doable.

    > No matter how you cut it, you need another medium through which to do the initial key exchange.

    You keep confusing the key exchange algorithm and the base of the trust for your system... are you doing this on purpose or are you just trolling?

    > Let the browser user take control over who, how, and when, to trust. Establish places in the real world, in meat space, in bricks and mortar land, where people can go to obtain and validate keys from multiple trusted parties.

    And give each other trust in a hippie fashion... No, you need more than that. And why does it need to be the browser? If you have a way to manage trust, why only at the browser level?

    > And pardon me for being so blunt, but explaining the technical ins and outs is frankly beyond this crowd today. Most of you don't have the technical comprehension skills you think you do -- so I'm breaking it down for you in everyday english: Do not trust certificate authorities. Period. The end. No debate, no bullshit, no anti-government or pro-government or any politics.

    You keep talking about technical stuff and then you keep pointing at political and management problems... get your facts straight. You always make it seem as if the SSL protocol is broken per se. It is not.

    > The system is inherently flawed, at the atomic level. It cannot be fixed with a patch. It cannot be sufficiently altered to make it safe. It is not about who we allow to be certificate authorities, or whether this organization or that organization can be trusted.

    Well, I'm working on a solution! No X.509, trust from the DNSSEC chain. That's as trustworthy as you can get. I'll work on a replacement for the DNS system after this project is finished. Keep your ears open for the "Fenrir" project; I'll be presenting it at the European CCC, although it won't be finished by then. Come and discuss if you are interested. Otherwise stop spreading useless fear. -- Luca

  • by fahrbot-bot ( 874524 ) on Wednesday November 13, 2013 @04:29PM (#45416395)

    This is already a pain in the ass for me. I use a local proxy/filter (Proxomitron) to allow me to filter my HTTP streams for any application w/o having to configure them individually - or in cases where something like AdBlock would be browser specific. This doesn't work for HTTPS.

    For example, Google's switch to HTTPS-only annoys me to no end, as I use Proxomitron to clean up and/or disable their various shenanigans - like the damn sidebar, URL redirects, suggestions, etc... At this time, using "nosslsearch.google.com" with a CNAME of "www.google.com" works to get a non-encrypted page (in Proxomitron, I simply edit the "Host:" header for nosslsearch.google.com hits to be www.google.com), but who knows how much longer that will last.
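
    For what it's worth, the header rewrite described above is simple enough to sketch outside Proxomitron; this hypothetical filter function only touches plaintext HTTP/1.x requests, which is exactly why HTTPS-everywhere breaks this kind of local filtering.

    def rewrite_host(raw_request: bytes) -> bytes:
        """Rewrite 'Host: nosslsearch.google.com' to 'Host: www.google.com'."""
        head, _, body = raw_request.partition(b"\r\n\r\n")
        lines = head.split(b"\r\n")
        for i, line in enumerate(lines):
            if line.lower().startswith(b"host:") and b"nosslsearch.google.com" in line.lower():
                lines[i] = b"Host: www.google.com"
        return b"\r\n".join(lines) + b"\r\n\r\n" + body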

    Thankfully, DuckDuckGo and StartPage allow me to customize their display to my liking w/o having to edit it on the fly. If only Google would get their head out of their ass and support the same, rather than only allowing their definition of an "enhanced user experience".

    Seriously, do we really *need* HTTPS for everything - like *most* web searches, browsing Wikipedia or News? I think not.

  • by Anonymous Coward on Wednesday November 13, 2013 @04:49PM (#45416663)

    My predictions:
    - TLS will be required. No bullshit, no compromise, no whining. On by default, no turning it off.
    - RFC6698 DNSSEC/DANE (or similar) certificate pinning support required.
    - DANE (or similar) to authenticate the TLS endpoint hostname via DNSSEC - that is the MITM defense, not a CA
    - CA not required - if present, authenticates ownership of the domain, but not whether there's an MITM

    We have a shot at a potentially clearer hierarchy with DNSSEC than we do with CAs, where it's anything-goes, anyone can sign anything - and state-sponsored attackers clearly have, and do (whether the CAs know it or not). We might need to fix DNSSEC a bit first, along the lines of TLS 1.3 (see below).

    Also, TLS 1.3 ideas are being thrown around. It will be considerably revised (might even end up being TLS 2.0, if not in version number then at least in intent). Here are my predictions based on what's currently being plotted:
    - Handshake changed, possibly for one with fewer round-trips/bullshit
    - Cipher suites separated into an authentication/key exchange technique and an authenticated encryption technique (rather than the unique combination of the two)
    - Renegotiation might not stay in, or might be redesigned?
    - Channel multiplexing?
    - MD5, SHA-1, and hopefully also SHA-2 removed, replaced with SHA-3 finalists: Skein-512-512? Keccak-512 (as in final competition round 3 winner, but hopefully NOT as specified in the weakened SHA-3 draft)?
    - Curve25519 / Ed25519 (and possibly also Curve3617) for key exchange and authentication: replace NIST curves with safecurves
    - RSA still available (some, such as Schneier, are wary of ECC techniques as NSA have a head start thanks to Certicom involvement and almost definitely know at least one cryptanalysis result we don't), but hardened - blinding mandated, minimum key size changed (2048 bit? 3072 bit? 4096 bit?)
    - PFS required; all non-PFS key exchanges removed; Curve25519 provides very, very fast PFS, so there's very little/no excuse to not have it (see the toy sketch after this list)
    - All known or believed insecure cipher suites removed. (Not merely deprecated; completely removed and unavailable.)
    - Most definitely RC4 gone, beyond a doubt, that's 100% toast. There may be strong pushes for client support for RC4-SHA/RC4-MD5 to be disabled THIS YEAR in security patches if at all possible! It is DEFINITELY broken in the wild! (This is probably BULLRUN)
    - Possibly some more stream cipher suites to replace it; notably ChaCha20_Poly1305 (this is live in TLS 1.2 on Google with Adam Langley's draft and in Chrome 33 right now and will more than likely replace RC4 in TLS 1.2 as an emergency patch)
    - AES-CBC either removed or completely redesigned with true-random IVs but using a stronger authenticator
    - Probably counter-based modes replacing CBC modes (CCM? GCM? Other? Later, output of CAESAR competition?)
    - The NPN->ALPN movement has pretty much done a complete 180 flip. We will likely revert to NPN, or a new variant similar to NPN, because any plaintext metadata should be avoided where possible, and eliminated entirely where practical - ALPN is a backwards step. IE11 supports it in the wild right now, but that can always be security-patched (and it WILL be a security patch).
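
    To make the PFS and AEAD items above concrete, here is a toy sketch (not the real TLS handshake) of an ephemeral X25519 key agreement feeding a ChaCha20-Poly1305 AEAD, assuming the pyca/cryptography package is installed; the key schedule and message framing are simplified away.

    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each side generates a fresh key pair per connection; discarding the private
    # halves afterwards is what provides forward secrecy.
    client_priv = X25519PrivateKey.generate()
    server_priv = X25519PrivateKey.generate()

    # Both sides derive the same shared secret from the other's public key.
    client_shared = client_priv.exchange(server_priv.public_key())
    server_shared = server_priv.exchange(client_priv.public_key())
    assert client_shared == server_shared

    # Stretch the raw shared secret into a 256-bit AEAD key.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"toy handshake").derive(client_shared)

    # Authenticated encryption with ChaCha20-Poly1305 (96-bit nonce).
    aead = ChaCha20Poly1305(key)
    nonce = os.urandom(12)
    ciphertext = aead.encrypt(nonce, b"GET / HTTP/1.1\r\n", b"header data")
    print(aead.decrypt(nonce, ciphertext, b"header data"))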

    Of course, discussions are ongoing. This is only my impression of what would be good ideas, and what I have already seen suggested and discussed, and where I think the discussions are going to end up - but a lot of people have their own shopping lists, and we're a long way from consensus - we need to bash this one out pretty thoroughly.

    Last week's technical plenary was a tremendous kick up the ass (thanks Bruce! - honestly, FERRETCANNON... someone there reads Sluggy Freelance...). We all knew something had to be done. Now we're scratching our heads to figure out what we CAN do; as soon as we've done that, we'll be rolling our sleeves up to actually DO it. After consensus is achieved, and when the standard is finished, we'll be pushing very, very strongly to get it deployed forthwith - delays for compatibility because of client bugs will not be an acceptable excuse.

  • by Anonymous Coward on Wednesday November 13, 2013 @05:08PM (#45416929)

    We do not want to fragment the internet, but this enormous subversion of trust already threatens it. Last week's technical plenary's hums clearly (overwhelmingly, even unanimously) agreed that what is happening - massive, state-sponsored surveillance - is an attack on the Internet as a whole. Clearly the US government (not the only attacker, but the biggest) appointing IANA is a conflict of interest. That is not even a point of debate anymore.

    Bruce actually openly pointed this out during the prior talk: "...we're going to need to figure something out, or it's going to be the ITU".

    The room laughed. The ITU is clearly unsuitable, but it does probably need to be a .INT, at least, for the root itself - maybe also for .COM and .NET? The idea of a completely new treaty (IANA.INT?) has been floated and is probably the best solution for everyone, although in practice, it's hard to say what will actually happen - we are engineers, not politicians!

  • by Anonymous Coward on Wednesday November 13, 2013 @05:52PM (#45417481)

    If we use DNSSEC to pin a chain of trust all the way to the root, we have, hopefully, authenticated that certificate to its hostname. That alone mitigates an MITM, and it is even within the scope of the DNS hierarchy.

    Contrast: CAs have flat, global authority to authenticate any name - therefore, they are fundamentally less secure, because the potential weaknesses multiply with the number of commonly-accepted CAs rather than with the depth of the hierarchy.

    Of course, you can then (optionally) also choose to use a CA to countersign your certificate in a chain, if you want; presumably you pay them to certify who owns the domain.

    In the event of any disagreement, you just detected (and therefore prevented) an active, real MITM attack. An attacker would have to subvert not just a random CA, but hierarchically the correct DNS registry and/or registrar -and- a CA used to countersign, if present. We can make that a harder problem for them. It increases the cost. We like solutions that make a panopticon of surveillance harder - anything that makes it more expensive is good, because hopefully we can make it prohibitively expensive again, or at least, noisy enough to be globally detectable.
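
    A minimal sketch of that belt-and-braces check, assuming Python's standard ssl module: require the normal CA-validated chain and a DANE pin to agree, and treat any disagreement as a likely MITM. The dane_pins argument is a stand-in for SHA-256 digests obtained from a DNSSEC-validated TLSA lookup.

    import hashlib
    import socket
    import ssl

    def fetch_ca_validated_cert(host: str, port: int = 443) -> bytes:
        """Connect with normal CA/PKIX validation and return the peer cert (DER)."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert(binary_form=True)

    def check_agreement(host: str, dane_pins: set) -> None:
        """Raise if the CA-validated certificate does not match any DANE pin."""
        der = fetch_ca_validated_cert(host)
        if hashlib.sha256(der).digest() not in dane_pins:
            raise RuntimeError(f"{host}: CA-validated cert disagrees with DANE pin "
                               "- possible MITM")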

    There's also the separate question of the US Government holding the DNS root. Someone has to. Obviously it can't be them anymore. Even if we saw it, we can no longer trust it.

    We are also open to more distributed solutions. If you happen to have a really nice, secure protocol for a distributed namespace (ideally something that works better than, say, NameCoin), please publish.

  • by Kjella ( 173770 ) on Wednesday November 13, 2013 @05:52PM (#45417489) Homepage

    > Except for this nagging problem that DNS, and DNSsec, is still hierarchical and thus we still have single points of extortion, with as ultimate root... the US government.

    The DNS system by nature has a single root; the trust chain doesn't necessarily have that. You could, for example, require that all TLDs be signed with the keys of the five permanent members of the UN Security Council and require all of them to be present. So Canada would own the keys to the ".ca" domain and they would be signed by the US, Russia, China, the UK and France. Canada gets to do whatever they want under their domain, and nobody can spoof them unless they a) steal Canada's private keys or b) steal all five of the other private keys. Of course, the tin foil hat brigade don't trust any authority, and that's fine; you can always check the fingerprint. But I don't see how it hurts your security to have the ".com" server certifying that you own "yourdomain.com" versus a totally self-signed certificate with yourself as CA. At worst it's no trust vs. no trust.
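
    As a toy sketch of that all-of-n idea (purely illustrative, not how DNSSEC distributes trust today), here is what "accept the .ca zone key only if all five designated signers have signed it" might look like with Ed25519 from the pyca/cryptography package:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical signing keys for the five designated signers.
    signers = {c: Ed25519PrivateKey.generate() for c in ("US", "RU", "CN", "UK", "FR")}

    ca_zone_key = b"hypothetical public key material for the .ca zone"
    signatures = {c: k.sign(ca_zone_key) for c, k in signers.items()}

    def all_of_n_valid(zone_key: bytes, sigs: dict, pubkeys: dict) -> bool:
        """Accept only if every designated signer has produced a valid signature."""
        for name, pub in pubkeys.items():
            try:
                pub.verify(sigs[name], zone_key)
            except (KeyError, InvalidSignature):
                return False
        return True

    pubkeys = {c: k.public_key() for c, k in signers.items()}
    print(all_of_n_valid(ca_zone_key, signatures, pubkeys))  # True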

    As for being "blackmailed", well, if they're being nasty they're already holding all your traffic hostage (except direct IP access). The base certificate saying "owner of yourdomain.com" should be free with the DNS service; if you want something to actually certify who you are, that's different. Really, it's nothing more than the service they're already doing with DNS and DNSSEC: making sure that they're pointing you to the right server.

  • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday November 13, 2013 @05:59PM (#45417553) Journal

    Yes, there is a lot of need for HTTP 2.

    HTTP was great when pages were largely just a single block of HTML, but modern pages include many separate resources, which are all downloaded over separate connections (and have been for a long time). HTTP pipelining helps by reducing the number of TCP connections that have to be established, but requests still have to be answered sequentially on each connection, and you still need a substantial number of connections to parallelize downloads. This all gets much worse for HTTPS connections because each connection is more expensive to establish.

    HTTP 2 is actually just Google's SPDY protocol (the draft was unmodified SPDY, though there may be minor tweaks during standardization), which fixes these issues by multiplexing many requests over a single TCP connection. The result is a significant performance improvement, especially on mobile networks. HTTP 2 / SPDY adds another powerful tool as well: it allows servers to proactively deliver content that the client hasn't yet requested. Normally the browser has to parse the page to learn what other resources are required, and in some cases there may be multiple levels of "get A, discover B is needed, get B, discover C is needed...". With SPDY, servers that know that B, C and D are going to be needed by requesters of A can go ahead and start delivering them without waiting.
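
    One easy way to watch the multiplexing happen is an HTTP/2-capable client such as httpx, assuming it is installed with its HTTP/2 extra (pip install "httpx[http2]"); the URL is just an example endpoint, and this client does not expose server push.

    import asyncio

    import httpx

    async def main() -> None:
        async with httpx.AsyncClient(http2=True) as client:
            # Issue several requests concurrently; against an HTTP/2 server they
            # can share one TCP+TLS connection instead of opening one each.
            urls = ["https://www.google.com/"] * 5
            responses = await asyncio.gather(*(client.get(u) for u in urls))
            for r in responses:
                print(r.http_version, r.status_code)  # expect "HTTP/2"

    asyncio.run(main())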

    The result can be a significant increase in performance. It doesn't benefit sites that pull resources from many different domains, because each of those must be a separate connection, and it tends to provide a lot more value for sites that are already heavily optimized for performance, because those that aren't tend to have lots of non-network bottlenecks that dominate latency. Obviously it will be well-optimized sites that can make use of the server-initiated resource delivery, too.

    Actually, though, Google has decided that SPDY isn't fast enough, and has built and deployed yet another protocol, called QUIC, which addresses a major remaining problem of SPDY: it's built on TCP. TCP is awesome, make no mistake, it's amazing how well it adapts to so many different network environments. But it's not perfect, and we've learned a lot about networking in the last 30 years. One specific problem that TCP has is that one lost packet will stall the whole stream until that packet loss is discovered and resent. QUIC is built on top of UDP and implements all of the TCP-analogue congestion management, flow control and reliability itself, and does it with more intelligence than can be implemented in a simple sliding window protocol.

    I believe QUIC is already deployed and is available on nearly all Google sites, but I think you have to get a developer version of Chrome to see it in action. Eventually Google will (as they did with SPDY) start working with other browser makers to get QUIC implemented, and with standards bodies to get it formalized as a standard.

    Relevant to the topic of the article, neither SPDY nor QUIC even has an unencrypted mode. If the committee decides they want an unencrypted mode, they'll have to modify SPDY to do it. It won't require rethinking how the protocol works, because SPDY's security all comes from an SSL connection, so it's just a matter of removing the tunnel. QUIC is different: because SSL runs on TCP, QUIC had to do something else entirely, so it has its own encryption protocol baked in from the ground up.

  • by Bob Uhl ( 30977 ) on Wednesday November 13, 2013 @10:08PM (#45419281)

    > The DNS system by nature has a single root; the trust chain doesn't necessarily have that.

    The SPKI guys back in the 90s figured this stuff out really, really well. Ideally, one would have: a DNS trust chain indicating that b24e:6f99:2f6f:34d8:9c8a:c6da:daaf:e3bb:002e:2ba4:2622:4cf9:cd8b:14a5:71d8:5a9c:18dc:47a2:9a2d:2951:a26b:26fa:2165:85fc:7006:0d66:1c8e:a4f4:ea36:4d04:57a0:8ae4 speaks on behalf of the owner of example.net; an IP trust chain indicating that b24e:6f99:2f6f:34d8:9c8a:c6da:daaf:e3bb:002e:2ba4:2622:4cf9:cd8b:14a5:71d8:5a9c:18dc:47a2:9a2d:2951:a26b:26fa:2165:85fc:7006:0d66:1c8e:a4f4:ea36:4d04:57a0:8ae4 speaks on behalf of the owner of 192.0.2.7; and possibly certifications from other organisations (Better Business Bureau perhaps) that b24e:6f99:2f6f:34d8:9c8a:c6da:daaf:e3bb:002e:2ba4:2622:4cf9:cd8b:14a5:71d8:5a9c:18dc:47a2:9a2d:2951:a26b:26fa:2165:85fc:7006:0d66:1c8e:a4f4:ea36:4d04:57a0:8ae4 speaks on behalf of a decent dude. Users' browsers might demand the first two and show more confidence for further certifications.

    SPKI's contributions included a k-of-n standard and, more importantly, transitive authorisations. So once I am granted authorisation from .com to use example.com, I can pass that authorisation on to any machines under my control; I can delegate mail.example.com to my mail-handling group without also giving them financials.example.com. I can do the same thing with my IP address space, my bank account information &c. I could add third-party attestations to my identity, perhaps from third parties more resistant to rubber-hose persuasion than the standard ones. Pinning might rely on my personal closely-held private key, which is never online at all but which delegates to online (time-limited and/or revocable) keys. Calculating this stuff is very, very simple and fast. There's no need for any fees along the way. It's easy to reason about.
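
    A rough sketch of that transitive, namespace-narrowing delegation logic follows. It is not the actual SPKI certificate format (RFC 2693 uses S-expressions and k-of-n threshold subjects); it only illustrates the chain check, using Ed25519 from the pyca/cryptography package.

    from dataclasses import dataclass

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    def raw(pub: Ed25519PublicKey) -> bytes:
        return pub.public_bytes(serialization.Encoding.Raw,
                                serialization.PublicFormat.Raw)

    @dataclass
    class Delegation:
        namespace: str                  # e.g. "example.com" or "mail.example.com"
        subject_key: Ed25519PublicKey   # key being authorised for that namespace
        signature: bytes                # issuer's signature over namespace + key

        def payload(self) -> bytes:
            return self.namespace.encode() + b"|" + raw(self.subject_key)

    def delegate(issuer: Ed25519PrivateKey, namespace: str,
                 subject: Ed25519PublicKey) -> Delegation:
        payload = namespace.encode() + b"|" + raw(subject)
        return Delegation(namespace, subject, issuer.sign(payload))

    def chain_valid(root_pub: Ed25519PublicKey, root_scope: str,
                    chain: list) -> bool:
        """Each link must be signed by the previous key and narrow the namespace."""
        issuer, scope = root_pub, root_scope
        for link in chain:
            if not (link.namespace == scope or link.namespace.endswith("." + scope)):
                return False            # may only delegate within the issuer's scope
            try:
                issuer.verify(link.signature, link.payload())
            except InvalidSignature:
                return False
            issuer, scope = link.subject_key, link.namespace
        return True

    # .com delegates example.com to me; I delegate mail.example.com to my mail group.
    com_key, my_key, mail_key = (Ed25519PrivateKey.generate() for _ in range(3))
    chain = [
        delegate(com_key, "example.com", my_key.public_key()),
        delegate(my_key, "mail.example.com", mail_key.public_key()),
    ]
    print(chain_valid(com_key.public_key(), "com", chain))  # True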

    You can see how this might work with internet governance: each organisation would be responsible for the namespace it was assigned, and be easily able to segment that namespace however it wished. Anyone at any level could cross-certify; damage to the trust chains could be contained.

    SPKI is, in every way save uptake, superior to XPKI.
