HTTP Strict Transport Security Becomes Internet Standard

angry tapir writes "A Web security policy mechanism that promises to make HTTPS-enabled websites more resilient to various types of attacks has been approved and released as an Internet standard — but despite support from some high-profile websites, adoption elsewhere is still low. HTTP Strict Transport Security (HSTS) allows websites to declare themselves accessible only over HTTPS (HTTP Secure) and was designed to prevent hackers from forcing user connections over HTTP or abusing mistakes in HTTPS implementations to compromise content integrity."
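For reference, the mechanism itself is just an HTTP response header sent over HTTPS. Below is a minimal sketch in Python of a server opting into HSTS; the certificate and key file names are placeholders for your own files.

```python
# Minimal sketch of a server opting into HSTS: every HTTPS response
# carries a Strict-Transport-Security header telling the browser to
# refuse plain-HTTP connections to this host for the next year.
import http.server
import ssl

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # max-age is in seconds; 31536000 seconds = one year
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello over HTTPS\n")

httpd = http.server.HTTPServer(("", 4443), Handler)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("cert.pem", "key.pem")  # placeholder paths
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```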
  • Isn't the point of mixed web sites to lessen server load from https? I was always under the impression that a mixed environment, only using https when necessary, was a better idea. Obviously not mixing SSL and non-SSL on any single page like the article mentions, but wouldn't it be just as effective to advocate for better SSL implementations?
    • Re:Server Load (Score:5, Informative)

      by Chrisq ( 894406 ) on Friday November 23, 2012 @08:21AM (#42073241)

      Isn't the point of mixed web sites to lessen server load from https? I was always under the impression that a mixed environment, only using https when necessary, was a better idea. Obviously not mixing SSL and non-SSL on any single page like the article mentions, but wouldn't it be just as effective to advocate for better SSL implementations?

      No, mixed web sites were never recommended, and many browsers will give a "mixed content" warning. The overhead isn't that high; Google commented after its switch to HTTPS-only for Gmail: [techie-buzz.com]

      all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.

      • Very interesting article. Makes me realize I never personally did benchmarks of secure vs non. Maybe it's just kind of a "word on the street" type phenomenon from more senior admins than myself.
        • by Anonymous Coward

          You can see "delay" with https sites easily, no benchmarks required either. It's just the performance price paid for the (hopefully) added security.

          • Re:Server Load (Score:5, Informative)

            by Chrisq ( 894406 ) on Friday November 23, 2012 @08:46AM (#42073403)

            You can see "delay" with https sites easily, no benchmarks required either. It's just the performance price paid for the (hopefully) added security.

             Yes, there is added latency due to the handshake, though on my broadband connection I can't say that I can see it. Google has proposed, and is implementing [imperialviolet.org], several standards to reduce this delay. Of course, the biggest reduction in the effects of latency came with "Keep-Alive", which we have now had for years.
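If you want to see the handshake cost for yourself, here is a rough sketch comparing a bare TCP connect with a TCP connect plus TLS handshake; www.example.com is a stand-in for any HTTPS-enabled host.

```python
# Rough comparison of connection setup cost: plain TCP vs. TCP + TLS
# handshake to the same host. The extra round trips of the handshake
# show up as the difference between the two timings.
import socket
import ssl
import time

HOST = "www.example.com"  # stand-in for any HTTPS-enabled host

def tcp_connect_ms():
    t0 = time.perf_counter()
    sock = socket.create_connection((HOST, 80), timeout=5)
    elapsed = (time.perf_counter() - t0) * 1000
    sock.close()
    return elapsed

def tls_handshake_ms():
    ctx = ssl.create_default_context()
    t0 = time.perf_counter()
    sock = socket.create_connection((HOST, 443), timeout=5)
    tls = ctx.wrap_socket(sock, server_hostname=HOST)  # handshake happens here
    elapsed = (time.perf_counter() - t0) * 1000
    tls.close()
    return elapsed

print(f"TCP connect:         {tcp_connect_ms():6.1f} ms")
print(f"TCP + TLS handshake: {tls_handshake_ms():6.1f} ms")
```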

            • Yeah, I can't say I really notice a huge difference when switching to SSL on the page load front either. Being a self-centered asshole, I was more worried about CPU time. More CPU time means server upgrades which means more time working and less time pretending to work.
        • To be fair, the myth that SSL requires too much hardware is rooted in reality. It was once true, back in the '90s.

          That article has a big WTF:

          So did Google just compromise speed for security? Definitely not.

          That comes right after explaining how SSL increases latency. And to support the claim that they didn't compromise speed, the article offers a pair of non sequiturs: Google uses caches, and they optimized it to use much less memory.

          (Just a note... Yes, SSL does increase latency, and no, it's not enough to notice in a well written page. But if you go making a ton of different connections, it will be quite noticeable.)

          • Yes, SSL does increase latency, and no, it's not enough to notice in a well written page. But if you go making a ton of different connections, it will be quite noticeable.

            So what's the proper way to incorporate advertisements (which pay the writing and hosting bills) and recommendation widgets (which attract readers) "in a well written page"? Those tend to make "a ton of different connections".

            • by Chrisq ( 894406 )

              Yes, SSL does increase latency, and no, it's not enough to notice in a well written page. But if you go making a ton of different connections, it will be quite noticeable.

              So what's the proper way to incorporate advertisements (which pay the writing and hosting bills) and recommendation widgets (which attract readers) "in a well written page"? Those tend to make "a ton of different connections".

              The "ton of different connections" is a two-edged sword. Obviously each needs to establish a session, and incurs a concurrency overhead. On the other hand requests can be overlepped past the "connection per server" limit you would get if they all came from the same site".

            • Most of the time, the proper way to incorporate those things is by asynchronous requests after the page loads.

              Not everybody is lucky enough to get permission to do that with ads, but there is no reason to make the user wait for those social network widgets to load before they can see your page.

                Most of the time, the proper way to incorporate those things is by asynchronous requests after the page loads.

                The implementation on (for example) Cracked.com of asynchronous requests to Facebook and other social networks has caused article pages to reflow several times, and when these reflows move the navigation buttons as they tend to do, I end up clicking links other than the one I intended to click.

                • by TheLink ( 130905 )

                  I end up clicking links other than the one I intended to click.

                  On some sites, that might be considered a feature.

                • You know you can fill the space taken by those widgets before loading them, right?

                  (I don't doubt you've had problems doing this; I'm just pointing it out.)

                  • by tepples ( 727027 )

                    You know you can fill the space taken by those widgets before loading them, right?

                    I think one of the Facebook widgets has one height for an active Facebook user (in which case a bunch of recommendation options presumably pop up) and another height for someone who has no Facebook account (in which case "Sign up for Facebook to see what your friends like" appears). But I don't feel like signing up for a Facebook account and giving Facebook my cell phone number just to verify this. I haven't even dug into the Google+ account that I have.

      • by Anonymous Coward

        The problem with https isn't the server load per request but the additional server load due to not being able to cache resources as efficiently.

        • Since when do standard HTTP cache control headers, such as Expires at the end of next year, work less efficiently when the HTTP is encapsulated in TLS?
          • Since the cache servers in between the client and the server can't cache the content for multiple users.

            Oh, you thought only browser caches mattered.

            Consider the still excellent though ancient http://www.ircache.net/ [ircache.net]

            • Since the cache servers in between the client and the server

              Who is operating these cache servers you're talking about?

              • If the end user is operating them, such as a business that provides caching for web sites viewed by users of its office network, the business can run an HTTPS caching proxy that uses a self-signed certificate, and everyone behind the firewall can install the business's root certificate.
              • If the operator of the web site is operating them, the caching load balancer can terminate all the SSL, and the web servers can communicate with the proxy over plain HTTP.
              • I
              • Every company I do border gateway IT for has a border cache that filters all Internet access for the client. All connections to Internet servers go through it, which reduces overall Internet bandwidth requirements, speeds up delivery of content accessed by multiple users (Windows Updates are a huge win), and allows specific sites or URIs to be blocked by policy controls.

                I also operate the same thing at home because, well, I'm good at it and it's worth it for the PS3+PC+Laptop+Tablet+SmartPhones.

                • If the end user is operating them, such as a business that provides caching for web sites viewed by users of its office network, the business can run an HTTPS caching proxy that uses a self-signed certificate, and everyone behind the firewall can install the business's root certificate.

                  Every company I do border gateway IT for has a border cache that filters all Internet access for the client.

                  That's what I was talking about. I don't see why you couldn't just run your own internal CA on a home or office network and use that as part of a man-in-the-middle proxy that caches HTTPS communications.

                  PS3+PC+Laptop+Tablet+SmartPhones+3DS

                  Perhaps the problem with running an internal CA happens with devices onto which only the manufacturer, not the device's owner, can install SSL root certificates. Is this the case with the Sony, Nintendo, and perhaps Apple devices that you mentioned?
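Minting the internal root CA itself is the easy half; here is a rough sketch with the third-party Python `cryptography` package (names and validity period are illustrative). Getting that root into every device's trust store is, as noted above, the hard half.

```python
# Sketch: create a self-signed root CA certificate for an internal
# caching MITM proxy. The proxy would use this CA to sign a certificate
# on the fly for each host it intercepts; every client on the network
# must install office-ca.pem as a trusted root for this to work.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Office Internal CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: subject and issuer are the same
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)
with open("office-ca.pem", "wb") as f:
    f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
```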

      • Re:Server Load (Score:4, Interesting)

        by MikeBabcock ( 65886 ) <mtb-slashdot@mikebabcock.ca> on Friday November 23, 2012 @12:15PM (#42074791) Homepage Journal

        HTTPS-only is a hack from a lack of foresight and breaks caching.

        What we need is a signature-only system for content that isn't private. There's no reason to encrypt the front page images on CNN to each user, but signing them so they are provably from CNN is valuable.
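No such signature-only HTTP mechanism is part of the standard being discussed, but the primitive itself is simple; a sketch with the third-party Python `cryptography` package:

```python
# Sketch of the sign-don't-encrypt idea: the publisher signs public
# content once, and any client or intermediate cache can verify that
# the bytes really came from the publisher, without the transfer
# itself being encrypted.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
content = b"<html>public front page, identical bytes for every user</html>"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = key.sign(content, pss, hashes.SHA256())

# Anyone holding the publisher's public key can check origin and integrity:
try:
    key.public_key().verify(signature, content, pss, hashes.SHA256())
    print("content verified")
except InvalidSignature:
    print("content was tampered with")
```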

        • Re:Server Load (Score:5, Insightful)

          by lgw ( 121541 ) on Friday November 23, 2012 @01:52PM (#42075669) Journal

          HTTPS-only is a hack from a lack of foresight and breaks caching.

           What we need is a signature-only system for content that isn't private. There's no reason to encrypt the front page images on CNN to each user, but signing them so they are provably from CNN is valuable.

           More myths from the 90s - wrong on both counts. Privacy always matters. Maybe you live in a country where browsing CNN won't land you in jail, but others aren't so lucky. And the only one who can't cache HTTPS traffic is the man-in-the-middle, which is sort of the point, really. Server-side there are plenty of hardware solutions to caching these days; it's just a question of where you terminate SSL. Client-side there are plenty of solutions as well, if you're running a home or office network and your users are willing to trust your cert (and thereby allow you to snoop).

          • by Rigrig ( 922033 )

            Maybe you live in a country where browing CNN won't land you in jail, but others aren't so lucky.

            "Honestly, I wasn't browsing at all, just making random https connections to cnn.com"...

          • More myths from the paranoid.

            As someone who deals with security on a regular basis, I know that SSL doesn't fix any of the problems you're mentioning. In fact, my point is only invalidated in situations where the content itself is private to the user. Connection tracking negates your points entirely. Assuming any middle-man knowledge, I can already determine what sites you visit. Assuming any level of police state, I can just put a keyboard monitor on your USB input or a pinhole video monitor through yo

  • SSL (Score:5, Insightful)

    by FriendlyLurker ( 50431 ) on Friday November 23, 2012 @08:23AM (#42073253)
    Now, just gotta get the SSL certificate system... secure and working.
    • Now, just gotta get the SSL certificate system... secure and working.

      That suggests the question, are there any CAs which have never [knowingly] been compromised?

      • That suggests the question, are there any CAs which have never [knowingly] been compromised?

        Yes, my self-signed CA has never been compromised. Must disclose that it has never been connected to the internet.

      • by TheLink ( 130905 )
        Unfortunately that's not relevant for security given the way most browsers behave by default. All the attacker needs is ONE compromised/cooperative CA out of the dozens of CAs your browser recognizes or will recognize.

        If you visit China and CNNIC decides to sign a *.yourbank.com certificate and MITM you, your browser wouldn't warn you. It'll show you the usual "secure" icons etc.

        If you want a warning you can use Firefox and Certificate Patrol. Or try Chrome's certificate pinning feature (not sure about the
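In its simplest form, pinning just means remembering a certificate's fingerprint and refusing to proceed when it changes. A rough sketch, with the host and fingerprint as placeholders:

```python
# Sketch of certificate pinning: instead of trusting whatever CA signed
# the served certificate, compare its SHA-256 fingerprint against a
# value recorded on an earlier, trusted connection.
import hashlib
import socket
import ssl

HOST = "www.yourbank.example"  # placeholder
PINNED_SHA256 = "0123abcd..."  # fingerprint recorded earlier (placeholder)

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        if fingerprint != PINNED_SHA256:
            # A CA-signed but different cert still fails here - exactly
            # the CNNIC-style MITM case described above.
            raise ssl.SSLError("certificate fingerprint changed; possible MITM")
```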
    • What about all those SSL/TLS attacks that came out a year ago, renegotiation injection etc.? Is SSL/TLS suddenly considered safe again? I thought they discovered serious issues in the concept.

    • by Anonymous Coward

      Also, as more and more mundane activities begin to go through SSL, I suspect it will provide more and more opportunity for users to grow accustomed to blindly accepting any and all invalid certificates, because they become conditioned to understand that the certificate warning is just a minor annoyance to dismiss as quickly as possible. (Even many theoretically "secure" websites that deal with actual sensitive information like credit card numbers are already conditioning people to accept invalid certificates.)

    • by Anonymous Coward

      As long as it is based on the deeply logically flawed concept of "authority" [wikipedia.org], it can by definition never ever work.

      Trust is a personal, individual thing. There can never ever be a thing that can be put into place as a global entity that everyone trusts. You can only decide who to trust *yourself*. You can never offload that to somebody else. Ever. I'm sorry, but there's a limit to how lazy of an ass one can be, before it starts to hurt.

      So what we need is a web of trust system, with trust factors for every l

    • by dissy ( 172727 )

      Free Class-1 SSL certificates are available from StartSSL
      https://www.startssl.com/ [startssl.com]

      Class-1 does not show the "super secure secret key" icons with an organization name because they are only email-verified, and you must use a personal name, but for small personal "hobby" websites they are still a lot better than a self-signed certificate.

      Class-2 certs are the ones that supposedly need to be verified, and they show all the high security flags.
      In practice, however, this verification is typically lacking depending on which cert author

    • by dog77 ( 1005249 )
      Verification of the SSL server certificate is not enough to protect your account. There needs to be additional two-way authentication, so both sides can prove they know the username and password/key for the account. That way, if the certificate does get compromised, you will still be protected from a man in the middle. Here is one such protocol: http://en.wikipedia.org/wiki/Secure_Remote_Password_protocol [wikipedia.org]
  • Comment removed based on user account deletion
    • by heypete ( 60671 ) <pete@heypete.com> on Friday November 23, 2012 @09:05AM (#42073523) Homepage

      What's the difference between using this protocol and, uh, just disabling HTTP on your webserver? Or, from a user standpoint, just making sure you're using HTTPS via the URL?

      Disabling HTTP can break things for users who manually enter URLs and forget the "https" or any number of other bad things. It's usually good form for a secure site to also run a plain-http server that redirects users to the secure site to avoid such confusion.

      Only problem: SSL stripping. If a bad guy can intercept the connection between you and the secure site before the security has been negotiated, then they can connect to the secure site in the normal way, present that page to you sans HTTPS, and intercept anything you do there.

      In short: browsers don't remember when a site "used to be secure but isn't today" and so don't present any warnings. This method tells the browser "For the next [time interval] you should only connect to me using a secure protocol. If not, the connection should fail." -- all that's required is that the user connect to the secure site at least once (e.g. from home or some other trusted network) to have the HSTS flag set for that site. If they try going to the coffee shop or some other place where there's a bad guy attempting ssl stripping then the connection will fail.
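A toy model of the browser-side behavior described here (real browsers persist this list across restarts; the dict below is just for illustration, and the hostnames are placeholders):

```python
# Toy model of a browser's HSTS cache: once a host has sent the header
# over a trusted connection, the client refuses to downgrade to plain
# HTTP for that host until max-age expires.
import time

hsts_cache = {}  # host -> expiry timestamp

def record_hsts(host, header_value):
    # header_value as sent by the server,
    # e.g. "max-age=31536000; includeSubDomains"
    for directive in header_value.split(";"):
        directive = directive.strip()
        if directive.startswith("max-age="):
            hsts_cache[host] = time.time() + int(directive.split("=", 1)[1])

def scheme_for(host):
    if hsts_cache.get(host, 0) > time.time():
        return "https"  # pinned: a plain-http attempt must be upgraded or fail
    return "http"       # never seen the header: ssl stripping is possible

# First visit from a trusted network records the pin...
record_hsts("bank.example", "max-age=31536000; includeSubDomains")
# ...so a later lookup at the coffee shop refuses the insecure scheme.
assert scheme_for("bank.example") == "https"
```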

      • It's been a very long time since I've messed with web technologies at this level, so I'm tossing the following out merely for discussion purposes: what about changing the default browser behavior so that instead of first trying the http: prefix, browsers try https: and then fall back to http: only when necessary? Would that work around the 'ssl stripping' issue?

        • by lgw ( 121541 )

          That wouldn't help if someone in the middle is blocking the https site. You really need a way to tell the browser to not try the http site at all, even as a fall back. "Force fall back to insecure legacy mode" is a very common form of attack these days.

    • by DarkOx ( 621550 )

      Because this lets the browser know, and it remembers. So when I come along, spoof the MAC address of your gateway, and route the traffic to my own web server, I also need to run HTTPS or you will get a warning.

      Additionally, I am also going to need a certificate that your system will see as trusted and valid for the name you requested, which fortunately remains at least a little hard in the typical case.

      Without this I could pretty much count on you just typing amazon.com, rather than https://amazon.com/ [amazon.com]

  • I get the security side of things, but how do you do it easily and with zero budget? What about a personal website? I can't afford an SSL certificate for that.

    Is there any "SSL/HTTPS For Dummies With No Cash" manual somewhere, keeping in mind that most people with websites are code monkeys, not network administrators?

    • by icebraining ( 1313345 ) on Friday November 23, 2012 @09:20AM (#42073613) Homepage

      SSL certificates are not the problem: https://cert.startcom.org/ [startcom.org]

      The problem is that some browsers (mainly IE on XP) don't support SNI, so your website needs a dedicated IPv4 address.

      If you manage the machine, you can get a VPS with a dedicated IP for almost nothing (I pay $3/month), but managed web hosting is another issue.
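For what SNI actually buys you, here is a sketch using Python's `ssl` module (the `sni_callback` hook exists on `SSLContext` in Python 3.7+; hostnames and file names are placeholders): one IP address, multiple certificates, selected by the hostname the client asks for.

```python
# Sketch of SNI on the server side: several hostnames share one IP and
# port, and the right certificate is chosen during the TLS handshake
# based on the server name the client sends.
import ssl

contexts = {}
for host in ("alice.example", "bob.example"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(f"{host}.crt", f"{host}.key")  # placeholder files
    contexts[host] = ctx

default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_ctx.load_cert_chain("default.crt", "default.key")

def pick_certificate(ssl_socket, server_name, initial_context):
    # Called mid-handshake with the hostname from the SNI extension.
    # Clients that send no SNI (e.g. IE on XP) get the default cert,
    # which is why such browsers effectively need a dedicated IP.
    if server_name in contexts:
        ssl_socket.context = contexts[server_name]

default_ctx.sni_callback = pick_certificate
# Then wrap the listening socket with default_ctx as usual.
```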

    • by heypete ( 60671 ) <pete@heypete.com> on Friday November 23, 2012 @09:22AM (#42073623) Homepage

      I get the security side of things, but how do you do it easily and with zero budget? What about a personal website? I can't afford an SSL certificate for that.

      NameCheap sells Comodo and GeoTrust domain-validated SSL certs for ~$8-$10/year. Thawte certs are $30. Those are well within an "essentially nil" budget range for even the smallest of businesses.

      StartSSL.com has domain-validated certs for free. Additional validation and features (like wildcards) are available at nominal cost.

      All of the above-mentioned certs are widely trusted by browsers, both on computers and mobile devices.

      Certificate costs haven't been an issue for several years now. The days of needing to get VeriSign certs at outrageous prices are gone (though VeriSign still charges outrageous prices, naturally).

      Is there any "SSL/HTTPS For Dummies With No Cash" manual somewhere, keeping in mind that most people with websites are code monkeys, not network administrators.

      Enabling SSL/TLS for your web server usually requires the addition of a few lines in a configuration file that tell the server (a) to use SSL and (b) the location of the server's private key, public key, and any intermediate certificates from the certificate authority. The details vary based on your server software, but it's usually quite easy and instructions can be found on Google. The steps are basically as follows (a sketch of steps 1 and 2 appears after the list):
      1. Generate an RSA key pair (usually 2048 bits, though 4096 is not uncommon; 1024 bits is deprecated).
      2. Create a certificate signing request (CSR) for your site using that private key.
      3. Submit the CSR to the certificate authority for signing.
      4. Complete whatever verification process the CA requires (for domain-validated certs this usually requires that you click a link sent to the email address listed in your domain's whois record, while high-validation-level certs may involve you sending the CA various documents).
      5. Once you are verified, the CA signs your CSR and sends you the signed certificate. In many cases they also direct you to download the required intermediate certificate that you'll also need.
      6. You save the private key (readable to root only, of course), signed certificate, and the intermediate certificate to your server and configure your server software appropriately (usually only a few lines of configuration changes).
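Here is the promised sketch of steps 1 and 2, using the third-party Python `cryptography` package (openssl's command line is the more common route; the domain name is a placeholder):

```python
# Steps 1-2 from the list above: generate an RSA key pair, then build a
# certificate signing request (CSR) for the site and sign it with the
# private key. The resulting server.csr is what you submit to the CA.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Step 1: generate the RSA key pair (2048 bits here).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 2: build and sign the CSR.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),  # your domain
    ]))
    .sign(key, hashes.SHA256())
)

# Save the private key (readable by root only, as noted above) and the CSR.
with open("server.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("server.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```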

      At present, most HTTPS sites should have their own unique IP address, which rules out most "personal" hosting. This is because Internet Explorer on Windows XP (still a substantial chunk of users) does not support SNI and so cannot handle HTTPS-enabled virtualhosts. Pretty much any other browser on any other system does support it.

    • by DarkOx ( 621550 )

      I would advocate not using HTTPS unless you have something to secure. That might be as simple as the login if you have a forum or something, but if it's all just non-interactive public information, send it in the clear.

      Do some googling; there are some public CAs that will issue minimally verified certificates for personal sites free for at least the first year, so that might be an option. Otherwise, if the user community is small enough, you can use a self-signed certificate. You'll need to contact them out of band

      • by ewieling ( 90662 )
        Why do you want to tell a potential attacker which data you consider important enough to secure with HTTPS?
      • by The Bean ( 23214 )

        You gotta be kidding. If one of my vendors gave me their root certificate to install on my machine so I could securely connect to their site, I'd tell them to take a flying leap and get a real certificate. If I'm understanding right, your friend now has the ability to MITM his customers' SSL connections. We can argue about whether the preinstalled root certificates can be trusted, but I'm confident they're safer than the local Dunder Mifflin.

  • by Skapare ( 16644 ) on Friday November 23, 2012 @09:29AM (#42073681) Homepage

    This simple logic - that when any SECURE page is requested, EVERYTHING on it must be accessed in secure mode (a valid certificate required of every part if the main requested page has a valid certificate) - should have been in there right from the beginning. So many of our security problems exist because people just DON'T THINK right at the beginning, AND it takes so damn fscking long for the process to fix their stupidity.

    • Tell me, if all this is so obvious, how come you didn't design it?

      Besides which, when https was designed it was a pretty hefty burden on the server CPUs of the day, so it was logical that only the parts that needed security would actually use https. Unfortunately hindsight always does have 20/20 vision, and people who pretend it's their own insight are usually full of it.

    • I'm not sure you get what the problem is. The problem is that end users often don't "request secure pages" explicitly; they either type a plain domain name with no protocol or they follow a link from another site (possibly a search engine), so the initial request for a session is often over plain http.

      If the user's connection is not subject to MITM then they get redirected to the secure site, but if a MITM is present then the MITM can make sure that doesn't happen by rewriting any links or redirects that point to the secure site.

  • HTTPS is great, if you can afford to pay the fees. I see tons of potential for 301 redirects to HTTPS, because it will screw up the existing links on the Internet, and the unaware businesses would love to pay for maintenance of their broken websites.
  • by Anonymous Coward

    It's a proposed standard.
