Security Firefox Mozilla Privacy

Firefox Prepares To Mark All HTTP Sites 'Not Secure' After HTTPS Adoption Rises (bleepingcomputer.com) 244

An anonymous reader quotes a report from Bleeping Computer: The increased adoption of HTTPS among website operators will soon lead to browsers marking HTTP pages as "Not Secure" by default, and Mozilla is taking the first steps. The current Firefox Nightly Edition (version 59) includes a secret configuration option that when activated will show a visible visual indicator that the current page is not secure. In its current form, this visual indicator is a red line striking through a classic lock that's normally used to signal the presence of encrypted HTTPS pages. According to Let's Encrypt, 67% of web pages loaded by Firefox in November 2017 used HTTPS, compared to only 45% at the end of last year.
  • by Anonymous Coward on Wednesday December 20, 2017 @05:46PM (#55779587)

    Let's say I'm downloading a file that's several GB, like a disk image. When I download it, I'll verify the signature. If it's valid, the file is usable. Encrypting the entire download is a waste of resources for both the server and client. Not everything needs to be encrypted, so this is a little silly. Plus, hosting providers often charge extra fees for https, at least based on my experience.

    • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Wednesday December 20, 2017 @05:56PM (#55779655) Homepage Journal

      Let's say I'm downloading a file that's several GB, like a disk image. When I download it, I'll verify the signature.

      How can you be sure that the SHA-256 value against which you are verifying the disk image hasn't itself been tampered with on its way to your device?

      Encrypting the entire download is a waste of resources for both the server and client.

      No it isn't. If you fail to encrypt, your ISP, your ISP's ISP, and any snooping government can tell conclusively what you have downloaded. If you do encrypt, the eavesdropper can see only what domain you're accessing and the sizes of what you download. You can obfuscate even the sizes by using range requests to pull the 4 GB disk image a 4 MB chunk at a time.
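      For what it's worth, a minimal sketch of that chunked approach in Python, using the third-party requests library. The mirror URL and output filename are hypothetical placeholders, and it assumes the server reports Content-Length and honors Range requests; a real client would also want retries and resume support.

```python
# Sketch: pull a large file as a series of uniform 4 MiB Range
# requests so an on-path observer sees same-sized transfers instead
# of one response whose length identifies the file.
import requests

URL = "https://mirror.example.org/disk-image.iso"  # hypothetical
CHUNK = 4 * 1024 * 1024  # 4 MiB per request

def fetch_chunked(url, out_path, chunk=CHUNK):
    # HEAD request to learn the total size up front.
    total = int(requests.head(url, allow_redirects=True).headers["Content-Length"])
    with open(out_path, "wb") as f:
        for start in range(0, total, chunk):
            end = min(start + chunk, total) - 1  # Range ends are inclusive
            r = requests.get(url, headers={"Range": f"bytes={start}-{end}"})
            r.raise_for_status()
            f.write(r.content)

fetch_chunked(URL, "disk-image.iso")
```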

      Plus, hosting providers often charge extra fees for https

      Then take your business elsewhere. Switch from a hosting provider that charges extra for HTTPS to a competing hosting provider that does not charge extra for HTTPS.

      • by RightwingNutjob ( 1302813 ) on Wednesday December 20, 2017 @06:02PM (#55779701)
        And sometimes you don't care. Like when you're on an internal network and don't want to confuse your users with a red warning signal.
      • by truedfx ( 802492 )

        How can you be sure that the SHA-256 value against which you are verifying the disk image hasn't itself been tampered with on its way to your device?

        Even if the main download is done using HTTP, the SHA-256 value can be requested over HTTPS.
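        A short sketch of that split, with hypothetical URLs: the small SHA-256 file travels over HTTPS, so it is tamper-evident, while the multi-gigabyte image travels over plain HTTP and is checked against it.

```python
# Sketch: a tamper-evident hash fetched over HTTPS guarding a bulk
# download fetched over plain HTTP. Both URLs are hypothetical.
import hashlib
import requests

HASH_URL = "https://example.org/disk-image.iso.sha256"  # small, encrypted
FILE_URL = "http://mirror.example.org/disk-image.iso"   # big, plain HTTP

expected = requests.get(HASH_URL).text.split()[0].lower()

h = hashlib.sha256()
with requests.get(FILE_URL, stream=True) as r:
    r.raise_for_status()
    for block in r.iter_content(1024 * 1024):
        h.update(block)

if h.hexdigest() != expected:
    raise SystemExit("SHA-256 mismatch: download corrupted or tampered with")
print("download verified")
```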

        • by tepples ( 727027 )

          Even if the main download is done using HTTP, the SHA-256 value can be requested over HTTPS.

          But the operator of the site hosting the SHA-256 values will still need to obtain a certificate. Is it more a matter of setting up Certbot to provision one certificate for the hash site rather than a separate certificate for each mirror site?

          • Re: (Score:2, Interesting)

            by truedfx ( 802492 )

            But the operator of the site hosting the SHA-256 values will still need to obtain a certificate.

            Indeed.

            Is it more a matter of setting up Certbot to provision one certificate for the hash site rather than a separate certificate for each mirror site?

            The concern was that for large (multi-gigabyte) files, HTTPS becomes a waste of resources. I'm not going to comment one way or another on the correctness of that claim, but setting up a single server to accept both HTTP and HTTPS connections is trivial, and then the checksums can be served over HTTPS while the bulk downloads stay on HTTP.

            • The concern was that for large (multi-gigabyte) files, HTTPS becomes a waste of resources. I'm not going to comment one way or another on the correctness of that claim

              I am, by pointing out that Netflix is able to saturate 40GigE NICs in a single machine (I think they can now saturate two of them) serving nothing but HTTPS traffic, and the bottleneck for them is usually the disk and sometimes DRAM, but never the encryption. On a modern CPU, you're DMAing from cache and you can encrypt a lot faster than line rate (particularly with AES, where it's almost entirely in fixed-function hardware), and then the encrypted data is right next to the DMA unit, ready to send to the NIC.
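              A quick way to sanity-check that claim on your own hardware is a bulk-encryption micro-benchmark. This sketch uses the pyca/cryptography package (which uses AES-NI via OpenSSL where available); the buffer size and cipher choice are illustrative assumptions, not Netflix's actual configuration.

```python
# Sketch: measure bulk AES-GCM throughput and compare it with typical
# NIC line rates. Purely illustrative; numbers vary by CPU.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
buf = os.urandom(64 * 1024 * 1024)  # 64 MiB of dummy payload
nonce = os.urandom(12)

start = time.perf_counter()
aead.encrypt(nonce, buf, None)
elapsed = time.perf_counter() - start
print(f"AES-128-GCM: {len(buf) / elapsed / 1e9:.2f} GB/s on this CPU")
```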

      • No it isn't. If you fail to encrypt, your ISP, your ISP's ISP, and any snooping government can tell conclusively what you have downloaded. If you do encrypt, the eavesdropper can see only what domain you're accessing and the sizes of what you download.

        For most publicly available sites this is simply not true. Counting bytes and timing analysis are more than enough to reconstruct a user's activity with a high degree of confidence.

        You can obfuscate even the sizes by using range requests to pull the 4 GB disk image a 4 MB chunk at a time.

        Is it really more difficult for an adversary to sum up a bunch of 4MB chunks?

        Then take your business elsewhere. Switch from a hosting provider that charges extra for HTTPS to a competing hosting provider that does not charge extra for HTTPS.

        Telling someone who doesn't see the point of HTTPS for x, y, and z to get a new provider is not likely to result in a positive outcome.

        • by tepples ( 727027 )

          Is it really more difficult for an adversary to sum up a bunch of 4MB chunks?

          Yes. For example, once you have the content length, you can always request the end of a roughly 4 GB file as a full 4 MB range rather than a partial chunk, by seeking to 4 MB before the content length. Or, for an additional data cost smaller than 1 percent, you can randomly request one to ten extra chunks at various points in the file.
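          A hedged sketch of those two countermeasures, again with a hypothetical mirror URL: the final range is pulled at full width by seeking back from the content length, and a handful of decoy ranges are fetched and discarded.

```python
# Sketch: anti-analysis extras for a chunked fetch. The URL is a
# hypothetical placeholder; the file is assumed to be at least one
# chunk long.
import random
import requests

URL = "https://mirror.example.org/disk-image.iso"  # hypothetical
CHUNK = 4 * 1024 * 1024  # 4 MiB

total = int(requests.head(URL, allow_redirects=True).headers["Content-Length"])
assert total >= CHUNK

def get_range(start, end):
    # Fetch one inclusive byte range.
    r = requests.get(URL, headers={"Range": f"bytes={start}-{end}"})
    r.raise_for_status()
    return r.content

# 1. Request the tail at full width by seeking to CHUNK bytes before
#    the end, so the final transfer matches all the others in size.
tail = get_range(total - CHUNK, total - 1)

# 2. Fetch one to ten decoy chunks at random offsets and discard them,
#    so the byte count an observer sums no longer equals the file size.
for _ in range(random.randint(1, 10)):
    offset = random.randrange(0, total - CHUNK + 1)
    get_range(offset, offset + CHUNK - 1)
```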

          Telling someone who doesn't see the point of HTTPS for x,y and z to get a new provider is probably not likely to result in a positive outcome.

          That was directed at people who do see the point "but...".

          • Yes. For example, once you have the content length, you can always request the end of a roughly 4 GB file as a full 4 MB range rather than a partial chunk, by seeking to 4 MB before the content length. Or, for an additional data cost smaller than 1 percent, you can randomly request one to ten extra chunks at various points in the file.

            Simply chunking a 4 GB file is not the same as implementing a padding scheme. No doubt measures can be implemented to deny timing and size analysis to adversaries. This isn't really the issue.

            None of this actually exists in the real world across the vast majority of systems deployed today. To achieve the above you either have to write a custom http client or get explicit buy-in from the operator to implement something at a higher level. This has real-world consequences: assertions that simply adding SSL hides what users download are misleading. What is possible is academic if it isn't being done.

            • by tepples ( 727027 )

              To achieve the above you either have to write a custom http client [...] What is possible is academic if it isn't being done.

              Except 90 percent of it has been done in existing download managers. I imagine the anti-analysis features that I described (always retrieve full-size final range and retrieve dummy ranges) are straightforward to add to a download manager. Do you need me personally to create a proof of concept in order for it to become no longer "academic"?

              • Do you need me personally to create a proof of concept in order for it to become no longer "academic"?

                Not relevant.

                The point isn't "how" or "whether" something can be achieved. It's the simple fact it has not actually been achieved by any measurable percentage of users therefore any benefit arising from its existence is not being felt... in other words it's academic. Merely creating a "proof of concept" changes nothing.

      • Comment removed (Score:5, Insightful)

        by account_deleted ( 4530225 ) on Wednesday December 20, 2017 @08:48PM (#55780587)
        Comment removed based on user account deletion
        • by tepples ( 727027 )

          KILL JAVASCRIPT DEAD

          Have fun click-click-clicking through a server-side image map that fully reloads the page every time. Or have fun not being able to use an application at all because instead of being a web application, it was developed as a native application for an operating system other than yours.

        • Re: (Score:2, Interesting)

          by AmiMoJo ( 196126 )

          Makes it harder to inject malware, a favourite tactic of governments.

          Also increases the cost of mass surveillance to the point where it is impractical.

          The web should have been fully encrypted from the start.

          • by tepples ( 727027 )

            The web should have been fully encrypted from the start.

            With what? 40-bit keys? At the start of the web, competent encryption was considered a munition in some economically important countries.

            • Also, the CPUs weren't as good -- the time needed to encrypt and decrypt would have been a far greater percent of the available CPU than today, even with the shorter keys of the era. It wasn't practical to do a lot with encryption in the mid-1990s. Zipping up some files the size of a floppy disk into a password-protected .zip could take 20 minutes on the desktop I had; only a couple minutes without the password.
            • by AmiMoJo ( 196126 )

              The web was invented in the UK. No export restrictions on crypto at the time.

        • by Askmum ( 1038780 ) on Thursday December 21, 2017 @02:39AM (#55781549)

          This is a classic example of the "we have to DO SOMETHING!" bullshit, a variation of the "think of the children!" kind of thinking...

          I totally agree. I have a small personal website that hosts some stats about my server (disk usage and such) and hosts pictures I want to share with people.

          Why would that site be unsafe? I use no cookies, I do not require logins. Why would my site be branded like that because some has-been company pushes their agenda?

          • Why would my site be branded like that because some has-been company pushes their agenda?

            Because the people selling certs need some leverage to sell their overpriced products. That is what it is about. Nothing to do with security.

          • by Dagger2 ( 1177377 ) on Thursday December 21, 2017 @05:09AM (#55781851)

            Because it's open to MITM and passive snooping. There have been cases of networks inserting DDoS code into unencrypted webpages to recruit clients into attacking an unrelated site. (Or if you prefer, cases of networks inserting cryptocoin miners.) It's also possible to exploit security vulnerabilities in the client by injecting code into a plain-text connection, thus hiding the source of the exploit (and saving you the effort of tricking the client into visiting your own site).

            Plain-text HTTP is just plain unsafe. That's why it should be branded as unsafe.

      • How can you be sure that the SHA-256 value against which you are verifying the disk image hasn't itself been tampered with on its way to your device?

        True. And HTTPS encryption isn't all that taxing on a modern CPU. E.g., newer CPUs can do an AES round with a single instruction (AES-NI):

        https://en.wikipedia.org/wiki/... [wikipedia.org]

    • Let's say I'm downloading a file that's several GB, like a disk image. When I download it, I'll verify the signature. If it's valid, the file is usable. Encrypting the entire download is a waste of resources for both the server and client.

      As long as the signature file was delivered over HTTPS and you didn't have any evil root certificate authorities installed on your client, you would be fine. If the insecure download was tampered with, signature verification would fail, as you say.

      Encrypting downloads is not that big of a deal resource-wise these days, though. Why not let HTTPS handle MITM detection for you? ;) Most users won't check a sig file anyway.
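      If the file ships with a detached PGP signature rather than a bare hash, the same check can shell out to GnuPG. A sketch with hypothetical filenames, assuming the signer's public key is already in the local keyring:

```python
# Sketch: verify a detached GPG signature (fetched over HTTPS) against
# a file fetched over HTTP. Filenames are hypothetical placeholders;
# assumes the signer's key is already imported into the keyring.
import subprocess

result = subprocess.run(
    ["gpg", "--verify", "disk-image.iso.sig", "disk-image.iso"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    raise SystemExit(f"signature check failed:\n{result.stderr}")
print("signature OK")
```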

    • by truedfx ( 802492 )
      Indeed not everything needs to be encrypted, and in some specific circumstances HTTP may be the better option, but firstly, the average user cannot tell what does and does not need to be encrypted, and secondly, even in those cases where HTTP is the better option, it's usually close enough nowadays that it doesn't make that much of a difference. Because of that, I'd be perfectly happy with HTTPS becoming the norm, HTTP flagged as insecure, but HTTP nonetheless continuing to be supported in browsers indefinitely.
    • I run software that distributes non-sensitive data across wide area networks... many people at each site want the same data, so I stick a web caching proxy on the site, and the big data (many gigs' worth) is transferred once, then served from the local caching proxy. Encrypting means the caching proxy needs to man-in-the-middle the connection, or it's just borked. Stupid.
      • It seems to me that web caching of this sort is essentially the only argument for using plain old HTTP.

        TheRaven has already made a strong case against the think-of-the-CPU-overhead argument, but breaking web caching is a legitimate downside of HTTPS. Ideally there'd be an automatic checksum check after downloading over plain HTTP. Fun fact: this is precisely what Steam does. [valvesoftware.com] If they used a proprietary protocol, or HTTPS, then caching (whether by ISPs or by 'local' sysadmins) wouldn't be possible, to the detriment of users.

    • "Let's say I'm downloading a file that's several GB, like a disk image. When I download it, I'll verify the signature."

      If you are tech savvy enough to do this, then you are tech savvy enough to realize that the icon means you are using HTTP, not HTTPS. They aren't stopping you from using HTTP, just making sure you are aware that you are using HTTP rather than HTTPS. So what is the problem again?

    • What waste? Modern processors have AES in hardware. Except for the initial TLS negotiation (and even that is hardware accelerated on most systems) it costs almost nothing to use encryption.

  • Stupid (Score:2, Informative)

    by Anonymous Coward

    This is completely retarded. Not every site needs https.

    • But it's apparently very important to educate users to ignore yet another legitimate warning indication.

      • But it's apparently very important to educate users to ignore yet another legitimate warning indication.

        What's worse is the implication that if it isn't telling you that it is not secure, it must be secure, because it's using https.

    • This is completely retarded. Not every site needs https.

      That's for the users to determine, so telling them which sites are secure and which are not makes perfect sense.

  • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Wednesday December 20, 2017 @05:47PM (#55779603) Homepage Journal

    HTTPS requires a certificate, and a certificate requires a fully qualified domain name. The CA/Browser Forum's Baseline Requirements forbid issuing certificates for RFC 1918 private addresses (such as 10/8 and 192.168/16) or for names in the mDNS reserved domain (.local). This means everything on the average user's local area network will end up marked "Not Secure", such as the administration interface of the user's router, printer, or network attached storage (NAS) device.

    The document "Deprecating Non-Secure HTTP" [mozilla.org] states that Mozilla is aware of this problem but fails to offer a solution:

    Q. What about my home router? Or my printer?

    The challenge here is not that these machines can’t do HTTPS, it’s that they’re not provisioned with a certificate. A lot of times, this is because the device doesn’t have a globally unique name, so it can’t be issued a certificate in the same way that a web site can. There is a legitimate need for better technology in this space, and we’re talking to some device vendors about how to improve the situation.

    It should also be noted, though, that the gradual nature of our plan means that we have some time to work on this. As noted above, everything that works today will continue to work for a while, so we have some time to solve this problem.

    • by Octorian ( 14086 )

      What's even worse, is that many of these devices use HTTPS with an unverifiable certificate (either self-signed, missing an FQDN due to being local, etc). This is extremely annoying (and likely confusing to many) when trying to access such devices, to the point where they probably seem outright broken to an "average" user.

      I wish one of these organizations would come up with some solution to that problem, which everyone can adopt.

      For my own purposes, I set myself up an "internal CA" and loaded its certs on all my browsers/devices.

      • > I set myself up an "internal CA" and loaded its certs on all my browsers/devices.

        This is the usual solution for big companies and capable users.

        However the flaw is in the certificate specs. Certificates and crypto library auth policies do not have the semantics defined to declare "This cert is for this specific local domain and address space with this unique identifier" so it can be distinguished from all other such places with an identical domain and address space. It's a solvable problem. The browser vendors and CAs have little incentive to solve it. It's why a CA can charge hundreds of dollars to perform 50ms of compute effort.
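        For anyone who wants to try the internal-CA route the grandparent describes, here is a rough sketch using the pyca/cryptography package; the names, lifetimes, and file paths are hypothetical, and a real deployment would also need secure key storage and some plan for revocation.

```python
# Sketch: a tiny "internal CA" that signs a cert for a LAN device.
# Names and lifetimes are hypothetical placeholders.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

now = datetime.datetime.now(datetime.timezone.utc)

def make_cert(subject, issuer, pubkey, signing_key, *, ca, days, san=None):
    # Build and sign an X.509 certificate with the given lifetime.
    builder = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)
        .public_key(pubkey)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=days))
        .add_extension(x509.BasicConstraints(ca=ca, path_length=None), critical=True)
    )
    if san:
        builder = builder.add_extension(x509.SubjectAlternativeName(san), critical=False)
    return builder.sign(signing_key, hashes.SHA256())

# 1. Self-signed root, to be imported into your browsers/devices.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home Lab CA")])
ca_cert = make_cert(ca_name, ca_name, ca_key.public_key(), ca_key, ca=True, days=3650)

# 2. Leaf cert for a LAN device, with a private-use name in the SAN.
dev_key = ec.generate_private_key(ec.SECP256R1())
dev_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "router.lan")])
dev_cert = make_cert(
    dev_name, ca_name, dev_key.public_key(), ca_key,
    ca=False, days=825, san=[x509.DNSName("router.lan")],
)

with open("ca.pem", "wb") as f:
    f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
with open("router.pem", "wb") as f:
    f.write(dev_cert.public_bytes(serialization.Encoding.PEM))
```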

        • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Wednesday December 20, 2017 @06:28PM (#55779911) Homepage Journal

          It's why a CA can charge hundreds of dollars to perform 50ms of compute effort.

          The "50 ms of compute effort" certificates are domain-validated, with just CRL and OCSP as ancillary services. Those typically cost $15 for three years (ssls.com) or nothing for 90 days (letsencrypt.org). The certificates that cost hundreds of dollars are Extended Validation, which ensure not only a connection between the certificate and the domain owner but also that a vandal isn't typosquatting the domain itself. These often come with greater insurance guarantees.

          • by TechyImmigrant ( 175943 ) on Wednesday December 20, 2017 @06:40PM (#55780001) Homepage Journal

            It's why a CA can charge hundreds of dollars to perform 50ms of compute effort.

            The "50 ms of compute effort" certificates are domain-validated, with just CRL and OCSP as ancillary services. Those typically cost $15 for three years (ssls.com) or nothing for 90 days (letsencrypt.org). The certificates that cost hundreds of dollars are Extended Validation, which ensure not only a connection between the certificate and the domain owner but also that a vandal isn't typosquatting the domain itself. These often come with greater insurance guarantees.

            And all those services and fees have nothing to do with my options for securing my own stuff. In fact they just make things worse.
            As I wrote on another thread, I ran Let's Encrypt's scripts and they crashed. It's a joke built with shoddy code.

            I built a CA once, with bespoke software, a screened room, air gaps, man traps, and the whole malarkey. All to certify communication devices, because none of the cert vendors were interested in selling certs for a few cents each for millions of devices.

            The more I have dealt with the cert industry, the more I hate it.

            • As I wrote on another thread, I ran Let's Encrypt's scripts and they crashed. It's a joke built with shoddy code.

              Did you file a bug report? There are millions of people who had no problem running the scripts on a wide variety of hardware and software.

              The more I have dealt with the cert industry, the more I hate it.

              So you should be on board with what Let's Encrypt is trying to do, which is stripping the unnecessary garbage out of the CA process for everything that a simple automated domain-ownership check can handle.

              • by Octorian ( 14086 )

                So you should be on board with what Let's Encrypt is trying to do, which is stripping the unnecessary garbage out of the CA process for everything that a simple automated domain-ownership check can handle.

                Let's Encrypt is probably a great option if you're trying to secure a general-purpose Linux server somewhere.
                But if you're trying to secure something their scripts won't run on, then it's a PITA that isn't really helping.
                Most of what we're complaining about is stuff their scripts won't run on.

      • by vux984 ( 928602 )

        And the solution is really simple:

        Firefox, Chrome, etc. should have different rules when accessing devices on 10.x.x.x, 192.168.x.x, and so on.

        Especially if localhost is on the same subnet, or a traceroute to the device never crosses a public internet address. With equivalent rules for IPv6.

        For me at least the VAST majority of the time I'm accessing these devices I'm on the same private subnet. There are a couple of scenarios at work where things are separated, and I might be accessing 10.1.1.x from 10.5.5.x, etc.

        It can s
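        Detecting the on-a-private-subnet case the parent describes is mechanically easy. A sketch of the kind of check a browser could run, using only Python's standard ipaddress module; the .local/.lan name handling is a hypothetical simplification:

```python
# Sketch: the sort of private-destination check a browser could apply
# before deciding how to present certificate warnings.
import ipaddress

def is_private_destination(host: str) -> bool:
    """True for RFC 1918 / loopback / link-local addresses (v4 or v6)."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not an IP literal: fall back to a crude local-name check.
        return host.endswith(".local") or host.endswith(".lan")
    return addr.is_private or addr.is_loopback or addr.is_link_local

for host in ["192.168.1.1", "10.5.5.12", "8.8.8.8", "fe80::1", "printer.local"]:
    print(host, is_private_destination(host))
```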

    • Re: (Score:3, Insightful)

      Great. Another layer of DRM. Printer doesn't work unless you're plugged into the internet and paying for 'up-to-date' certificates from the vendor.
    • Q. What about my home router? Or my printer?

      The challenge here is not that these machines can't do HTTPS, it's that they're not provisioned with a certificate. A lot of times, this is because the device doesn't have a globally unique name, so it can't be issued a certificate in the same way that a web site can. There is a legitimate need for better technology in this space, and we're talking to some device vendors about how to improve the situation.

      It should also be noted, though, that the gradual nature of our plan means that we have some time to work on this. As noted above, everything that works today will continue to work for a while, so we have some time to solve this problem.

      The solution is logging into the device using TLS-SRP, but that doesn't enrich the CAs, so no chance in hell.

  • I guess it depends; but when your rival has about 5 times your market share, you do not matter that much...or do you?

    • Troll!?

      So I suppose that Tesla doesn't matter much, since it sells a tiny fraction of the cars that any of the major manufacturers do. The band you might like doesn't matter because there are so many bigger ones. Linux (and macOS) doesn't matter, since MS-Windows dwarfs them in desktop market share. Wind power doesn't matter much, since natural gas has a zillion times the market share. Yeesh.

  • by jwhyche ( 6192 ) on Wednesday December 20, 2017 @06:00PM (#55779685) Homepage

    Outstanding. Now how will I disable this problem?

  • Thanks for pouring napalm on the fire.

  • "...when activated will show a visible visual indicator..."

    In my 35 years in the computer industry, I have always found that visual indicators that were visible were much more effective than ones that weren't. But then, I'm kind of old-school...

    ftp://

    telnet:// [telnet]

    smb://

    I got a nice Let's Encrypt certificate that auto-renews, and I've pushed any external HTTP requests to HTTPS on my router.

    And I have a pretty big list of CIDR ranges and URL strings that result in blocked transactions.

  • >"a red line striking through a classic lock that's normally used to signal the presence of encrypted HTTPS pages"

    Really, that sounds OK to me. It is a reasonable warning "for the masses." But ONLY if it stops there. No pop-ups, no dialogs, no animation, no nagging, no striking through the URL, etc.

    Not everything needs to be https, and things that aren't are not necessarily any problem. Mozilla can earn bonus points by keeping the about:config option that allows the user to enable/disable the insecure-http icon.

  • Idiotic (Score:4, Insightful)

    by slashmydots ( 2189826 ) on Wednesday December 20, 2017 @08:52PM (#55780599)
    Oh good, now I can pay like $100 a year for an encryption cert that I don't need just to run my static, read-only website that tells people what my business does and where it is and how to contact me. Awesome.
    • And your router, laser printer, IP cam, smart TV, Steam Link, Nvidia Shield, the desktops those two connect to, IP thermostat, NAS, and VoIP box. Every single thing with an admin page, in fact.

      Someone sarcastically mentioned the answer is "cloud-based admin pages" so you can be tracked - and sold an "admin" service with a monthly fee - and I'm afraid they're right... dammit.
    • $100? Shit, mine was free. You overpaid.

  • by Anonymous Coward on Wednesday December 20, 2017 @08:59PM (#55780639)

    The geniuses at Mozilla decided to hide the http: prefix from the user some time ago, so instead of http://www.cnn.com/ [cnn.com] the user sees www.cnn.com

    The http: prefix indicates that THERE IS NO ENCRYPTION.

    Why hide it from the user and then add a non-standard indicator that there is no encryption?

    So many UI designers should be shot...

  • Firefox has become overrun by nannies lately, and is now purposely breaking itself. I've dumped it for Chrome. Not that I'm wild about Chrome, but at least it hasn't become a malfunctioning mess. Say hi to Netscape for us when you reach your destination, Mozilla.
  • by Dangerous_Minds ( 1869682 ) on Thursday December 21, 2017 @12:54AM (#55781307)
    It does concern me that some of the smaller sites will struggle to survive. If a hypothetical site is barely able to pay the server bills, the last thing it needs is an additional $15 charge per year (or more) tacked on just so that a percentage of users can access the site without alarms blaring that it's unsecured. I mean, sure, $15 a year doesn't sound like much, but if you're not a major site pulling in hundreds of dollars from ad impressions or subscription fees, that seemingly small fee is going to sting on the bottom line. No matter how you slice it, this raises the barrier to entry for new sites.

    Added to what is going on with the destruction of network neutrality in the US, this is almost like pouring salt on the wound. The number of users able to reasonably access your site may very well drop, yet Mozilla has decided that web admins need to add another layer of security that comes with fees in the process.
