
Some HTTPS Inspection Tools Actually Weaken Security (itworld.com) 102

America's Department of Homeland Security issued a new warning this week. An anonymous reader quotes IT World: Companies that use security products to inspect HTTPS traffic might inadvertently make their users' encrypted connections less secure and expose them to man-in-the-middle attacks, the U.S. Computer Emergency Readiness Team warns. US-CERT, a division of the Department of Homeland Security, published an advisory after a recent survey showed that HTTPS inspection products don't mirror the security attributes of the original connections between clients and servers. "All systems behind a hypertext transfer protocol secure (HTTPS) interception product are potentially affected," US-CERT said in its alert.
Slashdot reader msm1267 quotes Threatpost: HTTPS inspection boxes sit between clients and servers, decrypting and inspecting encrypted traffic before re-encrypting it and forwarding it to the destination server... The client cannot verify how the inspection tool is validating certificates, or whether there is an attacker positioned between the proxy and the target server.
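
One thing a client can check for itself is which CA issued the certificate it is actually served: behind an inspection product, that issuer is typically the product's internal CA rather than the site's public CA. A minimal sketch using only Python's standard library (the hostname below is a placeholder):

    import socket
    import ssl

    def served_cert_issuer(host: str, port: int = 443) -> dict:
        """Return the issuer fields of the certificate we are actually handed."""
        context = ssl.create_default_context()  # verifies against the local trust store
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()  # parsed cert of whoever terminated TLS
        # getpeercert() returns the issuer as a tuple of RDN tuples; flatten it.
        return {key: value for rdn in cert["issuer"] for (key, value) in rdn}

    if __name__ == "__main__":
        issuer = served_cert_issuer("example.com")  # placeholder hostname
        print("Issuer:", issuer.get("organizationName"), "/", issuer.get("commonName"))

If the issuer turns out to be an internal corporate CA, the connection is being terminated and re-originated by a middlebox.
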
  • by fisted ( 2295862 ) on Saturday March 18, 2017 @01:41PM (#54066095)

    might inadvertently make their users' encrypted connections less secure and expose them to man-in-the-middle attacks,

    Well no shit, given that the traffic inspection itself has to be done via a man-in-the-middle attack.

    • Yes, this.

      My reaction to this story was "well, duh." Anyone who didn't already know this is someone who isn't familiar enough with the concepts involved.

      • My reaction to this story was "well, duh." Anyone who didn't already know this is someone who isn't familiar enough with the concepts involved.

        Huh? There's no "concept involved" that leads to the inevitable conclusion that some HTTPS proxies won't do certificate authentication. It's an implementation error.

        Of course increasing complexity will increase the programming error rate, but that's not at all specific to this vulnerability. And since the vendors have patched these flaws, they're not inherent to t

        • by JohnFen ( 1641097 ) on Saturday March 18, 2017 @02:44PM (#54066315)

          The concept involved is the increase in the "surface area" of potential failure. If you've introduced a system that sits in the middle, decrypting communications, processing the communications, and re-encrypting them, you've also introduced quite a lot of things that can go wrong, and have increased the chances that something will.

          In the global view, given how common these things are, it approaches inevitability that there will be security problems.

          • Indeed. No need to target individual computers to access their encrypted data. All you need to do is target the MITM TLS inspection tool.

          • Yep, that's always the logic IT security uses when we try to get software approved and they deny it.

            Any IT group that implements this is NOT serious about decreasing the attack surface; they just want to spy on employees.

            • Or they are a school that wants students to have access to educational YouTube without all of YouTube. Honestly, that is likely the bulk of the reason for this. The average employer either blocks a whole site or allows it; no need for inspection for that. Google could just move educational stuff to a subdomain, but no, that would be too hard.

            • Re: (Score:3, Informative)

              by Mindscrew ( 1861410 )

              Just to make sure I'm clear here...

              Are you saying that only IT groups that are serious about security allow unknown encrypted data to pass out the perimeter with no regard to what could be present in it? Are you saying that IT groups should just accept the risk of data being exfiltrated over these unknown encrypted connections? What about C2 traffic?

              As someone who regularly performs Security Assessments and Penetration Tests for the Financial Industry in the US... I would say that's rather naive...

              There is a

              • by Bengie ( 1121981 )

                There is absolutely ZERO expectation of privacy when using an asset that is provided by your employer.

                Is that so? The only way I can access some of my medical information is via my work computer. Are you saying I have zero expectation of privacy when accessing my private medical data? I'm sure my company is not the only one that has many benefits that are only accessible via intranet services. IT has no right to view any of that data.

                • Is that so? The only way I can access some of my medical information is via my work computer. Are you saying I have zero expectation of privacy when accessing my private medical data? I'm sure my company is not the only one that has many benefits that are only accessible via intranet services. IT has no right to view any of that data.

                  In the US, anyway, Mindscrew is 100% correct. If you are using your employer's equipment, then the employer has every legal right to inspect the traffic you're generating and your

                  • Ack. Quoting problems! The first paragraph is an (unnecessary) quote of Bengie's comment.

                  • by Bengie ( 1121981 )
                    All intranet services are only accessible via work computers.
                    • Are you sure? That would be highly unusual. In every case I've seen, health care portals are not run by the company itself, but by a different company acting as a contractor. In each of those cases, my employer has offered intranet access as you describe -- but you can also reach the portal by going directly to the contractor's website outside of the intranet.

                      If this isn't what your company is doing, then something is very, very wrong with your company's procedures.

              • And I assume you have something in place to prevent the data on the machine from being encrypted (perhaps steganographically) locally before being sent out? Because otherwise your MITM system will only serve to prevent private information from being accidentally pasted in web forms. It would do nothing against exfiltration by malware or competent intentional leaking.

        • by imidan ( 559239 )
          I'd say there is a concept involved, and not just surface area. There's also the fact that authenticating certs is an additional step, that the system will appear to work without doing it, and that there exist programmers who are some combination of lazy and incompetent. It's the same concept involved when people write their own auth systems, or credential databases--they fuck it up because they don't understand the danger of storing plaintext passwords, or they invent their own "encryption," or any numbe
        • by guruevi ( 827432 )

          It's not an implementation error; it's an unavoidable feature of these SSL inspection proxies. You basically have to trust everything coming through the proxy, remove the capability of handling or inspecting the security yourself, and give up the notion that it's safe, secure and unchanged - that is the feature the people in charge of security at these organizations *want*.

          On the other hand, if an SSL vulnerability crops up, upgrading your browser (or the requirement to) doesn't matter anymore and you have

        • by Bengie ( 1121981 )
          It's not an implementation error. "Secure HTTPS MITM" is undefined. Some many even argue, an oxymoron.
    • Oh come on, don't be ridiculous. Next thing you'll try to tell us is that TSA luggage inspection makes the stuff in our suitcases less secure...

    • by rwyoder ( 759998 )

      might inadvertently make their users' encrypted connections less secure and expose them to man-in-the-middle attacks,

      Well no shit, given that the traffic inspection itself has to be done via a man-in-the-middle attack.

      Exactly.
      At a previous employer (large Fortune 500 company), I got roped into going to a class put on by the vendor of a proxy product.
      The instructor was a very sharp fellow who flat out stated that the "HTTPS inspection" feature was a MITM attack.
      Interesting thing was, this company was not using the feature due to the _legal_department_ prohibiting its use.

    • The fact that you can transparently MITM a TLS connection, as the interception proxies do, shows how broken the entire HTTPS ecosystem is. If any random proxy can MITM you without the TLS layer raising alarms, then so can the NSA, CIA, FSB, MSS, and anyone else who cares to exploit HTTPS' broken ecosystem. The only difference will be that the various TLAs will hopefully do it in a less broken manner than the commercial vendors do, so you can't tell you're being intercepted while your browser happily displ
      • by fisted ( 2295862 )

        The fact that you can transparently MITM a TLS connection,

        Fact? Transparently? You can? How?

    • by upuv ( 1201447 )

      Actually, that's not true, and the man-in-the-middle design is horrible for many reasons.

      The better inspection tools will use a lollipop design, where the terminating HTTPS device spans that traffic to another device for traffic analysis. Traffic that requires modification can then be routed through the lollipop device on a case-by-case basis.

  • It's as obvious as a zipper. If you decrypt to inspect, you have to re-encrypt before sending it back along its path. Done perfectly, it's zero impact.

    You have to verify that your re-zip is as good as the original zip, or duh, you've weakened your zipper.

    So it kind of begs the question: how thorough and deployment-specific is the testing itself in the first place?

    Why not test on multiple hops along the path if you can, including on a pseudo-client?

    • by karnal ( 22275 )

      I think they're also talking about validation of the source - i.e. the issue isn't just the connection the man-in-the-middle box accepts from the client, but whether the connection it makes to the server on the client's behalf is actually secure in and of itself.

      It's a two-pronged issue.

    • by Z00L00K ( 682162 )

      Since the certificate of the original site is very hard to re-create, the recipient would normally be able to detect a man-in-the-middle attack, but on company computers it may be a lot harder since the company also controls the clients.

      So don't do your private banking in a company environment.

      • So don't do your private banking in a company environment.

        Don't do private anything on company equipment.

        What I do, though, is use my own equipment (phone or tablet) to do my private stuff, and I use an SSH tunnel to my home server for internet access, so that the company network never sees any traffic that it can decrypt. Not a solution for everybody, but it works well for me.

      • Use certificate pinning. Yeah, you won't be able to get out over the corporate shit without first tunneling elsewhere, but your corporate spies won't be able to see your shit if you pin to known good certs.
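
        For illustration, a minimal pinning sketch using Python's standard library: record the SHA-256 fingerprint of the certificate (real deployments usually pin the SubjectPublicKeyInfo hash instead) while on a network you trust, then refuse connections whose fingerprint differs. The hostname and pin value are placeholders.

            import hashlib
            import socket
            import ssl

            EXPECTED_PIN = "0" * 64  # placeholder: SHA-256 hex recorded on a trusted network

            def cert_fingerprint(host: str, port: int = 443) -> str:
                """SHA-256 of the DER-encoded leaf certificate we are served."""
                context = ssl.create_default_context()
                with socket.create_connection((host, port), timeout=10) as sock:
                    with context.wrap_socket(sock, server_hostname=host) as tls:
                        der = tls.getpeercert(binary_form=True)
                return hashlib.sha256(der).hexdigest()

            def connection_is_pinned(host: str) -> bool:
                # An interception box serves its own certificate, so the pin won't match.
                return cert_fingerprint(host) == EXPECTED_PIN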

    • by Qzukk ( 229616 )

      Except in this case, the zipper was broken when you got it, you unzip it ignoring the broken teeth, zip it back up ignoring the broken teeth, and everyone assumes you're protecting them against this so they don't notice that it's gotten a bit drafty.

    • Done perfectly, it's zero impact.

      Which means there's always some amount of impact, since you can't guarantee that it's done perfectly. HTTPS inspection tools are engaging in a man-in-the-middle attack themselves, and are introducing a whole new attack surface. We don't just have to trust that the code itself is implemented perfectly, we also have to trust that the server running it is not compromised, or that a legitimate admin isn't engaging in some nefarious activity.

      This negates a rather large portion of the strength (such as it is) of

      • Yep, the IT group continually sends employees emails telling us how much they don't trust us, THEN they decrypt our SSL sessions and expect our Complete trust in them... no dice, you made it completely clear that No One could be trusted. How does your newfound snooping power change that concept?

    • by guruevi ( 827432 )

      The problems with SSL-unpacking proxies are the following (see the sketch after the list):
      - You can't pass along the original SSL certificates, so your clients all have to trust the SSL proxy
      - You can't verify with your clients whether the SSL certificate is correct, so you have to either accept or deny all 'broken' SSL certificates.
      - Since a significant portion of SSL implementations are expected to be broken (especially once you start dealing with parties like Microsoft, IBM and Oracle), your implementation also has to be broken.
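
      For context, the upstream-side check an inspection proxy is supposed to perform before re-encrypting toward the client looks roughly like the sketch below (placeholder hostname, Python standard library only); the US-CERT finding is that some products skip or weaken exactly this step.

          import socket
          import ssl

          def open_validated_upstream(host: str, port: int = 443) -> ssl.SSLSocket:
              """Connect to the origin server, rejecting bad chains, bad names and old TLS."""
              context = ssl.create_default_context()            # system trust store
              context.verify_mode = ssl.CERT_REQUIRED           # reject unverifiable chains
              context.check_hostname = True                     # reject name mismatches
              context.minimum_version = ssl.TLSVersion.TLSv1_2  # no weak-protocol fallback
              sock = socket.create_connection((host, port), timeout=10)
              return context.wrap_socket(sock, server_hostname=host)  # raises ssl.SSLError on failure

      A proxy that instead disables verification (ssl.CERT_NONE) will accept a forged upstream certificate and still present a locally trusted one to the client, which is the failure mode the advisory describes.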

      • You can't verify with your clients whether the SSL certificate is correct, so you have to either accept or deny all 'broken' SSL certificates.

        That or the proxy could return a 526 ("Invalid Server Certificate") status. (The 52x status codes are defined not by any RFC but unofficially by the Cloudflare reverse proxy service.) The response body describes the problem and displays the details of the certificate that the proxy does not trust. If the user logged into the proxy has "Allow" privileges, the body contains an "Allow" link to let this particular device use this particular upstream certificate despite its use of an unknown issuer or obsolete c

        • by guruevi ( 827432 )

          And as you said, this is a single-vendor workaround and works only if you're interactive.

  • by I'm New Around Here ( 1154723 ) on Saturday March 18, 2017 @01:46PM (#54066119)

    So now Slashdot has to have ads everywhere, even across the page as I scroll down.

    Actually, it's just a grey bar, because the adblocker stops the actual content. But I still have a grey bar that I don't want.

    So long slashdot. It's been nice knowing you over the last 16 years.

    • It wouldn't be so bad if the ads weren't shifting the page by 200 pixels all through reading
      • It wouldn't be so bad if the ads weren't shifting the page by 200 pixels all through reading

        Oh, that only impacts mobile devices - so it's obviously a tiny minority of the readership that's affected.

        Just like the lack of unicode support doesn't impact a significant number of users either. Picky readers...

        • No, my page shifts around in firefox. On my phone I get a full page ad inviting me to enter a contest that I can't make go away. I have to close my browser and come back to read slashdot. At first I thought I had malware of some sort but it only happened on slashdot.
    • Slashdot was never good.

    • by antdude ( 79039 )

      I don't see those in my uBlock Origin.

    • What grey bar? Just install the YesScript plugin and disable JavaScript for slashdot (and do so as well whenever you get to one of the subdomains for a story). About the only thing that doesn't work when you disable JavaScript on /. is the ability to enter your own tags for a story. Block JavaScript for /. and all the places where ads would have gone don't even show up. It also stops the damn auto-refresh where you lose your place on the page. You get nice clean formatting. I agree that browsing /. wit
  • by jddj ( 1085169 ) on Saturday March 18, 2017 @03:31PM (#54066513) Journal

    How is this inadvertent?

    These tools have been out there for years.

    The user of the inspection box is INTENTIONALLY looking at my encrypted data, which could include PHI, PCI, or just plain shit I don't want them to see. My security has already been breached.

    That these boxes are even possible to create and deploy (i.e. that someone CAN grant a CA for the box (not even that someone will do so)) shows the untenability of the entire "web of trust" for certs that is supposed to make you certain your data isn't being hijacked over TLS.

    As long as this is out there, one can have _zero_ confidence any TLS-encrypted session isn't being hijacked.

    I hope there's a rebuild of encrypted transport, and that next time, they don't make certificates so horsey. No, I don't know how to do that perfectly. Seems there's no way to do it peer-to-peer if I have to go down to every bank or business with a printout of their cert and match it up.

    Maybe there's something blockchain technology could offer to make certs truly verifiable...

    • by guruevi ( 827432 )

      An immutable database doesn't help here. This is about the intentional breaking of security for whatever reason. It's similar to installing malware on your computer; you can't prevent stupid people from doing stupid stuff.

      CAs are supposed to be responsible for the trust of a certificate. Even if a bank were to print out their certificate hash, you can technically generate another certificate with the same hash, and you would also have to trust the $10/h teller and the branch's printer - I'd much rather

      • Yes, but the exposure here is in no way related to the bank's choice of a cert provider. The bank doesn't enter into it, except as a place to rob.

        Because top-level CAs CAN issue more CAs, some WILL: to governments, and accidentally or on purpose to freelance thugs. The thug sets up an interception box with said CA, and starts DNS poisoning attacks: he's got you.

        Would prefer a system where issuance of a CA is a matter of real-time verifiable record, as would be each CA and cert on my machine. The browser coul

        • Would prefer a system where issuance of a CA is a matter of real-time verifiable record [...] blockchain might help here

          Such a system is being built in response to the DigiNotar fraud of 2011. It's called Certificate Transparency [wikipedia.org]. And like Bitcoin, it uses a Merkle tree. Chrome already requires it for EV certificates and for certificates issued by Symantec.
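
          For the curious, the tree head a CT log publishes is the RFC 6962 Merkle Tree Hash. A toy sketch (not a CT client) of how it is computed from logged entries:

              import hashlib

              def _h(data: bytes) -> bytes:
                  return hashlib.sha256(data).digest()

              def merkle_tree_hash(leaves: list[bytes]) -> bytes:
                  """RFC 6962: leaves are hashed with a 0x00 prefix, interior nodes with 0x01,
                  splitting at the largest power of two strictly less than the leaf count."""
                  n = len(leaves)
                  if n == 0:
                      return _h(b"")
                  if n == 1:
                      return _h(b"\x00" + leaves[0])
                  k = 1
                  while k * 2 < n:
                      k *= 2
                  return _h(b"\x01" + merkle_tree_hash(leaves[:k]) + merkle_tree_hash(leaves[k:]))

              # Changing any logged entry changes the published tree head.
              print(merkle_tree_hash([b"cert-A", b"cert-B", b"cert-C"]).hex())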

        • by guruevi ( 827432 )

          a) You can get multiple certificates from multiple CAs for the same domain for good reason (say you want to switch providers or have multiple servers you need to secure). An SSL certificate only establishes a weak verification of ownership and no relationship whatsoever to your IP or a particular system.
          b) You would have to trust and be able to scan hundreds of CAs containing millions of SSL certificates. Most blockchains take anywhere from 2-10 minutes to do a verification, can take up to days or weeks t

          • Yes, but what makes this box work is a phony _CA_. If I can verify the CA with a world-verifiable blockchain, then can't I trust the cert? Or at least make a smart decision about doing so?

            Seems the size of the CA problem is orders of magnitude smaller.

            What I want is to sniff the phony CA(s) and distrust all certs from it.

      • Even if a bank were to print out their certificate hash, you can technically generate another certificate with the same hash

        Good luck with that. For one thing, the existing SHA-1 attack is a collision, not a preimage, which is orders of magnitude harder. For another, web browsers aren't supporting new SHA-1 certificates, and SHA-256 isn't in foreseeable danger of even a collision.

  • No shit, Sherlock, this announcement is 10 years too late. Years ago the US-CERT list was much better and regularly provided somewhat useful information. Now traffic is nearly nil and the warnings they send are comically outdated.

  • by jonwil ( 467024 ) on Saturday March 18, 2017 @04:24PM (#54066695)

    Various proposals have been put forward to replace parts of SSL/TLS (including the broken CA model) with better things that can't be easily targeted with man-in-the-middle attacks.
    The EFF has the Sovereign Keys project.
    DANE stores security-related information in DNS (see the TLSA sketch below) and is the subject of several RFC standards.
    Other proposals exist to replace some or all of SSL/TLS as well.

    Why are people out there in the real world (makers of web browsers and servers for example) not interested in implementing any of these alternatives to the current horridly broken system?
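
    As an illustration of the DANE idea mentioned above, here is a minimal sketch (assuming the third-party dnspython package, and glossing over DNSSEC validation of the answer itself) that fetches a host's TLSA record and compares it to the SHA-256 of the certificate actually served. The hostname is a placeholder, and only the "DANE-EE, full certificate, SHA-256" case (usage 3, selector 0, matching type 1) is handled.

        import hashlib
        import socket
        import ssl

        import dns.resolver  # third-party: pip install dnspython

        def tlsa_matches(host: str, port: int = 443) -> bool:
            """True if a TLSA record matches the SHA-256 of the served certificate."""
            answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")

            context = ssl.create_default_context()
            with socket.create_connection((host, port), timeout=10) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    der = tls.getpeercert(binary_form=True)
            cert_hash = hashlib.sha256(der).digest()

            # usage 3 / selector 0 / mtype 1: end-entity match on the full cert, SHA-256
            return any(rr.usage == 3 and rr.selector == 0 and rr.mtype == 1
                       and rr.cert == cert_hash for rr in answers)

    As the replies below point out, this only helps if the DNS answer is itself validated end to end with DNSSEC.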

    • by guruevi ( 827432 )

      Because they all have the same problems and don't fix any of the article's problems. I can repackage google.com's DNS response and sign it with my own DNS key; as long as the clients of my server(s) 'trust' that I am Google, they will accept it as such, and that is the entire point of security.

      The simplest fix for these issues is to delete your company's and Microsoft's CA (if they use AD) from your computer - problem solved, it will no longer trust your company and you probably won't be able to get o

      • by jonwil ( 467024 )

        The way DNSSEC works is that everything ties back to the root nameservers and their special keys.
        Unless you replaced the root DNS trust anchor in the OS/browser on every single system on your network (something that any well-written OS/browser should make it VERY hard to do) AND re-signed every single DNS response passing through that network with a new set of keys chained off that new root trust anchor, you won't be able to defeat DNSSEC.

        It's not like SSL where you can just add a certificate from your HTTPS-inspe

        • by tepples ( 727027 )

          Unless you replaced the root DNS trust anchor in the OS/browser on every single system on your network (something that any well-written OS/browser should make it VERY hard to do)

          I don't see how that'd work. Even if it's hardcoded into the kernel, someone who controls all endpoints on a LAN can just recompile the kernel from source [kernel.org] with a patch that changes the trust anchor to that of the MITM.

          • by jonwil ( 467024 )

            Not if you are on Windows (the kind of organization that would be doing MITM of this sort is likely going to be the kind of organization that runs Windows)

            • by tepples ( 727027 )

              Windows supports multiple DNSSEC trust anchors (source [microsoft.com]) and deploys them through Active Directory Domain Services (source [microsoft.com]).

        • by guruevi ( 827432 )

          As you say, you have to insert the DNS trust anchor in the OS, just like you insert the CA in every OS you control for these proxies to work. Re-signing or filtering out the DNSSEC for a request is trivial at that point (basically make it appear as if the entire DNS tree doesn't have DNSSEC).

  • Antivirus and sandboxing programs such as Invincea run as root (any program that requires root to sandbox a user-space program is just a bad idea). The quality of programming and design that goes into some of these programs is appalling. It would be nice to educate employees not to click on every link and to be suspicious of certain emails, but unfortunately most corporations find it too inconvenient to actually authenticate their corporate emails, so a vigilant employee would miss any company-wide no
  • All of them do. They completely smash and destroy it, and then fool you into thinking it's safe.

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...