Encryption Security

Theo De Raadt's Small Rant On OpenSSL

New submitter raides (881987) writes "Theo De Raadt has been on quite a roll as of late. Following his rant about FreeBSD playing catch-up, he now has something to say about OpenSSL. It is worth the five-second read, because it captures how a few thousand of us feel about the whole thing and the stupidity that caused this panic." Update: 04/10 15:20 GMT by U L : Reader badger.foo pointed out that Ted Unangst (the Ted in the mailing list post) wrote two posts on the issue: "heartbleed vs malloc.conf" and "analysis of openssl freelist reuse", for those seeking more detail.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Thursday April 10, 2014 @10:54AM (#46714281)
    Years ago the BSD guys added safeguards to malloc and mmap, but OpenSSL effectively disabled them on all platforms only because they caused performance problems on some platforms. He finishes by saying that OpenSSL is not developed by a responsible team.
  • by badger.foo ( 447981 ) <peter@bsdly.net> on Thursday April 10, 2014 @10:55AM (#46714297) Homepage
    OpenBSD developer Ted Unangst (mentioned in the article) has gone into the code a bit more in two articles, both very well worth reading:

    heartbleed vs malloc.conf [tedunangst.com] and analysis of openssl freelist reuse [tedunangst.com]. Short articles with a lot of good information.

  • by Anonymous Coward on Thursday April 10, 2014 @11:00AM (#46714363)

    It is good practice to sign end-entity certificates with an intermediate certificate. That way, if the intermediate is compromised you can revoke it and issue a new intermediate signed by your root certificate. You can push the new certificates out as updates, since they will still validate against the root certificate.

    You need to read up on authenticating the entire chain of certificates.
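    For the curious, here is a minimal sketch in C (against the OpenSSL 1.0.x-era API being discussed, with placeholder file names and no error handling) of what "authenticating the entire chain" looks like in practice: only the root goes into the trust store, the intermediate is supplied as an untrusted chain element, and the library walks the signatures back up to the root.

        /* Hypothetical sketch: verify server.pem against the chain
         * root -> intermediate -> server. Only the root is trusted a priori;
         * the intermediate is just an untrusted link that must chain up to it. */
        #include <stdio.h>
        #include <openssl/pem.h>
        #include <openssl/x509.h>
        #include <openssl/x509_vfy.h>

        static X509 *load_cert(const char *path)
        {
            FILE *fp = fopen(path, "r");
            X509 *cert = fp ? PEM_read_X509(fp, NULL, NULL, NULL) : NULL;
            if (fp) fclose(fp);
            return cert;
        }

        int main(void)
        {
            X509 *root   = load_cert("root.pem");         /* trust anchor, kept offline */
            X509 *inter  = load_cert("intermediate.pem"); /* signs end-entity certs */
            X509 *server = load_cert("server.pem");       /* certificate to validate */

            X509_STORE *store = X509_STORE_new();
            X509_STORE_add_cert(store, root);             /* only the root is trusted */

            STACK_OF(X509) *untrusted = sk_X509_new_null();
            sk_X509_push(untrusted, inter);               /* presented alongside the server cert */

            X509_STORE_CTX *ctx = X509_STORE_CTX_new();
            X509_STORE_CTX_init(ctx, store, server, untrusted);

            if (X509_verify_cert(ctx) == 1)
                printf("chain verifies up to the root\n");
            else
                printf("verification failed: %s\n",
                       X509_verify_cert_error_string(X509_STORE_CTX_get_error(ctx)));
            return 0;
        }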

  • by Anonymous Coward on Thursday April 10, 2014 @11:27AM (#46714691)

    I don't get what you're saying, and I think that's probably because you don't know what you're talking about. Having certificate chains is only a plus; the flat structure was crap. Here's how it works:

    I have a root certificate that's universally trusted. It is used *only* to sign intermediate certificates. Having the certificate out in public is fine, since it only contains the public half of the asymmetric public/private key pair. The private key sits on a server which is physically isolated from the world. By that, I mean that the root certificate and private keys are literally on servers with *no* network connections. When you want to generate a new intermediate certificate, you put the CSR on a USB stick, plug it in, and sign it from that machine (a sketch of this signing step follows this comment). In this way you never have to worry about external threats gaining access to your private key (insiders are an ever-present threat, and you put good safeguards in place against that).

    Now that you have a chained hierarchy, you can use different intermediates to sign different end user certificates. Remember that both the root and intermediate have their own certificate revocation lists: the root can revoke intermediates (which means anything signed by them is null and void) and intermediates can revoke server or subordinate intermediate certs.

    As a result of my chained hierarchy, if an intermediate is compromised, I can revoke it, without revoking every single end user / server certificate out there. This gives me finer grained control.

    Now, I said that the root isn't even connected to the internet. Ideally the intermediate is not either: the user/server-signing intermediate sits behind a set of firewalls and *pulls* signing requests from the frontends, processes them, and posts the resulting signed certs back. That way, if you compromise a frontend, you still have to compromise the firewall (which ideally blocks all inbound connection requests, unconnected sockets, and anything that is not the traffic required for the intermediate server(s) to pull from the frontends) before you can get to the intermediate.

    Your flat view of the world is draconian, wrong, uneducated, and probably hurts everyone who reads it by making them a little less educated.
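    As a concrete illustration of the offline signing step described above, here is a minimal sketch in C (OpenSSL 1.0.x-era API; file names, serial number, and validity period are made up, and error handling and proper serial tracking are omitted): read a CSR carried over on removable media and issue a certificate signed with the CA key that never leaves that machine.

        #include <stdio.h>
        #include <openssl/evp.h>
        #include <openssl/pem.h>
        #include <openssl/x509.h>

        int main(void)
        {
            FILE *f;

            f = fopen("request.csr", "r");                    /* CSR from the USB stick */
            X509_REQ *req = PEM_read_X509_REQ(f, NULL, NULL, NULL);
            fclose(f);

            f = fopen("ca_cert.pem", "r");                    /* signing CA's certificate */
            X509 *ca = PEM_read_X509(f, NULL, NULL, NULL);
            fclose(f);

            f = fopen("ca_key.pem", "r");                     /* private key: never leaves this host */
            EVP_PKEY *ca_key = PEM_read_PrivateKey(f, NULL, NULL, NULL);
            fclose(f);

            /* Build the new certificate from the request. */
            X509 *cert = X509_new();
            X509_set_version(cert, 2);                        /* X.509 v3 */
            ASN1_INTEGER_set(X509_get_serialNumber(cert), 1); /* a real CA tracks serials */
            X509_set_subject_name(cert, X509_REQ_get_subject_name(req));
            X509_set_issuer_name(cert, X509_get_subject_name(ca));
            X509_gmtime_adj(X509_get_notBefore(cert), 0);
            X509_gmtime_adj(X509_get_notAfter(cert), 60L * 60 * 24 * 365);
            X509_set_pubkey(cert, X509_REQ_get_pubkey(req));

            /* Sign with the CA key; the result chains up to the root. */
            X509_sign(cert, ca_key, EVP_sha256());

            f = fopen("issued_cert.pem", "w");                /* goes back out on the stick */
            PEM_write_X509(f, cert);
            fclose(f);
            return 0;
        }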

  • by EvanED ( 569694 ) <{evaned} {at} {gmail.com}> on Thursday April 10, 2014 @11:38AM (#46714853)

    Were OpenSSL's developers aware that malloc()/free() have special security concerns that OpenBSD's developers had specifically addressed? (I assume that's what is meant by "a conscious decision to turn off last-line-of-defense-security".)

    My impression is that OpenBSD's hardened allocator is relatively common knowledge, and it definitely should be among people writing security software. It's not remotely the only allocator that does that sort of thing, either, though it's probably the best known on the industrial side.

  • by Eunuchswear ( 210685 ) on Thursday April 10, 2014 @11:48AM (#46714975) Journal

    That is, if they were aware that OpenBSD's malloc() contained code to ensure against data leakage, it would seem to me to be highly probable they would have implemented the same deal in OpenSSL given, you know, their entire point is security. The fact they didn't makes me think they didn't know OpenBSD's malloc() had these measures in the first place.

    Not just OpenBSD's malloc(). glibc can do the same thing if you set MALLOC_PERTURB_.
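    To make the point concrete, here is a deliberately buggy sketch (illustration only, not how you should write code) of the kind of stale read a perturbing allocator catches. Run it plain and the freed block often still holds the old byte; run it with the glibc environment variable MALLOC_PERTURB_ set to a non-zero value (or under OpenBSD's malloc junking) and the data has been overwritten by the time the dangling read happens.

        /* Deliberate use-after-free demo, to show what freed-memory junking does.
         * Compile and run with and without MALLOC_PERTURB_=42 in the environment. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            char *secret = malloc(32);
            strcpy(secret, "hunter2");          /* pretend this is key material */
            free(secret);

            /* Bug: reading through a dangling pointer. A plain allocator often
             * leaves the old contents in place; a perturbing/junking allocator
             * overwrites them on free(). */
            printf("first byte after free: 0x%02x\n", (unsigned char)secret[0]);
            return 0;
        }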

  • by Anonymous Coward on Thursday April 10, 2014 @11:51AM (#46715005)

    De Raadt wrote "OpenSSL is not developed by a responsible team".

    On the contrary, I believe it was developed by a responsible team that unfortunately made an error.

    Nearly everyone has made errors, even if most go unnoticed and are essentially harmless. This one appears different, but I don't think it justifies De Raadt's moronic comment.

    Not so sure they're responsible.

    Did you read this [tedunangst.com]?

    This bug would have been utterly trivial to detect when introduced had the OpenSSL developers bothered testing with a normal malloc (not even a security focused malloc, just one that frees memory every now and again). Instead, it lay dormant for years until I went looking for a way to disable their Heartbleed accelerating custom allocator.

    Building exploit mitigations isn’t easy. It’s difficult because the attackers are relentlessly clever. And it’s aggravating because there’s so much shitty software that doesn’t run properly even when it’s not under attack, meaning that many mitigations cannot be fully enabled. But it’s absolutely infuriating when developers of security sensitive software are actively thwarting those efforts by using the world’s most exploitable allocation policy and then not even testing that one can disable it.

    The OpenSSL team doesn't fully test their product.

    That's pretty much as good an example of incompetence as you are likely to find.

  • by Eunuchswear ( 210685 ) on Thursday April 10, 2014 @11:51AM (#46715013) Journal

    Oh, and read this: http://www.tedunangst.com/flak/post/analysis-of-openssl-freelist-reuse [tedunangst.com]

    In effect at some points OpenSSL does:


            free (rec); ...
            rec = malloc (...);

    and assumes that rec is the same.

    Eeew.
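    For illustration, a minimal sketch (not OpenSSL's code) of why that assumption is unsafe: whether the second malloc() hands back the just-freed block is entirely up to the allocator. OpenSSL's private freelist made it reliably true; a normal or hardened malloc makes no such promise.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            void *rec = malloc(64);
            uintptr_t old = (uintptr_t)rec;     /* remember the address, not the pointer */
            printf("first allocation:  %p\n", rec);

            free(rec);
            void *again = malloc(64);           /* may or may not be the old block */
            printf("second allocation: %p\n", again);

            if ((uintptr_t)again == old)
                printf("allocator happened to reuse the block (not guaranteed)\n");
            else
                printf("allocator handed back different memory\n");

            free(again);
            return 0;
        }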

  • by KingOfBLASH ( 620432 ) on Thursday April 10, 2014 @12:09PM (#46715205) Journal

    Theo De Raadt is the king of tinfoil hats, and the man behind OpenBSD -- a version of BSD designed to be as secure as possible.

  • by phantomfive ( 622387 ) on Thursday April 10, 2014 @12:26PM (#46715373) Journal

    To say that "OpenSSL is not developed by a responsible team" is simply nonsense.

    Unless you look at the code and notice they are using unvalidated data for... anything. That's a rookie mistake.
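    For readers who want to see what "using unvalidated data" means here, a minimal sketch (made-up function names, not OpenSSL's actual heartbeat code) of the class of mistake: trusting a length field taken straight from an attacker-controlled record, versus checking it against what actually arrived.

        #include <stdint.h>
        #include <string.h>

        /* Buggy: echoes back 'claimed_len' bytes even if the record only
         * carried 'actual_len' bytes, so the reply can leak adjacent heap. */
        size_t echo_payload_buggy(uint8_t *out, const uint8_t *payload,
                                  size_t claimed_len, size_t actual_len)
        {
            (void)actual_len;                   /* never checked */
            memcpy(out, payload, claimed_len);
            return claimed_len;
        }

        /* Checked: validate the claimed length before copying anything. */
        size_t echo_payload_checked(uint8_t *out, const uint8_t *payload,
                                    size_t claimed_len, size_t actual_len)
        {
            if (claimed_len > actual_len)
                return 0;                       /* drop the malformed request */
            memcpy(out, payload, claimed_len);
            return claimed_len;
        }

        int main(void)
        {
            const uint8_t payload[] = "ping";
            uint8_t reply[64];
            /* A well-formed request: claimed length matches what was sent. */
            return echo_payload_checked(reply, payload,
                                        sizeof payload, sizeof payload) ? 0 : 1;
        }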

  • by Anonymous Coward on Thursday April 10, 2014 @12:53PM (#46715607)

    The OpenSSL team doesn't fully test their product.

    I agree with Theo on the broader point, but disagree here: "their product" is the code they wrote with their custom memory allocator, not the code as modified by someone outside the project. Their custom allocator isn't enabled or disabled by some configuration check; it's always part of the product.

    It was still a pretty stupid decision.

    Ted Unangst merely used a different configuration option - one provided in the standard OpenSSL software.

    Given some of the other bugs he found, we know for a fact that the OpenSSL team does not fully test their product.

    Period.

    Unangst even found that OpenSSL basically assumes "free(ptr); ptr=malloc()" will return the same pointer. Holy fucking shit, you gotta be kidding me. What kind of incompetent boob doesn't test a C/C++ application for that?

    Maybe the OpenSSL folks need to start a Kickstarter campaign to pay for a Purify license or two.

  • by DarkOx ( 621550 ) on Thursday April 10, 2014 @01:46PM (#46716147) Journal

    Far more important is that if the intermediate certificate is compromised, you as the CA have the ability to act. You know from your records who your customers are. What you need to do is:

    1) Fix the glitch
    2) Get the media that stores the trusted root cert's private key out of the vault
    3) Issue new intermediate certificates
    4) Return the root cert's private key to the vault
    5) Start contacting your certificate customers and issuing them new certs, revoking the old ones as customers report they have switched; if there *is* an indication of a compromised cert, revoke it immediately
    6) Revoke the old intermediate certificates as soon as 5 is complete

    If you were signing client certificates directly with the trusted root, as CAs once did, you (as the CA) would be royally screwed. You would need to somehow get every client device to update its trusted roots, or you'd have upset customers crying about how their reissued certs are untrusted by three quarters of the clients out there that nobody bothers to update and that nobody who understands these things manages directly.

  • by EvanED ( 569694 ) <{evaned} {at} {gmail.com}> on Thursday April 10, 2014 @02:29PM (#46716639)

    Theo's "rant" isn't brought about by a typical bug in the sense of a mistake in the code. It's brought about by the fact that OpenSSL deliberately introduced wrapper functions for malloc/free, making it impossible for the system to provide hardened ones, and then made those wrappers the default. He further rants that, because the wrappers are the default, a bug later crept in that means you can't turn them off at all.

    The heartbeat bug itself was a mistake and a bug in the traditional sense. The "hey let's replace malloc/free" is much closer to "bad decision" than "mistake."
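    To make the "bad decision" concrete, here is a minimal sketch (invented names, far simpler than OpenSSL's real code) of a freelist-style malloc/free wrapper of the kind being described. Because freed blocks are recycled inside the application and never handed back to the system allocator, any hardening the platform's malloc provides (junk fill, unmapping, guard pages) simply never sees them.

        #include <stdlib.h>

        struct freelist_node {
            struct freelist_node *next;
        };

        enum { BLOCK_SIZE = 64 };                 /* one size class, for simplicity */
        static struct freelist_node *freelist;    /* recycled blocks, LIFO */

        void *wrapped_malloc(void)
        {
            if (freelist) {                       /* fast path: recycle a block */
                struct freelist_node *n = freelist;
                freelist = n->next;
                return n;                         /* old contents still intact! */
            }
            return malloc(BLOCK_SIZE);            /* slow path: the real allocator */
        }

        void wrapped_free(void *p)
        {
            /* The block never reaches free(), so a hardened malloc gets no
             * chance to scrub, poison, or unmap it. */
            struct freelist_node *n = p;
            n->next = freelist;
            freelist = n;
        }

        int main(void)
        {
            void *a = wrapped_malloc();
            wrapped_free(a);
            void *b = wrapped_malloc();           /* the just-freed block comes right back */
            wrapped_free(b);
            return a == b ? 0 : 1;                /* returns 0: reuse is guaranteed here */
        }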

  • by chis101 ( 754167 ) on Thursday April 10, 2014 @04:20PM (#46718247)

    Even if OpenSSL is using the system's malloc, with all its mitigation features, the bug still works. The attacker just has to be more careful, lest he read free()d and unmapped memory, cause a crash, and (supposedly) leave some kind of meaningful trail.

    You got it exactly right. He's complaining that, because they provided their own malloc() wrapper, the read of free()d memory does NOT cause a crash. If they had used the system malloc(), there would have been crashes, the issue would have been detected, and it would have been fixed.

  • by Anonymous Coward on Thursday April 10, 2014 @04:25PM (#46718271)

    Doesn't affect OpenSSH, because in the context of OpenSSH, OpenSSL is used for key generation, not for SSL.
