Encryption Security

Theo De Raadt's Small Rant On OpenSSL

Posted by timothy
from the heartbleed-of-the-matter dept.
New submitter raides (881987) writes "Theo De Raadt has been on a roll as of late. Since his rant about FreeBSD playing catch-up, he has something to say about OpenSSL. It is worth the five-second read, because it is how a few thousand of us feel about the whole thing and the stupidity that caused this panic." Update: 04/10 15:20 GMT by U L : Reader badger.foo pointed out that Ted Unangst (the Ted in the mailing list post) wrote two posts on the issue: "heartbleed vs malloc.conf" and "analysis of openssl freelist reuse", for those seeking more detail.
  • by alphatel (1450715) * on Thursday April 10, 2014 @10:46AM (#46714171)
    This could get a lot more ugly...

    Once upon a time, SSL certificates were signed against a single root certificate; each SSL cert issuer had a single root certificate authority for each of its product lines. Now corps issue SSL certificates that are signed against an INTERMEDIATE certificate, which in turn is signed against the root certificate.

    What happens if a provider's server has this exploit and the intermediate certificate is compromised? EVERY certificate signed against that intermediate must be revoked. Or put another way, the ENTIRE PRODUCT LINE must be tossed into the garbage and all certs reissued.

    So if Verisign or Thawte discovered that their intermediate certificate MIGHT have been exploited, would they say anything? The servers implementing those certs are in the hands of a select few - it would be easy to hide the possibility that they might have been compromised.
  • by sinij (911942) on Thursday April 10, 2014 @11:01AM (#46714383) Journal
    Why is OpenSSL so popular? It has a FIPS-certified module, and that becomes important when selling your product to the government.

    So what could be done to prevent something like this from happening in the future? People will keep writing bad code; that is unavoidable. But what automated tests could be run to catch the worst of it? Someone with direct development experience, please educate the rest of us.
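    One answer, offered as a sketch rather than a prescription: a regression test that plants a canary next to the real payload and checks that it never appears in the echoed response would have flagged the over-read the day it was introduced. Everything below (the `echo_heartbeat` stand-in, the buffer layout) is made up for illustration, not OpenSSL's actual code:

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    /* Stand-in for the buggy heartbeat echo: it trusts the
     * attacker-supplied length and memcpy()s that many bytes back. */
    static size_t echo_heartbeat(const unsigned char *payload, size_t claimed,
                                 unsigned char *out)
    {
        memcpy(out, payload, claimed); /* over-reads when claimed > actual */
        return claimed;
    }

    int main(void)
    {
        unsigned char buf[64] = {0};
        memcpy(buf, "ping", 4);            /* the real 4-byte payload     */
        memcpy(buf + 4, "SESSIONKEY", 10); /* canary: neighbouring secret */

        unsigned char out[64] = {0};
        size_t n = echo_heartbeat(buf, 14, out); /* attacker claims 14 bytes */

        /* The canary shows up in the response, so the over-read is caught.
         * A real regression test would assert the opposite and fail here. */
        assert(n == 14 && memcmp(out + 4, "SESSIONKEY", 10) == 0);
        return 0;
    }
    ```

    The same leak would also have tripped AddressSanitizer or Valgrind under a normal malloc, which is exactly the point Ted's post makes about the custom allocator hiding it.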
  • by G3ckoG33k (647276) on Thursday April 10, 2014 @11:15AM (#46714559)
    De Raadt wrote "OpenSSL is not developed by a responsible team".

    On the contrary, I believe it was developed by a responsible team that unfortunately made an error.

    Almost everyone has made errors, even if most go unnoticed and are essentially harmless. This one appears different, but I don't think it justifies De Raadt's moronic comment.
  • Bug Looks Deliberate (Score:5, Interesting)

    by Anonymous Coward on Thursday April 10, 2014 @11:17AM (#46714587)

    That code is almost a text book example of material that is submitted to the Underhanded C contest...

    http://en.wikipedia.org/wiki/Underhanded_C_Contest

  • by mr_mischief (456295) on Thursday April 10, 2014 @11:17AM (#46714589) Journal

    GnuTLS, which people were recently being told to avoid in favor of OpenSSL. You see, there was this bug...

  • Really? (Score:3, Interesting)

    by oneandoneis2 (777721) on Thursday April 10, 2014 @11:30AM (#46714725) Homepage

    "it is how a few thousand of us feel about the whole thing"

    Then maybe you thousands should stop complaining and start contributing to the project, which is so under-resourced that problems like this are pretty much inevitable.

  • by Krojack (575051) on Thursday April 10, 2014 @11:37AM (#46714833)

    As we all know, most high level hacks these days come from an internal computer getting infected with something.

  • De Raadt is wrong (Score:2, Interesting)

    by stephc (3611857) on Thursday April 10, 2014 @11:40AM (#46714875)
    This is not a problem with OpenSSL, or the C language, or the malloc implementation; it is a problem because everyone is relying on the same black box they do not understand, because using it is "standard" and common practice.

    The only long-term defense against this kind of vulnerability is software (and hardware?) diversity. Software built on custom SSL implementations may have even worse vulnerabilities, but nobody will discover them, and even if they do, it won't affect everyone on the planet.

    When I read Theo De Raadt, I fear his "solution" may only worsen the problem. We can't have all our secrets protected by the exact same door, no matter how strong the door is, once it's broken...
  • by squiggleslash (241428) on Thursday April 10, 2014 @11:57AM (#46715077) Homepage Journal
    Ouch. Serious ouch. Thank you. That suggests that the situation is considerably worse than De Raadt said.
  • by Anonymuous Coward (1185377) on Thursday April 10, 2014 @12:21PM (#46715329)

    This bug would have been utterly trivial to detect when introduced had the OpenSSL developers bothered testing with a normal malloc (not even a security-focused one).

    This is simply not true, stop spinning it.

    Even if OpenSSL used the system's malloc, with all its mitigation features, the bug would still work. The attacker just has to be more careful, lest he read free()d and unmapped memory, causing a crash and (supposedly) leaving some kind of meaningful trail.

  • Feeding Allo(g)ators (Score:3, Interesting)

    by Anonymous Coward on Thursday April 10, 2014 @12:24PM (#46715351)

    Allocators in this case make no significant difference with regards to severity of the problem.

    What is or is not in the process free list makes no difference when you can arbitrarily request any block of memory you please; it only slightly affects the chance of success when it becomes necessary to shoot in the dark. Let's not forget that most OS-provided optimized allocators keep freed memory in their heaps for some time as well, and may still not throw anything when it is referenced.

    Looking at the code for this bug, I am amazed any of this garbage was accepted in the first place. There is no effort at all to minimize the chance of error, with redundant + 3's and 1 + 2's sprinkled everywhere, complete with unchecked allocation for good measure.

    buffer = OPENSSL_malloc(1 + 2 + payload + padding);
     
    r = dtls1_write_bytes(s, TLS1_RT_HEARTBEAT, buffer, 3 + payload + padding);

    Suppose I should be glad that 1 + 2 = 3 today and that they have not used signed integers when dealing with lengths.

    unsigned int payload;
    unsigned int padding = 16; /* Use minimum padding */

    ... oh dear god ...

    int dtls1_write_bytes(SSL *s, int type, const void *buf, int len)

    Well, at least they learned their lesson and have stopped sprinkling redundant and error-prone type + length + padding garbage everywhere... see..

    + buffer = OPENSSL_malloc(write_length);
     
    - buffer, 3 + payload + padding,
    + buffer, write_length,

    and here ..

    + if (1 + 2 + 16 > s->s3->rrec.length)
    +     return 0; /* silently discard */
     
    + if (1 + 2 + payload + 16 > s->s3->rrec.length)
    +     return 0; /* silently discard per RFC 6520 sec. 4 */
     
    + if (1 + 2 + 16 > s->s3->rrec.length)
    +     return 0; /* silently discard */
     
    + if (1 + 2 + payload + 16 > s->s3->rrec.length)
    +     return 0; /* silently discard per RFC 6520 sec. 4 */

    ... oh well .. Looks like plenty of low hanging fruit to be had for anyone with a little spare time.
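    Those discarded-record checks boil down to one arithmetic guard: type byte + 2-byte length + claimed payload + 16 bytes of minimum padding must all fit inside the record actually received. A standalone sketch of the same check (names and layout are made up here, not the real OpenSSL code):

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Returns the number of payload bytes safe to echo, or -1 if the
     * record must be silently discarded (per RFC 6520 sec. 4): the
     * claimed payload length must fit inside the received record. */
    static int heartbeat_payload_ok(const unsigned char *rec, size_t rec_len)
    {
        if (rec_len < 1 + 2 + 16)               /* too short to be valid */
            return -1;
        size_t payload = ((size_t)rec[1] << 8) | rec[2];
        if (1 + 2 + payload + 16 > rec_len)     /* claimed length lies   */
            return -1;                          /* silently discard      */
        return (int)payload;
    }

    int main(void)
    {
        unsigned char rec[64] = {0};
        rec[0] = 1;                   /* heartbeat_request              */
        rec[1] = 0x00; rec[2] = 0x04; /* claims 4 bytes: fits           */
        assert(heartbeat_payload_ok(rec, sizeof rec) == 4);

        rec[1] = 0xff; rec[2] = 0xff; /* claims 65535: Heartbleed probe */
        assert(heartbeat_payload_ok(rec, sizeof rec) == -1);
        return 0;
    }
    ```

    Doing the arithmetic in `size_t` sidesteps the signed-length worry raised above, since the comparison can never wrap negative.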

  • by OmniGeek (72743) on Thursday April 10, 2014 @12:28PM (#46715395)

    As I read his analysis, OpenSSL relies on releasing a buffer, reallocating it, and getting the PREVIOUS contents of that buffer back -- or else it will abort the connection. (Search for the string "On line 1059, we find a call to ssl3_release_read_buffer after we have read the header, which will free the current buffer." in his article referenced by the parent post).

    Now, IMO, this goes way beyond sloppy. Releasing a buffer before you're done with it, and relying on a wacky LIFO reallocation scheme giving you back that very same buffer so you can process it, is either 1) an utterly incompetent coding blunder that just happened to work when combined with an utterly terrible, insecure custom allocation scheme, or 2) specifically designed to ensure that this insecure combination is widely deployed to provide a custom-made back door, as it works only with the leaky custom allocator.

    If 1), then I must agree with Theo that the OpenSSL team were indeed irresponsible, since at least one of these two cooperating blunders ought to have shown up in a decent security audit of the code, and any decent set of security-oriented coding standards would forbid them both.

    If 2), then it was deliberate, and the tinfoil-hat crowd is right for once.
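    The release-then-reallocate pattern described above only "works" because a LIFO freelist hands the very same buffer straight back. A toy sketch of why (all names hypothetical; this is a minimal freelist in the spirit of OpenSSL's buffer cache, not its real implementation):

    ```c
    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    /* Minimal LIFO freelist: freed buffers are pushed on the list head
     * and handed straight back, old contents and all, to the next caller. */
    struct node { struct node *next; };
    static struct node *freelist;

    static void *fl_malloc(size_t n)
    {
        if (freelist) {                 /* reuse most recently freed buffer */
            void *p = freelist;         /* old contents are NOT cleared     */
            freelist = freelist->next;
            return p;
        }
        return malloc(n < sizeof(struct node) ? sizeof(struct node) : n);
    }

    static void fl_free(void *p)
    {
        struct node *q = p;             /* push onto the head: LIFO */
        q->next = freelist;
        freelist = q;
    }

    int main(void)
    {
        const char *secret = "topsecret-session-key";
        char *a = fl_malloc(64);
        strcpy(a, secret);
        fl_free(a);                     /* "released", but the bytes survive */

        char *b = fl_malloc(64);
        assert(b == a);                 /* LIFO hands back the same buffer */
        /* The list link clobbered the first pointer-sized bytes, but the
         * rest of the "freed" secret is still sitting there. */
        assert(strcmp(b + sizeof(struct node), secret + sizeof(struct node)) == 0);
        free(b);
        return 0;
    }
    ```

    Code that depends on `b == a` holding is exactly the coupling OmniGeek describes: swap in a normal malloc (or OpenBSD's randomizing one) and the connection-handling breaks, which is why the custom allocator could never safely be turned off.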

  • the RFC is horrible (Score:5, Interesting)

    by DrProton (79239) on Thursday April 10, 2014 @12:52PM (#46715593)
    Let's not miss the opportunity to point a finger of blame at the RFC, which says "to make the extension as versatile as possible, an arbitrary payload and a random padding is preferred". https://tools.ietf.org/html/rf... [ietf.org] An arbitrary payload and random padding for a heartbeat, instead of a specified sequence of bits? This is very suspicious.
  • Re:When comments... (Score:4, Interesting)

    by Anonymous Coward on Thursday April 10, 2014 @01:01PM (#46715667)

    As much as Theo can be an utter and insufferable prick, on this score he's right.

    Actually, most of the time he's right; it's just the prick bit that is the problem. What's worrying about this is that he sounds completely reasonable. He didn't even call the OpenSSL developers any names or anything, and everything in the post was reasonable. I hope he isn't sick or something.

  • by OneAhead (1495535) on Thursday April 10, 2014 @02:02PM (#46716367)

    That would have been an appropriate boilerplate response to your typical Theo De Raadt comment, but... did you actually read that "analysis of openssl freelist reuse" link? It's hard for me to imagine that having been done by mistake; it's more of an instance of a coder cleverly cutting a corner. And some might argue "don't we all?", but if one is writing such security-critical software, one should at least have some notion that clever corner-cutting is a gigantic no-no.

    Now, the freelist reuse is not the cause of the heartbleed bug; it merely frustrates what would otherwise have been a relatively straightforward mitigation strategy. But it's a symptom of an attitude that is, well, irresponsible.
