Security Open Source

How To Prevent the Next Heartbleed

dwheeler (321049) writes "Heartbleed was bad vulnerability in OpenSSL. My article How to Prevent the next Heartbleed explains why so many tools missed it... and what could be done to prevent the next one. Are there other ways to detect these vulnerabilities ahead-of-time? What did I miss?"
Comments Filter:
  • Static analysis (Score:3, Insightful)

    by Anonymous Coward on Saturday May 03, 2014 @09:22PM (#46910963)

    It could have been discovered with static analysis if anyone had implemented a specific check ahead of time (though it's unknown whether anyone would have thought to check for this specific case before Heartbleed was discovered):

    http://blog.trailofbits.com/2014/04/27/using-static-analysis-and-clang-to-find-heartbleed/

    • Re:Static analysis (Score:5, Informative)

      by Krishnoid ( 984597 ) on Saturday May 03, 2014 @11:22PM (#46911359) Journal

      Coverity has a blog post [coverity.com] describing the problem and why their static analysis methods currently can't detect it.

    • Re:Static analysis (Score:4, Interesting)

      by arth1 ( 260657 ) on Saturday May 03, 2014 @11:31PM (#46911391) Homepage Journal

      OpenSSL was statically analyzed with Coverity. However, Coverity did not discover this, as it is a parametric bug, which depends on variable content.

      The reaction from Coverity was to issue a patch to find this kind of problem, but in my opinion, the "fix" throws the baby out with the bath water. The fix causes all byte swaps to mark the content as tainted. That surely would have detected this bug, but it also leads to an enormous number of false positives in development where swabs are common, like cross- or multi-platform development.
      And while it finds "defects" this way, they are not the real problem.
      So in my opinion, 0 out of 10 points to Coverity for this knee-jerk reaction.
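      For context, the "byte swap" at issue is OpenSSL's n2s macro, which reads a 16-bit big-endian value from the wire. Quoted from memory from the OpenSSL headers and lightly reformatted, so treat it as approximate:

      /* Read two network-order bytes from c into s and advance c past them.
       * It is this network-to-host conversion that the Coverity patch now
       * treats as a taint source. */
      #define n2s(c, s) ((s = (((unsigned int)(c[0])) << 8) | \
                               ((unsigned int)(c[1]))), c += 2)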

      In my opinion, what's wrong here is someone with a high-level language background submitting patches in a lower-level language than what he's used to. The problems that can cause are never going to be 100% (or even 50%) caught by static analysis. Lower-level languages do give you enough rope to hang yourself with. It's the nature of the beast. In return, you have more control over the details of what you do. That you're allowed to do something stupid also means you are allowed to do something brilliant.
      But it requires far more discipline - you cannot make assumptions, but have to actually understand what is done to variables at a low level.
      Unit tests and fuzzing help. But even they are no substitute for thinking low-level when using a low-level language.

      • by Z00L00K ( 682162 )

        There are also other static analysis tools like splint [splint.org]. The catch is that it produces a large volume of output that is tedious to sift through, but once that's done you will have found the majority of the bugs in your code.

        However, the root cause is that the language itself permits illegal and bad constructs. It's of course a performance trade-off, but writing most of the code in a high-level language and leaving the performance-critical parts to a low-level one may lower the exposure and force focus on the problems t

      • by dgatwood ( 11270 )

        The reaction from Coverity was to issue a patch to find this kind of problem, but in my opinion, the "fix" throws the baby out with the bath water. The fix causes all byte swaps to mark the content as tainted. That surely would have detected this bug, but it also leads to an enormous number of false positives in development where swabs are common, like cross- or multi-platform development.

        Yes, that solution is complete and utter crap. Claiming that marking all byte swaps as tainted will help you find thi

  • by Anonymous Coward on Saturday May 03, 2014 @09:47PM (#46911063)

    Every industry goes through this. At one point it was aviation, and the "hot shot pilot" was the Real Deal. But then they figured out that even the Hottest Shot pilots are human and sometimes forget something critical and people die, so now, pilots use checklists all the time for safety. No matter how awesome they might be, they can have a bad day, etc. And this is also why we have two pilots in commercial aviation, to cross check each other.

    In programming something as critical as SSL it's long past time for "macho programming culture" to die. First off, it needs many eyes checking. Second, there needs to be an emphasis on using languages that are not susceptible to buffer overrunning. This isn't 1975 any more. No matter how macho the programmer thinks s/he is, s/he is only human and WILL make mistakes like this. We need better tools and technologies to move the industry forward.

    Last, in other engineering professions there is licensing and engineers are held accountable for mistakes they make. Maybe we don't need that for some $2 phone app, but for critical infrastructure it is also past time, and programmers need to start being held accountable for the quality of their work.

    These are things the "brogrammer" culture will complain BITTERLY about - their precious playground being held to professional standards. But it's the only way forward. It isn't the wild west any more. The world depends on technology, and we need to improve its quality and the processes behind it.

    Yes, I'm prepared to be modded down by those cowboy programmers who don't want to be accountable for the results of their poor techniques... But that is exactly the way of thinking that our industry needs to shed.

    • The problem has more to do with the "hey, this is free so let's just take it" attitude of the downstream consumers, who aren't willing to pay for anyone to look at the code or pay anyone to write it.

      Why would you want the OpenSSL people to be held accountable for something they basically just wrote on their own time since nobody else bothered?

      Striking out to solve a problem should NOT be punished (that culture of legal punishment for being useful is part of why knowledge industries are leaving North America).

      This problem was caused by a simple missed parameter check, nothing more. Stop acting like the cultural problem is with the developers when it is with the leeches who consume their work.

      • by Ckwop ( 707653 )

        I actually agree with both of you. The OpenSSL guys gave out their work for free for anybody to use. Anybody should be free to do that without repercussions. Code is a kind of literature and thus should be protected by free speech laws.

        However, if you pay peanuts (or nothing at all) then likewise you shouldn't expect anything other than monkeys. The real fault here is big business using unverified (in the sense of correctness!) source for security critical components of their system.

        If regulation is nee

        • "businesses with a turn over $x million dollars should be required to use software developed only by the approved organisations."

          That would just lead to regulatory capture. The approved organisations would use their connections and influence to make it very hard for any other organisation to become approved - and once this small cabal has thus become the only option, they can charge as much as they like.

      • Re: (Score:3, Informative)

        by jhol13 ( 1087781 )

        This problem was caused by a simple missed parameter check, nothing more. Stop acting like the cultural problem is with the developers when it is with the leeches who consume their work.

        I do not believe you. If this were an isolated case, then you'd be right. But no, these "oops, well now it is fixed" things happen all the time, over and over again. The programming culture never improves because of the error - no matter how simple, no matter that it should have been noticed earlier, no matter what.

        I am willing to bet that after the next hole the excuses will be the same: "it was simple, now it is fixed, shut up" and "why don't you make something better, shut up" or just "you don't understand, sh

    • by jhol13 ( 1087781 )

      You forgot NIH. OpenSSL used its own allocator; the most positive thing I can say about that is "totally idiotic". AFAIK nobody is removing it ...

      Furthermore, C is an insufficient language for security software (C++, when properly used, is barely acceptable; managed languages are much better).

      • OpenSSL used its own allocator; the most positive thing I can say about that is "totally idiotic".

        That's deeply unfair. The most positive thing I can say about it is that it was 100% necessary a long time in the past when OpenSSL ran on weird and not so wonderful systems.

        AFAIK nobody is removing it ...

        Except in LibreSSL, you mean?

        Furthermore, C is an insufficient language for security software (C++, when properly used, is barely acceptable; managed languages are much better).

        Depends on the amount of auditing. C has huge problems, but OpenBSD shows it can be safe.

        • by Raenex ( 947668 )

          Depends on the amount of auditing. C has huge problems, but OpenBSD shows it can be safe.

          How so? OpenBSD says they audit their operating system (which includes code that they did not write). OpenBSD was affected [openbsd.org] by Heartbleed, which means OpenBSD's audit did not catch this bug, and they were affected just like everybody else.

          Also, most of the bugs on their advisory page are for typical C memory problems, such as use after free and buffer overruns.

    • by socode ( 703891 )

      > programmers need to start being held accountable for the quality of their work.
      They are.

      But I guess you mean that people who aren't paying for your work, and companies which aren't paying for the processes and professional services necessary for some level of quality, should hold programmers who don't have any kind of engineering or financial relationship with them accountable.

    • In programming something as critical as SSL it's long past time for "macho programming culture" to die.

      Yeah, but it's kind of going the other way, with more and more companies moving to continuous deployment. Facebook is just a pit of bugs.

      programmers need to start being held accountable for the quality of their work.

      OK, I'm with you that quality needs to improve, but if I have a choice between working where I get punished for my bugs and where I don't, I'm working for the place where I don't get punished for my bugs. I test my code carefully, but sometimes bugs slip through anyway.

    • LICENSE (Score:2, Informative)

      by Anonymous Coward

      Excerpt...

      * THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
      * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
      * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
      * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
      * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
      * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
      * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GO

    • Second, there needs to be an emphasis on using languages that are not susceptible to buffer overrunning

      Heartbleed was not really a buffer overrun problem.

      • It was a buffer overread, which would also be solved with the same techniques.
        • It did not read more memory than allocated.

          • Umm... yes it did. Details here [existentialize.com]. It is a classic buffer overread. The client sends some heartbeat data, plus the length of the data. The server copies as many bytes from the payload as the client specified, even if the actual payload is only one byte long.
            • Yes. It's a buffer overread. But it did not go beyond the memory allocated by malloc.

              • What the fuck are you talking about? The relevant lines go like this:

                unsigned char *pl = &s->s3->rrec.data[0]; /* start of the heartbeat record */
                n2s(pl, payload); /* read the attacker-supplied 16-bit length, advance pl */

                Get a pointer to the heartbeat data inside an SSL record and copy the first two bytes into a 16-bit value, payload. pl will point to data on the heap, but the record it sits in might only be one byte long.

                memcpy(bp, pl, payload); /* copies 'payload' bytes, however short the record really is */

                Copy payload bytes from pl to bp. This will read pl, plus a bunch of stuff that is after pl on the heap. In that sense, "it did not go beyond memory allo
                • You have missed the malloc call. See what is being passed as the size to the malloc call. That will show you that it does not cross the size allocated by the malloc call (the malloc for this call - not everything allocated by malloc).
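                  For reference, the allocation in question in dtls1_process_heartbeat sizes the response buffer from the attacker-supplied payload length. Quoted from memory and lightly abridged, so treat it as approximate:

                  buffer = OPENSSL_malloc(1 + 2 + payload + padding); /* type + length + payload + padding */
                  bp = buffer;
                  *bp++ = TLS1_HB_RESPONSE; /* 1 byte: message type */
                  s2n(payload, bp);         /* 2 bytes: echoed length */
                  memcpy(bp, pl, payload);  /* write stays inside 'buffer'; the read from pl strays */

                  So the write into bp indeed stays within its own allocation; it is the read from pl that wanders into whatever follows the much shorter request record.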

  • Not really (Score:4, Insightful)

    by NapalmV ( 1934294 ) on Saturday May 03, 2014 @09:55PM (#46911097)
    Let's remember the good old bug that plagued (and probably still does) many libraries that read graphic files such as TIFF. The classic scenario was that the programmer read the expected image size from the TIFF file header, allocated memory for this size, then read the remainder of the file into said buffer until end of file, instead of reading only as many bytes as he had allocated. For a correct file this would work; however, if the file is maliciously crafted to indicate one size in the header while having a much larger real size, you get a classic buffer overrun. This is pretty much similar to what the SSL programmer did. And no tools were ever able to predict this type of error, whether TIFF or SSL.

    BTW the last famous one with TIFF files was pretty recent:
    http://threatpost.com/microsoft-to-patch-tiff-zero-day-wait-til-next-year-for-xp-zero-day-fix/103117
    • Fuzzing may catch this kind of erroneously stated buffer size bug.

      An automated tool probing the binary on a live system was what discovered Heartbleed.
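
      A minimal sketch of what that kind of fuzzing looks like with a libFuzzer-style harness; parse_tiff() here is a hypothetical parser standing in for the image library under test:

      #include <stddef.h>
      #include <stdint.h>

      /* Hypothetical parser under test; any function taking untrusted bytes fits. */
      int parse_tiff(const uint8_t *data, size_t size);

      /* libFuzzer entry point: the fuzzer mutates 'data', including any
       * header-declared sizes, so mismatches between a stated length and the
       * actual buffer are exactly what gets exercised. Build with
       * clang -fsanitize=fuzzer,address so overreads trip ASan immediately. */
      int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
      {
          parse_tiff(data, size);
          return 0;
      }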
    • by Kjella ( 173770 )

      Considering how many times you need to do this (read the length of a block of data, then the data) it's strange that we haven't implemented a standard variable length encoding like with UTF-8. Example:

      00000000 - 01111111 (7/8 effective bits)
      10000000 00000000 - 10111111 11111111 (14/16 effective bits)
      11000000 2x00000000 - 11011111 2x11111111 (21/24 effective bits)
      11100000 3x00000000 - 11101111 3x11111111 (28/32 effective bits)
      11110000 4x00000000 - 11110111 4x11111111 (35/40 effective bits)
      11111000 5x00000000 - 11111011 5x11111111 (42/48 effective bits)
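
      A sketch of a decoder for this proposed scheme (my own illustration, not an existing standard): the count of leading 1 bits in the first byte gives the number of continuation bytes, and the decoder refuses to read past the bytes it actually has:

      #include <stddef.h>
      #include <stdint.h>

      /* Decode one length value; returns bytes consumed, or 0 if 'avail' is
       * too short or the prefix is malformed. The point of the scheme: the
       * length of the length is derived from the data itself, never trusted
       * separately. */
      size_t decode_len(const uint8_t *buf, size_t avail, uint64_t *out)
      {
          if (avail == 0)
              return 0;

          uint8_t first = buf[0];
          size_t extra = 0;                      /* number of continuation bytes */
          uint8_t mask = 0x80;
          while (extra < 7 && (first & mask)) {  /* count leading 1 bits */
              extra++;
              mask >>= 1;
          }
          if (extra == 7 && (first & mask))      /* 0xFF prefix: malformed */
              return 0;
          if (1 + extra > avail)                 /* never read past what we have */
              return 0;

          uint64_t value = first & (uint8_t)(mask - 1);  /* low bits of 1st byte */
          for (size_t i = 1; i <= extra; i++)
              value = (value << 8) | buf[i];     /* 8 more bits per extra byte */

          *out = value;
          return 1 + extra;
      }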

  • Easy (Score:3, Informative)

    by kamapuaa ( 555446 ) on Saturday May 03, 2014 @10:19PM (#46911185) Homepage

    What did I miss?

    An article before the word "bad."

  • by godrik ( 1287354 ) on Sunday May 04, 2014 @12:53AM (#46911543)

    Hi dwheeler,

    This is a great article. It covers many common software development and testing techniques, as well as some "on live system" techniques. It was a pleasure to read; I'll recommend it in various places.

  • Buffer overruns can be statically prevented at compile time without any runtime penalty.

    All that is required is that the type system of the target programming language enforce a special type for array indexes, and that any integer can be promoted to such an array index type only through a runtime check that happens outside of an array access loop.

    Array indexes are essentially pointer types that happen to be applicable to a specific memory range we call an array. Memory itself is just an array, but for tha

    • Agreed, but not in C. You need to change C (and modify the code to use the functionality) or change programming language. The article does discuss switching languages.
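
      In plain C the best you can do is follow that convention by hand, which is exactly why it needs to be a language-level guarantee to be reliable. A minimal sketch of the "one check outside the loop" idea (my own illustration):

      #include <stddef.h>

      /* Promote 'n' to a trusted bound with a single runtime check; every
       * access inside the loop is then known to be in range, with no
       * per-access cost. In a language that enforced this in the type
       * system, omitting the check would be a compile error; in C it is
       * merely a convention. */
      int sum_first_n(const int *arr, size_t arr_len, size_t n)
      {
          if (arr == NULL || n > arr_len)    /* the one promotion check */
              return -1;

          int total = 0;
          for (size_t i = 0; i < n; i++)     /* no per-access bounds check needed */
              total += arr[i];
          return total;
      }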

  • Rigorous coding should be held to approximately the same standard as engineering and math. Code should be both proven correct and tested against valid and invalid inputs. It has not happened yet because in many cases code is seen as less critical (patching is cheap, people usually don't die from software bugs, etc.). As soon as bugs start costing serious money, the culture will change.

    Anyway, I'm not a pro coder, but I do write code for academic purposes, so I am not subject to the same constraints. Robust code

    • Re:Priorities? (Score:4, Insightful)

      by ThosLives ( 686517 ) on Sunday May 04, 2014 @07:22AM (#46912263) Journal

      Rigorous testing is helpful, but I think it's the wrong approach. The problem here was a lack of requirements and/or rigorous design. In the physical engineering disciplines, much effort is spent thinking about failure modes of designs before they are implemented. In software, for some reason, the lack of pre-implementation design and analysis is endemic. This leads to things like Heartbleed - not language choice, not tools, not lack of static testing.

      I would also go as far as saying if you're relying on testing to see if your code is correct (rather than verify your expectations), you're already SOL because testing itself is meaningless if you don't know the things you have to test - which means up-front design and analysis.

      That said, tools and such can help mitigate issues associated with lack of design, but the problem is more fundamental than a "coding error."

      • The problem lies with how the "software industry" evolved over time, and the complete lack of user/consumer protection legislation regarding software products.
        If physical product manufacturers ship a design fault, they have to fix those products, during the warranty period, at their own expense. If on top of that the defect is safety related, they have to fix it even beyond the standard warranty period. Whether the product is a car or a coffee grinder, they have to recall it, period.
        Now contrast
      • Rigorous testing is helpful, but I think it's the wrong approach. The problem here was a lack of requirements and/or rigorous design.

        The real problem is the horrible OpenSSL code, where after reading 10 lines, or 20 lines if you're really hard core, your eyes just go blurry and it's impossible to find any bugs.

        There is the "thousands of open eyes" theory, where thousands of programmers can read the code and find bugs, and then they get fixed. If thousands of programmers tried to read the OpenSSL code with the degree of understanding necessary to declare it bug free, you wouldn't end up with any bugs found, but with thousands of progra

  • Wouldn't the best course of action be to zero important memory after its use, just like on disk? After something like a password is loaded in memory, its use should always be followed by a memset with zeros in C/C++. That way, if an unchecked read follows, all that would be read is nulls.
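
    One caveat worth noting: a plain memset just before a buffer is freed is a dead store that the optimizer may delete. A minimal sketch of a zeroizer the compiler cannot elide, using the volatile-function-pointer idiom (real code should prefer explicit_bzero() or OpenSSL's OPENSSL_cleanse() where available):

    #include <stddef.h>
    #include <string.h>

    /* Calling memset through a volatile function pointer forces the compiler
     * to keep the call, defeating dead-store elimination. */
    static void *(*const volatile memset_v)(void *, int, size_t) = memset;

    static void secure_zero(void *p, size_t n)
    {
        memset_v(p, 0, n);
    }

    Typical use would be secure_zero(password, pw_len); immediately before free(password);.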
  • Of course, we should find ways to improve quality control in open source software. But the next Heartbleed is going to happen. It's like asking, "How can we prevent crime from happening?" Sure, you can and should take measures to prevent it, but there will always be unexpected loopholes in software that allow unwanted access.

    • But that's the point, we can and should take measures to prevent it. Even if we never eliminate all vulnerabilities, we can prevent many more vulnerabilities than we currently do.

      • No doubt. So why didn't YOU take steps to prevent the Heartbleed vulnerability? The same reason everybody else didn't: time. Finding bugs takes time. Sure, you can automate, but that automation also takes time. So we are caught between two desires: 1) the desire to add or improve functionality, and 2) the desire to avoid vulnerabilities. The two desires compete for the amount of time that is available, so it becomes a trade-off.

        It's also an arms race. There is real financial incentive for finding vul

"It takes all sorts of in & out-door schooling to get adapted to my kind of fooling" - R. Frost

Working...