Bug in zlib Affects Many Linux Programs

SirTimbly writes: "CNET is reporting that there is a buffer overflow problem with zlib in Linux, which is used for network compression. Supposedly, someone could remotely cause a buffer overflow through Mozilla, X11 and many other programs." The advisory from Red Hat is available.
  • more info please (Score:2, Informative)

    by Sebastopol ( 189276 )

    could somebody point out where in the source this is? the article was fluff.
  • by marks ( 12185 ) on Monday March 11, 2002 @04:21PM (#3144508) Homepage
    This article gives more information, and links to vendor advisories: http://www.linuxsecurity.com/articles/security_sources_article-4582.html [linuxsecurity.com].
  • Some More Links (Score:5, Informative)

    by Zach Garner ( 74342 ) on Monday March 11, 2002 @04:23PM (#3144523)
    Some [newsforge.com] More [linuxsecurity.com] Links. [linuxsecurity.com]
  • Linux only? (Score:5, Interesting)

    by egoots ( 557276 ) on Monday March 11, 2002 @04:23PM (#3144524)
    zlib is not OS-dependent. Many Windows-based products/projects use it as well. Are there some Linux-specific issues related to this overflow?... or is it just a headline hype thing?
    • by Starship Trooper ( 523907 ) on Monday March 11, 2002 @04:50PM (#3144711) Homepage Journal
      This bug causes zlib to free() a malloc'ed block of memory more than once. free() on most other OS's (including Windows, FreeBSD and OpenBSD) is smart enough to check for this and will print a warning instead of destroying the heap; glibc's malloc (and by extension, Linux's) does not and will gleefully make a mess out of the whole memory space. This can cause all sorts of buggery when the next malloc() occurs, including what amounts to a buffer overflow exploit.

      So, you should download the patched zlib, but you should also email the glibc maintainers [mailto] and demand that they implement a sane, error-checking malloc()/free() system. Linux's current allocation model is a disaster waiting to happen.

      • Sounds like that disaster you speak of did just happen. Well, maybe not quite a disaster, but anytime I have to upgrade 20+ packages for one bug, that's bad.
      • by slamb ( 119285 ) on Monday March 11, 2002 @05:02PM (#3144779) Homepage
        This bug causes zlib to free() a malloc'ed block of memory more than once. free() on most other OS's (including Windows, FreeBSD and OpenBSD) is smart enough to check for this and will print a warning instead of destroying the heap; glibc's malloc (and by extension, Linux's) does not and will gleefully make a mess out of the whole memory space. This can cause all sorts of buggery when the next malloc() occurs, including what amounts to a buffer overflow exploit.

        If you want this behavior, you can get it easily on Linux/glibc. From the malloc(3) manual page:

        Recent versions of Linux libc (later than 5.4.23) and GNU libc (2.x) include a malloc implementation which is tunable via environment variables. When MALLOC_CHECK_ is set, a special (less efficient) implementation is used which is designed to be tolerant against simple errors, such as double calls of free() with the same argument, or overruns of a single byte (off-by-one bugs). Not all such errors can be protected against, however, and memory leaks can result. If MALLOC_CHECK_ is set to 0, any detected heap corruption is silently ignored; if set to 1, a diagnostic is printed on stderr; if set to 2, abort() is called immediately. This can be useful because otherwise a crash may happen much later, and the true cause for the problem is then very hard to track down.
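
        To see concretely what this buys you, here is a minimal, made-up C sketch (not zlib code) of the double-free pattern being discussed. With MALLOC_CHECK_=1 glibc prints a diagnostic on stderr at the second free(); with MALLOC_CHECK_=2 it calls abort():

        #include <stdlib.h>

        int main(void)
        {
            char *buf = malloc(64);
            if (buf == NULL)
                return 1;
            free(buf);
            free(buf);   /* double free: by default this may silently corrupt the heap */
            return 0;
        }

        Run it as, say, MALLOC_CHECK_=2 ./a.out to get the abort() behavior the manpage describes.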
      • This is not entirely accurate. See my earlier post about a workaround [slashdot.org]. Mind you, the manpage cited /is/ annoyingly vague about the behavior of this facility, but you should keep in mind there is a performance/feature tradeoff, and your criticism assumes we should strive for robustness over performance.
  • by ubiquitin ( 28396 ) on Monday March 11, 2002 @04:25PM (#3144539) Homepage Journal
    Owen Taylor at RedHat found the bug. He works on GTK among other things, as you can see from the GTK+ release notes he posted earlier this month: mail.gnome.org/archives/gtk-devel-list/2002-March/msg00161.html [gnome.org]
  • by wrinkledshirt ( 228541 ) on Monday March 11, 2002 @04:25PM (#3144543) Homepage
    On the stuff I've been reading about finding and fixing buffer overflows, it seems like it's generally not too hard to spot where these things could potentially happen.

    My question is this: How feasible would it be for someone to take a computer and have it do nothing but pattern-matching through all the source code in a typical Linux distribution, looking specifically for problem areas like these? Obviously we couldn't rely on it as a foolproof audit, but has something like this ever been considered?
    • by SirSlud ( 67381 ) on Monday March 11, 2002 @04:41PM (#3144644) Homepage
      Some software packages do this .. Purify, etc. They're pretty expensive though. The problem is that the logic that results in a buffer overflow error can be VERY complicated, and so it's extremely difficult to spot sometimes even for the seasoned developer, never mind a clever regex.

      On the flip side, finding lots of memcpy's instead of strncpy might help you find the 'dumb' overflow bugs, but one would hope those aren't the ones we're most concerned about. :P Mostly, when copying and moving and generally playing with memory, if you spot functions without buffer limit or max byte limit arguments, you *might* be opening yourself up for trouble. Unfortunately, as I said, those are the easy ones. :) In reality, buffer overflow errors (and off-by-one bugs generally follow the 'simple errors can result from terribly complicated logic' construct of buffer overflow bugs) can be extremely difficult to spot if your input parsing/copying/moving mechanism is non-trivial.
    • How feasible would it be for someone to take a computer and have it do nothing but pattern-matching through all the source code in a typical Linux distribution...

      A pattern matcher could theoretically work (you would need some pretty bitchin' patterns though). However, what will definitely work is a meta-compiler which explicitly looks for these problems in code. The most straightforward way to implement this is to use some sort of logic or functional programming language (e.g., Prolog, SML, etc.) to act as a code verifier, to prove (not just test, but actually prove in a mathematical sense). A meta-compiler could check that all malloc()s are freed, all new's deleted, all bounds checking is enforced, etc. It is a very intensive process, but because parsers can be written which convert code into a universally understood syntax (independent of whitespace, coding style, etc.; note that this is done by all compilers), it could be done.

      Of course, the most effective way to solve this problem is to ensure that code is reviewed by someone other than the author so that these kinds of problems can be fixed.

    • by Carnage4Life ( 106069 ) on Monday March 11, 2002 @04:51PM (#3144713) Homepage Journal
      On the stuff I've been reading about finding and fixing buffer overflows, it seems like it's generally not too hard to spot where these things could potentially happen.

      From this statement I assume you are not a programmer. Buffer overflows caused by using known unsafe library functions (e.g. strcpy, strcat, gets, etc.) can be handled by simple pattern matching but actually investigating the code to make sure every memory/array access does not go out of bounds is not a simple pattern matching problem.

      However some automated techniques have been developed to discover buffer overflows and similar errors in a generic manner. The most significant efforts I have seen are the Stanford Meta-level Compilation Project [stanford.edu] and the /GS switch in Visual C++.NET [devx.com].
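
      To make the distinction concrete, here is a small illustrative C sketch (the function names are invented, not from any audited project). A pattern matcher can flag the strcpy() below, but deciding whether an arbitrary index or length computation stays in bounds is the hard part:

      #include <stdio.h>
      #include <string.h>

      void risky(const char *input)
      {
          char buf[16];
          strcpy(buf, input);                     /* overflows buf if strlen(input) >= 16 */
      }

      void bounded(const char *input)
      {
          char buf[16];
          snprintf(buf, sizeof buf, "%s", input); /* truncates instead of overrunning */
      }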
    • My understanding is that going through source to find possible run-time bugs is often not computable. Certainly you could find some things (x = *NULL will always fail, for instance, and while(1) will never halt), but lots of problems would never be found. A better solution would be to use a higher-level language so you don't have to worry about allocating and freeing memory, overrunning arrays, etc.

      --Ben
    • The whole issue is not to use insecure languages like C. Such languages allow all sorts of memory manipulation and typically depend on the programmer for security. The situation is made worse by the lack of something pretty essential: good libraries for string manipulation. While third-party libraries are available, you can't assume these to be present, so many developers still use char*.

      Code reviews help, testing helps, good programming helps. But none of these practices has succeeded in eliminating this type of bug. It is just not good enough; witness a zillion bugs and security breaches on all major OSes (including the ones deemed secure, and yes I am talking about BSD) throughout the last decade. These OSes only differ in how the issues are dealt with. The occurrence of the issue is a fact of life for all of them.

      There's no C developer that can claim his program is completely free of buffer overflows (many foolishly do, however). There may be undetected errors in the program, or the program may depend on third-party code that contains bugs (e.g. the compiler or one of the standard libraries). Most likely bugs in all of these categories are present.

      Automatic checks are indeed the solution to the problem, and modern languages build these checks into the run-time environment, where they belong. Buffer overflows are a non-issue in Java, for instance. The exception to this is native code and the JVM itself (written in C).

      To eliminate buffer overflows, getting rid of the C legacy is the only solution. Java is probably too controversial as an alternative right now (though arguably it is quite up to the task as far as server-side development is concerned), but there are other alternatives. Rebuilding server-side services like ftp, dns, ssh, smtp, pop, etc. is mandatory, since each of these services has widely used C implementations that are frequently plagued by buffer overflows. The only way to guarantee that there are none left is to reimplement them.
      • >the program may depend on third party code that contains bugs

        .. or your 'non insecure language' runtime could contain bugs (as you point out.) At some point, the machine gets instructions, and while safe languages might cut down on the number of 'dumb' overflow bugs, the simple fact remains that overflow bugs will _always_ exist.

        Education over technological safeguards (I could just see the next generation of programmers: 'buffer what?') .. otherwise, you rely on something you 'assume' is safe, but isn't necessarily so.

        All of this is notwithstanding the question: is a buffer overflow error in a platform that has 100% deployment better than lots of programs here and there getting their native buffer overflow bugs discovered? Interesting question unto itself ...

        Also, remember that performance requirements of core services (ftp, dns) have always outstripped performance supply. There will always be a need for languages that are 'closer to the CPU', in order to extract all possible performance to meet demand, and those will always be insecure.

        Sure, client-side run-once apps benefit greatly from safe languages, but there will always be a need for unsafe languages. To that end, I'd advocate pushing for a higher respect for 'safe' function use in code, by changing developers' attitudes from 'ah, what the hell, I'm too lazy to use the safe func' to 'if a safe func exists, use it or die'. Or writing safe wrappers to unsafe funcs, and having a policy saying you simply are not allowed to side-step the safe prototype.

        I really can't imagine justifying the additional cost & performance hit of safe platforms with the laziness and 'infallibility' of coders. If I were the one with my hands on the purse strings, I'd understand that bugs will *always* exist (so as to avoid putting all my proverbial eggs in one code base, which is probably the biggest problem facing our economy's reliance on software stability and security), and make sure I have developers that understand how to minimize the possibility of producing code with these types of errors.

        Also, don't forget .. a program without a buffer overflow bug is still exploitable through a veritable infinity of poor design choices, so are the benefits of nearly eliminating the possibility of buffer overflow bugs and memory-related exploits truly worth the performance hit for critical systems with high load and a perf demand that almost always outstrips perf supply?

        .. which is all a long winded way of saying, sure, your approach works for many cases, but it sounds like yet another case of someone with a hammer trying to make everything look like a nail. :)
        • These are the standard excuses I've been hearing for years. The number of buffer overflow related security issues has been a fact of life during all these years, so obviously relying on programmer discipline/skill won't get us anywhere. It has never been a solution and it will never be a solution.

          The JVM indeed has the problem that it is implemented in C, which involves the risk of it having buffer overflows (and other C-related bugs). However, you can't introduce new buffer overflows by programming in Java, and the JVM could theoretically be reimplemented in a language that has a bit more elaborate memory protection.

          Indeed, bug-free programs don't exist. However, minimizing the damage does help. The vast majority of programs implemented in C that we depend on might just as well be implemented in a safer language. This includes large parts of the kernel and device drivers. I'd say it is time to give a bit more priority to security. In principle, if security is of any concern, the use of C should be avoided since it involves unnecessary risks.

          The reason C is still being used is programmer laziness. They are too lazy to learn/invent better languages (just like they are too lazy to adopt the code practices you suggest). The buffer overflow issue is so stupid, a simple run-time check completely eliminates it.

          My suggestion simply is to stop developing in C. Stop new development in C, keep maintaining the old stuff and use more modern languages for new development. It can't be that hard; we can start with the smaller stuff like bind, openssh or telnet. I've seen well-performing servers of all kinds written in Java. If needed, there are more lightweight languages than Java that are still secure. Most of the stuff that runs on top of a Linux kernel currently implemented in C (and therefore a security risk) could be implemented in a safer language.
      • > Automatic checks are indeed the solution

        Not for the embedded market, and consoles! I'm a console (games) programmer. All those little checks *ADD* up. Checks should be at the *programmer's discretion*, not at the whim of a compiler or language.

        > to the problem and modern languages build these checks into the run-time environment, where they belong.

        And you're completely ignoring the cost (performance) of doing so! For a debug build, yes, I love having extra checks, but for a full optimization build, NO. The performance cost is too high. There is a reason you have them in a debug build -- to write your code properly (robustly) the first time, so you don't need the extra checking later.

        > To eliminate buffer overflows, getting rid of the C legacy is the only solution.

        Now you're trolling. You can write safe code in any language. Likewise you can write bad code in any language. Languages are *not* the silver bullet to the problem, but you for whatever reason think they are.
    • It would be possible to find occurences of known common mistakes in different packages automatically, and to find instances of a known error throughout a large project. There is work on doing this with the linux kernel, where a group has turned up hundreds of potentially unsafe usages (mostly DoS-type, mostly in drivers).

      This particular problem is not actually a buffer overflow at all, it turns out, and is more subtle.

      There are commercial programs which can find this sort of error, assuming it actually happens while running an instrumented version of the code. These programs are in the expensive commercial development tool price range (order $10k), though, and very difficult to write.

      Another possibility is to build and run the program under a virtual machine which checks these things; for example, if you turn your C into Java and run it, it shouldn't throw any out-of-bounds exceptions, or it indicates that your C wasn't checking array bounds. There isn't a C virtual machine (that I know of), though.
    • It'd be next to impossible to detect a second free() of some chunk of memory, because machines are horrible at guessing what a programmer meant to do, rather than what they did. There are many tools available to diagnose memory problems, but they are slow and must be monitoring the code while it's running to do any good; to boot, they probably won't find errors under normal program use. Besides, isn't this the C programmer's creed (the computer does what I tell it to do)? Well, zlib is doing exactly what the C programmer told it to do.
    • Insure (was Insight) [parasoft.com] does a tremendous job of this. I've been using it for many years, and it can find all sorts of runtime problems. Of course, doing all the runtime checks is slow, but the savings in development time are awesome.
      I am just a satisfied customer, and have no relationship with the company.
  • by Anonymous Coward on Monday March 11, 2002 @04:27PM (#3144558)
    The advisory for zlib-1.1.3 is at:

    http://www.zlib.org/advisory-2002-03-11.txt
    zlib Advisory 2002-03-11
    zlib Compression Library Corrupts malloc Data Structures via Double Free

    The new zlib (1.1.4) is at:

    ftp://ftp.info-zip.org/pub/infozip/zlib/zlib-1.1.4.tar.gz
  • Did this get released early? I got the RedHat advisory, but there is no source update at zlib.org, the CVE page at Mitre is empty and there is nothing from CERT yet.

    What gives? Does anyone know where a patch is available?

  • by joeflies ( 529536 ) on Monday March 11, 2002 @04:30PM (#3144581)
    As the article points out, anything that uses zlib statically can't be fixed by the new zlib patch until it's recompiled.

    As I'm not a programmer, what can I grep to search stuff I've compiled from source to determine what's using statically linked zlib?

    • by Stonehand ( 71085 ) on Monday March 11, 2002 @05:08PM (#3144815) Homepage
      You could write a script using 'nm' and 'grep' -- once you identify some functions in zlib. If they have a common prefix, search on that.

      Of course, if you stripped the symbols out of the binaries, then the function names won't be there for nm to find and you're quite screwed -- basically you'd have to go grab the sources again and scan the Makefiles and perhaps the code itself for zlib references.
    • You're absolutely right -- the only thing that a binary download will fix is packages using the libz.so shared library. Most software seems to link with the library statically. This is a huge problem.

      I'm currently running this command against my /usr/src directory, just to get a preliminary list of packages to recompile:

      grep -e '-lz' `find . -name 'Makefile'` > ~/zlib-dependencies

      Assuming you've still got your source tree intact since you compiled, this should find all makefiles which reference the zip library. If you've deleted any source directories, you will have to untar them and run configure again to build the makefiles.

    • There are the right ways, then there is the easy, 99% effective way. The easy way is to search for very specific error message strings, which are sort of a fingerprint for most software. I compiled zlib then used "strings libz.a" to find these error messages:

      too many length or distance symbols
      invalid literal/length code

      A quick grep for one of those two strings reveals quite a number of statically linked versions of zlib in /usr/bin.
  • No buffer overflow! (Score:3, Informative)

    by lkaos ( 187507 ) <anthony@NOspaM.codemonkey.ws> on Monday March 11, 2002 @04:32PM (#3144594) Homepage Journal
    From the zlib.org page:

    The vulnerability results from a programming error that causes segments of dynamically allocated memory to be released more than once (aka. "double-freed"). Specifically, when inftrees.c:huft_build() encounters the crafted data, it returns an unexpected Z_MEM_ERROR to inftrees.c:inflate_trees_dynamic(). When a subsequent call is made to infblock.c:inflate_blocks(), the inflate_blocks function tries to free an internal data structure a second time.


    Because this vulnerability interferes with the proper allocation and de-allocation of dynamic memory, it may be possible for an attacker to influence the operation of programs that include zlib. In most circumstances, this influence will be limited to denial of service or information leakage, but it is theoretically possible for an attacker to insert arbitrary code into a running program. This code would be executed with the permissions of the vulnerable program.


    Duplicate deletions are not the same as buffer overflows and are nowhere near as easy to exploit. In fact, I have _never_ seen a duplicate deletion exploitation other than a simple DoS. Not to mention the fact that it requires a special series of calls from the calling program.

    In summary, the world hasn't come to an end, and Free Software hasn't all of a sudden become as vulnerable as closed source software. Put the pills down and relax :)
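
    For readers who want to picture the failure mode without reading inftrees.c, here is a hedged, generic C sketch of the pattern described above (an error path frees a block, then a later cleanup frees it again). The names are invented for illustration; this is not the actual zlib code:

    #include <stdlib.h>

    struct tree {
        int *table;
    };

    static int build(struct tree *t, int bad_input)
    {
        t->table = malloc(64 * sizeof *t->table);
        if (t->table == NULL)
            return -1;
        if (bad_input) {        /* crafted data sends us down the error path */
            free(t->table);     /* first free; the pointer is left dangling */
            return -1;
        }
        return 0;
    }

    static void cleanup(struct tree *t)
    {
        free(t->table);         /* on the error path this is a second free of the same block */
        t->table = NULL;
    }

    int main(void)
    {
        struct tree t = { 0 };
        if (build(&t, 1) != 0)
            cleanup(&t);        /* the double free happens here */
        return 0;
    }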
  • by Anonymous Coward
    What about closed source projects that statically link these sorts of things?

    Quake 3, for example, statically links zlib in to deal with decompressing pk3 (zip) files. If the client auto-download is on, pk3 files can be downloaded from the quake server.

    I don't mean to be an alarmist, but this is something that should be considered. Zlib is linked into Quake 3 on all platforms.

  • by RichiP ( 18379 ) on Monday March 11, 2002 @04:36PM (#3144616) Homepage
    One of the problems I have with some Open Source software is that they're written by people who don't pay attention to such details as error-catching and buffer overflows. They all have great ideas to do things and always leave the minute detail of error-checking for some later date.

    zlib is such a simple library compared to most software libs out there. The source is approximately 9,000 lines of code (comments included) and exports a handful of functions. And yet a buffer-overflow situation exists. It's unfortunate that a lot of projects link zlib in, and some of them statically! This really has the potential to be a disaster.

    People ... be careful when you code!
    • Oh really? (Score:2, Flamebait)

      by sterno ( 16320 )
      And you know this how? Thank you for making an unbacked assertion about the behavior of open source vs. closed source programmers. If you said it, it must be true.

      Really, the fact of the matter is that programmers are human. This means that, every so often, they get lazy about checking something or possibly just make a mistake. This is true of open source or closed source programmers.

      Now, having said that, as a user of open source software, you at least have the opportunity to check other people's mistakes. You can do your own personal code review and find those dumb mistakes somebody made. On the other hand, that is not possible with closed source software.
    • ...Open Source software is that they're written by people who don't pay attention.... They all have great ideas to do things and always leave the minute detail of error-checking for some later date.

      I hate to break the news to you, but the same thing happens frequently in the commercial world. Management asks for a prototype, provides a negative number of mandays to complete it in. Development hustles through a prototype -- no error checking, poor design, etc. Management loves the results and provides Development with even less time to make it into a final product. Just do the minimum amount of work. We can go back and fix it later. Later never comes. Quality is often ignored for speed.

      Also keep in mind that many of the people who write Open Source software also have a "day job" where they are creating commercial software (or important software for in-house use). My guess is that their coding practices vary little between work and hobby.

    • Unfortunately the same problem happens with closed-source programmers so that is not a solution. I tracked several of these things down at Sun (in Olit and Xlib) and it was not fun, and they seemed to exist despite the fact that Sun's source code was closed. And in my current position I have made hundreds of such mistakes, the fact that the software is closed-source did not somehow magically change my brain so I did not do it! I also write open-source software and I would say the proportion of stupid mistakes I make in both is about equal.

  • Static linking bad (Score:4, Insightful)

    by Jeffrey Baker ( 6191 ) on Monday March 11, 2002 @04:37PM (#3144620)
    Maybe this will finally kill all those apps that statically link zlib, or include their own version of the source. Code reuse: good. Dynamic linking: good. Library versions: good. Cut-and-paste, static linking, namespace collisions, recompiling: bad.
  • The article says that the kernel is affected - does it statically link zlib, in which case a recompile is in order, or do I just need to upgrade the zlib package?

    Or is the article lying?
    • by ceswiedler ( 165311 ) <chris@swiedler.org> on Monday March 11, 2002 @04:47PM (#3144685)
      The only dynamic linking the kernel uses is modules, which aren't used for providing library routines like zlib. The kernel does not link .so files. The code is almost certainly cut-and-pasted into the ppp compression code somewhere.
    • by Mr Z ( 6791 ) on Monday March 11, 2002 @05:08PM (#3144817) Homepage Journal

      One place kernel uses zlib is to compress the kernel boot image. The kernel image then gets decompressed during bootup. So, from the standpoint of "the kernel uses zlib", the kernel is affected. There is, however, no new vulnerability introduced as far as I can tell. To attack the zlib-based decompression that the kernel performs, an attacker would need to modify the compressed kernel image that is used to boot the machine. I can think of far more fruitful ways to compromise a machine by modifying the kernel image than by trying to dork the zlib decompression that happens before the kernel even runs.

      Another place the kernel uses ZLib is when mounting compressed filesystems. (Compressed RAM disks and zisofs come to mind.) In this case, you're asking a live kernel to decompress arbitrary data. These are only issues when mounting untrusted media. If you made the media yourself, then your only risk is that corrupted media might cause a kernel oops. And if you don't have cramdisk and zisofs compiled in, you're safe.

      Other places the kernel seems to use ZLib (from a cursory scan of the source -- there may be others):

      • jffs2 -- Journalling Flash Filesystem version 2
      • ppp -- used for ppp_deflate option

      In any case, the kernel is a statically linked entity, with a minor exception for modules. ZLib is not a module, therefore to upgrade ZLib in the kernel, you'll need to rebuild the kernel. And it doesn't appear to be as easy as just upgrading ZLib and rebuilding the kernel. The kernel has multiple modified copies of ZLib in its source tree. I'd wait for an official kernel patch.

      --Joe
  • ouch (Score:5, Interesting)

    by Phexro ( 9814 ) on Monday March 11, 2002 @04:41PM (#3144648)
    Is it just me, or have there been a really huge amount of security issues with Free/Open Source software this year?

    It just seems like there's a new hole (or two) every week. Let's see, we've had openssh, zlib, php, mod_ssl, cvs, cups, rsync, exim, ncurses, glibc and more, just since January. We've still got two-thirds of the year to go. Anyone want to make bets on what other projects will get hit? I think we're going to see problems with XFree86, samba, and apache.

    So, my question is this: Do you think that this is simply a bad time for FS/OSS security? Are we at the threshold where there are enough eyes on the code to locate these kinds of bugs? Or is the quality of FS/OSS declining?
    • And it seems they aren't too ashamed to admit they made mistakes ;)

      But to answer your last question, it seems it's only getting better with all these bugs being found and all.
    • Not a surprise... (Score:2, Insightful)

      by Joseph Vigneau ( 514 )
      Is it just me, or have there been a really huge amount of security issues with Free/Open Source software this year?

      Yes; perhaps this is due to the fact that FS/OSS is used by more developers/users. More eyeballs and more code to exercise libraries mean more bugs are discovered. As mentioned, this bug is (relatively) benign, and has already been fixed in the source. So I wouldn't necessarily say that FS/OSS is getting "more buggy", any more than commercial software, whose bugs don't leave the company if users don't discover them first.
    • Re:ouch (Score:4, Insightful)

      by the Man in Black ( 102634 ) <jasonrashaad&gmail,com> on Monday March 11, 2002 @04:57PM (#3144749) Homepage
      All of whom were stamped out within hours of being found.

      That's the strength of open source.
      • Re:ouch (Score:2, Insightful)

        If the strength of Linux is closing the barn doors after the horses have run amok, I think I'll investigate BSD, where they, you know, actively audit the code.
        • Your analogy doesn't make much sense. I would say "the horses have run amok" when there are actually boxes being compromised using this hole. At the moment, this is a purely theoretical exploit. This is more like discovering the barn door was open, and closing it before any horses got out.

          Your other point is a good one, though. Anyone interested in secure servers would do well to investigate OpenBSD, as they have spent huge amounts of time auditing their code.

          • Actually, I'd say this is more like finding out that the vast majority of barn doors can be opened by knocking in a certain pattern, and knowing for a damn fact that most farmers won't bother to install the new latch that's being given away for free. My point mainly is that while having armies of college students coding is wonderful in some respects, it's horrible in other respects.
            • Can anyone here say Code Red?

              I agree that discovering that the barn door could be opened before it was installed is the best solution, however just in case something is found, you have to have a half-decent backup plan.

              This is something that I believe Windows XP got right - by forcing users to download the latest patches, it means that the clueless are no longer vulnerable, and those who know enough to turn the updates off are either keeping up with the patches themselves, or basically have nobody to blame when Code Red III comes along...
    • Well, there might be a lot of these buffer overflow (or double free in this case) bugs this year in FS/OSS software, but you have to keep in mind that most of these bug postings are *theoretical* security flaws. Many of these don't even have an exploit coupled with them; it's just that since people can go through the code, it's easier to spot flaws that create a potential security breach.

      Compare that with most of the closed-source security flaws this year, which had more to do with actual exploited vulnerabilities than with potential theoretical vulnerabilities.

      I guess the difference is that in a lot of these FS/OSS projects the potential problems are announced and stomped out quicker than they can become actual exploited vulnerabilities.

      A double free that creates a potential remote hole (and more likely a potential DoS) isn't good, but if we weight security flaws, I would weight that far below something like Code Red or Sircam which is being actively exploited.
      • "...most of these bug postings are *theoretical* security flaws."

        L0pht Heavy Industries: "Making the theoretical practical since 1992."

        The fact is, a theoretical flaw is still a flaw, and it's still relevant to my original question.

        "Many of these don't even have an exploit coupled with them..."

        ...yet.

        "I guess the difference is that in a lot of these FS/OSS projects the potential problems are announced and stomped out quicker than they can become actual exploited vulnerabilities."

        This is something that's been bugging me as well. It seems that more security bugs are kept quiet until a fix is prepared. Personally, I'd much rather know ASAP, so I can at least disable or filter the vulnerable service. I think that as soon as a vuln. is discovered & discussed on any public list, it becomes more likely that an exploit is available. In some cases (the WU-FTPd globbing bug comes to mind), the vuln. was known about for some time before the fix was available. In this case, there was even an exploit available. The bug was discovered in July, a security advisory was being drafted as early as September, and the fix didn't get released until December.
    • It's mostly because a certain large vendor's publicity machine keeps pushing any bug report that impacts open source in the slightest as a big thing and feeding it to the press.

      Dirty game really, but that's how they play it.

      This is why news stories keep starting with "Linux blah" and then turning into "everyone blah".

      • And inversely, any bug report for a certain large vendor starts out being an "internet flaw" or a "web browser/email bug" and eventually is clarified to affect only a specific vendor's product.
      • In this case, it's just a lame attempt by Slashdot to be "provocative". However, there is some truth in saying that it's a Linux (actually, GNU/Linux and Linux kernel) problem.

        As other posters noticed, GNU libc defaults to an unsafe but faster free() implementation that can damage the heap if called more than once on the same pointer. Other operating systems are said to default to a safer malloc() implementation.

        The question is, if, and to which extent, potential extra security justifies real trade-offs in speed.

        Using shared libraries is not free of trade-offs. Position independent code (PIC) used in them is slower on many architectures, including x86. However, shared libraries are preferred because, should anything like this bug in zlib be found, the only package to replace will be the faulty library (and the "offenders" linking to it statically, like CVS and rsync).

        But the measure that could have prevented this bug from being an issue on GNU/Linux has not been taken on the same grounds of speed! Yet I think that a lot of programs suffer (in terms of speed) more from slower PIC code than from safer malloc(). No I'm not suggesting going back to static libraries. Rather, we should acknowledge that there is a price tag attached to the peace of mind.

        Linux kernel developers should have examined the zlib code for compatibility with Linux kernel-space memory management.

    • It's because buffer overflows are a hot topic now. Microsoft made them famous due to the exploits. Now everyone is quick to point them out and post them on their websites. I'd almost bet buffer overflow news in your favorite OS would get just as many hits as your favorite teen popstar naked. God knows I've done more searches this year for Red Hat and Windows bug errata than I have for Britney Spears' boobs.

      As for the quality of free and open source software, well, I've never understood why people think it is any better than commercial software. I've been a professional programmer for 10 years and I've seen whack commercial code, whack open source code, and I'm pretty sure I've written some whack code myself. The license doesn't have much to do with it except for having NDAs keeping me from discussing some commercial code. Having open source visible on the bug watch list doesn't hurt much because fixes are usually available. It's when people don't pay attention to the fixes that the problem gets out of hand. Remember, the fix for the exploit that Code Red used was out a few weeks before Code Red. Now if we see this zlib bug take down a big chunk of the Linux community in a few weeks, then it will be a bad thing because we didn't pay attention.

  • by Anonymous Coward
    Thankfully the 9000 lines of regularly copied and pasted source commonly known as zlib does not infest the MacOS universe, or Mac servers at least.

    The MacOS running WebStar as a server has never been exploited.

    In fact in the entire securityfocus (bugtraq) database history there has never been a Mac exploited over the internet remotely.

    That is why the US Army gave up on MS IIS and got a Mac with WebStar.

    I am not talking about BSD-derived MacOS X (which already had a couple of exploits); I am talking about Mac OS 9 and earlier.

    Why is it hack proof? These reasons:

    1> No command shell. No shell means no way to hook or intercept the flow of control with the various shell-oriented tricks found in Unix or NT.

    2> No root user. All Mac developers know their code is always running at root. Nothing is higher (except undocumented microkernel stuff where you pass Gary Davidian's birthday into certain registers and make a special call). By always being root there is no false sense of security.

    3> Pascal strings. ANSI C strings are the number one way people exploit Linux and Wintel boxes. The Mac has historically avoided C strings in most of its OS. In fact even its ROMs originally used Pascal strings. As you know, Pascal strings are faster than C strings (because they have the length delimiter in the front and do not have to endlessly hunt for NULL), but the side effect is fewer buffer exploits.

    4> Stack return address positioned in a safer location than on Intel. Buffer exploits take advantage of loser programmers' lack of string length checking and clobber the return address to run their exploit code instead. The Mac places the return address in front of where the buffer would overrun. Much safer.

    5> Macs running WebStar have the ability to only run CGIs placed in the correct directory location and correctly file-typed.

    6> Macs never run code merely based on how a file is named. ".exe" suffixes mean nothing. For example, the file type is 4 characters of user-invisible attributes, along with many other invisible attributes, but these 4 bytes cannot be set by most tool-oriented utilities that work with data files. For example, file copy utilities preserve launchable file types, but JPEG, MPEG, HTML, TXT etc. oriented tools are physically incapable of creating an executable file. The file type is not set to executable for the hacker's needs. In fact it's even more secure than that. A Mac cannot run a program unless it has TWO files. The second file is an invisible file associated with the data fork file and is called a resource fork. EVERY Mac program has a resource fork file containing launch information. It needs to be present. Typically JPEG, HTML, MPEG, TXT, ZIP, C, etc. are merely data files and lack resource fork files, and even if they had them they would lack launch information. But the best part is that Mac web programs and server tools do not usually create files with resource forks. TOTAL security.

    7> There are fewer Macs, though there are huge cash prizes for cracking into a MacOS-based WebStar server. Fewer Macs means less hacker interest, but there are millions of Macs sold, and some of the most skilled programmers are well versed in systems-level Mac engineering and know of the cash prizes, so it's a moot point, but perhaps Macs are never cracked because there appear to be fewer of them. (Many Macs pretend they are Unix and give false headers to requests to keep up the illusion: ftp, http, finger, etc.)

    8> MacOS source is traditionally not available, except within Apple, similar to Microsoft source availability to its summer interns and such; source is rare for MacOS. This makes it hard to look for programming mistakes, but I feel the restricted source access is not the main reason the MacOS has never been remotely broken into and exploited.

    Sure, a fool can install freeware and shareware server tools and insecure third-party add-on tools for e-commerce, but a Mac (MacOS 9) running WebStar is the most secure web server possible, and WebStar offers many services as is.

    I think it's quite amusing that there are over 200 or 300 known vulnerabilities in RedHat over the years and not one MacOS remote exploit hack. And now with zlib, even more holes can be found in Linux.

  • Easy Workaround! (Score:5, Interesting)

    by dtrombley ( 565773 ) <[ten.abmub] [ta] [mortd]> on Monday March 11, 2002 @04:46PM (#3144679)
    Well, it won't prevent the DoS aspect - but, from the malloc manpage:

    Recent versions of Linux libc (later than 5.4.23) and GNU libc (2.x) include a malloc implementation which is tunable via environment variables. When MALLOC_CHECK_ is set, a special (less efficient) implementation is used which is designed to be tolerant against simple errors, such as double calls of free() with the same argument, or overruns of a single byte (off-by-one bugs). Not all such errors can be protected against, however, and memory leaks can result. If MALLOC_CHECK_ is set to 0, any detected heap corruption is silently ignored; if set to 1, a diagnostic is printed on stderr; if set to 2, abort() is called immediately. This can be useful because otherwise a crash may happen much later, and the true cause for the problem is then very hard to track down.

    Seems worthwhile while we all pore through the symbol tables of our static binaries (and recompile the stripped ones. =( )

    On another note, I've always regarded security bulletins as a one-way process... For example, I couldn't find a way to tell RedHat they'd omitted this (seemingly important?) reminder. Any thoughts about this? (Admittedly I didn't look very hard for very long.)

  • by Anonymous Coward on Monday March 11, 2002 @04:48PM (#3144696)
    OpenSSH uses zlib - if you happen to compile OpenSSH statically with zlib (I think that's the default), that's one more upgrade cycle after the latest OpenSSH 3.0.2p1 bug... :(
  • by coyote-san ( 38515 ) on Monday March 11, 2002 @05:06PM (#3144809)
    This is why you ALWAYS set a pointer to NULL after freeing it, even if it's "totally unnecessary" because you're about to free the structure holding the pointer.

    This doesn't prevent attempts to free the previously freed pointer, but that will generally do a lot less damage than freeing a real malloc'd address. And during development it's trivial to add an assertion checking for a NULL pointer before any free().
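
    A minimal sketch of that discipline (SAFE_FREE is an invented name, not from any standard header). In a release build a repeated call turns into free(NULL), which is defined to be a no-op; in a debug build the assert catches the double free early:

    #include <assert.h>
    #include <stdlib.h>

    #ifdef DEBUG
    #define SAFE_FREE(p) do { assert((p) != NULL); free(p); (p) = NULL; } while (0)
    #else
    #define SAFE_FREE(p) do { free(p); (p) = NULL; } while (0)
    #endif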
    • by Tom7 ( 102298 )
      I dunno, most double frees come from freeing DIFFERENT copies of a pointer. Setting one to NULL won't help in this case...

      (A much better solution is to use a garbage collector. ;))
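
      A quick made-up example of what the parent means; nulling out one copy does nothing for the other:

      #include <stdlib.h>

      int main(void)
      {
          char *a = malloc(32);
          char *b = a;  /* a second copy of the same pointer */

          free(a);
          a = NULL;     /* a later free(a) would now be a harmless no-op... */
          free(b);      /* ...but this is still a double free */
          return 0;
      }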
  • by hysterion ( 231229 ) on Monday March 11, 2002 @05:18PM (#3144906) Homepage
    Part 1: libz/zlib [suse.com]
    Part 2: packages containing libz/zlib [suse.com]

    From part 2:

    The packages affected by the double-free() libz bug can be divided into
    two categories:

    1) packages that link dynamically against the system-provided
    compression library. These packages get fixed automatically with
    the update of the libz package as described in SuSE-SA:2002:010.
    Please note that the processes will continue to use the old
    version of the libz.so shared library if they have not been
    restarted after the libz package upgrade.

    2) packages that contain the compression library in their own
    source distribution. These packages need an individual bugfix.
    We have prepared update packages for this software that can be
    downloaded from the locations as shown below.

    The following is a list of the packages in category 2):
    gpg
    rsync
    cvs
    rrdtool
    freeamp
    netscape
    vnc
    kernel

  • While it's annoying to discover a buffer-overflow problem (or whatever; I haven't examined the report closely) in Linux and other OSS, if you ever wanted confirmation that Linux is being taken seriously by the public at large, it's that c|net thought a Linux bug worthy of reporting. Has that ever happened before?
  • On Linux: add this at the beginning of your /etc/rc file and in your shell init scripts:

    export MALLOC_CHECK_=2

    (don't forget the extra underscore at the end).

    On BSD systems:

    ln -s ZH /etc/malloc.conf

    It will protect both your statically and dynamically linked apps. It adds a little performance penalty, but it's really not noticeable.

  • by chrysalis ( 50680 ) on Monday March 11, 2002 @06:54PM (#3145556) Homepage
    If you have to remotely upgrade the zlib library, be *very careful*.

    Because SSH/OpenSSH depend on zlib, if you replace your current libz.so file with another version whose API has changed a bit, your SSH server won't work any more.

    So if you don't have access to the console, open a classical 'telnet' port for a few minutes, just during the upgrade. Once you've checked that SSH is still ok, you can remove the telnet daemon.

    If SSH doesn't work any more after the zlib upgrade, recompile SSH.

    • by Electrum ( 94638 ) <david@acz.org> on Monday March 11, 2002 @08:03PM (#3145917) Homepage

      So if you don't have access to the console, open a classical 'telnet' port for a few minutes, just during the upgrade. Once you've checked that SSH is still ok, you can remove the telnet daemon.

      Since the SSH server forks after you've connected, you can safely stop the server while connected via SSH. You never need to use telnet. Just make sure that you can still connect before disconnecting from the original SSH connection.
  • by Tom7 ( 102298 ) on Monday March 11, 2002 @07:52PM (#3145861) Homepage Journal

    Like most recent security holes in Linux software, this one would be unexploitable in a modern safe language. (In fact it would be *impossible* to make this error in a garbage-collected language!)

    The typical response I hear to this kind of comment is that "high level languages are inefficient". (I don't believe this is true, but most other people here do.) But whatever, let's pretend they are.

    Now, what kind of crazy world do we live in where we value performance more than correctness (security)?? We are seeing more and more security holes as we try to write bigger and bigger packages in C. Why do we accept this? Who here really cares more about the performance of zlib than the time it takes for them to patch all of their statically-linked software, and their risk of being rooted until they do? I sure don't.

    Forget about all this "coding practices" stuff. It simply takes too much effort to produce bug-free code in C. The OpenBSD people, kings of code review, just had an exploitable bug in sshd! While we need to use C for some tasks (i.e., most parts of the kernel), I think we are seriously underpowered to do this for most applications (as evidenced by the high number of simple errors made, and sometimes caught).

    If we simply wrote our software in high level languages, we would automatically rule out the largest classes of security holes, which would give us a lot more time to work on more important things, like high level architecture review and optimizations. I think we'd end up with a better system. So what's keeping us?

    For more discussion, see our big argument in the story about the OpenSSH root hole. http://slashdot.org/comments.pl?sid=29123&cid=3124957 [slashdot.org]

  • XFree86 4.2.0?? (Score:3, Insightful)

    by gweihir ( 88907 ) on Monday March 11, 2002 @08:47PM (#3146107)
    The latest XFree comes with a copy of zlib 1.0.8.

    Does anybody know where this is used and whether I should do a rebuild with the current 1.1.4 version?

    In addition to gs, this seems to be the only software package that contains zlib in it. I found it because there is a /usr/X11R6/lib/libz.a on my Linux system.
