Posted by michael from the sauce-for-the-gander dept.
SirTimbly writes: "CNET is reporting that there is a buffer overflow problem with zlib in Linux, which is used for network compression. Supposedly, someone could remotely cause a buffer overflow through Mozilla, X11 and many other programs." The advisory from Red Hat is available.
Um... because that's the way nearly every package that uses zlib links it? For instance, OpenSSH AFAIK will only statically link it (so if you rebuilt OpenSSH last week to fix this hole [slashdot.org], you get to rebuild it again :-)).
(I'm rebuilding OpenSSH on the work machines right now... I checked to see if it would link to libz.so, but it seems to only want libz.a.)
zlib is not OS-dependent. Many Windows-based products/projects use it as well. Are there some Linux-specific issues related to this overflow, or is it just headline hype?
This bug causes zlib to free() a malloc'ed block of memory more than once. free() on most other OS's (including Windows, FreeBSD and OpenBSD) is smart enough to check for this and will print a warning instead of destroying the heap; glibc's malloc (and by extension, Linux's) does not and will gleefully make a mess out of the whole memory space. This can cause all sorts of buggery when the next malloc() occurs, including what amounts to a buffer overflow exploit.
So, you should download the patched zlib, but you should also email the glibc maintainers [mailto] and demand that they implement a sane, error-checking malloc()/free() system. Linux's current allocation model is a disaster waiting to happen.
Sounds like that disaster you speak of did just happen. Well, maybe not quite a disaster, but anytime I have to upgrade 20+ packages for one bug, that's bad.
My count of 20 packages is based on the warning redhat distributed. They recommend upgrading some 22 packages on a Redhat 7.2 system. This includes several packages for the kernel, zlib itself, and a number of other applications that are statically linked such as cvs, dump, etc.
The scary thing is, I may have installed other apps that have zlib statically compiled that I don't even know about because they aren't part of the default vendor distribution.
This bug causes zlib to free() a malloc'ed block of memory more than once. free() on most other OS's (including Windows, FreeBSD and OpenBSD) is smart enough to check for this and will print a warning instead of destroying the heap; glibc's malloc (and by extension, Linux's) does not and will gleefully make a mess out of the whole memory space. This can cause all sorts of buggery when the next malloc() occurs, including what amounts to a buffer overflow exploit.
If you want this behavior, you can get it easily on Linux/glibc. From the malloc(3) manual page:
Recent versions of Linux libc (later than 5.4.23) and GNU libc (2.x) include a malloc implementation which is tunable via environment variables. When MALLOC_CHECK_ is set, a special (less efficient) implementation is used which is designed to be tolerant against simple errors, such as double calls of free() with the same argument, or overruns of a single byte (off-by-one bugs). Not all such errors can be protected against, however, and memory leaks can result. If MALLOC_CHECK_ is set to 0, any detected heap corruption is silently ignored; if set to 1, a diagnostic is printed on stderr; if set to 2, abort() is called immediately. This can be useful because otherwise a crash may happen much later, and the true cause for the problem is then very hard to track down.
If they ripped them off as you say, they would not be able to license them as GPL now, could they?
Sure they could. The BSD license lets anyone do whatever they want, including relicensing the code under the GPL. There is already BSD code in the Linux kernel.
The TCP stack for almost all OS's was {stolen|borrowed|based on} the TCP stack in BSD 4.x (IIRC). I remember once, a few years ago, a system engineer did a special sequence on a Windows 98 computer, and got the BSD copyright to show up.
VM in Linux is a mess. Two (very) different VM systems, which caused quite a few utilities to be rewritten. All of the BSDs have one each, and they optimize their VM quite well. FreeBSD has a very robust one, which is a bit slower than Linux's, but seems to be more stable.
OpenBSD just got a stateful firewall with the latest version (3.0). Prior to that, they were using Darren Reed's IPF, but due to a licensing fiasco (and petty name calling), IPF was yanked, and PF was created. I use PF in my firewall at home, and I am quite happy. I can't wait until 3.1 (not willing to run -current) to see how much more robust it can get. =)
AFAIK: The kernel doesn't use the glibc C library. It has its own memory management code which presumably the kernel zlib code uses. This memory manager may or may not guard against free()ing the same area twice.
But what is the use of zlib in the kernel anyway? Just to uncompress the vmlinuz image before the kernel starts? If so it's not much of a vulnerability, if you can corrupt the vmlinuz file then you can control the whole system anyway.
This is not entirely accurate.
See my earlier post about a workaround [slashdot.org].
Mind you, the manpage cited *is* annoyingly vague about the behavior of this facility, but you should keep in mind there is a performance/feature tradeoff, and your criticism assumes we should strive for robustness over performance.
by Anonymous Coward on Monday March 11, 2002 @05:12PM (#3144847)
So what you are saying is that Slashdot has been wrong in the past to criticise Microsoft for seeking performance ahead of robustness. Glad we've cleared things up.
*BSD's malloc manages to simultaneously provide high performance while also providing robust (and highly configurable) error checking. glibc's MALLOC_CHECK_ variable does far too much and isn't nearly as fine-grained as BSD's options. Read the "TUNING" section of FreeBSD's malloc(3) manpage [freebsd.org]. It puts Linux to shame as far as clarity, usefulness, and convenience goes. You only turn on the error checks you need, instead of a few general and poorly-implemented checks in glibc's malloc.
Why Linux can't follow in the supposedly-inferior BSD's footsteps is beyond me.
On the stuff I've been reading about finding and fixing buffer overflows, it seems like it's generally not too hard to spot where these things could potentially happen.
My question is this: How feasible would it be for someone to take a computer and have it do nothing but pattern-matching through all the source code in a typical Linux distribution, looking specifically for problem areas like these? Obviously we couldn't rely on it as a foolproof audit, but has something like this ever been considered?
Some software packages do this... Purify, etc. They're pretty expensive, though. The problem is that the logic that results in a buffer overflow error can be VERY complicated, and so it's extremely difficult to spot sometimes, even for the seasoned developer, never mind a clever regex.
On the flip side, finding lots of memcpy's instead of strncpy might help you find the 'dumb' overflow bugs, but one would hope those aren't the ones we're most concerned about. :P Mostly, when copying and moving and generally playing with memory, if you spot functions without buffer limit or max byte limit arguments, you *might* be opening yourself up for trouble. Unfortunately, as I said, those are the easy ones. :) In reality, buffer overflow errors (and off-by-one bugs generally follow the 'simple errors can result from terribly complicated logic' construct of buffer overflow bugs) can be extremely difficult to spot if your input parsing/copying/moving mechanism is non-trivial.
How feasible would it be for someone to take a computer and have it do nothing but pattern-matching through all the source code in a typical Linux distribution...
A pattern matcher theoretically could work (you would need some pretty bitchin' patterns though). However, what will definitely work is a meta-compiler which explicitly looks for these problems in code. The most straightforward way to implement this is to use some sort of logic or functional programming language (e.g., Prolog, SML) to act as a code-verifier, to prove (not just test, but actually prove in a mathematical sense). A meta-compiler could check to see that all malloc's are freed, all new's deleted, all bounds checking is enforced, etc. It is a very intensive process, but because parsers can be written which convert code into a universally understandable syntax (independent of whitespace, coding style, etc.; note that this is done by all compilers), this could be done.
Of course, the most effective way to solve this problem is to ensure that code is reviewed by someone other than the author so that these kinds of problems can be fixed.
On the stuff I've been reading about finding and fixing buffer overflows, it seems like it's generally not too hard to spot where these things could potentially happen.
From this statement I assume you are not a programmer. Buffer overflows caused by using known unsafe library functions (e.g. strcpy, strcat, gets, etc.) can be handled by simple pattern matching but actually investigating the code to make sure every memory/array access does not go out of bounds is not a simple pattern matching problem.
However, some automated techniques have been developed to discover buffer overflows and similar errors in a generic manner. The most significant efforts I have seen are the Stanford Meta-level Compilation Project [stanford.edu] and the /GS switch in Visual C++ .NET [devx.com].
My understanding is that going through source to find possible run-time bugs is, in general, undecidable. Certainly you could find some things (x = *NULL will always fail, for instance, and while(1) will never halt), but lots of problems would never be found. A better solution would be to use a higher-level language so you don't have to worry about allocating and freeing memory, overrunning arrays, etc.
The whole issue is not to use insecure languages like C. Such languages allow all sorts of memory manipulation and typically depend on the programmer for safety. The situation is made worse by the lack of something pretty essential: good libraries for string manipulation. While 3rd-party libraries are available, you can't assume these to be present, so many developers still use char*.
Code reviews help, testing helps, good programming helps. But none of these practices has succeeded in eliminating this type of bug. It is just not good enough; witness a zillion bugs and security breaches on all major OSes (including the ones deemed secure, and yes, I am talking about BSD) throughout the last decade. These OSes only differ in how the issues are dealt with. The occurrence of the issue is a fact of life for all of them.
There's no C developer who can claim his program is completely free of buffer overflows (many foolishly do, however). There may be some undetected errors in the program, or the program may depend on third-party code that contains bugs (e.g. the compiler or one of the standard libraries). Most likely bugs in all three categories are present.
Automatic checks are indeed the solution to the problem, and modern languages build these checks into the run-time environment, where they belong. Buffer overflows are a non-issue in Java, for instance. The exception to this is native code and the JVM itself (written in C).
To eliminate buffer overflows, getting rid of the C legacy is the only solution. Java is probably too controversial as an alternative right now (though arguably it is quite up to the task as far as server side development is concerned) but there are other alternatives. Rebuilding serverside services like ftp, dns, ssh, smtp, pop, etc. is mandatory since each of these services has widely used C implementations that are frequently plagued by buffer overflows. The only way to guarantee that there are none left is to reimplement them.
>the progrm may depend on third party code that contains bugs
.. or your 'non insecure language' runtime could contain bugs (as you point out.) At some point, the machine gets instructions, and while safe languages might cut down on the number of 'dumb' overflow bugs, the simple fact remains that overflow bugs will _always_ exist.
Education over technological safeguards (I can just see the next generation of programmers: 'buffer what?')... otherwise, you rely on something you 'assume' is safe, but isn't necessarily so.
All of this is notwithstanding the question: is a buffer overflow error in a platform that has 100% deployment better than lots of programs here and there getting their native buffer overflow bugs discovered? Interesting question unto itself...
Also, remember that performance requirements of core services (ftp, dns) have always outstripped performance supply. There will always be a need for languages that are 'closer to the CPU', in order to extract all possible performance to meet demand, and those will always be insecure.
Sure, client-side run-once apps benefit greatly from safe languages, but there will always be a need for unsafe languages. To that end, I'd advocate pushing for a higher respect for 'safe' function use in code, by changing developers' attitudes from 'ah, what the hell, I'm too lazy to use the safe func' to 'if a safe func exists, use it or die'. Or writing safe wrappers around unsafe funcs, and having a policy saying you simply are not allowed to side-step the safe prototype.
I really can't imagine justifying the additional cost & performance hit of safe platforms with the laziness and 'infallibility' of coders. If I were the one with my hands on the purse strings, I'd understand that bugs will *always* exist (so as to avoid putting all my proverbial eggs in one code base, which is probably the biggest problem facing our economy's reliance on software stability and security), and make sure I have developers who understand how to minimize the possibility of producing code with these types of errors.
Also, don't forget... a program without a buffer overflow bug is still exploitable through a veritable infinitum of poor design choices, so are the benefits of nearly eliminating the possibility of buffer overflow bugs and memory-related exploits truly worth the performance hit for critical systems with high load and a perf demand that almost always outstrips perf supply?
.. which is all a long-winded way of saying: sure, your approach works for many cases, but it sounds like yet another case of someone with a hammer trying to make everything look like a nail. :)
These are the standard excuses I've been hearing for years. The number of buffer-overflow-related security issues has been a fact of life during all these years, so obviously relying on programmer discipline/skill won't get us anywhere. It has never been a solution and it will never be a solution.
The JVM indeed has the problem that it is implemented in C, which involves the risk of it having buffer overflows (and other C-related bugs). However, you can't introduce new buffer overflows by programming in Java, and the JVM could theoretically be reimplemented in a language that has a bit more elaborate memory protection.
Indeed, bug-free programs don't exist. However, minimizing the damage does help. The wide majority of programs implemented in C that we depend on might just as well be implemented in a safer language. This includes large parts of the kernel and device drivers. I'd say it is time to give a bit more priority to security. In principle, if security is of any concern, the use of C should be avoided since it involves unnecessary risks.
The reason C is still being used is programmer laziness. They are too lazy to learn/invent better languages (just as they are too lazy to adopt the code practices you suggest). The buffer overflow issue is so stupid; a simple run-time check completely eliminates it.
My suggestion simply is to stop developing in C. Stop new development in C, keep maintaining the old stuff and use more modern languages for new development. It can't be that hard; we can start with the smaller stuff like bind, openssh or telnet. I've seen well-performing servers of all kinds written in Java. If needed, there are more lightweight languages than Java that are still secure. Most of the stuff that runs on top of a Linux kernel currently implemented in C (and therefore a security risk) could be implemented in a safer language.
Not for the embedded market, and consoles! I'm a console (games) programmer. All those little checks *ADD* up. Checks should be at the *programmer's discretion*, not at the whim of a compiler or language.
> to the problem and modern languages build these checks into the run-time environment, where they belong.
And you're completely ignoring the cost (performance) of doing so! For a debug build, yes, I love having extra checks, but for a full optimization build, NO. The performance cost is too high. There is a reason you have them in a debug build: to write your code properly (robustly) the first time, so you don't need the extra checking later.
> To eliminate buffer overflows, getting rid of the C legacy is the only solution.
Now you're trolling. You can write safe code in any language. Likewise you can write bad code in any language. Languages are *not* the silver bullet to the problem, but you for whatever reason think they are.
It would be possible to find occurrences of known common mistakes in different packages automatically, and to find instances of a known error throughout a large project. There is work on doing this with the Linux kernel, where a group has turned up hundreds of potentially unsafe usages (mostly DoS-type, mostly in drivers).
This particular problem is not actually a buffer overflow at all, it turns out, and is more subtle.
There are commercial programs which can find this sort of error, assuming it actually happens while running an instrumented version of the code. These programs are in the expensive commercial development tool price range (order $10k), though, and very difficult to write.
Another possibility is to build and run the program under a virtual machine which checks these things; for example, if you turn your C into Java and run it, it shouldn't throw any out-of-bounds exceptions, or it indicates that your C wasn't checking array bounds. There isn't a C virtual machine (that I know of), though.
It'd be next to impossible to detect a second free() of some chunk of memory because machines are horrible at guessing what a programmer meant to do, rather than what they did. There are many tools available to diagnose memory problems, but they are slow and must be monitoring the code while it's running to do any good; to boot, they probably won't find errors under normal program use. Besides, isn't this the C programmer's creed (the computer does what I tell it to do)? Well, zlib is doing exactly what the C programmer told it to do.
Insure (was Insight) [parasoft.com] does a tremendous job of this. I've been using Insure (used to be called Insight) for many years, and it can find all sorts of runtime problems. Of course, doing all the runtime checks is slow and takes time, but the savings in development time are awesome.
I am just a satisfied customer, and have no relationship with the company.
Did this get released early? I got the RedHat advisory, but there is no source update at zlib.org, the CVE page at Mitre is empty and there is nothing from CERT yet.
What gives? Does anyone know where a patch is available?
You could write a script using 'nm' and 'grep' -- once you identify some functions in zlib. If they have a common prefix, search on that.
Of course, if you stripped the symbols out of the binaries, then the function names won't be there for nm to find and you're quite screwed -- basically you'd have to go grab the sources again and scan the Makefiles and perhaps the code itself for zlib references.
You're absolutely right -- the only thing that a binary download will fix is packages using the libz.so shared library. Most software seems to link with the library statically. This is a huge problem.
I'm currently running this command against my /usr/src directory, just to get a preliminary list of packages to recompile:
Assuming you've still got your source tree intact since you compiled, this should find all makefiles which reference the zip library. If you've deleted any source directories, you will have to untar them and run configure again to build the makefiles.
There are the right ways, then there is the easy, 99% effective way. The easy way is to search for very specific error message strings, which are sort of a fingerprint for most software. I compiled zlib then used "strings libz.a" to find these error messages:
too many length or distance symbols
invalid literal/length code
A quick grep for one of those two strings reveals quite a number of statically linked versions of zlib in /usr/bin.
The vulnerability results from a programming error that causes segments of dynamically allocated memory to be released more than once (aka. "double-freed"). Specifically, when inftrees.c:huft_build() encounters the crafted data, it returns an unexpected Z_MEM_ERROR to inftrees.c:inflate_trees_dynamic(). When a subsequent call is made to infblock.c:inflate_blocks(), the inflate_blocks function tries to free an internal data structure a second time.
Because this vulnerability interferes with the proper allocation and de-allocation of dynamic memory, it may be possible for an attacker to influence the operation of programs that include zlib. In most circumstances, this influence will be limited to denial of service or information leakage, but it is theoretically possible for an attacker to insert arbitrary code into a running program. This code would be executed with the permissions of the vulnerable program.
Duplicate deletions are not the same as buffer overflows and are nowhere near as easy to exploit. In fact, I have _never_ seen a duplicate-deletion exploitation other than a simple DoS. Not to mention the fact that it requires a special series of calls from the calling program.
In summary, the world hasn't come to an end, and Free Software isn't all of a sudden as vulnerable as closed-source software. Put the pills down and relax :)
If the application has the "wrong" pattern of allocations and frees, it may be exploitable. One such pattern is the freeing of x, an allocation -- which gets x-(sizeof void *) -- and then the subsequent double-freeing of x.
I think the traceroute hack is an example of freeing garbage, not a double free(). The garbage being freed happens to be part of the command line, which is how the hacker injected his /bin/sh. The traceroute exploit description did not give full details, but I don't see how it could be possible to modify ((int*)p)-1 using the zlib vulnerability. Remember that all mallocs are 8-byte aligned and have a minimum size of 16 (with overhead and internal fragmentation).
What about closed source projects that statically link these sorts of things?
Quake 3, for example, statically links zlib in to deal with decompressing pk3 (zip) files. If the client auto-download is on, pk3 files can be downloaded from the quake server.
I don't mean to be an alarmist, but this is something that should be considered. Zlib is linked into Quake 3 on all platforms.
One of the problems I have with some Open Source software is that it's written by people who don't pay attention to such details as error-catching and buffer overflows. They all have great ideas to do things and always leave the minute detail of error-checking for some later date.
zlib is such a simple library compared to most software libs out there. The source is approximately 9,000 lines of code (remarks included) and exports a handful of functions. And yet a buffer-overflow situation exists. It's unfortunate that a lot of projects link zlib in and some of them statically! This really has the potential to be a disaster.
And you know this how? Thank you for making an unbacked assertion about the behavior of open-source vs. closed-source programmers. If you said it, it must be true.
Really, the fact of the matter is that programmers are human. This means that, every so often, they get lazy about checking something or possibly just make a mistake. This is true of open-source and closed-source programmers alike.
Now, having said that, as a user of open source software, you at least have the opportunity to check other people's mistakes. You can do your own personal code review and find those dumb mistakes somebody made. On the other hand, that is not possible with closed source software.
...Open Source software is that they're written by people who don't pay attention.... They all have great ideas to do things and always leave the minute detail of error-checking for some later date.
I hate to break the news to you, but the same thing happens frequently in the commercial world. Management asks for a prototype, provides a negative number of mandays to complete it in. Development hustles through a prototype -- no error checking, poor design, etc. Management loves the results and provides Development with even less time to make it into a final product. Just do the minimum amount of work. We can go back and fix it later. Later never comes. Quality is often ignored for speed.
Also keep in mind that many of the people who write Open Source software also have a "day job" where they are creating commercial software (or important software for in-house use). My guess is that their coding practices vary little between work and hobby.
Unfortunately the same problem happens with closed-source programmers so that is not a solution. I tracked several of these things down at Sun (in Olit and Xlib) and it was not fun, and they seemed to exist despite the fact that Sun's source code was closed. And in my current position I have made hundreds of such mistakes, the fact that the software is closed-source did not somehow magically change my brain so I did not do it! I also write open-source software and I would say the proportion of stupid mistakes I make in both is about equal.
Maybe this will finally kill all those apps that statically link zlib, or include their own version of the source. Code reuse: good. Dynamic linking: good. Library versions: good. Cut-and-paste, static linking, namespace collisions, recompiling: bad.
Are you joking? Nobody needs to exploit this from an RPM because RPMs are almost always installed by root. If you want to trojan an RPM, just add your desired commands to the install scripts.
The article says that the kernel is affected - does it statically link zlib, in which case a recompile is in order, or do I just need to upgrade the zlib package?
The only dynamic linking the kernel uses is modules, which aren't used for providing library routines like zlib. The kernel does not link .so files. The code is almost certainly cut-and-pasted into the ppp compression code somewhere.
One place the kernel uses zlib is to compress the kernel boot image. The kernel image then gets decompressed during bootup. So, from the standpoint of "the kernel uses zlib", the kernel is affected.
There is, however, no new vulnerability introduced as far as I can tell. To attack the zlib-based decompression that the kernel performs, an attacker would need to modify the compressed kernel image that is used to boot the machine. I can think of far more fruitful ways to compromise a machine by modifying the kernel image than by trying to dork the zlib decompression that happens before the kernel even runs.
Another place the kernel uses ZLib is when mounting compressed filesystems. (Compressed RAM disks and zisofs come to mind.) In this case, you're asking a live kernel to decompress arbitrary data. These are only issues when mounting untrusted media. If you made the media yourself, then your only risk is that corrupted media might cause a kernel oops. And if you don't have cramdisk and zisofs compiled in, you're safe.
Other places the kernel seems to use ZLib (from a cursory scan of the source -- there may be others):
jffs2 -- Journalling Flash Filesystem version 2
ppp -- used for the ppp_deflate option
In any case, the kernel is a statically linked entity, with a minor exception for modules. ZLib is not a module, therefore to upgrade ZLib in the kernel, you'll need to rebuild the kernel. And it doesn't appear to be as easy as just upgrading ZLib and rebuilding the kernel: the kernel has multiple modified copies of ZLib in its source tree. I'd wait for an official kernel patch.
Is it just me, or have there been a really huge amount of security issues with Free/Open Source software this year?
It just seems like there's a new hole (or two) every week. Let's see, we've had openssh, zlib, php, mod_ssl, cvs, cups, rsync, exim, ncurses, glibc and more, just since January. We've still got two-thirds of the year to go. Anyone want to make bets on what other projects will get hit? I think we're going to see problems with XFree86, samba, and apache.
So, my question is this: Do you think that this is simply a bad time for FS/OSS security? Are we at the threshold where there are enough eyes on the code to locate these kinds of bugs? Or is the quality of FS/OSS declining?
Is it just me, or have there been a really huge amount of security issues with Free/Open Source software this year?
Yes; perhaps this is due to the fact that FS/OSS is used by more developers/users. More eyeballs and more code to exercise libraries mean more bugs are discovered. As mentioned, this bug is (relatively) benign, and has already been fixed in the source. So I wouldn't necessarily say that FS/OSS is getting "more buggy", any more than commercial software, whose bugs don't leave the company if users don't discover them first.
If the strength of Linux is closing the barn doors after the horses have run amok, I think I'll investigate BSD, where they, you know, actively audit the code.
Your analogy doesn't make much sense. I would say "the horses have run amok" when there are actually boxes being compromised using this hole. At the moment, this is a purely theoretical exploit. This is more like discovering the barn door was open, and closing it before any horses got out.
Your other point is a good one, though. Anyone interested in secure servers would do well to investigate OpenBSD, as they have spent huge amounts of time auditing their code.
Actually, I'd say this is more like finding out that the vast majority of barn doors can be opened by knocking in a certain pattern, and knowing for a damn fact that most farmers won't bother to install the new latch that's being given away for free.
My point mainly is that while having armies of college students coding is wonderful in some respects, it's horrible in other respects.
I agree that discovering that the barn door could be opened before it was installed is the best solution, however just in case something is found, you have to have a half-decent backup plan.
This is something that I believe Windows XP got right - by forcing users to download the latest patches, it means that the clueless are no longer vulnerable, and those who know enough to turn the updates off are either keeping up with the patches themselves, or basically have nobody to blame when Code Red III comes along...
Well, there might be a lot of these buffer overflow (or double-free, in this case) bugs this year in FS/OSS software, but you have to keep in mind that most of these bug postings are *theoretical* security flaws. Many of these don't even have an exploit coupled with them; it's just that since people can go through the code, it's easier to see that a flaw that creates a potential security breach has been discovered.
Compare that with most of the closed-source security flaws this year, which more had to do with actual exploited vulnerabilities than with potential theoretical vulnerabilities.
I guess the difference is that in a lot of these FS/OSS projects the potential problems are announced and stomped out quicker than they can become actual exploited vulnerabilities.
A double free that creates a potential remote hole (and more likely a potential DoS) isn't good, but if we weight security flaws, I would weight that far below something like Code Red or Sircam which is being actively exploited.
"...most of these bug postings are *theoretical* security flaws."
L0pht Heavy Industries: "Making the theoretical practical since 1992."
The fact is, a theoretical flaw is still a flaw, and it's still relevant to my original question.
"Many of these don't even have an exploit coupled with them..."
...yet.
"I guess the difference is that in a lot of these FS/OSS projects the potential problems are announced and stomped out quicker than they can become actual exploited vulnerabilities."
This is something that's been bugging me as well. It seems that more security bugs are kept quiet until a fix is prepared. Personally, I'd much rather know ASAP, so I can at least disable or filter the vulnerable service. I think that as soon as a vuln. is discovered & discussed on any public list, the more likely it will be that there is an exploit available. In some cases (WU-FTPd globbing bug comes to mind), the vuln. was known about for some time before the fix was available. In this case, there was even an exploit available. The bug was discovered in July, a security advisory was being drafted as early as September, and the fix didn't get released until December.
It's mostly because a certain large vendor's publicity machine keeps pushing any bug report that impacts open source in the slightest as a big thing and feeding it to the press.
Dirty game really, but that's how they play it.
This is why news stories keep starting with "Linux blah" and then turning into "everyone blah".
And inversely, any bug report for a certain large vendor starts out being an "internet flaw" or a "web browser/email bug" and eventually is clarified to affect only a specific vendor's product.
In this case, it's just a lame attempt by Slashdot to be "provocative". However, there is some truth in saying that it's a Linux (actually, GNU/Linux and Linux kernel) problem.
As other posters noticed, GNU libc defaults to an unsafe but faster free() implementation that can damage the heap if called more than once on the same pointer. Other operating systems are said to default to a safer malloc() implementation.
The question is, if, and to which extent, potential extra security justifies real trade-offs in speed.
Using shared libraries is not free of trade-offs. Position independent code (PIC) used in them is slower on many architectures, including x86. However, shared libraries are preferred because, should anything like this bug in zlib be found, the only package to replace will be the faulty library (and the "offenders" linking to it statically, like CVS and rsync).
But the measure that could have prevented this bug from being an issue on GNU/Linux has not been taken, on the same grounds of speed! Yet I think that a lot of programs suffer (in terms of speed) more from slower PIC code than they would from a safer malloc(). No, I'm not suggesting going back to static libraries. Rather, we should acknowledge that there is a price tag attached to peace of mind.
Linux kernel developers should have examined the zlib code for compatibility with Linux kernel-space memory management.
It's because buffer overflows are a hot topic now. Microsoft made them famous due to the exploits. Now everyone is quick to point them out and post them on their websites. I'd almost bet buffer overflow news in your favorite OS would get just as many hits as your favorite teen popstar naked. God knows I've done more searches this year for Red Hat and Windows bug errata than I have for Britney Spears' boobs.
As for the quality of free and open source software, well, I've never understood why people think it is any better than commercial software. I've been a professional programmer for 10 years and I've seen whack commercial code, whack open source code, and I'm pretty sure I've written some whack code myself. The license doesn't have much to do with it, except for having NDAs keeping me from discussing some commercial code. Having open source visible on the bug watch list doesn't hurt much because fixes are usually available. It's when people don't pay attention to the fixes that the problem gets out of hand. Remember, the fix for the exploit that Code Red used was out a few weeks before Code Red. Now if we see this zlib bug take down a big chunk of the Linux community in a few weeks, then it will be a bad thing because we didn't pay attention.
Thankfully the 9000 lines of regularly copied and pasted source commonly known as zlib do not infest the MacOS universe, or Mac servers at least.
The MacOS running WebStar as a server has never been exploited.
In fact in the entire securityfocus (bugtraq) database history there has never been a Mac exploited over the internet remotely.
That is why the US Army gave up on MS IIS and got a Mac with WebStar.
I am not talking about the BSD-derived MacOS X (which has already had a couple of exploits); I am talking about Mac OS 9 and earlier.
Why is it hack-proof? These reasons:
1> No command shell. No shell means no way to hook or intercept the flow of control with the various shell-oriented tricks found in Unix or NT.
2> No root user. All Mac developers know their code is always running as root. Nothing is higher (except undocumented microkernel stuff where you pass Gary Davidian's birthday into certain registers and make a special call). By always being root, there is no false sense of security.
3> Pascal strings. ANSI C strings are the number one way people exploit Linux and Wintel boxes. The Mac has historically avoided C strings in most of its OS. In fact even its ROMs originally used Pascal strings. As you know, Pascal strings are faster than C strings (because they carry the length delimiter up front and do not have to endlessly hunt for the NUL terminator), but the side effect is fewer buffer exploits.
4> Stack return address positioned in a safer location than on Intel. Buffer exploits take advantage of loser programmers' lack of string length checking and clobber the return address to run their exploit code instead. The Mac places the return address in front of where the buffer would overrun. Much safer.
5> Macs running WebStar have the ability to only run CGIs placed in the correct directory location and correctly file-typed.
6> Macs never run code merely based on how a file is named. ".exe" suffixes mean nothing. For example, the file type is 4 characters of user-invisible attributes, along with many other invisible attributes, but these 4 bytes cannot be set by most tool-oriented utilities that work with data files. For example, file copy utilities preserve launchable file types, but JPEG, MPEG, HTML, TXT etc. oriented tools are physically incapable of creating an executable file: the file type is never set to executable for the hacker's needs. In fact it's even more secure than that. A Mac cannot run a program unless it has TWO files. The second file is an invisible file associated with the data fork file and is called a resource fork. EVERY Mac program has a resource fork file containing launch information. It needs to be present. Typically JPEG, HTML, MPEG, TXT, ZIP, C, etc. are merely data files and lack resource fork files, and even if they had them they would lack launch information. But the best part is that Mac web programs and server tools usually do not create files with resource forks. TOTAL security.
7> There are fewer Macs, though there are huge cash prizes for cracking into a MacOS-based WebStar server. Fewer Macs means less hacker interest, but there are millions of Macs sold, and some of the most skilled programmers are well versed in systems-level Mac engineering and know of the cash prizes, so it's a moot point; but perhaps Macs are never cracked because there appear to be fewer of them. (Many Macs pretend they are Unix and give false headers to ftp, http, finger, etc. requests to keep up the illusion.)
8> MacOS source not available traditionally, except within Apple; similar to Microsoft's source availability to its summer interns and such, source is rare for MacOS. This makes it hard to look for programming mistakes, but I feel the restricted source access is not the main reason the MacOS has never been remotely broken into and exploited.
Sure, a fool can install freeware and shareware server tools and insecure 3rd-party add-on tools for e-commerce, but a Mac (MacOS 9) running WebStar is the most secure web server possible, and WebStar offers many services as is.
I think it's quite amusing that there are over 200 or 300 known vulnerabilities in RedHat over the years and not one MacOS remote exploit. And now with zlib, even more holes can be found in Linux.
Well, it won't prevent the DoS aspect - but, from the malloc manpage:
Recent versions of Linux libc (later than 5.4.23) and GNU libc (2.x) include a malloc implementation which is tunable via environment variables. When MALLOC_CHECK_ is set, a special (less efficient) implementation is used which is designed to be tolerant against simple errors, such as double calls of free() with the same argument, or overruns of a single byte (off-by-one bugs). Not all such errors can be protected against, however, and memory leaks can result. If MALLOC_CHECK_ is set to 0, any detected heap corruption is silently ignored; if set to 1, a diagnostic is printed on stderr; if set to 2, abort() is called immediately. This can be useful because otherwise a crash may happen much later, and the true cause of the problem is then very hard to track down.
Seems worth it while we all pore through the symbol tables of our static binaries (and recompile the stripped ones. =( )
On another note, I've always regarded security bulletins as a one-way process... For example, I couldn't find a way to tell RedHat they'd omitted this (seemingly important?) reminder. Any thoughts about this? (Admittedly I didn't look very hard for very long.)
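For the curious, the MALLOC_CHECK_ behaviour described in the quoted manpage text can be demonstrated with a deliberate double free. This is a throwaway sketch (file names are mine); note that current glibc's default allocator detects and aborts on a double free even without the variable set.

```shell
# Compile a deliberate double free and run it under MALLOC_CHECK_,
# so glibc diagnoses the error rather than silently corrupting the heap.
cat > doublefree.c <<'EOF'
#include <stdlib.h>
int main(void)
{
    char *p = malloc(16);
    free(p);
    free(p);   /* same pointer freed twice -- the zlib bug class */
    return 0;
}
EOF
cc -o doublefree doublefree.c
if MALLOC_CHECK_=2 ./doublefree 2>/dev/null; then
    echo "double free went undetected"
else
    echo "double free caught"
fi
```

With MALLOC_CHECK_=2 the process aborts at the second free() instead of limping on with a corrupted heap, which is exactly the failure mode the advisory is worried about.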
by Anonymous Coward writes:
on Monday March 11, 2002 @04:48PM (#3144696)
OpenSSH uses zlib - if you happen to compile OpenSSH statically with zlib (I think that's the default), that's one more upgrade cycle after the latest OpenSSH 3.0.2p1 bug... :(
This is why you ALWAYS set a pointer to NULL after freeing it, even if it's "totally unnecessary" because you're about to free the structure holding the pointer.
This doesn't prevent attempts to free the previously freed pointer, but that will generally do a lot less damage than freeing a real malloc'd address. And during development it's trivial to add an assertion checking for a NULL pointer before any free().
The assertion being tripped would, however, be a useful diagnostic if you prefer to know when something is being freed twice. For instance, if there exist two separate pointers to an object, which itself contains a pointer that gets set to NULL when freed, a well-placed assertion would tell you if you've got a logical error -- trying to deallocate a dangling pointer, basically.
Simply clearing it to NULL means that such won't be found, which may hide related errors. Letting it be freed twice would simply be sloppy and asking for trouble.
Calling free() on a NULL pointer is a no-op. No check or assertion is needed.
The assertion lets you catch the logic error that led to the second free during debugging. Presumably, your code path wasn't expecting the pointer to already be free at that point; otherwise, you would have designed it to handle that case already.
Then, in production code, if you do take that path, you'll get the harmless no-op free(0). (You do build production with -DNDEBUG, right?)
The packages affected by the double-free() libz bug can be divided into
two categories:
1) packages that link dynamically against the system-provided
compression library. These packages get fixed automatically with
the update of the libz package as described in SuSE-SA:2002:010.
Please note that the processes will continue to use the old
version of the libz.so shared library if they have not been
restarted after the libz package upgrade.
2) packages that contain the compression library in their own
source distribution. These packages need an individual bugfix.
We have prepared update packages for this software that can be
downloaded from the locations as shown below.
The following is a list of the packages in category 2):
gpg
rsync
cvs
rrdtool
freeamp
netscape
vnc
kernel
While it's annoying to discover a buffer-overflow problem (or whatever; I haven't examined the report closely) in Linux and other OSS, if you ever wanted confirmation that Linux is being taken seriously by the public at large, it's that c|net thought a Linux bug worthy of reporting. Has that ever happened before?
If you have to remotely upgrade the zlib library, be *very careful*.
Because SSH/OpenSSH depends on zlib, if you replace your current libz.so file with another version whose API has changed slightly, your SSH server won't work any more.
So if you don't have access to the console, open a classical 'telnet' port for a few minutes, just during the upgrade. Once you've checked that SSH is still ok, you can remove the telnet daemon.
If SSH doesn't work any more after the zlib upgrade, recompile SSH.
So if you don't have access to the console, open a classical 'telnet' port for a few minutes, just during the upgrade. Once you've checked that SSH is still ok, you can remove the telnet daemon.
Since the SSH server forks after you've connected, you can safely stop the server while connected via SSH. You never need to use telnet. Just make sure that you can still connect before disconnecting from the original SSH connection.
Like most recent security holes in linux software, this one would be unexploitable in a modern safe language. (In fact it would be *impossible* to make this error in a garbage-collected language!)
The typical response I hear to this kind of comment is that "high level languages are inefficient". (I don't believe this is true, but most other people here do.) But whatever, let's pretend they are.
Now, what kind of crazy world do we live in where we value performance more than correctness (security)?? We are seeing more and more security holes as we try to write bigger and bigger packages in C. Why do we accept this? Who here really cares more about the performance of zlib than the time it takes for them to patch all of their statically-linked software, and their risk of being rooted until they do? I sure don't.
Forget about all this "coding practices" stuff. It simply takes too much effort to produce bug-free code in C. The OpenBSD people, kings of code review, just had an exploitable bug in sshd! While we need to use C for some tasks (i.e., most parts of the kernel), I think we are seriously ill-equipped to do this for most applications (as evidenced by the high number of simple errors made, and only sometimes caught).
If we simply wrote our software in high level languages, we would automatically rule out the largest classes of security holes, which would give us a lot more time to work on more important things, like high level architecture review and optimizations. I think we'd end up with a better system. So what's keeping us?
Does anybody know where this is used and whether I should do a rebuild with the current 1.1.4 version?
In addition to gs, this seems to be the only software package that contains zlib in it. I found it because there is a /usr/X11R6/lib/libz.a on my Linux system.
No, it's not the site, it's the poster who is suffering from the /. effect (actually, syndrome). The /. syndrome is: post as fast as possible and get tons of karma from moderators who don't check the links.
This is creepy. I was wondering about possible vulnerabilities in zlib earlier today for a completely different reason.
According to this post [slashdot.org] in a different article, a recent patch to Internet Explorer disables support for gzip encoding.
Do you think, perhaps, Microsoft noticed the bug, shut off the support (since presumably they're just using zlib also), and just didn't say anything?
Of course, that is just paranoia. They could have plenty of other reasons.
You have a point. Maybe this is the intention behind MS's C#: to avoid their own problems with just this issue. (I am of course joking; we all know world domination through IT enslavement is the real goal.)
But a point you may be missing is that MANY of these languages are built on C, and as a result are, in effect, inherently unsafe as well, unless they are self-hosting.
Yet why in the name of all that is good do they not realize that C is an inherently unsafe language. There are some really good free alternatives to C, so why the heck are those numbskulls not using them?
Because most distributions of GNU/Linux operating systems don't install any compiled languages but C and C++. (There is no language called "C/C++".)
Because popular libraries have C bindings.
Because compilers can optimize C code for machines with limited resources. (Java technology is a memory hog partly because of the 16-byte overhead of java.lang.Object.)
Because people think in "step 1, step 2, step 3" of an algorithm, rather than in functional style (partially ruling out CL, ML, and Haskell).
Take your choice: Eiffel, Ada, Modula-3
Which of these languages is in the default devel install in all the major distros and can link to libraries' C bindings?
I think it does hit FreeBSD. FreeBSD uses zlib v1.1.3; v1.1.4 has the fix (http://www.gzip.org/zlib/). No security announcement has arrived in my mailbox yet, but I expect one soon.
I guess it is only partially immune: http://www.kb.cert.org/vuls/id/368819. Reading the security list, I gather that it is only a problem with regard to running Linux apps. Time to give FreeBSD's free() a hug. :)
Some glory hound at Redhat found it. I doubt he fired up a Windows machine and tested it before he realized he could get his name in the news.
I bet you are one of the ppl who instantly criticize when they hear that M$ (or insert fav evil corp) knew about a vulnerability and kept it quiet for a month.
Not that quick... this was a coordinated release of the bug Owen Taylor at Red Hat found by investigating a bug against Gnome. This wasn't found yesterday, vendors have had time to create and test packages.
Linux: 1. Flaw Found 2. Flaw fixed for current and previous distributions.
Windows 1. Flaw found 2. Flaw claimed to be feature 3. Large Corporation bothered by flaw 4. Flaw patch development begins. 5. Beta release of flaw pack 6. Beta 2 release of flaw pack 7. RC1 of flaw pack 8. Flaw pack released for new OS products 9. 1 month later flaw pack released for most widely used OS version.
And it should be the law: If you use the word `paradigm' without knowing
what the dictionary says it means, you go to jail. No exceptions.
-- David Jones
more info please (Score:2, Informative)
could somebody point out where in the source this is? the article was fluff.
Re:more info please (Score:3, Informative)
Re:more info please (Score:5, Informative)
Re:more info please (Score:4, Informative)
Mitre [mitre.org]
Gnome [gnome.org]
The Mitre page says it's still under review.
Re:more info please (Score:2, Informative)
It's not a buffer overrun, it's a double free: much harder to exploit, but still possible, so you should patch ASAP.
Re:more info please (Score:2, Funny)
double free as in beer, or double free as in speech? Is the double free covered by the double GPL?
Re:more info please (Score:2, Insightful)
Re:more info please (Score:3, Informative)
Um...because that's the way nearly every package that uses zlib links it? For instance, OpenSSH AFAIK will only statically link it (so if you rebuilt OpenSSH last week to fix this hole [slashdot.org], you get to rebuild it again :-) ).
(I'm rebuilding OpenSSH on the work machines right now...I checked to see if it would link to libz.so, but it seems to only want libz.a.)
more information - better article (Score:5, Informative)
Some More Links (Score:5, Informative)
Linux only? (Score:5, Interesting)
It's not a problem in zlib per se (Score:5, Insightful)
So, you should download the patched zlib, but you should also email the glibc maintainers [mailto] and demand that they implement a sane, error-checking malloc()/free() system. Linux's current allocation model is a disaster waiting to happen.
Waiting to happen? (Score:2)
Count based on advisory at SecurityFocus (Score:2)
The scary thing is, I may have installed other apps that have zlib statically compiled that I don't even know about because they aren't part of the default vendor distribution.
Re:It's not a problem in zlib per se (Score:5, Informative)
If you want this behavior, you can get it easily on Linux/glibc. From the malloc(3) manual page:
Re:That's better (Score:3, Insightful)
Sure they could. The BSD license lets anyone do whatever they want, including relicensing the code as GPL. There is already BSD code in the Linux kernel.
Re:That's better (Score:2, Insightful)
VM in Linux is a mess: 2 (very) different VM systems, which caused quite a few utilities to be re-written. All of the BSDs have one each, and they optimize their VMs quite well. FreeBSD has a very robust one that is a bit slower than Linux's, but seems to be more stable.
OpenBSD just got a stateful firewall with the latest version (3.0). Prior to that, they were using Darren Reed's IPF, but due to a licensing fiasco (and petty name-calling), IPF was yanked and PF was created. I use PF in my firewall at home, and I am quite happy. I can't wait until 3.1 (not willing to run -current) to see how much more robust it can get. =)
Re:It's not a problem in zlib per se (Score:3, Informative)
But what is the use of zlib in the kernel anyway? Just to uncompress the vmlinuz image before the kernel starts? If so it's not much of a vulnerability, if you can corrupt the vmlinuz file then you can control the whole system anyway.
Re:It's not a problem in zlib per se (Score:2, Interesting)
Re:It's not a problem in zlib per se (Score:4, Insightful)
mjl
Then there's still a problem in glibc malloc() (Score:5, Insightful)
Why Linux can't follow in the supposedly-inferior BSD's footsteps is beyond me.
Credit where credit is due... (Score:4, Interesting)
Dumb security question (Score:4, Interesting)
My question is this: How feasible would it be for someone to take a computer and have it do nothing but pattern-matching through all the source code in a typical Linux distribution, looking specifically for problem areas like these? Obviously we couldn't rely on it as a foolproof audit, but has something like this ever been considered?
Re:Dumb security question (Score:5, Interesting)
On the flip side, finding lots of memcpy's instead of strncpy's might help you find the 'dumb' overflow bugs, but one would hope those aren't the ones we're most concerned about.
Re:Dumb security question (Score:2, Interesting)
How feasible would it be for someone to take a computer and have it do nothing but pattern-matching through all the source code in a typical Linux distribution...
A pattern matcher theoretically could work (you would need some pretty bitchin' patterns though). However, what will definitely work is a meta-compiler which explicitly looks for these problems in code. The most straightforward way to implement this is to use some sort of logic programming language (e.g., SML, Prolog, etc.) to act as a code verifier, to prove (not just test, but actually prove in a mathematical sense) properties of the code. A meta-compiler could check that all mallocs are freed, all news deleted, all bounds checking is enforced, etc. It is a very intensive process, but because parsers can be written which convert code into a universally understandable syntax (independent of whitespace, coding style, etc.; note that this is done by all compilers), it could be done.
Of course, the most effective way to solve this problem is to ensure that code is reviewed by someone other than the author so that these kinds of problems can be fixed.
Re:Dumb security question (Score:4, Informative)
From this statement I assume you are not a programmer. Buffer overflows caused by using known unsafe library functions (e.g. strcpy, strcat, gets, etc.) can be handled by simple pattern matching but actually investigating the code to make sure every memory/array access does not go out of bounds is not a simple pattern matching problem.
However some automated techniques have been developed to discover buffer overflows and similar errors in a generic manner. The most significant efforts I have seen are the Stanford Meta-level Compilation Project [stanford.edu] and the
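The crude pattern-matching pass discussed above can be approximated with grep over the classic unsafe string functions. This is illustrative only (the demo source tree is mine): expect plenty of false positives, and it is no substitute for a real audit or a meta-compilation tool.

```shell
# Create a tiny source tree with one deliberately risky call, then
# flag calls to the well-known unsafe string functions in it.
mkdir -p demo-src
cat > demo-src/risky.c <<'EOF'
#include <string.h>
void f(char *dst, const char *src) { strcpy(dst, src); }
EOF
grep -rnE '\b(strcpy|strcat|sprintf|gets)\s*\(' demo-src
```

Run against a real distribution's source tree, a scan like this finds the "dumb" overflow candidates; it cannot, of course, prove that any given memcpy or array access is in bounds.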
Re:Dumb security question (Score:2)
--Ben
Can't resist (Score:2)
while (1) {
    if ((x = *(int *)NULL))
        break;
}
Re:Dumb security question (Score:3, Insightful)
Code reviews help, testing helps, good programming helps. But none of these practices has succeeded in eliminating this type of bug. It is just not good enough; witness a zillion bugs and security breaches on all major OSes (including the ones deemed secure, and yes, I am talking about BSD) throughout the last decade. These OSes only differ in how the issues are dealt with. The occurrence of the issue is a fact of life for all of them.
There's no C developer who can claim his program is completely free of buffer overflows (many foolishly do, however). There may be some undetected errors in the program, or the program may depend on third-party code that contains bugs (e.g. the compiler or one of the standard libraries). Most likely bugs in all these categories are present.
Automatic checks are indeed the solution to the problem, and modern languages build these checks into the run-time environment, where they belong. Buffer overflows are a non-issue in Java, for instance. The exception to this is native code and the JVM itself (written in C).
To eliminate buffer overflows, getting rid of the C legacy is the only solution. Java is probably too controversial as an alternative right now (though arguably it is quite up to the task as far as server side development is concerned) but there are other alternatives. Rebuilding serverside services like ftp, dns, ssh, smtp, pop, etc. is mandatory since each of these services has widely used C implementations that are frequently plagued by buffer overflows. The only way to guarantee that there are none left is to reimplement them.
Re:Dumb security question (Score:2)
Education over technological safeguards (I could just see the next generation of programmers: 'buffer what?')
All of this is notwithstanding the question: is a buffer overflow error in a platform that has 100% deployment better than lots of programs here and there getting their native buffer overflow bugs discovered? Interesting question unto itself
Also, remember that performance requirements of core services (ftp, dns) have always outstripped performance supply. There will always be a need for languages that are 'closer to the CPU', in order to extract all possible performance to meet demand, and those will always be insecure.
Sure, client-side run-once apps benefit greatly from safe languages, but there will always be a need for unsafe languages. To that end, I'd advocate pushing for a higher respect for 'safe' function use in code, by changing developers' attitudes from 'ah, what the hell, I'm too lazy to use the safe func' to 'if a safe func exists, use it or die'. Or writing safe wrappers to unsafe funcs, and having a policy saying you simply are not allowed to side-step the safe prototype.
I really can't imagine justifying the additional cost & performance hit of safe platforms with the laziness and 'infallibility' of coders. If I were the one with my hands on the purse strings, I'd understand that bugs will *always* exist (so as to avoid putting all my proverbial eggs in one code base, which is probably the biggest problem facing our economy's reliance on software stability and security), and make sure I have developers that understand how to minimize the possibility of producing code with these types of errors.
Also, don't forget
... which is all a long-winded way of saying: sure, your approach works for many cases, but it sounds like yet another case of someone with a hammer trying to make everything look like a nail.
Re:Dumb security question (Score:2)
The JVM indeed has the problem that it is implemented in C, which involves the risk of it having buffer overflows (and other C-related bugs). However, you can't introduce new buffer overflows by programming in Java, and the JVM could theoretically be reimplemented in a language that has a bit more elaborate memory protection.
Indeed, bug-free programs don't exist. However, minimizing the damage does help. The wide majority of programs implemented in C that we depend on might just as well be implemented in a safer language. This includes large parts of the kernel and device drivers. I'd say it is time to give a bit more priority to security. In principle, if security is of any concern, the use of C should be avoided since it involves unnecessary risks.
The reason C is still being used is programmer laziness. They are too lazy to learn/invent better languages (just like they are too lazy to adopt the code practices you suggest). The buffer overflow issue is so stupid, a simple run-time check completely eliminates it.
My suggestion simply is to stop developing in C. Stop new development in C, keep maintaining the old stuff, and use more modern languages for new development. It can't be that hard; we can start with the smaller stuff like bind, openssh or telnet. I've seen well-performing servers of all kinds written in Java. If needed, there are more lightweight languages than Java that are still secure. Most of the stuff that runs on top of a Linux kernel currently implemented in C (and therefore a security risk) could be reimplemented in a safer language.
Re:Dumb security question (Score:2, Interesting)
Not for the embedded market and consoles! I'm a console (games) programmer. All those little checks *ADD* up. Checks should be at the *programmer's discretion*, not at the whim of a compiler or language.
> to the problem and modern languages build these checks into the run-time environment, where they belong.
And you're completely ignoring the cost (performance) of doing so! For a debug build, yes, I love having extra checks, but for a full optimization build, NO. The performance cost is too high. There is a reason you have them in a debug build -- to write your code properly (robustly) the first time, so you don't need the extra checking later.
> To eliminate buffer overflows, getting rid of the C legacy is the only solution.
Now you're trolling. You can write safe code in any language. Likewise you can write bad code in any language. Languages are *not* the silver bullet to the problem, but you for whatever reason think they are.
Re:Dumb security question (Score:2)
This particular problem is not actually a buffer overflow at all, it turns out, and is more subtle.
There are commercial programs which can find this sort of error, assuming it actually happens while running an instrumented version of the code. These programs are in the expensive commercial development tool price range (on the order of $10k), though, and very difficult to write.
Another possibility is to build and run the program under a virtual machine which checks these things; for example, if you turn your C into Java and run it, it shouldn't throw any out-of-bounds exceptions, or it indicates that your C wasn't checking array bounds. There isn't a C virtual machine (that I know of), though.
Re:Dumb security question (Score:2)
Re:Dumb security question (Score:2)
I am just a satisfied customer, and have no relationship with the company.
advisory & zlib 1.1.4 url (Score:5, Informative)
http://www.zlib.org/advisory-2002-03-11.txt
zlib Compression Library Corrupts malloc Data Structures via Double Free
The new zlib (1.1.4) is at:
ftp://ftp.info-zip.org/pub/infozip/zlib/zlib-1.
Did this get released early? (Score:2)
Did this get released early? I got the RedHat advisory, but there is no source update at zlib.org, the CVE page at Mitre is empty and there is nothing from CERT yet.
What gives? Does anyone know where a patch is available?
Statically linked-implication (Score:3, Interesting)
As I'm not a programmer, what can I grep for to determine which things I've compiled from source are using a statically linked zlib?
Re:Statically linked-implication (Score:4, Informative)
Of course, if you stripped the symbols out of the binaries, then the function names won't be there for nm to find and you're quite screwed -- basically you'd have to go grab the sources again and scan the Makefiles and perhaps the code itself for zlib references.
Re:Statically linked-implication (Score:3, Informative)
I'm currently running this command against my /usr/src directory, just to get a preliminary list of packages to recompile:
grep '-lz' `find . -name 'Makefile'` > ~/zlib-dependencies
Assuming you've still got your source tree intact since you compiled, this should find all makefiles which reference the zip library. If you've deleted any source directories, you will have to untar them and run configure again to build the makefiles.
Re:Statically linked-implication (Score:3, Informative)
too many length or distance symbols
invalid literal/length code
A quick grep for one of those two strings reveals quite a number of statically linked versions of zlib in
No buffer overflow! (Score:3, Informative)
Duplicate deletions are not the same as buffer overflows and are nowhere near as easy to exploit. In fact, I have _never_ seen a duplicate-deletion exploit other than a simple DoS. Not to mention the fact that it requires a special series of calls from the calling program.
In summary, the world hasn't come to an end, and Free Software hasn't all of a sudden become as vulnerable as closed-source software. Put the pills down and relax.
Re:No buffer overflow! (Score:2)
Re:No buffer overflow! (Score:3, Informative)
traceroute provided an example of an exploit [geocrawler.com] for a double-free in a setuid program.
Traceroute hack not a double free (Score:3, Interesting)
Closed source projects that statically link (Score:2, Interesting)
Quake 3, for example, statically links zlib in to deal with decompressing pk3 (zip) files. If the client auto-download is on, pk3 files can be downloaded from the quake server.
I don't mean to be an alarmist, but this is something that should be considered. Zlib is linked into Quake 3 on all platforms.
Coding practice issue (Score:4, Interesting)
zlib is such a simple library compared to most software libs out there. The source is approximately 9,000 lines of code (comments included) and exports a handful of functions. And yet a buffer-overflow situation exists. It's unfortunate that a lot of projects link zlib in, and some of them statically! This really has the potential to be a disaster.
People
Oh really? (Score:2, Flamebait)
Really, the fact of the matter is that programmers are human. This means that, every so often, they get lazy about checking something or possibly just make a mistake. This is true of open source or closed source programmers.
Now, having said that, as a user of open source software, you at least have the opportunity to check other people's mistakes. You can do your own personal code review and find those dumb mistakes somebody made. On the other hand, that is not possible with closed source software.
Re:Coding practice issue (Score:2)
I hate to break the news to you, but the same thing happens frequently in the commercial world. Management asks for a prototype, provides a negative number of mandays to complete it in. Development hustles through a prototype -- no error checking, poor design, etc. Management loves the results and provides Development with even less time to make it into a final product. Just do the minimum amount of work. We can go back and fix it later. Later never comes. Quality is often ignored for speed.
Also keep in mind that many of the people who write Open Source software also have a "day job" where they are creating commercial software (or important software for in-house use). My guess is that their coding practices vary little between work and hobby.
Re:Coding practice issue (Score:3)
Static linking bad (Score:4, Insightful)
Re:Static linking bad (Score:2)
Re:Static linking bad (Score:2)
Can't find xxxxx.3.4.21.3-34.so: bad. Failed package dependencies: bad. LD_LIBRARY_PATH: bad. DLL hell: bad.
Re:Static linking bad (Score:2)
Should I upgrade my kernel? (Score:2)
Or is the article lying?
Re:Should I upgrade my kernel? (Score:4, Interesting)
Re:Should I upgrade my kernel? (Score:5, Informative)
One place kernel uses zlib is to compress the kernel boot image. The kernel image then gets decompressed during bootup. So, from the standpoint of "the kernel uses zlib", the kernel is affected. There is, however, no new vulnerability introduced as far as I can tell. To attack the zlib-based decompression that the kernel performs, an attacker would need to modify the compressed kernel image that is used to boot the machine. I can think of far more fruitful ways to compromise a machine by modifying the kernel image than by trying to dork the zlib decompression that happens before the kernel even runs.
Another place the kernel uses zlib is when mounting compressed filesystems. (Compressed RAM disks and zisofs come to mind.) In this case, you're asking a live kernel to decompress arbitrary data. These are only issues when mounting untrusted media. If you made the media yourself, then your only risk is that corrupted media might cause a kernel oops. And if you don't have compressed RAM disk support and zisofs compiled in, you're safe.
Other places the kernel seems to use zlib (from a cursory scan of the source -- there may be others):
In any case, the kernel is a statically linked entity, with a minor exception for modules. zlib is not a module, therefore to upgrade zlib in the kernel, you'll need to rebuild the kernel. And it doesn't appear to be as easy as just upgrading zlib and rebuilding: the kernel has multiple modified copies of zlib in its source tree. I'd wait for an official kernel patch.
--Joe
ouch (Score:5, Interesting)
It just seems like there's a new hole (or two) every week. Let's see, we've had openssh, zlib, php, mod_ssl, cvs, cups, rsync, exim, ncurses, glibc and more, just since January. We've still got two-thirds of the year to go. Anyone want to make bets on what other projects will get hit? I think we're going to see problems with XFree86, samba, and apache.
So, my question is this: Do you think that this is simply a bad time for FS/OSS security? Are we at the threshold where there are enough eyes on the code to locate these kinds of bugs? Or is the quality of FS/OSS declining?
Wasn't it bug fixing month? (Score:2, Interesting)
But to answer your last question, it seems it's only getting better, with all these bugs being found and all.
Not a surprise... (Score:2, Insightful)
Yes; perhaps this is due to the fact that FS/OSS is used by more developers/users. More eyeballs and more code to exercise libraries mean more bugs are discovered. As mentioned, this bug is (relatively) benign, and has already been fixed in the source. So I wouldn't necessarily say that FS/OSS is getting "more buggy", any more than commercial software, whose bugs don't leave the company if users don't discover them first.
Re:ouch (Score:4, Insightful)
That's the strength of open source.
Re:ouch (Score:2, Insightful)
Re:ouch (Score:2)
Your analogy doesn't make much sense. I would say "the horses have run amok" when there are actually boxes being compromised using this hole. At the moment, this is a purely theoretical exploit. This is more like discovering the barn door was open, and closing it before any horses got out.
Your other point is a good one, though. Anyone interested in secure servers would do well to investigate OpenBSD, as they have spent huge amounts of time auditing their code.
Re:ouch (Score:2)
Re:ouch (Score:2)
I agree that discovering that the barn door could be opened before it was installed is the best solution, however just in case something is found, you have to have a half-decent backup plan.
This is something that I believe Windows XP got right - by forcing users to download the latest patches, it means that the clueless are no longer vulnerable, and those who know enough to turn the updates off are either keeping up with the patches themselves, or basically have nobody to blame when Code Red III comes along...
Re:ouch (Score:2)
Compare that with most of the closed-source security flaws this year, which more had to do with actual exploited vulnerabilities than with potential theoretical vulnerabilities.
I guess the difference is that in a lot of these FS/OSS projects the potential problems are announced and stomped out quicker than they can become actual exploited vulnerabilities.
A double free that creates a potential remote hole (and more likely a potential DoS) isn't good, but if we weight security flaws, I would weight that far below something like Code Red or Sircam which is being actively exploited.
Re:ouch (Score:2)
L0pht Heavy Industries: "Making the theoretical practical since 1992."
The fact is, a theoretical flaw is still a flaw, and it's still relevant to my original question.
"Many of these don't even have an exploit coupled with them..."
...yet.
"I guess the difference is that in a lot of these FS/OSS projects the potential problems are announced and stomped out quicker than they can become actual exploited vulnerabilities."
This is something that's been bugging me as well. It seems that more security bugs are kept quiet until a fix is prepared. Personally, I'd much rather know ASAP, so I can at least disable or filter the vulnerable service. I think that as soon as a vuln. is discovered & discussed on any public list, the more likely it will be that there is an exploit available. In some cases (WU-FTPd globbing bug comes to mind), the vuln. was known about for some time before the fix was available. In this case, there was even an exploit available. The bug was discovered in July, a security advisory was being drafted as early as September, and the fix didn't get released until December.
Additional publicity (Score:2)
Dirty game really, but that's how they play it.
This is why news stories keep starting with Linux blah and then turning into "everyone blah"
Re:Additional publicity (Score:2)
Re:Additional publicity (Score:2)
As other posters noticed, GNU libc defaults to an unsafe but faster free() implementation that can damage the heap if called more than once on the same pointer. Other operating systems are said to default to a safer malloc() implementation.
The question is if, and to what extent, potential extra security justifies real trade-offs in speed.
Using shared libraries is not free of trade-offs. Position independent code (PIC) used in them is slower on many architectures, including x86. However, shared libraries are preferred because, should anything like this bug in zlib be found, the only package to replace will be the faulty library (and the "offenders" linking to it statically, like CVS and rsync).
But the measure that could have prevented this bug from being an issue on GNU/Linux has not been taken, on the same grounds of speed! Yet I think that a lot of programs suffer (in terms of speed) more from slower PIC code than from a safer malloc(). No, I'm not suggesting going back to static libraries. Rather, we should acknowledge that there is a price tag attached to peace of mind.
Linux kernel developers should have examined the zlib code for compatibility with Linux kernel-space memory management.
Re:Additional publicity (Score:2)
Re:ouch (Score:2)
As for the quality of free and open source software, well, I've never understood why people think it is any better than commercial software. I've been a professional programmer for 10 years and I've seen whack commercial code, whack open source code, and I'm pretty sure I've written some whack code myself. The license doesn't have much to do with it, except that NDAs keep me from discussing some commercial code. Having open source visible on the bug watch list doesn't hurt much because fixes are usually available. It's when people don't pay attention to the fixes that the problem gets out of hand. Remember, the fix for the exploit that Code Red used was out a few weeks before Code Red. Now if we see this zlib bug take down a big chunk of the Linux community in a few weeks, then it will be a bad thing, because we didn't pay attention.
zlib rarely used in MacOS. MacOS safe. (Score:2, Interesting)
The MacOS running WebStar as a server has never been exploited.
In fact in the entire securityfocus (bugtraq) database history there has never been a Mac exploited over the internet remotely.
That is why the US Army gave up on MS IIS and got a Mac with WebStar.
I am not talking about BSD derived MacOS X (which already had a couple of exploits) I am talking about Mac OS 9 and earlier.
Why is it hack proof? These reasons:
1> No command shell. No shell means no way to hook or intercept the flow of control with the various shell-oriented tricks found in Unix or NT.
2> No root user. All Mac developers know their code is always running as root. Nothing is higher (except undocumented microkernel stuff where you pass Gary Davidian's birthday into certain registers and make a special call). By always being root there is no false sense of security.
3> Pascal strings. ANSI C strings are the number one way people exploit Linux and Wintel boxes. The Mac has historically avoided C strings throughout most of its OS. In fact even its ROMs originally used Pascal strings. As you know, Pascal strings are faster than C strings (because they keep the length delimiter in front and do not have to endlessly hunt for NUL), but the side effect is fewer buffer exploits.
4> Stack return address positioned in a safer location than on Intel. Buffer exploits take advantage of sloppy programmers' lack of string length checking and clobber the return address to run their exploit code instead. The Mac places the return address in front of where the buffer would overrun. Much safer.
5> Macs running WebStar have the ability to only run CGIs placed in the correct directory location and correctly file-typed.
6> Macs never run code merely based on how a file is named. ".exe" suffixes mean nothing. The file type is 4 characters of user-invisible attributes, along with many other invisible attributes, but these 4 bytes cannot be set by most tool-oriented utilities that work with data files. For example, file copy utilities preserve launchable file types, but JPEG, MPEG, HTML and TXT oriented tools are physically incapable of creating an executable file; the file type is never set to executable for the hacker's needs. In fact it's even more secure than that. A Mac cannot run a program unless it has TWO files. The second file is an invisible file associated with the data-fork file and is called a resource fork. EVERY Mac program has a resource fork file containing launch information. It needs to be present. Typically JPEG, HTML, MPEG, TXT, ZIP and C files are merely data files and lack resource forks, and even if they had them they would lack launch information. But the best part is that Mac web programs and server tools do not usually create files with resource forks. TOTAL security.
7> There are fewer Macs, though there are huge cash prizes for cracking into a MacOS-based WebStar server. Fewer Macs means less hacker interest; but there are millions of Macs sold, and some of the most skilled programmers are well versed in systems-level Mac engineering and know of the cash prizes, so it's a moot point. Perhaps Macs are never cracked simply because there appear to be fewer of them. (Many Macs pretend they are Unix and give false headers to requests, to keep up the illusion: ftp, http, finger, etc.)
8> MacOS source is traditionally not available outside Apple, similar to Microsoft's source availability to its summer interns and such; MacOS source is rare. This makes it hard to look for programming mistakes, but I feel the restricted source access is not the main reason the MacOS has never been remotely broken into and exploited.
Sure, a fool can install freeware and shareware server tools and insecure 3rd-party add-on tools for e-commerce, but a Mac (MacOS 9) running WebStar is the most secure web server possible, and WebStar offers many services as is.
I think it's quite amusing that there are over 200 or 300 known vulnerabilities in RedHat over the years and not one MacOS remote exploit. And now, with zlib, even more holes can be found in Linux.
Easy Workaround! (Score:5, Interesting)
Recent versions of Linux libc (later than 5.4.23) and GNU libc (2.x) include a malloc implementation which is tunable via environment variables. When MALLOC_CHECK_ is set, a special (less efficient) implementation is used which is designed to be tolerant against simple errors, such as double calls of free() with the same argument, or overruns of a single byte (off-by-one bugs). Not all such errors can be protected against, however, and memory leaks can result. If MALLOC_CHECK_ is set to 0, any detected heap corruption is silently ignored; if set to 1, a diagnostic is printed on stderr; if set to 2, abort() is called immediately. This can be useful because otherwise a crash may happen much later, and the true cause for the problem is then very hard to track down.
Seems worth it while we all pore through the symbol tables of our static binaries (and recompile the stripped ones. =( )
On another note, I've always regarded security bulletins as a one-way process... For example, I couldn't find a way to tell RedHat they'd omitted this (seemingly important?) reminder. Any thoughts about this? (admittedly i didn't look very hard for very long)
One more bug OpenSSH is affected by... (Score:3, Informative)
This is why you clear pointers after freeing them (Score:5, Informative)
This doesn't prevent attempts to free the previously freed pointer, but that will generally do a lot less damage than freeing a real malloc'd address. And during development it's trivial to add an assertion checking for a NULL pointer before any free().
Pointer aliasing... (Score:3, Insightful)
(A much better solution is to use a garbage collector.)
Re:This is why you clear pointers after freeing th (Score:2)
Simply clearing it to NULL means that such won't be found, which may hide related errors. Letting it be freed twice would simply be sloppy and asking for trouble.
Re:This is why you clear pointers after freeing th (Score:3, Informative)
The assertion lets you catch the logic error that led to the second free during debugging. Presumably, your code path wasn't expecting the pointer to already be free at that point; otherwise, you would have designed it to handle that case already.
Then, in production code, if you do take that path, you'll get the harmless no-op free(0). (You do build production with -DNDEBUG, right?)
Re:This is why you clear pointers after freeing th (Score:2)
Problem: What if you have a structure that *optionally* allocates memory, and your cleanup code for that structure just frees all the pointers?
I have never bothered to check free because my rule is: for every init of a structure, make sure you call the cleanup.
This has worked for hundreds of kLOC of C I've written. Also, debugging heaps are nice. :)
SuSE advisory (affected packages) (Score:3, Informative)
Part 2: packages containing libz/zlib [suse.com]
From part 2:
The bright side of the bug (Score:2)
Quick workarounds. (Score:2)
export MALLOC_CHECK_=2
(don't forget the extra underscore at the end)
On BSD systems
ln -s ZH
It will protect both your statically and dynamically linked apps. It adds a little performance penalty, but it's really not noticeable.
Remote upgrades : be careful (Score:3, Insightful)
Because SSH/OpenSSH depends on zlib, if you replace your current libz.so file with another version whose API has changed a bit, your SSH server won't work any more.
So if you don't have access to the console, open a classical 'telnet' port for a few minutes, just during the upgrade. Once you've checked that SSH is still ok, you can remove the telnet daemon.
If SSH doesn't work any more after the zlib upgrade, recompile SSH.
Re:Remote upgrades : be careful (Score:4, Informative)
So if you don't have access to the console, open a classical 'telnet' port for a few minutes, just during the upgrade. Once you've checked that SSH is still ok, you can remove the telnet daemon.
Since the SSH server forks after you've connected, you can safely stop the server while connected via SSH. You never need to use telnet. Just make sure that you can still connect before disconnecting from the original SSH connection.
Would be impossible in garbage-collected language! (Score:5, Interesting)
Like most recent security holes in linux software, this one would be unexploitable in a modern safe language. (In fact it would be *impossible* to make this error in a garbage-collected language!)
The typical response I hear to this kind of comment is that "high level languages are inefficient". (I don't believe this is true, but most other people here do.) But whatever, let's pretend they are.
Now, what kind of crazy world do we live in where we value performance more than correctness (security)?? We are seeing more and more security holes as we try to write bigger and bigger packages in C. Why do we accept this? Who here really cares more about the performance of zlib than the time it takes for them to patch all of their statically-linked software, and their risk of being rooted until they do? I sure don't.
Forget about all this "coding practices" stuff. It simply takes too much effort to produce bug-free code in C. The OpenBSD people, kings of code review, just had an exploitable bug in sshd! While we need to use C for some tasks (i.e., most parts of the kernel), I think we are seriously underpowered to do this for most applications (as evidenced by the high number of simple errors made, and only sometimes caught).
If we simply wrote our software in high level languages, we would automatically rule out the largest classes of security holes, which would give us a lot more time to work on more important things, like high level architecture review and optimizations. I think we'd end up with a better system. So what's keeping us?
For more discussion, see our big argument in the story about the OpenSSH root hole. http://slashdot.org/comments.pl?sid=29123&cid=3124957 [slashdot.org]
XFree86 4.2.0?? (Score:3, Insightful)
Does anybody know where this is used and whether I should do a rebuild with the current 1.1.4 version?
In addition to gs, this seems to be the only software package that contains zlib in it. I found it because there is a
my Linux system.
Re:Version 1.1.4 fixes the problem (Score:4, Funny)
Re:Version 1.1.4 fixes the problem (Score:2)
I was just thinking about this earlier today... (Score:2, Interesting)
This is creepy. I was wondering about possible vulnerabilities in zlib earlier today for a completely different reason.
According to this post [slashdot.org] in a different article, a recent patch to Internet Explorer disables support for gzip encoding. Do you think, perhaps, Microsoft noticed the bug, shut off the support (since presumably they're just using zlib also), and just didn't say anything?
Of course, that is just paranoia. They could have plenty of other reasons.
--Joe
Re:Programming languages: fool me once ... (Score:2)
But a point you may be missing is that MANY of these languages are built on C, and as a result are in effect inherently unsafe as well, unless they are self-hosting.
...shame on the distribution publishers. (Score:2)
Yet why in the name of all that is good do they not realize that C is an inherently unsafe language? There are some really good free alternatives to C, so why the heck are those numbskulls not using them?
Because most distributions of GNU/Linux operating systems don't install any compiled languages but C and C++. (There is no language called "C/C++".)
Because popular libraries have C bindings.
Because compilers can optimize C code for machines with limited resources. (Java technology is a memory hog partly because of the 16-byte overhead of java.lang.Object.)
Because people think in "step 1, step 2, step 3" of an algorithm, rather than in functional style (partially ruling out CL, ML, and Haskell).
Take your choice: Eiffel, Ada, Modula-3
Which of these languages is in the default devel install in all the major distros and can link to libraries' C bindings?
Re:hahaha... (Score:2)
Re:The article says this is only affecting Linux (Score:3, Interesting)
Re:The article says this is only affecting Linux (Score:2)
I guess it is only partially immune: http://www.kb.cert.org/vuls/id/368819. Reading the security list I gather that it is only a problem with regards to running Linux apps. Time to give FreeBSD's free() a hug.
Re:The article says this is only affecting Linux (Score:2)
I bet you are one of the people who instantly criticize when you hear that M$ (or insert your favorite evil corp) knew about a vulnerability and kept it quiet for a month.
Damned if ya do...
Re:bugtraq-slashdot effect (Score:2)
This bug was not reported on Bugtraq yesterday. In fact, it's not been mentioned on Bugtraq at all yet.
Re:Quick Response by RedHat (Score:2)
Re:So much for "many eyes".... (Score:2)
Did you come up with that one? Well put. I may have to make that my sig, I feel the same way, wish I had thought up that one.
Re:Oh the IRONY! (Score:2)
Linux:
1. Flaw Found
2. Flaw fixed for current and previous distributions.
Windows:
1. Flaw found
2. Flaw claimed to be feature
3. Large Corporation bothered by flaw
4. Flaw patch development begins.
5. Beta release of flaw patch
6. Beta 2 release of flaw patch
7. RC1 of flaw patch
8. Flaw patch released for new OS products
9. One month later, flaw patch released for most widely used OS version.