Are the Hard-to-Exploit Bugs In LZO Compression Algorithm Just Hype?
NotInHere (3654617) writes In 1996, Markus F. X. J. Oberhumer wrote an implementation of Lempel–Ziv compression which is used in various places, including the Linux kernel, libav, OpenVPN, and the Curiosity rover. As security researchers have found, the code contained integer overflow and buffer overrun vulnerabilities in the part of the code responsible for processing uncompressed parts of the data. Those vulnerabilities are, however, very hard to exploit, and their scope depends on the actual implementation. According to Oberhumer, the problem only affects 32-bit systems. "I personally do not know about any client program that actually is affected," Oberhumer says, calling the news about the possible security issue a media hype.
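To see the bug class being described, here is a hedged sketch (not LZO's actual code; all names are made up): when a run length is accumulated from the compressed stream into a 32-bit size, the sum can wrap around, so a later "does it fit?" check passes even though the real run is enormous.

```cpp
#include <cstdint>

// Sketch of the reported bug class: on a 32-bit size type, adding a
// base length and an attacker-supplied extra length can wrap around,
// fooling the subsequent bounds check.
bool length_fits(uint32_t base, uint32_t extra, uint32_t out_remaining) {
    uint32_t total = base + extra;   // may wrap modulo 2^32
    return total <= out_remaining;   // check passes after a wrap
}

// Safer variant: detect the wrap before trusting the sum.
bool length_fits_safe(uint32_t base, uint32_t extra, uint32_t out_remaining) {
    if (extra > UINT32_MAX - base) return false;  // sum would overflow
    return base + extra <= out_remaining;
}
```

On a 64-bit `size_t` the same sum does not wrap for realistic inputs, which is consistent with Oberhumer's claim that only 32-bit systems are affected.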
Absolutely. (Score:1)
Yes.
Re: (Score:2)
Of course, because of ... Betteridge's law of headlines [wikipedia.org].
Re: Kernel bloat (Score:2)
Are you trolling?
There are many good use cases - as is illustrated in TFA
Configure your kernel and rebuild if you don't want the benefits.
Re:Kernel bloat (Score:5, Insightful)
Why should the Linux kernel have a compression algorithm in it?
Because it needs to compress and decompress things.
The kernel image is usually compressed anyway, then you've got things like page compression for zram, in-filesystem compression support - heck, BTRFS uses LZO! I think some network layer stuff like PPP supports header compression, and those are only the things I'm vaguely aware of.
Re:Kernel bloat (Score:5, Informative)
Compressed memory? Filesystem compression? Compressed memory before swap? Compressed init filesystem?
Lots of valid reasons. Those are just the ones that I know of off the top of my head and I don't even use Linux.
Re: (Score:2)
Re: (Score:2)
Because it can.
Seriously. That's why.
Do you actually know about this specific feature of the kernel, or are you just trying to appear cynical and wise?
One suspects it's there because it's actually used by something. One also suspects it's used for a reason.
Re: (Score:2)
It seems that leaving out a lot of that stuff would leave us with a leaner and nicer kernel. A lot of the compression stuff there is just because Linux has to have "everything".
Looks like those in charge disagree, and presumably they have their reasons. There are downsides to being lean-and-mean, of course.
I doubt if you even know what kind of things it is used for.
You're right.
Re: (Score:3, Insightful)
You're confused: this fights bloat on disk by letting the kernel image be half the size and load twice as fast (decompression time is negligible, less than a tenth of a second on a typical desktop).
It of course has many other uses, but you can read the article to find out.
Re: (Score:2)
Bloat isn't just the size on stable storage... In fact, most of the problems of software bloat are entirely separate from that metric; code size, complexity, and other effects matter far more.
Re: (Score:2)
This compression code is very tiny in the binaries; most of the kernel is device drivers.
Re: (Score:2)
Decompressing initramfs images? Stuff like that is great for embedded systems. The feature is optional.
Re:Kernel bloat (Score:5, Funny)
Why should the Linux kernel have a compression algorithm in it?
Because it aspires to compress itself into a microkernel.
Re: (Score:1)
Whoosh! I think O
Re: (Score:2, Funny)
Are you sure you understand what a microkernel design actually is?
The area of your brain which processes humor. ;)
Famous last words (Score:5, Informative)
I'm old enough to recall when many people argued we didn't have to worry about various (then theoretical) JPEG vulnerabilities because they would be "extremely hard to exploit". But once it becomes known that something is possible, people have repeatedly proven themselves extremely clever in figuring out how to accomplish it.
If I was on the Rover team, I might not worry - but terrestrial users of LZO compression should at least start thinking about how to ameliorate this.
Re:Famous last words (Score:5, Interesting)
In this case, it's not just "extremely hard to exploit" (which means the NSA had it done 10 years ago and the other black hats 5). It appears to be impossible: causing the overrun requires a compressed block larger than the affected programs will accept. (Of course, this doesn't preclude other bugs that allow a larger compressed block through.)
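The parent's point amounts to saying that a per-block size cap, applied before decompression, is doing double duty as the security check. A minimal sketch of such a guard (the cap value here is an assumption for illustration, not any real program's limit):

```cpp
#include <cstddef>

// Hedged sketch: callers that cap compressed block sizes well below the
// multi-megabyte input needed to trigger the length overflow can never
// feed the decompressor a dangerous block. kMaxBlock is illustrative.
constexpr std::size_t kMaxBlock = 8 * 1024 * 1024;  // assumed 8 MiB cap

bool accept_block(std::size_t compressed_len) {
    return compressed_len > 0 && compressed_len <= kMaxBlock;
}
```

Rejecting oversized blocks before they reach the decompressor is why the flaw is argued to be unexploitable in practice for those programs.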
You don't know... (Score:1)
but NSA knows.
Re: (Score:3)
File system drivers in general are not properly security vetted. You can do interesting stuff to a Linux box if you put ext4 on a fake device and start messing with what is on the disk while it is being read. Many device drivers have similar problems; you could find a Linux device driver with a problem and make a fake piece of hardware resembling the real thing while exploiting the bug.
This is pretty much unfixable. While most core OS code is of high quality these days, there is just too much driver code.
If a tree falls in a forest... (Score:5, Informative)
Whether you consider this issue hype depends on your answer to the "if a tree falls in a forest and there's no one to observe it..." thought experiment.
The author of LZ4 has a summary with regards to LZ4 [blogspot.co.uk] (both LZO and LZ4 are based on LZ77 compression, and both contained the same flaw): the issue has not been demonstrated to be exploitable in currently deployed programs, due to their configuration (a rather angrier original reply, since redacted, was posted first [blogspot.co.uk]). So at present the issue is severe but of low importance. If a way is found to exploit it in currently deployed popular programs without changing their configuration, it will become high importance as well; but since the flaw has now been patched, newly deployed systems hopefully won't be vulnerable.
Safe Buffer? (Score:4)
Given the number of security issues related to buffer overruns, I wonder whether C/C++ should provide a safe buffer that would help alleviate these issues. Sure, it might compromise performance slightly, but that might be acceptable compared with the alternative of unexpected issues from an unforeseen buffer overrun.
Re: (Score:1)
The real question is why can't programmers figure out how to do bounds checking.
It's not exactly rocket science.
Just make a struct with an array pointer and a size int.
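The parent's "struct with an array pointer and a size int" can be sketched in a few lines; names here are made up for illustration, and a real version would also manage ownership:

```cpp
#include <cstddef>
#include <stdexcept>

// Minimal "fat pointer" sketch: the buffer carries its own length and
// every access is checked against it.
struct SafeBuf {
    unsigned char* data;
    std::size_t len;

    unsigned char& at(std::size_t i) {
        if (i >= len) throw std::out_of_range("SafeBuf: index past end");
        return data[i];
    }
};
```

Usage: `SafeBuf b{storage, 4}; b.at(3) = 42;` works, while `b.at(4)` throws instead of silently overrunning the array.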
Re: (Score:3, Insightful)
Sigh. Every time someone suggests online that perhaps we should be putting better bounds-checking constructs into languages, someone else flames on and says "programmers should just, well, program better!" True, yet a pipe dream at the same time, as history shows.
Fact is, yes, programmers should be doing proper bounds-checking. But programmers are people, and people suck (I'm a member of both sets, so I can personally corroborate this). It's time we stopped saying over and over that "programmers should just program better."
Re: (Score:1)
And to further the argument: is a glass manufacturer lazy/stupid/careless when he sells non-bulletproof glass for $X, and doesn't make it a point to only sell bulletproof glass at $X * 100?
The same way I just have to accept that the door to my balcony won't stop a man with a ladder, a sledgehammer, and ~15 minutes of time, I have to accept that "normal" computer security won't keep me 100% safe unless I invest some time and effort myself, or pay someone to make the effort, well above "the norm".
Re: (Score:1)
Because it's not the right solution for every problem, and if you make languages that *force* this kind of behaviour, the shitty programmer will just put their bugs elsewhere.
The solution is to simply write better code.
Re: Safe Buffer? (Score:1)
The folks who designed ALGOL already thought about this and solved it. They even built entire mainframe operating systems using ALGOL variants, with fine-grained security protection inside each program and each library, not just "the MMU will hopefully contain any C-based exploit at the process level". They have sold this as the MCP OS since 1961. They still do. The Russians have a similar OS, and so did Britain's ICL corporation.
But you know what? Cheap and nasty C, Unix, MS-DOS, and Windows have somehow eliminated them.
Re: (Score:3)
When I see that expression "C/C++" used in this particular context it raises my hackles, because it indicates the writer does not understand the difference.
In C the programmer is free to USE a buffer safely, by doing his own bounds checking. In C++ the programmer is free to use C++'s non-overflowing, dynamically-allocated/self-growing constructs, as well as a simple C-style array or fixed-size-allocated buffer. C++ makes it substantially easier to use a buffer safely.
Safe code not about ability to make it safe... (Score:1)
Re: (Score:2)
Why still program in C? Simple: C is easy to glue to everything, precisely because it assumes so little about data structures. And because you have to reinvent data validation (safe buffers) for every interface again, there is a huge risk there.
The obvious solution is to use proven libraries for these problems (OpenSSL, liblzo ;) ). However, if one of these libraries has a bug (or is not obvious to use), the problem is in many programs at once.
Re: (Score:2)
Once you start checking bounds, counting references, making strings safe, cleaning memory, and collecting garbage, you're in the realm of Obj-C, Java, and other higher-level languages. They exist, they are available, and they can be used to implement any algorithm imaginable. Yet programmers still use C, assembler, and even PROM...
Re: (Score:2)
If we're talking about stuff in the kernel, we're talking about needing performance with minimal runtimes. That usually rules out garbage collection. At that point, you should probably consider C++, which provides string and array equivalents with bounds checking but is about as efficient as C.
Re: (Score:2)
But that's what I mean. However, C++ is slower than C; C is simpler to implement, and virtually any platform has a C compiler, but it doesn't do a lot of things out of the box. You choose the tool best suited for the job. I can't program a PIC in JavaScript, but I can do a website in it.
Re: (Score:2)
C++ is very rarely slower than C (particularly if you can disable exceptions), and is sometimes faster (std::sort vs qsort). It doesn't actually require a complicated runtime. These are important differences between it and Java.
If you need C-like performance, want security, and have a good C++ compiler (g++, clang, to some extent VC++), C++ is a very good choice.
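The std::sort-vs-qsort point can be shown concretely: `qsort` calls its comparator through a function pointer on every comparison, while `std::sort`'s comparator is part of the template instantiation and can be inlined. A small sketch of the two equivalent calls:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// C-style comparator: called through a function pointer by qsort,
// which the optimizer generally cannot inline across.
int cmp_int(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

void sort_c(std::vector<int>& v) {
    std::qsort(v.data(), v.size(), sizeof(int), cmp_int);
}

void sort_cpp(std::vector<int>& v) {
    std::sort(v.begin(), v.end());  // comparison inlined by the compiler
}
```

Both produce the same ordering; the difference is purely in how the comparison is dispatched, which is where the speed advantage comes from.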
Re: (Score:2)
They have. It's called Java or C#.
Re: (Score:2)
C++ has safe buffers in the standard library.
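Concretely, `std::vector::at()` (and `std::array::at()`) is the standard library's bounds-checked access: unlike `operator[]`, an out-of-range index throws rather than silently overrunning the buffer.

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Bounds-checked read: at() throws std::out_of_range for a bad index
// instead of reading past the end of the allocation.
int read_checked(const std::vector<int>& buf, std::size_t i) {
    return buf.at(i);
}
```

This trades one branch per access for the guarantee that an index bug becomes a catchable exception instead of memory corruption.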
32bit OS / 32bit app or 32bit cpu (Score:2)
Does "32-bit systems" here mean a 32-bit OS / 32-bit app, or a 32-bit CPU?
c/c++ (Score:1)
How could you think switching to C++ would solve any issues, knowing a translator for the programming language can itself be compiled in C (the old Russian joke about the garage key)? Your antivirus has no chance when the payload is written in that language and then gets translated back to C, because it has already been accepted at the door to the assembly line (the native language). Slick key, that garage key.
Grain of salt.... (Score:2)
I take such news with a grain of salt. In my experience/estimate, about 80% of security experts report "not possible to reproduce / impossible to exploit" for REAL exploitable bugs.