Security Software

Are the Hard-to-Exploit Bugs In LZO Compression Algorithm Just Hype? 65

NotInHere (3654617) writes In 1996, Markus F. X. J. Oberhumer wrote an implementation of Lempel–Ziv compression (LZO), which is used in various places like the Linux kernel, libav, OpenVPN, and the Curiosity rover. As security researchers have found out, the code contained integer overflow and buffer overrun vulnerabilities in the part of the code responsible for processing uncompressed parts of the data. Those vulnerabilities are, however, very hard to exploit, and their scope depends on the actual implementation. According to Oberhumer, the problem only affects 32-bit systems. "I personally do not know about any client program that actually is affected," Oberhumer says, calling the news about the possible security issue media hype.
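For the curious, here is a minimal hypothetical sketch (not the actual LZO source) of the general bug class being described: a length check written as pointer arithmetic that can be defeated by overflow on a 32-bit system, next to a safer variant.

#include <cstddef>
#include <cstdint>
#include <cstring>

// Copies a run of 'len' uncompressed (literal) bytes into the output buffer.
// 'dst_end' points one past the end of the output buffer.
bool copy_literals(std::uint8_t *dst, const std::uint8_t *dst_end,
                   const std::uint8_t *src, std::size_t len)
{
    // Broken check: with a huge, attacker-controlled 'len', 'dst + len' can
    // wrap around on a 32-bit address space, so the test passes even though
    // the copy runs past 'dst_end'.
    if (dst + len > dst_end)
        return false;
    std::memcpy(dst, src, len);   // buffer overrun once the check is fooled
    return true;
}

// Safer form: compare 'len' against the space actually remaining, which
// cannot overflow because 'dst' never exceeds 'dst_end'.
bool copy_literals_safe(std::uint8_t *dst, const std::uint8_t *dst_end,
                        const std::uint8_t *src, std::size_t len)
{
    if (len > static_cast<std::size_t>(dst_end - dst))
        return false;
    std::memcpy(dst, src, len);
    return true;
}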
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward

    Yes.

  • Famous last words (Score:5, Informative)

    by 93 Escort Wagon ( 326346 ) on Saturday June 28, 2014 @04:21PM (#47341871)

    I'm old enough to recall when many people argued we didn't have to worry about various (then theoretical) JPEG vulnerabilities because they would be "extremely hard to exploit". But once it becomes known that something is possible, people have repeatedly proven themselves extremely clever in figuring out how to accomplish it.

    If I were on the Rover team, I might not worry - but terrestrial users of LZO compression should at least start thinking about how to mitigate this.

    • Re:Famous last words (Score:5, Interesting)

      by russotto ( 537200 ) on Saturday June 28, 2014 @06:24PM (#47342391) Journal

      In this case, it's not just "extremely hard to exploit" (which means the NSA had it done 10 years ago and the other black hats 5). It appears that it's impossible -- to cause the overrun requires a compressed block size larger than the affected programs will accept. (Of course, this doesn't preclude the possibility of other bugs which allow a larger compressed block through.)

  • but NSA knows.

  • by Sits ( 117492 ) on Saturday June 28, 2014 @05:14PM (#47342109) Homepage Journal

    Whether you consider this issue hype depends on your answer to the "if a tree falls in a forest and there's no one to observe it..." thought experiment.

    The author of LZ4 has posted a summary with regard to LZ4 [blogspot.co.uk] (both LZO and LZ4 are based on LZ77 compression, and both contained the same flaw): the issue has not been demonstrated to be exploitable in currently deployed programs, because of how they are configured (a rather angrier original reply was posted first and later redacted [blogspot.co.uk]). So at present the issue is severe but of low importance. If a way is found to exploit it in currently deployed, popular programs without changing their configuration, then it will also become highly important; but since the flaw has now been patched, newly deployed systems hopefully won't be vulnerable.

  • by Midnight Thunder ( 17205 ) on Saturday June 28, 2014 @06:04PM (#47342329) Homepage Journal

    Given the number of security issues related to buffer overruns, I wonder whether C/C++ should provide a safe buffer type that would help alleviate these issues. Sure, it might compromise performance slightly, but that might be acceptable when faced with the alternative of unexpected behaviour from an unforeseen buffer overrun.

    • by Anonymous Coward

      The real question is why programmers can't figure out how to do bounds checking.
      It's not exactly rocket science.
      Just make a struct with an array pointer and a size int (sketch below).
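
      A minimal sketch of that suggestion (identifiers hypothetical): a struct holding the array pointer and its size, with every access checked against the size.

      #include <cstddef>
      #include <cstdint>
      #include <cstdio>

      // The struct described above: the array pointer plus its size, so every
      // access can be validated instead of trusting the caller.
      struct checked_buf {
          std::uint8_t *data;
          std::size_t   len;
      };

      // Returns false instead of writing out of bounds.
      bool buf_write(checked_buf *b, std::size_t index, std::uint8_t value)
      {
          if (index >= b->len)
              return false;                  // reject the out-of-bounds access
          b->data[index] = value;
          return true;
      }

      int main()
      {
          std::uint8_t storage[16] = {0};
          checked_buf b{storage, sizeof storage};

          std::printf("%d\n", buf_write(&b, 3, 0x41));    // 1: in bounds
          std::printf("%d\n", buf_write(&b, 99, 0x41));   // 0: rejected, no overrun
      }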

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        Sigh. Every time someone suggests online that perhaps we should be trying to put better bounds-checking constructs into languages, someone else will flame on and say "programmers should just, well, program better!". True, yet a pipe dream at the same time -- as history shows.

        Fact is, yes, programmers should be doing proper bounds checking. But programmers are people, and people suck (I'm a member of both sets, so I can personally corroborate this). It's time we stopped saying, over and over, "programmers should just program better"…

        • by aix tom ( 902140 )

          And to further the argument: is a glass manufacturer lazy/stupid/careless when he sells non-bulletproof glass for $X, and doesn't make it a point to only sell bulletproof glass at $X * 100?

          The same way I just have to accept that the door to my balcony is not going to stop a man with a ladder, a sledgehammer and ~15 minutes of time, I have to accept that "normal" computer security won't keep me 100% safe, unless I invest some time and effort myself, or pay someone to do the effort, way above "the norm", to make…

      • Because it's not the right solution for every problem, and if you make languages that *force* this kind of behaviour, the shitty programmer will just put their bugs elsewhere.
        The solution is to simply write better code.

    • by Anonymous Coward

      The folks who designed ALGOL already thought about this and have it. They even built entire mainframe operating systems using ALGOL variants. They had fine-grained security protection inside each program, each library -- not just "the MMU will hopefully contain any C-based exploit at the process level". They have sold this as the MCP OS since 1961. They still do. The Russians have a similar OS, and so did Britain's ICL corporation.

      But you know what? Cheap and nasty C, Unix, MSDOS, Windows have somehow eliminated…

    • by fnj ( 64210 )

      I wonder whether C/C++ should provide a safe buffer

      When I see that expression "C/C++" used in this particular context it raises my hackles, because it indicates the writer does not understand the difference.

      In C the programmer is free to USE a buffer safely, by doing his own bounds checking. In C++ the programmer is free to use C++'s non-overflowing, dynamically allocated, self-growing constructs, as well as a simple C-style array or fixed-size-allocated buffer. C++ makes it substantially easier to use a buffer…
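
      To make the distinction concrete, a small sketch (identifiers hypothetical) of the two styles described above: manual bounds checking around a plain C-style buffer, versus a self-growing, range-checked C++ container.

      #include <cstddef>
      #include <cstdio>
      #include <stdexcept>
      #include <vector>

      int main()
      {
          // C style: a fixed buffer plus a manual check the programmer must
          // remember to write at every single access.
          char raw[8];
          std::size_t i = 12;
          if (i < sizeof raw)
              raw[i] = 'x';
          else
              std::puts("manual check: index rejected");

          // C++ style: std::vector grows on demand, and .at() range-checks,
          // throwing instead of silently overrunning.
          std::vector<char> v(8);
          v.push_back('x');                   // self-growing, no overflow
          try {
              v.at(42) = 'x';                 // checked access
          } catch (const std::out_of_range &) {
              std::puts("vector::at: index rejected");
          }
      }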

      • Three words: goto considered harmful. Safe code isn't only about the ability to make it safe. It's about a set of rules the environment or language imposes on the programmer to make it *easier* to make it safe and *harder* to make it unsafe. The programmer can still choose the language, so it's a self-imposed set of rules. Like garbage collection: of course any decent programmer can do his own garbage collection, but if you can keep the programmer from having to do that, you (1) free him up to do things…
      • by leuk_he ( 194174 )

        Why still program in C? Simple: C is easy to glue to everything, precisely because it doesn't assume too much about the data structures. And because you have to reinvent data validation (safe buffers) for every interface again, there is a huge risk there.

        The obvious solution is to use proven libraries for these problems (openssl, liblzo ;)). However, if one of these libraries has a bug (or is not obvious to use correctly), the problem is in many programs at once.

    • by guruevi ( 827432 )

      Once you start checking bounds and counting references and making strings safe and cleaning memory and collecting garbage, you're in the realm of ObjC, Java and other higher-level languages. They exist, they are available, and they can be used to implement any algorithm imaginable. Yet programmers still use C, Assembler and even PROM...

      • If we're talking about stuff in the kernel, we're talking about needing performance with minimal runtimes. This usually rules out the usual garbage collection. At that point, you probably should consider C++, which provides string and array equivalents with bounds checking but is as efficient as C.

        • by guruevi ( 827432 )

          But that's what I mean. However, C++ is slower than C; C is simpler to implement, and virtually any platform has a C compiler, but C doesn't do a lot of things out of the box. You choose the tool best suited for the job. I can't program a PIC in JavaScript, but I can do a website in it.

          • C++ is very rarely slower than C (particularly if you can disable exceptions), and is sometimes faster (std::sort vs qsort). It doesn't actually require a complicated runtime. These are important differences between it and Java.

            If you need C-like performance, want security, and have a good C++ compiler (g++, clang, to some extent VC++), C++ is a very good choice.
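
            For what it's worth, a small illustration of the std::sort vs qsort point: the qsort comparator goes through a function pointer, while std::sort is instantiated with the comparator's type and can inline the comparison.

            #include <algorithm>
            #include <cstdlib>
            #include <vector>

            // qsort: the comparator is an opaque function pointer, so the
            // library's call site generally cannot inline it.
            static int cmp_int(const void *a, const void *b)
            {
                int x = *static_cast<const int *>(a);
                int y = *static_cast<const int *>(b);
                return (x > y) - (x < y);
            }

            int main()
            {
                std::vector<int> v = {5, 3, 9, 1, 7};

                // C style: type-unsafe (void *) with an indirect comparator call.
                std::qsort(v.data(), v.size(), sizeof(int), cmp_int);

                // C++ style: the lambda has its own type, so std::sort is
                // instantiated for it and the comparison can be inlined.
                std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });
            }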

    • They have. It's called Java or C#.

    • C++ has safe buffers in the standard library.

  • 32-bit OS / 32-bit app, or 32-bit CPU?

  • by Anonymous Coward

    How could you think switching to C++ would solve any issues, knowing that a translator for programming language G can be compiled in C -- the old Russian joke about the garage key. Your antivirus has no chance when it comes to a payload written in G that then gets translated back to C, because it has already been accepted at the door to the assembly line (native language). Slick key, that garage key.

  • I take such news with a grain of salt. In my experience/estimates, about 80% of security experts report "not possible to reproduce/impossible to exploit" for REAL exploitable bugs.
