Analyzing Binaries For Security Problems

Matt writes "At the last talk at BlackHat in Las Vegas, Greg Hoglund demonstrated a product for sale by his new company that analyzes binaries for security vulnerabilities. He showed the analysis of several commercial products, which turned out to be shockingly insecure. This product should help end the debate over whether closed source or open source applications are more or less secure."
This discussion has been archived. No new comments can be posted.

  • by Meat Blaster ( 578650 ) on Friday August 01, 2003 @06:09AM (#6586442)
    Isn't it blatantly illegal to analyze the majority of the binaries out there? You can't even give benchmarks on many of them without violating the EULA, let alone actually dig through the internals, because it's damaging to the rights of the software designer under the license.

    Then again, it's not like virus scanners don't do the same thing.

    • by msgmonkey ( 599753 ) on Friday August 01, 2003 @06:13AM (#6586456)
      Nope. The contract may say that you may not do this or that, and you could only be sued for breaking a contract.

      If it were illegal, i.e. if there were a law against reverse engineering, benchmarking, etc., it would not need to be in the EULA.

      Also, just because something is in a contract doesn't make it legally binding if the clause breaks laws.
    • by quigonn ( 80360 ) on Friday August 01, 2003 @06:25AM (#6586488) Homepage
      Not in most parts of Europe. The copyright there explicitly permits disassembling and reverse engineering.
      • by BlueWonder ( 130989 ) on Friday August 01, 2003 @06:56AM (#6586571)
        Not in most parts of Europe. The copyright there explicitly permits disassembling and reverse engineering.

        I don't know what you mean by most parts of Europe, but an EU directive makes disassembling and reverse engineering explicitly illegal. This directive must be transposed into national law by all EU member countries, and many have already done so.

        • Not for interop purposes though, as far as I'm aware. Obviously, you cannot use code gained from disassembly in your own products.
        • by LarsG ( 31008 ) on Friday August 01, 2003 @07:15AM (#6586613) Journal
          an EU directive makes disassembling and reverse engineering explicitly illegal.

          Which directive? According to directive 91/250/EEC, reverse engineering is explicitly legal in the EU/EEC.
          • It's a little more complicated than that: it depends on what, exactly, you mean by reverse engineering.

            There's a general right in Article 5(3) of the Directive to "observe, study or test the functioning" of a program to "determine the ideas and principles which underlie any element of the program".

            But decompilation is restricted to circumstances where it's essential to do so to achieve interoperability (e.g. interchange of file formats) with other, independently created, software. You can't use decompiled co
        • Decompilation is explicitly allowed in Austria, to (re-)establish interoperability, even in the revised version: http://www.parlinkom.gv.at/pd/pm/XXII/I/images/000/I00051__3097.pdf

          And this revised copyright law is an implementation of the EU directive!
    • by t123 ( 642988 ) on Friday August 01, 2003 @06:48AM (#6586542)
      Because this is /. and nobody RTFA

      Q: Does BugScan decompile programs?
      A: No. BugScan does analysis of assembly code and does not need to decompile the program.

      Q: Does BugScan "reverse engineer" programs?
      A: No. Reverse engineering is a process where a program or device is taken apart to understand how it works, generally for the purpose of reimplementing, complementing, or modifying a behavior of the system. BugScan doesn't try to understand how the program works, what algorithms it employs, or anything else. BugScan analyzes usage of known APIs and the dataflow to and from those APIs.
    • Isn't it blatantly illegal to analyze the majority of the binaries out there? You can't even give benchmarks on many of them without violating the EULA, let alone actually dig through the internals, because it's damaging to the rights of the software designer under the license.

      It's blatantly illegal to break into other people's machines using the same holes found in such software. So if this is the case, and black hats are still going to do it, do you really think that crappy laws forbidding analysing bin

  • Hmm. (Score:5, Insightful)

    by Anonymous Coward on Friday August 01, 2003 @06:09AM (#6586446)
    Isn't it kind of strange how they make such big claims but present no actual evidence?
    • Re:Hmm. (Score:3, Funny)

      by Gleng ( 537516 )
      You must be new here.
    • Re:Hmm. (Score:4, Interesting)

      by tankdilla ( 652987 ) on Friday August 01, 2003 @06:33AM (#6586507) Homepage Journal
      Is there any evidence? OpenBSD maybe.

      If they did present an app as being secure, it'd definitely put that app under the microscope, as someone would find a vulnerability just to try to show that BugScan doesn't work.

      But I guess if a vuln was found, BugScan would just suggest purchasing BugScan v1.1.

    • Re:Hmm. (Score:5, Informative)

      by archeopterix ( 594938 ) on Friday August 01, 2003 @06:34AM (#6586510) Journal
      Indeed, even finding out what code actually gets executed is by no means a simple task. Easy to follow from the main entry point of the executable? Not always. Some compilers/interpreters create tables of entry points for some functions, then call the functions via entries in the table. Moreover, the table doesn't have to be present in the executable; it can be created at runtime instead (calculated from offsets or something). That's only one of many problems with static analysis of machine code. I don't think their program does much more than scanning for a set of known patterns produced by a set of known compilers.
      • I assume you are talking about vtables here? These would be created by both the compiler and the runtime. Under Windows the runtime is responsible for loading the code, performing fixups, then executing it. This is known behaviour and could be emulated by the software.
    • Re:Hmm. (Score:3, Interesting)

      by ldrolez ( 261228 )
      Moreover, Reasoning's ( http://www.reasoning.com/news/pr/07_01_03.html ) source code analysis of Apache was not very convincing (31 defects => 7 real bugs, see http://www.apacheweek.com/issues/03-07-11 ). So how could they make a better analysis tool without the software's source code?
  • by jamieswith ( 682838 ) on Friday August 01, 2003 @06:10AM (#6586449)
    Is that, provided you have the ability, then you don't have to sit around and wait for someone else to fix the problems in the programs you use...

    Still, politics aside, with more applications like this freely available, perhaps more bugs will actually be fixed - rather than relying on security through obscurity - sitting tight and hoping no-one notices...

    Leave me alone! - I can dream can't I ??
  • Presentation slides (Score:5, Informative)

    by bartc ( 160938 ) on Friday August 01, 2003 @06:13AM (#6586453)
    You can get the slides of his presentation here:

    http://www.blackhat.com/presentations/bh-usa-03/bh-us-03-hoglund.pdf [blackhat.com]
  • obfuscation (Score:5, Informative)

    by doofusclam ( 528746 ) <slash@seanyseansean.com> on Friday August 01, 2003 @06:13AM (#6586457) Homepage
    I'd like to know exactly how it does this, considering what a mess compiled/optimised C++ code can look like at the assembler level. It's also unlikely to be any use on a semi-compiled runtime, such as those used by Visual Basic, .NET, etc., as the only 'code' is the runtime; the actual program is held in a data section.
    • by darkov ( 261309 ) on Friday August 01, 2003 @06:57AM (#6586575)
      I'd like to know exactly how it does this.

      It searches for '(c) Copyright Microsoft Corporation'.
    • operation (Score:3, Insightful)

      by *weasel ( 174362 )
      they likely can't even tell what code is going to execute, so that severely restricts their options.

      odds are they are just scanning for loops that copy until they find a null at the end of a string. (searching for resulting patterns from compiled strcpy as opposed to strncpy).

      as most exploits are buffer overflows, this would theoretically catch all of them. it would also catch all sorts of potential buffer overflows that would never be possible given the level of user input (since it's not running the co
      • I'd imagine there's a lot of "didn't check return value of such-and-such function call" as well.

        Checking return values is something that a lot of people tend to leave out, and something that C doesn't have a way of doing that "feels" natural.

        Perl's simplistic:

        dostuff($param) or die("Couldn't do stuff!");

        ...is nice, because it leaves the most important part at the beginning of the line, and it's immediately obvious what's important. I'm reminded of how Spanish (and many other languages, I'd wager) tends

  • by wazlaf ( 681158 ) on Friday August 01, 2003 @06:15AM (#6586459) Homepage
    I can't imagine this program working very well - finding buffer overflows and other possible security vulnerabilities can be an immensely hard task when you actually _do_ have access to the source code. Also, the available compilers produce quite different assembly for the same code. This just all sounds a little bit too good to be true...
    • by leuk_he ( 194174 ) on Friday August 01, 2003 @06:43AM (#6586530) Homepage Journal
      How about checking the whitepaper? [bugscaninc.com]

      It tells how it works, and it also tells you that it does not have the ability to look at the data users provide.

      It just sniffs at the code, looks to see whether it uses vulnerable calls like strcpy, and reports this. But it completely puzzles me how you can use the report to say "this is good" or "this is good enough" or "this is a piece of shit".

      finding buffer overflows and other possible security vulnerabilities can be an immensely hard task when you actually _do_ have access to the source code. Also, the available compilers produce quite different assembly for the same code.

      This is the part they did right. They can analyze all kinds of assembly, including non-x86. (It does not produce C; rather, they analyze function calls and backtrack them.) The problem is that it analyzes "compiled source code", but not the user input.
      • It just sniffs at the code, looks to see whether it uses vulnerable calls like strcpy, and reports this. But it completely puzzles me how you can use the report to say "this is good" or "this is good enough" or "this is a piece of shit".

        Well it looks to me like you wouldn't want to trust a 'this is good' result from this tool, but if it says it's bad it's probably right.

    • It doesn't (Score:4, Insightful)

      by i_really_dont_care ( 687272 ) on Friday August 01, 2003 @06:43AM (#6586532)
      Looks like a lot of hot air.

      The PDF presentation tells us things that we know already (buffer overflow, race conditions, whatever).

      Two screenshots show debuggers and disassemblers. Another screenshot shows the "analysis results" of the "tool": "wsprintf: This function is insecure, use another function." Even this info is useless, because wsprintf is insecure only if it is used the wrong way, and I bet the "tool" doesn't check that. Besides, everyone uses std::string these days (or at least should do so).

      It's also worth noting that about every university in the world has one or more groups working on topics like "automatic code verification", "code path analysis" and other things. This stuff is hardly rocket science, but a lot has to happen before it becomes usable by the mainstream of developers.
      • Re:It doesn't (Score:3, Insightful)

        by Zathrus ( 232140 )
        Even this info is useless, because wsprintf is insecure only if it is used the wrong way

        Yes, but the point being that it's pretty damned easy to use it in the wrong way. More importantly, it's very likely that someone else will come behind you to patch the program and end up using it the wrong way. End result? Don't use wsprintf()... at the very least use wsnprintf() (or whatever the hell the equivalent of snprintf() is for wide character sets). I know, snprintf() isn't standard, but it's implemented on v
  • by Anonymous Coward on Friday August 01, 2003 @06:16AM (#6586463)
    It's called "file", and you can use it to recognize problematic/insecure binaries.

    $ file /usr/lib/jed/bin/w32/w32shell.exe
    /usr/lib/jed/bin/w32/w32shell.exe: MS Windows PE 32-bit Intel 80386 console executable not relocatable

    And voila!

  • Uh oh (Score:5, Funny)

    by beacher ( 82033 ) on Friday August 01, 2003 @06:18AM (#6586468) Homepage
    I just put my boss's Windows 2003 Server CD under a microscope to examine the binaries.. Started zooming in.. and then SNAP. The bitch cracked into 2. I'll put gentoo on the server now and just tell him that a security cracker broke his shit.
    -B
    • Reminds me of my first microscopy class at U. The Zeiss phase-contrast oil-immersion scopes cost the equivalent of over $20000 at today's prices and they gave them to 18 and 19 year olds (almost all male) to use. The only things that ever got broken were the 2c cover glasses. It made me appreciate German engineering.
  • by msgmonkey ( 599753 ) on Friday August 01, 2003 @06:19AM (#6586473)
    If this can be used to detect, for example, buffer overflows, then doesn't it also help speed up a cracker's turnaround rate?

    I mean, instead of trying to find flaws instruction by instruction with some debugger, simply specify all the EXEs and DLLs in your %winroot% directory, press start, wait for the report, and then manually inspect the highlighted areas.
    • by kinnell ( 607819 ) on Friday August 01, 2003 @07:10AM (#6586605)
      If this can be used to detect, for example, buffer overflows, then doesn't it also help speed up a cracker's turnaround rate?

      All the more reason for companies to buy this product - if crackers can find the bugs easily using this program, it's much more important that the developers do too.

    • I think this is a valid observation. [sarcasm]Let's ban it, and while we're at it, let's ban port scanners and encryption cracking software. Oh, that's right, we already did that![/sarcasm]

      Seriously, though, just about any useful security tool is also a useful cracker tool. This fact is not confined to the field of computers.

  • Rubbish... (Score:5, Informative)

    by MosesJones ( 55544 ) on Friday August 01, 2003 @06:19AM (#6586475) Homepage

    So this analyses binaries and will find all issues where the code will halt or will exceed its resource requests, thus eliminating the need for testing...

    I call Snake Oil.

    For those who don't know about the Halting Problem [wolfram.com] or Busy Beaver Problem [ucla.edu] then you should really know about what computers can or cannot do.

    I dare say these people have some basic pattern matching, but this is NOT a reason to stop testing.
    • Re:Rubbish... (Score:2, Insightful)

      by doofusclam ( 528746 )
      Damn right, I agree 100%

      This program might find potential buffer overruns, but it has no idea of context - most overruns come in the common interfaces between components rather than internally to an exe. Bear in mind too that a (Windows) exe usually spends most of its time in COM or operating system components anyway. I'd rather spend time manually checking the code which is executed 100000 times a second than be told of buffer overruns in something that probably never gets executed.
      • Re:Rubbish... (Score:3, Insightful)

        by kasperd ( 592156 )
        I'd rather spend time manually checking the code which is executed 100000 times a second than be told of buffer overruns in something that probably never gets executed.

        The number of times it gets executed is not the issue. If it is vulnerable, executing it once is enough for the cracker to take control. Even if it is never executed under normal circumstances, the cracker might be able to do something to get it executed.
        • The problem with this sort of thing is that it can't always know the full context of how the program is using some code - it just sees a section of code that has the potential for problems (like a strcpy), but may not see that everything being fed to it has already been length-checked beforehand. Unless the software can also confirm there's actually some way to feed the code something that could overflow it, I don't see how it can proclaim it a bug.
    • by sporty ( 27564 ) on Friday August 01, 2003 @07:49AM (#6586685) Homepage
      "Snake oil?" "Shenanigans" is more fun.
    • Re:Rubbish... (Score:3, Insightful)

      by Phillip2 ( 203612 )
      "For those who don't know about the Halting Problem or Busy Beaver Problem then you should really know about what computers can or cannot do."

      I'm not sure what the relevance is here, though. They are claiming that they can find security problems, not that they can guarantee to find all security problems.

      The halting problem does not mean that you can not write a program to identify other programs that will not halt. It just says you can not always do this.

      Phil


      • The relevance is that what they are doing is nothing more than saying "you called an API that MIGHT potentially cause an issue"; they do _nothing_ to determine the way the code works or how the code flows.

        Their claim that this is in any way a substitute for testing is laughable in the extreme. The point about the halting problem is that yes, you might identify definite causes of error, but you cannot identify even the majority of errors in a complex system. And that is when taking a proper approach.

        This is
    • It looks like what it does is similar to, but not quite as useful as, what Bounds Checker (now called DevPartner) does. That is debug code and runtime instrumentation added to your code. It then checks the parameters of every single library call you make and ensures you pass good values. It can detect things like passing a bad handle to a function, and memory leaks... all good stuff at development time.

      This product looks less useful because it will not process it in regard to user input, and user i

  • Ultimately, whether a program has a security or other kind of bug in it is a problem equivalent to Turing's halting problem, and we know that that is an NP-complete problem.

    Which isn't to say that this product is useless, it's entirely possible to have useful approximations or rules of thumb for checking programs out. Heck, that's how people mostly do it, and automating what people do is fair enough.

    • by Anonymous Coward on Friday August 01, 2003 @06:44AM (#6586535)
      The halting problem isn't NP-complete (that would be bad, but not that bad) but actually undecidable -- it can be proved that you can't solve it at all, in general.

      Which indeed does not mean that you can't make interesting inroads using a suitable tool that calls your attention to problematic areas in code.
  • by nietsch ( 112711 ) on Friday August 01, 2003 @06:24AM (#6586486) Homepage Journal
    from the faq:

    Q: Does BugScan make it easy for hackers to develop new attacks?

    A: No. The information BugScan gives optimizes a small part of the exploit development process, but it still requires a very skilled person to do the additional work to produce a working exploit. That being said, BugScan is used in HBGary's exploit development process, and some customers are using BugScan for similar purposes.

    Q: Does BugScan determine if a security coding error creates an exploitable vulnerability?

    A: No. While we are working to enable this kind of functionality in the product in response to customer demand, it is difficult to determine with any amount of certainty whether a problem detected is truly exploitable.


    So actually you will end up with a report that cannot tell you whether you are safe or not, and no way to change the application if you think you are not.

    Snake oil. Very good against any kind of bug, esp. security bugs, whatever those may be.
  • by blowdart ( 31458 ) on Friday August 01, 2003 @06:25AM (#6586489) Homepage

    Let's look at the quote on the web page [bugscaninc.com], shall we?

    "The alternatives are to laboriously test software or meticulously review source code line by line. But these options are so time consuming and expensive that few companies will do it." (emphasis added)

    So how exactly, as the article submitter says, will this "help end the debate of closed source or open source applications being more or less secure"? The product page already says that few companies have the time or money to check source code, and how many others do? Sure, it's great to have the source, but when you install Apache do you check every single line for buffer overflows? Of course not. You rely on others doing it, and you rely on others doing it correctly. That may well be a mistake; are you sure someone else will check every revision line by line?

    So, frankly, this product contributes nothing to the open or closed source arguments; it's simply a nice tool to automate some reviews.

    (As an aside, it appears that bugscaninc have made their choice over open and closed source:

    Server: Microsoft-IIS/5.0
    X-Powered-By: ASP.NET

  • by Dexter77 ( 442723 ) on Friday August 01, 2003 @06:29AM (#6586498)
    The webpage says "report is created for each program identifying the specific locations of potential security vulnerabilities"

    All programmers know that high-level languages create very large binary files. A small program that prints a few lines, written in Visual Basic, might take hundreds of kilobytes of space. Hundreds of kilobytes might mean even millions of lines of assembly code.

    Let's take an example. BugScan reports that there are bugs on lines 24.234, 93.234, 134.834, 342.234, 534.444, 767.835 and 822.511 out of 1.023.890 lines. BugScan might even report that those lines are from abcd.dll, efgh.dll, ijkl.dll and aaaa.dll. Do you now feel relieved? No, I didn't think so either. I mean that BugScan might be very useful on low-level languages, but when there are ten layers of different libraries between your code and the machine code, I bet the usefulness is not that high.
    • by BenjyD ( 316700 ) on Friday August 01, 2003 @06:38AM (#6586521)
      Exactly what I thought. I imagine things like inlining and other compiler optimisations might confuse things further.

      From looking at the report generated on Trillian (in the whitepaper on the site), most of what it seems to do is check for bad function calls (sprintf etc). I'm not sure who their target market is - not developers, as they can use automatic auditing tools on the source which would tell them more useful information.
    • by lokedhs ( 672255 )

      All programmers know that high-level languages create very large binary files. A small program that prints a few lines, written in Visual Basic, might take hundreds of kilobytes of space. Hundreds of kilobytes might mean even millions of lines of assembly code.

      Really?

      Most assembly language representations use one instruction per line.

      So, in order to get a million lines of code out of a 100 KB program, you'd need a CPU which has instructions less than one bit in size(!).

      There are other flaws in your r

  • no... (Score:4, Insightful)

    by Anonymous Coward on Friday August 01, 2003 @06:38AM (#6586520)
    "This product should help end the debate of closed source or open source applications being more or less secure"

    how so? who's to say *this* tool is an official measure of security? its *a* measure. and how would you actually do the comparison? that statement just doesn't make sense.
    • Sure it makes sense. If you analyse the open-source code and it comes up secure, and the closed-source comes up insecure, then you may have not quite proven, but you have at least bolstered, the assertion often made by the open-source lobby that open-source code is more secure.

      Of course, it also could come up the other way, thus giving closed-source advocates more fuel.

    • It doesn't matter, as long as it's consistent. The assumptions are 1) open and closed source are comparable, 2) the ratio of bugs findable by BugScan to total bugs is the same for all programs. Given that these are true (not an entirely unreasonable assumption), then even if BugScan only finds 1% of the bugs in a program, when you run it on open source and find that there are 100 bugs per megabyte, and on closed source and find that there are 200 bugs per megabyte, then you have some evidence that open source
  • by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Friday August 01, 2003 @06:49AM (#6586545) Journal
    but complex ones? I imagine what this software does is it scans the binary for things like instances of strcpy calls instead of explicit strncpy calls. Given that the software is likely not executed, how would it be able to catch more complex bugs? How can it find all instances of user interaction which could modify a variable where that variable is used as a parameter in strncpy for example?

    Dollars to do[ugh]nuts says that even with a program that gets a clean bill of health, there are still countless bugs undiscovered.
    • by Anonymous Coward
      Check out valgrind sometime. Now expand on that context.

      For every byte in the application (code segment, data segment, stack segment, heap) keep a record with minimum, maximum, likely values etc. Do this for every assembler instruction. (heap will need special treatment). Now trace every possible execution flow, and check if any of the values lead to "strange" behavior.

      You can get to the point where you can do full dataflow analysis. You start with a variable that is initialized at some point, then anothe
  • Compressed executables, a la UPX [sourceforge.net].
  • by Anonymous Coward on Friday August 01, 2003 @06:55AM (#6586567)
    A friend asked me to help her install an operating system on her brand spanking new PC. I have installed many operating systems - Debian, Slackware, Mandrake and Red Hat among them - and thought I knew a bit about the process. Boy was I in for a surprise!

    The OS she wanted me to install was Windows XP Home Edition. I have never bothered with Microsoft software in the past, not since Bill Gates got all pissy-arsey about people making copies of "his" BASIC interpreter at the Homebrew Computer Club. Grow up guys! You liken your ideas to your babies, but babies eventually grow up, leave home and learn to survive without you! Well, Gates was basically saying that if people didn't pay for their software, programmers would go out of business because nobody would want to create software unless they got paid for it. Right ..... 'cause that's exactly what Richard Stallman and Linus Torvalds got famous for doing!

    So I have never bothered with MS stuff, never having felt the need. But I figured, it could not be too difficult to install it, could it?

    Windows XP comes on just one CD. The first installation attempt sort of worked, but it was a bit flakey and a bit slow. And the desktop is just downright annoying - both in terms of colour scheme and general UI. It's a bit like KDE, but not quite. Only one desktop, for crying out loud! And it's slow and crash-prone. Just like Mandrake, where you get a really bloaty stock kernel {drivers for god knows what compiled into it just in case anybody ever needs them}. So I figured, the first thing we should do was maybe recompile the kernel. Never recompiled a kernel in Windows, never even run the damn thing. Never even likely to now.

    Could we find the Kernel Configurator? Could we hell! And the command prompt was useless. It seems to be based on the old DOS command line. And it doesn't understand make menuconfig.

    The kernel configurator was not the only thing we could not find. There didn't seem to be any Packages either. You know, stuff like KWord, KSpread and Kate. MySQL, Apache and a scripting language like PHP, Perl or Python. And some simple games. Just the basics. There is something called Internet Explorer, which is a bit like a cut-down Konqueror, but it's nasty to use.

    So I'm guessing that the missing configurator probably is part of a Kernel Source Devel package which is not installed by default. In fact, almost no packages seem to be installed by default. And there are no .rpm, .deb or .tar.gz files on the CD. I've analysed it thoroughly and I found no sign.


    In the end, I installed Slackware 9 and configured it to look as much like the Live CD as I could manage, but obviously not running everything as root. I can only suppose those missing packages are on another CD which we weren't sent for some reason or another. I mean, she has paid good money for the software, so she is entitled to get it! And the source code. Especially the source code! After all, if we can't check out that source, we have no way to be sure what we're running. It could be sending every keystroke to Microsoft, for all we know!

    Anyway, my friend is well chuffed with Slack so I suggested to take the XP CD back to the shop and get a refund. But of course, that might be difficult seeing as she doesn't seem to have the full set. We'll keep you posted as this story develops.
  • Open Source rules (Score:3, Insightful)

    by gonvaled ( 584635 ) on Friday August 01, 2003 @06:59AM (#6586580) Journal
    Security problems are often interoperation issues. You can make sure a program is bug-free, but this will not guarantee that your program is not going to fail if the rest of the pieces are not functioning properly. To analyze the interconnections, Open Source is required.
  • by Temporal ( 96070 ) on Friday August 01, 2003 @07:04AM (#6586592) Journal
    What does this have to do with open source vs. closed? Sure, in theory, every single person who downloads an open source program will review the code themselves to make sure there are no buffer overruns. If they find any, they will of course report them back to the maintainer, who will then fix the bug.

    In practice, this doesn't really happen.

    As an open source developer, I can assure you that very few people are interested in reviewing other people's code for free. I'm sure the bigger projects, like Apache and Linux, manage to get a good amount of code review -- but then, big closed source projects usually do ample code review, too. As for little open source projects, like the ones I run, you're lucky if people even take a peek at the source. Really, no one is interested. I do not believe that open source projects are any more (or any less) likely to have security issues than typical closed source ones (Microsoft aside).

    As long as people are using C, there will always be buffer overruns. C is just that kind of language -- it makes it so amazingly difficult to do simple things (like allocate space for a character string) that programmers naturally take shortcuts (giving the string a static length) without taking the proper precautions (bounds checking). We can't make programmers not be lazy, so the only real solution is to move on to a better language.
    • What does this have to do with open source vs. closed? Sure, in theory, every single person who downloads an open source program will review the code themselves to make sure there are no buffer overruns. If they find any, they will of course report them back to the maintainer, who will then fix the bug.

      In practice, this doesn't really happen.


      But it's not required to happen for OSS to work. Somebody who "downloads and installs and that's it" factors in as a zero into the matrix of development efforts.

      The
      • The point of OSS is that *more* people actually *do* look at the source than with a closed environment, not that *everybody* does.

        That is NOT TRUE. That's exactly my point. Most software companies do code reviews, regression testing, etc. Most open source projects are something whipped up by one or two guys in their spare time. The only testing usually done is actual usage testing, and it's not normally very extensive. Most open source programs are NOT reviewed by anyone other than the author. Again, so
  • by aking137 ( 266199 ) on Friday August 01, 2003 @07:21AM (#6586625)
    I realise that this particular software may not actually decompile or disassemble anything, but this presents a very good reason for making reverse engineering of any software legal in any country: if I'm not allowed to make my own private analysis of a piece of proprietary software out there, how am I to know what it's going to do to my computer? How can I know that it isn't going to take liberties and do damage (such as installing backdoors) on my systems?

    To be fair, many software packages I see for Windows machines these days do take advantage of this fact, such as by giving users adverts, invading their privacy, and withholding information from them about what their computer is doing. (One example is Freeserve, a UK ISP: some of their dialling software refuses to tell you what numbers your computer is dialling out to. This can be got round, but it's the principle of the thing...). For the past few years, I've refused to run any software on my desktop machine where source code is not made available, for that reason. If they are not prepared to reveal to me what they're going to do to my computer, then I'm not prepared to run their software.

    Here's another question: if I have a copy of this software on a machine in a country where reverse engineering is allowed, but then I shell in to that machine (via ssh, vnc, or some other means which will allow me to control that machine remotely) from a country where reverse engineering is not allowed, and then carry out the reverse engineering over that link, is that illegal?
  • by arcanumas ( 646807 ) on Friday August 01, 2003 @07:25AM (#6586632) Homepage
    Can you imagine running it on itself?

    # bugscan bugscan
    Segmentation Fault

    Hehe

  • Uhm well. Nice yea. But where's the article?
  • by RenHoek ( 101570 ) on Friday August 01, 2003 @07:30AM (#6586645) Homepage
    Does anybody have any idea how many binaries are protected nowadays, with encryption, obfuscation and/or compression?

    If a program uses any kind of serial entry, CD check or other kind of 'protection' scheme, you can be sure the makers have run an obfuscation program like 'PEcrypt' on it.

    Even then, I don't see this program unpacking otherwise unprotected executables that have merely been compressed with UPX or one of the dozens of other PE compressors.

    Simply put, this program will have VERY limited uses for normal consumers. The only one who could use it would be the firm that made the program in the first place, before obfuscation/protection/compression was applied -- but why would they? They have the source code. A source-code checking program would be MUCH more effective.
  • by jcochran ( 309950 ) on Friday August 01, 2003 @07:38AM (#6586661)
    I suspect that this product will flag a lot of false positives. After reading the white paper, I believe that the following code would be considered "insecure."

    #include <stdlib.h>
    #include <string.h>

    char *duplicate(const char *input)
    {
        size_t len;
        char *out;

        out = NULL;
        if (input != NULL) {
            len = strlen(input);
            out = malloc(len + 1);
            if (out != NULL) {
                strcpy(out, input);
            }
        }
        return out;
    }

    Note the use of the "evil" function strcpy().
  • by LordDartan ( 8373 ) on Friday August 01, 2003 @07:53AM (#6586693)
    Once before, while working at a client site, I was installing a 3rd party application. In setting it up and looking for any security holes, I found a pretty large one. The client application talks to an MSSQL server using a single account (which happens to have dbo access). Not only did it use a single account for everyone, but the username and password were stored as cleartext in the executable itself! Now granted, it's not likely that an end user would look there to find this information, but if someone did, and the client happened to learn that someone had breached security, the only way to block the intrusion was to shut down the entire system. With the username and password hard coded into the executable, there was no way to change them without having the vendor make the change and send out a new executable.

    Just goes to prove that MS programmers are a dime a dozen, but most of them are worth that too!
    • by DrSkwid ( 118965 )
      With the username and password hard coded into the executable, there was no way to change it witout having the vendor make the change and send out a new executable.

      If it was in cleartext, couldn't you just edit the executable? So long as the new username/password was the same length, you'd be set.
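      For what it's worth, the same-length edit the parent suggests can be sketched in a few lines of C. (The helper below and the example credential in the usage are made up for illustration; a real binary might store the string more than once, so you'd want to check for that before writing the file back.)

```c
#include <string.h>

/* Overwrite the first occurrence of `old` in a byte buffer with `new`.
   The replacement must be exactly the same length, as noted above.
   Returns 0 on success, -1 if the lengths differ or `old` is absent. */
int patch_in_place(unsigned char *buf, size_t len,
                   const char *old, const char *new) {
    size_t n = strlen(old);
    if (strlen(new) != n)
        return -1;
    for (size_t i = 0; i + n <= len; i++) {
        if (memcmp(buf + i, old, n) == 0) {
            memcpy(buf + i, new, n);
            return 0;
        }
    }
    return -1;                  /* old string not found */
}
```

      Load the executable into memory, run this over it, and write it back out; since nothing moves, no offsets or checksums inside the file change (unless the vendor added an integrity check, of course).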

  • Scam (Score:5, Interesting)

    by roady ( 30728 ) on Friday August 01, 2003 @08:01AM (#6586709)
    Reverse engineer Halvar Flake called BugScan a scam at his BlackHat Amsterdam course.

    It is just a bunch of simple IDA Pro [datarescue.com] plugins, and it will give you a false sense of security.

    Halvar has published his own open source version, called BugScam, on sourceforge [sourceforge.net]

  • 1. Many false positives, as apparently insecure constructs are totally secure given knowledge the programmer has about the source of inputs. E.g. a static buffer may appear prone to overflows, but maybe it's copying data with a known fixed size.

    2. Many missed positives (false negatives) that depend on external factors: security settings, file visibility, encoding algorithms, etc.

    My guess is that the false positive issue will make the approach unusable for any real software. If the developers can fine-tune that, the tool
  • that sounds misleading. The white paper states that "for example, using strncpy" is a good security practice --

    even though strncpy and strncat are actually used incorrectly MUCH MORE OFTEN than strcpy.

    Let me explain. People that use strcpy tend to use malloc()ed memory because they
    know how it works, and that they have to supply a buffer of a certain size before they copy into it.

    However, almost nobody knows how strncpy works. (As for strncat, I don't recall ever seeing it used correctly.)

    I wouldn't call that "safe", I see m
    • I have seen strncpy() abused so many times it makes me sick. Like you, I prefer strlcpy(), although I have no idea about the politics behind its adoption in GNU -- I just link in -lgen (the xpg4 lib) under Solaris and code away. I usually have -lgen linked anyhow for strecpy() and such. The Apache Runtime (apr.apache.org) has apr_cpystrn() which is fine, too.

      I have actually seen code like this during a code review:

      strncpy(a, b, strlen(b));

      What the hell is the point of that?

      {
      char *a = malloc(10);
      st
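      For anyone wondering what "knowing how strncpy works" means in practice: strncpy(dst, src, n) stops after n bytes but does NOT null-terminate dst when src is n bytes or longer, which is the usual trap. A sketch of the common correct idiom (the helper name is made up; this is essentially what strlcpy gives you directly):

```c
#include <string.h>

/* Bound the copy by the destination size and terminate by hand,
   because strncpy() leaves dst unterminated when it runs out of room. */
void bounded_copy(char *dst, size_t dstsize, const char *src) {
    strncpy(dst, src, dstsize - 1);
    dst[dstsize - 1] = '\0';
}
```

      Note that the bound comes from the destination, not strlen(b) of the source, which is exactly what the strncpy(a, b, strlen(b)) call above gets wrong: it bounds nothing and still doesn't terminate.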
  • This product should help end the debate of closed source or open source applications being more or less secure.

    The feet of man who uses hypotheticals may no longer be aground.

    Never argue with a drunkard, a woman, or a fool.

    Proof by analogy is fraud [temple.edu].

  • So the plan is to automatically find possible security problems in assembler code, even though such a process is undecidable in general. Hence the program either doesn't find every security problem or flags ones that aren't actually problems. Is this really a good way to judge software?

    I mean, I can write a program that scans executables and tells ya which ones are good and which ones suck. It won't actually mean anything though.
  • by edp ( 171151 ) on Friday August 01, 2003 @09:20AM (#6587186) Homepage
    There's no need to pay for expensive software to detect bugs. I used to have a freeware bug detector. You would drop an executable on it, and it would display a message indicating whether or not there was a bug in the executable.

    As near as I could tell, for almost any executable you gave it, it reported there was a bug. The exception was that if you dropped its own executable on it (even a renamed copy), it reported no bug. That seems pretty accurate to me.

  • The Golden Rule... (Score:3, Interesting)

    by kris ( 824 ) <kris-slashdot@koehntopp.de> on Friday August 01, 2003 @10:36AM (#6587875) Homepage
    The Golden Rule Of Programming:

    Never check for an error condition you don't know how to handle.

    I mean, what use is this? If you do not have the source, you may use this tool to check for potential security vulnerabilities. The result will leave you with a binary which you cannot change because you don't have the source, and with a list of potential vulnerabilities which you can't validate without a great deal more work to create working exploits. Failure to produce an exploit does not prove that there is no vulnerability, though.

    And if you happen to have the source, what use is this tool? There are better tools to find this class of errors on source level.

    Kristian
  • by dwheeler ( 321049 ) on Friday August 01, 2003 @12:51PM (#6589248) Homepage Journal
    If you HAVE the source code, use a source code analyzer like my flawfinder [dwheeler.com] tool (or Viega's RATS tool). Source code analyzers can immediately identify where the problem is, and several are freely available. And as has been noted elsewhere, the problem with binary analyzers is that they may show where some possible problems are, but it's very difficult to actually FIX the binary without the source code. That doesn't mean this is a useless product; if nothing else, if you're planning to use a proprietary program, a tool like this one might help you begin to understand your risks.
