Security IT

Fuzzing Toolkit For Web Server Testing 47

prostoalex writes "Dr. Dobb's Journal runs an article discussing the tools necessary for fuzzing (testing a system by generating random input in order to cause program failures or crashes). Quoting: 'You are fuzzing a Web server's capability to handle malformed POST data and discover a potentially exploitable memory corruption condition when the 50th test case you send crashes the service. You restart the Web daemon and retransmit your last malicious payload, but nothing happens... The issue must rely on some combination of inputs. Perhaps an earlier packet put the Web server in a state that later allowed the 50th test to trigger the memory corruption. We can't tell without further analysis and we can't narrow the possibilities down without the capability of replaying the entire test set in a methodical fashion.'"
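A minimal sketch of the record-as-you-go idea the quote is driving at, in Python; the target host, path, mutation routine, and log file name are made-up placeholders, not anything from the article or its toolkit:

    import http.client
    import pickle
    import random

    HOST = "target.example"      # hypothetical target web server
    SEED = 1234567               # fixed seed so the run itself is reproducible
    random.seed(SEED)

    def mutate(template: bytes) -> bytes:
        """Toy mutation: flip a handful of random bytes in a template POST body."""
        data = bytearray(template)
        for _ in range(random.randint(1, 8)):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    template = b"user=alice&comment=hello"
    cases = []
    for i in range(1000):
        payload = mutate(template)
        cases.append(payload)                    # keep every case exactly as sent
        try:
            conn = http.client.HTTPConnection(HOST, timeout=5)
            conn.request("POST", "/submit", body=payload,
                         headers={"Content-Type": "application/x-www-form-urlencoded"})
            conn.getresponse().read()
            conn.close()
        except (OSError, http.client.HTTPException):
            print("case %d got no answer -- possible crash" % i)
            break

    with open("testcases.pkl", "wb") as f:       # the whole set can now be replayed
        pickle.dump(cases, f)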
This discussion has been archived. No new comments can be posted.

  • This is like using a bump key - hit a lock with random impacts and it opens. Spew enough garbage at a program and it will probably die. Eat enough food you find on the ground and you will probably get sick. Other than getting the +1 Obvious award, what's the point?
    • Re:Bump Key? (Score:5, Informative)

      by caffeinemessiah ( 918089 ) on Saturday June 30, 2007 @01:28PM (#19700421) Journal
      Spew enough garbage at a program and it will probably die.

      But if you can spew garbage at a program and make it die during development, perhaps you can figure out what exactly made it die and fix it. You get the +1 Obvious award, not fuzz testing.

      • I'm no web dev, but it would appear that the primary failure point is in the parser itself (for processing POST data). A well tested robust parser under attack should simply respond, "fuzz you!".
        • I'm no web dev, but it would appear that the primary failure point is in the parser itself (for processing POST data). A well tested robust parser under attack should simply respond, "fuzz you!".

          I'm not a web dev either, but I'm guessing that part of the reason for this is to take a less-tested, less-robust parser (or whatever) and eventually make it into one that is better tested and more robust. I mean, we do this with most code (or at least we ought to). So the devs can learn how to make their program say "fuzz you!" better :)

      • Re: (Score:3, Interesting)

        by Bender0x7D1 ( 536254 )

        You could also create a robust test suite that would try to break the program in an intelligent and repeatable way. Can you do far more random tests than well-thought-out tests? Yes. However, random tests (even a lot of them) don't guarantee that you have good code coverage.

        Even better, you take time to make your parser better at error handling. It can take a lot longer but is probably worth it in the long run. It won't eliminate the need for testing, but thinking through all the things that can go wrong is never a waste of time.

        • Even better, you take time to make your parser better at error handling. It can take a lot longer but is probably worth it in the long run. It won't eliminate the need for testing, but thinking through all the things that can go wrong is never a waste of time.

          This should be -1 Obvious, but the sad fact is that many, many programs out there don't properly or sufficiently validate their input.
          Even input from a "trusted" source, such as a DB "owned" by the program, should be checked.
          There may be cases where
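          A minimal sketch of the kind of defensive checking that comment is asking for, applied to a POST body before the application uses it; the field names and size limits are invented for illustration:

            # Validate POST fields before use, even if the data came from a
            # source the application nominally "owns".
            from urllib.parse import parse_qs

            MAX_BODY = 64 * 1024                 # arbitrary illustrative limit
            ALLOWED_FIELDS = {"user", "comment"}

            def parse_post(body: bytes) -> dict:
                if len(body) > MAX_BODY:
                    raise ValueError("body too large")
                fields = parse_qs(body.decode("ascii", errors="strict"),
                                  strict_parsing=True)
                for name, values in fields.items():
                    if name not in ALLOWED_FIELDS:
                        raise ValueError("unexpected field %r" % name)
                    if any(len(v) > 1024 for v in values):
                        raise ValueError("field %r too long" % name)
                return fields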

    • Re: (Score:1, Offtopic)

      I didn't know my manager had a Slashdot account! Honest, I don't post here while I am on duty, boss!
    • This actually reminds me of the old hack to crash NT 3.x, "telnet host 19 | telnet host 53", which would take output from the character-generator (chargen) port and pipe it to the DNS port, which would then crash the DNS server and maybe the host itself. I remember making a friend scream in horror as I did this to his machine across his LAN, where he thought it could only be done locally.
  • It's not 100%, but if your random number generator (pseudorandom, not truly random) started with a known seed, you might be able to recreate the event.

    • Re: (Score:3, Insightful)

      by FooAtWFU ( 699187 )
      Or you could just record it all. The timing might never be exactly the same twice, but if you can just record everything you sent, and then send it again, that's a big improvement.

      I'm sure the article talks about this in spades. If only we were to read it.
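      Recording and resending is only a few lines once the cases are saved; this sketch assumes they were logged the way the earlier sketch does (a pickled list of POST bodies) and replays them in order, optionally only the first N, to narrow down which combination triggers the crash:

        import http.client
        import pickle
        import sys

        HOST = "target.example"                  # hypothetical target
        limit = int(sys.argv[1]) if len(sys.argv) > 1 else None

        with open("testcases.pkl", "rb") as f:
            cases = pickle.load(f)

        for i, payload in enumerate(cases[:limit]):
            try:
                conn = http.client.HTTPConnection(HOST, timeout=5)
                conn.request("POST", "/submit", body=payload,
                             headers={"Content-Type": "application/x-www-form-urlencoded"})
                conn.getresponse().read()
                conn.close()
            except (OSError, http.client.HTTPException):
                print("server stopped answering after case %d" % i)
                break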

  • The concept seems sound: sending random input to your web service in a repeatable fashion. But are the tools intuitive? Do they detect the different systems you are running (e.g., an Internet posting board)? Do they keep track of recent exploits in the wild?
  • by Anonymous Coward
    Any program that does this sort of testing should use a good pseudorandom number generator with a very large period and a manually specifiable seed. If it logs where it is in the sequence, it makes it easy to repeat a series of tests. Good generators are easy to build - use a big linear feedback shift register (LFSR) and SHA or MD5 hash the output.

    Too bad cmpnet can't host sites. I never made it past all their interstitial ad and popup junk.
    • There's more to it than that to make tests repeatable. Any interesting system will have state beyond that represented by the fuzzed inputs - things like I/O timings will still change between runs even with the same random seed. Something as simple as a disk seek time varying between runs might make the difference between a test failing and passing.
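      The seed-logging idea is easy to sketch; this uses Python's built-in generator rather than the LFSR-plus-hash construction the AC suggests, and regenerates any case from just (seed, index) -- though, as the reply points out, it does nothing about state the server itself accumulates between runs:

        import random

        SEED = 0xC0FFEE                          # log this once per fuzzing run

        def case(index: int, template: bytes = b"user=alice&comment=hello") -> bytes:
            """Regenerate fuzz case number `index` deterministically from the seed."""
            rng = random.Random("%d:%d" % (SEED, index))   # independent stream per case
            data = bytearray(template)
            for _ in range(rng.randint(1, 8)):
                data[rng.randrange(len(data))] = rng.randrange(256)
            return bytes(data)

        # A crash log that records only (seed, index) is enough to get case 50 back:
        print(case(50))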
  • Can't tell (Score:3, Insightful)

    by arth1 ( 260657 ) on Saturday June 30, 2007 @03:01PM (#19700957) Homepage Journal
    The article blurb says:

    We can't tell without further analysis and we can't narrow the possibilities down without the capability of replaying the entire test set in a methodical fashion.

    Yes, you can tell by experts eyeballing the code. Granted, this might be far more work than automated testing, but it's not like testing is the only way to isolate bugs.
  • replaying set? (Score:3, Insightful)

    by 192939495969798999 ( 58312 ) <[info] [at] [devinmoore.com]> on Saturday June 30, 2007 @03:06PM (#19700987) Homepage Journal
    Isn't the answer in the summary: you obviously have it record the input as it goes, so you can literally back up and repeat any given random scenario? Without this capability, it would be like having a 3-year-old bash away at the keyboard; they're just as unable to repeat anything.
  • A web application should be cleaning up after each transaction to arrive at a safe state; why wouldn't a web server do the same? If it doesn't, it should.
    • A web application should be cleaning up after each transaction to arrive at a safe state; why wouldn't a web server do the same? If it doesn't, it should.

      Yeah, and people should always wear their seatbelts, never drink and drive, and floss every time they brush their teeth. Yet the majority of humans fail to abide by these three simple "should haves".

      Maybe you should write a howto covering how you think a web server should behave, and then proceed to write your own implementation from scratch (for my network pro

  • Is it just me, or does it seem obvious that you could just dump the stream, then use tcpreplay to send the stream back and analyze the packets it is sending to the web server? Pick intervals and ranges of packets, send them, and see what the web server does. That seems like a pretty straightforward way to narrow down what is going on.
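    A rough sketch of that workflow with scapy; the capture file name, packet range, and interface are placeholders, and naively replaying captured TCP packets will not re-establish live sessions, so in practice the extracted payloads usually need to be resent over fresh connections:

      # Pull a window of packets out of a capture of the fuzzing run so that
      # just that slice can be inspected or replayed.
      from scapy.all import rdpcap, wrpcap   # requires the scapy package

      packets = rdpcap("fuzz-run.pcap")      # capture taken with e.g. tcpdump -w
      window = packets[40:51]                # the cases leading up to the crash
      wrpcap("window.pcap", window)          # then: tcpreplay -i eth0 window.pcap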
  • The thing that irritates me most is the sites that have 5,478 *different* links. Even on broadband, they take tens of seconds, sometimes over a minute, to load. I'd like one *standard* test to be that they try surfing, like 50% of the US public does, on dialup.

    I won't even start on the idiots who have no compression on their cameras, and put JPEGs up on their websites that are over a meg....

                mark
  • I do what to servers?....
