Security

Fingerprinting Port 80 Attacks

pg writes "I found an interesting article on www.cgisecurity.com that explains common fingerprints in web server and web application attacks. It goes on to describe how to detect most known and unknown attacks. This may come in handy when trying to detect another internet worm."
  • It makes me angry (Score:2, Interesting)

    by gmplague ( 412185 )
    It makes me angry that everyone decides to beef up security and write analytical articles about how to maintain security AFTER THE FACT. The problem is that although this seems like a good idea now, fingerprinting probably won't help with the next series of attacks, because they will be different in nature.
    • This isn't about improving security after the fact. It's about implementing IDS rules based on identifiable attack characteristics so you can build some useful filters, about checking your logs, if you want to, in order to determine what may have been an attack and what was really innocuous, and so on.

      It's not at all about the security of the server itself.
      • My point was, we're improving the IDS rules and filters after the fact, rendering them useless against most new worms. Improving IDS rules and building better filters DOES improve the security of the server (that's what Intrusion Detection Systems are for; most IDSs I know don't JUST detect an intrusion, but also kick out possible intruders). And although checking your logs is good to see if you were the intended target of the attack, if you're even a half-assed sysadmin you should be able to tell if you've been infected with a worm. My point was that it does very little to help against new worms, because, contrary to popular belief, the people who write these worms are pretty clever.
        • As far as I can tell, popular belief does say that worm and virus programmers are "fiendishly clever hackers", and it's dead wrong. Most of them are amateurs and know barely enough to glue some long-known exploit together with a bit of propagation code.


          And it's the same with "manual" intrusions: the biggest problem is not the very few competent black hats, it's the hordes of script kiddies, and those mostly use older attacks and try out every exploit they know, one after another.


          Besides, a main point of fingerprinting is attempting to find common elements that will be present even in currently unknown attack forms.

    • They will be different in nature, but they will use the same commands to gain root access to the box. Read the part where he describes the common commands an attacker may execute, or the common files the attacker will request. How the files or commands are accessed may change, but the files and commands will (for the most part) stay the same.
      • I completely disagree on this one. There are tons of ways to get root on any given system, and all different kinds of command sequences... and if you've ever heard of a polymorphic virus, it may utilise many of these different ways to obtain root on a system and randomize the order in which they are used. That's just one of the many things these worm writers can think of.
    • AFTER THE FACT.

      Ok, sooooo, maybe we can enlist the aid of The Great Carnac? I'm not certain the writers of worms and perpetrators of DoS attacks are leaving their plans, hermetically sealed, on Funk & Wagnalls' front porch.

      Part of fighting an attack is certainly building a more attack-resistant mechanism, but keep in mind that the ingenuity of the perpetrators is eventually their undoing: attacks will have to become more and more sophisticated (except where gaping holes, like those in a certain monopoly's products, are left in through lack of ordinary foresight) as many aspects of the internet, as well as operating systems and applications, get stronger.

        • Ok, first off, I really do think that we need to catalog these worms, but it doesn't seem useful to me to have 50,000+ slashdot readers cataloging the same vulnerability. Second, the best way to secure a system is to write secure code. Although many of these attacks are original and new, most of them could be avoided if programmers just learned some secure coding practices (e.g. not using strcpy(); I swear, that must account for 90% of root compromises). Also, maybe sysadmins should pay a little more attention to keeping their systems secure. Set up your system securely to begin with, and then it's not that hard to check securityfocus.com once a day to see if there are new vulnerabilities in the daemons that you run. If there are, install the patch. It's that simple, really.
        • The worst thing, IMHO, is that large projects create large vulnerabilities. For example, a.c is secure, b.c is secure, but a and b together are not secure. Spread that across any large project and you'll get more flaws than you can shake a 9/16" debugging tool at.
  • What irony! (Score:5, Funny)

    by swordboy ( 472941 ) on Monday November 05, 2001 @05:53PM (#2524760) Journal
    I'm sure that the server that the article is posted on is getting a nice "attack" on port 80 right now!
  • isn't the usual fingerprint of an attack that your web server is down, your traffic explodes, and everyone you know sends you their documents to ask your advice?

    I don't see much more room for advanced technology there.
  • Will it ever be able to stop DDoS attacks? You can of course write software that checks for packets that share some common pattern, but then this software is going to consume resources, and what happens on a distributed site like amazon, with its huge load?
    • DDoS attacks are very hard to avoid because they're not "real attacks". It's a sort of vandalism - you can't break in like a hacker, so just

      while true; do curl -s -o /dev/null http://victim.example/submit; done   # press the 'submit' button, forever

      Can they stand it? No? Well... I'm so mighty! YES!
  • One thing missed (Score:5, Insightful)

    by 13013dobbs ( 113910 ) on Monday November 05, 2001 @05:55PM (#2524776) Homepage
    formmail script exploits. Due to port 25 blocking, spammers are looking for exploitable formmail scripts to send their spam through. I guess the author just wanted to talk about root exploits, but there are other ways to abuse a web server.
    • Yeah, I was looking for that specifically, because just this afternoon I saw a whole metric buttload of these:

      152.163.160.44 - - [05/Nov/2001:14:50:00 -0500] "GET /cgi-bin/FormMail.pl?email=&recipient=tester@aol.net&subject=P80+24.161.81.172+7 HTTP/1.0" 404 279 "-" "-"
      152.163.160.44 - - [05/Nov/2001:14:50:00 -0500] "GET /cgi-bin/formmail/FormMail.pl?email=&recipient=tester@aol.net&subject=P80+24.161.81.172+11 HTTP/1.0" 404 288 "-" "-"
      152.163.160.44 - - [05/Nov/2001:14:50:00 -0500] "GET /cgi/FormMail.pl?email=&recipient=tester@aol.net&subject=P80+24.161.81.172+19 HTTP/1.0" 404 275 "-" "-"
      152.163.160.44 - - [05/Nov/2001:14:50:00 -0500] "GET /cgi-bin/formmail.pl?email=&recipient=tester@aol.net&subject=P80+24.161.81.172+35 HTTP/1.0" 404 279 "-" "-"
      152.163.160.44 - - [05/Nov/2001:14:50:00 -0500] "GET /cgi-sys/formmail.pl?email=&recipient=tester@aol.net&subject=P80+24.161.81.172+67 HTTP/1.0" 404 279 "-" "-"
      152.163.160.44 - - [05/Nov/2001:14:50:00 -0500] "GET /cgi-sys/FormMail.pl?email=&recipient=tester@aol.net&subject=P80+24.161.81.172+131 HTTP/1.0" 404 279 "-" "-"
      • I get those all the time too. It is originating from AOL. WTF is this?
        • Re:One thing missed (Score:3, Informative)

          by ptomblin ( 1378 )
          It's a spammer or a mail bomber looking for form-mail scripts that he can hijack to send his millions of email messages through and make it hard to catch him or block mail from him. They used to rely on finding open mail relays, but except for a few thousand in China and Korea, there aren't that many around any more (and anybody who doesn't want to get spam just blocks everything from sites in China or Korea). So they've altered their tactics.
          • True. Another thing that is spurring this is the port 25 filters that most nationwide (and worldwide) ISPs have enabled. So, to get their pseudo-anonymous mail out, they need to use services on other ports.
      • Oh yeah...

        Mine are from the exact same address. Is this an AOL proxy used by AOL users, or can I safely firewall that address to deny access?

        • All legitimate traffic from AOL appears to come from addresses that reverse-lookup to foo.proxy.aol.com. This guy's doesn't.

          For instance in my current logs, the only legitimate traffic from AOL addresses comes from
          spider-mtc-tg014.proxy.aol.com and
          spider-mtc-tk043.proxy.aol.com and
          spider-mtc-tb054.proxy.aol.com.
            • I take that partly back. Looking through my logs again, I see what appears to be legitimate traffic (i.e. to existing web pages) from AOL IPs like:
            ACA19CF3.ipt.aol.com and
            AC8D6E32.ipt.aol.com

            I believe these are people who aren't using the default AOL browser, though.
  • incomplete document (Score:4, Informative)

    by Angry Black Man ( 533969 ) on Monday November 05, 2001 @05:56PM (#2524788) Homepage
    That article doesn't cover too many port 80 exploits. It does cover the most common attacks, but if you want some more information, here is a more complete guide [insecure.org]. There are also a lot of language translations of it at the top, if you're not the most fluent in English.

    Remember, these documents are written to help server administrators get an idea of what to look out for, not to solve every single port 80 problem out there.
    • by mwalker ( 66677 )
      I hate to rain on your parade, but I believe that while the linked information is Informative, it is not quite On-Topic. The article in question talks about how to fingerprint different exploit strings launched at web servers at the application layer on port 80. The document you linked discusses how to fingerprint the TCP stacks of varying operating systems based on low-level details gleaned from the network stack, including timing details, TCP sequence numbers, etc. Specifically, it describes how Fyodor's excellent nmap utility fingerprints an operating system by TCP stack. You may note that you must be "root" to use this capability, because you must sniff the raw TCP stream in order to do this. Fingerprinting port 80 exploit strings just requires you to read the http logs...

      While fingerprinting an OS is certainly a useful thing, we shouldn't confuse it with a fingerprinting and profiling effort aimed at categorizing and identifying buffer overruns and similar exploits aimed at web servers. Automated run-time detection of these attempts can lead to faster detection and elimination of threats. In addition, this is a passive measure, whereas nmap is an active measure.
    • That article doesn't cover too many port 80 exploits.

      Nor does yours.

      This discussion is about fingerprinting exploits. The article you reference discusses fingerprinting servers. Big difference.
  • by Slipped_Disk ( 532132 ) on Monday November 05, 2001 @05:56PM (#2524789) Homepage Journal
    I think there is some value to this article for new admins - it highlights most of the common things you will see in your log files if someone is poking at your site.

    By the same token, most well-written CGIs will block these sorts of attacks (and hopefully if you are writing CGIs you will have enough knowledge (and common sense) to write them in a reasonably secure manner).

    At the least it's worth a quick five-minute scan.
    • how about this... a 'virtual machine' as cracker bait...

      set up a cgi program that fakes being a vulnerable system just to see what sort of attacks you get... call it 'perl', 'sh', or something especially enticing... and set up a fake file system for the hacker to explore, fake log files for them to modify, etc

      this would be kind of like writing a text adventure game. you could put several fake encrypted files that are just random strings of characters..
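
      Something like that bait CGI might look like this, as a rough sketch in Python (the log path and the fake DOS-style output are invented for illustration; a real honeypot wants much more careful isolation than this):

      #!/usr/bin/env python3
      # hypothetical 'cracker bait' CGI: log everything about the request,
      # then play along with a fake response so the attacker keeps talking
      import os, time

      LOG = "/var/log/honeypot.log"   # assumed location, writable by the web user

      with open(LOG, "a") as f:
          f.write("%s %s %s %s\n" % (time.strftime("%d/%b/%Y:%H:%M:%S"),
                                     os.environ.get("REMOTE_ADDR", "-"),
                                     os.environ.get("REQUEST_METHOD", "-"),
                                     os.environ.get("QUERY_STRING", "-")))

      print("Content-Type: text/plain")
      print()
      # pretend the command 'worked': a fake directory listing to explore
      print(" Volume in drive C has no label.")
      print(" Directory of C:\\inetpub\\scripts")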
  • Fingerprint Database (Score:5, Interesting)

    by helleman ( 62840 ) on Monday November 05, 2001 @05:58PM (#2524796) Homepage
    I'd love to see a plugin for apache that allowed a central server fingerprint database for new exploits.

    Every hour or so, a web server could access a central fingerprint server and download what the latest exploits look like. If an exploit comes in, the server could deny that IP, or drop those accesses, without needing to know what the particular exploit is. A self-maintaining web server, via the web.

    What do you think?
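
    Roughly, the client half of that could be as small as this sketch (the feed URL and the one-regex-per-line format are my own assumptions; no such service exists):

    # hypothetical hourly poller for the central fingerprint server idea
    import re
    import urllib.request

    FEED = "https://fingerprints.example.org/latest.txt"   # invented URL

    def fetch_fingerprints():
        # one regular expression per line; '#' lines are comments
        with urllib.request.urlopen(FEED) as resp:
            lines = resp.read().decode().splitlines()
        return [re.compile(l) for l in lines if l and not l.startswith("#")]

    def is_attack(request_line, fingerprints):
        return any(f.search(request_line) for f in fingerprints)

    if __name__ == "__main__":
        fps = fetch_fingerprints()
        print(is_attack("GET /scripts/root.exe?/c+dir HTTP/1.0", fps))

    An Apache plugin would cache the compiled list and consult it on every request, denying or dropping matching clients.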
    • by Tenebrious1 ( 530949 ) on Monday November 05, 2001 @06:02PM (#2524820) Homepage
      And, unbeknownst to you and thousands of others, the site that maintains the list has been hacked, and you are downloading empty lists that allow every exploit.

      It's a good idea, but there's a problem when you create a central point of failure.
      • If the worst-case scenario is no worse than not using the service at all, then it still sounds like a good deal to me. Of course, there is probably a "worse-case" scenario: the site that maintains the list is hacked, and everyone downloads lists that match and subsequently block ALL traffic, legitimate and otherwise. ...or possibly just the same thing happening as a result of incompetence.
      • List maintainer digitally signs fingerprints.

        Next?
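
        The verification step really is just a few lines; this sketch assumes an Ed25519 key pair and the third-party Python cryptography package, with key distribution (the hard part) out of scope:

        # verify a signed fingerprint feed before trusting it; assumes the
        # maintainer's public key was shipped with the plugin itself
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

        def feed_is_authentic(feed_bytes, signature, public_key_bytes):
            pub = Ed25519PublicKey.from_public_bytes(public_key_bytes)
            try:
                pub.verify(signature, feed_bytes)   # raises if tampered with
                return True
            except InvalidSignature:
                return False

        A compromised distribution server could then serve you stale lists, but not forged ones.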

    • This is a really cool idea, but there would need to be some way of verifying the information the plugin was getting, or someone could just feed your server an "attack" fingerprint that matches a normal hit and you would wind up denying legitimate users. This idea sounds a lot like the ORBS/RBL for sendmail.
    • I'd love to see a plugin for apache that allowed a central server fingerprint database for new exploits.

      Then we could couple it with my favorite idea for an Apache module: mod_labrea. This way any 'undesirable' HTTP exploit could be given a reverse DoS by keeping the connections alive and stalled for as long as possible.
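
      Very roughly, the stalling trick looks like this (a toy sketch: one connection at a time, arbitrary port and timings):

      # toy tarpit in the spirit of LaBrea: accept the exploit request, then
      # dribble the response out one byte at a time to keep the client stalled
      import socket, time

      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      srv.bind(("", 8080))
      srv.listen(5)

      while True:
          conn, addr = srv.accept()
          conn.recv(4096)                   # swallow the request
          try:
              for b in b"HTTP/1.0 200 OK\r\n":
                  conn.send(bytes([b]))     # one byte...
                  time.sleep(10)            # ...every ten seconds
          except OSError:
              pass                          # client gave up first
          conn.close()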

    • That would be great - until someone unveils their new exploit, or worm, and DDoSes the central fingerprint server. The problems that would be caused to that server every time somebody came out with their newest attack would be innumerable, I'm afraid.

      I had the same idea a while back, but I wouldn't dare hassle with it myself. Perhaps there is some brave soul out there with the time and resources to fend off the countless attacks.

      My $.02
  • portblocker (Score:2, Informative)

    If you're using windows (blech!) you can get a free program that blocks your port 80, as well as tells you the IP address of anybody attempting to get in. The program is called PortBlocker, and the company that makes it is AnalogX. I often bomb the person who tries to access my computer with telnet requests just to irritate them.

    • or you could just install ZoneAlarm (i.e. blocks all ports in & outbound), which is a proper firewall (and also free).

      PS. Yes I know there are better firewalls around (e.g. smoothwall) but ZA does the job for simple windows boxes, IMHO the best of the "personal" firewalls.
  • Garbage requests (Score:3, Redundant)

    by spankfish ( 167192 ) on Monday November 05, 2001 @06:01PM (#2524816) Homepage
    What I personally like to do is create a good set of rules for detecting this kind of garbage request, and store the matches in log files which are separate from my normal access_log and error_log... that way I don't have to wade through acres of crap while looking at my real visitors.

    Yes, I know I could grep 'em out while viewing, but I think garbage should be kept in a separate place to the real visitors' log entries.

    • Re:Garbage requests (Score:3, Informative)

      by Heem ( 448667 )
      I think garbage should be kept in a separate place to the real visitors' log entries.

      What I do is set up virtual hosts on apache, with my domain name pointing at the real website and my numeric IP pointing at just a blank page, and have them log to separate files, since MOST attacks come in blindly via numeric IP and MOST real users come in using the domain name.

      • This is a very good idea. I hadn't thought of this at all. I would like to have done this before Time Warner (RoadRunner) disabled inbound port 80 due to nimdA. Now I don't have crap coming to my server. I switched it to run on port 81, which works, but how elegant is the URL http://myserver.dhs.org:81/ ? Not very.
        • Got another one for ya then: surely you have a friend, or a friend of a friend, who has (or has access to) a 'real' server. Park your www.mydomain.org there with a simple HTML page that redirects to www2.mydomain.org:81, OR opens a frame with www2.mydomain.org:81 inside of it.

    • I agree

      I would go one step further. I would like an apache module that can recognize requests for certain resources, like

      /scripts/root.exe?/c+dir
      /c/winnt/system32/cmd.exe?/c+dir
      /scripts/..%c0%af../winnt/system32/

      etc.

      and then just blackhole that IP immediately (point it at 127.0.0.1) without writing anything to the access or error logs.

      ... as long as we're wishing...
      • by wytcld ( 179112 ) on Monday November 05, 2001 @07:03PM (#2525102) Homepage
        Here's how to get part way there (in this case for Nimda). In httpd.conf:

        SetEnvIf Request_URI "cmd\.exe" ATTACK
        SetEnvIf Request_URI "root\.exe" ATTACK
        CustomLog /www/logs/access_log common env=!ATTACK
        CustomLog /www/logs/attack_log common env=ATTACK

        <Location />
        Order Allow,Deny
        Allow from all
        Deny from env=ATTACK
        ErrorDocument 403 "
        </Location>

        And then optionally for individual bad directories:

        <Location /scripts/>
        Deny from all
        ErrorDocument 403 "
        </Location>

        At this point requests for cmd.exe are not being logged in access_log but only in attack_log (leave out the attack_log line if you don't want even that much). They'll still show in error_log (but with a shorter error statement). The ErrorDocument line instructs Apache to send back nothing and just drop the connection - not as nasty as a tar pit, but at least you don't waste outgoing bandwidth, which is generally tighter than incoming for a Web server. Also, Apache doesn't waste any time checking the file system on these requests, since the rules preclude that.

        • You also need to add:

          CustomLog /www/logs/error_log common env=!ATTACK

          If you want to avoid a "client denied by server configuration" message in your error_log. I also added SetEnvIf rules for "WINNT" and "system32" for some extra paranoia.



          Other than those minor modifications, your config changes are superb! I just added it to my web server, and it took less than 30 seconds for the new rules to be triggered. Works like a charm. Thanks a lot :-)

  • by Embedded Geek ( 532893 ) on Monday November 05, 2001 @06:04PM (#2524829) Homepage
    On first glance, this looks like a really nice piece of work, especially given the caveat (paraphrased) "this is not completely inclusive..." from the author.

    I do have a question for my fellow slashdotters: Why does the author single out TFTP but not FTP? Does TFTP have inherent weaknesses that would make it the file transfer protocol of choice for an attacker?

    • TFTP [rfc-editor.org] doesn't use passwords, so it's easier to use from a script.
    • TFTP has no authentication in the protocol, so the only ACLs you've got are network level ones from TCP wrappers.

      All it requires is a misconfiguration on the TFTP server, and you'll be able to fetch and overwrite any file anywhere on the filesystem; I've seen this happen in the real world from time to time.
    • Thanks for the response, everyone. I'm used to using TFTP in an embedded environment (no surprise, given my handle). I'd assumed the full standard supported accounts/passwords and we just ran it with 'em off - I should've thought it through. I guess the "Trivial" in TFTP is well earned.

      I was wondering if you could block the port on TFTP, thus locking it out entirely, so I dug out my copy of Stevens, Volume I and skimmed the chapter on TFTP. The thing is, I see no mention of a port at all in this chapter. Am I just missing it or are ports a TCP concept (while TFTP runs UDP/IP)? Regardless of that, though, how do you defend against the use of TFTP in this manner?

      Thanks again.

      • Answers. (Score:3, Informative)

        by mindstrm ( 20013 )
        TFTP is UDP-based. Yes, there are ports.
        It runs on UDP port 69.

        And, you hit the nail on the head.. embedded systems.
        tftp is 'trivial' so it can be used for bootstrapping systems. The protocol is as simple as it could possibly be (though neither fast nor efficient, network-wise).
        It was designed so it could be implemented with very little code in order to bootstrap systems.

        Given that.. it really has no reason to be enabled at all in most modern systems.
        The only uses I've used it for recently are:
        booting diskless clients
        cisco router configuration files
        embedded systems work
      • At an NT command prompt you can confirm this by typing:

        type C:\WINNT\system32\drivers\etc\services | find "tftp"
  • Snort (Score:5, Informative)

    by Frums ( 112820 ) on Monday November 05, 2001 @06:05PM (#2524839) Homepage Journal
    Hmm, Snort [snort.org] has signatures written for all of these =)
  • Not very interesting (Score:4, Informative)

    by brettbender ( 87275 ) on Monday November 05, 2001 @06:09PM (#2524857)

    This paper includes very loose regex heuristics for requests that "might be" attacks. These may be interesting for anomaly detection when coupled with an engine that records incidence rate (if you see an exponential surge in 'weird' requests, then maybe you're seeing a worm's infection growth curve).

    But the result of deploying these (say, matching for "%20" in a URI) as intrusion detection system rules would be a high false positive rate.

    You would be better off looking at arachNIDS [whitehats.com] for rules that are more specific and less likely to drown you in alerts.
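
    The incidence-rate half of that is easy to prototype; in this toy sketch the patterns and the surge factor are invented, and the date parsing assumes Common Log Format:

    # bucket 'weird' requests per hour and flag hours where the count jumps,
    # which is roughly what a worm's infection growth curve looks like
    import re
    from collections import Counter

    WEIRD = [re.compile(p) for p in (r"%20", r"cmd\.exe", r"\.\./")]

    def weird_per_hour(log_lines):
        counts = Counter()
        for line in log_lines:
            if "[" not in line:
                continue
            hour = line.split("[", 1)[1][:14]   # e.g. "05/Nov/2001:14"
            if any(p.search(line) for p in WEIRD):
                counts[hour] += 1
        return counts

    def surges(counts, factor=10):
        hours = sorted(counts)
        return [b for a, b in zip(hours, hours[1:])
                if counts[b] > factor * max(counts[a], 1)]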
    • But the result of deploying these (say, matching for "%20" in a URI) as intrusion detection system rules would be a high false positive rate.

      From the article:

      ""%20" Requests

      This is the hex value of a blank space. While this doesn't mean youre being exploited, it is
      something you may want to look for in your logs. Some web applications you run may use these
      characters in valid requests, so check your logs carefully
      . On the other hand, this request
      is occasionally used to help execute commands."

      What's your beef?
      • As a holdover from my DOS days, I have an aversion to spaces in my filenames, so nothing I write has them. Assuming even moderate control over the system you're administering, it should be easy to make SURE that anything with %20 in it is, if not an attack, anomalous.
  • The article comes right out and states that it doesn't cover everything, but it seems to get the most common exploits. Once an admin gets this paper and secures the server against everything in it, it becomes easier to block other kinds of traffic (such as file types ending in exe). I do like the idea one poster had about a central database of port 80 fingerprints.
  • Who are the correct people, if any, to alert in case you see such attacks originating from certain IPs?
  • GET /cgi-bin/phf?Jserver=a&Qalias=a%0Acat%20/etc/passwd HTTP/1.0

  • It seems that a great deal of these attacks are based upon the fact that file names are passed as CGI arguments. This is dumb. First of all, it causes your URLs to be unnecessarily ugly. Second, if one uses suffix mapping (e.g. in Apache), the URL is checked by the web server before being sent to a CGI-type process. The upshot is that only files that live in the htdocs "sandbox" can be accessed. For example, http://www.davesresume.net/resume.xml points to the "/resume.xml" file in my web root. Go ahead and try "http://www.davesresume.net/../resume.xml" and you get a 404. The .xml extension is mapped to Apache Cocoon, my xml processor of choice, and there is no exploit opportunity unless I explicitly open one up with some other CGI code. Since I don't need any other file context, this type of attack is not a problem.
  • I did a worm blocker (Score:2, Interesting)

    by certsoft ( 442059 )
    Must have been a week or two ago. I was trying to debug some LAN activity that only occurs at midnight on a custom system here. I have a log of TCP activity, but it was filled with worm activity by the time I looked at it in the mornings.

    Based on the activity I detected I set the software up to look for a GET using any of the following substrings: SCRIPTS MSADC WINNT ADMIN.DLL _VTI_BIN and _MEM_BIN. If found then the requestor's IP address got added to a list. Anytime the TCP stack saw a SYN request from one of these addresses it just ignored it instead of starting the handshake. So far it has blocked 75 IP addresses and my log files are now pretty pristine.
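
    In Python, the gist of that blocker looks something like this (a reconstruction, not the original code; the real version dropped SYNs inside a custom TCP stack, which is simplified here to refusing at accept time):

    # block repeat offenders probing for worm-era paths
    import socket

    BAD = ("SCRIPTS", "MSADC", "WINNT", "ADMIN.DLL", "_VTI_BIN", "_MEM_BIN")
    blocked = set()

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", 80))                        # needs root for port 80
    srv.listen(50)

    while True:
        conn, (ip, _port) = srv.accept()
        if ip in blocked:
            conn.close()                      # pretend we never answered
            continue
        request = conn.recv(4096).decode("latin-1", "replace")
        if request.startswith("GET") and any(s in request.upper() for s in BAD):
            blocked.add(ip)                   # one strike and you're out
            conn.close()
            continue
        # ...hand the request off to the real server code here...
        conn.close()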

    • If you're using a Windows server and you have a small operation where edits are done on your server, as opposed to in a separate development environment, then it isn't as easy as that.

      _vti_bin is used by frontpage when it connects to the web server to edit. So is admin.dll.

      Scripts is used by InterDev; a lot of the code that the design-time controls and scriptlets depend on is in there.

      _mem_bin is Site Server 3.0's membership files. If you've written your own login / error handling code for this, then you should DEFINITELY block access to it. Problem is, you can't just delete the folder; even when you write your own code, it still needs access to this folder.

      Argh. Don't you love Windows?
  • Most of these attacks are aiming to get the web server to do something it shouldn't be doing in any case. This is a good argument for running it in a solid sandbox. There are many ways of doing this (VMs and chroot'ing are two simple mechanisms; there are numerous more robust ones), so that even if the web server were exploited, it would be unable to perform malicious actions on behalf of the attacker. This is not to say that you shouldn't attempt to write robust server code, but putting all of your security eggs in the basket of your request parser is a dangerous idea.
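
    A minimal sketch of the chroot variant (the jail path and unprivileged user are assumptions, and the jail must be populated beforehand):

    # confine the server to a jail directory and drop root before serving
    import os, pwd

    def enter_sandbox(jail="/var/www/jail", user="nobody"):
        os.chroot(jail)          # filesystem root becomes the jail (needs root)
        os.chdir("/")
        pw = pwd.getpwnam(user)
        os.setgid(pw.pw_gid)     # drop group privileges first...
        os.setuid(pw.pw_uid)     # ...then user privileges, irreversibly

    # call enter_sandbox() early in startup, before accepting any connections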
  • Just off the top of my head, they forgot exploits involving the '&' character (or its hex encoding). I've seen this one used in much the same way as the ';' (basically, to execute an extra command in UNIX).
  • URLSCAN (Score:2, Informative)

    by Dego ( 182553 )
    Microsoft has a free tool that uses a text config file that allows for the rejection of http requests based on fingerprints. Check here [microsoft.com] if you are interested. Works pretty well.
    • I downloaded your suggested M$ tool, and as it opened it said "Stopping IIS (yada yada)" and then "Starting IIS (yada yada)" - but I have IIS turned off (ever since the inception of Code Red v.1), because I don't need a web server here on my cable box, although I turn it on (on an alternate port) when I have a need to test something.

      Good news is, it didn't start IIS despite the fact it was disabled in my services applet.

      kewl. Thanks for the link - I'll dabble about with it when I have time.

    • There has been a conversation on the NTBugtraq mailing list (http://www.ntbugtraq.com) in the last few days about modifying the config file for URLScan. Apparently there is a problem when using the default settings and a deny posture: if someone makes a request to http://www.insertthenameofyourwebsitehere.com/ , the program will consider / to be the file extension and not allow the connection. A couple of people have posted sample ini files that fix this problem, and apparently some others they found in testing.
  • I have been getting a ton of port 80 scans on my Linux box (I don't use a webserver like Apache), but it seems that half the infected boxes on the planet are probing mine. I run Portsentry for protection, so all the port 80 scans are blocked by the script and logged in my syslog. I have even written a few of the ISPs whose clients' infected boxes were doing the scanning. Some even responded politely *grin*

    My two bits

  • by GISboy ( 533907 ) on Monday November 05, 2001 @07:07PM (#2525123) Homepage
    Among the utilities mentioned, like snort, no one has hit on the actual fingerprinting utilities out there like nmap, nbtscan, and something like portsentry.

    I forget off the top of my head whether portsentry has scriptable events; if it does, then the possibility of having a "guarddog"-type box would be interesting.

    For instance: an attack is detected, portsentry does its thing by putting the offending address in /etc/hosts.deny and rereading hosts.deny, and then passes the address off to nmap or nbtscan to figure out what the box is running.
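
    If portsentry's external-command hook (KILL_RUN_CMD in its config, if memory serves) hands you the attacker's address, the glue could be as small as this sketch (treat the paths and flags as illustrative):

    # 'guarddog' hook: deny the offender, then fingerprint the offending box
    import subprocess, sys

    def guarddog(ip):
        with open("/etc/hosts.deny", "a") as f:
            f.write("ALL: %s\n" % ip)         # deny via TCP wrappers
        # nmap -O does OS detection; something to quote when you phone the ISP
        report = subprocess.run(["nmap", "-O", ip],
                                capture_output=True, text=True).stdout
        print(report)

    if __name__ == "__main__":
        guarddog(sys.argv[1])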

    Nothing beats calling up an ISP and saying "you have a windows/linux/whatever box probing for webservers/mailservers/(insert service) and is attempting to execute a vulnerability of that service".

    Nmap and Nbtscan are excellent utilities, but from using them and playing around, nmap is more of a discovery tool and nbtscan is more of a retaliatory tool. Or, at the very least, they both can be used as such.
    I know from personal experience that nbtscan's settings (normal, aggressive, insane) are enough to knock a box off of a network.
    I scanned my cable modem... had to power down to get back up. I also knocked my boss off (with his knowledge); only a complete power down would bring the box back onto the network.

    If you can have a "honeypot" why not a "watchdog" box for computer security?
    Has the "security/watchdog" been done before?
    • Nothing beats calling up an ISP and saying "you have a windows/linux/whatever box probing for webservers/mailservers/(insert service) and is attempting to execute a vulnerability of that service".

      Actually, one thing does beat that: when you call the ISP's tech support to report it, the person on the other end of the line asks you, "What's probing? What's a web server? What's Linux? These aren't on my script? Have you rebooted your cable modem?"

      Sorry, was channelling a little bit of tech support rage. I feel much better now :)

    • Nothing beats calling up an ISP...

      Nothing beats finding a system like that and then "attacking" it by hitting it with spoofed packets pretending to be an attack. Let's see how long it'd stand up to a) the load and b) the fact that it is blackholing itself from all those fake IP addresses.
  • This paper on general web server vulnerabilities is quite good for those of you who would like to know the basics of what to look for in an attack on your web server. Covers the fundamentals well enough to give anyone an idea of how to detect if someone's trying to compromise your web server. If you're just reading (or writing) comments here and you haven't read it yet, go back now and do so!!
  • As an admin... I often DON'T CARE. I don't want a report every time someone tries some IIS exploit against my apache server. I don't want to waste my own resources tracking and logging this.

    Sure, more information is better.. but.. I'm just not at risk.

    You make your servers secure, and then you forget about it. You keep on top of new vulnerabilities.... but seriously folks.

    Why should I care one bit whether some code-red worm tried to exploit apache thinking it was IIS? I'm immune, it's not relevant to me.

    Now.. knowing what goes on in a network in general, yes, that's important. Run snort or something.. keep an eye on traffic coming in/out of your net

    But get real. There are better, more productive things to spend time on.
    • Actually the majority of those exploits documented were for web servers running under *nix. Don't just assume that an article on web exploits would be aimed at IIS.
    • Why should I care one bit whether some code-red worm tried to exploit apache thinking it was IIS? I'm immune, it's not relevant to me.

      Even though you are immune to infection, it doesn't mean you don't have anything to worry about. If you are just admining a single home-based hobby server, you probably don't have anything else to worry about. But suppose you admin a unix machine in a big business; you run Samba so you can share files between your Unix box and all those M$ boxen that are somebody else's problem. If one of those M$ boxen gets compromised, now you DO have a problem, especially if you are using DOMAIN security and the PDC is the one that got hit. Even if the windoze boxen are not your responsibility, they can still impact you.


      Even if you only have one box hanging off a cable modem, IIS-specific attacks SHOULD worry you. Just because a potential attacker is being clueless now doesn't mean he won't develop a clue later. If you see a bunch of suspicious activity coming from an address, you should definitely be paying more attention to anything else that comes from that address in the future. The fact that it's infected with a worm is an indication that it's not being administered properly. Some clueful hacker could take that infected system and use it as a jumping-off point to do something that COULD hurt you.

  • Snort (Score:2, Informative)

    by TV-SET ( 84200 )
    All this reminds me of good old snort - http://www.snort.org [snort.org]

  • by Tazzy531 ( 456079 ) on Monday November 05, 2001 @11:09PM (#2525981) Homepage
    A lot of people here have been asking what to do after they are attacked. Here is an article/guideline [cert.org] on procedures for recovering after an attack. These steps include information on saving logs, documenting everything that you do after the attack, the type of evidence needed to prosecute, and who to contact (FBI, local police, etc.). But as always, the best policy is to secure the system so that attacks don't happen.
  • Pardon my cynicism, but can you really trust someone who thinks that
    cat access_log | grep -i ".."
    will find anything useful in the log? (An unescaped dot in a regex matches any character, so ".." matches practically every line.) And why -i, on a pattern with no letters in it?
  • Nice, but... (Score:1, Interesting)

    by Anonymous Coward
    The article does not take IIS into account; that shows, because Unicode is missing. The article also misses another point: error codes!

    Say user Foo runs cgi-scanner X against one system. Without proper fingerprinting first (which most lame script kiddies don't do), most scans will trigger errors, because apache doesn't come with a /scripts/ directory and IIS wouldn't know what to do with an apache exploit. This will create tonnes of errors, making lame so-called "cgi-scans" easy to spot.

    Apart from that, most network/security people should read the article; this is basic intrusion detection skill that should be mastered by you people, and that includes those apes (and that's an insult to primates!) with those lame-ass certifications.
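
    The error-code point is easy to act on, too; a toy pass over an access log might look like this (the threshold is arbitrary, and the status-field position assumes Common Log Format):

    # an IP that racks up many 404s in one log is probably running a cgi-scanner
    from collections import Counter

    def noisy_scanners(log_lines, threshold=20):
        misses = Counter()
        for line in log_lines:
            parts = line.split()
            if len(parts) > 8 and parts[8] == "404":   # status field in CLF
                misses[parts[0]] += 1                  # keyed by client IP
        return [ip for ip, n in misses.items() if n >= threshold]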

For God's sake, stop researching for a while and begin to think!
