Forensics On a Cracked Linux Server

This blog entry is the step-by-step process one administrator followed to figure out what was going on with a cracked Linux server. It's quite interesting to me, since I had the exact same problem (a misbehaving ls -h command) on a development server a while back. As it turns out, my server had been cracked, possibly with the same tool, and this analysis is much more thorough than the one I was able to do at the time. If you've ever wondered how to diagnose a Linux server that has been hijacked, this short article is a good starting point.
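For anyone who wants a quick first-pass check before attempting a full analysis like the one in the article, comparing installed binaries against package metadata is a reasonable start. A minimal sketch, assuming a stock coreutils package, and keeping in mind that a rootkit can subvert the package tools themselves:

# Verify the package that owns /bin/ls on an RPM-based system
rpm -Vf /bin/ls

# On a Debian-based system (requires the debsums package)
debsums -c coreutils

Running the same checks from a known-clean rescue environment is safer still, since a compromised rpm or dpkg can lie about its own files.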
  • by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Friday August 24, 2007 @02:39PM (#20346731) Homepage
    On the other hand, shutting down the box ASAP makes it much harder to find the guy.

    For example, one of Vodafone Greece's first reactions to discovering that some of its switching systems had been rootkitted was to remove the offending software. That removal was one of the main reasons the authorities never had a chance to find the group that had compromised the system, and it, along with a couple of other screwups, led to Vodafone being fined a pretty hefty sum.

    http://en.wikipedia.org/wiki/Greek_telephone_tapping_case_2004-2005 [wikipedia.org]

    IEEE Spectrum ran a recent article with MUCH better information than Wikipedia, though; unfortunately I don't have it with me at the moment.
  • by DrSkwid ( 118965 ) on Friday August 24, 2007 @03:10PM (#20347033) Journal
    I had a co-lo rental from Pipex, Linux 2.2. They noticed it had been broken into, cut us off, and charged us to re-image the box, on which they had left a tar of the drive. OK, sounds fair enough, but they re-imaged it with EXACTLY the same Linux 2.2 install, and it had been infiltrated again by the time I got the email telling me it was back on. I fixed it by hand and never told them lest they charge the company again. Happily, I quit soon after.
  • Re:Forensics (Score:1, Interesting)

    by Anonymous Coward on Friday August 24, 2007 @04:07PM (#20347599)
    [~]:apache$
    lrwxr-xr-x 1 root wheel 11 Dec 20 2006 .bash_history -> /dev/random

    I'm pretty sure it is. I didn't use any crazy exploits or anything. It's an old computer that I once had access to when I was in school. It's just a lesser-used machine, and all I use it for is BitTorrent (on a .edu).

    I created a few users such as "apache" and "sendmail". I'm not claiming to be a haxor by any means; like I said, I just use it for BitTorrent.

    The 'apache' user's home directory is actually a DMG file that I have mounted to /tmp.

    With OS X it's pretty easy.
    Create DMG: /usr/bin/hdiutil create -size 1t -fs HFS+ -type SPARSE -encryption -stdinpass -volname objc_sharing_ppc_23 data

    Attach DMG: /usr/bin/hdiutil attach -readwrite -private -mountroot /tmp -nobrowse -stdinpass "/Library/Application Support/LiveType/LiveFonts/Pro Series/Script.ltlf/data.sparseimage"

    Detach DMG: /usr/bin/hdiutil detach /tmp/objc_sharing_ppc_23 >> /dev/null

    128-bit encryption on that home directory. No one really questions large files in /Library/.
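    For what it's worth, a quick way to sanity-check a setup like this, using the same paths as above (hdiutil info simply lists whatever images are currently attached):

    # List attached images to confirm the volume is mounted where expected
    /usr/bin/hdiutil info

    # The sparse image only grows as data is written; check its real on-disk size
    du -h "/Library/Application Support/LiveType/LiveFonts/Pro Series/Script.ltlf/data.sparseimage"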
  • by rastoboy29 ( 807168 ) * on Friday August 24, 2007 @06:06PM (#20348729) Homepage
    I work in a large, low-end datacenter.  Almost all the servers there are rented by non-technical people, who for some reason feel qualified to run web hosting businesses.  There are so many exploits going on at any given time that we can't really do anything about it--especially as, theoretically, the customer is responsible.  So when they call in because their server is running slow, we usually find a PHP hijack happening, tell them their server has been compromised, and suggest that they do something about it.

    It's pretty appalling.  We would need an army of sysadmins--an army that is already employed elsewhere--to really do something about it.  Most of what we see are primitive script-kiddie hacks, but guess what--that's good enough, and the perpetrators are rarely hunted down.

    Who knows what the more sophisticated hackers are up to!
  • by Antique Geekmeister ( 740220 ) on Friday August 24, 2007 @06:10PM (#20348771)
    I'm afraid that most software tools are not inherently better than those of 1997: most attacks, and even most successful ones, come from script kiddies with tools. Even skilled crackers like Mitnick consistently make foolish mistakes. (In Mitnick's case, it was leaving messages mocking his victims and getting the FBI really, really mad at him, angry enough to actually prosecute.) There are plenty of vaunted crackers who make other amazingly stupid mistakes, both programming and social.

    The IRC-bot creators seem to be among the worst of the script kiddies. Frankly, IRC should go the way of open relays: too much of the traffic is illegitimate to justify allowing it through any firewall or any ISP-provided system. It should be blocked even before SMTP that isn't bound for the ISP's own mail servers, simply for damage control.
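    If you wanted to do that at the network edge, the rule itself is trivial. A minimal iptables sketch, assuming the standard IRC port range (chain names and ports are assumptions; adjust for your own topology):

    # Drop forwarded and locally originated connections to the usual IRC ports
    iptables -A FORWARD -p tcp --dport 6660:6669 -j DROP
    iptables -A OUTPUT -p tcp --dport 6660:6669 -j DROP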
  • Re:Forensics (Score:2, Interesting)

    by jnelson4765 ( 845296 ) on Friday August 24, 2007 @06:59PM (#20349139) Journal
    I've seen root's .bash_history symlinked to /dev/null in a couple of incidents - at least the date the symlink was created tells you roughly how long they've been there...
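    A quick way to read that date off the link itself (GNU stat inspects the symlink rather than its target unless you pass -L, and ctime is the closest thing to a creation time on most Linux filesystems):

    # Inspect the symlink's own timestamps, not those of /dev/null
    stat /root/.bash_history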

  • by baggins2001 ( 697667 ) on Friday August 24, 2007 @09:07PM (#20350011)
    I wouldn't be too critical of the tech in this situation.
    It's more about two screwed-up business models (if you look at it from a technical point of view).
    People want cheap servers with bandwidth, so you buy cheap servers, buy shitloads of bandwidth, and offer them at really cheap prices. Comp 1) has 10,000 servers. They may have five or six people on a shift maintaining these systems. These guys are responsible for patch management and backup/restore, plus they have to physically replace the systems that crash (usually there is very little forensics done: it's down, so yank the box, replace it, and restore. This happens to about 15 boxes a week. Plus you have the hardware update cycle; there's another 100+ getting yanked per week). So these guys are usually pretty busy. There are only a few guys who actually look at a system and try to determine why it is running slow, but they aren't there to fix problems. They're in place to tell customers they have a problem and that they need to fix it or let them restore it (very, very nicely). They aren't there to go through the intricacies of a hack.

    Comp 2) Some guy heard about this web thingy and heard he can make money doing it. He knows very well that he can't have anything less than a full server for his 12 orders a week. Of course, he originally thought it would be thousands, especially since he went out and had a professional build the whole site for him for $500 (looks good). He occasionally calls this guy up to update his site for $50 (content, mind you).

    So now we have two businesses with an interest in a server, and neither one gives a shit about security. (Of course the techs working for Comp 1 do, but they don't have time for that.)

    Which brings us to Comp 3. These are the guys Comp 2 turns to when their server isn't fixed or keeps crashing due to poor security. They charge 10% more, but this time Comp 2 asks them about security. Comp 3 answers, "Yes, we are vigilant about security: we do patch management and are vigilant about monitoring for hackers." "Ahh, you monitor for hackers," Comp 2 says, "I'll take it," never realizing that he is getting no more than what he was getting from Comp 1.
    But won't Comp 1 go out of business? No, Comp 1 is getting Comp 3's old customers for the same reason.

    Basically, if you aren't paying $250/month for the computer and bandwidth and another $300 for management of the system, you're getting a Dell Dimension in a barn somewhere. And the odds are pretty good that a hacker is going to get it, or a cow is going to shit on it.
  • by TheLink ( 130905 ) on Saturday August 25, 2007 @10:37AM (#20353585) Journal
    If I suspected something was wrong with my home machines and didn't care to figure out what had happened, I'd just revert the relevant virtual machine to a clean snapshot, disconnect the network connections, and patch, restore data, etc.

    If I did care, I could either suspend the virtual machine or make a snapshot of it.

    Virtual machines are cool :). Once x86 hardware gets more efficient at running VMs (including IO), I think I'll run everything virtualized. You can't get away with doing that red pill, blue pill thing to my system if I do it first :).

    If you don't run machines in a VM, I believe the proper way to do forensics is to pull the plug (I'm not sure whether attackers would tamper with fsync) and then make a copy of the drive using hardware that is certified to block writes to the drive - there are a few vendors selling such hardware, and software to go with it; Google should turn up a few. (A rough sketch of the imaging step is at the end of this comment.)

    If you do it any other way, any evidence gathered could be challenged as tampered with by the defense, or you could accidentally destroy your evidence, or you could be giving the attacker a chance to destroy the evidence.

    Doing what the chap did in the article is definitely not "forensics", any more than stomping all over a murder scene while touching everything is forensics or a proper investigation.
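    For the imaging step, something along these lines is the usual minimal sketch (the device name here is made up, and a hardware write blocker should still sit between the suspect disk and the analysis machine):

    # Copy the raw device, padding unreadable blocks instead of aborting
    dd if=/dev/sdb of=suspect.img bs=64K conv=noerror,sync

    # Record hashes so the copy can be verified later; they should match
    # as long as dd hit no read errors
    sha256sum /dev/sdb suspect.img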
