Security

2M New Websites a Year Compromised To Serve Malware

SkiifGeek writes "Sophos claims that they are detecting 6,000 new sites daily that have been compromised to serve malware to unsuspecting site visitors, with 80% of site owners not aware that they have been compromised — though this figure is probably on the low side. With increasingly vocal arguments being put forward by security experts criticizing the performance and capability of site validation tools (though many of these experts offer their own tools and services for similar capabilities), and rising levels of blended attacks, perhaps it is time you reviewed the security of your site and what might be hiding in infrequently used directories."
This discussion has been archived. No new comments can be posted.

  • by MankyD ( 567984 ) on Friday January 25, 2008 @09:39AM (#22181118) Homepage
    Every time I read about a new form of server malware, I try to check a LAMP server that I run. So far I've come up clean, but I've hardly done a full inspection. Anyone know of a good way to scan a setup? Sophos says that they are detecting thousands of new sites - how are they scanning them?
    • Comment removed based on user account deletion
    • by rs232 ( 849320 )
      "Everytime I read about a new form of server malware, I try to check a LAMP server that I run. So far I've come up clean but I've hardly done a full inspection"

      The only way to be sure is to run it from a CD image and reboot nightly ..

      "Anyone know of a good way to scan a set up? Sophos says that they are detecting thousands of new sites - how are they scanning them?"

      They're not, they just need a little boost to the stock price, please buy our PRODUC~1 .. :)
      • Sophos is a private company. They don't have a stock price that needs raising.
      • The only way to be sure is to run it from a CD image and reboot nightly ..
        I thought you were supposed to nuke it from orbit?
      • Is it practical to use an HD in write-protect mode? (Some hard drives have a write-protect jumper: Google "ide hd write protect jumper".) Put the system software on the WP'd drive and use a second HD for the remainder. This topic raises another question: I wonder how Google is able to crawl the web while protecting its own systems from malware.
        • Google is not running every ActiveX control and .exe that it comes across. I should imagine that they only need to protect against SQL injection.
    • by Smidge204 ( 605297 ) on Friday January 25, 2008 @09:52AM (#22181260) Journal
      I thought about this myself. One possible solution that I considered would be to maintain a local list of files on your server and their CRC/hash values. A script on the server would scan all the files and output a similar list that you could then check against your local copy, quickly identifying any new or changed files. This could be set up as a cron job for periodic scans, or you could just initiate a manual scan whenever.

      Might not be the best solution but it should be easy to implement. Larger sites can do incremental scans. It would be harder to detect corruption of databases, though, unless you know what to look for or have a concrete way of validating the contents.
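
      A rough sketch of that idea in Python (just an illustration, assuming Python 3; the paths are made up, and the baseline file should live somewhere the web server can't write to):

          # Hashes every file under WEB_ROOT and compares against a stored baseline.
          import hashlib, json, os, sys

          WEB_ROOT = "/var/www"                 # adjust to your document root
          BASELINE = "/secure/baseline.json"    # hypothetical read-only or off-box path

          def hash_tree(root):
              hashes = {}
              for dirpath, _dirs, files in os.walk(root):
                  for name in files:
                      path = os.path.join(dirpath, name)
                      h = hashlib.sha256()
                      with open(path, "rb") as f:
                          for chunk in iter(lambda: f.read(65536), b""):
                              h.update(chunk)
                      hashes[os.path.relpath(path, root)] = h.hexdigest()
              return hashes

          current = hash_tree(WEB_ROOT)
          if len(sys.argv) > 1 and sys.argv[1] == "init":
              json.dump(current, open(BASELINE, "w"), indent=1)   # record the baseline
          else:
              baseline = json.load(open(BASELINE))
              for f in sorted(set(current) - set(baseline)):
                  print("NEW:     ", f)
              for f in sorted(set(baseline) - set(current)):
                  print("MISSING: ", f)
              for f in sorted(set(current) & set(baseline)):
                  if current[f] != baseline[f]:
                      print("CHANGED: ", f)

      Run it once with "init" to record the baseline, then from cron to report anything new, missing or changed.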
      =Smidge=
      • Re: (Score:2, Informative)

        by Bongfish ( 545460 )
        For this, you'd want to use something like Tripwire or AIDE. They've been used for years and will detect changes to files.

        You're right that it won't help you detect that somebody has managed to insert a chunk of javascript or PHP in your insecure mySQL/PHP web app, though. Perhaps a combination of Snort, Ntop (if it wasn't shit), a "hardened" PHP binary and config, and log monitoring would alert you in the case of an attack.

        The problem is that there's a lot of badly written or out of date software out there t
        • > insecure mySQL/PHP web app

          I meant "vulnerable", but feel free to insert jokes about neurotic software here:

      • Re: (Score:1, Informative)

        by Anonymous Coward
        That's exactly what Radmind does:

        http://radmind.org/ [radmind.org]
      • Re: (Score:2, Informative)

        by the_olo ( 160789 )

        I thought about this myself. One possible solution that I considered would be to maintain a local list of files on your server and their CRC/hash values. A script on the server would scan all the files and output a similar list that you could then check against your local copy, quickly identifying any new or changed files. This could be set up as a cron job for periodic scans, or you could just initiate a manual scan whenever.

        Congratulations! You have just described Tripwire [sourceforge.net].

        • by leet ( 1202001 )
          Tripwire is excellent and I've used it for years. Another good thing to do is to e-mail yourself the results of a Tripwire check on a daily basis, or more often if you prefer. That way you get the server reports in your inbox on a regular basis.

          Tripwire is good for systems that are well understood. If you don't understand the changes that can happen on a system then it won't do you much good. But running Tripwire gives the administrator an understanding of system changes over time. So initial unde
      • by spinfan ( 893209 )
        A quick and easy way to check all your files is to use md5deep, which will scan directories recursively and generate all the md5s into a single file which you could compare against a baseline:

        http://md5deep.sourceforge.net/ [sourceforge.net]
      • For file integrity, you can always utilize something like:

        AIDE - Advanced Intrusion Detection Environment

        AIDE is a file integrity checker that supports regular expressions. Licensed with GPL.

        www.cs.tut.fi/~rammer/aide.html

      • What you would have to do is set up a script that takes a local mirror of the website holding every authorized file, in its authorized form, with the appropriate hashes, and compares it to what is on the website via FTP, reporting any additions, deletions or changed files. This would tell you that the site proper is what you uploaded and unchanged. Another problem seems to be that some big server farms have been rooted, which is probably out of your control; that's why I wouldn't trust a script running on the websi
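
        A hedged sketch of that local-mirror comparison (host, credentials and paths are all made up; it only checks files that exist in the mirror, since spotting added files would need a remote directory listing, which varies by FTP server):

            # Compare a trusted local mirror against what the live site serves over FTP.
            import ftplib, hashlib, os

            LOCAL_MIRROR = "/srv/mirror/example.com"   # trusted copy of the site
            FTP_HOST, FTP_USER, FTP_PASS = "ftp.example.com", "user", "secret"
            REMOTE_ROOT = "/public_html"

            def file_md5(path):
                h = hashlib.md5()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(65536), b""):
                        h.update(chunk)
                return h.hexdigest()

            ftp = ftplib.FTP(FTP_HOST)
            ftp.login(FTP_USER, FTP_PASS)

            for dirpath, _dirs, files in os.walk(LOCAL_MIRROR):
                for name in files:
                    local_path = os.path.join(dirpath, name)
                    rel = os.path.relpath(local_path, LOCAL_MIRROR)
                    remote_path = REMOTE_ROOT + "/" + rel.replace(os.sep, "/")
                    h = hashlib.md5()
                    try:
                        ftp.retrbinary("RETR " + remote_path, h.update)
                    except ftplib.error_perm:
                        print("MISSING ON SERVER:", rel)
                        continue
                    if h.hexdigest() != file_md5(local_path):
                        print("CHANGED ON SERVER:", rel)

            ftp.quit()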
      • Radmind (Score:2, Informative)

        Radmind: http://radmind.org/ [radmind.org]. Radmind is designed for exactly this purpose. It's a tripwire with the ability to roll back changes, or to capture them and store them for deployment to other systems.
    • I don't have a great answer but I may have found a step in the right direction. Check out Tiger at http://www.nongnu.org/tiger/ [nongnu.org]. It's a Debian scanner that checks for common signs that someone has pwned your system. I'll warn you now that I haven't tried it yet but might do so in the next few weeks to see how it operates. It doesn't check specifically for any malware but does check for signs that someone has altered your system for remote control. Like I said, not a great solution, but another tool in the t
    • You first need a file integrity checker. AFICK (my favorite) or similar will do a run on whatever schedule you set in its cron job and conf file. You then get an email listing which files have changed over that period of time.

      Also, keep an eye on how big your maillog files are - if they suddenly grow by an order of magnitude, you've been turned into a spam server (or a newsletter went out - five seconds of peeking at the live output should tell you which).
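
      A minimal, cron-able sketch of that maillog-size check (the log path, state file and growth threshold are assumptions; log rotation will reset the size, so a smarter version would also track the inode):

          # Warn if the mail log has grown suspiciously fast since the last check.
          import json, os, time

          LOG = "/var/log/maillog"            # /var/log/mail.log on some distros
          STATE = "/var/tmp/maillog_size.json"
          GROWTH_FACTOR = 10                  # alert if the log grew 10x since last run

          size = os.path.getsize(LOG)
          prev = {}
          if os.path.exists(STATE):
              prev = json.load(open(STATE))

          last = prev.get("size")
          if last and size > last * GROWTH_FACTOR:
              print("WARNING: %s grew from %d to %d bytes since last check" % (LOG, last, size))
              # from here you could mail the admin, e.g. via smtplib or the system MTA

          json.dump({"size": size, "checked": time.time()}, open(STATE, "w"))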

      Also, you can keep an eye on the http access l

    • In addition to the other tools mentioned by /.ers, there are 2 root-kit checking tools that are worth mentioning:
      - chkrootkit [chkrootkit.org]
      - rkhunter [rootkit.nl]

      They are scripts that scan the system for known root kits, weird behaviours and hidden files in unusual places.
      They can both be used to scan an offline system (booted from a live-cd and the system mounted under some directory),
      and a live online system (they check the system for suspicious behaviour that may reveal a root-kit trying to hide itself - for example the "ps" co
    • I use tripwire to build hashes for all files, and store them "off-machine". The hashes are compared against a baseline, and any differences are highlighted. Disk use is monitored, and any anomalies (none found yet) are investigated. Logs are examined, and ssh dictionary attacks are dropped.

      The server does NOT run a database -- only (purely static) Apache. Scripts are NOT run on this machine. Certain things are directed off to https, on another machine, with user/password authentication. That, in turn, actually talks
  • But I thought all these sites were validated and certified and IP indemnified, else what was the point of paying huge wads of dosh to all the lawyers, oh wait, now I get it .. :)
  • by davidwr ( 791652 ) on Friday January 25, 2008 @09:43AM (#22181160) Homepage Journal
    Perhaps the time has come to harden the "common stacks" so certain switches are off.

    For example, once you set up your web site, "lock it" so if there are any changes to files or directories that shouldn't change, the site will break in a non-harmful way rather than be compromised.

    If and when these files need updating, the "unlock" process should be done using a tool independent of the main web-server process, perhaps by using a different web-server process running on a different port or even a process on a different computer that validates the request then passes it on to the main web server.
    • Re: (Score:1, Funny)

      by Anonymous Coward
      Even better. When a server is compromised, it will burst into flames, burning down the entire data center it's hosted in. You know, just in case the virus spread.
    • Easier. For a LAMP stack, here's what's needed:

      Everything on the Web server should be mounted read-only, preferably from a machine behind a firewall. Another firewall sits between that machine and your inside network. The only way to write to the file system should be from behind the firewall. Any temporary files that need to be created for download or parsing or whatever, where read/write is necessary, should live only on a RAM disk. Reboot the server nightly.

      The database server should also sit on the
      • Actual different machines with actual different firewalls are good for hosted solutions and IT departments that know what they are doing, but they are too complicated for non-geek do-it-yourself mom-and-pop-businessman/home-user solutions.

        However, a stack that puts a virtual or other hardened subsystem to hide the non-read-only files and databases behind in an easy-to-use form should be doable.
        • Actual different machines with actual different firewalls are good for hosted solutions and IT departments that know what they are doing, but they are too complicated for non-geek do-it-yourself mom-and-pop-businessman/home-user solutions.

          Should the non-geek do-it-yourself mom-and-pop-businessman/home-user REALLY be putting a live box out on the public Intartubes with exposed services? Wouldn't they be MUCH better off with a hosted solution, especially given that shared hosting can cost as little as $5 a month?

    • For example, once you set up your web site, "lock it" so if there are any changes to files or directories that shouldn't change, the site will break in a non-harmful way rather than be compromised.

      If it's not supposed to change at all, just issue chattr +i on it to make it immutable. Then it won't change, even w/ system root permissions. Just remember to unset the flag any time that you do want to change something ("chattr -i").

      /P

  • by Anonymous Coward
    I would just run a perl script that does a regex on the access logs for anything that does not match the files that should be delivered to clients. Put the perl script in a cron job and let it run. Also do an MD5 hash on those files regularly and check for any changes to static files. And use very strong root passwords, don't let the root account log in remotely, and use ssh keys with no interactive logins.

    my 2 cents...
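
    A hedged sketch of that access-log check, in Python rather than Perl (the log path, document root and combined log format are assumptions; a dynamic site with routed URLs would need a smarter allowed-list):

        # Flag requests for paths that don't correspond to real files on disk.
        import os, re

        ACCESS_LOG = "/var/log/apache2/access.log"
        DOC_ROOT = "/var/www"

        # Build the set of paths that legitimately exist under the document root.
        allowed = {"/"}
        for dirpath, _dirs, files in os.walk(DOC_ROOT):
            for name in files:
                rel = os.path.relpath(os.path.join(dirpath, name), DOC_ROOT)
                allowed.add("/" + rel.replace(os.sep, "/"))

        # Pull the request path out of each log line: "GET /foo/bar HTTP/1.1"
        request_re = re.compile(r'"(?:GET|POST|HEAD) ([^ ?"]+)')

        with open(ACCESS_LOG) as log:
            for line in log:
                m = request_re.search(line)
                if m and m.group(1) not in allowed:
                    print("UNEXPECTED REQUEST:", line.strip())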

    • by JavaRob ( 28971 )

      I would just run a perl script that does a regex on the access logs for anything that does not match the files that should be delivered to clients. Put the perl script in a cron job and let it run.

      Sounds like LogCheck. I'm using that; it provides a decent organized daily summary of all access to the server.

      Also do an MD5 hash on those files regularly and check for any changes to static files.

      Know any pre-written scripts/software that handle this? The trouble is, you'd have to secure the hashes as well, which gets tricky, plus it doesn't offer any help for regularly changing content or database contents (where a content hack like secretly added JS would probably be inserted). But still, better security for non-changing files seems like a good idea; the chattr +i suggestion above se
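
      One way to approach the "secure the hashes" problem is to sign the baseline and keep the key somewhere other than the web server (another box, removable media you mount only for the check); otherwise an intruder can just regenerate the hash list. A sketch, with made-up paths:

          # HMAC-sign the baseline file and verify the signature before trusting it.
          import hashlib, hmac

          def sign_baseline(baseline_path, key):
              data = open(baseline_path, "rb").read()
              return hmac.new(key, data, hashlib.sha256).hexdigest()

          def baseline_is_intact(baseline_path, key, expected_sig):
              return hmac.compare_digest(sign_baseline(baseline_path, key), expected_sig)

          # Usage (key and paths are hypothetical):
          #   key = open("/media/usbkey/hmac.key", "rb").read()
          #   sig = sign_baseline("/secure/baseline.json", key)   # store sig off-box too
          #   ...later...
          #   if not baseline_is_intact("/secure/baseline.json", key, sig):
          #       print("baseline has been tampered with")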

      • by JavaRob ( 28971 )
        Whoops -- I mean LogWatch, not LogCheck (though I think that's another, similar package).
  • virtualized rootkits (Score:3, Interesting)

    by Speare ( 84249 ) on Friday January 25, 2008 @09:50AM (#22181252) Homepage Journal
    Okay, say someone's site is served by an ISP. The ISP gives the site owner a shell account and manages the LAMP infrastructure. The shell account is likely a virtualized instance, meant to limit the damage that each little site can do to the hosted infrastructure, not to limit the damage that the host does to little sites or their visitors. How can the site owner "check their own site" in such a case? Virtualization itself is a sort of rootkit conceptually, so how can the virtualized account check for malicious rootkits in its own instance or in the greater infrastructure?
  • by Lumpy ( 12016 ) on Friday January 25, 2008 @09:53AM (#22181276) Homepage
    Until they release the fricking list of IP addresses or Domain names.

    I would love to put that list in my squid blocking file to protect my users.

    • That would be a dumb move. If it were IP addresses, then once an IP address was reassigned to a good host you still wouldn't see their website. You have no way of removing IP addresses from your list.

      If it were domain names, the same problem would apply even after those domains had cleaned up the infected files.

      I say all of this because I was a victim of stupid block lists when I got a new IP and tried to send email out on it. It was blocked because of the previous owner and getting removed from most lists was non-obvio
  • 6000 sites? (Score:1, Interesting)

    by Anonymous Coward
    TFA says 6,000 infected web pages, not websites. That could be a big difference, but TFA doesn't elaborate.
  • by oni ( 41625 ) on Friday January 25, 2008 @10:02AM (#22181410) Homepage
    If I run FF and keep it patched, am I safe? If I did get compromised, what would the symptoms be?

    I tend to think that keeping my OS patched keeps me pretty safe, but there's always a delay after a new vulnerability is discovered before the patches come out (the zero day) and what concerns me is that if someone has a very large network of compromised web servers, they can roll out a zero day vulnerability to all of them and do a lot of damage.

    As to symptoms, I think spyware used to be the big problem, and infected computers would have popups and such. But now I think that infected machines will be used primarily to send spam. Is that correct?
    • So long as you are not using an operating system that is named after the most easily broken part of the house, you should be safe.
  • by jc42 ( 318812 ) on Friday January 25, 2008 @10:34AM (#22181798) Homepage Journal
    When do we get a FOSS runtime library for using this valuable public resource?

    Imagine all the useful things we could do for the world if we all had access to this distributed computing power.
    • Shush, I'm trying to put together a business model based on that idea. Don't go blabbing it everywhere! ;-p
      • by jc42 ( 318812 )
        I'm trying to put together a business model based on that idea.

        Well, I think you might be a bit late with that. ;-)

        But think of the good things that could be done with a free and open implementation.

        OTOH, it's been more than 25 years since the first true distributed OS was announced, and the idea hasn't exactly taken the world by storm.

  • 80% (Score:1, Interesting)

    by Anonymous Coward

    with 80% of site owners not aware that they have been compromised

    Wait. So 20% of site owners know their site has been compromised and they haven't done anything about it and are still serving up malware? Sounds to me like someone's making up statistics.

  • Yes... (Score:3, Interesting)

    by SigmundFloyd ( 994648 ) on Friday January 25, 2008 @11:00AM (#22182126)

    Sophos claims that they are detecting 6,000 new sites daily that have been compromised to serve malware
    ...but do they run Linux?
    • by i*rod ( 1021795 )
      Per Netcraft, Sophos.com is running Linux and Apache (checked 27-Jan-2008, 213.31.172.77, SOPHOS).
    • by Jake96 ( 69645 )

      Yes, some do - and it's not a knock on Linux, either.

      I work at a small webhost. We're 100% Linux, and have somewhere in the low hundreds of thousands of sites on about forty servers. I come across a compromised site about every other day, and those are the ones that are making themselves obvious - malicious javascript, form abuse, SQL injections, etc. Being on Linux servers has nothing to do with how secure the sites are. The users pick their own passwords and manage their own content, and the sites t

  • Somebody should warn 3M that they are next. I'm sure they would want to prepare. Ok, sorry I'll get my coat.
  • Vendor FUD or Real? (Score:4, Interesting)

    by a-zarkon! ( 1030790 ) on Friday January 25, 2008 @12:46PM (#22183860)
    I for one would like some description of how they're detecting these 6,000 new sites per day. Also, what are they considering a website? Do they include bot systems that are configured to listen on port 80 as part of the worm propagation and command/control? That's not really a website in my opinion, but it may be in theirs. It would be great if they published a list of the 42,000 new websites they have discovered over the past 7 days, you know, just to back up their claim. Wouldn't hurt to notify the owners of those sites that they've got a problem.

    Absent more detail, I am calling shenanigans on this statistic, Sophos, and the Register. I am soooo sick of the FUD.

    Harumph!

    • It is always wise to be on guard against potential FUD, but you also have to realize that Sophos is a for-profit company, and it is highly unlikely that they would publish the very list of information their for-profit business is built on. It would be like asking Coke to publish their secret recipe.....
  • Why can't a bunch of white hats get together in some country with lax or missing Internet laws and make a virus that SLOWLY propagates throughout the Internet looking for VERY OLD vulnerabilities, infect those machines, download the patches and turn on auto-update, maybe scan the local network and then outside for a week or so, then alert the owner with a polite pop-up and background change telling them they've been infected by at least one virus and should get some AV and some patches and maybe even list s
    • Oh, and it would probably kill most/all of the bot nets.
      • by JavaRob ( 28971 )

        Oh, and it would probably kill most/all of the bot nets.

        No -- many of the bots nowadays lock down the PC themselves (once they're in...) to keep it "safe" from competing bots. They even actively remove other bots when they can manage it.

        As for the idea, though... I think about that as well. Even just getting onto computers that haven't been compromised by a really effective bot yet (as I mentioned above) would be a big step.

        Alas, most of the people talented enough to write such a thing are probably either:
        * well-employed enough that they don't want to risk
