Cybercriminals Building New, Stealthier Networks

ancientribe writes "Cybercriminals are adopting a new method, called 'fast-flux,' of hiding and sustaining their malicious websites and botnet infrastructures so they'll be harder to detect, according to an article in Dark Reading. In the past few months, the criminal organizations behind two infamous malware families, Warezov/Stration and Storm, have separately moved their infrastructures to so-called fast-flux service networks. The article says the bad guys like fast-flux not only because it keeps them up and running, but also because it's more efficient than traditional methods of infecting victims' machines." I'm not exactly sure why this is new or different from the better-known open relay proxy networks.
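As Todd Knarr's comment below explains, fast-flux works by rotating a domain's A records across a pool of compromised hosts while advertising very low DNS TTLs. A rough way to observe that from the outside, sketched with the dnspython package (the domain name is a placeholder, not a real flux domain):

    # Sketch: poll a domain's A records to observe fast-flux-style rotation.
    # Requires the dnspython package; "flux.example" is a placeholder.
    import time
    import dns.resolver

    def watch(name, polls=5, interval=60):
        for _ in range(polls):
            answer = dns.resolver.resolve(name, "A")
            ips = sorted(rr.address for rr in answer)
            # Flux domains typically return a changing set of compromised-host
            # IPs with a TTL of only a few minutes.
            print("TTL=%5d  A records: %s" % (answer.rrset.ttl, ips))
            time.sleep(interval)

    watch("flux.example")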
  • So, in the end (Score:2, Interesting)

    by vivaoporto ( 1064484 ) on Wednesday July 18, 2007 @09:35AM (#19899955)
    These criminals are making "smarter" * use of the enormous potential of these hundreds of thousands of homogeneous (or similar enough) connected machines than most companies out there do. It is time for 1) Microsoft and its users to get their act together and work on better security for their machines, and 2) someone to realize the incredible potential of all this "dark" bandwidth and processing power and put it to good use. Criminals are showing it is possible; all it needs is some legitimate application.

    * Smart but immoral and illegal. I, for one, don't condone or endorse their actions, and think they are nothing but vile criminals.
  • by Control Group ( 105494 ) * on Wednesday July 18, 2007 @09:50AM (#19900159) Homepage
    I am not a networking guru (IANANG, copyright 2007, me, all rights reserved), so I'd appreciate somebody setting me straight on this if necessary.

    But I don't really see how blocking port 80 would be an effective way to fight this sort of thing. There's nothing special about port 80 aside from it being the default HTTP port. Unless the victims are typing the URL into their address bar, I don't see any reason the mother ship couldn't have bots listen on another port. I mean, the machine is already owned, so it's not like opening up port 43783 is difficult. And I can't help believing that most - if not all - people going to these sites are clicking links, not typing addresses.

    So you close off port 80, and anyone running a legit (well, probably not legit, given the TOS of most ISPs, but at least not malicious) web server out of their house/apartment/dorm room can no longer easily direct people to it. Meanwhile, the malicious sites are slowed down only for as long as it takes some jackass to change one constant in one piece of code.

    Unless, of course, there's some other factor I'm unaware of making it more difficult to reach an http host over something other than port 80.
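    For what it's worth, the point above is easy to demonstrate: nothing in HTTP cares which port is used. A minimal sketch with Python's standard library, reusing the arbitrary port number from the comment:

        # Minimal sketch: plain HTTP served on an arbitrary high port.
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        PORT = 43783  # any free port works; 80 is merely the default
        server = HTTPServer(("", PORT), SimpleHTTPRequestHandler)
        # A link like http://<host>:43783/ reaches this just as easily as port 80.
        server.serve_forever()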
  • Re:Block TCP Port 80 (Score:2, Interesting)

    by Sobrique ( 543255 ) on Wednesday July 18, 2007 @09:52AM (#19900183) Homepage
    I take it you mean except for the IANA-assigned port number?

    How about outbound firewall and proxy configurations?

  • Re:Block TCP Port 80 (Score:2, Interesting)

    by InsaneMosquito ( 1067380 ) on Wednesday July 18, 2007 @09:52AM (#19900185)
    Charter.net blocks port 80. It was a PITA to figure out why I couldn't connect to my webserver from outside the Charter network, while inside their network I could just fine. Once I figured it out, though, it was as simple as moving the webserver to a different port. I picked 443 because they allow secure websites. From there I just set up a little domain forwarding/cloaking so that end users never see they are connected on 443, and the site doesn't use SSL; it's not needed for the type of site I have hosted.
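    As a rough illustration of the workaround described above (a sketch, not the poster's actual setup), plain HTTP binds to port 443 like any other port; clients just have to follow an explicit http:// URL rather than an https:// one:

        # Minimal sketch: non-SSL HTTP bound to port 443.
        # Binding a port below 1024 usually requires root/administrator rights.
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        server = HTTPServer(("", 443), SimpleHTTPRequestHandler)
        # Reachable as http://<host>:443/ ; an https:// URL would attempt a
        # TLS handshake against plain HTTP and fail.
        server.serve_forever()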
  • Re:Block TCP Port 80 (Score:3, Interesting)

    by CastrTroy ( 595695 ) on Wednesday July 18, 2007 @10:02AM (#19900299)
    I've never understood why people want to run a webserver on their home computer over a cheap cable/DSL connection. I tried it for a while, but between the cost of the extra computer, the cost of the extra electricity, the trouble of setting up all the server software on my own, the trouble of dealing with changing IPs, and all the other wonderful cable ISP network oddities, I found it easier to just pay a cheap monthly fee for a shared hosting account. It's nice to run a home server for some things, but if it's going to be used by a lot of people and accessible from outside your home, then it's way easier to just pay for hosting. That's my opinion, anyway.
  • by Control Group ( 105494 ) * on Wednesday July 18, 2007 @10:27AM (#19900699) Homepage
    *shrug*

    Randomly select a different port each time you connect to the zombie. If you're really worried about users running netstat to check their open ports (and I suspect that, by a wide margin, zombied machines are owned by people who don't even know the CLI exists, much less run network diagnostic tools from it), then have it only open the port for ten minutes every hour. Windows, by default, syncs its clock via NTP weekly, so you can be reasonably sure that your zombies are synced closely enough for that to work. Round-robin assign the ten-minute window to the zombies (xx:00 - xx:09, xx:01 - xx:10, xx:02 - xx:11, etc.). During that window, you use the zombie to host content, and you can push a listen-port update. At any given time, most of your zombies are listening on the same port (they have to be, or your victims can't connect to your content), but blocking that port will only be effective for however long you determine. How fast can ISPs identify a rogue port and block it?

    If my experience with spam is any indication, the linked sites go down almost as fast as the spam comes in, but that's (apparently) not a problem for the spammers. So you rotate ports every two or three days.

    And this is just the scheme I've come up with off the top of my head in less than a minute.

    Come to think of it, you're already executing arbitrary code on the zombied machine. Have each zombie determine for itself when it can listen on its assigned port, with a minimum frequency and duration set, and with a bias toward times when the user isn't at the console. When the window opens, step one is to notify the mother ship that the machine is active.

    There are probably holes in this scheme, but I don't see the problem as being intractable. I do see any effort to just block port 80 as being naive (at best). I don't think ISPs can respond fast enough to block a new port every couple days, but perhaps I'm wrong about that.
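    To make the timing arithmetic in that scheme concrete, here is one way the window-and-rotation math could look. This is a hypothetical illustration of the comment's idea, not code from any real botnet; the rotation interval and mixing constant are made up:

        # Hypothetical sketch: clock-synced ten-minute listening windows plus a
        # listen port that rotates deterministically every couple of days.
        import time

        WINDOW_MINUTES = 10           # each zombie listens 10 minutes per hour
        ROTATION_SECONDS = 2 * 86400  # advertised port changes every two days

        def in_listen_window(zombie_index, now=None):
            # Round-robin: zombie k's window starts at minute (k mod 60).
            minute = time.gmtime(now if now is not None else time.time()).tm_min
            start = zombie_index % 60
            return (minute - start) % 60 < WINDOW_MINUTES

        def current_port(now=None):
            # Deterministic port for the current rotation period (1024-65535),
            # so all clock-synced zombies agree without extra communication.
            period = int((now if now is not None else time.time()) // ROTATION_SECONDS)
            return 1024 + (period * 40503) % (65536 - 1024)

        print("zombie 7 in window now?", in_listen_window(7))
        print("port for this period:", current_port())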
  • Re:Block TCP Port 80 (Score:5, Interesting)

    by Anonymous Coward on Wednesday July 18, 2007 @10:49AM (#19901005)
    With power comes responsibility. If you want unfettered internet access, it's your responsibility to make sure that your participation in this network doesn't cause problems for others. Since most residential internet users have neither the ability nor the intention to shoulder that responsibility, their upstream provider has to find ways to protect other internet users from its customers, because if it doesn't, it will ultimately have to pay for the damage that they do (higher traffic costs, less favorable peering agreements, blacklisting, etc.).

    The net has grown very fast, and so far we've shirked the responsibility issue: customers complain about spam, and when the spammer's provider says it's not their responsibility, they're called a safe haven for spammers. On the other hand, when customers get cut off because their computers are scanning and infecting other machines, they complain that it's not their fault, ask how they're supposed to keep their systems clean without a full-time admin, and insist it's none of the ISP's business as long as the internet access bills are paid.
  • by Todd Knarr ( 15451 ) on Wednesday July 18, 2007 @01:11PM (#19903417) Homepage

    Fast-flux takes advantage of the ability to set extremely low times-to-live (TTLs) on DNS resource records. The shorter the TTL, the faster changes propagate out through the DNS cache network. This suggests a way of neutering fast-flux: implement a minimum TTL in DNS servers. Since most people depend on their ISP's DNS servers rather than going directly to the roots, this would effectively prevent the fast-flux record changes from propagating as fast as they need to in order to be effective. If, for example, an ISP put a 30-minute minimum TTL in place, then the A record for a given name would remain fixed for 30 minutes (modulo the cache filling up and the record being forced out) regardless of what the fast-flux network did; see the sketch after the list below. And since the DNS servers enforcing the minimum typically aren't under the control of either the botnet or the infected machines, there's nothing the botnet operators can do about the situation. As a side effect, this also cuts the load on the DNS network caused by PHBs who order 60-second TTLs on their records "so customers won't be inconvenienced when we change our IP addresses".

    Two glitches with the idea:

    1. Changes to the NS records for a domain are also slowed down. When changing your NS records you need to make the changes but leave the old servers running in parallel long enough for the changes to trickle out to everybody.
    2. Load balancing via round-robin DNS would be broken unless the caching servers also do rotation of the cached records in responses. I think BIND already does that.
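    To illustrate the minimum-TTL idea, here is a sketch of the clamping rule itself (not a drop-in resolver patch; real resolvers expose knobs like Unbound's cache-min-ttl option for this):

        # Sketch: TTL clamping as a caching resolver could apply it on insert.
        MIN_TTL = 1800  # the 30-minute floor suggested above, in seconds

        def cache_ttl(record_ttl, min_ttl=MIN_TTL):
            # The cache stores (and serves) the record as if it carried at
            # least min_ttl, no matter how low the authoritative TTL was.
            return max(record_ttl, min_ttl)

        # A fast-flux A record advertising a 60-second TTL stays pinned for 30
        # minutes, so rapid IP rotation stops propagating to this cache's users.
        assert cache_ttl(60) == 1800
        assert cache_ttl(86400) == 86400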
