Security IT

Botnet Targets Web Sites With Junk SSL Connections

angry tapir writes "More than 300 Web sites are being pestered by infected computers that are part of the Pushdo botnet. The FBI, Twitter, and PayPal are among the sites being hit, although it doesn't appear the attacks are designed to knock the sites offline. Pushdo appears to have been recently updated to cause computers infected with it to make SSL connections to various Web sites — the bots start to create an SSL connection, disconnect, and then repeat." SecureWorks's Joe Stewart theorizes that this behavior is designed to obscure Pushdo's command and control in a flurry of bogus SSL traffic.
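
As a rough illustration of the behavior described above, a site on the receiving end could look for sources that repeatedly open very short-lived connections to port 443 and send almost no data. The flow-record format, field names, and thresholds in this sketch are invented for the example and are not from the article.

    # Hedged sketch: flag sources showing the "start an SSL handshake,
    # disconnect, repeat" pattern. The flow records here are hypothetical.
    from collections import defaultdict

    # (src_ip, dst_port, duration_seconds, bytes_from_client) -- made-up records
    flows = [
        ("203.0.113.7", 443, 0.2, 120),
        ("203.0.113.7", 443, 0.1, 96),
        ("198.51.100.4", 443, 35.0, 48211),
        ("203.0.113.7", 443, 0.3, 110),
    ]

    def suspicious_sources(flows, min_hits=3, max_duration=1.0, max_bytes=512):
        """Count aborted-looking TLS connections per source; flag repeat offenders."""
        counts = defaultdict(int)
        for src, port, duration, nbytes in flows:
            if port == 443 and duration <= max_duration and nbytes <= max_bytes:
                counts[src] += 1
        return {src: n for src, n in counts.items() if n >= min_hits}

    print(suspicious_sources(flows))  # {'203.0.113.7': 3}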

Comments Filter:
  • What it probably is? (Score:3, Interesting)

    by Anonymous Coward on Monday February 01, 2010 @11:09PM (#30991104)

    Probably one of a few things:
    1) They are looking for a particular vuln to make their bot bigger.
    2) They are just testing a DoS.
    3) They are actually conducting a DoS.
    4) They are trying to make some sort of name for themselves.
    5) Combination of the above.

    My money is mostly on 1, and some sort of bug in the program causing it to spam the same boxes over and over.

  • SSL traffic (Score:3, Interesting)

    by shird ( 566377 ) on Monday February 01, 2010 @11:13PM (#30991134) Homepage Journal

    Do they realise that SSL traffic causes a higher load on the server than a regular request? This would be an indication it is trying to bring the site down.

    I don't see how sending packets to 'major websites' disguises the real communications in any way. Just filter those requests. The more 'major' the web site for the garbage packets, the easier it is to distinguish them from the real packets.

  • Up to something? (Score:4, Interesting)

    by toleshei ( 749993 ) on Monday February 01, 2010 @11:46PM (#30991346)
    "Site owners "would just see weird connections that don't seem to make sense," he said. "They look like they're trying to start an SSL handshake, but it comes in malformed and doesn't ever send anything after that first handshake attempt."" Is it possible that they've found a flaw in a specific Systems handling of SSL and are trying to see if the flaw exists elsewhere in an attempt to produce an exploit? I'm not really a security guy, but it seems like they're up to something specific. Otherwise why use SSL exclusively? wouldn't they want to diversify their requests?
  • Re:SSL traffic (Score:5, Interesting)

    by girlintraining ( 1395911 ) on Monday February 01, 2010 @11:59PM (#30991408)

    Do they realise that SSL traffic causes a higher load on the server than a regular request? This would be an indication it is trying to bring the site down.

    Yes, they do. They also don't care. Most botnet authors are self-taught, or only college educated, and are not experienced developers. They don't know how to obscure their creation's activity, because they lack a full understanding of network security. Which is understandable: that isn't in the SDK documentation and example code. Because they lack the skillset necessary to create a protocol resistant to traffic analysis, they go the other way: flood all the connections and hope those analyzing the logs decide it's not worth the effort to find the needle in the haystack. They know it can be tracked -- they just don't feel it's worth the effort to learn how to do it right, when doing it wrong gets them to payday faster and with only a minute amount of additional risk.

  • Re:SSL traffic (Score:4, Interesting)

    by JWSmythe ( 446288 ) <jwsmythe@nospam.jwsmythe.com> on Tuesday February 02, 2010 @12:11AM (#30991458) Homepage Journal

        I can honestly say, with experience, that https only takes a trivial amount more CPU time than a http request.

        The honest references you will find showing that https was so much heavier than http date from when the blazing fast webservers ran at 133MHz.

        You're in more danger of the DDoS filling up your pipe than of it bringing a server to its knees. Bringing the server down could be accomplished just as easily against an http server. That is, unless some genius decided that they needed an entire server farm for http but only one or two machines for https, which would definitely qualify it as "weak".

        The folks running the servers should be able to deploy countermeasures of some sort. If a number over some acceptable threshold are illegitimate requests, automatically block them. It's easy enough on a *nix box. I'm not talking about anything in the webserver itself either. The webserver should be able to initiate something as simple as an iptables/ipfilter rule. It's amazing how useful those can be, and if the threshold is calculated appropriately, it won't even bother legitimate traffic.

        You are right though, I don't see how these would disguise anything. If you have a list of places that are targets, that makes it more noticeable, not less, even if it is the CnC machine, or a drone.
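
    A minimal sketch of the threshold-then-block idea from the comment above, assuming a Linux box with iptables available and root privileges; the threshold and the source of offending IPs are made up for illustration and would really come from web-server or firewall logs.

    # Hedged sketch of "if too many bogus requests, add a firewall rule".
    # Assumes Linux, iptables in PATH, and root; the inputs are hypothetical.
    import subprocess
    from collections import Counter

    def block_ip(ip):
        """Drop all further traffic from `ip` with a plain iptables DROP rule."""
        subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"], check=True)

    def apply_threshold(bad_request_ips, threshold=50):
        """Block any source seen making more than `threshold` junk requests."""
        for ip, count in Counter(bad_request_ips).items():
            if count > threshold:
                block_ip(ip)

    # Hypothetical usage: source IPs pulled from malformed-handshake log lines.
    apply_threshold(["203.0.113.7"] * 120 + ["198.51.100.4"] * 3)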

  • by SlappyBastard ( 961143 ) on Tuesday February 02, 2010 @12:24AM (#30991532) Homepage
    But, it does apparently make a very good smoke screen for a good offense.
  • Entropy depletion (Score:5, Interesting)

    by xenocide2 ( 231786 ) on Tuesday February 02, 2010 @12:50AM (#30991664) Homepage

    SSL/TLS at its core generates "session keys" for communication: strings of random characters. It's possible they're trying to deplete the SSL servers of true entropy for some undisclosed attack against the PRNG, for example.
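
    As a back-of-the-envelope sketch of that depletion idea: assume each handshake's ServerHello pulls 28 fresh random bytes and the classic /dev/random pool holds 4096 bits. Both figures are assumptions for illustration, and a server drawing from /dev/urandom or a userspace PRNG would not run dry this way.

    # Rough arithmetic only; the per-handshake byte count and pool size are assumed.
    POOL_BITS = 4096            # classic Linux /dev/random pool size (assumption)
    BYTES_PER_HANDSHAKE = 28    # fresh random bytes per ServerHello (assumption)

    handshakes_to_drain = POOL_BITS // (BYTES_PER_HANDSHAKE * 8)
    print(handshakes_to_drain)  # ~18 handshakes, if nothing refills the pool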

  • Re:SSL traffic (Score:3, Interesting)

    by Anonymous Coward on Tuesday February 02, 2010 @12:57AM (#30991690)

    [Citation needed] The guy that took over Torpig has some very nice things to say about the quality of the logging info, which suggests the complete opposite: botnet developers are damn good and produce a better product than most code-monkeys.

  • Re:Entropy depletion (Score:2, Interesting)

    by Anonymous Coward on Tuesday February 02, 2010 @12:59AM (#30991708)
    Replying anon because I voted it up. Anyway, it's amusing to me (a security geek) that so far only one person has gotten this; it's a pretty obvious reason (assuming of course the SSL attack is deliberate and actually aimed against these sites). I'd be curious to know if the attacker is collecting data and perhaps running Randomness Tests [wikipedia.org] against the results to see if this connection flooding is having any effect.
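
    For a flavor of what such a check might look like, here is a sketch of the simplest of those tests, the NIST monobit (frequency) test, run over a buffer of random bytes; the sample below is generated locally rather than captured from any handshake.

    # Hedged sketch of a basic monobit/frequency randomness check.
    # Real captured ServerHello randoms would replace the local sample used here.
    import math
    import os

    def monobit_p_value(data: bytes) -> float:
        """Two-sided p-value of the NIST monobit test over `data`."""
        ones = sum(bin(b).count("1") for b in data)
        n = len(data) * 8
        s_obs = abs(2 * ones - n) / math.sqrt(n)
        return math.erfc(s_obs / math.sqrt(2))

    sample = os.urandom(1024)       # stand-in for harvested handshake randoms
    print(monobit_p_value(sample))  # p-values far below ~0.01 would look suspect
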
  • Re:From TFA (Score:5, Interesting)

    by fm6 ( 162816 ) on Tuesday February 02, 2010 @01:19AM (#30991812) Homepage Journal

    Some of the malware I've encountered lately (I've got one system unusable until I get around to reinstalling the OS) is very sophisticated indeed. I would admire the designers, if I didn't so badly want them dead.

    Does anybody else miss script kiddies?

  • Re:Entropy depletion (Score:5, Interesting)

    by bobstreo ( 1320787 ) on Tuesday February 02, 2010 @01:29AM (#30991870)

    Don't think it's that complex. From June 2009:
    http://isc.sans.org/diary.html?storyid=6601 [sans.org]

    Yesterday an interesting HTTP DoS tool was released. The tool performs a Denial of Service attack on Apache (and some others, see below) servers by exhausting available connections. While there are a lot of DoS tools available today, this one is particularly interesting because it holds the connection open while sending incomplete HTTP requests to the server.

    In this case, the server will open the connection and wait for the complete header to be received. However, the client (the DoS tool) will not send it and will instead keep sending bogus header lines which will keep the connection allocated.
    The initial part of the HTTP request is completely legitimate:

    GET / HTTP/1.1\r\n
    Host: host\r\n
    User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)\r\n
    Content-Length: 42\r\n

    After sending this the client waits for a certain time – notice that it is missing one CRLF to finish the header, which is otherwise completely legitimate. The bogus header line the tool sends is currently:

    X-a: b\r\n

    This obviously doesn't mean anything to the server, so it keeps waiting for the rest of the header to arrive. Of course, this all can be changed, so if you plan to create IDS signatures keep that in mind.

    According to the web site where the tool was posted, Apache 1.x and 2.x are affected as well as Squid, so the potential impact of this tool could be quite high considering that it doesn't need to send a lot of traffic to exhaust available connections on a server (meaning, even a user on a slower line could possibly attack a fast server). Good news for Microsoft users is that IIS 6.0 or 7.0 are not affected.

    At the moment I'm not sure what can be done in Apache's configuration to prevent this attack – increasing MaxClients will just increase requirements for the attacker as well but will not protect the server completely. One of our readers, Tomasz Miklas said that he was able to prevent the attack by using a reverse proxy called Perlbal in front of an Apache server.

    We'll keep an eye on this, of course, and will post future diaries or update this one depending on what's happening. It will be interesting to see how/if other web servers as well as load balancers are resistant to this attack.
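
    The diary above doesn't settle on a fix; one possible mitigation, sketched here under assumptions (the timeout value, the listening port, and the idea of fronting Apache with a small shim are all illustrative), is simply to drop any client whose request headers never finish arriving within a deadline.

    # Hedged mitigation sketch for the slow-header behavior described above:
    # a tiny front-end that drops clients whose header block (ending in a blank
    # line) does not arrive within HEADER_TIMEOUT seconds. Illustrative only;
    # a real deployment would use a hardened reverse proxy instead.
    import asyncio

    HEADER_TIMEOUT = 10  # seconds allowed for the complete header block (assumed)

    async def handle(reader, writer):
        try:
            header = await asyncio.wait_for(reader.readuntil(b"\r\n\r\n"), HEADER_TIMEOUT)
        except (asyncio.TimeoutError, asyncio.IncompleteReadError, asyncio.LimitOverrunError):
            writer.close()   # slow, truncated, or oversized header: drop it
            return
        # ... hand `header` and the rest of the stream to the real web server here ...
        writer.close()

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    # asyncio.run(main())  # left commented out in this sketch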

  • Huh? (Score:4, Interesting)

    by guyminuslife ( 1349809 ) on Tuesday February 02, 2010 @07:43AM (#30993430)

    I don't get it. Could someone please explain this to me?

    If they're trying to disguise their traffic to the command-and-control center, how does this help? If you're an investigator and you see a lot of malformed requests from a particular host, it's like the infected computers are advertising themselves as zombies. And if they're sending these requests to major web sites, how does this disguise the requests they're making to the (presumably non-major) control center? Couldn't you just say, "Well, this computer made 300 malformed SSL requests to Facebook, Twitter, et cetera, and one malformed request to some random host nobody's heard of, so let's find that guy!"

    I'm seriously confused.

  • Re:Entropy depletion (Score:3, Interesting)

    by crypticwun ( 1735798 ) on Tuesday February 02, 2010 @11:19AM (#30995586)

    1) The code function does NOTHING with any data returned by the server.
    2) This version of pushdo is using SSLv3 to phone home (HTTP over SSL) to its C2 (Command & Control).
    3) When looking purely at netflow records or using tcpdump/wireshark, you will see 30+ SSL connections taking place at once. Only 1-2 of those connections are to the C2.
    3.5) Many admins don't set up matching PTR records in DNS, so you won't easily be able to map back the IPs to the "common"/well-known hostnames.
    4) ... ?
    5) profit!
    The idea is to make it HARD, not impossible, to identify the C2 systems. Note well that the C2s might never connect back to the botnet client systems. Instead, another tier of slightly more disposable hosts is likely to perform that function.
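
    A small sketch of the point-3.5 triage idea above: PTR-resolve the destination IPs seen in the flood and set aside the ones that don't map back to a recognizable name. The IP addresses and the "recognizable" suffix list are placeholders, not real data.

    # Hedged sketch of reverse-DNS triage: destinations with no PTR record or an
    # unfamiliar name get flagged for a closer look. All inputs are placeholders.
    import socket

    KNOWN_SUFFIXES = (".twitter.com", ".paypal.com", ".fbi.gov")  # illustrative only

    def triage(dst_ips):
        """Split destinations into (looks well known, needs a closer look)."""
        known, unknown = [], []
        for ip in dst_ips:
            try:
                name = socket.gethostbyaddr(ip)[0]
            except (socket.herror, socket.gaierror):
                unknown.append((ip, None))   # no PTR record at all
                continue
            (known if name.endswith(KNOWN_SUFFIXES) else unknown).append((ip, name))
        return known, unknown

    # Hypothetical usage with documentation-range addresses:
    print(triage(["192.0.2.10", "198.51.100.25"]))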

"If it ain't broke, don't fix it." - Bert Lantz

Working...