Up To 9% of a Company's Machines Are Bot-Infected
ancientribe sends in a DarkReading piece on the expanding footprint of small, targeted botnets in enterprises. "Bot infections are on the rise in businesses, and most come from botnets you've never heard of and never will. Botnet researchers at Damballa have found that nearly 60 percent of bot infections in organizations are from bot armies with only a handful to a few hundred bots built to target a particular organization. Only 5 percent of the bot infections were from big-name botnets, such as Zeus/Zbot and Koobface. And more businesses are getting hit: 7 to 9 percent of an organization's machines are bot-infected, up from 5 to 7 percent last year, according to Damballa. ... [Damballa's] Ollmann says many of the smaller botnets appear to have more knowledge of the targeted organization as well. 'They are very strongly associated with a lot of insider knowledge...and we see a lot of hands-on command and control with these small botnets,' he says. ... Ollmann says botnets of all sizes are also increasingly using more and different types of malware rather than one particular family in order to evade detection. 'Most botnets, even small ones, have hundreds of different pieces of malware and families in use..."
egress filtering (Score:4, Interesting)
The solution is egress filtering: block all traffic going out to the internet from desktop computers. Then provide a proxy server (HTTP and SOCKS) users can use to get what they need on the net. The proxy server must be a filtering server--the sort that keeps a list of known malware sites and botnet controllers, so that it can automatically block them.
With this in place, users will still be able to get what they need from the net, but 99% of bots will be stopped.
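The core of such a filtering proxy is just a blocklist lookup before the connection is allowed. A minimal sketch in Python (the domains and list here are invented placeholders, not a real threat feed):

```python
# Toy sketch of a proxy-side blocklist check. The domains below are
# placeholders for illustration; a real deployment would pull a
# regularly refreshed feed of known C&C hosts and log every hit.

BLOCKLIST = {"evil-cnc.example", "botnet-controller.example"}

def is_blocked(host: str, blocklist: set = BLOCKLIST) -> bool:
    """Return True if host or any parent domain is on the blocklist."""
    host = host.lower().rstrip(".")
    parts = host.split(".")
    # Check the host itself and every parent domain, so that
    # "update.evil-cnc.example" is caught by "evil-cnc.example".
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts)))
```

A production proxy (Squid, for instance) does this with ACLs rather than application code, but the logic is the same: no match, no connection.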
machine malware infections (Score:5, Interesting)
Half of Fortune 100 companies compromised by new information stealing Trojan [blogspot.com]
"Security tool designed to stealthy run on winnt based systems (win2k to winvista) and to stealthy and efficiently spread with 3 spreaders, which were specially designed and improved compared to already known public methods.[sic]" The three spreaders are MSN, USB, and P2P. Listed P2P networks were "ares, bearshare, imesh, shareaza, kazaa, dcplusplus, emule, emuleplus, limewire.[sic]"
Might have to resort to what many schools do? (Score:3, Interesting)
It seems like educational institutions have some of the biggest problems with system tampering/hacking/infections, since they're exposed to thousands of students each year who have attitudes of "Who cares? Not MY computer anyway!" and who often think it's a challenge and *fun* trying to mess up the system in question. Unlike hackers trying to infect you with malware over the Internet from some other country, these people have full PHYSICAL access to the computers.
So how do they manage? Many schools I know have things configured so their workstations get re-imaged nightly from master images on a server. Any unauthorized changes made to the computer only last until that nightly maintenance runs, at the longest. (An admin might re-image a workstation even more quickly than that if he/she realizes it has an issue.)
I could see large businesses resorting to this, as well - if they're starting to encounter risks as aggressive as bots targeted to their particular businesses.
Re:Education (Score:4, Interesting)
Screw educating, this situation calls for whitelisting and non-administrator privileges.
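In practice, whitelisting comes down to checking a binary against an approved set before it runs. A toy sketch (the "program" contents here are made up; real enforcement lives in the OS, e.g. Windows Software Restriction Policies, not in userland code like this):

```python
import hashlib

# Toy application-whitelist check: a binary may run only if its
# SHA-256 appears in the approved set. The hashes are computed on
# the fly for the demo rather than taken from any real allowlist.

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def may_execute(binary: bytes, allowlist: set) -> bool:
    return sha256_of(binary) in allowlist

# Example: build an allowlist from known-good binaries, then check.
approved = {sha256_of(b"known-good-program")}
```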
Mod parent up. (Score:5, Interesting)
I'm having a lot of trouble believing some of the claims in that article.
600 botnets
5% of 600 is 30. So only 30 out of 600 were "big-name"? That doesn't sound like those "big-name" ones are all that big.
60% of 600 is 360. So their tiny sample found 360 instances of NEW viruses/worms/trojans? I find it very difficult to believe that there are that many sites with custom infections.
Which leaves 210 infections that are not custom and not "big-name". How did those sites manage that? In my experience, if some site is getting infected by less virulent code, it's also infected by the more virulent code.
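Taking the 600-infection figure at face value, the arithmetic above checks out:

```python
# Sanity-check the percentages against the 600-infection sample
# quoted above (600 is the commenter's reading of the article).

total = 600
big_name = round(0.05 * total)   # "big-name" botnets: 5%
custom   = round(0.60 * total)   # small, targeted botnets: 60%
other    = total - big_name - custom

print(big_name, custom, other)   # 30 360 210
```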
Which makes me question how those sites are selected for them to investigate. NONE of them had decent anti-virus practices?
Whoa! I'd think that they're using a different definition of "botnet" than the one I'm familiar with. Of course having more than one machine is more efficient. If nothing else, that one machine is a "single point of failure" that can be re-imaged at any time.
I don't see how those two statements support each other. What knowledge do they need? IP ranges, routers, gateways and servers.
Which they cannot possibly do if they controlled 40 or 50 hosts. Or 400 or 500. Etc. Bullshit.
Again there is nothing to support those statements.
How can it be "specific to the host being targeted"?
Aren't "bots" always hardcoded with the "command and control channel"? Such as "use IRC" and "connect to this generated list of sites for updates".
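That "generated list of sites" is typically a domain generation algorithm: bot and botmaster share a seed, so both can compute the day's rendezvous domains without hardcoding a single C&C address. A hedged sketch of the idea (the seed and the ".example" TLD are invented for illustration):

```python
import hashlib
from datetime import date

# Toy domain generation algorithm (DGA). Both the bot and its
# controller run this with the same shared seed, so the bot can find
# its command-and-control channel even if individual domains are
# taken down. Seed and TLD here are invented, not from any real bot.

def generate_domains(seed: str, day: date, count: int = 5) -> list:
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(material).hexdigest()
        domains.append(digest[:12] + ".example")
    return domains
```

The defender's problem is visible right in the sketch: the bot only needs one of the day's domains to resolve, while a blocklist has to catch all of them.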
Damn "malware kids". Get off my lawn!
Damn! Not only are they "more automated" but they also have "a lot of hands-on command and control".
Pure
Marketing
Fluff