Throttling Computer Viruses
An anonymous reader writes "An article in the Economist looks at a new way to thwart computer virus epidemics by focusing on making computers more resilient rather than resistant. The idea is to slow the spread of an epidemic enough to allow effective human intervention, rather than attempting to make a computer completely resistant to attack."
The best way to throttle viruses (Score:2, Interesting)
Re:I have a brilliantly original idea (Score:4, Interesting)
Hope he doesn't patent this (Score:2, Interesting)
I just got an image of him presenting his paper and pointing to each audience member, "patent pending, patent pending, patent pending," à la Homer Simpson.
security vs. privacy (Score:2, Interesting)
But to have this system installed you will be giving someone authorization to see your computer-use profile, giving away your privacy. And it will not detect most viruses that are only interested in destroying your data and/or spamming your friends via email.
Will it work? (Score:2, Interesting)
If it becomes a widespread implementation on the upstream routers, then virus writers will throttle their own connections to 1 per second to evade detection.
This defense was only tested against Nimda, and other viruses may work in other ways. Will it stop email viruses?
It makes the Warhol worm a bit harder to implement, though.
Details, details (Score:2, Interesting)
...where are the details? What kind of heuristics is this 'throttle' using? Does it look for disparate connections, like 100+ individual hosts per minute, or simply for connections outside of a tripwire-esque 'connection profile' for the machine? What protocols does the throttle watch?
I really enjoy the Economist, but this article is so shallow and fluffy, especially for them.
computer history (Score:2, Interesting)
Time for a change of strategy (Score:1, Interesting)
Searching and scanning for new viral signatures is not a final solution. The real solution is a transparent system where the operator recognizes the processes that are running, much as you recognize a familiar face when the mailman comes to the door.
I have so many services/processes running on WinXP that I have no idea what half of them do, but I can't turn them off, or something won't work. Seems like virus authors hardly have to try to find ways to exploit millions of systems with a single outbreak.
To those working on a different solution, thanks in advance.
Re:I have a brilliantly original idea (Score:4, Interesting)
Everyone has two complaints about the software he/she uses: it doesn't have enough features, and it isn't stable enough.
No one accepts that enhancing one leads to a degradation of the other. Cisco has a nice approach (at least they did during my ISP days): there is a feature-rich version and a stability-oriented version. The pick is yours.
Martin
P2P (Score:3, Interesting)
Re:human intervention (Score:3, Interesting)
Personally, I have seen some interesting trojan epidemics on networks that were in no way connected to the Internet. There was a company that was terribly paranoid and allowed Internet use only and exclusively from one particular computer. This way they thought they could overcome the problems with viruses they had had in the past. A not-so-dumb admin dealt with the e-mail, filtering it through antivirus tools before copying it onto a diskette and sending it into the LAN. And you know what? They kept having serious problems with viruses. Some deeper analysis showed that every trojaned e-mail with a corporate cliché in the subject line was the cause of the next epidemic.
Re:Not very sophisticated. (Score:3, Interesting)
OK, this seems to point to the question: Why was the ability to connect to "new" computers at an extremely high rate there in the first place? Is that ability ever utilized to any extent in legitimate, day-to-day operations?
If so, this might cause you some problems, and putting "throttling" in there is a really bad idea. But if this ability isn't used, then maybe the "throttling" should be put in at the OS level.
The only time I can see having this at the OS level being a problem is when you first start up some big iron that needs to connect to thousands of clients. The OS might kill any attempt to do this. But once you've established a semi-regular list of clients, having the OS thwart any attempt to connect to a massive number of "new" machines seems like a good idea.
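For what it's worth, you can answer that question for your own network before anyone turns such a throttle on. Here is a minimal offline check, assuming you can dump outbound connections as "epoch_seconds dest_ip" lines; that log format and the working-set size are made up here for illustration:

#include <stdio.h>
#include <string.h>

#define WORKING_SET 8   /* destinations remembered as "not new" */

int main(void)
{
    char recent[WORKING_SET][64] = {{0}};
    int  next = 0;
    long sec, cur_sec = -1;
    int  new_this_sec = 0, worst = 0;
    char ip[64];

    /* Each input line: "<epoch_seconds> <dest_ip>", one outbound connection. */
    while (scanf("%ld %63s", &sec, ip) == 2) {
        int known = 0;
        for (int i = 0; i < WORKING_SET; i++)
            if (strcmp(recent[i], ip) == 0) { known = 1; break; }

        if (sec != cur_sec) {                 /* a new one-second window */
            if (new_this_sec > worst)
                worst = new_this_sec;
            cur_sec = sec;
            new_this_sec = 0;
        }
        if (!known) {
            new_this_sec++;
            strncpy(recent[next], ip, 63);    /* remember it, evicting the oldest */
            recent[next][63] = '\0';
            next = (next + 1) % WORKING_SET;
        }
    }
    if (new_this_sec > worst)
        worst = new_this_sec;
    printf("worst case: %d connections to new hosts in one second\n", worst);
    return 0;
}

If the number it prints stays at one or two during normal use, a per-second throttle on new destinations wouldn't get in the way; if it is routinely large, the concern above applies and the throttle would have to be tuned or skipped for that machine.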
Very strange indeed (Score:1, Interesting)
Re:Technique (Score:5, Interesting)
Why? Because new viruses are designed to subvert them. I've done it, installing 5 virus scanners to check whether, and how, they detect your virus. (BTW, my virus was a
example
wrong:
-> char *to_infect = "*.com";               /* literal signature, easy to scan for */
right:
-> char boem[8] = "*.c";                    /* writable buffer with room for "om" */
-> int othervariable = 5;                   /* unrelated code in between */
-> char *to_infect = strcat(boem, "om");    /* boem now holds "*.com"; the literal never appears */
I have yet to see a scanner that detects this one. The difference in code size is about 3 extra bytes (assuming you were using strcat anyway), so in today's 500 KB viruses it is negligible.
Heuristics are nice and do have some effect, but they are no solution.
Virus scanning is inherently reactive. The best it can hope to do is repair the damage once it is done. It is of no use whatsoever against online worms.
Not so new: remember syn-cookies? (Score:3, Interesting)
The idea of slowing down an intruder's rate of attack is really not so new. One example is the well-known Linux "syn-cookies" countermeasure to SYN flooding. Syn-cookies prevent the excessive use of connection resources by reserving those resources for connections that have evidently gone through a genuine TCP three-way handshake. This forces the attack to slow down: instead of throwing SYN packets at a host as fast as it can, the attacker now has to do a proper three-way handshake, which means waiting for the associated round-trip times and slows the attack down to the speed of genuine connection attempts.
Now, since the attack has been slowed down to the speed of the genuine users, it competes for connection resources on fair and equal ground with everyone else, which makes it no more successful than any other user at acquiring those resources. That means the rate of attack is no longer quick enough for a resource-starvation attack, and it is reduced to a resource-abuse attack. Since the latter type of attack needs to be employed for a long time to cause significant damage, the risk of being discovered becomes too big to make the attack practical.
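For readers who haven't seen them: a syn-cookie works, roughly, by encoding the connection's identity and a coarse timestamp into the sequence number of the SYN-ACK, so the server keeps no state until the handshake completes. Below is a much-simplified sketch; it is not the exact Linux layout (which also folds in the client's initial sequence number and an MSS index), and mix32() is a toy stand-in for a keyed hash:

#include <stdint.h>

/* Toy stand-in for a keyed hash; a real implementation would use something
 * like SipHash or SHA-1 with a secret chosen at boot. */
static uint32_t secret = 0xdecafbadu;

static uint32_t mix32(uint32_t a, uint32_t b, uint32_t c)
{
    uint32_t h = a ^ (b * 2654435761u) ^ (c * 40503u) ^ secret;
    h ^= h >> 16;
    h *= 2246822519u;
    h ^= h >> 13;
    return h;
}

/* The sequence number we send in the SYN-ACK; no per-connection state is
 * stored.  "tick" is a coarse time counter (e.g. minutes since boot). */
uint32_t make_cookie(uint32_t saddr, uint32_t daddr,
                     uint16_t sport, uint16_t dport, uint32_t tick)
{
    uint32_t ports = ((uint32_t)sport << 16) | dport;
    return (mix32(saddr ^ daddr, ports, tick) & 0x00ffffffu)
           | ((tick & 0xffu) << 24);          /* coarse time in the top byte */
}

/* When the final ACK arrives, ack_seq - 1 must match a recently issued
 * cookie; only then do we allocate real connection state. */
int cookie_valid(uint32_t saddr, uint32_t daddr,
                 uint16_t sport, uint16_t dport,
                 uint32_t ack_seq, uint32_t tick_now)
{
    for (uint32_t back = 0; back < 4; back++)         /* small validity window */
        if (ack_seq - 1 == make_cookie(saddr, daddr, sport, dport,
                                       tick_now - back))
            return 1;
    return 0;
}

Because the cookie can be recomputed from the final ACK, half-open connections cost the server essentially nothing, which is exactly the resource the flood was trying to exhaust.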
Well, this is not exactly the "throttling" countermeasure described in the Economist's article. The countermeasure from the article selectively slows down outgoing connection attempts to "new" hosts, in order to slow the attack further and put genuine users not merely on equal footing with the attack but at a significant advantage. This element of selection may be new; at least I cannot come up with an older example. As others commented before, the selection technique also has its disadvantages:
a) depending on the attack, different kinds of selection methods must be employed to actually single out the malicious connections -- there is no predefinable "catch-all-attacks" selection method
b) depending on the services you run on your network, the effort you have to make to find out how your usage patterns can be discerned from known attack patterns varies.
Re:I have a brilliantly original idea (Score:3, Interesting)
Eventually at some level code needs static buffers. Well-designed programs along with proper code validation techniques ensure a minimal number of errors. Java/C#/language-of-the-month can help software engineering, but by no means are they a panacea.
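Agreed. And the validation part doesn't have to be exotic; as one illustration (the function name and buffer size are invented for this example), copying into a fixed buffer only after checking that the input fits removes the classic overflow:

#include <stdio.h>
#include <string.h>

#define NAMELEN 32

/* Copy src into a fixed-size buffer only if it fits; refuse otherwise. */
static int set_name(char dst[NAMELEN], const char *src)
{
    size_t n = strlen(src);
    if (n >= NAMELEN)          /* would overflow the static buffer: reject */
        return -1;
    memcpy(dst, src, n + 1);   /* includes the terminating '\0' */
    return 0;
}

int main(void)
{
    char name[NAMELEN];
    if (set_name(name, "a string from an untrusted source") != 0)
        fprintf(stderr, "input too long, rejected\n");
    else
        printf("name = %s\n", name);
    return 0;
}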
Hardware or Software? (Score:2, Interesting)
The idea, then, is to limit the rate at which a computer can connect to new computers, where "new" means those that are not on a recent history list.
If the history is implemented in software, what the fuck is to stop a virus from injecting the IP's it wants to attack into the history?
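Presumably nothing, if the history sits in ordinary user-space software a program can write to; the natural place for it is the network stack itself. And if the history is kept small, stuffing it with target addresses mostly just evicts older entries. A rough sketch of what such a hook might look like (the function names, the working-set size, and the queue-length alarm are assumptions for illustration, not details from the article):

#include <stdint.h>

#define WORKING_SET 8      /* recently contacted hosts, never delayed      */
#define ALARM_QUEUE 20     /* a backlog this long probably means infection */

static uint32_t recent[WORKING_SET];
static int next_slot;

static uint32_t queue[ALARM_QUEUE];   /* connection requests being held back */
static int qlen;

static int known(uint32_t ip)
{
    for (int i = 0; i < WORKING_SET; i++)
        if (recent[i] == ip)
            return 1;
    return 0;
}

static void remember(uint32_t ip)
{
    recent[next_slot] = ip;                  /* evicts the oldest entry */
    next_slot = (next_slot + 1) % WORKING_SET;
}

/* Hypothetical in-stack hook for every outbound connection attempt.
 * Returns 1 = send now, 0 = held on the delay queue, -1 = queue full
 * (a sustained burst of new destinations: raise an alarm / block).   */
int throttle_outbound(uint32_t dest_ip)
{
    if (known(dest_ip))
        return 1;                            /* familiar host, never throttled */
    if (qlen < ALARM_QUEUE) {
        queue[qlen++] = dest_ip;             /* released later by the timer    */
        return 0;
    }
    return -1;
}

/* Called once per second: release one held request and mark it familiar. */
void throttle_tick(void (*send_now)(uint32_t ip))
{
    if (qlen == 0)
        return;
    uint32_t ip = queue[0];
    for (int i = 1; i < qlen; i++)
        queue[i - 1] = queue[i];
    qlen--;
    remember(ip);
    send_now(ip);
}

A nice side effect of the delay queue is that on an infected machine it would fill up almost immediately, which by itself makes a decent alarm signal for the human intervention the article is talking about.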
Take advantage of the throttle? (Score:2, Interesting)
For example, intentionally make connections at a decreased rate. It gives you a couple of (probable) advantages: you'd slide by the detection aspect of this (no backlog of connections), and you'd spread slower but could make that work to your advantage, since a slower spread can mean a longer time until detection, which may mean more hosts infected. Also, if this works as the article states, you could eventually make it so that the hosts you were connecting to were -not- throttled (say you're getting ready to propagate a DDoS attack virus).
This would catch most viruses and worms as they are written -now-, but as soon as this is widely deployed, someone will write a virus or worm that sneaks around it by avoiding the behavior it's looking for.
details? (Score:3, Interesting)
Ah yes, well, see, we're going to throttle the network, so that the virus spreads more slowly.
Throttle what? Bandwidth? That wouldn't have much of an effect on virus activity, but it certainly would affect everything else. Connections per second would probably slow down a virus, but would basically shut down SMB and DNS as well.
You better make sure Ridge doesn't hear about this, or we'll be required by law to wear 20 lb. lead shoes everywhere we go, to make it easier to catch running terrorists.
What about datacenters... (Score:3, Interesting)
In the datacenter I work at we handle 2,000 transactions per second per machine on average, with peaks reaching 10,000 transactions per second. Not every transaction requires a new connection because of caching in our software, but we create far more than one new connection per second.
No Replacement for Good Security Practice (Score:4, Interesting)
Now, why don't these things happen? Time. Money. A combination of both. Convenience. Lack of understanding on the part of users.
But the big one is the belief that security is a product that can be purchased, that there is a quick fix out there that will solve all your security ills and hide you from all the bad guys.
Security is a PROCESS. Better yet, it's a combination of processes, relating to employees at all levels of your organization, from the CEO to the custodial service contracted by your property manager. Hell, even building safer software isn't going to help you if your users refuse to use it 'cause it's a pain in the ass. Remember, they believe in the panacea of the "single sign-on". They put their passwords on post-its around their workstations. They keep their contacts (oh help us) in their Hotmail addressbook, regardless of how many 'sploits have been uncovered in Hotmail. They're afraid of computers.
Security is expensive. And it should be, because it has to be done right. You need user participation, on all levels. It requires education and training, and a reduction in ease of use.
There is no magic wand.
--mandi
implement in routers (Score:2, Interesting)