Throttling Computer Viruses
An anonymous reader writes "An article in the Economist looks at a new way to thwart computer viral epidemics by focusing on making computers more resilient rather than resistant. The idea is to slow the spread of an epidemic enough to allow effective human intervention, rather than attempting to make a computer completely resistant to attack."
Re:One connection per second? (Score:2, Informative)
If you read the article, you'll see that the limit is on OUTgoing connections, not incoming traffic. In other words, this type of AV effort will not eliminate the slashdot effect.
Link to paper (Score:4, Informative)
This just ups the ante. (Score:2, Informative)
Of course, the article doesn't really say whether this is enforced on the local machine or applied from outside (i.e. at a switch or router). However, by talking about it as an inoculation, it suggests it is really enforced on the local machine.
It's a good idea, in general, but it has to be user-tweakable, and that means it's virus-tweakable too.
Re:I have a brilliantly original idea (Score:3, Informative)
Seriously, this is a whole new way to think about security, and it has a lot of promise. Security systems will never be perfect, and if they are designed never to fail, the consequences of failure are likely to be dire. By managing the consequences of failure, you can best limit the effects of a determined attack. I think this is equally true of electronic security and physical security.
Re:I have a brilliantly original idea (Score:5, Informative)
I think it isn't so much that people WRITE programs with static buffers nowadays as that people who maintain old software don't fix the static buffers.
Plus, I could also ask what matters more to the program. Static allocation gives me knowledge of the maximum size of memory used, if that knowledge is required. Searching is faster in arrays than in linked lists (although replacing, on average, is slower). Don't assume that static buffers are ALWAYS wrong.
Re:Microsoft already does this... (Score:2, Informative)
-j
Just secure the code (Score:3, Informative)
I support the notion that the key to ultimate security lies in the quality of the code. I'll go further and say that open source is the key to reaching the absolute goal of impenetrable code. The open source model is our best bet at ensuring that many, many eyes (with varying degrees of skill and with different intentions) will scan the code for flaws. I just wish that some of the more popular open source projects were more heavily reviewed before their latest builds went up.
the article is lots of fluff (Score:1, Informative)
limit _new_ connections
So a webpage view will consist of X connections to one machine: the first time it's a 'new' connection, the other times it's in the history. A webpage will NOT be affected unless it has a group of image servers and applet servers or popup ads to everywhere under the sun (like some p0rn sites).
The history can be fairly short: connections over the last 5 minutes, at the 1 per second that do get through, is only a table of 300 IPs. At 4 bytes each for IPv4 that's 1200 bytes, or at 16 bytes each for IPv6, a 4800-byte table. (An index at 2 bytes per entry adds another 600 bytes to the table to make searching faster.)
As this can easily be kept in RAM, and it doesn't need to be long-term profiling, privacy issues can be easily controlled.
If the machine is connecting to 400 different IP addresses per second, then you either have a poweruser, a netblock port scanner, or a worm, and limiting it to contacting 300 machines every 5 minutes would be a good thing.
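The mechanism described above can be sketched in a few lines. This is a toy model built only from the assumptions in this comment (1 new connection per second, a 300-entry recent-IP history); the class and method names are illustrative, not the actual implementation from the article:

```python
from collections import deque

class Throttle:
    """Toy outgoing-connection throttle.

    Connections to hosts already in the recent-history table pass
    immediately; connections to new hosts are queued and released at
    most `rate_per_tick` per tick (one tick = one second).
    """
    def __init__(self, rate_per_tick=1, history_size=300):
        self.rate = rate_per_tick
        self.history = deque(maxlen=history_size)  # recently contacted IPs
        self.pending = deque()                     # queued new connections

    def request(self, ip):
        """Return True if the connection may proceed right now."""
        if ip in self.history:
            return True           # known host: no delay
        self.pending.append(ip)
        return False              # new host: wait for a tick

    def tick(self):
        """Once per second, release the allowed number of new connections."""
        released = []
        for _ in range(min(self.rate, len(self.pending))):
            ip = self.pending.popleft()
            self.history.append(ip)
            released.append(ip)
        return released
```

A normal user contacting a handful of new hosts sees at most about a one-second delay per host, while a worm hitting hundreds of addresses per second just piles up a long pending queue, and the queue length itself is a usable detection signal.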
"in tests it has a 2% fail rate", well in my neck of the internet, my isp's provider has a 3% fail rate in MTR tests, i don't know if i would blame the connection filter or just my bad connection to remote parts of the world
So in short, it will fail because it will affect p0rn sites and most/all P2P, and worms will be made to handle it just like they handle anti-virus software now.
Re:Technique (Score:3, Informative)
Yes. By definition, heuristics [techtarget.com] can only find some evil programs, not all of them. (If they could, they'd be algorithms.) Holes will always exist.
And since virus-scanner software must be widely distributed to all the users it's supposed to protect, the virus author can always test his code against the heuristic until he finds a way to slip past it.
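That iterate-until-it-slips-past loop is easy to picture in code. This is a deliberately toy example (the "signature", the single-byte XOR mutation, and the function names are all made up for illustration; real heuristics and real malware encoders are far more involved):

```python
def heuristic_scan(payload: bytes) -> bool:
    """Toy heuristic: flag payloads containing a known-bad byte pattern.
    (Hypothetical signature, purely for illustration.)"""
    return b"EVIL" in payload

def evade(payload: bytes) -> bytes:
    """Attacker loop: because the scanner ships to every desktop, the
    author can run it locally, trying single-byte XOR encodings of the
    payload until the scanner no longer flags the result."""
    for key in range(1, 256):
        candidate = bytes(b ^ key for b in payload)
        if not heuristic_scan(candidate):
            return candidate
    raise RuntimeError("no evading encoding found")

original = b"...EVIL..."
evaded = evade(original)
assert not heuristic_scan(evaded)  # slips past the shipped heuristic
```

A server-side scanner would deny the attacker this offline feedback loop, which is exactly the appeal of the ASP model mentioned below (and exactly what makes its privacy and capacity costs so high).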
This suggests an altered business model for anti-virus vendors: start treating their heuristics like a trade secret, and don't let them out of the building. Run virus scanning on an ASP model.
Of course, the privacy, network-capacity, and liability problems with that approach are enormous.