Security

Throttling Computer Viruses 268

An anonymous reader writes "An article in the Economist looks at a new way to thwart computer viral epidemics by focusing on making computers more resilient rather than resistant. The idea is to slow the spread of viral epidemics, allowing effective human intervention, rather than attempting to make a computer completely resistant to attack."
  • by /Wegge ( 2960 ) <awegge@gmail.com> on Friday November 22, 2002 @10:42AM (#4731679) Homepage
    Could you imagine how slow Slashdot would be at one connection per second? How well could this work on high traffic sites?


    If you read the article, you'll see that the limit is on OUTgoing connections, not incoming traffic. In other words, this type of AV effort will not eliminate the Slashdot effect.
  • Link to paper (Score:4, Informative)

    by NearlyHeadless ( 110901 ) on Friday November 22, 2002 @10:43AM (#4731692)
    Here's Williamson's paper on the idea: Throttling Viruses: Restricting propagation to defeat malicious mobile code [hp.com]. I haven't read it yet, but I see one potential problem right away: when you load a web page, you normally make quite a few connections, e.g. one for each image. I'll have to see how he handles that.
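
    From the article, the throttle appears to keep a short list of recently-contacted hosts and delay connections to hosts outside that list, which would answer the many-connections-per-page worry: repeat connections to the same server pass straight through. Here is a minimal sketch of that behaviour; the class name, working-set size, and release rate are my own choices, not necessarily Williamson's.

    ```python
    import time
    from collections import deque

    class ConnectionThrottle:
        """Sketch of a throttle on outgoing connections to *new* hosts.

        Recently contacted hosts sit in a small working set and pass
        immediately; connections to hosts outside the set are queued and
        released at a fixed rate (here, roughly one per second).
        """

        def __init__(self, working_set_size=5, release_interval=1.0):
            self.working_set = deque(maxlen=working_set_size)  # recently contacted hosts
            self.delay_queue = deque()                         # pending connections to new hosts
            self.release_interval = release_interval
            self.last_release = time.monotonic()

        def request(self, host):
            """Return True if the connection may proceed now, otherwise queue it."""
            if host in self.working_set:
                return True                     # e.g. the 2nd..Nth image from the same web server
            self.delay_queue.append(host)
            return False

        def tick(self):
            """Call periodically; releases at most one queued host per interval."""
            now = time.monotonic()
            if self.delay_queue and now - self.last_release >= self.release_interval:
                host = self.delay_queue.popleft()
                self.working_set.append(host)   # oldest entry falls out when the set is full
                self.last_release = now
                return host                     # this connection may now go out
            return None
    ```

    A browser loading one page touches only a handful of distinct hosts, so it barely notices the queue; a worm trying hundreds of new hosts per second backs up behind the one-per-second release rate, which is exactly the slowdown the article is after.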
  • by fractalus ( 322043 ) on Friday November 22, 2002 @10:45AM (#4731710) Homepage
    We've got malware that now disables personal firewall software so as to avoid detection. This throttle might be an effective patch against current viruses, but the next round will simply work around the throttle, if it is applied locally.

    Of course, the article doesn't really say whether this is enforced on the local machine or applied from outside (i.e., at a switch or router). However, by talking about it as an inoculation, it suggests it is really enforced on the local machine.

    It's a good idea, in general, but it has to be user-tweakable, and that means it's virus-tweakable too.
  • by redfiche ( 621966 ) on Friday November 22, 2002 @10:51AM (#4731758) Journal
    And the #2 rule is that hackers are not, so they'll probably find a way to break through your security if they really want to.

    Seriously, this is a whole new way to think about security, and it has a lot of promise. Security systems will never be perfect, and if they are designed never to fail, the consequences of failure are likely to be dire. By managing the consequences of failure, you can best limit the effects of a determined attack. I think this is equally true of electronic security and physical security.

  • by FortKnox ( 169099 ) on Friday November 22, 2002 @11:09AM (#4731907) Homepage Journal
    True, but why do people have to keep writing programs with static buffer sizes?

    I think it isn't that people WRITE programs with static buffers nowadays as much as it is that people who maintain old software don't fix the static buffers.

    Plus, I could also argue about what is more important to the program. A static buffer gives me knowledge of the maximum amount of memory used, if that knowledge is required. Searching is faster in arrays than in linked lists (although inserting, on average, is slower). Don't assume that static buffers are ALWAYS wrong.
  • by jdiggans ( 61449 ) on Friday November 22, 2002 @11:14AM (#4731943)
    The plural of 'virus' (which is what I think you meant by virii) is 'viruses' ... see this [perl.com] for why.

    -j

  • Just secure the code (Score:3, Informative)

    by mao che minh ( 611166 ) on Friday November 22, 2002 @11:17AM (#4731961) Journal
    As systems become more adaptive and proactive against malicious code, so too will viruses evolve against these countermeasures. The next generation of virus writers will be bred in the same computing climate that the future white hats will hail from; there is no reason to think that viruses will not evolve right alongside the platforms that they attack.

    I support the notion that the key to ultimate security lies in the quality of the code. I'll go further and say that open source is the key to reaching the absolute goal of impenetrable code. The open source model is our best bet at ensuring that many, many eyes (with varying degrees of skill and with different intentions) will scan the code for flaws. I just wish that some of the more popular open source projects were more heavily reviewed before their latest builds went up.

  • by Anonymous Coward on Friday November 22, 2002 @01:05PM (#4732652)
    ... but what I get out of it, as for the actual idea (without reading the HP whitepaper), is:

    Limit _new_ connections.
    A web page view will consist of X connections to 1 machine. The first time it's a 'new' connection; the other times it's in the history, so a web page will NOT be affected unless it has a group of image servers and applet servers or popup ads to everywhere under the sun (like some p0rn sites).

    The history can be fairly short. With connections over the last 5 minutes at the 1 per second that do get through, that is only a table of 300 IPs: at 4 bytes each for IPv4 that's 1200 bytes, or at 16 bytes each for IPv6 a 4800-byte table (an index at probably 2 bytes per entry adds another 600 bytes to make searching faster).
    As this can easily be kept in RAM, and it doesn't need to be long-term profiling, privacy issues can be easily controlled.

    If the machine is connecting to 400 different IP addresses per second, then you have either a power user, a netblock port scanner, or a worm,
    and limiting it to contacting 300 machines every 5 minutes would be a good thing (a rough sketch of such a table follows below).

    "In tests it has a 2% fail rate": well, in my neck of the Internet, my ISP's provider has a 3% fail rate in MTR tests, so I don't know whether I would blame the connection filter or just my bad connection to remote parts of the world.

    So, in short, it will fail because it will affect p0rn sites and most/all P2P, and worms will be made to handle it just like they handle anti-virus software now.
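
    A back-of-the-envelope sketch of that 5-minute history table; the structure and names are mine, and the sizes in the comments are just the arithmetic from above:

    ```python
    import time

    class ConnectionHistory:
        """Sketch of a 5-minute history of distinct destination IPs.

        At roughly one new host per second the table holds about 300
        entries: ~1200 bytes of IPv4 addresses (4 bytes each) or ~4800
        bytes of IPv6 addresses (16 bytes each), easily kept in RAM and
        discarded as it ages, so no long-term profiling is needed.
        """

        def __init__(self, window_seconds=300, max_distinct=300):
            self.window = window_seconds
            self.max_distinct = max_distinct
            self.last_seen = {}             # destination IP -> last connection time

        def allow(self, dest_ip):
            """Record an outgoing connection; refuse once too many distinct hosts are seen."""
            now = time.monotonic()
            # forget anything older than the window
            self.last_seen = {ip: t for ip, t in self.last_seen.items()
                              if now - t < self.window}
            if dest_ip in self.last_seen or len(self.last_seen) < self.max_distinct:
                self.last_seen[dest_ip] = now
                return True
            return False                    # 300+ distinct hosts in 5 minutes: throttle or flag
    ```

    A box suddenly talking to 400 different addresses a second blows through the limit almost immediately, while a power user would at worst see connections to brand-new hosts briefly deferred.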
  • Re:Technique (Score:3, Informative)

    by Minna Kirai ( 624281 ) on Friday November 22, 2002 @08:33PM (#4736353)
    heuristic scanning is very ineffective.

    Yes. By definition, heuristics [techtarget.com] can only find some evil programs, not all of them. (If they could find all of them, they'd be algorithms.) Holes will always exist.

    And since virus-scanner software must be widely distributed to all the users it's supposed to protect, the virus author can always test his code against the heuristic until he finds a way to slip past it.

    This suggests an altered business model for anti-virus vendors: start treating their heuristics as a trade secret and don't let them out of the building. Run virus scanning on an ASP model (sketched below).

    Of course, the privacy, network-capacity, and liability problems with that approach are enormous.
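
    A minimal sketch of what the client side of such a hosted (ASP-style) scanner could look like; the endpoint URL and the JSON verdict format are made up for illustration:

    ```python
    import json
    import urllib.request

    SCAN_URL = "https://scanner.example.com/scan"   # hypothetical vendor-hosted endpoint

    def scan_remotely(path):
        """Upload a suspect file for scanning; the heuristics never leave the vendor's building."""
        with open(path, "rb") as f:
            payload = f.read()
        req = urllib.request.Request(
            SCAN_URL,
            data=payload,
            headers={"Content-Type": "application/octet-stream"},
        )
        with urllib.request.urlopen(req) as resp:
            verdict = json.load(resp)               # e.g. {"malicious": true}
        return verdict
    ```

    Every suspect file shipped across the wire is, of course, exactly the privacy and network-capacity problem noted above.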
