Security

Throttling Computer Viruses 268

An anonymous reader writes "An article in the Economist looks at a new way to thwart computer viral epidemics by focusing on making computers more resilient rather than resistant. The idea is to slow the spread of viral epidemics enough to allow effective human intervention, rather than attempting to make a computer completely resistant to attack."
  • by batemanm ( 534197 ) <batemanm.gmail@com> on Friday November 22, 2002 @09:34AM (#4731579)
    Okay everyone back to 2400bps modems :-)

  • by ekrout ( 139379 ) on Friday November 22, 2002 @09:35AM (#4731595) Journal
    Start writing secure software!

    I'm not joking. The #1 rule of computer science is that computer scientists are lazy.

    We need to stop working just to accomplish the minimal functionality desired and start testing the hell out of our software to ensure that it's secure.
    • by gorilla ( 36491 ) on Friday November 22, 2002 @09:38AM (#4731612)
      You have to separate computer scientists, who research basic principles, from programmers, who implement those principles in available packages. No computer scientist would recommend that you develop an OS without memory protection, nor try to simulate multiple users on a system without file ownership. It didn't stop Microsoft.
      • Well, marketing runs companies in a free market society, which is why "imperfect" software like Microsoft's is the best selling.

        Specs? Testing? What are those? I've been coding in IT departments, a dot-bomb, a consulting firm, and now for a government contractor. Of those, the current one has the best specs and testing, and it's getting OK, but our new CS post-doc just looked on in horror: "How can you develop in this environment?" He wanted to spend two weeks writing an object model and test plan... hey, this enhancement is due to DEPLOY in six weeks. I told him this environment is the best I've seen in seven years, and he is seriously thinking about a career change or going back into academia. Welcome to the real world.
    • by vidnet ( 580068 ) on Friday November 22, 2002 @09:42AM (#4731669) Homepage
      Yeah ok......starting tomorrow.
    • There's always a hole that cannot be planned for. In complex systems, bugs and leaks are bound to be found, regardless of how much attention you pay.
      Plus, you usually have to balance security with user friendliness (putting on flame-retardant jacket). Simply adding users vs. root is a hassle for your average (home) user. People need to understand security to be willing to put in secure methods. Let's face it, people just want crap to work right now. They turn off security measures (like firewalls) to get something to work (like a game), then don't turn them back on so they don't have to deal with it the next time they try to play that game.
      • by cyborch ( 524661 ) on Friday November 22, 2002 @10:01AM (#4731837) Homepage Journal

        There's always a hole that cannot be planned.

        True, but why do people have to keep writing programs with static buffer sizes? I cannot think of one single acceptable excuse to write a piece of software where a buffer overflow can happen.

        If user input is in any way involved - directly or indirectly - then you need to test it before you accept it! There is no excuse!

        Buffer overflows are not the only security issue with software, but the principle behind preventing them applies to most of the security issues out there...

        So, I have to agree with your parent poster: the people making the software are lazy!

        • by FortKnox ( 169099 ) on Friday November 22, 2002 @10:09AM (#4731907) Homepage Journal
          True, but why do people have to keep writing programs with static buffer sizes?

          I think it isn't that people WRITE programs with static buffers nowadays as much as it is that people who maintain old software don't fix the static buffers.

          Plus, I could also ask what is more important to the program. Static buffers give me knowledge of the maximum amount of memory used, if that knowledge is required. Searching is faster in arrays than in linked lists (although inserting, on average, is slower). Don't assume that static buffers are ALWAYS wrong.
          • by Tim C ( 15259 ) on Friday November 22, 2002 @10:44AM (#4732158)
            Don't assume that static buffers are ALWAYS wrong.

            Indeed - generally, there's nothing wrong with static buffers. If you're going to use them, however, there is absolutely no excuse for not bounds checking access to that buffer. That is, if you know that the buffer can contain say 1000 characters, check anything you write to it to make sure it fits!

            That's most of what's "wrong" with static buffers - that it's too easy to use them incorrectly. It's not entirely the fault of the buffer, though, that it's easily misused.
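            For instance, a minimal sketch of the kind of check being described (the buffer size and names are made up for the example):

            #include <string.h>

            #define BUF_SIZE 1000

            /* Copy untrusted input into a fixed-size buffer, refusing anything
             * that won't fit instead of overflowing. */
            int store_input(char dest[BUF_SIZE], const char *input)
            {
                if (strlen(input) >= BUF_SIZE)   /* leave room for the terminating '\0' */
                    return -1;                   /* reject oversized input */
                strcpy(dest, input);             /* safe: length already checked */
                return 0;
            }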
        • by rossjudson ( 97786 ) on Friday November 22, 2002 @10:17AM (#4731964) Homepage
          Here's a thought. Stop writing programs in languages that HAVE static buffers. Stop writing programs in languages that have memory buffers that the program is free to overwrite. The problem isn't the programmers. What you're saying is that every programmer in the world has to write perfect code every time, and that's never gonna happen. Programs need to run in safe environments. The sandbox concept for running applets has been with us for a while, and it's a good one. You have a single place where you can fix things. It's gotten pretty hard to write an applet that can screw up a machine.

          I think that ALL programs should be running in the equivalent of a sandbox at all times. There should be sandboxes inside sandboxes. When you download something off the net, you can go ahead and run it in a relatively safe, walled-off environment. There should be NO need for the program to look outside of that. Later on you might decide to allow the program more access to your system, once you begin to trust it, or someone else in your web of trust has trusted it.

          The OS needs to be designed to do this from the beginning.
          • We already have that. It's called Java. Trouble is, Sun won't give up Java's "compile once, run anywhere" pledge and create a native compiler. So we're stuck with C++ and all of its inherent insecurities for any kind of "performance" application.
            • by radish ( 98371 ) on Friday November 22, 2002 @11:36AM (#4732458) Homepage

              FUDDY FUDDY FUD FUD :)

              Depends what you mean by "performance application". Java is just as fast as C++ for a long-lived server process, running on a decent OS with a new-ish (i.e. 1.3.0 or above) JVM. HotSpot (even more so in the newer 1.4 versions) is a fantastically good optimising engine which tunes your compilation as it runs. That's something gcc can never do... I have seen it suggested by better scientists than myself that something using the same concepts as HotSpot should in most cases be able to beat a traditional compiler, for that reason.

              For client-side apps Java can "feel" a little slow, but that is often caused by the graphics libraries; Swing is a little sluggish. Look at the Eclipse IDE, however, if you want to see a client-side graphical Java app running just as fast as C.
          • You realize that what you are suggesting is very naive, right? There's a whole class of computing at the embedded-system layer. I don't know about you, but I don't want the defibrillator keeping me alive to suddenly pause while it's garbage-collecting some values. Or in terms of "general computing", the software for an air-traffic control system needs to make millions (if not billions) of calculations on a radar beam to decide the position and velocity, if any, of an incoming plane. Computers are fast, but memory is slow. The OS is going to be spending time allocating memory and bounds-checking each radar ping; meanwhile planes will be crashing.

            Eventually at some level code needs static buffers. Well-designed programs along with proper code validation techniques ensure a minimal number of errors. Java/C#/language-of-the-month can help software engineering, but by no means are they a panacea.
        • > True, but why do people have to keep writing programs with static buffer sizes?

          Mostly because they are programming in computer languages that make basic things like storing information in a buffer a pain in the neck for the programmer. As long as we have languages with malloc or the equivalent (C, C++, and all their ilk), we will have buffer overruns and pointer errors and other such nonsense.
    • by El Neepo ( 411885 ) on Friday November 22, 2002 @09:49AM (#4731744)
      Being lazy = good.

      If you write the simplest code you can that meets the requirements, then more than likely it's secure. It has no fancy tricks, it's easy to see what it's doing, and therefore it has fewer holes waiting to be found.
      • Nah, being lazy tends to mean the exact opposite. You write crap underlying code, and then put in some fancy GUI to try and cover up the sins of the program.
      • This really is the best way. Keeping it simple (stupid) would be the best path to follow for secure code. But then there'd be nothing to spur the market to switch up to the latest Intel chips, and the newer software to run on the latest chips, and the latest gizmos which need the newest software and the latest chips to run and... Oh, we were talking about slowing the spread of virii. Seems this does apply.


        Of course, there are my solutions to slowing the spread of virii: (All should help. Any can be done.)

        1. Switch to GNU/Linux. (Put on flame-retardant suit *now*.)
        2. Instruct users on the use of the "delete" key.
        3. Instruct users why it's not a good idea to use a GUI email program.
        4. Instruct users on how easily their behavior can be tracked online via that little number called an IP address, which is very easy to find.
        5. Instruct users how to patch their Windows boxen, disable services which shouldn't be enabled, and patch their Explorer, Outlook, AND Office. (Oh, never mind... Windows is already more secure than ever. :) )
        6. Explain why it's not *good* to click on every popup ad that you see.
        7. Educate lusers to make them into users. (BOFH cameo.)
        8. THEN, reassess the situation and begin implementing fixes like making the OS and hardware more impervious to virii.


        Sorry guys, but a lot can be done with the existing stuff. Even though it hasn't been made *simple* or in a lazy manner (read: the easiest way), it's what we have to work with. One well-written piece of paper circulated to 500 people can go a long way toward upgrading users' brainware. It's easier than convincing M$ (and others) to rewrite code. Let's see what happens then.
      • This is a common and flawed belief among developers: write the software so it works. From a QA standpoint, what you've produced is a system that requires a trained and trustworthy user to interact with it as expected.

        What happens when it's a technically inept user, or one with malicious intent? Immediately, the fact that your program expects certain kinds of information, in certain character ranges, etc. to be input at point X causes a problem as wrong input is provided, or it's provided in an obscene amount (hence buffer overruns) and the like. If you have an extremely simple program, your approach works; if, however, it's like *anything* done in an enterprise development environment, several programs (or several portions and routines of the same program) nest together and share that information for their own purposes. Simplicity must give way to verbosity, in this case.

        There's also expected order of operations, component stressing (memory leaks) and so on. Don't take the shortcut.
    • And the #2 rule is that hackers are not, so they'll probably find a way to break through your security if they really want to.

      Seriously, this is a whole new way to think about security, and it has a lot of promise. Security systems will never be perfect, and if they are designed never to fail, the consequences of failure are likely to be dire. By managing the consequences of failure, you can best limit the effects of a determined attack. I think this is equally true of electronic security and physical security.

    • by janolder ( 536297 ) on Friday November 22, 2002 @09:51AM (#4731768) Homepage
      Hate to rain on your parade, but there is ample evidence [microsoft.com] to suggest that quality has to be designed in rather than tested into the product later in the process. If your design is flawed, testing won't help a bit. If your implementation is riddled with bugs, testing will find 95% of them, but Murphy will ensure that you get bitten by the rest at the worst possible moment.

      In this business, it's a tradeoff between quality and time to market. Up until recently, software purchasing decisions haven't been based on quality very much, so software producers have given the customer what he wants: a buggy product now.

    • by mseeger ( 40923 ) on Friday November 22, 2002 @09:56AM (#4731795)
      We need to stop working just to accomplish the minimal functionality desired and start testing the hell out of our software to ensure that it's secure.

      Everyone has two complaints about the software he/she uses:

      • It's not secure/stable enough
      • It doesn't have enough features

      No one accepts that enhancing one leads to a degradation of the other. Cisco has a nice approach (at least it did back in my ISP days): there is a feature-rich version and a stability-oriented version. The pick is yours.

      Martin

  • Technique (Score:5, Insightful)

    by gurnb ( 80987 ) on Friday November 22, 2002 @09:36AM (#4731597) Homepage
    Antivirus software makers are recycling some old tricks to combat computer viruses proliferating over the Internet.
    The technique, called "heuristics," checks for suspicious commands within software code to detect potential viruses.

    Heuristic techniques can detect new viruses never seen before, so they can keep malicious code from spreading. An older method, called signature-scanning, uses specific pieces of code to identify viruses.

    Both methods have down sides. Heuristic techniques can trigger false alarms that flag virus-free code as suspicious. Signature-scanning requires that a user be infected by a virus before an antivirus researcher can create a patch--and the virus can spread in the meantime. Most antivirus vendors use both techniques.

    It's time for the industry as a whole to look at different approaches. The time-honored method of signature scanning is a little worn and weary given the rate at which new viruses are coming out.
    • Re:Technique (Score:5, Interesting)

      by OeLeWaPpErKe ( 412765 ) on Friday November 22, 2002 @10:54AM (#4732200) Homepage
      heuristic scanning is very ineffective.

      Why? New viruses are designed to subvert them. I've done it, installing 5 virus scanners to check whether, and how, they detect your virus. (BTW, my virus was a .com infector without a chdir instruction - not very dangerous, but it worked.)

      example:

      wrong:
      const char *to_infect = "*.com";    /* oops, heuristics detect the literal "*.com" */

      right:
      char boem[8] = "*.c";
      int othervariable = 5;              /* a little noise between the pieces */
      const char *to_infect = strcat(boem, "om");    /* the string is only assembled at run time */

      I have yet to see the first scanner that detects this one. The difference in code size is about 3 extra bytes (assuming you were using strcat anyway), so in today's 500KB viruses it is negligible.

      Heuristics are nice, they do have some effect, but they are no solution.

      Virus scanning is inherently reactive. The best it can hope to do is repair the damage once it is done. It is of no use whatsoever against online worms.
      • I believe (and I may be wrong) that the approach modern heuristic scanners take is to look at what a program does externally. So no, it wouldn't see you constructing the file name, but it would see you opening file handles to a bunch of .com files and writing to them. That's the dodgy behaviour - not creating the filename; you could just be "ls"!
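        As a rough illustration of that kind of behavioural rule (the threshold, names, and interface are all invented for the example), a monitor might simply count how many .com files a process opens for writing in a short window:

        #include <string.h>

        #define SUSPICION_THRESHOLD 5   /* invented: .com files opened for writing per minute before we complain */

        /* Called by a hypothetical monitor each time a process opens a file for writing.
         * Returns 1 once the process starts to look like a .com infector. */
        int note_write_open(const char *path, int *com_writes_this_minute)
        {
            size_t len = strlen(path);
            if (len >= 4 && strcmp(path + len - 4, ".com") == 0)
                (*com_writes_this_minute)++;
            return *com_writes_this_minute > SUSPICION_THRESHOLD;
        }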
      • Re:Technique (Score:3, Informative)

        by Minna Kirai ( 624281 )
        heuristic scanning is very ineffective.

        Yes. By definition, heuristics [techtarget.com] can only find some evil programs, not all of them. (If they could, they'd be algorithms.) Holes will always exist.

        And since virus-scanner software must be widely distributed to all the users it's supposed to protect, the virus author can always test his code against the heuristic until he finds a way to slip past it.

        This suggests an altered business model for anti-virus vendors: start treating their heuristics like a trade secret, and don't let them out of the building. Run virus scanning on an ASP model.

        Of course, the privacy, network-capacity, and liability problems with that approach are enormous.
    • Re:Technique (Score:4, Insightful)

      by Tenebrious1 ( 530949 ) on Friday November 22, 2002 @11:11AM (#4732279) Homepage
      It's time for the industry as a whole to look at different approaches. The time-honored method of signature scanning is a little worn and weary given the rate at which new viruses are coming out.

      True, but most of the new viruses that come out are produced by script kiddies and their virus construction kits, and heuristics work well for detecting these.

      Besides, AV software does not stand alone. AV security includes scanning, monitoring and blocking at the mail servers and firewalls, good communication between av software companies and IT AV staff, desktop security policies, and the most important, user training. Admittedly the last is the hardest, but well informed users are less likely to infect themselves and risk infecting everyone else.

    • This looks more like Carnivore than anti-virus software, quoth the article:
      The idea, then, is to limit the rate at which a computer can connect to new computers, where "new" means those that are not on a recent history list. Dr Williamson's "throttle" (so called because it is both a kind of valve and a way of strangling viruses at birth) restricts such connections to one a second.

      Given the article's focus on large institutions, I assumed the control would be external, at the network level. The only way to really stop a computer from connecting to "new" machines is to keep a record of connections and stop "new" ones external to the machine. If you can't secure the computer, secure the network, the author seems to be saying.

      The author wonders why no one had thought of this before, and I can tell him that the reason is that it's contrary to the founding principles of the internet and it won't work. The whole idea behind the internet is to have a network without central control or intelligence. Putting this kind of invasive intelligence into the net adds complications useful only for censorship and control. How, pray tell, can you do this for a mail server? Mail servers contact new machines all day; that's their job! The virus mentioned as an example happened because of poor software from a certain vendor, Microsoft. The same trick can be had again if the virus shifts its mailing burden to the stupid IIS server.

      Attention has been focused on the root cause of the problem: mail clients that run as root and automatically execute commands sent by strangers. Everyone said it was a bad idea when M$ did it, and everyone should continue to point the finger of blame in the right direction. Adding hacks like this elsewhere is a waste of time and has serious implications for the internet as a medium for information exchange.

  • human intervention (Score:3, Insightful)

    by it0 ( 567968 ) on Friday November 22, 2002 @09:36AM (#4731600)
    Doesn't current human interaction show that it only stimulates viral spreading, by opening emails and running stuff because it says "I love you", not to mention the spread of emails warning "new virus, delete file foo.exe"?
    • by Ektanoor ( 9949 )
      Absolutely correct. It is amazing to see how people simply ignore warnings and rush to open messages with such amicable subjects as "Love you", "You won!", "About our last discussion", or "Concerning your message". Such mails are usually the basis for those huge bursts of virus epidemics inside certain corporate networks. There are times when a new virus comes in and goes nearly unnoticed. However, when someone plays a little social engineering and sends a message with a key phrase (cliche), one may see how panic rises inside the building in a matter of minutes. And it is curious to note that this really does not depend on the automation of the antivirus programs, the technical skill of the admins, or the experience of the users. It is a matter of network use and personal expectations. Some people overuse corporate systems for personal purposes, others use them for the majority of communications among colleagues, and some see them as an escape hatch into a "virtual" world. Depending on the way such networks evolve, certain common cliches come into frequent use. It is enough to send an email containing such a cliche and a good exploit to see users storming the admins with complaints.

      Personally, I have seen some interesting trojan epidemics on networks that are in no way connected to the Internet. There was a company that was terribly paranoid and allowed Internet use only and exclusively from one particular computer. This way they thought they could overcome the problems with viruses they had had in the past. A not-so-dumb admin dealt with the email, filtering it through antivirus tools before copying it onto a diskette and sending it into the LAN. And you know what? They kept having serious problems with viruses. Deeper analysis showed that every trojaned email with a corporate cliche in the subject was the cause of the next epidemic.
  • by Anonymous Coward
    is to launch global network monitoring, perhaps monitored by a reputable security company like mi2g. It would require nodes at pretty much all internet connections, of course, and could be costly, but the cost is minuscule compared to the savings. Then that company could record traffic and, once a virus propagates, backtrack through the logs to the first time it appears. From there, we could find the originator and bring the full weight of the law down on him.
  • NOW we're talking! (Score:4, Insightful)

    by Shoten ( 260439 ) on Friday November 22, 2002 @09:38AM (#4731613)
    This is an excellent idea. For a long time the fight against computer viruses (as well as many other aspects of computer security) has been focused on winning or losing, period. Try to stop the virus, and that's it. But what about what happens when a virus gets through? Like almost all things in computer security, there hasn't been enough attention given to what happens if security fails. Bruce Schneier has been yelling from the mountain that security is as much about what happens when safeguards don't work as it is about making sure they do. The notion of being able to keep a virus in check to a certain degree is a good example of security that can fail gracefully when a new virus comes around.
  • The "annoy the user to death" virus.
    You have a possible virus(mickeymouse variant 1a). Transmit to everyone in your address book?
    No.
    You have a possible virus(mickeymouse variant 1b). Transmit to everyone in your address book?
    No.
    You have a possible virus(mickeymouse variant 1c). Transmit to everyone in your address book?
    No.
    You have a possible virus(mickeymouse variant 1d). Transmit to everyone in your address book?
    No. ARGH!
  • Could you imagine how slow Slashdot would be at one connection per second? How well could this work on high traffic sites?

    It would probably save other sites from being Slashdotted, though.
    • Could you imagine how slow Slashdot would be at one connection per second? How well could this work on high traffic sites?


      If you read the article, you'll see that the limit is on OUTgoing connections, not incoming traffic. In other words, this type of AV effort will not eliminate the Slashdot effect.
  • It could be of so much benefit to everyone in helping stop attacks (and in making them not worth attempting, at least in their current form). But he's a researcher for HP, so I am guessing he will patent it. Oh well.

    I just got an image of him presenting his paper, and pointing to each audience member, "patent pending, patent pending, patent pending" ala Homer Simpson.

  • by onomatomania ( 598947 ) on Friday November 22, 2002 @09:41AM (#4731660)
    Article blurb:
    The idea, then, is to limit the rate at which a computer can connect to new computers, where "new" means those that are not on a recent history list. Dr Williamson's "throttle" [...] restricts such connections to one a second.
    Hrm... well, it might have some benefit for things like Nimda, but it won't do anything for nasties that spread via email. If this becomes a default in a future version of Windows, though, you can bet that any virus meant to propagate by opening outgoing connections will just self-throttle, or disable the feature first. Already there is precedent for this, such as Bugbear, which disables software firewalls so it can get out and spread.

    I would much rather see effort spent educating people to install security related patches regularly and turn off unused services, and push vendors towards "secure by default."
    • The basic concept could be applied to email, perhaps - a system that watches for unexpected outgoing mail, compares it to a common list of outgoing destinations, or detects spoofed addresses, etc.

      The BASIC idea of finding ways to strangle virii and warn of spreads is a good one. But you make an excellent point that we have to consider ALL methods of spreading virii.
    • The idea, then, is to limit the rate at which a computer can connect to new computers, where "new" means those that are not on a recent history list. Dr Williamson's "throttle" [...] restricts such connections to one a second.

      OK, this seems to point to the question: Why was the ability to connect to "new" computers at an extremely high rate there in the first place? Is that ability ever utilized to any extent in legitimate, day-to-day operations?

      If so, this might cause you some problems and putting "throttling" in there is a really bad idea. But if this ability isn't used, then maybe the "throttling" should be put in at the OS level.

      The only time I can see having this at the OS level being a problem is when you first start up some big iron that needs to connect to thousands of clients. The OS might kill any attempt to do this. But once you've established a semi-regular list of clients, then having the OS thwart any attempts to connect to a massive number of "new" machines seems like a good idea.

  • security vs. privacy (Score:2, Interesting)

    by GdoL ( 460833 )
    The author refers to the different behaviour of a computer infected by a virus as a way to detect the virus. What the author says is that a virus will try to make connections to as many computers as possible. This different behaviour is then monitored by a system, and someone somewhere is informed of the presence of the virus.

    But to have this system installed you will be giving someone authorization to see your computer-use profile, giving away your privacy. And it will not detect most viruses that are only interested in destroying your data and/or spamming your friends via email.
    • The way this looks to me, it's written to be used on business LANs. No need for privacy there; the bottom line is what needs to be looked after. If the sysadmin needs additional permissions on your computer to be able to keep you from doing something stupid, oh well.

      I know I'd like to beat some of my users regularly with a stick.
  • by dethl ( 626353 )
    semi-anti-virus programs that "hold" the virus in until Joe Blow computer user comes in, and accidentally releases the virus into his machine.
  • Will it work? (Score:2, Interesting)

    by yogi ( 3827 )
    If the throttle is implemented on the same machine as the virus, the virus writers will turn it off.

    If it becomes a widespread implementation on the upstream routers, then virus writers will throttle their own connections to 1 per second to evade detection.

    This defense was only tested against Nimda, and other viruses may work other ways. Will it stop email virii?

    Makes the Warhol worm a bit harder to implement though :-)
    • If virus writers restrict outbound connections to 1 per second, while we lose the detection advantage of this scheme, we've STILL slowed the virus down. A virus opening one new connection per second can't spread nearly as fast as one that can open up hundreds.
    • For another reason, I doubt this will work.

      Nimda (the first one) had a bug that made it scan all the IPs in the same order (it forgot to seed the random generator). But if a virus truly seeks out IPs at random, it will only be throttled for a short while. After some time the same exponential behaviour will occur, where more and more computers infect more and more computers.

      But he concludes correctly: Nimda will be throttled.
  • Details, details (Score:2, Interesting)

    by TillmanJ ( 223874 )

    ...where are the details? What kind of heuristics is this 'throttle' using? Does it look for disparate connections, like 100+ individual hosts per minute, or simply for connections outside of a tripwire-esque 'connection profile' for the machine? What protocols does the throttle watch?

    I really enjoy the Economist, but this article is so shallow and fluffy, especially for them.

  • computer history (Score:2, Interesting)

    by it0 ( 567968 )
    The article basically says that it wants user intervention when the computer connects to a new/unknown machine it hasn't connected to before. So the virus could still spread to its known list?? What if you run Kazaa? The program would block outgoing connections... I know which one is going out the window first...
    • There isn't any user intervention involved in the actual operation of the throttling system. It's automated. Basically, once you connect to a machine, it's whitelisted for a period of time.

      The only "user intervention" is the fact that once a virus starts opening outgoing connections like crazy, the user will perceive severely reduced system performance.

      Not even a Gnutella client starting up and searching for other hosts can come close to the number of connections many virii open up. (Although it may be useful to whitelist certain apps as having permission to connect faster - they should still be throttled, but with maybe a 1-second delay for most apps while Kazaa gets permission for a 0.1-second delay instead. Much faster for Kazaa, but still a major slowdown for viruses.)
  • Link to paper (Score:4, Informative)

    by NearlyHeadless ( 110901 ) on Friday November 22, 2002 @09:43AM (#4731692)
    Here's Williamson's paper on the idea: Throttling Viruses: Restricting propagation to defeat malicious mobile code [hp.com]. I haven't read it yet, but I see one potential problem right away. When you load a web page, you normally make quite a few connections--one for each image, for example. I'll have to see how he handles that.
    • When you load a web page, you normally make quite a few connections--one for each image, for example. I'll have to see how he handles that.

      Now that I've read it, I see that he's just talking about the first connection to a computer. So, if your web page's images are all on the same server, no delay. If you have one on images.slashdot.org and another on adserver.f-edcompany.com and another on aj783.akamai.net, there will be a slight delay.

  • Issue at Hand (Score:5, Insightful)

    by seangw ( 454819 ) <seangw@@@seangw...com> on Friday November 22, 2002 @09:43AM (#4731693) Homepage
    I think the issue at hand is a more global issue faced when writing applications.

    Software is expected to behave correctly 100% of the time. How many of the developers here have had some strange bug that may only appear for 1 out of every million users (not instances, otherwise it would happen in less than a second on most modern processors)? Then we are asked to fix it.

    This solution is great: throttle the computer, lose that 2% of connections being instantaneous, but then it won't be perfect.

    I think we have to more realistically analyze the needs of modern software, and accept that it can "fail" to an acceptable degree if we want some superior functionality.

    The human brain is great, but it fails (a bit too often in my case). IBM is announcing plans to build a computer that could simulate the human brain, but it won't reap the rewards of our brains until it's willing to give in to the issue that we face: uncertain failure.

    With our "uncertain failure", look how great we are at calculating PI to the 100th digit (well, normal individuals anyway). Our brains certainly couldn't calculate nuclear simulations with the "uncertain failure"

    We will probably have to split "computer science" into the "uncertain failure, superb flexibility" and the "perfect, 99.999% of the time" categories.

    This sounds great for the "uncertain failure" group.
  • by txtger ( 216161 ) on Friday November 22, 2002 @09:44AM (#4731697) Homepage
    A lot of the vulnerabilities of these systems are things that are just downright idiotic, in my opinion. We've made programs that don't really need to talk to the outside world able to do so (Word, Excel), and we've given programs that shouldn't be able to control the filesystem and other aspects of the system that privilege (Outlook, Internet Explorer). During the Summer I managed to have Internet Explorer install software for me (.NET Platform).

    Why do we not look at applications and give them a domain before we just open the floodgates? Why not just say, "hey, email comes from the outside world, I don't trust the outside world, so I won't let my email client do anything it wants to". I know that this wouldn't stop all of these problems, but I think the general idea would circumvent many virii.
  • We've got malware that now disables personal firewall software so as to avoid detection. This throttle might be an effective patch against current viruses, but the next round will simply work around the throttle, if it is applied locally.

    Of course the article doesn't really say whether this is enforced on the local machines or is applied from outside (i.e. at a switch or router). However, by talking about it as an inoculation, it suggests it is really enforced on the local machine.

    It's a good idea, in general, but it has to be user-tweakable, and that means it's virus-tweakable too.
  • Good idea! (Score:2, Insightful)

    And it's not that difficult to implement either.

    Give your switches enough memory and let them keep a history of 20 IP addresses per host (this number needs to be tweaked according to usage, of course). When you get an IP packet going to a new host, record the address and start a 1-second timer. While the timer runs, drop all IP packets to hosts not on the list.

    The packets you drop will be resent, and you get the wanted behaviour.

    Another advantage is that you only need to change the switches, not the systems.

    Only problem I can see: What about web pages with lots of images from different servers? Those will take forever to load. You could tell everyone to use a proxy, but you wouldn't be able to run this throttling on the proxy...
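    A minimal sketch of the bookkeeping described above, assuming a small fixed history per host and a one-second gate for new destinations (the names and sizes are invented for illustration):

    #include <time.h>

    #define HISTORY_SIZE 20   /* recently contacted destinations remembered per host */

    struct throttle_state {
        unsigned int recent[HISTORY_SIZE];  /* destination IPs seen lately */
        int          used;                  /* how many slots hold valid entries */
        int          next;                  /* next slot to overwrite (round robin) */
        time_t       last_new;              /* when we last admitted a new destination */
    };

    /* Decide whether a packet from this host to dest_ip may pass right now.
     * Known destinations always pass; a new destination is admitted at most
     * once per second, and anything else is dropped (the sender will retry). */
    int may_pass(struct throttle_state *st, unsigned int dest_ip, time_t now)
    {
        int i;
        for (i = 0; i < st->used; i++)
            if (st->recent[i] == dest_ip)
                return 1;                   /* already on the history list */

        if (now - st->last_new < 1)
            return 0;                       /* new destination, but the 1-second gate is shut */

        st->recent[st->next] = dest_ip;     /* remember it, evicting the oldest entry when full */
        st->next = (st->next + 1) % HISTORY_SIZE;
        if (st->used < HISTORY_SIZE)
            st->used++;
        st->last_new = now;
        return 1;                           /* first packet to the new host goes through */
    }

    Since dropped packets get retransmitted, legitimate traffic just sees a short delay rather than a failure, which is the behaviour described above.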

  • by Dexter's Laboratory ( 608003 ) on Friday November 22, 2002 @09:45AM (#4731713)
    Run Windows! That'll slow things down. Maybe it would slow down the spreading of viruses too?
  • How about some Outlook awareness classes?
  • by corvi42 ( 235814 ) on Friday November 22, 2002 @09:48AM (#4731734) Homepage Journal
    [SARCASM]
    Prevent the spread of viruses, make computers more secure, enjoy life in the Real World, spend more time with your family & loved ones!

    All this and more can be yours! Support Neo-Ludditism - break your computer today!

    No computers means no computer problems!
    Just imagine a profitable new career in ...um.... basket weaving!
    [/SARCASM]
  • by Viol8 ( 599362 ) on Friday November 22, 2002 @09:52AM (#4731770) Homepage
    Since only TCP has the idea of connections, only this protocol can be protected from abuse in this way. Others, such as UDP, ICMP, etc., send their data in discrete packets (as far as the OS is concerned; whether the client-server application has the idea of connections over UDP is another matter), and if you limit these to 1 packet a second you can kiss goodbye to a whole host of protocols, because they simply will not work efficiently, or at all, any longer. All his idea will do is cause virus writers to use protocols other than TCP. For macro viruses this could be a problem (does VBScript support UDP?), but for .exe viruses it's no big deal, I suspect.
    • It's not a limit of one new connection per second, but a new connection to an UNKNOWN HOST per second.

      i.e., if you've already opened an outbound connection to that host in recent history, there's no speed limit.
  • by krystal_blade ( 188089 ) on Friday November 22, 2002 @09:53AM (#4731776)
    Virii thought: Woohoo, I got in a machine!
    Windows: "Are you a dll?"
    Virii thought: "Umm... Yes. I like Outlook."
    Windows: "Okay, hang on..."

    Launches Outlook...
    Virii thought: "Why is everything blue?"
    Windows: .............
    Virii thought: "Oh, if only I had hands!!!"

  • If this is on individual computers, I can't see "human intervention" being effective. It might certainly slow the progress of a worm, but I can just see someone getting a pop-up box "Your machine appears to be infected with a virus, should I delete it?" and someone sitting there and hitting "No."

    It would probably be more effective as some kind of network device/firewall that eats excessive network connection requests, then lets the administrator know that computer X appears to be infected (bonus points for inspecting packet content to determine type of infection).

    In fact, that implementation isn't new; I recall seeing a computer at a colocation site set up to inspect HTTP traffic and block HTTP requests that looked like Code Red infection attempts.
  • by djembe2k ( 604598 ) on Friday November 22, 2002 @09:58AM (#4731818)
    Yes, this will slow down the spread of viruses -- but the article makes a big deal of the fact that a throttled system can detect the attempts to rapidly make many network connections, setting off an alert. Of course, as soon as people come to count on this as their primary form of virus detection, a virus will be written that only attempts one connection a second, and then, very slowly it will spread undetected on those systems that rely on the throttle for detection. And we know there will be people who rely on it exclusively . . . .
    • Those users you mention were hopeless anyway.

      The nice thing about this is that *even if it doesn't improve detection*, it slows down viruses a large amount. So say the virus writer rewrites his virus to avoid detection by throttling its own connections.

      Guess what? We've forced that virus writer to cripple his virus' ability to spread in order to avoid detection. Yes, the virus can spread undetected. No, it can't spread as rapidly as Nimda or Code Red did.
  • by Toodles ( 60042 ) on Friday November 22, 2002 @09:59AM (#4731826) Homepage
    In short, this guy's idea for curbing infection rates of &pluralize("virus"); is to restrict a system's network access to one new host per second. Exceptions would be made for high-demand, known servers, such as mail servers and (I presume, even though it wasn't in the article) HTTP or SOCKS proxies. Interesting idea, and it would help in slowing down the infection of, say, Nimda or Code Red.

    I can't help but think that his logic is flawed, however. For example, most corporate headaches come from email-based virii. If the only connection needed for the virus to spread is the email server it already has access to, there is no delay for the emails to be sent out to the mail server. No one could request that the email server be throttled and keep their job, so the infected emails would be sent out with no perceptible delay caused by the throttling.

    The only thing this might help with is worms - not virii in the more common sense, such as email-based LookOut virii, .exe/.com infectors, or boot-sector infectors. The article fails to mention the hows of this throttling: is it based in the routers (in which case quick infection of the local subnet would take place), in the switches (which could break most broadcast applications, not to mention make all systems outside the subnet look the same), or in the OS (in which case the virus could put in its own TCP/IP stack to replace the throttled one and end up with no throttling effects whatsoever)?

    How about, instead of throttling network access, we move to more reliable code, better access controls at the kernel level, and a hardware platform that makes buffer overruns and stack smashing a thing of the past? While I am anti-MS, Palladium does actually have some good ideas at the hardware level. It's the DRM level that stinks to high heaven.
    • ... the solution is generally free. You say:
      How about, instead of throttling network access, we move to more reliable code, better access controls at the kernel level, and a hardware platform that makes buffer overruns and stack smashing a thing of the past? While I am anti-MS, Palladium does actually have some good ideas at the hardware level. It's the DRM level that stinks to high heaven.

      I've got good news for you. The average free *nix already has more reliable code with better access controls at the kernel level. You can check it out for yourself because the software is free, unlike that other silly stuff you mentioned from a particular abusive and convicted vendor, cough, Microsoft. Heck, you could even just use a mail client that does not run as root and does not automatically execute commands sent from strangers, like most free software. Way to go!

      I've also got bad news for you. Buffer overflows cannot be defeated at the hardware level in a general-purpose computer. Why is left as an exercise for the reader, but a shortcut is that Microsoft says it will work.

  • why not? (Score:2, Funny)

    by pixitha ( 589341 )
    why not just stop the anti-virus companies from making all the viruses in the first place?

    I mean, they make money on sales of anti-virus software without any kind of regulation. Hell, with the way corporate America is already going, who says it's not a big scam anyhow?
  • I'm sure this sounds like a good idea to some people, but I'm not convinced.

    The idea, then, is to limit the rate at which a computer can connect to new computers, where "new" means those that are not on a recent history list. Dr Williamson's "throttle" (so called because it is both a kind of valve and a way of strangling viruses at birth) restricts such connections to one a second. This might not sound like much to a human, but to a computer virus it is an age.

    This sounds to me like the idea is basically to make the TCP/IP stack single-threaded.

    OK, smart guy, so let's use an HTTP request as an example. Loading a web page, a browser could theoretically make several connections to several different servers. So, with our single-threaded, "throttled" TCP/IP stack, a simple web page could take several seconds to load, at least until the servers on the other end are in the "history".

    Ok, so this "history" as the document describes... where is it kept? Hard drive? RAM? So, for every outgoing connection, the machine needs to check the address against a table somewhere... this is added overhead. Lets say that the address needs to be resolved... well, then we need to go through this process a second time just for the DNS server.

    So, this "Doctor Matthew Williamson" of HP... is he full of crap? I dunno -- I don't have a phd.

    • What about mail servers? Imagine a company attempting to do "normal" business at 1 new connection a second. Internal mail would work great, but anything to anyone else would be lagged multiple days.

      Side benefit: I suppose it would slow down the spammers, too, forcing them back to sending snail mail chain letters.
        • Yeah, excellent point. This would suck big balls on a mail server, especially for an ISP whose mail server might contact tens of thousands of unknown systems each day. Try that through a single-threaded TCP/IP stack @ 1 per second!

        So then what? Is Dr. Whatshisname going to tell us that this doesn't apply to internet servers? Oh good... that'll be where all the viruses reside.

        --csb
        • 99% or more of the machines infected by Nimda and Code Red had NO need whatsoever to open multiple connections. Viruses DON'T all reside in major servers. In fact, that's the LEAST likely place for them to reside, as such machines will be the most well-maintained and patched against security holes/checked thoroughly for improper activity. Nimda and CR were hitting mostly machines that were never configured as a server but happened to be running IIS because of MS stupidity in default configurations.

          Even if 10% of infected machines are unthrottled because they need to be for normal use, we've severely reduced the capability of 90% of the transmission vectors of a virus. This scheme isn't about black and white winning/losing - It's about simply slowing the damn things down so they're less of a threat.
  • The basic idea of "find ways to strangle virii" is a good one. I think he's onto something here, something so obvious it wasn't obvious. Even if his technique slowed virii down only a few percent, the spread over time would be much lower.

    However, this is really only one idea. Its value is in pointing out that to deal with an age of virii, unreliable web pages, email viruses, trojans, bad firewalls, and everything else that didn't exist fifty years ago, we need to think in radically different ways.

    The greatest value of this research is really going to be how it gets people to take a new look at computing. And for that, I say, it is about time. Our ideas for dealing with computer troubles need to evolve since the troubles we're facing continue to occur, spread, and change.
    • Such secure practices in operating system design have been with us all along: Unix, Linux, BSD. These OSes are designed to be modular, which protects the system from complete failure in the event of an infection (though single services and isolated resources may still be compromised fairly quickly with basic attacks). Intensive attention is paid to permissions, file integrity, and security, which, when paired with the modular design, greatly inhibits the damage that a virus can do. The bulk of the code is written in the open source model, which further extends security. The power of these systems allows for powerful and rapid administration, which is another deterrent to the spread of worms or the potential damage inflicted by viruses.

      These virus concerns should only bother Windows users right now.

  • P2P (Score:3, Interesting)

    by Shade, The ( 252176 ) on Friday November 22, 2002 @10:07AM (#4731894) Homepage
    Unfortunately I don't know much about P2P protocols, but wouldn't this tend to slow them down a bit? How many connections does Gnutella (for instance) throw out per second?
    • Except on startup of the program, very few.

      Gnutella opens 1 to n connections between your server and remote servers. Each one is kept open for communication until one end closes it, at which time the client will open a connection to a new server.

      The process of opening a new connection can involve multiple attempts, as it searches a cache of hosts previously seen communicating on the network for a client which is currently operating and able to accept new connections (i.e., not overloaded).

  • Perhaps we could somehow throttle Microsoft and limit them to releasing one new OS every 5 years or so. Maybe that would give us enough time to patch all the Gaping Security Holes.
  • Sounds simple (Score:3, Insightful)

    by heikkile ( 111814 ) on Friday November 22, 2002 @10:16AM (#4731955)
    Many Linux firewalls already do connection tracking. All this needs is another table of recent connections (unless one already exists for routing purposes!), and a few options to tune it with: say, /proc/sys/net/ip_throttle_memory (how many seconds to count as recent) and /proc/sys/net/ip_throttle_delay (how long to delay when throttling).

    When do we see this in iptables?

  • Just secure the code (Score:3, Informative)

    by mao che minh ( 611166 ) on Friday November 22, 2002 @10:17AM (#4731961) Journal
    As systems become more adaptive and proactive against malicious code, so too will the viruses against these counter measures. The next generation of virus writers will be bred in the same computing climate that the future white hats will hail from - there is no reason to think that viruses will not evolve right alongside the platforms that they attack.

    I support the notion that the key to ultimate security lies in the quality of the code. I'll go further and say that open source is the key to reaching the absolute goal of impenetrable code. The open source model is our best bet at ensuring that many, many eyes (with varying degrees of skill and with different intentions) will scan the code for flaws. I just wish that some of the more popular open source projects were more heavily reviewed before their latest builds went up.

  • by toupsie ( 88295 ) on Friday November 22, 2002 @10:22AM (#4732009) Homepage
    If you are not running Microsoft Windows, are viruses a real problem? Running a Mac OS X box as my main desktop, I have never had one virus attack my system, nor do I know of any fellow Mac users who have had their systems damaged by a virus. The only viruses I have seen on a Mac are Office macro viruses -- no biggie for a Mac user. I am sure Linux desktop users, outside of the annoying XFree86 virus, are in the same situation. This whole article seems to be a complete waste of time because it discusses modifying a network to handle the insecurity of Windows. Why not just get rid of the problem? Spending more money making Windows secure doesn't seem like a bright idea.

    This is like banging your head with a hammer and wearing a thick, foam rubber hat so it doesn't hurt as much.

  • Like any other type of security strategy, a proper one should have several layers of defence. I think this idea is an excellent one, and would serve well as one layer in a complete strategy. Another good layer might be trapping [hackbusters.net]. Of course heuristics and signature scanning should be used as well. The most important layer of all IMHO... training. Human training.

  • False Positives (Score:4, Insightful)

    by Erasmus Darwin ( 183180 ) on Friday November 22, 2002 @10:36AM (#4732101)
    I can think of two false positives off the top of my head where legit traffic would get unfairly throttled:

    Web-based message boards -- Several of the message boards that I'm on allow users to include inline images. However, the users are responsible for hosting the images on their own servers. So a given page full of messages could easily add an extra 10 hosts to the "fresh contact" list, causing a 10 second delay. Furthermore, at least one of the message boards has a large enough user population that the "recent contact" list wouldn't help out enough at reducing the delay.

    Half-Life -- The first thing Half-Life does after acquiring a list of servers from the master server list is to check each one. For even a new mod (like Natural Selection), this can be hundreds of servers. For something popular (like Counter-Strike), it's thousands.

  • I remember back... 10 years back... actually 5 or 6. Assembly-written viruses were rampant. Everyone knew what they were and was more likely to find some way to prevent them. Once a week I had a boot-sector virus detected that needed to be cleaned from floppies and hard drives. Virus cleaners were everywhere, and they nagged you somewhat when they were out of date. They even gave you instructions on how to update sometimes.

    Let's fast-forward. Today, OSes only seem more secure; they aren't. We don't get loads of antivirus software floating about like we used to. More people don't know about viruses than do... and what's worse, there are fewer viruses about, but they do more damage.

    I'm also surprised that intrusion detection systems don't have nag screens which are attached to daemons to let you know that your software needs to be updated, or you are fucked.

    Servers should be required to run a small cron-jobbed program like Nessus (search Freshmeat), which would nag you when the data is old. Snort, the IDS software, should do the same.

    As for the lack of viruses, we need whitehats to write exploits that aren't damaging but are... surprising. Popping up messages like, "I could have formatted your computer because of XXX; go fix it by doing..."

    Maybe then people would be more aware that the computing world isn't all plug-n-play, bells and whistles - that you are using a device that needs care.
  • by HighTeckRedNeck ( 538597 ) on Friday November 22, 2002 @11:05AM (#4732236)
    What we need to do is use all the extra cycles of the average computer - the ones spent waiting for its user to press a key - to search for things that don't belong, just like biological immune systems expend energy looking for invaders. Virus scanners are a start for recognizing intruders, but only after the intruders get recognized by antivirus writers and the updates get distributed to the few who will pay and update. This gives the virus a long head start and "sheltered hosts". The operating system should use the spare cycles to do a tripwire-style scan of the rest of the system. The faster an intrusion is found, the less time it has to create trouble. Areas like user storage will be problematic, but such security measures should be integral to system administration and operation at the operating-system code level.
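    As a toy sketch of the tripwire-style idea (a real integrity checker would use a cryptographic hash and keep its baseline somewhere an intruder can't rewrite; the trivial checksum and function names here are just for illustration):

    #include <stdio.h>

    /* Compute a trivial rolling checksum of a file's contents.
     * A real tool would use something like SHA-1 instead. */
    static unsigned long checksum_file(const char *path)
    {
        unsigned long sum = 0;
        int c;
        FILE *f = fopen(path, "rb");
        if (!f)
            return 0;
        while ((c = fgetc(f)) != EOF)
            sum = sum * 31 + (unsigned char)c;
        fclose(f);
        return sum;
    }

    /* Compare a file against the checksum recorded when the baseline was taken;
     * returns 1 when the file has changed since then. */
    static int file_was_modified(const char *path, unsigned long baseline_sum)
    {
        return checksum_file(path) != baseline_sum;
    }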

    Further, it should be (putting on fire suit) a function of the government to finance an independent system to publicize standardized virus-recognition fingerprints. Then it should be integral to the operating system to run a scan as part of the executable load function. This would be justified as protecting commerce. This won't solve the problem of "script" viruses that play off the integration features of Microsoft products, but that can be dealt with by requiring Microsoft to produce products that actually ask for permission from the user before doing stupid stuff. Sometimes a parent just has to take control of their offspring. Either that, or firewall off anyone using Microsoft products; most of them are so non-standard they aren't hard to recognize. Many places don't let Microsoft attachments go through, and it has saved them a lot of lost time. XML and other standard formats work just fine and are interoperable with other systems.

    Do unto others as you would have done to yourself, don't let America become like Israel. It is un-American to support human rights violations, support justice in Palestine.

  • I think that P2P programs may set off the alarm a bit too easily, no?

  • by Orafu1 ( 628471 ) on Friday November 22, 2002 @11:42AM (#4732498)

    The idea of slowing down the attack rate of an intruder is really not so new. One example is the infamous Linux "syn-cookies" countermeasure to SYN flooding. Syn-cookies prevent the excessive use of connection resources by reserving those resources for connections that have evidently gone through a genuine TCP three-way handshake. This forces the attacker to slow down, since instead of throwing SYN packets at a host as fast as it can, it now has to do a proper three-way handshake. That involves waiting for the associated round-trip times, which slows the attack down to the speed of genuine connection attempts.

    Now, since the attack has been slowed down to the speed of genuine users, it takes part in the competition for connection resources on fair and equal ground with other users, which makes it no more successful than other users at acquiring connection resources. That means that the rate of attack is not quick enough for a resource-starvation attack anymore, and it is reduced to a resource-abuse attack. Since the latter type of attack needs to be sustained for a long time to cause significant damage, the risk of being discovered becomes too big for the attack to be practical.
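    A much-simplified illustration of the cookie trick (the real Linux construction uses a stronger keyed hash and also encodes TCP options such as the MSS; the hash and names here are invented for the sketch):

    #include <stdint.h>

    /* Toy syn-cookie: fold the connection 4-tuple, a secret, and a coarse time
     * slot into the initial sequence number, so the server can recognise the
     * returning ACK without having stored any per-connection state. */
    static uint32_t toy_cookie(uint32_t saddr, uint32_t daddr,
                               uint16_t sport, uint16_t dport,
                               uint32_t secret, uint32_t timeslot)
    {
        uint32_t h = secret ^ saddr;
        h = h * 2654435761u ^ daddr;
        h = h * 2654435761u ^ (((uint32_t)sport << 16) | dport);
        h = h * 2654435761u ^ timeslot;
        return (timeslot << 26) | (h & 0x03ffffffu);   /* top bits carry the time slot */
    }

    /* The final ACK acknowledges cookie+1; recompute and compare, also allowing
     * the previous time slot so slow clients are not rejected. */
    static int cookie_valid(uint32_t ack_seq, uint32_t saddr, uint32_t daddr,
                            uint16_t sport, uint16_t dport,
                            uint32_t secret, uint32_t timeslot)
    {
        uint32_t c = ack_seq - 1;
        return c == toy_cookie(saddr, daddr, sport, dport, secret, timeslot) ||
               c == toy_cookie(saddr, daddr, sport, dport, secret, timeslot - 1);
    }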

    Well, now this is not exactly a "throttling" countermeasure as described in the Economist's article. The countermeasure from the article selectively slows down outgoing connection attempts to "new" hosts, in order to slow the attack down further, in an attempt to put genuine users not merely on equal footing with the attack but at a significant advantage. This element of selection may be new; at least I cannot come up with an older example. As others have commented before, the selection technique also has its disadvantages:
    a) depending on the attack, different kinds of selection methods must be employed to actually single out the malicious connections -- there is no predefinable "catch-all-attacks" selection method
    b) depending on the services you run on your network, the effort you have to make to find out how your usage patterns can be discerned from known attack patterns varies.

  • details? (Score:3, Interesting)

    by stinky wizzleteats ( 552063 ) on Friday November 22, 2002 @12:25PM (#4732823) Homepage Journal

    Ah yes, well, see, we're going to throttle the network, so that the virus spreads more slowly.

    Throttle what? Bandwidth? That wouldn't have much of an effect on virus activity, but it certainly would affect everything else. Connections per second would probably slow down a virus, but would basically shut down SMB and DNS as well.

    You better make sure Ridge doesn't hear about this, or we'll be required by law to wear 20 lb. lead shoes everywhere we go, to make it easier to catch running terrorists.

  • by malfunct ( 120790 ) on Friday November 22, 2002 @12:56PM (#4733146) Homepage
    I like the idea on a desktop, where the number of new connections per second is easily less than 1.

    In the datacenter I work at we handle 2000 transactions per second per machine on average with peaks reaching 10000 transactions per second. Not every transaction requires a new connection because of caching in our software but we create far more than 1 new connection per second.

  • by Mandi Walls ( 6721 ) on Friday November 22, 2002 @01:28PM (#4733497) Homepage Journal
    While throttling is an interesting idea, it can be no replacement for methods that have been available for some time.

    • Patching your goddamn systems
    • Ingress and egress filtering of IP addresses, at the local LAN and ISP level, to prevent IP address spoofing
    • Using some common sense when filtering outbound traffic. Does my web server need to be able to initiate outbound connections? No? Then why can it?
    • Host-based firewalling, with reporting, permitting outbound connections only to known services to prevent workstations from being turned into drones
    • Getting rid of Outlook. If you're going to sit there and tell me that using Outlook is more important than the risk that your financial statements, contract bids, salary information, etc. get sent offsite, you're insane
    • Getting HR and legal involved in the security policy. Make turning off the host firewall and virus protection a terminable offense, up there with trying to access forbidden data
    • No unencrypted communications with business partners and customers
    • NAT everyone. Your accountant does not need a publicly accessible workstation
    • VPN. It's a nice idea, but do you trust the marketing director's teenage kids on the computer at the other end?

    Now, why don't these things happen? Time. Money. A combination of both. Convenience. Lack of understanding on the part of users.

    But the big one is the belief that security is a product that can be purchased, that there is a quick fix out there that will solve all your security ills and hide you from all the bad guys.

    Security is a PROCESS. Better yet, it's a combination of processes, relating to employees at all levels of your organization, from the CEO to the custodial service contracted by your property manager. Hell, even building safer software isn't going to help you if your users refuse to use it 'cause it's a pain in the ass. Remember, they believe in the panacea of the "single sign-on". They put their passwords on post-its around their workstations. They keep their contacts (oh help us) in their Hotmail addressbook, regardless of how many 'sploits have been uncovered in Hotmail. They're afraid of computers.

    Security is expensive. And it should be, because it has to be done right. You need user participation, on all levels. It requires education and training, and a reduction in ease of use.

    There is no magic wand.

    --mandi

"The one charm of marriage is that it makes a life of deception a neccessity." - Oscar Wilde

Working...