Throttling Computer Viruses 268
An anonymous reader writes "An article in the Economist looks at a new way to thwart computer viral epidemics by focusing on making computers more resilient rather than resistant. The idea is to slow the spread of viral epidemics, allowing effective human intervention, rather than attempting to make a computer completely resistant to attack."
slow the spread of viral epidemics (Score:5, Funny)
Re:slow the spread of viral epidemics (Score:5, Funny)
Everyone drop your baudrate to 110.
Just for laughs, we used to get stoned and call a multi-line chat board here in Austin, TX (long live AfterHours, R.I.P. Tombob). We'd drop our baudrate to 300 or 110 and attempt to have coherent conversations while inebriated.
Yeah, pathetic but the internet wasn't available to the public yet and we were young and st00pid.
Re:slow the spread of viral epidemics (Score:2)
Re:slow the spread of viral epidemics (Score:2)
I have a brilliantly original idea (Score:5, Insightful)
I'm not joking. The #1 rule of computer science is that computer scientists are lazy.
We need to stop working just to accomplish the minimal functionality desired and start testing the hell out of our software to ensure that it's secure.
Re:I have a brilliantly original idea (Score:4, Interesting)
Re:I have a brilliantly original idea (Score:2)
Specs? testing? what is that? I've been coding in IT depts/a
Re:I have a brilliantly original idea (Score:5, Funny)
Re:I have a brilliantly original idea (Score:3, Insightful)
Plus, you usually have to balance security with user friendliness (putting on flame retardant jacket). Simply separating users vs. root is a hassle for your average (home) user. People need to understand security to be willing to put in secure methods. Let's face it, people just want crap to work right now. They turn off security measures (like firewalls) to get something to work (like a game), then don't turn them back on so they don't have to deal with it the next time they try to play that game.
Re:I have a brilliantly original idea (Score:5, Insightful)
There's always a hole that cannot be planned for.
True, but why do people have to keep writing programs with static buffer sizes? I cannot think of one single acceptable excuse to write a piece of software where a buffer overflow can happen.
If user input is in any way involved - directly or indirectly - then you need to test it before you accept it! There is no excuse!
Buffer overflows are not the only security issue with software, but the principle behind preventing them applies to most of the security issues out there...
So, I have to agree with your parent poster: the people making the software are lazy!
Re:I have a brilliantly original idea (Score:5, Informative)
I think it isn't that people WRITE programs with static buffers nowadays as much as it is that people who maintain old software don't fix the static buffers.
Plus, I could also argue about what is more important to the program: static buffers give me knowledge of the maximum amount of memory used, if that knowledge is required. Searching is faster in arrays than in linked lists (although inserting, on average, is slower). Don't assume that static buffers are ALWAYS wrong.
Re:I have a brilliantly original idea (Score:5, Insightful)
Indeed - generally, there's nothing wrong with static buffers. If you're going to use them, however, there is absolutely no excuse for not bounds-checking access to that buffer. That is, if you know that the buffer can contain, say, 1000 characters, check anything you write to it to make sure it fits!
That's most of what's "wrong" with static buffers - that it's too easy to use them incorrectly. It's not entirely the fault of the buffer, though, that it's easily misused.
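To make that concrete, here is a minimal sketch in C of the kind of check meant above; the buffer size and the helper's name are made up for illustration:

    #include <stdio.h>
    #include <string.h>

    #define BUF_SIZE 1000   /* the "1000 characters" case from above */

    /* Hypothetical helper: copy untrusted input into a fixed buffer,
       refusing anything that would not fit (including the '\0'). */
    int store_input(char dest[BUF_SIZE], const char *input)
    {
        if (strlen(input) >= BUF_SIZE) {
            fprintf(stderr, "input too long, rejected\n");
            return -1;              /* fail loudly instead of overflowing */
        }
        strcpy(dest, input);        /* safe: length was checked first */
        return 0;
    }

The whole point is that the check costs one strlen call, so "static buffers are hard" is no excuse.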
Re:I have a brilliantly original idea (Score:5, Insightful)
I think that ALL programs should be running in the equivalent of a sandbox at all times. There should be sandboxes inside sandboxes. When you download something off the net, you can go ahead and run it in a relatively safe, walled-off environment. There should be NO need for the program to look outside of that. Later on you might decide to allow the program more access to your system, once you begin to trust it, or someone else in your web of trust has trusted it.
The OS needs to be designed to do this from the beginning.
Re:I have a brilliantly original idea (Score:2)
Re:I have a brilliantly original idea (Score:4, Insightful)
FUDDY FUDDY FUD FUD
Depends what you mean by "performance application". Java is just as fast as C++ for a long-lived server process, running on a decent OS with a new-ish (i.e. 1.3.0 or above) JVM. Hotspot (even more so the newer 1.4 versions) is a fantastically good optimising engine which tunes your compilation as it runs. That's something gcc can never do... I have seen the suggestion put forward by better scientists than myself that something using the same concepts as Hotspot should in most cases be able to beat a traditional compiler, for that reason.
For client-side apps Java can "feel" a little slow, but that is often caused by the graphics libraries; Swing is a little sluggish. Look at the Eclipse IDE, however, if you want to see a client-side graphical Java app running just as fast as C.
Re:I have a brilliantly original idea (Score:3, Interesting)
Eventually at some level code needs static buffers. Well-designed programs along with proper code validation techniques ensure a minimal number of errors. Java/C#/language-of-the-month can help software engineering, but by no means are they a panacea.
Re:I have a brilliantly original idea (Score:2)
> programs with static buffer sizes?
Mostly because they are programming in computer languages that make basic things like storing information in a buffer a pain in the neck for the programmer. As long as we have languages with malloc or the equivalent (C, C++, and all their ilk), we will have buffer overruns and pointer errors and other such nonsense.
Re:I have a brilliantly original idea (Score:4, Insightful)
If you write the simplest code you can that meets the requirements, then more than likely it's secure. It has no fancy tricks, it's easy to see what it's doing, and therefore it has fewer holes that need to be found.
Re:I have a brilliantly original idea (Score:3, Funny)
Re:I have a brilliantly original idea (Score:3, Insightful)
Of course, there are my solutions to slowing the spread of virii: (All should help. Any can be done.)
Sorry guys, but a lot can be done with the existing stuff. Even though it hasn't been made *simple* or in a lazy manner (read: easiest way), it's what we have to work with. One well-written piece of paper circulated to 500 people can go a long way in upgrading the users' brainware. It's easier than convincing M$ (and others) to rewrite code. Let's see what happens then.
Re:I have a brilliantly original idea (Score:3, Insightful)
What happens when it's a technically inept user or one with malicious intent? Immediately, the fact that your program expects certain kinds of information in certain character ranges etc. to be input at point X causes a problem as wrong input is provided, or it's provided in an obscene amount (hence buffer overruns) and the like. If you have an extremely simple program, your approach works; if, however, it's like *anything* done in an enterprise development environment, several programs (or several portions and routines of the same program) nest together and share that information for their own purposes. Simplicity must give way to verbosity, in this case.
There's also expected order of operations, component stressing (memory leaks) and so on. Don't take the shortcut.
Re:I have a brilliantly original idea (Score:3, Informative)
Seriously, this is a whole new way to think about security, and it has a lot of promise. Security systems will never be perfect, and if they are designed never to fail, the consequences of failure are likely to be dire. By managing the consequences of failure, you can best limit the effects of a determined attack. I think this is equally true of electronic security and physical security.
Re:I have a brilliantly original idea (Score:5, Insightful)
In this business, it's a tradeoff between quality and time to market. Up until recently, software purchasing decisions haven't been based on quality very much so the software producers have given the customer what he wants: Buggy product now.
Two words: (Score:2)
Thank you.
Re:I have a brilliantly original idea (Score:4, Interesting)
Everyone has two complaints about the software he/she uses:
No one accepts that the enhancement of one leads to a degradation of the other. Cisco has a nice approach (at least they had it during my ISP days): there is a feature-rich version and a stability-oriented version. The pick is yours.
Martin
Technique (Score:5, Insightful)
The technique, called "heuristics," checks for suspicious commands within software code to detect potential viruses.
Heuristic techniques can detect new viruses never seen before, so they can keep malicious code from spreading. An older method, called signature-scanning, uses specific pieces of code to identify viruses.
Both methods have down sides. Heuristic techniques can trigger false alarms that flag virus-free code as suspicious. Signature-scanning requires that a user be infected by a virus before an antivirus researcher can create a patch--and the virus can spread in the meantime. Most antivirus vendors use both techniques.
It's time for the industry as a whole to look at different approaches. The time-honored method of signature scanning is a little worn and weary, given the rate at which new viruses are coming out.
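For readers wondering what signature scanning actually amounts to, here is a deliberately naive sketch in C (the function is invented for illustration; real scanners are far more elaborate, but the principle is a byte-pattern match):

    #include <stddef.h>
    #include <string.h>

    /* Naive signature scan: report whether a known byte pattern
       appears anywhere in a file image already loaded into memory. */
    int matches_signature(const unsigned char *data, size_t len,
                          const unsigned char *sig, size_t sig_len)
    {
        if (sig_len == 0 || len < sig_len)
            return 0;
        for (size_t i = 0; i + sig_len <= len; i++)
            if (memcmp(data + i, sig, sig_len) == 0)
                return 1;           /* known virus fragment found */
        return 0;                   /* no match; says nothing about new viruses */
    }

Anything that changes the bytes (as the next comment demonstrates) sails straight past a check like this.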
Re:Technique (Score:5, Interesting)
Why? New viruses are designed to subvert them. I've done it, installing 5 virus scanners to check if, and how, they detect your virus. (btw my virus was a
Example:
wrong:

    const char *to_infect = "*.com";      /* the literal "*.com" sits in the binary */

right:

    char boem[8] = "*.c";                 /* writable buffer, not a string literal */
    int othervariable = 5;
    const char *to_infect = strcat(boem, "om");   /* builds "*.com" at run time */
I have yet to see the first scanner that detects this one. The difference in code size is about 3 extra bytes (assuming you were using strcat anyway), so in today's 500kb viruses it is negligible.
Heuristics are nice, they do have some effect, but they are no solution.
Virus scanning is inherently reactive. The best it can hope to do is to repair the damage once it is done. It is of no use whatsoever against online worms.
Re:Technique (Score:2)
Re:Technique (Score:3, Informative)
Yes. By definition, heuristics [techtarget.com] can only find some evil programs, not all of them. (If they could, they'd be algorithms.) Holes will always exist.
And since virus-scanner software must be widely distributed to all the users it's supposed to protect, the virus author can always test his code against the heuristic until he finds a way to slip past it.
This suggests an altered business model for anti-virus vendors: start treating their heuristics like a trade secret, and don't let them out of the building. Run virus scanning on an ASP model.
Of course, the privacy, network-capacity, and liability problems with that approach are enormous.
Re:Technique (Score:4, Insightful)
True, but most of the new viruses that come out are produced by script kiddies and their virus construction kits, and heuristics work well for detecting these.
Besides, AV software does not stand alone. AV security includes scanning, monitoring and blocking at the mail servers and firewalls, good communication between AV software companies and IT AV staff, desktop security policies, and, most important, user training. Admittedly the last is the hardest, but well-informed users are less likely to infect themselves and risk infecting everyone else.
Details and implications (Score:2)
The idea, then, is to limit the rate at which a computer can connect to new computers, where "new" means those that are not on a recent history list. Dr Williamson's "throttle" (so called because it is both a kind of valve and a way of strangling viruses at birth) restricts such connections to one a second.
Given the large-institution focus of the article, I assumed the control would be external, at the network level. The only way to really stop a computer from connecting to "new" machines is to keep a record of connections and stop "new" ones external to the machine. If you can't secure the computer, secure the network, the author seems to be saying.
The author wonders why no one had thought of this before, and I can tell him that the reason is that it's contrary to the founding principles of the internet and it won't work. The whole idea behind the internet is to have a network without central control or intelligence. Putting this kind of invasive intelligence into the net adds complications useful only for censorship and control. How, pray tell, can you do this for a mail server? Mail servers contact new machines all day; that's their job! The virus mentioned as an example happened because of poor software from a certain vendor, Microsoft. The same trick can be had again if the virus shifts its mailing burden to the stupid IIS server.
Attention has been focused on the root cause of the problem: mail clients that run as root and automatically execute commands sent by strangers. Everyone said it was a bad idea when M$ did it, and everyone should continue to point the finger of blame in the right direction. Adding hacks like this elsewhere is a waste of time and has serious implications for the internet as a medium for information exchange.
human intervention (Score:3, Insightful)
Re:human intervention (Score:3, Interesting)
Personally, I have seen some interesting trojan epidemics on networks that are in no way connected to the Internet. There was a company that was terribly paranoid and allowed Internet use only and exclusively from a particular computer. This way they thought they could overcome the problems with viruses they had in the past. There was a not-so-dumb admin who dealt with the E-mail, filtering it through antivirus tools before copying it onto a diskette and sending it into the LAN. And you know? They kept having serious problems with viruses. Some deeper analysis showed that every trojaned E-mail containing a corporate cliche in the subject was always the cause of the next epidemic.
The best way to throttle viruses (Score:2, Interesting)
NOW we're talking! (Score:4, Insightful)
This will of course lead to a new class of virus.. (Score:5, Funny)
Re:This will of course lead to a new class of viru (Score:4, Funny)
You might have heard of it, it was called "Clippy"
Re:The "annoy the user to death" has already hit! (Score:2)
[Are you certain]
[press enter to exit]
[press escape to continue]
The"annoy the user to death" virus has already hit!
One connection per second? (Score:2, Insightful)
It would probably save other sites from being Slashdotted, though.
Re:One connection per second? (Score:2, Informative)
If you read the article, you'll see that the limit is on OUTgoing connections, not incoming traffic. In other words, this type of AV effort will not eliminate the Slashdot effect.
Hope he doesn't patent this (Score:2, Interesting)
I just got an image of him presenting his paper, and pointing to each audience member, "patent pending, patent pending, patent pending" ala Homer Simpson.
Not very sophisticated. (Score:4, Insightful)
Hrm... well, it might have some benefit for things like Nimda, but it won't do anything for nasties that spread via email. If this becomes a default in a future version of Windows, though, you can bet that any virus meant to propagate by opening outgoing connections will just self-throttle, or disable the feature first. Already there is precedent for this, such as Bugbear that disables software firewalls so it can get out and spread.
I would much rather see effort spent educating people to install security related patches regularly and turn off unused services, and push vendors towards "secure by default."
Re:Not very sophisticated. (Score:2)
The BASIC idea of finding ways to strangle virii and warn of spreads is a good one. But you make an excellent point that we have to consider ALL methods of spreading virii.
Re:Not very sophisticated. (Score:3, Interesting)
OK, this seems to point to the question: Why was the ability to connect to "new" computers at an extremely high rate there in the first place? Is that ability ever utilized to any extent in legitimate, day-to-day operations?
If so, this might cause you some problems, and putting "throttling" in there is a really bad idea. But if this ability isn't used, then maybe the "throttling" should be put in at the OS level.
The only time I can see having this at the OS level being a problem is when you first start up some big iron that needs to connect to thousands of clients. The OS might kill any attempt to do this. But once you've established a semi-regular list of clients, then having the OS thwart any attempts to connect to a massive number of "new" machines seems like a good idea.
security vs. privacy (Score:2, Interesting)
But to have this system installed you will be giving someone authorization to see your computer use profile, giving away your privacy. And it will not detect most viruses that are only interested in destroying your data and/or spamming your friends via email.
Re:security vs. privacy (Score:2)
I know I'd like to beat some of my users regularly with a stick.
Now we're gonna have (Score:2, Insightful)
Will it work? (Score:2, Interesting)
If it becomes a widespread implementation on the upstream routers, then virus writers will throttle their own connections to 1 per second to evade detection.
This defense was only tested against Nimda, and other viruses may work in other ways. Will it stop email virii?
Makes the Warhol worm a bit harder to implement, though.
Then we've at least partially won. (Score:2)
Re:Will it work? (Score:2)
Nimda (the first one) had a bug so it scanned all the IPs in the same order (it forgot to seed the random generator). But if a virus truly randomly seeks out IPs, it will be throttled for a short while. But after some time the same exponential behaviour will occur, where more and more computers infect more and more computers.
But he concludes correctly: Nimda will be throttled.
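To illustrate the missing-seed bug the parent describes, a toy C example (this is not Nimda's actual code, just the general mistake):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* srand((unsigned)time(NULL));   <- the seeding call that was forgotten */
        for (int i = 0; i < 3; i++)
            printf("scan target: %d.%d.%d.%d\n",
                   rand() % 256, rand() % 256, rand() % 256, rand() % 256);
        return 0;   /* without srand(), every host prints the same "random" list */
    }

With the seed left out, every infected machine walks the address space in the identical order, which is part of why the throttle handles the first Nimda so well, as the parent notes.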
Details, details (Score:2, Interesting)
...where are the details? What kind of heuristics is this 'throttle' using? Do they look for disparate connections, like 100+ individual hosts per minute, or simply for connections outside of a tripwire-esque 'connection profile' for the machine? What kind of protocols does the throttle watch?
I really enjoy the Economist, but this article is so shallow and fluffy, especially for them.
computer history (Score:2, Interesting)
No it won't (Score:2)
The only "user intervention" is the fact that once a virus starts opening outgoing connections like crazy, the user will perceive severely reduced system performance.
Not even a Gnutella client starting up and searching for other hosts can come close to the number of connections many virii open up. (Although it may be useful to whitelist certain apps as having permission to connect faster - They still should be throttled, but maybe 1 second for all apps but you can give Kazaa permissions for a
Don't trust anything (Score:2)
The other problem: What if KaZaA itself turned out to have an exploitable vulnerability and became infected?
Or if a virus deliberately infected KaZaA after coming into the system another way? (Note: Making the speed limit exceptions port-based would eliminate this, leaving only a vulnerability in KaZaA itself.)
In fact, port-based limit settings would be an excellent solution to a number of the issues of machines which have legit reasons to be opening lots of outgoing connections, like mail servers. Allow a high speed limit on outgoing SMTP, but throttle anything else. (Why would a mail server make numerous HTTP contacts?) Too bad that vulnerable MTAs are probably the second most common virus vector... But at least a mailserver could still be throttled against spreading an IIS worm.
Last but not least - How long until we see an implementation of this for Linux, possibly at the firewall level? (i.e. to restrict outgoing connections at a NAT server. Of course, such a server would inherently make it harder for a virus/worm to enter in the first place.)
Link to paper (Score:4, Informative)
Re:Link to paper (Score:2)
Now that I've read it, I see that he's just talking about the first connection to a computer. So, if your web page's images are all on the same server, no delay. If you have one on images.slashdot.org and another on adserver.f-edcompany.com and another on aj783.akamai.net, there will be a slight delay.
Issue at Hand (Score:5, Insightful)
Software is expected to behave correctly 100% of the time. How many of the developers here have had some strange bug that appears for only 1 out of every million users (not instances, otherwise it would happen in less than a second on almost all modern processors)? Then we are asked to fix it.
This solution is great: throttle the computer and lose that 2% of all connections being instantaneous, but then it won't be perfect.
I think we have to more realistically analyze the needs of modern software, and accept that it can "fail" to an acceptable degree if we want some superior functionality.
The human brain is great, but it fails (quite too much for myself). IBM is announcing building a computer that could simulate the human brain, but it won't reap the rewards of our brains until it's willing to give in to the issues that we face: uncertain failure.
With our "uncertain failure", look how great we are at calculating PI to the 100th digit (well, normal individuals anyway). Our brains certainly couldn't calculate nuclear simulations with that "uncertain failure".
We will probably have to split "computer science" into the "uncertain failure, superb flexibility" and the "perfect, 99.999% of the time" categories.
This sounds great for the "uncertain failure" group.
Problems With Insecurity (Score:4, Insightful)
Why do we not look at applications and give them a domain before we just open the floodgates? Why not just say, "hey, email comes from the outside world, I don't trust the outside world, so I won't let my email client do anything it wants to". I know that this wouldn't stop all of these problems, but I think the general idea would circumvent many virii.
This just ups the ante. (Score:2, Informative)
Of course, the article doesn't really say whether this is enforced on the local machines or applied from outside (i.e. at a switch or router). However, by talking about it as an inoculation, it suggests it really is enforced on the local machine.
It's a good idea, in general, but it has to be user-tweakable, and that means it's virus-tweakable too.
Good idea! (Score:2, Insightful)
Give your switches enough memory and let them keep a history of 20 IP addresses per host. (This number needs to be tweaked according to usage, of course.) When you get an IP packet going to a new host, record the address and start a 1-second timer. While the timer runs, drop all IP packets to hosts not on the list.
The packets you drop will be resent, and you get the desired behaviour.
Another advantage is that you only need to change the switches, not the systems.
Only problem I can see: What about web pages with lots of images from different servers? Those will take forever to load. You could tell everyone to use a proxy, but you wouldn't be able to run this throttling on the proxy...
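Roughly what that would look like, sketched in C (the table size, the one-second window, and the packet hook are my own guesses at an implementation, not anything a real switch vendor ships):

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    #define HISTORY_SIZE 20        /* recent destinations remembered per host */

    struct host_state {
        uint32_t history[HISTORY_SIZE];  /* recently contacted destination IPs */
        int      next_slot;              /* round-robin replacement index */
        time_t   timer_started;          /* 0 = no 1-second timer running */
    };

    /* Decide whether to forward a packet from this host to dst_ip. */
    bool allow_packet(struct host_state *h, uint32_t dst_ip, time_t now)
    {
        for (int i = 0; i < HISTORY_SIZE; i++)
            if (h->history[i] == dst_ip)
                return true;                  /* known destination: pass it */

        if (h->timer_started != 0 && now - h->timer_started < 1)
            return false;                     /* throttled: drop it, TCP will resend */

        h->history[h->next_slot] = dst_ip;    /* admit one new destination */
        h->next_slot = (h->next_slot + 1) % HISTORY_SIZE;
        h->timer_started = now;
        return true;
    }

Dropping rather than queueing keeps the switch stateless about the packets themselves; as the parent says, the sender's own retransmission provides the delay.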
suggestion... (Score:3, Funny)
Re:suggestion... (Score:2, Funny)
Run Windows! [...] Maybe it would slow down the spreading of viruses too?
You really haven't been paying attention, have you?! :))
If education can thwart AIDS… (Score:2, Insightful)
Support Neo-Ludditism (Score:4, Funny)
Prevent the spread of viruses, make computers more secure, enjoy life in the Real World, spend more time with your family & loved ones!
All this and more can be yours! Support Neo-Ludditism - break your computer today!
No computers means no computer problems!
Just imagine a profitable new career in
[/SARCASM]
This will only work for TCP. What about UDP? (Score:3, Insightful)
You have it wrong. (Score:2)
i.e. if you've opened an outbound connection to that host already in recent history - No speed limit.
Microsoft already does this... (Score:5, Funny)
Windows: "Are you a dll?"
Virii thought: "Umm... Yes. I like Outlook."
Windows: "Okay, hang on..."
Launches Outlook...
Virii thought: "Why is everything blue?"
Windows:
Virii thought: "Oh, if only I had hands!!!"
Re:Microsoft already does this... (Score:2, Informative)
-j
Re:Wtf are you smoking? (Score:2)
Is this on the individual computers? (Score:2, Insightful)
It would probably be more effective as some kind of network device/firewall that eats excessive network connection requests, then lets the administrator know that computer X appears to be infected (bonus points for inspecting packet content to determine type of infection).
In fact, that implementation isn't new; I recall seeing a computer at a colocation site set up to inspect HTTP traffic and block HTTP requests that looked like Code Red infection attempts.
virus writers will respond, of course (Score:3, Insightful)
Then we've partially won (Score:2)
The nice thing about this is that *even if it doesn't improve detection*, it slows down viruses a large amount. So the virus writer has rewritten his virus to avoid detection by throttling its own connections.
Guess what? We've forced that virus writer to cripple his virus' ability to spread in order to avoid detection. Yes, the virus can spread undetected. No, it can't spread as rapidly as Nimda or Code Red did.
Umm, I don't buy it. (Score:5, Insightful)
I can't help but think that his logic is flawed, however. For example, most corporate headaches come from email-based virii. If the only connection needed for the virus to spread is to the email server it already has access to, there is no delay for the emails to be sent out to the mail server. No one could request that the email server be throttled and keep their job, so the infected emails would be sent out with no perceptible delay caused by the throttling.
The only thing this might help with is worms, not virii in the more common sense, such as email-based LookOut virii.
How about, instead of throttling network access, we move to more reliable code, better access controls at the kernel level, and a hardware platform that makes buffer overruns and stack smashing a thing of the past? While I am anti-MS, Palladium does actually have some good ideas on the hardware level. It's the DRM level that stinks to high heaven.
Re:Umm, I don't buy it. That's good because ... (Score:3, Insightful)
How about, instead of throttling network access, we move to more reliable code, better access controls at the kernel level, and a hardware platform that makes buffer overruns and stack smashing a thing of the past? While I am anti-MS, Palladium does actually have some good ideas on the hardware level. It's the DRM level that stinks to high heaven.
I've got good news for you. The average free *nix already has more reliable code with better access controls at the kernel level. You can check it out for yourself because the software is free, unlike that other silly stuff you mentioned from a particular abusive and convicted vendor, cough, Microsoft. Heck, you could even just use a mail client that does not run as root and does not automatically execute commands sent from strangers, like most free software. Way to go!
I've also got bad news for you. Buffer overflows cannot be defeated at the hardware level in a general-purpose computer. Why is left as an exercise for the reader, but a shortcut is that Microsoft says it will work.
why not? (Score:2, Funny)
I mean, they make money on sales of anti-virus software without any kind of regulation. Hell, with the way corporate America is already going, who says it's not a big scam anyhow?
Somebody smoking crack? (Score:2)
The idea, then, is to limit the rate at which a computer can connect to new computers, where "new" means those that are not on a recent history list. Dr Williamson's "throttle" (so called because it is both a kind of valve and a way of strangling viruses at birth) restricts such connections to one a second. This might not sound like much to a human, but to a computer virus it is an age.
This sounds to me like the idea is to basically make the TCP/IP stack single-threaded.
OK, smart guy, so let's use an HTTP request as an example. Loading a web page, a browser could theoretically make several connections to several different servers. So, with our single-threaded, "throttled" TCP/IP stack, a simple web page could take several seconds to load, at least until the server on the other end is in the "history".
OK, so this "history" the document describes... where is it kept? Hard drive? RAM? So, for every outgoing connection, the machine needs to check the address against a table somewhere... this is added overhead. Let's say that the address needs to be resolved... well, then we need to go through this process a second time just for the DNS server.
So, this "Doctor Matthew Williamson" of HP... is he full of crap? I dunno -- I don't have a PhD.
Re:Somebody smoking crack? (Score:2)
Side benefit: I suppose it would slow down the spammers, too, forcing them back to sending snail mail chain letters.
Re:Somebody smoking crack? (Score:2)
So then what? Is Dr. Whatshisname going to tell us that this doesn't apply to internet servers? Oh good... that'll be where all the viruses reside.
--csb
No one smoking crack (Score:2)
Even if 10% of infected machines are unthrottled because they need to be for normal use, we've severely reduced the capability of 90% of the transmission vectors of a virus. This scheme isn't about black and white winning/losing - It's about simply slowing the damn things down so they're less of a threat.
It's a start . . . (Score:2)
However, this is really only one idea. Its value is in pointing out that to deal with an age of virii, unreliable web pages, email viruses, trojans, bad firewalls, and everything else that didn't exist fifty years ago, we need to think in radically different ways.
The greatest value of this research is really going to be how it gets people to take a new look at computing. And for that, I say, it is about time. Our ideas for dealing with computer troubles need to evolve since the troubles we're facing continue to occur, spread, and change.
Re:It's a start . . . (Score:2)
These virus concerns should only bother Windows users right now.
P2P (Score:3, Interesting)
Re:P2P (Score:2)
Gnutella opens 1 to n connections between your server and remote servers. Each one is kept open for communication until one end closes it, at which time the client will open a connection to a new server.
The process of opening a new connection can involve multiple opens, as it will search to find a client which is currently operating and able to accept new connections (not overloaded) from a cache of hosts which have been seen to previously communicate on the network.
Real Software Throttling (Score:2, Funny)
Sounds simple (Score:3, Insightful)
When do we see this in iptables ??
Just secure the code (Score:3, Informative)
I support the notion that the key to ultimate security lies in the quality of the code. I'll go further and say that open source is the key to reaching the absolute goal of impenetrable code. The open source model is our best bet at ensuring that many, many eyes (with varying degrees of skill and with different intentions) will scan the code for flaws. I just wish that some of the more popular open source projects were more heavily reviewed before their latest builds went up.
Are Viruses a real problem? (Score:3, Insightful)
This is like banging your head with a hammer and wearing a thick, foam rubber hat so it doesn't hurt as much.
Mixed strategy is best... (Score:2)
False Positives (Score:4, Insightful)
Web-based message boards -- Several of the message boards that I'm on allow users to include inline images. However, the users are responsible for hosting the images on their own servers. So a given page full of messages could easily add an extra 10 hosts to the "fresh contact" list, causing a 10 second delay. Furthermore, at least one of the message boards has a large enough user population that the "recent contact" list wouldn't help out enough at reducing the delay.
Half-Life -- The first thing Half-Life does after acquiring a list of servers from the master server list is to check each one. For even a new mod (like Natural Selection), this can be hundreds of servers. For something popular (like Counter-Strike), it's thousands.
How about writing faux viruses? (Score:2)
Let's fast-forward. Today, OSes only seem more secure; they aren't. We don't get loads of virus software floating about like we used to. More people don't know about viruses than do... and what's worse, there are fewer viruses about, but they do more damage.
I'm also surprised that intrusion detection systems don't have nag screens attached to daemons to let you know that your software needs to be updated, or you are fucked.
Servers should be required to run a small cron-jobbed program like Nessus (search freshmeat), which would nag you when the data is old. Snort, the IDS software, should do the same.
For the lack of viruses, we need whitehats to write exploits that aren't damaging but are
Maybe if people were made more aware that the computing world isn't all plug-n-play, bells and whistles - that you are using a device that needs care.
Integral tripwire. (Score:3, Insightful)
Further, it should be (putting on fire suit) a function of the government to finance an independent system to publicize standardized virus recognition fingerprints. Then it should be integral to the operating system to run a scan as part of the executable load function. This would be justified as protecting commerce. This won't solve the problem of "script" viruses that play off the integration features of Microsoft products, but that can be dealt with by requiring Microsoft to produce products that actually ask for permission from the user before doing stupid stuff. Sometimes a parent just has to take control of their offspring. Either that or firewall off anyone using Microsoft products; most of them are so non-standard they aren't hard to recognize. Many places don't let Microsoft attachments go through, and it has saved them a lot of lost time. XML and other standard formats work just fine and are interoperable with other systems.
Do unto others as you would have done to yourself, don't let America become like Israel. It is un-American to support human rights violations, support justice in Palestine.
P2P programs? (Score:2)
Not so new: remember syn-cookies? (Score:3, Interesting)
The idea of slowing down the attack rate of an intruder is really not so new. One example is the infamous Linux "syn-cookies" countermeasure to SYN flooding. Syn-cookies prevent the excessive use of connection resources by reserving those resources for connections that have evidently gone through a genuine TCP three-way handshake. This forces the attack to slow down, since instead of throwing SYN packets at a host as fast as it can, it now has to do a proper three-way handshake. This involves waiting for the associated round-trip times, which causes the attack to slow down to the speed of genuine connection attempts.
Now, since the attack has been slowed down to the speed of the genuine users, it takes part in the competition for connection resources on fair and equal ground with other users, which makes it only as successful as other users at acquiring connection resources. That means that the rate of attack is not quick enough for a resource starvation attack anymore, and it is reduced to a resource abuse attack. Since the latter type of attack needs to be employed for a long time to cause significant damage, the risk of being discovered becomes too big to make the attack practical.
Well, now, this is not exactly a "throttling" countermeasure as described in the Economist's article. The countermeasure from the article selectively slows down outgoing connection attempts to "new" hosts, in order to further slow down the attack in an attempt to put genuine users not on equal footing with the attack but at a significant advantage. This element of selection may be new; at least I cannot come up with an older example. As others commented before, the selection technique also has its disadvantages:
a) depending on the attack, different kinds of selection methods must be employed to actually single out the malicious connections -- there is no predefinable "catch-all-attacks" selection method
b) depending on the services you run on your network, the effort you have to make to find out how your usage patterns can be discerned from known attack patterns varies.
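For anyone who has not seen how syn-cookies manage to be stateless, a very simplified sketch in C; the hash below is a toy stand-in (a real implementation uses a keyed cryptographic hash and also folds MSS information into the cookie):

    #include <stdint.h>

    /* Toy mixing function; a real implementation uses a keyed
       cryptographic hash here. */
    static uint32_t toy_hash(uint32_t a, uint32_t b, uint32_t c, uint32_t key)
    {
        uint32_t h = key ^ 0x9e3779b9u;
        h = (h ^ a) * 0x85ebca6bu;
        h = (h ^ b) * 0xc2b2ae35u;
        h = (h ^ c) * 0x27d4eb2fu;
        return h ^ (h >> 16);
    }

    /* The server's initial sequence number *is* the cookie: it encodes the
       connection 4-tuple and a coarse timestamp, so nothing is stored per SYN. */
    uint32_t syn_cookie(uint32_t saddr, uint32_t daddr, uint16_t sport,
                        uint16_t dport, uint32_t secret, uint32_t minute)
    {
        return toy_hash(saddr ^ daddr, ((uint32_t)sport << 16) | dport,
                        minute, secret);
    }

    /* On the final ACK, recompute the cookie; a valid handshake must
       acknowledge our ISN + 1, which proves the client really completed it. */
    int syn_cookie_valid(uint32_t ack_seq, uint32_t saddr, uint32_t daddr,
                         uint16_t sport, uint16_t dport,
                         uint32_t secret, uint32_t minute)
    {
        return ack_seq - 1 == syn_cookie(saddr, daddr, sport, dport,
                                         secret, minute);
    }

Because no table entry exists until the ACK checks out, a SYN flood has nothing to exhaust; the attacker is forced into full round trips, exactly the slowdown described above.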
details? (Score:3, Interesting)
Ah yes, well, see, we're going to throttle the network, so that the virus spreads more slowly.
Throttle what? bandwidth? That wouldn't have much of an effect on virus activity, but it certainly would affect everything else. Connections per second would probably slow down a virus, but would basically shut down SMB and DNS as well.
You better make sure Ridge doesn't hear about this, or we'll be required by law to wear 20 lb. lead shoes everywhere we go, to make it easier to catch running terrorists.
What about datacenters... (Score:3, Interesting)
In the datacenter I work at we handle 2000 transactions per second per machine on average with peaks reaching 10000 transactions per second. Not every transaction requires a new connection because of caching in our software but we create far more than 1 new connection per second.
No Replacement for Good Security Practice (Score:4, Interesting)
Now, why don't these things happen? Time. Money. A combination of both. Convenience. Lack of understanding on the part of users.
But the big one is the belief that security is a product that can be purchased, that there is a quick fix out there that will solve all your security ills and hide you from all the bad guys.
Security is a PROCESS. Better yet, it's a combination of processes, relating to employees at all levels of your organization, from the CEO to the custodial service contracted by your property manager. Hell, even building safer software isn't going to help you if your users refuse to use it 'cause it's a pain in the ass. Remember, they believe in the panacea of the "single sign-on". They put their passwords on post-its around their workstations. They keep their contacts (oh help us) in their Hotmail addressbook, regardless of how many 'sploits have been uncovered in Hotmail. They're afraid of computers.
Security is expensive. And it should be, because it has to be done right. You need user participation, on all levels. It requires education and training, and a reduction in ease of use.
There is no magic wand.
--mandi
Re:How's that again? (Score:2)
No DoSing here. It's completely transparent to the guy in room 207 sending email or looking up stuff on the intranet.
Would probably work... (Score:2)
A Distributed Denial of Service is done by hijacking many user boxes and, from each, bombarding a server with hundreds of bogus requests per second. This throttle would likely choke that (unless the server being DDoS'd is on the user's list of known servers).
No, it wouldn't. And a solution to some spam... (Score:2)
The throttle rules are most likely something like this:
Have I connected to this host in the past x minutes?
Yes -> Originate as many new connections to that host as I want, as fast as I want.
No -> Have I made a connection to a new machine in the last second?
Yes -> Wait 1 second.
No -> Go ahead, make a connection and put this host in the history list.
Anything else would cause a problem even in normal usage.
Note: This is only applied to outgoing connections, not incoming connections (So servers wouldn't be affected unless they were infected and suddenly tried to make lots of outgoing connections.)
Interestingly, this would put a major damper on spammers abusing open relays. One would probably have to increase the speed limit for normal mailserver operation, but even "sane" speeds would be enough to severely retard spammers except for the largest of mailservers.
It wouldn't work if the spammer had control over the machine doing the worst of the gruntwork, though - He could just kill the throttle. But most of the time the dirty work is done by some unsuspecting open relay.
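As a sketch, those rules might look something like this in C; the history length, the timeout, and the function names are guesses for illustration, not Williamson's actual implementation:

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    #define HISTORY_LEN     64        /* hosts remembered */
    #define HISTORY_TIMEOUT (5 * 60)  /* the "past x minutes": 5 here, arbitrarily */

    struct entry { uint32_t host; time_t last_seen; };
    static struct entry history[HISTORY_LEN];
    static time_t last_new_connection;   /* when we last admitted a new host */

    /* Call before every outgoing connection attempt. */
    void throttle_connect(uint32_t host)
    {
        time_t now = time(NULL);

        /* Have I connected to this host in the past x minutes? Then go ahead. */
        for (int i = 0; i < HISTORY_LEN; i++) {
            if (history[i].host == host &&
                now - history[i].last_seen < HISTORY_TIMEOUT) {
                history[i].last_seen = now;
                return;
            }
        }

        /* New host: allow at most one such connection per second. */
        if (now - last_new_connection < 1)
            sleep(1);                            /* "Wait 1 second." */
        last_new_connection = time(NULL);

        /* Record the new host, evicting the stalest entry. */
        int oldest = 0;
        for (int i = 1; i < HISTORY_LEN; i++)
            if (history[i].last_seen < history[oldest].last_seen)
                oldest = i;
        history[oldest].host = host;
        history[oldest].last_seen = last_new_connection;
    }

Note that incoming connections never touch this path, which is why servers are unaffected until they themselves start originating connections to strangers.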
Re:How's that again? (Score:2, Insightful)
Hackers/kiddies/whomever are annoyingly clever at times. My assumption is that someone may be able to take advantage of a throttle to compromise legitimate traffic.
Since that's what exploits are all about, I have absolutely no doubt someone will try it if such defenses become commonplace.