Intrusion Tolerance - Security's Next Big Thing?
An anonymous reader writes "DARPA's OASIS program consists of more than 20 research projects in intrusion-tolerant systems. The basic idea is to concede that systems will be penetrated by malware and hackers, but to keep operating anyway. The projects take a wide variety of technical approaches to providing intrusion tolerance. MIT's Automatic Trust Management uses models of trust to choose from a variety of ways to achieve system goals; Duke/MCNC's SITAR (Scalable Intrusion Tolerant Architecture) adapts tricks from fault-tolerant systems and distributes decision-making; BBN-Illinois-Maryland-Boeing's ITUA employs unpredictable adaptation. Shutting down the military while waging war is not an option, but the idea of continuing to operate critical defense systems even after known penetration by hostile hackers or damaging worms will take some getting used to."
Biological Systems (Score:5, Insightful)
I think an interesting option for powerful machines would be to 'fall on the sword' if complete failure were imminent.
Repeat after me... (Score:5, Funny)
I must not fear. Fear is the mind-killer. Fear is the little death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past, I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.
-- The Bene Gesserit Litany of Fear
Dune by Frank Herbert
Re:Repeat after me... (Score:2, Funny)
Re:Repeat after me... (Score:5, Funny)
This replaces the old mantra right? "I refuse to patch, for patches deny faith, and without faith I am nothing." (Douglas Adams)
Re:Biological Systems - Scares me! (Score:5, Interesting)
But they (biological systems) also autonomously evolve, compete strongly, and often get wiped out. And when they do too well, they have the tendency to consume all resources, pollute, and then die out or reinvent themselves.
We (humans) are biological animals. Let's be careful building something that will compete with us. The potential dangers of this scenario have been played out in Terminator and countless other sci-fi epics. Self-aware entities fight for their survival and the survival of their species/genes.
You might say "but we control the technology", but in fact the next generation of computers will control us. Digital Rights Management (DRM) is in effect our surrendering of our rights to machines. As more of our survival becomes dependent on machines (as has been increasing at an exponential rate recently), this means our rights of survival are out of our hands. Think of DRM as the Declaration of Independence, but in reverse -- well, we had a nice run there for a couple hundred years! But I'd rather be a heavily-taxed under-represented colonist of a foreign empire than a farm animal to machine masters any day.
I don't mean to rant tinfoil hat conspiracy nonsense, and it's important to secure our systems from collapse, but let's not be so quick to push ourselves toward slavery just yet. I think this (self-aware networks) is an area that is as important as nano/biotech to watch out for, and it's far more likely that we become totally enslaved to technology than that we all get turned into gray goo.
Re:Biological Systems - Scares me! (Score:2)
I figure I'd better have a say in what's going to happen in my life regarding technology. I imagine humans WILL become obsolete, so the best we can do is try not to make it painful for ourselves when it happens.
Re:Biological Systems - Scares me! (Score:3, Funny)
Dintcha just know that was coming? :o)
Re:Biological Systems - Scares me! (Score:4, Interesting)
Our biological forms are too fragile to survive anywhere long term except here on Earth. Even if we found a way to terraform other worlds, we would still need intelligent machines to do it for us and then to get us there.
And as many futurologists have pointed out, if we do pursue such technology, there *will* come a point in the next few decades when our creations' intelligence finally surpasses our own.
So what are you going to do? Crawl back to your cave, maybe even give up using fire because of the risk of where it might lead? We need to meet this challenge head on; prepare for it, make room for it in our plans.
I think what it boils down to is this: will our creations tolerate us, can we co-exist? I think the answer lies here: if we ourselves are moral then so will be our children and we will live in peace. If we are not, though, and we create children without any moral spirit, well yes, then as a biological species we're doomed.
Re:Biological Systems - Scares me! (Score:2)
They thought that by now (read: the beginning of the 21st century) we'd have intelligent machines that surpass the intelligence of humans and which help (or perhaps hinder) our thinking processes.
Let me give you a clue. Our "fragile" forms are a lot less fragile than our computers or our machines. Sometimes we armor plate them to survive a teeny bit longer than we would if naked in a harsh environment.
Re:Biological Systems - Scares me! (Score:3, Insightful)
But the technology we have today was unforeseen by previous generations. Just think about the internet for example. Asimov came closest I think, with his "Multivac" - but even he thought it was much farther off.
So the technology may yet appear in our own lifetimes. Once the right component density is available (only a matter of time, now) it could take just one breakthrough in AI systems design to change everything.
But if you have a principled objection to the
Re:Biological Systems - Scares me! (Score:2)
I think I see your problem. You're taking your hints from science fiction authors rather than the science itself. Granted, he also predicted the nature of AI, though reality hasn't come close to it.
Predicting the internet isn't a big stretch by comparison. The difference in the amount of knowledge needed to do it is like the difference between drinking a coke and drinking all the water in the ocean. We only begin to
Re:Biological Systems - Scares me! (Score:2)
*sigh*. I don't have a problem, and you took this out of context. I only mentioned science fiction in the context of what people are capable of imagining versus what actually happens, to illustrate that what you think is plausible now is far short of what might actually appear in a few short decades.
Will respond to the rest later, gotta be somewhere else now.
Re:Biological Systems - Scares me! (Score:2)
Obviously, or we'd already have done so. Many things once thought impossible are now merely difficult, or even commonplace.
Re:Biological Systems - Scares me! (Score:3, Interesting)
Just imagine what it would be like if we could abandon our fragile, biological bodies for a self-repairing machine body:
- Space travel: life support greatly simplified. Just need an energy source and sufficient radiation shielding for the components which will already
Re:Biological Systems - Scares me! (Score:3, Insightful)
A network that knows its own configuration, is able to introspect on the status of its nodes, and has the power to make changes to its routing and component members is "self aware" and "self mutable". It is also well within our technological capacity to build one. The abilities to introspect and self-modify are the core of intelligence. Read Gödel, Escher, Bach: An Eternal Golden Braid [amazon.com].
Re:Biological Systems - Scares me! (Score:2)
2. intelligent machines
3. robot vacuums
4. ???
5. Skynet! (What happened to "Profit!" for this step?)
On an only slightly more serious note... (but not much)
If we were to invent a truly conscious and intelligent machine: (computer/program/etc)
1: Would it then be 'slavery'? Would we need to 'free' it?
2: Would pulling the power plug be murder?
Re:Biological Systems - Scares me! (Score:3, Insightful)
Whether strong AI is possible is still an open question. It has been "coming soon" now for at least four decades.
Re:Biological Systems - Scares me! (Score:2)
Re:Biological Systems - Scares me! (Score:2)
Where is the evidence that any part of the human brain can do anything that cannot be simulated by a computer? Surely one computer to simulate each brain cell is unrealistic, because we don't have that many computers. But with sufficient parallelism there is no reason to think they couldn't become self-aware.
Re:Biological Systems (Score:5, Interesting)
Intrusion tolerance, IMO, is just a subset of fault tolerance -- something failed to let the intrusion happen. So how do you tolerate that sort of fault?
A good fault-tolerant system will have multiple layers that fail in totally different ways. This will thwart most automated attacks, since they tend to exploit a single, known vulnerability and won't be equipped to respond to another, totally different layer. If the layers are different enough (say a *nix-based firewall behind a Windows-based firewall), most attackers will be so thrown off that they will (at the very least) have to spend a significant amount of time trying to figure out what to do next. This buys you time to realize what's going on and stop it. Couple this with a very low interdependence, and an attacker can spend a lot of time breaking into something that may be of little or no use to them.
Intrusion tolerance? You betcha -- this acknowledges the fact that there's no such thing as failsafe security, but takes advantage of a wide variety of options, which won't fail similarly, to slow down attacks and give administrators time to see what's going on and stop it.
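A back-of-the-envelope sketch in C of why layers that fail differently beat a monoculture -- the per-layer exploit probabilities here are invented purely for illustration:

/* Toy model of layered, heterogeneous defense.
 * Exploit probabilities are made-up numbers, not data. */
#include <stdio.h>

int main(void)
{
    double p_exploit = 0.10;  /* chance a given attack defeats one layer */

    /* Monoculture: both firewalls share the same flaw, so one working
     * exploit defeats the whole stack. */
    double p_monoculture = p_exploit;

    /* Heterogeneous stack: layers fail independently, so the attacker
     * needs a separate working exploit for each one. */
    double p_hetero = p_exploit * p_exploit;

    printf("P(breach), monoculture of 2 layers: %.3f\n", p_monoculture);
    printf("P(breach), 2 independent layers:    %.3f\n", p_hetero);
    return 0;
}

The independence assumption is doing all the work there, which is exactly the argument for mixing platforms instead of stacking two copies of the same one.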
Isn't this all obvious though? It seems like it when you read it, but the 4 concepts noted above are very often ignored (to varying degrees). Especially #2; this is the hardest because it means hiring a *nix geek and a Windows geek and a Cisco geek and maybe a couple of other ones as well, and no one wants to spend that kind of money. So instead, they get a guy or gal who only knows one system, so everything lives or dies on the failings of that system. Or even worse, they hire a whole team of guys and/or gals that all agree to use the same platform, for simplicity's sake. Bad! Bad! Remember the scale:
More Secure...................Less Secure
_________________________________________
Less Convenient...........More Convenient
Eh. Talking's easy...
--
eep
Re:Biological Systems (Score:2, Insightful)
No, I'm not. I have lots of various kinds of cells, arranged in tissues and organs... not a single culture. And if they need a culture, it can matter where they get it... it's not all the same. A few supporting reasons beyond textbooks, school, etc.: 1) Some diseases only affect certain tissues. 2) Organ transplants work.
Re:Biological Systems (Score:2)
Yes and no. An organism will sacrifice individual cells so that the rest may live on.
Is the machine the organism or is it just a cell?
Re:Biological Systems (Score:5, Interesting)
Think of your computer as a cell, and the network as the biological system.
The network can continue running when infected, but not the cell. When the cell is infected, it dies (or worse.)
Ergo, I think intrusion tolerance is a meritless approach.
This idea I like. Call this intrusion intolerance. Require the system to meet a comprehensive suite of invariant conditions, or cease operating. A much more practical and effective solution.
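A minimal sketch of that intrusion-intolerant watchdog in C, assuming a trusted baseline recorded at install time -- the checksum is a toy stand-in for a real cryptographic hash, and the file and expected value are placeholders:

/* "Intrusion intolerance": verify invariants, refuse to run otherwise. */
#include <stdio.h>
#include <stdlib.h>

static unsigned long checksum(const char *path)
{
    FILE *f = fopen(path, "rb");
    unsigned long sum = 0;
    int c;
    if (!f) return 0;                 /* a missing file fails the check */
    while ((c = fgetc(f)) != EOF)
        sum = sum * 31 + (unsigned char)c;
    fclose(f);
    return sum;
}

int main(void)
{
    /* Hypothetical baseline recorded at install time. */
    const char *critical = "/sbin/init";
    unsigned long expected = 0x1f3a9c2UL;   /* placeholder value */

    if (checksum(critical) != expected) {
        fprintf(stderr, "invariant violated: %s modified, halting\n",
                critical);
        exit(EXIT_FAILURE);           /* cease operating, as above */
    }
    puts("invariants hold, continuing");
    return 0;
}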
Re:Biological Systems (Score:2)
Re:Biological Systems (Score:2)
The way you get the larger system to be intrusion tolerant is to make the subsystems intrusion intolerant. The time to 'fall on the sword' isn't when complete failure is imminent, it's when it's working in the wrong direction.
You can build reliable systems from unreliable components. Unfortunately the norm seems to be building unreliable systems from reliable components.
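The classic trick for the first half is majority voting, as in this toy triple-modular-redundancy sketch (the replica results are stubbed in):

/* Triple modular redundancy: a reliable answer from three unreliable
 * replicas, by majority vote. */
#include <stdio.h>

static int vote(int a, int b, int c)
{
    if (a == b || a == c) return a;   /* a agrees with someone */
    return b;                         /* else b and c must agree,
                                         or there is no majority at all */
}

int main(void)
{
    /* One replica returns a wrong (or tampered-with) result. */
    int r1 = 42, r2 = 42, r3 = 13;
    printf("voted result: %d\n", vote(r1, r2, r3));
    return 0;
}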
Ed note : no, it isn't (Score:4, Funny)
1) Remove all sources of power
2) Incinerate the hard disk, RAM, motherboard and, most importantly, the sysadmin who was in charge of the box.
3) Bury the ashes in a safe concrete cavern; do not touch for 1000 years.
Re:Ed note : no, it isn't (Score:2, Funny)
"intrusion tolerance" (Score:4, Funny)
Somebody drag my mind out of the gutter please!
Re:"intrusion tolerance" (Score:2)
Obvious Question... (Score:4, Interesting)
Re:Obvious Question... (Score:3, Interesting)
Analogy (Score:5, Interesting)
However, if your servers/farms are crunching numbers for satellite recon or running a battlefield communication center, then you're not quite sure how they would behave. A lot of modelling and discussion will go on about this, but some of these problems (of data consistency) have already been handled previously in Computer Science... so it's not that big a deal.
It will, I guess, be like one of those "decisions" a battlefield commander makes: how much he trusts the intel he is getting, how he wishes to proceed, and whether the risks are acceptable.
Similarly, the network/systems people will be making choices about whether they can live with this intrusion or not... how best to handle it without stopping the grid.
Actually, intrusion tolerance is already here (Score:3, Interesting)
When I started writing Hermes (see my sig), one of the major issues I dealt with was security and intrusion tolerance. The question is -- given that this would be used to access confidential customer information
Re:Analogy (Score:2)
That's why the smart commander avoids a hardware monoculture through the use of AMD boxen as well. In addition, fast AMD processors may be used in combat as incendiary devices.
That's what war is all about! (Score:5, Interesting)
What, do they think the military goes home when someone gets killed or they find out there might be a spy? That's why our military security is completely segmented. The whole concept of a need-to-know basis is the understanding that information will fall into the wrong hands; you just want to minimize how much information can fall into the wrong hands when someone or something is compromised. That computers, especially military computers, would follow this highly pragmatic principle shouldn't come as much of a surprise.
Fog of War is the operative model (Score:5, Interesting)
There is an old philosophy that you don't need to create a perfect lie. You only need to tell so many lies that the truth can no longer be seen.
A system of honeypots, firewalls, and harmless paths into a network would allow a hacker to be studied, traced, and combated (counter-hacked?).
The law is becoming an obstacle to such an approach. There is legal speculation that honeypots constitute a form of wiretapping. Bad laws are going to make it very difficult to be a white hat in a few years.
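For the curious: a bare-bones honeypot is just a listener that serves nothing and logs everything, roughly like this C sketch (the port number is arbitrary, and see the wiretapping caveat above before deploying anything of the sort):

/* Minimal honeypot: accept connections, log the source, serve nothing. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(2323);          /* arbitrary unused port */

    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(s, 8) < 0) {
        perror("honeypot");
        return 1;
    }
    for (;;) {
        struct sockaddr_in peer;
        socklen_t len = sizeof peer;
        int c = accept(s, (struct sockaddr *)&peer, &len);
        if (c < 0)
            continue;
        /* Log the probe; a real honeypot would emulate a service here
         * so the attacker can be studied and traced. */
        printf("probe from %s:%d\n", inet_ntoa(peer.sin_addr),
               ntohs(peer.sin_port));
        close(c);
    }
}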
Re:Fog of War is the operative model (Score:2)
Re:That's what war is all about! (Score:5, Insightful)
There's a reason former US presidents get USSS protection for quite some time (now 10 years, formerly life) after leaving office - What they know remains highly prejudicial to national security after they go.
The problem with computers is that you can force them to reveal everything they know without leaving them catatonic with drugs or physically destroyed - In theory, nobody would ever know.
This biological concept of security needs to use the full biological model of sacrificial guards. The body repels invaders by sacrificing cells to attack the invader. A computer that merrily allows an intruder to work its way back through the network until they can read everything is no use.
Maybe create switches that have fusible links on the network ports that can be destroyed with a command from within the network? Make the links cheap and easy to replace, so that it's not a major imposition to fix if someone does it maliciously or accidentally. A physically "down" network port is absolute security against a remote attacker, particularly when a computer only has a single NIC.
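A software stand-in for the fusible link might look like the sketch below (interface name assumed, root required). Unlike a truly fused link, anything done in software can in principle be undone by an attacker who still has root -- which is exactly the argument for doing it in hardware:

/* On an intrusion signal, administratively down the NIC. */
#include <stdio.h>
#include <stdlib.h>

int sever_link(const char *ifname)
{
    char cmd[128];
    snprintf(cmd, sizeof cmd, "ip link set dev %s down", ifname);
    return system(cmd);          /* 0 on success */
}

int main(void)
{
    if (sever_link("eth0") != 0)   /* "eth0" is an assumed name */
        fprintf(stderr, "failed to down interface\n");
    return 0;
}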
Re:That's what war is all about! (Score:5, Insightful)
I don't think the idea is that the computers will just ignore intrusions. At the very least, they'll notify a human operator that an intrusion has taken place while trying to continue normal functioning. If possible it will probably try to eliminate the intrusion.
However, the first priority is to continue its primary functions. The military can't afford to have its communication grid or its flight control or other items of such a crucial nature shut down in the middle of combat, not unless there's a backup ready to take over. (And do you trust a compromised machine to decide whether or not a backup system is available?)
So the system continues to do its best to carry out its tasks while a human operator decides when and if the machine can be shut down and another swapped in to take its place, and coordinates any possible counter-hacking operations.
If you want to fall back to a cold war/MAD mentality, here's a worst case scenario for you. Say that twenty years from now China launches an unexpected nuclear ICBM assault against the US. At the same time Chinese hackers attempt to infiltrate every known computer in NORAD and any SDI systems. Would you want the computers to automatically destroy themselves, thereby eliminating any chance of a timely defense or counterattack, or assume that the hackers haven't got full access and keep the computers going as long as possible since the other alternative is death?
And if you're going for a MAD strategy, which of those two systems would you want your adversaries to know that you have?
Re:That's what war is all about! (Score:3, Informative)
Re:That's what war is all about! (Score:2)
Who's to say they're attached to a generally accessible system? Maybe China has planted moles in the US's military departments who can access the military only networks the machines are on. Maybe
Re:That's what war is all about! (Score:2)
True? Maybe. Maybe not. But worth thinking about.
Re:That's what war is all about! (Score:2)
True? Maybe. Maybe not. But worth thinking about.
I don't really agree with the idea of MAD. I think that restarting work on SDI is just about the only good thing Bush Jr. has done.
That being said, however, if those in charge decide to go with MAD, I'd a
Yeah! (Score:3, Insightful)
No, that's great.
This [slashdot.org] and this [slashdot.org] are complete surprises. Who would think to create a monoculture of poor security systems like that? Especially after r
Perhaps systems which undo intrusions? (Score:5, Interesting)
Other interesting ideas would be determining which "tainted" processes were run or otherwise affected (library overwrites, etc.) by the intruder, and automatically sandboxing those processes in a nifty little world that looks realistic, but couldn't be used for a DDoS.
Anyone up for writing a drop-in libc replacement that screens any attempts to overwrite libc? You'd also have to override the linker behavior, so that an attacker couldn't just LD_PRELOAD a normal libc for their apps. You'd still be open to statically compiled apps, so this may be a lot of work for only a little gain.
Of course, this would make it hard to upgrade libc
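As a sketch of the interposition mechanism (not the drop-in libc replacement itself, which, as noted above, LD_PRELOAD can't securely provide), here is a hypothetical preload shim that refuses writable opens of anything that looks like libc:

/* shield.c -- illustrative write-screening via library interposition.
 * Build: gcc -shared -fPIC shield.c -o shield.so -ldl
 * Use:   LD_PRELOAD=./shield.so some_program
 * As the post notes, static binaries or the attacker's own preload
 * bypass this, so it demonstrates the mechanism, not a real defense. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
#include <sys/types.h>

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    mode_t mode = 0;

    if (!real_open)   /* look up the real open() the first time through */
        real_open = (int (*)(const char *, int, ...))
                        dlsym(RTLD_NEXT, "open");

    /* Refuse any writable open of a libc image. */
    if (strstr(path, "libc") && (flags & (O_WRONLY | O_RDWR))) {
        errno = EPERM;
        return -1;
    }
    if (flags & O_CREAT) {        /* open() is variadic when creating */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }
    return real_open(path, flags, mode);
}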
Re:Perhaps systems which undo intrusions? (Score:2)
One way to do this is actually make a checklist of what one does in order not to get caught when gaining root access on a system:
Destroying log files and wtmp, disabling login services (telnet, rsh, rlogin, rexec, ssh) and serial/console ports afte
Re:Perhaps systems which undo intrusions? (Score:2)
I don't think User Mode Linux is "there" yet, but this scenario is the kind of thing I'm thinking of:
Intruder exploits yet another overflow in wu-ftpd and fires up a shell. At this point, the IDS has determined that wu-ftpd is acting erratically and forks the system: the original was actually a UML instance running on a host with a bit of ipmasq/conntrack glue. A new UML is spawned, all the services restart within i
Re:Perhaps systems which undo intrusions? (Score:2)
Re:Perhaps systems which undo intrusions? (Score:2)
Imagine something like VMWare with "selective rollback". Because of the combinatorics, I'm not sure it's entirely possible (which is not to say that it's not partially possible), but it's certainly an idea worthy of pursuit in some form...
C//
Re:Perhaps systems which undo intrusions? (Score:2)
Something like this would work fine as long as the intruder didn't change anything that was being changed by a normal process. If the intruder started writing or removing CC numbers from a CC list that was being updated (as if I'd keep them in plain text...), then a rollback would have to be very very crafty to identify "bad" changes vs. "good" changes (hence the idea of custom write(
Re:Perhaps systems which undo intrusions? (Score:3, Insightful)
You don't need to know in advance the vulnerability to figure out how someone got in. If Apache suddenly spawns a shell, well, that is a pretty good hint right there (or that some nutter is using a shellscript as a CGI, but they deserve getting false negatives in that case).
Plus, if you combine this with packet data logging (probably with a protocol level filtering tool,
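The shell-spawn heuristic is easy to prototype by walking /proc, as in this sketch (Linux /proc layout assumed; matching on process names is obviously simplistic, and a real monitor would check whether the parent is the web server):

/* Flag shells whose parent might be the web server. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    DIR *d = opendir("/proc");
    struct dirent *e;

    if (!d)
        return 1;
    while ((e = readdir(d)) != NULL) {
        char path[300], comm[64], state;
        int pid, ppid;
        FILE *f;

        if (e->d_name[0] < '0' || e->d_name[0] > '9')
            continue;                 /* not a process directory */
        snprintf(path, sizeof path, "/proc/%s/stat", e->d_name);
        if ((f = fopen(path, "r")) == NULL)
            continue;
        /* /proc/PID/stat starts: pid (comm) state ppid ... */
        if (fscanf(f, "%d (%63[^)]) %c %d",
                   &pid, comm, &state, &ppid) == 4 &&
            (strcmp(comm, "sh") == 0 || strcmp(comm, "bash") == 0)) {
            /* Here you'd check whether ppid's comm is httpd. */
            printf("shell pid %d (parent %d) -- worth a look\n", pid, ppid);
        }
        fclose(f);
    }
    closedir(d);
    return 0;
}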
Re:Perhaps systems which undo intrusions? (Score:2)
What's so unusual about this? (Score:5, Insightful)
Seriously. The implementations are new, but the concept goes back to the dawn of interconnected computers, maybe further. Back in the Iron Age, you used different passwords on different systems specifically so that, if one of the systems were penetrated and your password compromised, all the other systems you had access to would not be immediately compromised as well. That was a limited form of intrusion tolerance, forcing the intruder to start over from scratch on every system in the network.
Example of intrusion tolerant system (Score:5, Funny)
Re:Example of intrusion tolerant system (Score:2)
You owe me a cup of coffee, a shirt, and a keyboard.
And to think that they just got the "Homeland Security" contract.
Maybe they are, with the exception that... (Score:2)
(reboot)
Okay, no intruuuud...BSOD
(reboot)
Good morning Dave! Where would you liiik.....e
Actually, considering that this is DARPA, maybe this is a good thing. Maybe they will host the next war, and no one will come! Really!
[Please note: I have the right to say this. I have/had a dual boot system, and my VFAT partition has finally corrupted beyond repair.
Re:What's so unusual about this? (Score:3, Insightful)
That was not so much tolerance, as it was the only protection, and it still applies, except for idiot admins who use the same password over and over.
This is more of an internal "protect the data stream" kind of thing.
Re:What's so unusual about this? (Score:2)
In newer machines, there is even a shadow file so that even if he gets user-level access, he cann
Re:What's so unusual about this? (Score:2)
These are two aspects of the same thing. Hashed passwords and shadow password files are layers to make it harder to compromise everybody on a single machine once an intruder's got a foothold on that machine. Avoiding shared passwords makes it harder to gain footholds on other machines in the network once an intruder's compromised that first machine in the network. Basic defense in depth, and it's what the most popular systems today seem bent on eliminating.
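For reference, the layer hashed passwords add is that the system never stores the password itself, only crypt(password, salt) -- roughly as in this sketch (password and salt are examples; link with -lcrypt on glibc):

/* Enrollment keeps only the salted hash; login re-hashes the guess. */
#define _XOPEN_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char stored[64];

    /* "Enrollment": only the salted hash is kept, never the password. */
    strncpy(stored, crypt("hunter2", "ab"), sizeof stored - 1);
    stored[sizeof stored - 1] = '\0';

    /* Login attempt: the stored string carries its own salt. */
    const char *guess = "hunter2";
    if (strcmp(crypt(guess, stored), stored) == 0)
        puts("password accepted");
    else
        puts("password rejected");
    return 0;
}

A stolen password file then still has to be brute forced, which is the extra layer the parent post is describing.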
interesting, but not really a new concept (Score:5, Interesting)
Re:interesting, but not really a new concept (Score:3, Insightful)
There is a parallel here; most large corporations have given up on the virus war, and have implemented "Virus Management" strategies.
They have basically said, "OK, we can't keep them out, so we'll just let them in a little bit."
So now we're doing the same thing on the security front. I must admit, I'm not all that surprised.
The cynic in me says, "That's what you get for outsourcing all those tech jobs."
Re:interesting, but not really a new concept (Score:4, Insightful)
They know no system is totally secure - especially when your adversary has spies, troops, and bombs. You expect enemy signals intelligence, broken codes, code-books captured in combat, spies in your data centers, secure comm channels destroyed.
There is no one line/security barrier: the only rational approach is defense in depth, with monitoring of problems, and the ability to route around compromised and destroyed systems.
Re:interesting, but not really a new concept (Score:2)
Perhaps, but none of the commonly used crypto today came from the military, because the military doesn't want to share their crypto capabilities or research with the public. Think about DES, RSA, Diffie-Hellman, AES, etc.
Absolutely true, and that's why I'm saying that this so-c
Re:interesting, but not really a new concept (Score:2)
My point was that having multiple levels of security (front do
Prior Art? (Score:5, Funny)
Hasn't this always been the strategy of Windows? Now if they could just finish implementing that second part...
Same as in many materials uses (Score:2, Insightful)
perhaps I need coffee
Jeepers ... (Score:3, Funny)
Why does it have to be like this? (Score:3, Insightful)
You get into trouble when you start piling on feature after feature after feature. Is all of that really needed?
Denial of Service is, unfortunately, harder to deal with. But when you have your own network, it's much easier to handle. Dependency on the Internet still creates a problem (the majority of US government data communication is done via the Internet). It comes down to a cost-benefit analysis - is it worth building a totally separate network? For the military, I'd say yes.
Re:Why does it have to be like this? (Score:2)
This assumes that, just by making it separate, it will fail to be vulnerable. With a small, highly restricted network this would likely be true. The military network is huge and I think it is naive to assume that it could not be compromised by a determined attacker.
Re:Why does it have to be like this? (Score:2)
Neither has AmigaOS, ProDOS or DR-DOS.
Really, you should listen to the trolls more often.
Just My .02 USD (Score:5, Insightful)
I would prefer to consider (at least from my own philosophical viewpoint) that you can construct systems with defined patterns of behavior, even when "malware" is introduced.
From one of the links referenced above:
Successive levels in the hierarchy are linked by refinement mappings that can be shown to preserve properties of interest. This project will apply this technology to intrusion tolerance properties.
This harkens back to enforcement mechanisms (Biba Integrity Model, No Read Up, No Write down policies, Models for descriptions of multi-level secure behavior, etc...). (Aside: Amoroso's book is an excellent reference)
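For the record, "no read up / no write down" is the Bell-LaPadula confidentiality model (Biba is its integrity-flavored dual). The checks themselves are almost trivially small, which is part of what makes them amenable to proof -- a toy reference monitor, with illustrative levels:

/* Bell-LaPadula access checks: no read up, no write down. */
#include <stdio.h>

enum level { UNCLASS = 0, SECRET = 1, TOPSECRET = 2 };

/* Simple security property: read only at or below your own level. */
int may_read(enum level subj, enum level obj)  { return subj >= obj; }

/* *-property: write only at or above your own level (no write down). */
int may_write(enum level subj, enum level obj) { return subj <= obj; }

int main(void)
{
    printf("SECRET reads TOPSECRET:  %s\n",
           may_read(SECRET, TOPSECRET) ? "allow" : "deny");   /* deny  */
    printf("SECRET writes UNCLASS:   %s\n",
           may_write(SECRET, UNCLASS) ? "allow" : "deny");    /* deny  */
    printf("SECRET writes TOPSECRET: %s\n",
           may_write(SECRET, TOPSECRET) ? "allow" : "deny");  /* allow */
    return 0;
}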
What this alone tells me (I didn't read all the blurbs, articles, and briefings), is that we are discussing mappings (mathematical functions), and properties (which can be mathematically tested for by use of a logic or algebraic system).
At a glance, I am thinking of some of the issues in formal methods, proven-secure-O/S kernels, and other high-reliability software engineering methods for [secure] systems.
I like the idea that mathematical theorem provers can be applied to any system so defined.
Some basic issues do arise for practical application:
- Theorem-proving aspects mean very precise use of functional requirements and mathematical specification for system behaviors. (Also, special talent and additional manpower are necessary, and misapplication of the tools, or human error introduced in the test process, can subvert the efforts.)
- This should be applied (I believe) to systems-of-systems and their behaviors. The systems that your system interacts with would have to have had similarly rigorous analysis and design.
- There is (I believe) a trend in military computing towards commercial, and less custom, software development. Long-term, how will the actual development of such systems be funded (beyond the initial R&D stage)?
- The use of analysis of pre- and post-conditions in the executing environment (to ensure that violations of the underlying security policy are not permitted) is not a new concept. While I am not saying that this is an intrinsically necessary mechanism for these methods, most current systems lack such an approach, and there may be fundamental computer security issues present by the nature of the software development environment. If these methods are used, it is still highly desirable to design systems with security in mind regarding their handling of all data, traffic, and O/S vulnerability issues.
I only took a brief look at the material, but these are some thoughts. I also think that the effort itself is very worthwhile, and potentially of value. Also, looking at Dr. Lulu's credentials, there is no naiveté in his software background; the basic tenets can't just be shrugged off.
Sam Nitzberg
sam@iamsam.com
http://www.iamsam.com
The way it should be (Score:5, Interesting)
However, I got sporadic complaints about images not sizing properly, even though I initially found nothing wrong.
However, what had happened is that a critical piece of software (ImageMagick) wasn't loaded on the new server - but since all the functions that resized images had numerous fallbacks (such as using expired cached copies, and failing over to full-size display, which even then didn't always cause a problem since images were frequently resized with HTML tags), the failure stayed mostly invisible.
In any event, this (I think) demonstrates the idea - there were several layers of failure that had to happen before images didn't show - and everything kept more-or-less rolling for 2 weeks.
Similar idea to another group (Score:5, Interesting)
There was related work like this done back in the day at AT&T, but Rinard and Demsky have introduced automatic repair which, as you might imagine (like this security idea), is scary to some people. Imagine a program that would have crashed due to some bug or malicious data mangling, now kept running by a tool... But the tool chooses the repair actions based on heuristics and specifications by the developer... takes some getting used to!
All of this stuff falls under fault tolerance... it's pretty crazy to look at what the AT&T/Lucent phone switches do when they fail... they try a million different things to keep operating no matter what happens...
The next big thing? (Score:3, Funny)
Suit 1: We've got 10,000 uberhumungo servers running Microsoft 2003 Humungo Server Edition, with b2b backend, integrated transaction safe, load-balanced Humungo Edition IIS.
Suit 2: Well, we have all of that, plus Intrusion Tolerance.
Suit 1: Oh, baby. Can I merge with you?
tolerance and love (Score:2, Funny)
penetrated in advance (Score:2, Interesting)
Sounds like an old thing (Score:2)
You should be able to access data and use it, but the data should not be able to access your computer.
The problem is that many closed-source software programs have backdoors and basic coding flaws. If you understand what a program does (open source), then you can know it won't cheat you.
Nothing New... (Score:3, Funny)
New HCC RAM design for this kind of application (Score:2, Funny)
Reference model (Score:2)
OMG! We've been assimilated. Everybody listen AD2ô8 yç 48
[Carrier lost]
Qmail? (Score:2)
Oh wait, I've just described qmail.
Excellent (Score:2)
Kind of like the missus, really...
While we're at it... (Score:2)
what?!? (Score:4, Informative)
From the MIT article, it sounds like some intelligence will shut some non-critical services down so that the core still runs, but isn't that what Intrusion Prevention is supposed to do? When you're talking military use, I expect the important areas to be surrounded by honeypots as part of the Intrusion Detection and Prevention.
Re:what?!? (Score:3, Interesting)
It's the same sort of thinking here. We'd like to think that we can make intrusi
Re:what?!? (Score:2)
Re:what?!? (Score:2)
Charlie is listening... (Score:2, Interesting)
About damn time. (Score:2, Insightful)
'Bout time the question was changed from "how are you going to keep them out" to "what are you going to do when they get in"
There are dangers here (Score:4, Insightful)
Doubting thomases, exit (-1) (Score:4, Interesting)
It's the same with well designed programs -- there was a slashdot article recently on QNX -- that is designed to be fault tolerant -- and it works. Only when you design huge monolithic code monsters where a fault anywhere in the monster means kill the whole beast do you have such frail computer systems.
Imagine human skin hacked by a scrape on some sharp object. If the first decision was to instantly kill the whole host, there wouldn't be too many humans -- can you say *stoopid* design?
Sure, there are some things that can't be healed, but the majority of us have had scrapes and bruises growing up and are still quite healthy -- and even where the car body may have permanent damage, the engine/CPU (the person's brain) is often quite capable.
Next time you think fault tolerant or intrusion tolerant systems are foolish and impossible, think "Stephen Hawking", or "Einstein" (not able to complete High School). I had a *stoopid* manager who thought that making system-audit efficient enough to be left on by default in all but the most demanding compute environments was a waste of time -- that it was *impossible* to build real-time intrusion detection systems.
Of course people thought it was impossible to circumnavigate the globe (you'd fall off the edge), impossible to fly, impossible to go faster than the speed of sound, etc.
Every time someone talks about how something is "impossible", you have to realize they are consciously or unconsciously thinking inside a box. To do the impossible requires something that *isn't* engineering. It isn't manageable. It can't be driven by a schedule. You have to *think outside the box*. You have to be creative. By definition, engineering isn't creative. Engineering is taking known principles, applying them in some set of known circumstances, and coming out with another "widget" that looks similar to a previous widget.
Most large companies breed conformity and uniformity. While this type of engineering is great for reproducing Hondas on an assembly line, it greatly hinders thinking 'out of the box' (the box of conformity and uniformity that the company asserts is "necessary" for their business). Then they wonder why what was once a 'wonder company' is now a 'dinosaur company'.
Creative people are often *not* group players -- if they had a group mentality, then how can they be expected to come up with any idea that is radically different from the rest of the group?
Creative people tend more toward not having exceptional social graces (think of the novel ideas of unix, or Multics). These were not done by suit-and-tie, management "yes"-men. Even Linux was started by 1 person -- who has not always been known to be the social charmer, even tempered type -- and I certainly don't get the impression that everything is done by group consensus.
But already in linux, there is a fair amount of doing things the 'linux' way, certain people to please, various people who get say-so or veto powers (or are believed to have such) beyond Linus.
People familiar with Microsoft can remember when even the simplest application crash would bring down the entire system. Unix people would generally laugh at this. But now we see those who think a single penetration should cause the whole system to be brought down. Maybe it will require a next-generation OS (dunno enough about QNX to know if it might qualify), but there are other OS's that have better security records than linux (BSD, OS/X (I've heard)).
Linux, laughably, doesn't even have CAPP certification. Sure, there are a lot more Microsoft vulnerabilities every
Re:Doubting thomases, exit (-1) (Score:2)
Trust Level (Score:2)
Rus
OT: Please use appropriate terminology. (Score:2)
I know I'll get modded down for this, but I really think that SlashDotters should not be making posts about those evil "hackers"... I am a hacker. I don't break into systems.
(/rms)
GPL'ed intrusion resistance (Score:2, Informative)
Shameless plug: Askemos [softeyes.net] is a GPL'ed incorruptible and intrusion-resistant operating system (or application server for that matter).
DOH! UNIX is "Intrusion Tolerant" (Score:2)
The OS has to have sufficient isolation that this luser only damages her own files and processes.
IIRC, FreeBSD even has a Write-Once "SECURE" flag that locks even root out from some functions.
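That's the chflags(2) immutable flag. A sketch of setting it (the target file is just an example; the flag only becomes root-proof once kern.securelevel is raised above 0, after which clearing it takes a reboot to single-user mode):

/* FreeBSD: mark a file system-immutable. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *target = "/etc/master.passwd";  /* example target */

    if (chflags(target, SF_IMMUTABLE) != 0) {
        perror("chflags");
        return 1;
    }
    printf("%s is now immutable\n", target);
    return 0;
}

The shell equivalent is "chflags schg file".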
byzantine fault tolerance (Score:3, Informative)
Enough theory - try practice (Score:2)
Intrusion Tolerance is already being practiced, although another term for it is defense in depth.
Another poster has described how defense in depth and fault tolerance apply to firewalls, network infrastructure, etc. I'd like to mention host-based measures to slow an attacker down and limit the damage they can do.
One of the oldest host-based D-i-D measures is chroot jails. A 'chroot' in Unix means that an application is run with access to only a limited subset of the filesystem, one which does not cont
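The preview is cut off, but the classic chroot recipe goes roughly like this sketch (jail path, uid/gid, and daemon name are invented; note that privileges must be dropped after the chroot, since a root process can escape one, and a real jail also needs a populated filesystem and care with open descriptors):

/* Minimal chroot jail: confine a child to a subtree, then drop root. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *jail = "/var/jail/ftpd";   /* assumed jail root */

    if (chdir(jail) != 0 || chroot(jail) != 0) {
        perror("chroot");
        return 1;
    }
    /* Drop privileges immediately: the order matters. */
    if (setgid(65534) != 0 || setuid(65534) != 0) {
        perror("drop privileges");
        return 1;
    }
    execl("/bin/ftpd", "ftpd", (char *)NULL);  /* path inside the jail */
    perror("execl");
    return 1;
}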
Re:Sad to get old (Score:4, Insightful)
In 90% of the cases, pulling the plug is the best thing to do. But take eBay, for example: $1.2 billion in revenue relying entirely on their systems. That works out to roughly $2,283 every minute ($1.2 billion / 525,600 minutes in a year). So in that perspective, could you really tell someone to just simply shut off the site while you drive back to the office to fix it?
Re:Article is FLAWED! No Mac OS (9.x, 8.x) hack ev (Score:2, Funny)
This coming from someone who has been begging his boss for a Mac laptop for 2 months. Mini-Me sold it; I want one.