Security

Intrusion Tolerance - Security's Next Big Thing?

An anonymous reader writes "DARPA's OASIS program consists of more than 20 research projects in intrusion-tolerant systems. The basic idea is to concede that systems will be penetrated by malware and hackers, but to keep operating anyway. The projects take a wide variety of technical approaches to providing intrusion tolerance. MIT's Automatic Trust Management uses models of trust to choose from a variety of ways to achieve system goals; Duke/MCNC's SITAR (Scalable Intrusion Tolerant Architecture) adapts tricks from fault-tolerant systems and distributes decision-making; BBN-Illinois-Maryland-Boeing's ITUA employs unpredictable adaptation. Shutting down the military while waging war is not an option, but the idea of continuing to operate critical defense systems even after known penetration by hostile hackers or damaging worms will take some getting used to."
This discussion has been archived. No new comments can be posted.

  • Biological Systems (Score:5, Insightful)

    by PktLoss ( 647983 ) on Wednesday July 16, 2003 @07:56PM (#6457774) Homepage Journal
    I think it is great that something like this is being looked at. Every biological system on the planet works on the same principle: yes, the system will be attacked; keep functioning and attempt to regain control.

    I think an interesting option for powerful machines would be to 'fall on the sword' if complete failure were imminent.
    • by Atario ( 673917 ) on Wednesday July 16, 2003 @08:07PM (#6457838) Homepage
      ...this new mantra of security.

      I must not fear. Fear is the mind-killer. Fear is the little death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past, I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.

      -- The Bene Gesserit Litany of Fear
      Dune by Frank Herbert
    • by dekashizl ( 663505 ) on Wednesday July 16, 2003 @08:57PM (#6458048) Journal
      Every biological system on the planet works on the same principle: yes, the system will be attacked; keep functioning and attempt to regain control.
      I don't know about you, but my neck hairs bristle at the shift of computer systems into the biological (model) realm. I am well aware that biological systems function well in the face of a variety of offenses.

      But they (biological systems) also autonomously evolve, compete strongly, and often get wiped out. And when they do too well, they have the tendency to consume all resources, pollute, and then die out or reinvent themselves.

      We (humans) are a biological animal. Let's be careful building something that will compete with us. The potential dangers of this scenario have been played out in Terminator and countless other sci-fi epics. Self-aware entities fight for their survival and the survival of their species/genes.

      You might say "but we control the technology", but in fact the next generation of computers will control us. Digital Rights Management (DRM) is in effect our surrendering of our rights to machines. As more of our survival becomes dependent on machines (as has been increasing at an exponential rate recently), this means our rights of survival are out of our hands. Think of DRM as the Declaration of Independence, but in reverse -- well, we had a nice run there for a couple hundred years! But I'd rather be a heavily-taxed under-represented colonist of a foreign empire than a farm animal to machine masters any day.

      I don't mean to rant tinfoil hat conspiracy nonsense, and it's important to secure our systems from collapse, but let's not be so quick to push ourselves toward slavery just yet. I think this (self-aware networks) is an area that is as important as nano/biotech to watch out for, and it's far more likely that we become totally enslaved to technology than that we all get turned into gray goo.
      • That is the exact reason I'm going into this area of research. I think it's so incredibly likely that computers will achieve a human-type (and eventually superhuman) level of intelligence that I plan to be a part of designing it.

        I figure I'd better have a say in what's going to happen in my life regarding technology. I imagine humans WILL become obsolete, so the best we can do is try not to make it painful for ourselves when it happens.
      • But I'd rather be a heavily-taxed under-represented colonist of a foreign empire than a farm animal to machine masters any day.
        Well I, for one, welcome our new computer overlords.

        Dintcha just know that was coming? :o)

      • by ralphclark ( 11346 ) on Thursday July 17, 2003 @06:59AM (#6459894) Journal
        You can't avoid the inevitable.

        Our biological forms are too fragile to survive anywhere long term except here on Earth. Even if we found a way to terraform other worlds, we would still need intelligent machines to do it for us and then to get us there.

        And as many futurologists have pointed out, if we do pursue such technology, there *will* come a point in the next few decades when our creations' intelligence finally surpasses our own.

        So what are you going to do? Crawl back to your cave, maybe even give up using fire because of the risk of where it might lead? We need to meet this challenge head on; prepare for it, make room for it in our plans.

        I think what it boils down to is this: will our creations tolerate us, can we co-exist? I think the answer lies here: if we ourselves are moral then so will be our children and we will live in peace. If we are not, though, and we create children without any moral spirit, well yes, then as a biological species we're doomed.
        • A lot of "futurologists" pointed out exactly what you're saying about this time period twenty years ago.

          They thought that by now (read: the beginning of the 21st century) we'd have intelligent machines that surpass the intelligence of humans and which help (or perhaps hinder) our thinking processes.

          Let me give you a clue. Our "fragile" forms are a lot less fragile than our computers or our machines. Sometimes we armor plate them to survive a teeny bit longer than we would if naked in a harsh environment.
          • Well that's how things are today, all right.

            But the technology we have today was unforeseen by previous generations. Just think about the internet for example. Asimov came closest I think, with his "Multivac" - but even he thought it was much farther off.

            So the technology may yet appear in our own lifetimes. Once the right component density is available (only a matter of time, now) it could take just one breakthrough in AI systems design to change everything.

            But if you have a principled objection to the
            • Asimov came closest I think, with his "Multivac" - but even he thought it was much farther off.

              I think I see your problem. You're taking your hints from science fiction authors rather than the science itself. He also predicted the nature of AI, though nothing real has come close.

              Predicting the internet isn't a big stretch by comparison. The difference in the amount of knowledge needed to do it is like the difference between drinking a coke and drinking all the water in the ocean. We only begin to
              • I think I see your problem. You're taking your hints from science fiction authors rather than the science itself.

                *sigh*. I don't have a problem, and you took this out of context. I only mentioned science fiction in the context of what people are capable of imagining versus what actually happens, to illustrate that what you think is plausible now is far short of what might actually appear in a few short decades.

                Will respond to the rest later, gotta be somewhere else now.

              • Making something artificial that is as robust as a living being is much harder

                Obviously, or we'd already have done so. Many things that were once thought impossible are now commonplace, however difficult.

                the "brains" of the robots we can create are much more fragile than we are. We just give them weatherproof, inflexibile coatings before we turn them off and send them into space. Also keep in mind that these robots are made to do less. This inflexibilty means that less can break. This is true in th

      • Hrm. I think it's an opportunity. It is our destiny that our machines replace us; once we have machines that are better at doing the general purpose things we are, why not just become our machines? It's the next logical step in our evolution.

        Just imagine what it would be like if we could abandon our fragile, biological bodies for a self-repairing machine body:
        - Space travel: life support greatly simplified. Just need an energy source and sufficient radiation shielding for the components which will already
    • by ceep ( 527600 ) on Wednesday July 16, 2003 @08:59PM (#6458053) Homepage
      The biological model is an interesting parallel, but we should also look at the failings of the biological model -- within your body, you are still a big monoculture, so once whatever foreign matter is in, it won't encounter anything radically new.

      Intrusion tolerance, IMO, is just a subset of fault tolerance -- something failed to let the intrusion happen. So how do you tolerate that sort of fault?

      1. reduce interdependency and single points of failure. If everything relies on the firewall box, and the firewall box goes down, then everything is down, even if everything else wasn't compromised. This is a failing of the biological model -- there are lots of lines of defense, but what happens when something goes straight for the heart? The brain? The spleen? A fault-tolerant system can't have a single point of failure.
      2. just say "no" to monoculture. This should be a given in redundancy and fault tolerance, but often isn't. So your firewall is a linux box, and it gets hacked, but that's OK because you have another firewall. Oh wait, it's a linux box too, so it will fail in the same manner. This is not good intrusion tolerance, because your intruder can duplicate his or her (or its) past actions -- more of the same probably won't even slow him/her/it down much.
      3. spread stuff around. This usually happens anyway because of load balancing, but couple this with #2 (reducing monoculture) and you'll really slow down an attacker, especially if you can make the separations transparent from the outside.
      4. be vigilant! There's no replacement for the human element; hire somebody (or a team of somebodies) to do nothing but spend all day logged in to critical machines and make sure that nothing out of the ordinary happens. This is another failing of many security models -- people think that they can replace people with machines, but machines are easy to fool -- well-trained people are harder to fool, and the combination of the two (since they are fooled in different ways, see #2) is a lot harder to get around.

      A good fault-tolerant system will have multiple layers that fail in totally different ways. This will thwart most automated attacks, since they tend to exploit a single, known vulnerability and won't be equipped to respond to another, totally different layer. If the layers are different enough (say a *nix-based firewall behind a Windows-based firewall), most attackers will be so thrown off that they will (at the very least) have to spend a significant amount of time trying to figure out what to do next. This buys you time to realize what's going on and stop it. Couple this with a very low interdependence, and an attacker can spend a lot of time breaking in to something that may be of little or no use to them.

      Intrusion tolerance? You betcha -- this acknowledges the fact that there's no such thing as failsafe security, but takes advantage of a wide variety of options, which won't fail similarly, to slow down attacks and give administrators time to see what's going on and stop it.

      Isn't this all obvious though? It seems like it when you read it, but the 4 concepts noted above are very often ignored (to varying degrees). Especially #2; this is the hardest because it means hiring a *nix geek and a Windows geek and a Cisco geek and maybe a couple of other ones as well, and no one wants to spend that kind of money. So instead, they get a guy or gal who only knows one system, so everything lives or dies on the failings of that system. Or even worse, they hire a whole team of guys and/or gals that all agree to use the same platform, for simplicity's sake. Bad! Bad! Remember the scale:

      More Secure...................Less Secure
      _________________________________________
      Less Convenient...........More Convenient


      Eh. Talking's easy...

      --
      eep
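
      A minimal Python sketch of the "no monoculture" quorum idea, under the assumption that you have several independently implemented health checks; the three check functions here are hypothetical stand-ins, not anyone's real API:

        # Trust the perimeter only while a majority of *diverse*,
        # independently implemented checks agree that it is healthy.
        def unix_firewall_ok():      # hypothetical probe of a *nix packet filter
            return True

        def windows_firewall_ok():   # hypothetical probe of a Windows-based filter
            return True

        def operator_signoff_ok():   # stand-in for the human element (point 4)
            return True

        CHECKS = [unix_firewall_ok, windows_firewall_ok, operator_signoff_ok]

        def perimeter_trusted(checks=CHECKS, needed=2):
            votes = sum(1 for check in checks if check())
            return votes >= needed

        if not perimeter_trusted():
            print("Perimeter suspect: fail over and page the humans.")

      Because the checks fail in different ways (point 2), a single exploit is unlikely to fool a majority of them.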
      • "You are still a big monoculture, so once whatever foreign matter is in, it ..."
        No, I'm not. I have lots of various kinds of cells, arranged in tissues and organs, not a single culture. And if they need a culture, it can matter where they get it; it's not all the same. A few supporting reasons beyond textbooks, school, etc.: 1) Some diseases only affect certain tissues. 2) Organ transplants work. One of the failings of the biological model is extending it too far, to the point where it no longe
    • Every biological system on the planet works on the same principle: yes, the system will be attacked; keep functioning and attempt to regain control.

      Yes and no. An organism will sacrifice individual cells so that the rest may live on.

      Is the machine the organism or is it just a cell?
    • by corebreech ( 469871 ) on Wednesday July 16, 2003 @11:08PM (#6458641) Journal
      It's a good analogy but it doesn't apply to individual machines.

      Think of your computer as a cell, and the network as the biological system.

      The network can continue running when infected, but not the cell. When the cell is infected, it dies (or worse).

      Ergo, I think intrusion tolerance is a meritless approach.

      I think an interesting option for powerful machines would be to 'fall on the sword' if complete failure were imminent.
      This idea I like. Call this intrusion intolerance. Require the system to meet a comprehensive suite of invariant conditions, or cease operating. A much more practical and effective solution.
      • It's certainly got to be better than the bend-over-and-take-it-like-a-machine method that this article seems to follow. Somehow I don't think a system is going to get much work done while it's being raped by some worm, if only for bandwidth reasons. Code signing is the only way to go for high-security systems. Get a mathematically verifiable hashing algorithm that would require exponential work to crack, and require multiple keys from multiple people in charge to actually sign an executable before
      • Ergo, I think intrusion tolerance is a meritless approach.
        The way you get the larger system to be intrusion tolerant is to make the subsystems intrusion intolerant. The time to 'fall on the sword' isn't when complete failure is imminent; it's when the subsystem is working in the wrong direction.
        You can build reliable systems from unreliable components. Unfortunately the norm seems to be building unreliable systems from reliable components.

  • by Anonymous Coward on Wednesday July 16, 2003 @07:57PM (#6457777)
    What to do when penetrated

    1) Remove all sources of power.
    2) Incinerate the hard disk, RAM, motherboard and, most importantly, the sysadmin who was in charge of the box.
    3) Bury the ashes in a safe concrete cavern; do not touch for 1000 years.
  • by lingqi ( 577227 ) on Wednesday July 16, 2003 @07:57PM (#6457779) Journal
    upon hearing this, my first thought was the chatterbox prostitute from Bruce Willis's "Last Man Standing."

    Somebody drag my mind out of the gutter please!
  • Obvious Question... (Score:4, Interesting)

    by Anonymous Coward on Wednesday July 16, 2003 @07:58PM (#6457788)
    The obvious question is: how did the hacker get there? These computers shouldn't even be connected to the internet. And if they're not, then there are more important things to worry about, such as why an agent of a foreign military is operating on restricted computers.
    • You're talking about the aftermath and cleanup of an intrusion, which is also very important. But the idea behind these systems is that they are serving critical functions that CANNOT be turned off, such as in a hospital or during combat. Keep functioning and running, and let the humans worry about the cleanup.
  • Analogy (Score:5, Interesting)

    by unixwin ( 569813 ) on Wednesday July 16, 2003 @08:02PM (#6457813) Homepage
    What has to be understood is that a compromised system, as part of a larger group of compromised and non-compromised systems, can have a lot of undesirable consequences. In a corporate network of, say, 150 servers, a couple of broken-into boxes serving as open relays, FTP/warez sites, or just sniffing around don't necessarily have to bring the whole company down for a day; pulling the plug on them is always an option.

    However, if your servers/farms are crunching numbers for satellite recon or running a battlefield communication center, then you're not quite sure how it would behave. A lot of modelling and discussion will go on about this, but some of these problems (of data consistency) have already been handled in computer science... so it's not that big a deal.
    It will, I guess, be like one of those "decisions" a battlefield commander makes: how much he trusts the intel he is getting, how he wishes to proceed, and whether the risks are acceptable.
    Similarly, the network/systems people will be making choices about whether they can live with an intrusion, and how best to handle it without stopping the grid.
    • When you look at the whole idea of a screened subnet where you have your more exposed public servers in a spot where intrusions cannot easily spread to your internal private network, this is indicative of some level of intrusion tolerance to the network as a whole (not the individual computers though).

      When I started writing Hermes (see my sig), one of the major issues I dealt with was security and intrusion tolerance. The question is -- given that this would be used to access confidential customer informat
    • It will I guess be like one of those "decisions" a battlefield commander takes, of how much he trusts the intel he is getting...

      That's why the smart commander avoids a hardware monoculture through the use of AMD boxen as well. In addition, fast AMD processors may be used in combat as incendiary devices.

  • by dtolton ( 162216 ) * on Wednesday July 16, 2003 @08:03PM (#6457816) Homepage
    Shutting down the military while waging war is not an option, but the idea of continuing to operate critical defense systems even after known penetration by hostile hackers or damaging worms will take some getting used to.

    Do they think the military goes home when someone gets killed or they find out there might be a spy? That's why our military security is completely segmented. The whole concept of a need-to-know basis is the understanding that information will fall into the wrong hands; you just want to minimize how much information can fall into the wrong hands when someone or something is compromised. That computers, especially military computers, would follow this highly pragmatic principle shouldn't come as much of a surprise.
    • by Picass0 ( 147474 ) on Wednesday July 16, 2003 @08:48PM (#6458014) Homepage Journal
      Perhaps the approach should be to throw so many false leads at the attacker that they play their hand before they do any real damage.

      There is an old philosophy that you don't need to create a perfect lie. You only need to tell so many lies that the truth can no longer be seen.

      A system of honeypots, firewalls, and harmless paths into a network would allow a hacker to be studied, traced, and combated (counter-hacked?).

      The law is becoming an obstacle to such an approach. There is legal speculation that honeypots constitute a form of wiretapping. Bad laws are going to make it very difficult to be a white hat in a few years.
    • by sn00ker ( 172521 ) on Wednesday July 16, 2003 @09:44PM (#6458274) Homepage
      That's why our military security is completely segmented. The whole concept of a need-to-know basis
      And, as with the military, if you compromise high enough up the chain you can do a WHOLE lot of damage. Senior military officials don't just have military drivers because of their rank - The drivers also have guns.
      There's a reason former US presidents get USSS protection for quite some time (now 10 years, formerly life) after leaving office - What they know remains highly prejudicial to national security after they go.

      The problem with computers is that you can force them to reveal everything they know without leaving them catatonic with drugs or physically destroyed - In theory, nobody would ever know.
      This biological concept of security needs to use the full biological model of sacrificial guards. The body repels invaders by sacrificing cells to attack the invader. A computer that merrily allows an intruder to work its way back through the network until they can read everything is no use.
      Maybe create switches that have fusible links on the network ports that can be destroyed with a command from within the network? Make the links cheap and easy to replace, so that it's not a major imposition to fix if someone does it maliciously or accidentally. A physically "down" network port is absolute security against a remote attacker, particularly when a computer only has a single NIC.
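
      A hedged sketch of the software analogue of that fusible link on a Linux host, using the standard iproute2 command; the interface name is a placeholder:

        # Administratively down the port on command -- the software version
        # of blowing the link. Bringing it back requires deliberate local action.
        import subprocess

        def sever_port(interface="eth0"):
            subprocess.run(["ip", "link", "set", "dev", interface, "down"],
                           check=True)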

      • by Daetrin ( 576516 ) on Wednesday July 16, 2003 @10:17PM (#6458447)
        This biological concept of security needs to use the full biological model of sacrificial guards. The body repels invaders by sacrificing cells to attack the invader. A computer that merrily allows an intruder to work its way back through the network until they can read everything is no use.

        I don't think the idea is that the computers will just ignore intrusions. At the very least, they'll notify a human operator that an intrusion has taken place while trying to continue normal functioning. If possible, they will probably try to eliminate the intrusion.

        However, the first priority is to continue its primary functions. The military can't afford to have its communication grid or its flight control or other items of such a crucial nature shut down in the middle of combat, not unless there's a backup ready to take over. (And do you trust a compromised machine to decide whether or not a backup system is available?)

        So the system continues to do its best to carry out its tasks while a human operator decides when and if the machine can be shut down and another swapped in to take its place, and coordinates any possible counter-hacking operations.

        If you want to fall back to a cold war/MAD mentality, here's a worst-case scenario for you. Say that twenty years from now China launches an unexpected nuclear ICBM assault against the US. At the same time, Chinese hackers attempt to infiltrate every known computer in NORAD and any SDI systems. Would you want the computers to automatically destroy themselves, thereby eliminating any chance of a timely defense or counterattack, or assume that the hackers haven't got full access and keep the computers going as long as possible, since the other alternative is death?

        And if you're going for a MAD strategy, which of those two systems would you want your adversaries to know that you have?

        • You'd probably get an Insightful mod from me, if I had mod points and hadn't already posted, but:

          If you want to fall back to a cold war/MAD mentality, here's a worst-case scenario for you. Say that twenty years from now China launches an unexpected nuclear ICBM assault against the US. At the same time, Chinese hackers attempt to infiltrate every known computer in NORAD and any SDI systems. Would you want the computers to automatically destroy themselves, thereby eliminating any chance of a timely defense

          • If missile control/defence networks operate through networks that could be attacked from China, then the US really does deserve the nuclear annihilation that would befall it. Systems that have absolutely horrific consequences associated with their failure should never be attached to generally accessible systems.

            Who's to say they're attached to a generally accessible system? Maybe China has planted moles in the US's military departments who can access the military only networks the machines are on. Maybe

        • The story goes that JFK left an executive order, which still stands, stating that under no circumstances would America take part in a war of mutually assured destruction. He preferred to leave the planet to the Russkies rather than to the cockroaches.

          True? Maybe. Maybe not. But worth thinking about.

            The story goes that JFK left an executive order, which still stands, stating that under no circumstances would America take part in a war of mutually assured destruction. He preferred to leave the planet to the Russkies rather than to the cockroaches.

            True? Maybe. Maybe not. But worth thinking about.

            I don't really agree with the idea of MAD. I think that restarting work on SDI is just about the only good thing Bush Jr. has done.

            That being said however, if those in charge decide to go with MAD, i'd a

    • Yeah! (Score:3, Insightful)

      by twitter ( 104583 )
      The whole concept of a need-to-know basis is the understanding that information will fall into the wrong hands; you just want to minimize how much information can fall into the wrong hands when someone or something is compromised. That computers, especially military computers, would follow this highly pragmatic principle shouldn't come as much of a surprise.

      No, that's great.

      This [slashdot.org] and this [slashdot.org] are complete surprises. Who would think to create a monoculture of poor-security systems like that? Especially after r

  • by Qzukk ( 229616 ) on Wednesday July 16, 2003 @08:04PM (#6457821) Journal
    I think the next step from intrusion-tolerance would be a system that logs intruder activity, determines how the intruder got in, and when the intruder leaves, cleans up whatever rootkits, etc. were left behind after logging everything it can about the event.

    Other interesting ideas would be determining "tainted" processes run or otherwise affected (library overwrites, etc) by the intruder, and automatically sandboxing these processes in a nifty little world that looks realistic, but couldn't be used for a DDoS.

    Anyone up for writing a drop-in libc replacement that screens any attempts to overwrite libc? You'd also have to override the linker behavior, so that an attacker couldn't just LD_PRELOAD a normal libc for their apps. You'd still be open to statically compiled apps, so this may be a lot of work for only a little gain.

    Of course, this would make it hard to upgrade libc ;)
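
    The LD_PRELOAD screen itself would have to live in C, but the detection half is simple enough to sketch in Python; the libc path and polling interval here are assumptions that vary by system:

      # Watch libc on disk and raise the alarm if its hash ever changes.
      import hashlib
      import time

      LIBC = "/lib/libc.so.6"  # assumed path; differs across distributions

      def digest(path):
          with open(path, "rb") as f:
              return hashlib.sha256(f.read()).hexdigest()

      baseline = digest(LIBC)
      while True:
          time.sleep(60)
          if digest(LIBC) != baseline:
              print("libc changed on disk -- possible rootkit, log everything")
              break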
    • I think the next step from intrusion-tolerance would be a system that logs intruder activity, determines how the intruder got in, and when the intruder leaves, cleans up whatever rootkits, etc. were left behind after logging everything it can about the event.


      One way to do this is actually to make a checklist of what one does in order not to get caught when gaining root access on a system:

      Destroying log files and wtmp, disabling login services (telnet, rsh, rlogin, rexec, ssh) and serial/console ports afte
      • So how would the above solution deal with all that other than at the postmortem stage?

        I don't think User Mode Linux is "there" yet, but this scenario is the kind of thing I'm thinking of:

        Intruder exploits yet another overflow in wu-ftpd and fires up a shell. At this point, the IDS has determined that wu-ftpd is acting erratically and forks the system: the original was actually a UML instance running on a host with a bit of ipmasq/conntrack glue. A new UML is spawned, all the services restart within i
        • For databases, I think the smart thing to do is just shut down operations. You don't want to act like you are making successful transactions, since customers (paying, or internal employees) would have to perform the transactions again, which can be worse than not having performed them yet.
    • I think the next step from intrusion-tolerance would be a system that logs intruder activity, determines how the intruder got in, and when the intruder leaves, cleans up whatever rootkits, etc. were left behind after logging everything it can about the event.

      Imagine something like VMWare with "selective rollback". Because of the combinatorics, I'm not sure it's entirely possible (which is not to say that it's not partially possible), but it's certainly an idea worthy of pursuit in some form...

      C//
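
      A toy sketch of what "selective rollback" might mean: journal every write with a taint flag, then undo only the tainted ones. The combinatorial trouble mentioned above is exactly what this toy ignores -- good writes that depend on bad ones:

        # Journal writes with a taint flag; roll back only the tainted ones.
        class Journal:
            def __init__(self):
                self.state = {}
                self.log = []

            def write(self, key, value, tainted=False):
                self.log.append((key, self.state.get(key), tainted))
                self.state[key] = value

            def rollback_tainted(self):
                for key, old, tainted in reversed(self.log):
                    if tainted:
                        if old is None:
                            self.state.pop(key, None)
                        else:
                            self.state[key] = old

        j = Journal()
        j.write("motd", "hello")
        j.write("motd", "0wned", tainted=True)  # the intruder's change
        j.rollback_tainted()
        assert j.state["motd"] == "hello"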
      • Imagine something like VMWare with "selective rollback". Because of the combinatorics, I'm not sure it's entirely possible

        Something like this would work fine as long as the intruder didn't change anything that was being changed by a normal process. If the intruder started writing or removing CC numbers from a CC list that was being updated (as if I'd keep them in plain text...), then a rollback would have to be very very crafty to identify "bad" changes vs. "good" changes (hence the idea of custom write(
  • by Todd Knarr ( 15451 ) on Wednesday July 16, 2003 @08:04PM (#6457825) Homepage

    Seriously. The implementations are new, but the concept goes back to the dawn of interconnected computers, maybe further. Back in the Iron Age, you used different passwords on different systems specifically so that, if one of the systems were penetrated and your password compromised, all the other systems you had access to would not be immediately compromised as well. That was a limited form of intrusion tolerance, forcing the intruder to start over from scratch on every system in the network.

    • by goombah99 ( 560566 ) on Wednesday July 16, 2003 @08:22PM (#6457900)
      All Microsoft operating systems are extremely compliant with RFC intrusion tolerance. Indeed, they positively welcome intruders with open arms and open legs. Once in, the intruder can pretty much do as they please. If that isn't intrusion tolerant, I don't know what is.
      • Indeed, they positively welcome intruders with open arms and open legs

        You owe me a cup of coffee, a shirt, and a keyboard. :)

        And to think that they just got the "Homeland Security" contract.

      • Oh? An intruder? Okay. I'll keep oper..a....tiing as no..r...m.a BSOD..

        (reboot)

        Okay, no intruuuud...BSOD

        (reboot)

        Good morning Dave! Where would you liiik.....e ... t...o.... g....oooooo ... [I can feel my brain going].... BSOD.

        Actually, considering that this is DARPA, maybe this is a good thing. Maybe they will host the next war, and no one will come! Really!

        [Please note: I have the right to say this. I have/had a dual boot system, and my VFAT partition has finally corrupted beyond repair.
    • Actually, I don't see it the same way. That was basically the same type of wall, on different systems.

      That was not so much tolerance, as it was the only protection, and it still applies, except for idiot admins who use the same password over and over.

      This is more of an internal "protect the data stream" kind of thing.

      • Sure it is. The example he should have used was hashed passwords, shadow password files, etc. Obviously, you would hope that malicious users don't get on your system. Assuming it's a single-user system, password hashing, in theory, would be unnecessary if you were sure you'd never be intruded upon. The point behind the hashing is that if someone downloads the passwd file, he does not immediately get your password.

        In newer machines, there is even a shadow file so that even if he gets user-level access, he cann

        • These are two aspects of the same thing. Hashed passwords and shadow password files are layers that make it harder to compromise everybody on a single machine once an intruder's got a foothold on that machine. Avoiding shared passwords makes it harder to gain footholds on other machines in the network once an intruder's compromised that first machine. Basic defense in depth, and it's what the most popular systems today seem bent on eliminating.
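
          A minimal sketch of that hashed-password layer, using a modern salted key-derivation call; the details are illustrative, not how any particular passwd file actually works:

            # Stealing (salt, stored) does not immediately yield the password.
            import hashlib, hmac, os

            def store(password):
                salt = os.urandom(16)
                return salt, hashlib.pbkdf2_hmac("sha256",
                                                 password.encode(), salt, 100000)

            def check(password, salt, stored):
                candidate = hashlib.pbkdf2_hmac("sha256",
                                                password.encode(), salt, 100000)
                return hmac.compare_digest(candidate, stored)

            salt, stored = store("hunter2")
            assert check("hunter2", salt, stored)
            assert not check("guess", salt, stored)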

  • by Eric Smith ( 4379 ) * on Wednesday July 16, 2003 @08:05PM (#6457830) Homepage Journal
    All it's doing is moving the security barrier. You're creating a new line, and saying that it's OK for attackers to cross the old line, since that doesn't get them across the new line. But defending the new line is not fundamentally any easier than defending the original line.
    • I concur.

      There is a parallel here: most large corporations have given up on the virus war, and have implemented "virus management" strategies.
      They have basically said, "OK, we can't keep them out, so we'll just let them in a little bit."

      So now we're doing the same thing on the security front. I must admit, I'm not all that surprised.

      The cynic in me says, "That's what you get for outsourcing all those tech jobs."

    • by Gorobei ( 127755 ) on Wednesday July 16, 2003 @08:43PM (#6457990)
      Huh? The military has had *thousands of years* of experience in information security! They created/funded/supported research in almost every major communications/crypto system of the past two millennia.

      They know no system is totally secure - especially when your adversary has spies, troops, and bombs. You expect enemy signals intelligence, broken codes, code-books captured in combat, spies in your data centers, secure comm channels destroyed.

      There is no one line/security barrier: the only rational approach is defense in depth, with monitoring of problems and the ability to route around compromised and destroyed systems.
      • The military [...] created/funded/supported research in almost every major communications/crypto system of the past two millennia.

        Perhaps, but none of the commonly used crypto today came from the military, because the military doesn't want to share their crypto capabilities or research with the public. Think about DES, RSA, Diffie-Hellman, AES, etc.

        There is no one line/security barrier: the only rational approach is a defense in depth

        Absolutely true, and that's why I'm saying that this so-c

  • Prior Art? (Score:5, Funny)

    by Anonymous Coward on Wednesday July 16, 2003 @08:06PM (#6457833)
    " concede that systems will be penetrated by malware and hackers, but to keep operating anyway"

    Hasn't this always been the strategy of Windows? Now if they could just finish implementing that second part...
  • by Anonymous Coward
    Much engineering effort goes into balancing something's hardness against its resilience. The broad idea in security lately has been to make systems as hard as possible, which leaves them brittle. Even diamond and alumina ceramics shatter relatively easily. Building systems with something more akin to the resilience of steel makes sense... as long as you have some damned way of translating materials science into network security.

    perhaps I need coffee :)
  • Jeepers ... (Score:3, Funny)

    by Mainframes ROCK! ( 644130 ) <.moc.liamg. .ta. .viftaw.> on Wednesday July 16, 2003 @08:07PM (#6457837) Homepage
    ... sounds like somebody is reinventing Multics... again.
  • by espo812 ( 261758 ) on Wednesday July 16, 2003 @08:08PM (#6457840)
    Why do we have to accept break ins? OpenBSD hasn't had a vulnerability disclosed in months now. Does that mean there are no vulnerabilities? No. Is an OpenBSD box pretty much unusable out of the box? Pretty much yes. But the thing is if you keep things simple, they should be easy to audit. Bugs should be easy to detect and fix.

    You get into trouble when you start piling on feature after feature after feature. Is all of that really needed?

    Denial of Service is, unfortunately, harder to deal with. But when you have your own network, it's much easier to handle. Dependency on the Internet still creates a problem (the majority of US government data communication is done via the Internet). It comes down to a cost-benefit analysis - is it worth building a totally separate network? For the military, I'd say yes.
      is it worth building a totally separate network? For the military, I'd say yes

      This assumes that, just by making it separate, it will fail to be vulnerable. With a small, highly restricted network this would likely be true. The military network is huge and I think it is naive to assume that it could not be compromised by a determined attacker.

    • OpenBSD hasn't had a vulnerability disclosed in months now

      Neither has AmigaOS, ProDOS or DR-DOS.
      Really, you should listen to the trolls more often.

  • Just My .02 USD (Score:5, Insightful)

    by Sam Nitzberg ( 242911 ) on Wednesday July 16, 2003 @08:16PM (#6457872)
    In general, I don't like the idea of conceding that malware will have to be operating in a given computing environment (as stated above), even though to think otherwise would simply be incorrect. OK, Windows environments may be an obvious exception ;-)

    I would prefer to consider (at least from my own philosophical viewpoint) that you can construct systems with defined patterns of behavior, even when "malware" is introduced.

    From one of the links referenced above :

    Successive levels in the hierarchy are linked by refinement mappings that can be shown to preserve properties of interest. This project will apply this technology to intrusion tolerance properties.

    This harkens back to enforcement mechanisms (Biba Integrity Model, No Read Up, No Write down policies, Models for descriptions of multi-level secure behavior, etc...). (Aside: Amoroso's book is an excellent reference)
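
    A toy of those enforcement rules in Python, in the Bell-LaPadula style ("no read up, no write down"); the levels are illustrative:

      # Simple security property and *-property as pure predicates.
      LEVELS = {"unclassified": 0, "secret": 1, "top-secret": 2}

      def may_read(subject, obj):
          return LEVELS[subject] >= LEVELS[obj]    # no read up

      def may_write(subject, obj):
          return LEVELS[subject] <= LEVELS[obj]    # no write down

      assert may_read("secret", "unclassified")
      assert not may_read("secret", "top-secret")
      assert not may_write("secret", "unclassified")

    The point of the formalization is that properties like these can be stated precisely and machine-checked at every refinement step.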

    What this alone tells me (I didn't read all the blurbs, articles, and briefings), is that we are discussing mappings (mathematical functions), and properties (which can be mathematically tested for by use of a logic or algebraic system).

    At a glance, I am thinking of some of the issues in formal methods, proven-secure-O/S kernels, and other high-reliability software engineering methods for [secure] systems.

    I like the idea that mathematical theorem provers can be applied to any system so defined.

    Some basic issues do arise for practical application :

    - Theorem-proving aspects mean very precise use of functional requirements and mathematical specification of system behaviors. (Also, special talent and additional manpower are necessary; and mis-application of the tools, or human error introduced in the test process, can subvert the effort.)

    - This should (I believe) be applied to systems-of-systems and their behaviors. The systems that your system interacts with would have to have had similarly rigorous analysis and design.

    - There is (I believe) a trend in military computing towards commercial, and less custom, software development. Long-term, where will the actual development of such systems be funded (beyond the initial R&D stage)?

    - The use of analysis of pre- and post-conditions in the executing environment (to ensure that violations of the underlying security policy are not permitted) is not a new concept. While I am not saying that this is an intrinsically necessary mechanism for these methods, most current systems lack such an approach, and there may be fundamental computer security issues present by the nature of the software development environment. If these methods are used, it is still highly desirable to design systems with security in mind regarding their handling of all data, traffic, and O/S vulnerability issues.

    I only took a brief look at the material, but these are some thoughts. I also think that the effort itself is very worthwhile, and potentially of value. Also, looking at Dr. Lulu's credentials, there is no naiveté in his software background; the basic tenets can't just be shrugged off.

    Sam Nitzberg
    sam@iamsam.com
    http://www.iamsam.com

  • The way it should be (Score:5, Interesting)

    by mcrbids ( 148650 ) on Wednesday July 16, 2003 @08:18PM (#6457885) Journal
    Recently I upgraded and migrated to a newer, much faster server. When I moved over all my software, everything worked OK, so I switched DNS about 2 weeks ago.

    However, I got sporadic complaints about images not sizing properly, even though I initially found nothing wrong.

    It turned out that a critical piece of software (ImageMagick) wasn't loaded on the new server -- but since all the functions that resized images had numerous fallbacks (such as using expired cached copies, and failing over to full-size display, which even then didn't always cause a problem since images were frequently resized with HTML tags), the failure mostly went unnoticed.

    In any event, this (I think) demonstrates the idea - there were several layers of failure that had to happen before images didn't show - and everything kept more-or-less rolling for 2 weeks.
  • by pioneer ( 71789 ) on Wednesday July 16, 2003 @08:22PM (#6457901) Homepage
    This is similar to research being done at MIT [mit.edu] in the Computer Architecture Group [mit.edu] by Martin Rinard [mit.edu] and his graduate student Brian Demsky. They are building and researching ways to automatically detect and repair data structure errors, so that if a program's data structures get corrupted, their tool will repair the heap and the program can keep running.

    There was related work done like this back in the day at AT&T, but Rinard and Demsky have introduced automatic repair, which, as you might imagine (like this security idea), is scary to some people. Imagine a program that would have crashed due to some bug or malicious data mangling, now kept running by a tool... But the tool chooses the repair actions based on heuristics and specifications by the developer... takes some getting used to!

    All of this stuff falls under fault tolerance... it's pretty crazy to look at what the AT&T/Lucent phone switches do when they fail... they try a million different things to keep operating no matter what happens...

  • by Valar ( 167606 ) on Wednesday July 16, 2003 @08:25PM (#6457913)
    More likely, the next big jive word my boss is going to get obsessed with. I mean, sure, it's a great idea, and eventually I see it coming into heavy use, but for right now, I just see the corporate types throwing it around in their techno-babble pissing matches:

    Suit 1: We've got 10,000 uberhumungo servers running Microsoft 2003 Humungo Server Edition, with b2b backend, integrated transaction safe, load-balanced Humungo Edition IIS.
    Suit 2: Well, we have all of that, plus Intrusion Tolerance.
    Suit 1: Oh, baby. Can I merge with you?
  • Oh... I thought we were going to start being Politically Correct and stop saying bad things about script kiddies... I'm relieved to see the world hasn't quite reached that level of purgatory just yet.
  • by zogger ( 617870 )
    My best guess is that the military (and the pseudo government international defense-corporate twins) know they are penetrated in advance, ie, they got spies inside, and no way to keep them off their nets, even if secured from the "internet". They need some way to keep functional even though they know they are compromised. When you have top level nuke secrets waltzing out of supposedly secure places like los alamos, well, no amount of software is going to save you. When you have top FBI cybercops being spies
  • Just as paint programs don't allow a .jpg to delete files when you open it, network software should have the same property.

    You should be able to access data and use it, but the data should not be able to access your computer.

    The problem is that many closed source software programs have backdoors and basic coding flaws. If you understand what a program does(open source), then you can know it won't cheat you.
  • by st0rmshadow ( 643869 ) on Wednesday July 16, 2003 @08:48PM (#6458016)
    This is nothing new, Windows has had tolerance towards intrusions for years...
  • One project is working on a new standard for memory in DIMM form - the HCC DIMM - Hacker Checking and Correcting memory.
  • A fault tolerant system in which, if penetrated, continues to operate until control can be regained. . .
    OMG! We've been assimilated. Everybody listen AD2ô8 yç 48

    [Carrier lost]

  • The easiest way (and perhaps the only way) of achieving intrusion tolerance is by segmentation. Split a program into several parts which trust each other as little as possible (and run with minimal privileges); even if one part is compromised, the attacker won't gain enough privileges to do very much.

    Oh wait, I've just described qmail.
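
    A toy of that segmentation in Python on a Unix host: do the risky work in a child that could drop privileges, and let the parent trust only the narrow result coming back over a pipe. The setuid call is commented out because it requires root, and the uid is a placeholder:

      import os

      def risky_parse(data):           # stand-in for untrusted-input handling
          return data.strip().upper()

      r, w = os.pipe()
      pid = os.fork()
      if pid == 0:                     # child: least privilege
          os.close(r)
          # os.setuid(65534)           # e.g. drop to 'nobody' when run as root
          os.write(w, risky_parse("hello\n").encode())
          os._exit(0)
      else:                            # parent: never touches the raw input
          os.close(w)
          print(os.read(r, 1024).decode())
          os.waitpid(pid, 0)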
  • A network, that when penetrated, just lies back and thinks of England...

    Kind of like the missus, really...

  • Maybe it's time to revive discussion of error-oblivious programming methods. (Google for it.)
  • what?!? (Score:4, Informative)

    by shokk ( 187512 ) <ernieoporto AT yahoo DOT com> on Wednesday July 16, 2003 @10:06PM (#6458390) Homepage Journal
    So the idea is, have a vulnerability, get attacked, keep on trucking with the same vulnerability, continue to get pounded through the same vulnerability relentlessly by every script kiddie's scan, vendor never patches because we've all accepted that we can just live with the vulnerabilities, keep on suckin'?

    From the MIT article, it sounds like some intelligence will shut some non-critical services down so that the core still runs, but isn't that what Intrusion Prevention is supposed to do? When you're talking military use, I expect the important areas to be surrounded by honeypots as part of the Intrusion Detection and Prevention.
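
    A minimal sketch of that shed-the-non-critical-services behaviour; the service names and priorities are made up, and a real system would call into init scripts rather than print:

      # Under attack, keep only priority-0 (core) services running.
      SERVICES = [("fire-control", 0), ("comms", 0), ("webmail", 2), ("stats", 3)]

      def shed(services, keep_at_or_below=0):
          survivors = []
          for name, priority in services:
              if priority <= keep_at_or_below:
                  survivors.append(name)
              else:
                  print("stopping", name)   # stand-in for a real stop hook
          return survivors

      print(shed(SERVICES))               # -> ['fire-control', 'comms']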

    • Re:what?!? (Score:3, Interesting)

      by ctr2sprt ( 574731 )
      Look at it this way. When you build a bridge, you try to make it as solid as possible. You don't want it crashing down, right? So you do everything you can to protect it against every foreseeable outcome. And once that's done, you design the bridge to break in pieces; to break slowly rather than come crashing down; and in general to control the collapse as much as possible, even though such a collapse should be impossible.

      It's the same sort of thinking here. We'd like to think that we can make intrusi

  • by Woggle ( 577208 )
    Remember that from the Vietnam War? Intrusion-tolerant computer systems... the more things change, the more they seem the same.
  • About damn time. (Score:2, Insightful)

    by scphantm ( 203411 )
    I personally have gotten sick of arguing with people, asking them what they are going to do WHEN they get attacked. I lost count of how many admins I have dealt with who thought that just because they have a firewall and a BSD distribution, no one is going to get in.

    About time the question was changed from "how are you going to keep them out" to "what are you going to do when they get in".

  • by Mostly a lurker ( 634878 ) on Thursday July 17, 2003 @12:10AM (#6458850)
    I guess everyone would agree that there is some merit to the concept of defense in depth. That said, recognise that the typical user (i.e. those most likely to be hacked) will generally not do anything about an intrusion as long as they can continue to work. I think a result of better intrusion tolerance would be a significant increase in the number of long term compromised systems.
  • by lpq ( 583377 ) on Thursday July 17, 2003 @12:50AM (#6458969) Homepage Journal
    If you have a multi-level and/or granular security architecture, penetration or a hack at one security level doesn't mean automatic access to other levels or privileges. So they hack the webserver process. If the webserver is running as a non-root process in a chrooted jail -- perhaps even on a 'virtual machine' -- does that automatically mean we should shut down the whole system?
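
    A minimal sketch of that jail in Python (requires root; the path and uid are placeholders): once inside, a compromised service sees only the jail, so a hack at that level doesn't reach the rest of the system.

      import os

      def enter_jail(path="/var/jail/www", uid=65534):
          os.chroot(path)    # shrink the visible filesystem to the jail
          os.chdir("/")
          os.setgid(uid)     # shed group privileges first...
          os.setuid(uid)     # ...then give up root itself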

    It's the same with well designed programs -- there was a slashdot article recently on QNX -- that is designed to be fault tolerant -- and it works. Only when you design huge monolithic code monsters where a fault anywhere in the monster means kill the whole beast do you have such frail computer systems.

    Imagine human skin hacked by a scrape on some sharp object. If the first decision was to instantly kill the whole host, there wouldn't be too many humans -- can you say *stoopid* design?

    Sure, there are some things that can't be healed, but the majority of us have had scrapes and bruises growing up and are still quite healthy -- and even where the car body may have permanent damage, the engine/CPU (the person's brain) is often quite capable.

    Next time you think fault-tolerant or intrusion-tolerant systems are foolish and impossible, think "Stephen Hawking", or "Einstein" (not able to complete High School). I had a *stoopid* manager who thought that making system audit efficient enough to be left on by default in all but the most demanding of compute environments was a waste of time -- that it was *impossible* to build real-time intrusion detection systems.

    Of course people thought it was impossible to circumnavigate the globe (you'd fall off the edge), impossible to fly, impossible to go faster than the speed of sound, etc.

    Every time someone talks about how "impossible" something is, you have to realize they are consciously or unconsciously thinking inside a box. To do the impossible requires something that *isn't* engineering. It isn't manageable. It can't be driven by a schedule. You have to *think outside the box*. You have to be creative. By definition, engineering isn't creative. Engineering is taking known principles, applying them in some set of known circumstances, and coming out with another "widget" that looks similar to a previous widget.

    Most large companies breed conformity and uniformity. While this type of engineering is great for reproducing Hondas on an assembly line, it greatly hinders thinking 'out of the box' (the box of conformity and uniformity that the company asserts is "necessary" for its business). Then they wonder why what was once a 'wonder company' is now a 'dinosaur company'.

    Creative people are often *not* group players -- if they had a group mentality, then how can they be expected to come up with any idea that is radically different from the rest of the group?

    Creative people tend more toward not having exceptional social graces (think of the novel ideas of unix, or Multics). These were not done by suit-and-tie, management "yes"-men. Even Linux was started by 1 person -- who has not always been known to be the social charmer, even tempered type -- and I certainly don't get the impression that everything is done by group consensus.

    But already in linux, there is a fair amount of doing things the 'linux' way, certain people to please, various people who get say-so or veto powers (or are believed to have such) beyond Linus.

    People familiar with Microsoft can remember when even the simplest application crash would bring down the entire system. Unix people would generally laugh at this. But now we see those who think a single penetration should cause the whole system to be brought down. Maybe it will require a next-generation OS (dunno enough about QNX to know if it might qualify), but there are other OS's that have better security records than linux (BSD, OS/X (I've heard)).

    Linux, laughably, doesn't even have CAPP certification. Sure, there are a lot more Microsoft vulnerabilities every
    • I read this full argument and generally agree; however, you operate on dictionary-type notions, i.e., that since someone is thinking outside the box or creatively, it's necessarily a good thing. It's not like that in all situations. Don't get me wrong, it's good to be objective in a lot of situations; it's just that security isn't one of them. When it comes to dealing with security systems, there really is no thinking outside of the box. The goal of a security system is to secure the system; as you said before, if s
  • This is all well and good, but what if there is a bug in the actual trust part of the kernel, or simple user error gives people more access than they should have? You can't protect against human stupidity.

    Rus
  • I know it's off-topic, and I really don't like to have to wax RMS, but it's "cracker", not "hacker". "Hacker" isn't a synonym for "computer criminal"...

    I know I'll get modded down for this, but I really think that SlashDotters should not be making posts about those evil "hackers"... I am a hacker. I don't break into systems.

    (/rms)
  • Shameless plug: Askemos [softeyes.net] is a GPL'ed incorruptible and intrusion-resistant operating system (or application server, for that matter).

  • One of the cornerstones of any multiuser OS is that some users are expected to be malicious.

    The OS has to have sufficient isolation that this luser only damages her own files and processes.

    IIRC, FreeBSD even has a write-once "immutable" file flag (schg) that, while the securelevel is raised, locks even root out of some functions.
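
    A hedged sketch of using that facility from Python on a BSD-style system; the path is illustrative, and with the securelevel raised even root cannot clear the flag:

      import os, stat

      # Mark the file system-immutable: while the securelevel is raised,
      # not even root may modify or delete it.
      os.chflags("/var/log/auth.log", stat.SF_IMMUTABLE)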

  • by Sajma ( 78337 ) on Thursday July 17, 2003 @08:46AM (#6460334) Homepage
    Byzantine fault tolerance [mit.edu] (BFT) is a "traditional" distributed systems technique that enables intrusion resilience. BFT replicates a service such that the service continues to work correctly as long as fewer than one third of the replicas are compromised. Combined with proactive recovery (periodically shutting down replicas and restarting them from a read-only disk), this can enable the system to survive an arbitrary number of compromises over its lifetime.
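
    A toy of the voting rule behind that guarantee: with n = 3f + 1 replicas, any answer vouched for by f + 1 of them includes at least one honest replica, so f lying replicas cannot forge a result on their own:

      from collections import Counter

      def bft_result(replies, f):
          # f + 1 matching replies cannot all come from faulty replicas.
          for value, count in Counter(replies).most_common():
              if count >= f + 1:
                  return value
          raise RuntimeError("no value vouched for by f + 1 replicas")

      # n = 4, f = 1: one compromised replica cannot outvote the honest three.
      print(bft_result(["fire", "fire", "fire", "stand down"], f=1))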
  • Intrusion Tolerance is already being practiced, although another term for it is defense in depth.

    Another poster has described how defense in depth and fault tolerance apply to firewalls, network infrastructure, etc. I'd like to mention host-based measures to slow an attacker down and limit the damage they can do.

    One of the oldest host-based D-i-D measures is chroot jails. A 'chroot' in Unix means that an application is run with access to only a limited subset of the filesystem, one which does not cont

"Your stupidity, Allen, is simply not up to par." -- Dave Mack (mack@inco.UUCP) "Yours is." -- Allen Gwinn (allen@sulaco.sigma.com), in alt.flame

Working...