Security

Distributed Security

A reader writes: ""Where Schneier had sought one overarching technical fix, hard experience had taught him the quest was illusory." A long and detailed article at The Atlantic Online on why Bruce Schneier has come down from his strong cryptography tower to preach the gospel of small scale, ductile security against the popular approach of broad scale, often high tech security that often proves to be very brittle."
This discussion has been archived. No new comments can be posted.

  • yeah (Score:2, Funny)

    by vectus ( 193351 )
    so 'distributed' is the new buzzword, huh?

    I think I'm going to create a distributed ASP, umm.. internet synergy proxy. it'll use a Beowulf cluster of nodes of umm, privacy and corporate responsibility.

    so.. like.. send me your money, and I'll set it all up there..
    • Re:yeah (Score:2, Funny)

      by dsconrad ( 532462 )
      so 'distributed' is the new buzzword, huh?

      I think I'm going to create a distributed ASP, umm.. internet synergy proxy. it'll use a Beowulf cluster of nodes of umm, privacy and corporate responsibility.

      so.. like.. send me your money, and I'll set it all up there..


      Sounds great, as long as you stay proactive.
    • Re:yeah (Score:2, Interesting)

      by Anonymous Coward

      I was the anonymous poster of this story, and the title I submitted was "America's Maginot Line". I am disappointed that this title was dropped as it is directly relevant to the attitude being discussed in the article, and in fact the Maginot Line is directly referenced in the piece. A quick search reveals not a single use of the word 'distributed' in the entire article.

      p.s. Sorry about the clumsy double use of 'often' in the last sentence - wouldn't have minded some editorial action there.

    • Dude, you forgot the XML web-based, content-enabled e-business stuffing.
  • Haven't we been here before [slashdot.org], about a year ago?!
    • from the no-you-are-just-seeing-things-dept:

      Oblivious writes: Even though you may think you have seen this on Slashdot before, you really haven't; the editors would catch it otherwise, you see! People who submit worthy stories get rejected, but those who submit old and worthless shit get through, you see!

      from the I-am-getting-really-annoyed-with-the-repeats-dept:

      Aggrivated writes: Editors, I have supported you on this in the past, but now it is just out of hand. What is this, the third, fourth+ time this week that this has happened? It's fucking Tuesday morning, boys. I am no longer in support of your lack of effort. This is now a job. Editors get in trouble for small misspellings. You should be fired for failure to do your job.
  • Let's move to a non-password method of user validation for computer systems and networks. Anyone for a USB retinal scanner or a DNA fingerprint validation system for your office network PC?

    • As a hacker of your system, I'm all for this. Now I don't have to break your password, just spoof the USB packets.
      Thank you much.
      J. Hacker Anonymous
    • I would like to suggest a new kind of biometric identification - the Personalized Advanced Identification Norm (P.A.I.N.).

      The user is made to read Slashdot articles at level -1 for 15 minutes. During that time, the pattern in which the user bangs his/her head on the table is measured. This pattern is trained into a neural network, which is later used to identify the user.

      This method is a bit slow, but I'll personally guarantee that it's fail-proof (trust me).
    • by dido ( 9125 ) <dido&imperium,ph> on Tuesday August 13, 2002 @04:04AM (#4060199)

      What Schneier actually advocates in the article is using at least two of these three factors for user authentication: something you know (e.g. a password), something you have (e.g. a smart card or other secure token), and something you are (biometrics falls under this rubric). Depending on only one is necessarily weak, but even two of the three taken together would be strong indeed. For instance, a website that not only uses username/password pairs for authentication but also lives on SSL *and* requests a client-side certificate from any browser that wishes to visit the protected page combines something you know (your username and password) with something you have (the computer where the browser with the client-side certificate is installed, or better yet a smart card holding the cert). THAT would make Schneier's Parable of the Dirty Website fail utterly without extra work: without the client-side cert, the site wouldn't even serve the username/password page to you. Fine, the password is compromised because the employee used the same password to surf for porn, but since access to the certificate is limited to the computer where it's installed, or to the smart card the employee possesses, no dice unless you can also steal the smart card and/or the computer. Even better would be to add biometric authentication to the secured computer, so you'd then have to steal the fingerprint or retinal scan or whatnot as well to break the system.

      It can be done of course, but it would require contortions worthy of Sneakers [imdb.com] .

      The whole article actually feels like a distillation of the last six months of the Crypto-Gram newsletter [counterpane.com].
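
      To make that concrete, here's a minimal sketch of such a two-factor gate (Java; all names here are hypothetical, the cert is assumed to arrive via the SSL handshake, and a real deployment would also verify the cert chain and salt the password hash):

      import java.security.MessageDigest;
      import java.security.cert.X509Certificate;
      import java.util.Arrays;

      // Toy two-factor gate: "something you have" (a client-side cert
      // from the SSL layer) AND "something you know" (a password).
      // Illustrative only - not Schneier's design.
      class TwoFactorGate {
          private final byte[] storedPasswordHash; // set at enrollment

          TwoFactorGate(byte[] storedPasswordHash) {
              this.storedPasswordHash = storedPasswordHash;
          }

          boolean authenticate(char[] password, X509Certificate clientCert) {
              if (clientCert == null) return false; // no cert, no login page at all
              try {
                  clientCert.checkValidity(); // throws if expired or not yet valid
              } catch (Exception e) {
                  return false;
              }
              return Arrays.equals(hash(password), storedPasswordHash);
          }

          static byte[] hash(char[] password) {
              try {
                  return MessageDigest.getInstance("SHA-256")
                                      .digest(new String(password).getBytes("UTF-8"));
              } catch (Exception e) {
                  throw new IllegalStateException(e);
              }
          }
      }

      Steal the password alone and authenticate() still fails - which is the whole point.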

      • There is no way to directly measure that you are you. Any biometric system can only identify you by measuring something that you have, such as your fingerprint. Regrettably, this has been shown to be easy to fake. Trinity sounds nice though, sort of religious, don't you think?

        The problem with biometrics is that in the end it is nothing more than a digital signal going down a wire. If the hardware can be compromised, then the physical attribute being measured just becomes another signal that can be captured and replayed.

      • But the point is also that if you put all your trust in authentication, no matter how many factors you use, then you have a point of failure.

        Imagine perfect, error-free, unforgeable identification. Then you can guarantee that the person looking at your classified files is not an impostor, but is the real Kim Philby. Or the CIA director who took classified files home on his laptop and connected it to the Internet over his AOL account (I am *not* making this up).

        I see the central point as being the need to keep human attention in the loop and to contain failures.
    • FWIW, the referenced article includes an extensive discussion of the ways in which currently commercially available biometric authentication mechanisms have been found to be, how to put it politely, rather less reliable and rather easier to spoof than their glossy booklets and glossy marketroids (not to mention lossy politicians) have been wont to claim.

      Which is not to say that a biometric device combined with intelligent human oversight (so you'll be spotted if you try to use an artificial hand to fool a device based on hand and finger sizes, for example) isn't an appropriate component of an authentication system, and the article gives an example in use on Mr Schneier's home turf.

      Seriously, do read the article, even if it is a little on the long side. It contains a lot of good sense: in particular its emphasis on putting human decision-making back into the loop, rather than looking for all-encompassing technical solutions. We're clearly not yet at the point where our technology is sufficiently advanced that it can act as if by magic - as a lot of snake-oil merchants pretend, and a lot of quick-fix politicos who should know better affect to believe.

    • Anyone for a USB retinal scanner or a DNA fingerprint validation system for your office network PC?

      Go watch 'Demolition Man' :-)

      The only true answer to computer security problems is: (wait for it)

      Never put anything on a computer unless you know how you will recover when the whole world sees it.
  • by aebrain ( 184502 ) <aebrain@gmail.com> on Tuesday August 13, 2002 @02:12AM (#4059975) Homepage Journal

    Ductility - the ability to fail gracefully - isn't just essential in the area of security; it's essential for reliable systems generally. All programmers who've worked on stuff like combat systems for ships, aircraft avionics, railway control systems, etc. should know this, and most do.

    There are two ways of making things secure - against outside attack, or against internal failure. I call them the Battleship and the Blob. With the Battleship, you load up the firewall, or put in 2048-bit encryption, or even have an air gap. You basically rely on a layer of "armour plate" that your predicted threat can't penetrate. But this often fails - the threat either goes around the armour, or the incoming shell is bigger than you'd bargained for, and penetrates. Far safer in practice, though not in theory, is the Blob. This has layer after layer of safety features, each of which is easily circumvented in isolation, but every one of which limits the damage. Bugs can exist, attacks get through, but it works anyway. You can shoot the Blob full of holes, but it keeps on oozing along... Terminator 2, not Terminator 1.

    What does this mean for programmers? Use strong typing (if your language doesn't support it, fake it with explicit sanity checks - boolean isSane()), always check inputs for sanity, check that your outputs are at least plausible, get good peer review on everything, KISS - basically all the techniques professional Software Engineers, rather than 31337 haXOrs, have been spouting on about for some time. The software equivalent of "Wear a belt and braces, keep a piece of string in your pocket, and then make sure your underwear's in good shape."

    • Unfortunately, most programmers, researchers, and other professional types take an interest in the Linux kernel itself, leaving the more boring stuff (like userland tools) to the hobbyists. I can't count the number of times I've cursed something written in-house by RedHat, Mandrake, etc., because it was written so poorly that it bombed when I accidentally hit the return key, escape, F10, etc. The two major times I have reason to run a program written by a Linux distribution are during installation and post-configuration. I dread both of these times when I'm installing Linux, because I know that no distribution is going to hire a real programmer to write these programs for them. They end up writing quick and dirty hacks in an unmaintainable scripting language, then rushing the product through a beta test period. Ugh.

      The only thing you can do when you install Windows is click "next" repeatedly. It's infuriating, but at least it never bombs on me, like the Linux installation programs do. If I had a choice between a fancy, configurable install program and a barebones install program, I'd definitely go for the fancy, configurable one... the first time. After that, I'd stay as far away from it as possible. More features = more bugs, and you do NOT want bugs in your install program.

      It would also help if RedHat, Mandrake, et al hired real programmers, rather than spending all their money on managers and suits. One programmer with twenty years of experience is worth a hundred suits.
    • >Use strong typing (if your language doesn't
      >support it, fake it with explicit sanity checks -
      >boolean isSane()), always check inputs for sanity,
      >check that your outputs are at least plausible,

      Funnily enough, what you described in the first part is Eiffel's contract programming - now, who is using Eiffel here?
      Nobody!
      The worst part is that I don't know of any other language which uses assertions, preconditions, postconditions, etc. so heavily.

      > get good peer review on everything, KISS [...]

      I would also add: use unit tests and non-regression tests. And if you're a manager, make sure that everyone is testing their software properly.

      Incredible as it seems, many managers make their teams skip the unit tests because "unit tests are too costly"!!!

      • Funnily enough, what you described in the first part is Eiffel's contract programming - now, who is using Eiffel here? Nobody! The worst part is that I don't know of any other language which uses assertions, preconditions, postconditions, etc. so heavily.

        You spotted it. Yes, Eiffel has some excellent features here. Ada does essentially the same job by strong typing - no need to check if a value is between 0 and 23 if the variable is of a type that can't have values outside that range - and it raises an exception if you try to put one in. Ada's strong typing is often better than Eiffel's in that regard, but Eiffel's contracts are more useful in other circumstances. Ideally a language should be a cross between them, and also have some of the neatness of Java as regards inheritance, as opposed to Ada-95's rather clumsy syntax. Until that comes along, my favourite's Ada, but I count Eiffel practitioners as being of like mind.

        But having such features as part of the language just increases productivity and makes the programmer's life easier. You can get maybe 50% of the benefit through excellent practice in any language. In theory, C programs written by a genius can be almost as safe as Ada or Eiffel programs written by someone merely competent. Practice has shown that they never are, but that's because it takes so much more work in C. See the article Correctness by Construction [af.mil] in Crosstalk [af.mil], the Journal of Defence Software Engineering. That still shouldn't stop programmers in C, C++, C#, Java or whatever from manually doing what the high-level languages provide automatically, and the really good ones know this. The result may not be good enough to fly a plane or run a railway safely, but it's good enough for non-safety-critical applications.
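
        To illustrate, here is a hand-rolled sketch of that discipline in Java, which has no ranged types (Ada's "type Hour is range 0 .. 23" gives you all of this for free; the class name is invented):

        // Faking an Ada-style ranged type in Java: the constructor is
        // the only way in, so no Hour object can ever hold a bad value.
        final class Hour {
            private final int value;

            Hour(int value) {
                if (value < 0 || value > 23) {
                    // Ada would raise Constraint_Error automatically here.
                    throw new IllegalArgumentException("hour out of range: " + value);
                }
                this.value = value;
            }

            int asInt() { return value; }
        }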

        I would also add: use unit tests and non-regression tests. And if you're a manager, make sure that everyone is testing their software properly.

        Damn straight. Wish there were more people like you around; BSoDs and buffer overflows would be endangered species rather than existing in plague proportions.

        A.E.Brain's Tip of the day for Java programmers: Classes should have a main() that performs a self-test, a boolean-returning isSane() that weeds out obviously wrong values, and a fakeSomeTestData() factory (a named constructor of sorts) for other classes' self-tests to use. Try it - development time will decrease, productivity will increase, and maintenance is a doddle, as anyone new coming in can run any class and see how it works.
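
        A sketch of what such a class might look like (the class, its sanity bounds, and its values are invented; run with java -ea so the assertions fire):

        // Self-testing class in the style of the tip above.
        final class Reading {
            final double celsius;

            Reading(double celsius) { this.celsius = celsius; }

            // Weed out obviously wrong values (bounds are an assumption).
            boolean isSane() { return celsius > -90.0 && celsius < 60.0; }

            // Fixture factory for other classes' self-tests to use.
            static Reading fakeSomeTestData() { return new Reading(21.5); }

            // Built-in self-test: anyone new can run this class directly.
            public static void main(String[] args) {
                assert fakeSomeTestData().isSane();
                assert !new Reading(9999.0).isSane();
                System.out.println("Reading self-test passed");
            }
        }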

    • But this often fails - the threat either goes around the armour, or the incoming shell is bigger than you'd bargained for, and penetrates. Far safer in practice, though not in theory, is the Blob. This has layer after layer of safety features, each of which is easily circumvented in isolation, but every one of which limits the damage.
      Two problems: (1) in an actual organization, people need to get work done, and don't have an infinite amount of time to deal with security systems. This is easily seen at a nuclear power plant, where Joe Operator can spend up to 25% of his (paid, presumably productive) workday dealing with security and access-control mechanisms. (2) Organizations don't have an infinite amount of money to spend on IT, either. Consider $250,000 spent on a 5-axis milling machine vs. the same amount spent on IT systems and their associated security requirements. Yes, the 5-axis machine is expensive, fussy, difficult to set up, and requires a lot of training. But once it is in and running, it works, generating a stream of profit for the organization. And while it requires maintenance from time to time, it doesn't suddenly explode, taking the entire customer list with it (say). Which may explain the sudden drop in IT investment in the last two years!

      sPh

    • I would add to this - minimizing the number of lines of active code is not always a good idea, but one should always minimize the operational complexity of the algorithms in the main engines.

      And ALWAYS check output for anything mission-critical. I don't want to hear about any more people being OD'd on radiation therapy ;)
  • by Dr. Awktagon ( 233360 ) on Tuesday August 13, 2002 @02:12AM (#4059976) Homepage

    Technological solutions for social problems (like legislative ones) are only as good as their worst failure mode.

    I'm tempted to write more in this /. comment but I think that idea is pretty deep. The article (for those who didn't want to read it all, I don't blame you) describes how Schneier came to realize this.

    I believe one of our ex-presidents (LBJ perhaps) has a quote where he expresses the same idea about laws.

    Unfortunately, the most effective solutions aren't always the ones chosen. Our current government seems to have no concept of the idea that you don't just have to "do something", you have to do the right "something".

    Since /. readers are such a cynical and paranoid bunch, we can come up with all sorts of failure modes for today's "security". Imagine the dumb blank look that would appear on Ashcroft's face if you asked him, "What if someone gets a copy of the fingerprint used in those biometric systems? Will the federal government be paying for finger transplants?" Then after a few seconds the blank look would disappear, and the lies and bullshit would stream out.

    Just like the TV talk shows. One intelligent guest will make a simple point ("what if they sharpen the edge of a credit card? isn't that more dangerous than a nail clipper?"), which to me would be an instant show-stopper, forcing me to stop and re-think the whole system, but then the other guests will pile the bullshit so high the point is quickly forgotten.

    It makes you wonder if the legislators actually consulted any security experts (that weren't trying to sell something). Probably not.

    • what if they sharpen the edge of a credit card? isn't that more dangerous than a nail clipper?

      Yup. Flint knapping is a not-unheard-of hobby. Wonder if I could get a piece of deer antler and some rocks past a security guard. Or a CD - ever break one of those? How about a laptop computer? They're full of sheet metal, and you can make an expedient knife out of sheet metal.
    • I believe one of our ex-presidents (LBJ perhaps) has a quote where he expresses the same idea about laws.

      This be the quote you're looking for:

      You do not examine legislation in the light of the benefits it will convey if properly administered, but in the light of the wrongs it would do and the harms it would cause if improperly administered.
  • by dsconrad ( 532462 ) on Tuesday August 13, 2002 @02:27AM (#4060006)
    The article brought up a good point about cryptosystems that depend on keeping the algorithm secret. Once that secret gets out, the security is hopelessly compromised. The Germans learned this the hard way in WWII.

    I think this has a nice parallel to the entertainment industry's approach to DRM. The fiasco with DVD encryption is a perfect example. Once the format was broken, the genie was out of the bottle. Making laws to try and stuff the genie back in just will not work.

    With the ever increasing number of people who try to break security protocols as a hobby, it seems that relying on secrecy to keep things safe is a recipe for disaster. The internet allows information to be distributed so quickly and widely that no secret will stay secret very long.

    If the entertainment/software/etc. industries continue to rely on their nonexistent ability to keep secrets, we will either have an overabundance of silly, overbroad laws, or else the companies will falter and die. No matter how large and dedicated their tech geeks are, there is no way to match the vast number of hobbyist nerds trying to break stuff for fun.
    • Once that secret gets out, the security is hopelessly compromised. The Germans learned this the hard way in WWII.

      Well, I'm not sure about that. Once the Allies worked out how the Enigma machines worked, German comms were not suddenly an open book. Yes, we could set the early calculating machines (the bombes) and the first computer (Colossus) to work on the daily code, but they needed help. A lot of the breaks came where the Germans were careless, like sending weather reports first thing in the morning in a known format. If you knew the weather was clear, you could capture the first message of the day from place X and know that the ciphertext matched the plaintext "the weather is clear". There was, of course, a bit more to it than that, but that's the basic idea.

      Knowing the algorithm wasn't enough, as the task, with no clues, was too computationally intensive for the technology of the day - much like cracking a public key is certainly doable when you know the algorithm, just not doable in any reasonable timeframe.

      The clues the Allies got about what the ciphertext might decode to, and the codebooks they captured, contributed massively to the code-breaking effort.

      dave
    • Once that secret gets out, the security is hopelessly compromised. The Germans learned this the hard way in WWII.

      Actually, no. The Germans used public-key cryptography. We just came up with a computer fast enough to crack it. Their answer? Increase the length of the key. (Sound familiar?)

      -Ben

      • I'm not an expert in the history of cryptography, but from what I have read, public-key cryptography was invented in 1976 (or maybe '75 or '77) by Whitfield Diffie and Martin Hellman, so it is unlikely that the Enigma machines (or any other cryptography used by the Germans) were anything of the sort.

        I think those devices were a sort of stream cipher that used an initialization vector, but I could certainly be mixing up my terminology, or even just plain wrong...

  • Interesting article. (Score:4, Interesting)

    by ^MB^ ( 153039 ) on Tuesday August 13, 2002 @02:39AM (#4060040)
    Very long, but worth the time to read. I've been a big fan of Schneier since I read his book a few years ago.

    Best article quote: "Cryptophiles, Schneier among them, had been so enraptured by the possibilities of uncrackable ciphers that they forgot they were living in a world in which people can't program VCRs."

    Perfect timing as I'm gearing up for CRYPTO 2002 [iacr.org] at UCSB, YAY!

    -Nick
    • I don't see why people present this as a turnaround on Schneier's part.
      In _Applied Cryptography_ he gives every impression that policy and protocol are critical to the successful security of a system - the first few chapters are dedicated to "if Alice tells Bob..." scenarios!
    • Presumably by mentioning 'his book' you're talking about _Applied_Cryptography_, but he's written several [amazon.com] other books too.
  • The beef (Score:5, Insightful)

    by jukal ( 523582 ) on Tuesday August 13, 2002 @02:41AM (#4060041) Journal
    Is actually in the first sentence.

    <clip> "The trick is to remember that technology can't save you," Schneier says. "We know this in our own lives. We realize that there's no magic anti-burglary dust we can sprinkle on our cars to prevent them from being stolen. We know that car alarms don't offer much protection. The Club at best makes burglars steal the car next to you. For real safety we park on nice streets where people notice if somebody smashes the window. Or we park in garages, where somebody watches the car. In both cases people are the essential security element. You always build the system around people."</clip>

    • Re:The beef (Score:3, Funny)

      by Spunk ( 83964 )
      The Club at best makes burglars steal the car next to you.

      Well, that sounds to me like it works perfectly.
    • I remember this was one of the arguments for the use of the "LoJack" system. The idea was that car alarms and other visible means of theft deterrence are actually anti-social. A car thief finds a car with the Club on it, so he goes to the next car in line. The owner of the first car has effectively passed his misfortune on to someone else. However, with a hidden system like the LoJack, the criminal is caught and tossed in jail where he belongs. Society as a whole receives a benefit. Of course, I'm not a big fan of carrying a tracking device.
      • How is a Club user responsible for the theft of someone else's car? Isn't it the car thief's "antisocial" behavior at the root of this?
        • Owners of The Club are not responsible for the car theft. However, by using the Club, they are paying (for the Club) to offload the chance of theft onto other people. The Club is no good (you just cut the steering wheel and remove it), but if it did deter thieves somewhat, you could end up with a situation where everybody had to buy the Club just to maintain the status quo of which cars got stolen. If it slowed thieves down, they would get caught more often, so I guess that would be a good thing.

          Hmm, I guess I don't have a useful conclusion on this one.
  • I just wrote something similar at Gamasutra: Cyberspace in the 21st Century: Security is Relative [gamasutra.com]
    • I just wrote something similar ...
      Just a suggestion. The link took me to a login box. If you want someone - even (shudder) /.-ers - to comment on it, please post a summary or put it where the world and its pet dog can see it without jumping through hoops.
      • Just my revenge for all the NYTimes stories...

        but I take your point.

        Maybe one day NYTimes, Gamasutra and other sites will abandon the idea of registrations.

        Here's a sample of the article just to give you a flavour:

        "With the advent of the Internet it's important not to lose sight of what we're trying to secure, and risk ending up thinking security is sacred. Fragile systems that lose significant value when they're compromised by accident or deliberate act are indeed candidates to warrant considerable security. However, more flexible systems that are expected from the outset to be compromised (perhaps only in part) on a continuous or occasional basis can still maintain their value. Security for such systems is, and must be, an intrinsic property and not an added feature.

        The thing is, there's a risk that by continually reinforcing a system's security it simply becomes more and more complicated, burdensome to maintain, unwieldy, and worst of all, ever more fragile. That's why I think it's useful to explore analogues to networked computers; it broadens one's perspective of what's important and how much security, or lack of it, other systems can tolerate."
  • RSA Wars (Score:5, Funny)

    by DarkHelmet ( 120004 ) <<ten.elcychtneves> <ta> <kram>> on Tuesday August 13, 2002 @03:28AM (#4060138) Homepage
    During the 1990s Schneier was a field marshal in the disheveled army of computer geeks, mathematicians, civil-liberties activists, and libertarian wackos that--in a series of bitter lawsuits that came to be known as the Crypto Wars...

    Luke: You were in the Crypto Wars?

    Schneier: I was once an RSA Knight like your father. He was the best Composite Factorer in the whole galaxy... I see you have written programs that factor large numbers yourself. He was a good friend. Before the Dark Times, before The Empire [microsoft.com].

    Luke: What happened to my father?

    Schneier: A young RSA Knight by the name of Len Adleman [usc.edu] betrayed and murdered your father. Adleman was seduced by the Dark Side of the Force [usc.edu].

  • by richard-parker ( 260076 ) on Tuesday August 13, 2002 @03:30AM (#4060143)
    The article briefly mentions the following:
    A few years ago Schneier devoted considerable effort to persuading the State of Illinois to issue him a driver's license that showed no picture, signature, or Social Security number.
    I haven't heard that story before. Can somebody point me to a source with more details?
  • by Camillo ( 123336 ) on Tuesday August 13, 2002 @03:36AM (#4060157)
    Bruce's "enlightenment" is of course a good thing, and he is brilliant in his way of presenting security issues for the masses. However, security engineering is far from a new field, and many of the principles are well established.

    Take a look at Ross Anderson's home page [cam.ac.uk] and read a few of his classics like "Why Cryptosystems Fail", "Programming Satan's Computer" and "The Cocaine Auction Protocol".

    Ross' book "Security Engineering - A Guide to Building Dependable Distributed Systems" should be mandatory reading for anyone who writes code for networked computers - no matter what kind of computers.

    I feel that one of the biggest threats to Internet security today is the inability to learn from history. That is, after all, at the core of the engineering arts and sciences.

    • one of the biggest threats to Internet security today is the inability to learn from history. That is, after all, at the core of the engineering arts and sciences.

      I think you mean the ability to learn from history is the core of engineering and science. The inability to learn from history is the core of legislating solutions to technical problems. The result of this inability is almost always something that is determined by the law of unintended consequences.
  • by martin ( 1336 )
    Nice to see someone so respected be humble enough to say 'I was wrong'.

    Of course this is old news, as his book "Secrets & Lies" discusses all of this in detail.
  • Like Schneier says, a good sentry is one of the best additions to the security blanket. Trouble is, where do you find good sentries? Night watchmen are some of the worst paid employees on the payroll, and time and time again have been shown to miss the obvious attacks. It's repetitive, boring work that most people would hate.

    The problem lies with the way the human brain operates. We evolved to match patterns as a survival skill - to pick out images from masses of almost random data. Is that a piece of ripe fruit on that tree over there? We are so good at it that we can see patterns in anything: faces in inkblots, or subtle "head and shoulders" movements in stock markets. Generating false positives is also a survival trait when it comes to looking for threats. Is that moving mass of lines the face of a tiger, or a snake? Better to be cautious and check it out.

    But monitoring for exceptions is not a thing humans are good at. Staring at production lines filled with identical chocolates, looking for the one that isn't right, human eyes and brains fail at the task. What happens is that your pattern-matching circuitry spots the wrong pattern: "these are all the same, so there is no problem". Each new piece of incoming data confirms this, and the brain goes to sleep (try it some time!).

    At airport scanners the operators have to take very frequent breaks from studying the X-ray images of suitcases. On top of this, every 10 minutes or so a bag is fed through that they should react to. Like they say, this keeps them on their toes - or, put differently, it stops the pattern matcher from saying "I already found the pattern, stop bothering me with new data". This approach is better, but it is still too labour-intensive.

    IMHO the way forward lies in a combination of human and automatic scrutiny. The automatic part consists of filtering out the routine, leaving human eyes to sort out the final details. If a security system generates 1,000 alerts an hour, it will be ignored. Making a more sophisticated system that cuts down the number of false alerts is usually expensive and, as Schneier suggests, more likely to weaken things by giving a false sense of security. If, however, the system generates 1,000 alerts and flags the 10 most suspicious for human eyes to look at in detail, then you capture the best of both worlds. The smart piece is the algorithm that ranks the alerts as more or less interesting, and this is where security experts make the difference.

    What Schneier is suggesting is that human+machine monitoring of a smaller range of very specific inputs is better than automatic trawling of masses of nonspecific input.
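
    A toy sketch of that ranking step, just to make the shape of it concrete (the scoring heuristic and field names are invented, not from the article):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Toy alert triage: score every alert, but show a human only the
    // top k. The suspicion heuristic here is invented for illustration.
    class Alert {
        final String source;
        final int failedLogins;
        final boolean offHours;

        Alert(String source, int failedLogins, boolean offHours) {
            this.source = source;
            this.failedLogins = failedLogins;
            this.offHours = offHours;
        }

        // Crude score: repeated failures are bad, off-hours activity worse.
        double score() { return failedLogins + (offHours ? 5.0 : 0.0); }

        static List<Alert> topSuspects(List<Alert> all, int k) {
            List<Alert> sorted = new ArrayList<>(all);
            sorted.sort(Comparator.comparingDouble(Alert::score).reversed());
            return sorted.subList(0, Math.min(k, sorted.size()));
        }
    }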

    Good article, well worth the read.
    • The problem is not so much the way the sentry's brain functions as the way the PHB's brain functions. Too many amateurs either have no security, or they think that the one or two layers they've got are sufficient. Even worse, they might have a decent security system in place, but they compromise it because they can't be bothered doing their part to make it work.

      The first thing you have to ask is: what do you want to make secure? Is it a PC, a site, an aircraft? There's not much point having layers of sentries, keys, and passwords if your "secure" computer is hooked up to the net. Even with firewalls, if it's supposed to be secure, it shouldn't be online in the first place. If the site is meant to be secure, then it helps to have only one gate that people enter and exit through. Another mechanism is for employees to keep their ID cards clearly visible and to challenge any unfamiliar person walking around unescorted. When it comes to securing aircraft, the most sensible option is in the article: put in a half-way decent door! Sure, it weighs a few kilos extra, but not so much that it will cause a problem, given the normal distribution of passenger weights.

      Technology can help human security - some of the better airport X-ray machines will highlight different items on screen - organic items in one colour, metallic in another colour, so that the operator's eye is drawn to the suspect item.

      Your idea of filtering the alerts so that the operator only sees the top 10 is nice, but I think it suffers from a fatal flaw. Yes, 1000 alerts per hour will be ignored, or else it will overwhelm the operator, which is no better. Yes, designing a proper system and calibrating it to reliably remove false alerts is very expensive, although it shouldn't cause a false sense of security if it's used correctly. But aren't you proposing the exact same thing by generating 1000 alerts but only flagging the 10 (or even 100) most suspicious? You're filtering the results before giving them to the user, and unless they have a lot of time, or are very keen, they're not going to check out any more than the 10 that get shown to them. Also, what's the point of generating that many alerts if most of them are being ignored or filtered? On the one hand, your system might be too sensitive, in which case you can hopefully calibrate it to a better response rate. On the other hand, someone might be expecting to evaluate all the alerts later, but in any site big enough to generate 1000 alerts per hour, chances are that if they slip through immediately, it will be too late to do anything once the alerts are finally reviewed.

      Ultimately, the best security system is one that's designed specifically for your application, that uses a range of different techniques (hence the term distributed), where the limitations of each method are well known, and where there is at least one other method to cover the gap if something gets beaten. One man's perfect security system might be overkill for someone else, and an insecure joke to another.

      Just my $0.02
  • The real problem (Score:2, Insightful)

    by joneshenry ( 9497 )
    Schneier's new approach, according to the article, is to rely on "intelligent, trained" and "well paid" people instead of blind trust in technology. Throughout the article Schneier repeatedly attempts to cut through official explanations to reveal their foolishness by examining root causes. Yet in the end Schneier can say nothing about the causes of the real root problem, at least in the US - almost no one is willing to pay for these people or give them the freedom to do their jobs in the best way they see fit. Unless security is handled by well-paid people from top to bottom, there will be no real security.

    Bad technology that takes away human initiative is used in the US because the good people are too expensive and the cheap people are not reliable. Besides, there is a perpetual labor surplus, especially of people who will work for cheap, due to basically unrestricted immigration. And since so many of the immigrants come from non-Western-European countries, there will never be mass public support for paying them higher wages. Those are the facts that limit the effectiveness of security in the US, or the effectiveness of many other things.

    There is an incredible article in this month's The Weekly Standard, Patio Man and the Sprawl People [weeklystandard.com]. David Brooks' insight into the American psyche is that the American approach to problems is to move away, especially to move away from people who are different, and to move to a community of similar people. Where people stay rooted, such as in the South, there is open conflict. Where people move to new communities, such as the suburbs, there can be a facade of acceptance - until too many of the different people start to move in.

    In recent years I have noticed an increasing chorus in the media extolling the virtues of Europe, its peacefulness, its openness. I feel a small nagging doubt, similar to when I heard praise for Japan's system in the early 80s. In the case of Japan, the Sony headed by Akio Morita is not the Sony of today, and in the case of Europe, it does not seem to be headed in the direction of the one long-lasting democracy on that continent - Switzerland. The vaunted EU hardly submits every question of importance, such as the Euro, to referendum, unlike Switzerland. And even more worrisome, the direction of Europe over the past century has been the continuous fissioning of countries, instead of Switzerland's keeping itself together despite native populations speaking at least four different languages. Europe essentially murdered or expelled much of its Jewish population, it has not solved the Roma problem, and now it is struggling with Muslim immigration.

    Even when European countries stay intact all is not well. Is not Italy's problem between north and south the same as the United States'?

    Almost all conflict in the past couple of centuries can be summarized as the painful transition from agricultural serfdom to industrial society. A successful modern nation needs to pull off two incredible reformations, while most can't manage one. First, agricultural serfdom has to be reformed so that small farmers own their land. Switzerland accomplished land reform in the 1800s; Japan had land reform imposed on it by General Douglas MacArthur during the Occupation, because it was the only way to prevent a Communist insurrection. Once the land is put in the hands of a land-owning small-farmer class, there is no danger of revolution. Sadly, nations such as Russia have not accomplished even this one step over the past two centuries. Second, and perhaps paradoxically, the populace must in large part move to the cities, and the power of the rural areas over the government must be diminished, for the rural areas tend to be more conservative and less willing to support reform.

    Needless to say the vast majority of the nations on this planet have not successfully reformed themselves, twice. Thus there is an endless supply of refugees and endless labor surplus. Security remains far off and elusive.

    • In recent years I have noticed an increasing chorus in the media extolling the virtues of Europe, its peacefulness, its openness. I feel a small nagging doubt

      One of the key differentiators between the US and the EU is that the US has a far lower population density. And because of the conquest and genocide of the indigenous population, much of the land in the US was wide open and available for colonisation. As your referenced article points out, this led to the emergence of an "avoidance" strategy for handling social development in the US: just up stakes and move west, young man.

      For the most part, Europeans don't have this luxury. The social networks that bind European societies are more complex and tightly knit than American ones. It's related to how the sociologist Norbert Elias [google.com] describes social interdependencies and the mannered society. European manners have evolved to handle large groups of sometimes wildly divergent peoples and cultures that must live intermingled with each other.
  • Like everyone else is saying, this article is well worth the read.

    I am working to start up a business involved with computer systems and security - both on the software/hardware level and in general building security. This has given me some great insights, and I'll certainly look to read more of his work.

    It is interesting: he has confirmed something that I have considered an immutable law - that no matter how failsafe a system is, it will always fail. This is proven again and again throughout history, and there is no reason for us to expect it to stop. There is no perfect government, no completely secure castle, no perfect human - failure WILL occur, so plan for it.

    This article also serves as a good reminder to get back to reality - there is a digital world, but it exists in a real world. Security cannot be automated, and never will be. When a new technology emerges, so will the ability to defeat it. We must remember the human factor in everything.
  • by Styx ( 15057 ) on Tuesday August 13, 2002 @09:22AM (#4061240) Homepage

    Schneier et al. just released a paper [counterpane.com] about a PGP/GPG vulnerability [theregister.co.uk]. This vulnerability relies on the PGP user not being paranoid and doing something that's not too smart.

    So, once again, you're only as secure as the weakest link, which is often the user...

    • I agree. All the technology in the world won't make anything secure if the people using it aren't security-conscious.

      What good are passwords when they're on a post-it note, taped to the monitor?

      Forget that PGP vulnerability. How many people would accept and use a fake public key without checking its validity first?

      How many people put passwords on something if it's optional?
      How many people use "password" as their password?
      How many CEOs use "password" as their password?
      And have that taped on a post-it note to the side of their monitor?

      My father didn't like the idea of having to use a real password on his *PAYPAL* account. He wanted to use his username for the password!

      Maybe it's time we took all this money we're spending on crypto R&D and spent it on basic security courses for our users.
  • by kpharmer ( 452893 )
    So, security benefits from a strategy in which it fails gracefully, and is best implemented in small, easily manageable pieces?

    And security also benefits from a reliance upon complex (human) intelligence instead of simplistic boolean concepts of success/fail?

    Hmmm, doesn't that sound like just about every other kind of system in the world? Whether we're talking about how to build elegant systems that fail gracefully, or how to build systems that deliver what you want rather than what is easy, there are examples all around us.

    However, if we look farther ahead, we will see another set of problems. For example, a reliance upon humans to evaluate system performance (whether the system is a security system or a telephone network) is expensive and also unreliable. One of the next steps is SPC (statistical process control) - providing tools to help the humans automate much of the drudgery of looking through gazillions of bytes of low-level information.
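
    As a minimal sketch of that SPC idea (a Shewhart-style control chart with 3-sigma limits; the metric, the numbers, and the class name are all invented):

    import java.util.Arrays;

    // Minimal Shewhart control chart: flag any measurement more than
    // three standard deviations from the mean of a baseline window,
    // so humans only review the flagged points. Numbers are made up.
    class ControlChart {
        final double mean, sigma;

        ControlChart(double[] baseline) {
            mean = Arrays.stream(baseline).average().orElse(0.0);
            double var = Arrays.stream(baseline)
                               .map(x -> (x - mean) * (x - mean))
                               .average().orElse(0.0);
            sigma = Math.sqrt(var);
        }

        // True if the observation falls outside the 3-sigma control limits.
        boolean outOfControl(double x) {
            return Math.abs(x - mean) > 3.0 * sigma;
        }

        public static void main(String[] args) {
            double[] failuresPerHour = {4, 5, 3, 6, 4, 5, 4};
            ControlChart chart = new ControlChart(failuresPerHour);
            System.out.println(chart.outOfControl(40.0)); // true: show a human
            System.out.println(chart.outOfControl(5.0));  // false: routine noise
        }
    }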
  • The Cliff's Notes version reads as follows:

    "... the most critical aspect of a security measure is not how well it works but how well it fails."

    "... security measures must avoid being subject to single points of failure.... once hackers bypass the firewall, the whole system is often open for exploitation.... Finally, and most important, decisions need to be made by people at close range -- and the responsibility needs to be given explicitly to people, not computers."

    "... security schemes should be designed to maximize ductility, whereas they often maximize strength."

    "... Secrecy, in other words, is a prime cause of brittleness -- and therefore something likely to make a system prone to catastrophic collapse. Conversely, openness provides ductility."

    "... brittleness is an inherent property of airline security."

    "... Smart cards would not have stopped the terrorists who attacked the World Trade Center and the Pentagon.... their intentions, not their identities, were the issue."

    "[Good Security]'s most important components are almost always human."

    "A typical corporate network suffers a serious security breach four to six times a year.... A typical corporate network is hit by such doorknob-rattling several times an hour."

    "... murderous adversaries are exactly why we should ensure that new security measures actually make American life safer."

    "One key to the success of digital revamping will be a little-mentioned, even prosaic feature: training the users not to circumvent secure systems."

    "... technology can't save you ... You always build the system around people."
  • OK, it's off-topic, but the article does start out with it. Can't even find a non-chain restaurant for lunch in Silicon Valley? Where was he? Must have been over on 280 near the airport or something, because just about anywhere else you can find real food if you go more than a block or two off the freeway. Some of it's boring, but it's certainly around.
  • Wow, it's the most interesting article I've read this summer. I really suggest that people who are interested in security take the time to read it entirely; it's well worth it.

    GFK's
