Security

Inside The Twisted Mind of Bruce Schneier

I Don't Believe in Imaginary Property writes "Bruce Schneier has an essay on the mind of security professionals like himself, and why it's something that can't easily be taught. Many people simply don't see security threats or the potential ways in which things can be abused, because they don't intend to abuse them. But security pros, even those who don't abuse what they find, have a different way of looking at things. They always try to figure out all the angles and how someone could beat the system. In one of his examples, Bruce talks about how, after buying one of Uncle Milton's Ant Farms, he was enamored with the idea that the company would mail a tube of live ants to any address you gave them. Schneier's article was inspired by a University of Washington course in which the professor is attempting to teach the 'security mindset.' Students taking the course have been encouraged to post security reviews on a class blog."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by mattpalmer1086 ( 707360 ) on Friday March 21, 2008 @08:15AM (#22817710)
    Symmetric crypto easier than public key? Are you kidding? Public key is based on simple one-way math functions. It's easy to prove it's secure (under certain assumptions about hard problems staying hard, like discrete logarithms or factoring large numbers). If the maths is solid, you've got a good encryption algorithm. If the single hard maths problem isn't cracked, you're safe. Job done.

    I could probably invent a reasonable public key algorithm with a maths textbook to hand - but there's no way I could invent a good symmetric crypto algorithm. Symmetric crypto relies on scrambling things up in a way that can't easily be unscrambled. You have to know a *lot* about cryptanalysis to even begin designing one, and you can still turn out to be vulnerable to a surprise attack. There is no general way of mathematically proving that the way you're doing the scrambling is secure - only that it resists all the attacks known so far.
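    To make the contrast concrete, here is a toy Python sketch of textbook RSA (tiny primes, no padding; purely illustrative and not from the article or the comment): the whole scheme leans on one hard problem, factoring n back into p and q.

        from math import gcd

        def make_keypair(p, q, e=65537):
            """Textbook RSA keypair from two primes p and q (toy only)."""
            n = p * q
            phi = (p - 1) * (q - 1)   # Euler's totient of n = p*q
            assert gcd(e, phi) == 1   # e must be invertible mod phi(n)
            d = pow(e, -1, phi)       # private exponent (Python 3.8+)
            return (n, e), (n, d)     # public key, private key

        def encrypt(pub, m):
            n, e = pub
            return pow(m, e, n)       # c = m^e mod n: easy for anyone

        def decrypt(priv, c):
            n, d = priv
            return pow(c, d, n)       # m = c^d mod n: easy only knowing d

        pub, priv = make_keypair(61, 53)   # factoring n = 3233 breaks it
        assert decrypt(priv, encrypt(pub, 42)) == 42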
  • by grassy_knoll ( 412409 ) on Friday March 21, 2008 @09:47AM (#22818500) Homepage
    It seems that when many people consider risk, they weigh only the possibility of something happening, not its probability.

    Consider the National Safety Council's Odds of Dying [nsc.org] page. According to them, one has a 1 in 73,085 chance of dying in a motorcycle accident, while there's a 1 in 19,216 chance of dying in a motor vehicle accident as a car occupant.

    However, motorcycles are perceived (at least by people I know, obviously a small sample) as riskier because "people die riding those". Obviously that does happen, but not to the same extent as people dying in car accidents.

    Since many people drive every day, driving is a routine activity they don't seem to associate with risk; the average person doesn't seem to rate its probability very high, even though it's statistically more dangerous.
  • by sdaemon ( 25357 ) on Friday March 21, 2008 @10:20AM (#22818882)
    Actually, a one-time pad is an excellently secure symmetric cipher, the strength of which is dependent only upon the randomness of the pad (and the mechanism for distributing copies of the pad to the various parties who require it).

    You have to distribute copies of a secure symmetric key anyway. Distributing copies of an OTP is no different.
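    For what it's worth, the whole mechanism fits in a few lines of Python (an illustrative sketch only): XOR with a pad that is truly random, at least as long as the message, and never reused; all the real difficulty is in generating and distributing that pad.

        import secrets

        def otp_xor(data: bytes, pad: bytes) -> bytes:
            """Encrypt or decrypt: XOR is its own inverse."""
            assert len(pad) >= len(data), "pad must cover the whole message"
            return bytes(b ^ p for b, p in zip(data, pad))

        msg = b"attack at dawn"
        pad = secrets.token_bytes(len(msg))  # both parties need this pad
        assert otp_xor(otp_xor(msg, pad), pad) == msg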
  • by Anonymous Coward on Friday March 21, 2008 @10:36AM (#22819088)
    You do realise that that's pretty much what Bruce is saying, too, don't you?

    Or, well, I guess you don't. Did you RTFA? For that matter, do you read Bruce's blog? I'm not saying you need to do either - it's fine if you don't - but you shouldn't pass judgement on him if you haven't.
  • by uberdilligaff ( 988232 ) on Friday March 21, 2008 @11:19AM (#22819632)
    Nobody factors large primes, because they're, well, prime. What public key crypto depends on is factoring a large number into its two prime factors. That's what you meant, and that is still a computationally difficult problem.
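    The asymmetry is easy to see in a toy Python sketch (illustrative sizes only): multiplying p and q is one line, while recovering them by trial division takes on the order of sqrt(n) steps, which is hopeless at the hundreds of digits used in real keys.

        def factor_semiprime(n: int):
            """Recover p, q from n = p*q by trial division (toy sizes)."""
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return d, n // d
                d += 1
            raise ValueError("n has no nontrivial factors")

        p, q = 61, 53
        n = p * q                                # multiplying: instant
        assert factor_semiprime(n) == (53, 61)   # factoring: ~sqrt(n) steps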
  • by Violet Null ( 452694 ) on Friday March 21, 2008 @12:18PM (#22820488)
    It doesn't matter how many people die of something. What matters is the percentage of people who do it that die.

    Saying "jumping off the top of a building with piano wire wrapped around your neck" is much, much safer than being a passenger in a care because, hey, your chances of dying that way are only 1 in 492,593,129. That number just tells you how often death happened while doing that; without the vital piece of information about how many times it was attempted without dying, you don't really know anything of interest.
  • by Ungrounded Lightning ( 62228 ) on Friday March 21, 2008 @07:22PM (#22825066) Journal
    It's not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail.

    You've got to be kidding. Maybe it's not natural for most software "engineers", but I bet it's pretty natural for engineers in general.

    Indeed.

    I was tempted to take issue with Bruce on that point. After I cut my programming teeth in classified research, I built a career in automobile automation engineering. I was ALWAYS looking at all the things that could go wrong, through bugs, mischance, or malice, and ensuring that they were properly handled.

    And this was expected. And it was honored, and the extra time required was paid for gladly. (I happened to be better at it than most, but everybody at least tried.)

    But then I got out to Silicon Valley. And here I discovered that there was another, and much lower, standard of reliability when it came to software. The contrast was almost painful. (One of my colleagues once remarked that I was the only guy he would trust to program his pacemaker. B-) )

    And I finally realized what it was:

    In Detroit, virtually ANY hunk of software can be life-critical. A bug in the idle speed control might result in a year's production of cars that tend to stall a car's length into the intersection when accelerating from a stop sign. A bug in the factory energy management system might shut off the lights in the plant while the machines are still moving and the workers are within inches of them. A bug in the alarm on the annealing oven's flame curtain could let the plant fill with hot gas loaded with carbon monoxide, then blow the roof into the next county. A bug in the airbag tester program could fire the bag with the worker inside the test cell trying to hook it up. (To name four that I actually worked on.)

    In Silicon Valley, on the other hand, there is massive pressure to get the product out marginally ahead of the competitors. And there are business models that turn product bugs into revenue streams via maintenance services and updates. (I finally switched to "the hard side of the force" - chip design - where a million dollars of nonrecurring expenses per bug-fix chip spin and months of lead time promote the same sort of attention to perfection that I was used to.)

    Bruce lives and works out here in the land of fruits, nuts, flakes - and software engineers who treat bugs as features and ship dates as the holy grail. So perhaps most of his experience is with "software engineers" of that sort, leading him to make an overgeneralization.

    (Then again, while I apply that sort of thinking to my software and now hardware work, Bruce lives it 24/7. So perhaps he has a point. B-) )
