The Six Dumbest Ideas in Computer Security

Frater 219 writes "The IT industry spends a huge amount of money on security -- and yet worms, spyware, and other relatively mindless attacks are still able to create massive havoc. Why? Marcus Ranum suggests that we've all been spending far too much time and effort on provably ineffective security measures. It may come as a surprise that anti-virus software, penetration testing, and user education are three of 'The Six Dumbest Ideas in Computer Security.'"
  • zerg (Score:5, Funny)

    by Lord Omlette ( 124579 ) on Sunday September 11, 2005 @04:45PM (#13533623) Homepage
    Unless they ban the movie Hackers and eradicate all copies of it everywhere, they're not gonna make hacking uncool...
    • Re:zerg (Score:5, Funny)

      by H_Fisher ( 808597 ) <[h_v_fisher] [at] [yahoo.com]> on Sunday September 11, 2005 @04:56PM (#13533692)
      Forget the computer-security angle; I would suggest this be done as a humanitarian action.

      Come to think of it, why the hell isn't the UN trying to do this already? Won't somebody PLEASE think of the children?

    • Re:zerg (Score:5, Interesting)

      by Kymermosst ( 33885 ) on Sunday September 11, 2005 @04:59PM (#13533705) Journal
      Unless they ban the movie Hackers and eradicate all copies of it everywhere, they're not gonna make hacking uncool...

      Don't forget Sneakers, which was way cooler (IMNSHO) than Hackers.

      • Re:zerg (Score:3, Insightful)

        No, no. Before Sneakers there was "War Games."

        Cool by default because it was a movie about hacking before the world at large even knew about hacking (and phreaking, and blue boxes...)

        • Re:zerg (Score:3, Insightful)

          No, no. Before Sneakers there was "War Games."

          Cool by default because it was a movie about hacking before the world at large even knew about hacking (and phreaking, and blue boxes...)

          Not to mention the fact that, unlike so many other movies about hacking, War Games involved actual research on the system being targeted on the part of the main character in the movie. Sure, most of the research was done as a montage because otherwise it's boring, but it was strongly implied that he spent weeks trying to

    • Re:zerg (Score:3, Informative)

      by chrysrobyn ( 106763 )
      Unless they ban the movie Hackers and eradicate all copies of it everywhere, they're not gonna make hacking uncool.

      There were precisely two cool things about Hackers [imdb.com].

      1. Angelina Jolie.

      2. Airbrushed keyboards.

      Sneakers [imdb.com], on the other hand, Hollywoodified an already absurd idea.

  • by a_greer2005 ( 863926 ) on Sunday September 11, 2005 @04:47PM (#13533631)
    is the unpatched laptops that are fine while in the cocoon of the company LAN/WAN/VPN, but are all too often connected directly to the net by workers who take them home, or by road warriors who get on the net the second they hit the hotel room.

    These people pick up the crap and then bring it into the cocoon, thus negating the hundreds of thousands of dollars of security infrastructure.

    • by Johnny Mnemonic ( 176043 ) <mdinsmore@nosPAM.gmail.com> on Sunday September 11, 2005 @05:39PM (#13533896) Homepage Journal
      We give our users Mac laptops, which largely corrects this issue.
    • by mr_z_beeblebrox ( 591077 ) on Sunday September 11, 2005 @09:24PM (#13534921) Journal
      thus negating the hundreds of thousands of dollars of security infrastructure

      They didn't negate it. The stateful firewall still stopped traffic at its border, etc. What they did was expose the lack of hours spent planning the security. Here is what I do, and you are free to do it, improve it, or ignore it (that makes it free). In my company, every network jack that does not have a directly attached device on it is plugged into a bank of switches that are separated from my network by a PIX firewall. The firewall has rules that allow basic e-mail, web, and specific application data to go across. Most traffic is denied. If anyone plugs a laptop in, they are able to do those things but are unable to do Windows file sharing, domain login, etc. If they need to use those, I have to be given control of the box, and it does not leave the building.
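
      In spirit, that rule set is just a short allow list with a default deny at the bottom. A toy sketch of the fall-through (in Python; the ports are illustrative, not the actual PIX config):

          # Toy model of the jack-bank policy: a few services allowed out,
          # everything else (file sharing, domain logons, ...) denied.
          ALLOWED_PORTS = {25, 80, 110, 443}    # SMTP, web, POP3, HTTPS

          def permit(dst_port):
              return dst_port in ALLOWED_PORTS  # no match, no traffic

          assert permit(80)          # web works from a guest laptop
          assert not permit(445)     # Windows file sharing stays blocked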
    • Actually, if his points were implemented properly on those laptops, then they wouldn't be capable of being carriers of infection.

      As well, any network that can get completely owned by a road warrior is inherently brittle. It needs more defense in depth.
  • by Anonymous Coward on Sunday September 11, 2005 @04:47PM (#13533636)
    What are some of the dumbest security *policies* you've encountered?

    I worked at a firm where we had to change our passwords every week, and the password had to 1) be exactly 14 characters and 2) be ~60% different from the previous four passwords.

    The result was of course that almost every user had their passwords on post-it notes.
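
    For the curious, a "~60% different" check is easy to sketch (Python; the similarity metric and threshold here are assumptions, not the firm's actual rule):

        # Reject a new password that is too similar to any of the last four.
        from difflib import SequenceMatcher

        def acceptable(new, history, max_similarity=0.4):
            if len(new) != 14:                          # rule 1: exactly 14 chars
                return False
            return all(SequenceMatcher(None, new, old).ratio() <= max_similarity
                       for old in history[-4:])         # rule 2: ~60% different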
    • by nunchux ( 869574 ) on Sunday September 11, 2005 @05:28PM (#13533845)
      Five years or so ago I did freelance work for a short-lived "online greeting card company" (shut up, I know.) Basically I'd go to a control panel to get an order, adjust the proof in the Flash template and send it back. I had absolutely no access to any other part of the site, the databases, not even the customer's contact info (much less credit card #'s.)

      I still had to change my password every two weeks, with conditions similar to what you describe -- IIRC ten or more characters, a mix of numbers and letters, and it had to be substantially different from the one before. I eventually got a system down for remembering what it was, but I'll be the first to admit I was using my Mac's "stickies" to keep track of the password for the first six months. Considering they were dealing primarily with graphic designers, not programmers, I can only imagine what some of the other employees were doing. Since they also weren't the easiest employers to deal with, I suspect the lack of give-a-shit factor kept many employees from trying too hard to keep that ever-changing password a closely guarded secret. Let me stress that the damage that could be done if my password was compromised was completely negligible -- maybe someone could have inserted a dirty message in a greeting card, but it still had another check to go through before it went online!

      Basically my point is, there's a point where security for security's sake is an annoyance. I'm certainly not an expert in these matters, but IMO making low-level users jump through hoops just fosters ill will; better to lock down their privileges in the first place and make sure no damage can be done if that account is compromised. Frequently changing admin passwords is of course another matter, but that's part of the responsibility that comes with the job.

    • That is actually not too bad, unless you have a webcam pointing at the sticky note. The point is that someone on the other side of the globe cannot see your sticky notes, and cannot easily crack a 14-character password either, while locally you probably have some form of physical security -- you do lock the door, right?
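
      The arithmetic backs that up. A quick sketch (assuming a truly random draw from the ~95 printable ASCII characters):

          # Keyspace of a random 14-character password over printable ASCII.
          keyspace = 95 ** 14
          print(f"{keyspace:.2e}")   # ~4.88e+27 guesses to exhaust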
    • I worked at a firm where we had to change our passwords every week, and the password had to 1) be exactly 14 characters and 2) be ~60% different from the previous four passwords.

      For real effectiveness, though, you have to implement this the way we have it at work -- every webapp, from travel reservations to sexual harassment training, has a different account with a different login name and a mandatory strong, rotated password.

    • by Haeleth ( 414428 ) on Sunday September 11, 2005 @05:42PM (#13533913) Journal
      I worked at a firm where we had to change our passwords every week, and the password had to 1) be exactly 14 characters and 2) be ~60% different from the previous four passwords.

      Man, you had it easy. My current place uses iris scans for authentication. We have to swap out our eyeballs every 30 days, and our new eyes can't be the same colour as the last pair.
    • The password policy at that firm sucks, but writing passwords on post-it notes isn't such a bad idea. Consider these two different policies:

      A. Users are allowed to use simple passwords that they can easily remember, such as 'password' or 'abc123'. These users don't have to write their passwords down to be able to remember them.

      B. A user with a complex password writes it on a post-it note because they don't stand a chance in hell of remembering it.

      If user B is also requested to take the simple step of placing the
    • by Timbotronic ( 717458 ) on Sunday September 11, 2005 @10:16PM (#13535156)
      I taught a programming course at an Australian government department where they had a "no unauthorised software" policy. Unfortunately, the language I was teaching wasn't on their list, so they wouldn't allow me to install it on the training room computers that weren't even connected to the office network!

      Needless to say, the course was less than effective. It illustrates what should be the seventh dumbest idea -- "Security policies have no effect on productivity". The amount of grief caused to companies by rigid, pedantic security nazis is astounding.

      • MOD PARENT UP (Score:3, Insightful)

        That is really a point worth considering. There are many Dilbert cartoons that use it as a punchline, but I never paused to think that ALL security measures have a negative productivity aspect (not necessarily net negative, but there is always something negative). Perhaps a standard part of any security procedure should be to list the negative aspects, because I think people are too idealistic, as in "Hey! Let's change passwords every day!"
  • by kcbrown ( 7426 ) <slashdot@sysexperts.com> on Sunday September 11, 2005 @04:48PM (#13533641)
    Viruses occur because the foundation of the system the users are using isn't secure. The same is true (perhaps to a somewhat lesser degree) of worms.

    To illustrate, ask yourself this question: why do most corporate computer users have permissions on their computer to download and execute arbitrary programs?

    Now, it should be noted that even Linux gives the average user this capability. But that needn't be so.

    Antivirus programs are a band-aid, not a solution. But most people treat them as a solution, and therein lies the problem.

    If you really want to take care of security issues, you have to do so at the foundation.

  • by snuf23 ( 182335 ) on Sunday September 11, 2005 @04:48PM (#13533644)
    Yeah, I'm taking all my anti-virus software off the computers right now. I don't know why I ever thought it was useful anyway. It's more efficient to deal with the infections as they come in than it is to try to prevent them.
    I'm gonna stop using condoms too while I'm at it.

    • by JourneyExpertApe ( 906162 ) on Sunday September 11, 2005 @05:10PM (#13533756)
      I'm gonna stop using condoms too while I'm at it.

      What does making water balloons have to do with preventing a computer infection? I don't get it.
    • I did remove all of my anti-virus software and moved to code with many fewer vulnerabilities. Is it perfectly secure? No, but it is closer than any other OS I can use. All I can say is 'Thanks, Theo'. Running OpenBSD does have costs for me, since it's harder to access some multimedia and Java applets, and I am running slightly older packages than I would if I were running FreeBSD or many Linux distros.

      I also stopped using condoms, since I limit my activities to my wife. I'm also free from those sorts of

    • Incidentally, I have never run anti-virus software, and yet I have never had a virus. And I run Windows. And, yes, I would know if I had a virus; I regularly help other people remove viruses from their systems. Of course, the people I help typically are running AV software; little good that did them.

      If you're careful about what you install, stay away from Kazaa and warez, and keep an eye on your windows\currentversion\run registry entries, and for god's sake do not open file attachments, you can stay saf
  • Highly applicable (Score:5, Informative)

    by gunpowda ( 825571 ) on Sunday September 11, 2005 @04:51PM (#13533662)
    The Internet has given a whole new form of elbow-room to the badly socialized borderline personality.

    Woah, he's not talking about Slashdot?

  • by TelJanin ( 784836 ) on Sunday September 11, 2005 @04:51PM (#13533666)
    In #4, "Hacking is Cool", he obviously means "cracker." Also, the last part of that section says that security professionals should not know how to crack. Bullshit. If you don't know how exploits are used, how can you block them? How can you write a secure program if you don't know what a buffer overflow is?
    • by TLLOTS ( 827806 ) on Sunday September 11, 2005 @05:12PM (#13533763)
      I think you misunderstood his point with #4. My understanding is that time spent learning how to hack into a system with xyz could be better spent simply learning good security practices (such as how to prevent a buffer overflow). Rather than spending the rest of your life learning about each new exploit, you focus on why those exploits are occurring and fix them at the source, rather than simply patching forever.
    • In #4, "Hacking is Cool", he obviously means "cracker."

      There's little point fighting battles that you can't win, unless you mean to make an example in your loss. In this case, you can't possibly win and there's no example to make (except perhaps that language evolves - big deal); I'd suggest saving your effort for something you *can* make a difference to.
  • Poor Article (Score:5, Interesting)

    by hoka ( 880785 ) on Sunday September 11, 2005 @04:53PM (#13533674)
    The article really fails to address any real issue with security. What it really read like was something more along the lines of "Six Things Dumb Management Sometimes Does in Relation to Computer Security". The real problem with technical computer security is the poor quality of software (software designed without security, or without enough security in mind) and the general lack of system protection (NoExec memory, stack smashing protection/active bounds checking, targeted/strict ACLs, etc.). The damage worms/viruses/hackers can cause on a much stricter system is far less than on a normal system, if penetration can even be achieved in the first place.
    • Re:Poor Article (Score:5, Insightful)

      by X.25 ( 255792 ) on Sunday September 11, 2005 @07:24PM (#13534369)
      The article really fails to address any real issue with security. What the article really read like was something more along the lines of, "Six Things Dumb Management Sometimes Do In Relations to Computer Security".

      I guess that people who comment like this have never done any serious security work in their life.

      If you had, you'd acknowledge all the points (plus the extras) easily...
  • On my webservers... (Score:5, Interesting)

    by Space cowboy ( 13680 ) * on Sunday September 11, 2005 @04:54PM (#13533678) Journal

    I patch PHP to set a constant in the namespace of the script whenever a 'dangerous' function is called (e.g. system(), shell_exec(), the backtick operator, and others :-). The webserver also prepends (php.ini: auto_prepend_file) a PHP file that registers a shutdown hook. Those constants can then be examined in the shutdown-hook code to see if any of the dangerous functions have been called, and if so, to check whether *this* script is allowed to call them.

    If the script is allowed to call the functions, all well and good, it's just logged. If not, the offending IP address is automatically firewalled. I purloined some scripts [ibm.com] from the 'net that allow shell-level access to manipulate the firewall.
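
    For anyone who wants the flavor of it without reading PHP, here's a rough Python analogue of the shutdown-hook idea (the wrapped call, allowlist, script path, and IP are illustrative stand-ins, not the actual patch):

        # Wrapped "dangerous" calls set a flag; an exit hook compares the
        # flags against a per-script allowlist and firewalls the client
        # on a violation. The iptables call needs root.
        import atexit, os, subprocess

        CALLED = set()
        ALLOWED = {"/var/www/report.php": {"system"}}

        def flag(name, fn):
            def wrapper(*args, **kwargs):
                CALLED.add(name)                  # the "constant" gets set
                return fn(*args, **kwargs)
            return wrapper

        system = flag("system", os.system)        # stand-in for system()

        def audit(script, client_ip):
            if CALLED - ALLOWED.get(script, set()):
                subprocess.run(["iptables", "-I", "INPUT",
                                "-s", client_ip, "-j", "DROP"])

        atexit.register(audit, "/var/www/report.php", "203.0.113.9")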

    So, now I had a different problem -- the webserver wasn't running anywhere near the privilege needed to alter the firewall, and I didn't want to just run it under sudo in case anyone broke in. I wrote a setuid program (in Java for the bounds-checking, compiled with gcj) that takes a command string to run, an MD5-like digest of the command, and a set of areas to ignore within the command when checking the digest. The number of areas is encoded into the digest to prevent extra areas being added. If the digest doesn't match, the program doesn't run. This is a bit more secure than sudo because it places controls over exactly what can be in the arguments, as well as what command can be run. It's not possible to append ' | my_hack' as a shell injection.
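
    The digest check itself is simple; in Python it would look roughly like this (MD5 and the masking mirror the description above; the shared secret is a placeholder):

        # Verify a command against a keyed digest before running it.
        # Variable regions are masked out, and the span count is mixed
        # into the digest so extra "ignore" areas can't be smuggled in.
        import hashlib, shlex, subprocess

        SECRET = b"placeholder-shared-secret"

        def verify_and_run(cmd, digest_hex, ignore_spans):
            masked = bytearray(cmd.encode())
            for start, end in ignore_spans:
                masked[start:end] = b"\x00" * (end - start)
            h = hashlib.md5(SECRET + bytes(masked) +
                            str(len(ignore_spans)).encode())
            if h.hexdigest() != digest_hex:
                raise PermissionError("digest mismatch; not running")
            subprocess.run(shlex.split(cmd), check=True)   # no shell, no ' | my_hack'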

    So, now if by some as-yet-unknown method you can write your own scripts on my server (it has happened before [sigh]), you're immediately firewalled after the first attempt -- which typically is *not* 'rm -rf /' :-) Perl and Python are both unavailable to the webserver uid, so PHP is pretty much the obvious attack vector.

    Well, PHP and SQL injection, of course, but the same check is used there -- if the variables being sent to the page are odd in some way (typically I look for spaces after urldecoding them as a first step; SQL tends to have spaces in it :-), then the firewall is called on again. It's all logged, and the site owners get to see when and why an IP is blocked. Sometimes it's even highlighted problems in their HTML :-)

    What would be nice would be a register within a PHP script that simply identified which functions were called. In the meantime, this works well for me...

    Just thought I'd share, because it's similar to what the author is saying regarding only trusting what you know to work, and everything else gets the kick (squeaky wheel-like :-)

    Simon

  • DRM (Score:4, Interesting)

    by Kelerain ( 577551 ) <avc_mapmaster AT hotmail DOT com> on Sunday September 11, 2005 @04:54PM (#13533683)
    That's a nice list, but they didn't include Digital Rights Management [dashes.com]? The link is to a Cory Doctorow talk (given to Microsoft) that explains and argues these points:
    1. That DRM systems don't work
    2. That DRM systems are bad for society
    3. That DRM systems are bad for business
    4. That DRM systems are bad for artists
    5. That DRM is a bad business-move for MSFT
    A very good read if you are in the position of explaining this to someone in a position to mandate DRM.
  • DailyDave (Score:3, Interesting)

    by tiny69 ( 34486 ) on Sunday September 11, 2005 @04:55PM (#13533686) Homepage Journal
    There's already been some entertainment over Marcus's article on the DailyDave. Dave Aitel doesn't agree with Marcus.

    http://lists.immunitysec.com/pipermail/dailydave/2005-September/002347.html [immunitysec.com]

    Dave's "Exactly 500 word essay on "Why hacking is cool, so that Marcus changes his web site"." [immunitysec.com]http://lists.immunitysec.com/pipermail/dailydave/2 005-September/002366.html [immunitysec.com]

  • by suitepotato ( 863945 ) on Sunday September 11, 2005 @04:57PM (#13533699)
    is the permit-by-default tendency. This is like having a fence that springs out of the ground only when certain people are sensed approaching it. It needs to be up and topped with barbed wire, and the only gate needs to be locked until someone is given a key to it. NAT routers are like that: they only forward inbound traffic when you bother telling them to, and until then they sit there stupid, making you wonder why your new SSH installation won't talk to the outside world.

    OTOH, it is a colossal pain in the arse to deny all traffic and only allow what you want, because so much code is network-aware these days and designed to talk to some place across the net. Then again, it does tell you which apps are communicating in the first place.

    On my Windows boxes I use Sygate Personal Firewall to create a specific list of allowed executables and block everything else with a block all entry at the bottom of the fall-through list. No match, no talk. Inbound and out. Combined with NAT it makes for very little traffic reaching my internal network. When I leave my desk for the night and Windows is running, remove a few check marks and save and it only allows the file sharing app to talk and I keep that updated and locked down at all times.
    It can also be set to approve or deny execution of code that may have changed since the last allow/deny challenge.
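
    The logic behind such a list is nothing more than a first-match fall-through with a catch-all at the bottom. A minimal sketch (Python; the paths are invented):

        # Default-deny fall-through in the spirit of the Sygate list:
        # first matching rule wins; the catch-all blocks everything else.
        RULES = [
            (r"C:\Program Files\Firefox\firefox.exe", "allow"),
            (r"C:\Program Files\Mail\mail.exe",       "allow"),
            ("*",                                     "deny"),  # block-all entry
        ]

        def decide(exe_path):
            for pattern, action in RULES:
                if pattern == "*" or exe_path == pattern:
                    return action
            return "deny"                                      # no match, no talk

        print(decide(r"C:\temp\dodgy.exe"))                    # -> deny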

    That which is not forbidden is not only not compulsory, but probably suspicious.
  • by Anonymous Coward on Sunday September 11, 2005 @05:09PM (#13533749)
    1) Restrictive password composition policies

    Password must be 10+ characters in length, contain upper and lower case letters, 3 numbers and 2 special characters.

    Result:

    Users keep their passwords on post-it notes stuck to their monitors.

    2) Constant password expiration

    Passwords expire every 3 months. New passwords cannot resemble old passwords.

    Result:

    Users keep their passwords on post-it notes stuck to their monitors.

  • by Quirk ( 36086 ) on Sunday September 11, 2005 @05:15PM (#13533776) Homepage Journal
    "Hidden in Parker's observation is the awareness that hacking is a social problem."

    Crime as a problem of context is studied in Gregory Bateson's seminal book Mind and Nature: A Necessary Unity [amazon.com]. Bateson addresses two flaws in our court system. One is to treat a crime as something isolated and somehow measurable in penal terms. Taking a crime out of context, i.e., out of the makeup of the criminal, is blind to the forces that generate criminal actions.

    Bateson speaks of (crime) "...as not the name of an act or action; it is the name of a frame for action. ...( he suggests)... we look for integrations of behavior which a) do not define the actions which are their content; and b) do not obey ordinary reinforcement rules." In this context he suggests play, crime and exploration fit the description. As long as we are only able to punish according to some sort of arbitrary eye for an eye method of bookkeeping we will be unable to root out crime.

    Bateson's second criticism of our judicial system addresses its adversarial nature. He writes: "adversarial systems are notoriously subject to irrelevant determinism. The relative 'strength' of the adversaries is likely to rule the decision regardless of the relative strength of their arguments."

    He further goes on to a brilliant analysis of the Pavlovian study of dogs in terms of the dog's view of the context, and how that context is violated when the dog's view of a "game" of distinction is morphed into a game of guessing, without any markers to tell the dog that the context of the game has changed. This switch in context drives neurotic and violent behaviour in the dog. I suspect much antisocial behaviour is driven by the criminal's inability to read society's context markers.

  • by Phat_Tony ( 661117 ) on Sunday September 11, 2005 @05:19PM (#13533796)
    "On my computer here I run about 15 different applications on a regular basis. There are probably another 20 or 30 installed that I use every couple of months or so. I still don't understand why operating systems are so dumb that they let any old virus or piece of spyware execute without even asking me."

    Try OSX. As of some update about a year ago, OSX stopped having "default permit" for launching applications by double-clicking. If you double-click and that leads to launching an executable that hasn't been run before, it pops up a dialog to ask you about it.

    Thus, no more executables bearing viruses disguised as documents.

    • The weak spot in this is that, for it to work, the user has to deny the executable from running. Most users don't. Especially not if the e-mail containing the executable offers some plausible explanation of why they should allow it to run -- e.g., telling them that it is an important security update from Apple.

    • by Have Blue ( 616 ) on Sunday September 11, 2005 @06:21PM (#13534057) Homepage
      Actually, the permission-to-launch dialog does not protect against malicious applications disguised as documents. If you double-click an app it will launch without question. What the dialog box defends against is an automated exploit that involves sending an application and a document to a system and then a request that the document be opened, which would launch the app before this dialog was introduced.
  • The Final Solution (Score:4, Interesting)

    by rufusdufus ( 450462 ) on Sunday September 11, 2005 @05:21PM (#13533807)
    There is a way to fix security problems on end-user machines completely.
    The solution is to keep the operating system and applications on read-only media. The end-user operating system of the future should be designed around this idea, and machines should reboot from read-only media on a regular basis; this way viruses cannot spread and worms cannot get a foothold.
    It's doable. It's feasible. It's the future, once engineers really decide to solve the problem.
  • by ColGraff ( 454761 ) <maron1&mindspring,com> on Sunday September 11, 2005 @05:58PM (#13533980) Homepage Journal
    From the article:

    "On my computer here I run about 15 different applications on a regular basis. There are probably another 20 or 30 installed that I use every couple of months or so. I still don't understand why operating systems are so dumb that they let any old virus or piece of spyware execute without even asking me."

    The author has a point here, but the answer to his question is very simple: his computer doesn't ask for permission to execute most programs because most users would absolutely panic if their computer regularly asked for their input.

    I base this on my own experience as a college tech, which is necessarily limited. That said, two points to consider:

    I have never, ever seen a student running in a non-administrator account on their Windows PC, even though XP supports this feature. This would prevent much malicious software from running, and avoids the "default permit" behavior that the article author finds so odious. However, users do *not* want to see error messages when they try to run things, nor do they want to log into a different account to install their p2p flavor of the week. They want things to "just work". So, non-administrator accounts are fantastically unpopular.

    Another example: ZoneAlarm. My school encourages students -- in fact, tells students they are *required* -- to install ZoneAlarm. So what happens? ZoneAlarm asks them if they want to let, say, AIM or Weatherbug access their network connection -- and the user freaks out. They think it's an error, that their computer is busted, etc.

    In short- desktop machines tend to be default-permit because desktop users are completely unwilling to deal with an alternative arrangement.
  • Locking down users (Score:5, Interesting)

    by slashflood ( 697891 ) <flow@howflo w . c om> on Sunday September 11, 2005 @06:16PM (#13534036) Homepage Journal
    I was working as an IT manager for a mid-sized company for a while. The main problem with "locking down users" is that nowadays there is no respect for IT administrators anymore. Especially in small and mid-sized companies, every single employee goes directly to his/her boss, or even worse to the CEO, just to complain about their "inability to work" because of the locked-down computer. "The bad admin locked down the computer and I can't work anymore!" Sure, the PHB, the CEO, and HR won't understand the difference between user and admin rights.

    I have a pretty strong personality and a thick skin, but after a while, I gave up. Even brand-new interns complained about the situation that they were not able to install their "favourite software" or about the blocked ports at the corporate firewall.

    After a while, the HR manager came to me and said that in four years, half of the employees had complained about me. Whenever I tried to change something (firewall, user rights, ...), there were another ten or twenty complaints.

    All of the users are working as administrators on their computers at home -- I know that, because most of them told me about the troubles they have with spyware and viruses -- but they would never accept lower permissions at work. The common perception is that the computer at work is actually theirs.

    The same goes for company laptops. Everyone connects them to insecure networks at home, at friends' places, in hotel rooms, at other companies, and so on, and after a business trip you have to either reinstall the machine or remove spyware/malware.

    It's just the lack of understanding, the habit of always working with admin rights at home, and the lack of respect for the job of an IT administrator/manager.
    • I appreciate the difficulty of dealing with users installing lots of software, but I have experienced the "lockdown solution" in three different organizations (two of them very large), and feel it worked poorly for me in all of them.

      Here's why:
      (1) Response times. When I made a request for installation of, or permission to install, software needed for my work responsibilities, response times ranged from 45 minutes to a couple days. 45 minutes is little enough time to find something else to do in. A period
    • by Detritus ( 11846 ) on Sunday September 11, 2005 @10:07PM (#13535128) Homepage
      What many IT admins forget is that their job is to facilitate the operations of the company, not to run the world's most secure network. You're damn right that I'm going to complain to management when I need X to do my job, and there's some pencil-neck geek in IT who, without a thought, always says "NO" to any request.
  • by pohl ( 872 ) * on Sunday September 11, 2005 @06:27PM (#13534076) Homepage
    ...the idea that the ubiquity of a system (not its design & implementation) is the greatest determining factor behind the likelihood of exploit.
  • Well said (Score:5, Interesting)

    by X.25 ( 255792 ) on Sunday September 11, 2005 @07:04PM (#13534262)
    Really good points.

    I worked in "security research" field for 10 years. I loved it.

    Then companies got involved, certifications/courses/books appeared, pentesting became a business...

    I moved to another field, for the very reasons MJR explained in his editorial.

    Everyone wanted to be "secure", but no one wanted to invest time or brains in order to achieve that goal.

    In 4 years of pentesting (and I'm talking about BIG players and companies with bright people and big budgets), I have only ONCE seen a company that actually took SERIOUS measures to improve its security. I'm not talking about adding another layer of firewalls or installing new toys, but actually redesigning their security infrastructure/thinking.

    All the others wanted signed paper which says "You are secure now".

    I ended up pointing all of them to MJR's Ultimate Firewall [ranum.com]
  • by Ichoran ( 106539 ) on Sunday September 11, 2005 @07:12PM (#13534305)
    The author may be right that the things he listed are dumb ideas for mission-critical ultra-secure systems. However, he seems to be advocating the five dumbest ideas for usable systems.

    The price of Default Deny is loss of flexibility. If it is easy to avoid denial (e.g. automatic addition to a whitelist), it's just Default Permit by another name. If it's really hard, it will keep you from doing everything except that which you already know you want to do -- in other words, nothing new, nothing clever, just the same stuff over and over. This would turn computers into the equivalent of a stereo system: they do those narrowly defined tasks that they were engineered to do, and nothing else.

    People are going to occasionally want to do something new. When they do, there are certain things that they almost certainly *don't* want to do. Thus, you enumerate badness to help protect them when they want to use their computer as a flexible general-purpose device.

    It's better to have systems that are secure by design. Duh. The point is, though, that even systems that are secure by design are likely to have flaws. If you look for flaws, and fix them, then you have a chance of staying ahead of other people who are looking for flaws to exploit them.

    The coolness of hacking has nothing to do with security. Hacking is cool because it demonstrates our ability to manipulate our environment, to do things that are supposed to be impossible through ingenuity. In a factory of mindless corporate drones, hacking is not cool. But if you live in the real world where programs have flaws, there is even a security use for people who enjoy finding ways to use the flaws to accomplish things that the creators didn't intend.

    Educating users is ridiculous -- his point is that users shouldn't have to be educated because they should already be educated before you hire them. Okay, and how did *they* get educated? What happens if you have to hire real people who are talented but haven't all gone to this magical security training school? His point *should* have been that there are only some things that can be taught, and that you shouldn't assume you can teach completely counterintuitive behavior. But you might be able to teach someone enough that they avoid clicking on strange attachments without deleting the .PNG photos sent to them by family (where .PNG was not a whitelisted attachment type, nor was email from a random gmail account).

    I don't want a secure, useless system. I want a secure, *useful* system. And that means compromises need to be made between security and usability. Reading this article gives very little clue as to how to construct a good balance.
  • Educating users... (Score:4, Insightful)

    by Skreems ( 598317 ) on Sunday September 11, 2005 @07:54PM (#13534507) Homepage
    While I agree with some of his other points, I think it's really dangerous to just give up on the idea of educating users. In the long run, no matter how secure you make the rest of your system, the user is always going to be a potential weak point -- they can disable or work around your carefully implemented "perfect security" because they NEED this ability to be able to use the system. On home systems, for example, even if you go with a white list, default deny policy, the user still has to be able to add new programs. Watch them download x fancy new shareware game, give it execute and net access permissions, and totally screw your entire careful security setup.

    To make a point using the author's own analogy... while flying on an airplane, it's basically common knowledge that you don't want to walk up to the door and pull the big silver lever. Bad things happen if you do. However, if the plane has crashed and you need to get out, that's exactly the action you want to take. We don't have fire sensors that only enable the handles if the plane cabin exceeds a certain temperature... we rely on user education to make people only use this option at the right time.

    Even the author's own solution, of scraping off all email attachments and saving them via URL, doesn't help. If someone sends out a virus and it gets saved to a remote server, the user can still copy it to their system and run it. But if the user is educated about the kinds of things that can happen when they do this, and about the dangers of running software from unknown or even partially untrusted sources...
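
    The scraping itself is trivial, which is part of the problem -- the attachment is still only a click away. A sketch with Python's email module (the storage path and URL base are made up; wiring into the mail server is omitted):

        # Strip attachments from a message; return URLs to hand out instead.
        import email, os, uuid

        STORE = "/var/mail-attachments"
        BASE_URL = "https://mail.example.com/attachments"

        def detach(raw_bytes):
            msg = email.message_from_bytes(raw_bytes)
            links = []
            for part in msg.walk():
                if part.get_content_disposition() == "attachment":
                    name = part.get_filename() or "unnamed"
                    token = uuid.uuid4().hex
                    os.makedirs(STORE, exist_ok=True)
                    with open(os.path.join(STORE, token), "wb") as f:
                        f.write(part.get_payload(decode=True) or b"")
                    links.append(f"{BASE_URL}/{token}/{name}")
            return links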
  • by BoneFlower ( 107640 ) <anniethebruce@nOSPAm.gmail.com> on Sunday September 11, 2005 @09:05PM (#13534825) Journal
    The idea that security is about technology.

    It isn't. Sure, certain engineering and design principles can help security a great deal, but when it comes down to it, security is about the human brain. If you don't run the system intelligently, it doesn't matter how well designed it is, or how well the design is implemented. You will get p0wned.

    I'd trust an all Windows 98 network without a firewall, run by someone who knows what they are doing, over an OpenBSD network locked down against everything run by my mom.
  • by russotto ( 537200 ) on Sunday September 11, 2005 @10:54PM (#13535307) Journal
    #2: Enumerating goodness.

    Guess what. You've just pretty much gone back to the dark ages. Everyone has a set of programs installed on their computer by the priesthood, and that's all they can run. Might do something about viruses. Definitely reduces the utility of the machines.

    #3: Hacking is worthless.

    Holding your adversary's skills in contempt is generally not a good idea. Refusing to learn them is just plain stupid. And, of course, hacking (even the black-hat sort the politically correct prefer to call "cracking") isn't what he says it is. Learn a particular exploit? Any script kiddie can do that. Figuring out how to identify holes and develop exploits, that's another thing entirely, and as useful for a security professional as lock-bypassing is for Medeco.

    #6: Sit on your duff and let the other guy take the lumps.

    Sure, you CAN do that. But there's reward as well as risk in adopting the new stuff. And consider that if everyone took that strategy, progress would be entirely stifled. His IT exec who waited two years to put in wireless may have saved money -- but he also had two years without wireless, which may have cost him more.
    • by Alioth ( 221270 ) <no@spam> on Monday September 12, 2005 @05:32AM (#13536350) Journal
      On enumerating goodness, in a corporate environment, that's exactly what you want: you don't want everyone to use their computer as a general do everything tool - you want them to use their computer to do the job they are supposed to do. They don't need Comet Cursor or Kazaa to do that.

      Of course, then there's the developers who (should) know what they are up to, and will need to be able to install things without having to go through the IT department for every scripting tool they need to get their job done. So you put those guys on a separate network segment, firewalled off from the rest of the office workers - so if a developer manages to clobber the network, they don't clobber the entire company.
  • by BobaFett ( 93158 ) on Monday September 12, 2005 @12:29AM (#13535634) Homepage
    The first point is entirely on the money. At least 10 years too late, but totally accurate.

    The second is just too overreaching: would you like a computer which can run 30 programs from a master list and nothing else? There are many cases where "enumerating goodness" is exactly the right thing to do, and -- guess what -- that's exactly how such cases are handled already; sudo, for example.

    The rest of the article basically boils down to this: if you don't want your system to be hacked, don't make it hackable. Sure thing. Don't debug your programs, just write them correctly. Don't install airbags in cars, just avoid crashes. Stupid us, taking all these precautions and safety measures for years. Just don't make mistakes -- see how easy it is?
  • Late but... (Score:3, Insightful)

    by burns210 ( 572621 ) <maburns@gmail.com> on Monday September 12, 2005 @01:02AM (#13535726) Homepage Journal
    So this is way late to the thread, but I will mention it anyway.

    This guy has a couple good 'no duh' points and several really stupid ones. Let me elaborate:

    #1) Default Permit

    This I agree with, in the case of firewalls in a corporate environment, where the input/output can be predetermined and controlled. Everything should be blocked except for the handful of things that need to get through.

    #2) Enumerating Badness

    This idea BLOWS for desktop applications, which is what he advocates it for. Why is it bad? Because while he only has 30 or so applications he uses, as most people do, those 30 are different for most users. You can't enumerate all legit software; it can't be done. You can enumerate most of it, but then you get a list comparable in size to the 70,000 virus signatures you are trying to leave behind. Besides, if I write my own application, my anti-virus software would need an accurate, detailed signature of what the application looks like and how it acts in order to identify and allow it... something I cannot reasonably do. Which is why we have companies create the signatures, for the (comparatively) finite number of viruses and trojans. Default Deny on a desktop, especially a personal one, is a broken, unmaintainable, BAD idea.

    Even in a corporate environment, which has more home-grown apps, you would need custom signatures for each internal app -- something not practical for an IT department to create. The idea just doesn't hold on a PC.
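
    To make the maintenance burden concrete: enumerating goodness boils down to a hash allowlist with one entry per approved build of every app, home-grown ones included (Python sketch; the hash is a placeholder):

        # "Enumerating goodness": refuse to run any binary whose hash
        # isn't on the approved list. Every legitimate build needs its
        # own entry -- which is exactly the maintenance problem above.
        import hashlib

        APPROVED_SHA256 = {
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder
        }

        def may_run(path):
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            return digest in APPROVED_SHA256   # unknown binary: default deny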

    #3) Penetrate and Patch

    His argument: if you had designed it securely, you wouldn't need to pentest it.

    Ok, but how do you know your implementation faithfully matched the design, or that your design didn't have a hole in it? Well, you have to test it... pentest it, that is.

    Yes, it is a great idea to securely design your apps, with secure-by-design principles. Afterwards, you STILL need to test it in a live environment to ensure you didn't forget or miss any steps. That is only a logical step. Pentesting even the most secure of networks is critical, to be able to PROVE they are secure. You can't just say 'because I said it was!' and expect that to fly.

    #5) Educating Users

    He contradicts himself. He says that you shouldn't have to educate users because they should already be educated... which is a chicken-and-egg problem he never admits to. You should do both: hire competent, smart people, AND train them in the policies and guidelines of their environment.

  • by featheredfrog ( 94181 ) <featheredfrog@ancientpond.com> on Monday September 12, 2005 @04:47AM (#13536263) Homepage
    There is at least one other way to improve security...

    http://www.comics.com/comics/dilbert/archive/images/dilbert2813960050912.gif [comics.com]
