Do We Need Regular IT Security Fire Drills?

An anonymous reader writes: This article argues that organizations need to move beyond focusing purely on preventing security incidents and start concentrating on what they will do when an incident occurs. IT security "fire drills," supported by executive management, should be conducted regularly so that organizations understand the appropriate course of action in advance of a security breach. This includes recovering evidence, identifying and resolving the root cause of the incident (not just the symptoms), and undertaking a forensic investigation.
  • Pro- vs Re- (Score:2, Insightful)

    by hel1xx ( 2468044 )
    I see no issue with being proactive vs. reactive. No sense in shutting the barn door after all the horses have run out.
    • Re: (Score:3, Interesting)

      by epyT-R ( 613989 )

      I've seen several departments that made reactive approaches a policy. Proactive employees were criticized and repeat offenders let go. I don't get it at all. It costs more money and makes more work and stress. Who wants to keep patching the same problem over and over?

      • by Lumpy ( 12016 )

        Why does that happen? Very, very low-IQ managers and executives.

        Any place that is reactive only needs to be outed so others can be warned away.

        • by epyT-R ( 613989 )

          Well, one of the places I'm thinking of was bought out years ago. It doesn't exist anymore.

          • by Lumpy ( 12016 )

            We can only pray that the management there was fired and not promoted into the new company.

    • by Anonymous Coward

      I worked in a multi-national Fortune 500 corp. People from headquarters would regularly drop in on local IT rooms unannounced, unplug a server, and say to the local manager "Bang! Your server is dead. What do you do now?" then evaluate the manager's actions.

      • Re: (Score:3, Funny)

        by Anonymous Coward

        Call the police, have the goon arrested, then walk over and plug the server back in. Easy as lyin'.

      • by Z00L00K ( 682162 )

        Continue working as if nothing happened because the server is mirrored in three copies on different sites, then bring it up again.

        • by dbIII ( 701233 )
          Reach for the tapes or other offline storage, in case the other servers are mirrors of damaged garbage (as happened at a web hosting place near me that had a mirror but no backups). The same goes for snapshots: nice most of the time, but if the machine has been taken over by someone, those snapshots could be gone or changed.
          IMHO a backup is not a backup unless there is something preventing you from immediately changing it - preferably an air gap of some sort.
      • "There's no back-up, I quit, you're screwed".

        • by pnutjam ( 523990 )
          Yeah, that sign on the server that says "WFYF"? That means you picked the wrong server. It stands for "We're Fucked, You're Fired".
    • by gl4ss ( 559668 )

      well but if your "proactive" is doing a fake reactive to the point of doing a "forensics investigation"*... then you're just playing games.

      *imagine doing a fake murder investigation at work and invading everyone's privacy in the process, the way a real investigation would...

      • Provide everyone with a fake backstory first.

        Fun and team building for the whole office crew, and training in deductive thinking and the general process of securing evidence for the IT crew.

      • well but if your "proactive" is doing a fake reactive to the point of doing a "forensics investigation"... then you're just playing games.

        When your proactive penetration testing finds a vulnerability, or one of your vendors issues a critical patch, follow through as if it were for real.

    • by AK Marc ( 707885 )
      Yes, no sense shutting the door after all the horses have run out. But no sense getting horses if you don't have a door. I've seen things more stupid than that in IT (and elsewhere).
    • Lots of sense in shutting the barn door after only half of your horses have run out. Probably still enough sense in shutting it as long as more than one horse is still in.

      And DEFINITELY more sense in shutting it immediately and not wasting any time by counting horses first.

  • Write one, test it, maintain it. Otherwise by the time you realize you need one it's too late.
    • by plopez ( 54068 )

      Seriously, this is IT 101. I am used to having drills every 6 months.

  • Well, I do have my instant pop-up Blame Finger ready. (Careful, don't confuse those things with the Commute Finger.)

  • by Anonymous Coward

    If you ever worked in IT, not management, you would know that finding the "root cause" is most times a wild goose chase. Do you think doctors besides House ever find a "root cause"? No, you recognize the symptoms and fix accordingly. I post this at the same time Slashdot just gave me a 503 error; please tell me the "root cause". Your current "server instability" is not the answer management is looking for in this case.

  • by phantomfive ( 622387 ) on Monday January 12, 2015 @07:54PM (#48798751) Journal

    This includes recovering evidence, identifying and resolving the root cause of the incident (not just the symptoms), and undertaking a forensic investigation.

    That is not a skill set most IT departments have.

    • by Livius ( 318358 ) on Monday January 12, 2015 @08:13PM (#48798907)

      That is not a skill set most IT departments have.

      I think that's the point.

      • Wouldn't every company just do what they love to do best in situations like these? Just outsource it to someone else?
    • by Lumpy ( 12016 ) on Monday January 12, 2015 @08:30PM (#48799055) Homepage

      90% of all IT departments can be driven batshit crazy by installing a simple light timer on a router or switch and hiding it in the rat's nest of power and other cables. Set the timer to "anti-burglar" mode, where it adds randomness, and have it drop power to a piece of gear for only 10 minutes once a day, because by the time they get to the network closet, it will be back on and running.

      It will drive them nuts, and it will take MONTHS for them to find it; I bet you they replace the router/switch before they find the timer. Bonus points if you make a decoy cable so that the timer is in the center of the cable, hidden in the power tray, and both ends look factory-standard IEC.

    • by plover ( 150551 )

      This includes recovering evidence, identifying and resolving the root cause of the incident (not just the symptoms), and undertaking a forensic investigation.

      This message brought to you by the Unemployed Computer Forensics Investigators Institute, Placement Counselor's division

      That is not a skill set most IT departments have.

      I highlighted the space between the lines. HTH

    • by bill_mcgonigle ( 4333 ) * on Monday January 12, 2015 @08:59PM (#48799261) Homepage Journal

      That is not a skill set most IT departments have.

      Many IT departments don't even have enough skill overage to deal with one guy being sick, much less have excess expert capacity.

      Back in the 90's I watched a big medical center show the door to the guy who maintained the disaster recovery plan. He was "a cost center and never produced anything that anybody used."

      That's about the timeframe when professional IT ended in the general population. Or maybe it's just when the general population got an IT staff.

      • lol. Yeop.

        I work at a large org that still has a history of what might have been professional IT.

        Today, it results in project managers running around asking who can fill out this disaster recovery document. Anyone? Anyone?

        And it gets filled in somehow but no one really knows anything.

  • Answer.... (Score:5, Insightful)

    by bobbied ( 2522392 ) on Monday January 12, 2015 @07:57PM (#48798765)

    Yes.... a million times YES

    The "Be Prepared" motto isn't just for Boy Scouts, and it is not just about having what you need at hand, it's also about KNOWING what to do and being mentally prepared to do it quickly when required.

    • by tlhIngan ( 30335 )

      The "Be Prepared" motto isn't just for Boy Scouts, and it is not just about having what you need at hand, it's also about KNOWING what to do and being mentally prepared to do it quickly when required.

      And documenting it all. Don't forget that.

      And better yet, running it regularly lets you make sure the documentation is up to date (oh, the server is gone, it's been replaced by the new server and you need these new steps).

      It's also good for figuring out what you don't know - you don't know what you don't know.

  • The only reason we do fire drills is that they're mandated by law. Every business I know of is trying to cut IT costs. There's no way in hell this idea would fly. It's always cheaper to pick up the pieces, as long as you don't really care [youtube.com] about the damage.
  • next question?

  • Nope (Score:5, Interesting)

    by sexconker ( 1179573 ) on Monday January 12, 2015 @08:15PM (#48798931)

    Just like real fire drills, they're pretty pointless and no one takes them seriously because there's no fire.
    So you either have a fruitless exercise that costs money because of all the interruptions, or you have a semi-fruitful exercise that costs a lot of money because of the extended interruptions caused by trying to simulate a real event.

    The latter will marginally improve the response to an actual incident. Neither will fly, because they cost money and aren't mandated by law.

    • In the UK, fire drills *are* mandated by law:

      > You should carry out at least one fire drill per year and record the results. You must keep the results as part of your fire safety and evacuation plan.

      https://www.gov.uk/workplace-f... [www.gov.uk]

      I completely agree with your other points.
      • by BVis ( 267028 )

        A government regulation requiring a company to do something? Socialism! Communism! Totalitarian oppression! Kenya! Benghazi! Birth certificate! Secret gay marriage! Cold dead hands!

        (all of the previous have been seriously argued by certain elements in the American Right.)

      • That's my fucking point. We do fire drills because they are required. And we do the bare fucking minimum, making them useless.
        An "IT Security Fire Drill" will never be done until it is mandated by law. And when it is, we will do the bare fucking minimum, making them useless.

  • Yes, there should be. It would mean IT security and correct procedures would be much more likely to be followed. It would also raise the profile of IT within the organisation. Too often IT is treated like the red-headed stepchild janitor, until it hits the fan.
    • by BVis ( 267028 )

      It would mean IT security and correct procedures would be much more likely to be followed.

      What are the consequences for not following correct procedures at any time? Basically none. IT policy is considered a list of suggestions at most companies.

      It would also raise the profile of IT within the organisation.

      As an IT worker, you don't want a high profile. The tall nail gets hammered down. You don't want to be easily visible when it's time to pick a scapegoat. An IT department is doing its job when nobody notices it.

      • It would mean IT security and correct procedures would be much more likely to be followed.

        What are the consequences for not following correct procedures at any time? Basically none.

        Seriously? Major problems happen; this is backed up by your own points below.

        IT policy is considered a list of suggestions at most companies.

        This is part of the problem.

        It would also raise the profile of IT within the organisation.

        As an IT worker, you don't want a high profile. The tall nail gets hammered down. You do

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday January 12, 2015 @08:25PM (#48799025) Journal
    We don't need 'fire drills', we need Cold War style 'bend over and kiss your ass goodbye' drills. Unfortunately, I don't know of anyone, or any technique, that prevents drills from turning into impromptu coffee breaks within a couple of rounds. People sharp enough to be worth drilling just aren't fooled, and the dumb ones aren't much use. Unless IT security gets real, non-drill, respect, what's the point?

    Any moron can point at a production environment and say "yeah, we could be doing that; but users and/or management would punch us." And this isn't even referring to esoteric stuff; I'm talking about boring, included-by-default stuff like software restriction policies (make sure that user-writeable locations and executable locations are a disjoint set and watch most trivial drive-by and phishing attacks melt away...). Until we get to at least that level, why fuck around?
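
    A minimal sketch of that "disjoint sets" check, in Python, assuming plain POSIX permissions. The ALLOW_EXEC list is a hypothetical execution allow-list invented for the example; the idea is just to walk the locations code is allowed to run from and flag anything an ordinary user could also write to, since that overlap is exactly what a software restriction policy is supposed to eliminate.

      #!/usr/bin/env python3
      """Rough audit sketch: flag group/world-writeable paths inside directories
      that a (hypothetical) execution allow-list permits code to run from."""
      import os
      import stat
      import sys

      # Hypothetical allow-list: places the restriction policy lets code execute from.
      ALLOW_EXEC = ["/usr/bin", "/usr/local/bin", "/opt/approved-apps"]

      def loosely_writeable(path):
          """True if group or other users have write permission on the path."""
          try:
              mode = os.lstat(path).st_mode
          except OSError:
              return False
          return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

      def audit(roots):
          findings = []
          for root in roots:
              for dirpath, dirnames, filenames in os.walk(root):
                  for name in dirnames + filenames:
                      full = os.path.join(dirpath, name)
                      if loosely_writeable(full):
                          findings.append(full)
          return findings

      if __name__ == "__main__":
          hits = audit(ALLOW_EXEC)
          for path in hits:
              print("writeable inside an exec-allowed tree:", path)
          sys.exit(1 if hits else 0)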
    • by pla ( 258480 )
      Unless IT security gets real, non drill, respect, what's the point?

      IT security won't get real respect until they actually know more than the people they annoy with their (literally) useless rules.

      When you have some moron with a CISSP telling people who write network protocol stacks for a living what browsers they can use (this week), do you really expect to see a lot of "respect" flowing in that direction?

      Modern InfoSec amounts to little more than snake-oil. AV vendors have admitted that their produ
      • Re:Hopelesss (Score:5, Insightful)

        by fuzzyfuzzyfungus ( 1223518 ) on Monday January 12, 2015 @10:22PM (#48799663) Journal
        Arguably (on a systemic level, not on the level of how wonderful your current IT guy isn't) 'IT' being something that attracts actual talent qualifies as 'non drill respect'.

        As long as "IT" means 're-image the desktops and reboot the mailserver when it needs it, monkey!', you aren't exactly going to get the IT people whose prowess impresses you. On the plus side, you'll save money. On the minus side, it's going to be a bloodbath if you get unlucky in terms of hostile attention.

        So long as 'IT' is handled as a cost-center, necessary-evil bunch of obstructionist ethernet janitors, that's how it'll be. On the plus side, modern technology is actually pretty easy to use, so if nothing atypically bad happens you can get away with some fairly dubious expertise at the wheel, and save accordingly; but if that's the philosophy at work you probably won't end up with an IT group capable of rising very far to the occasion should things go to hell (either because something that shouldn't have been complex went bad, or because lizard squad is on you).

        What is unclear, at present, is how, culturally and financially, any but the most zealously paranoid and deep-pocketed companies and state entities are going to have IT groups that are good for much more than the bare minimum. So long as you don't expect IT to be much better than a bunch of fuckups, there really isn't any reason to pay more or recruit more carefully (doing day-to-day IT is really more logistics and a little scripting than anything even remotely approaching CS or even code monkeying); but if that is how IT groups are recruited, no sane person will expect better of them, because why would they be capable of better?

        (Please note, I freely acknowledge, as an institution's IT person, that I'd be up shit creek if something genuinely nontrivial came gunning for me. I'm a hell of a lot cheaper than a real expert, I have good rapport with the users, strong command of standard logistics and management tools, things go nice and smooth; but I'm hardly a guru, nor do I expect to be treated as one. However, that's why I'm skeptical about this 'drill' thing. If you want to know that We Are Fucked if things get serious, I can tell you that for free (though we do have backup tapes, and I am perfectly capable of restoring, were the hypothetical attack to stop); but if you aren't interested in doing anything that might actually make you less fucked; because that'd cost a whole lot more, upset users, or both, what's the drill for? Perhaps there are organizations that actually live in ignorance, believing that they have hardcore experts willing to do routine IT stuff at relatively low prices; but those are likely a delusional minority. Everyone else just knows that having a bulletproof IT team would be an eye-watering outlay (that would spend most of its time twiddling its thumbs and swapping the occasional toner cartridge until something actually happens), while having an adequate-for-daily-use IT team is markedly cheaper and you can always claim that you 'followed industry best practices' if something goes pear-shaped.)
        • but if you aren't interested in doing anything that might actually make you less fucked; because that'd cost a whole lot more, upset users, or both, what's the drill for?

          That's a very good point.
          A separate issue is bare-metal restore drills for things with complex procedures, but that's a once-per-person, per-type-of-complex-system issue rather than a regular drill. If, in three years' time, the next version of whatever has a few differences, that's probably not enough to have to rerun the "drill".

        • Everyone else just knows that having a bulletproof IT team would be an eye-watering outlay (that would spend most of its time twiddling its thumbs and swapping the occasional toner cartridge until something actually happens), while having an adequate-for-daily-use IT team is markedly cheaper and you can always claim that you 'followed industry best practices' if something goes pear-shaped.)

          The same reason that small and medium businesses don't have full time lawyers, but aren't totally fucked if they do get into a scrape with the law: You find a good one, start a working relationship, and keep them on retainer for a fraction of the cost of hiring them to work full time when you only need them three days a year. Security/risk firms, that will do everything from forensics to auditing to physical penetration testing and "fire drills", are out there. Find one you like, give them a contract to

        • If you want to know that We Are Fucked if things get serious, I can tell you that for free (though we do have backup tapes, and I am perfectly capable of restoring, were the hypothetical attack to stop); but if you aren't interested in doing anything that might actually make you less fucked; because that'd cost a whole lot more, upset users, or both, what's the drill for?

          Yeah, that's kind of my first thought. I've been doing this IT thing for a while, and I think doing an occasional fire drill is great. But the fire drill itself costs money, and there's no point in doing it if you're not committed to fixing the problems you've found. So if you do a test restore to make sure your backups can be restored successfully, that's great. But if you find your backups don't restore successfully, are you willing to put in whatever time and money are required to fix those problems,

  • If your information security department isn't investigating issues and possible incidents on the regular, they probably aren't doing any monitoring of any kind.
  • I think it would make a ton of sense for every organization to do a DR "drill" periodically where they attempt to actually use their DR plan (restore a group of servers, reload a switch configuration, etc).

    This just seems like a sensible part of that.

    What worries me, though, is how they will know when to actually implement a security plan and deal with the consequences. A lot of security breaches are subtle, and you don't know they've happened or at least not always with a definitive sign like a defacement page, etc.

    • by RLaager ( 200280 )

      At least with DR, the key is to exercise the plan as part of routine maintenance. That is, fail over to the backup (server/site/whatever), work on the primary, fail back. Since this provides immediate value, it'll actually get done. And since people do it regularly, they remember how to do it.
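
      Purely as an illustration of that "routine failover" idea, here is a pre-failover sanity check sketched in Python. The health URL, the JSON fields and the lag threshold are invented placeholders, not anything from the comment; the point is that a scripted go/no-go check makes the routine exercise cheap enough to actually run.

        #!/usr/bin/env python3
        """Illustrative pre-failover check for a routine DR exercise.
        Endpoint, response fields and threshold are hypothetical placeholders."""
        import json
        import sys
        import urllib.request

        STANDBY_HEALTH_URL = "http://standby.example.internal:8080/health"  # placeholder
        MAX_REPLICATION_LAG_SECONDS = 30  # placeholder threshold

        def standby_ready(url, max_lag):
            """Return True if the standby responds and reports acceptable replication lag."""
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    body = json.load(resp)
            except (OSError, ValueError) as exc:
                print("standby unreachable or unreadable:", exc)
                return False
            lag = float(body.get("replication_lag_seconds", float("inf")))
            print("standby status=%s, lag=%ss" % (body.get("status"), lag))
            return body.get("status") == "ok" and lag <= max_lag

        if __name__ == "__main__":
            if standby_ready(STANDBY_HEALTH_URL, MAX_REPLICATION_LAG_SECONDS):
                print("safe to proceed with the planned failover step")
                sys.exit(0)
            print("do NOT fail over; fix the standby first")
            sys.exit(1)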

    • I think it would make a ton of sense for every organization to do a DR "drill" periodically where they attempt to actually use their DR plan (restore a group of servers, reload a switch configuration, etc).

      This just seems like a sensible part of that.

      What worries me, though, is how they will know when to actually implement a security plan and deal with the consequences. A lot of security breaches are subtle, and you don't know they've happened or at least not always with a definitive sign like a defacement page, etc.

      I would assume a "real" security response would be something akin to putting a lot of resources "in lockdown" -- shutting down servers, cutting network links, etc., which could have major business consequences. I can see where uncertainty about a breach and hesitancy to isolate key systems (perhaps necessary to contain a breach) could lead to a real clusterfuck.

      I think a key part of developing the plan is deciding when you know there is a real breach and making sure that the responses are well-known ahead of time to avoid a lot of head-scratching and internal conflict.

      Treat it just like a DR exercise. The first phase would be confirming the breadth and depth of the incident. Your IDS goes off, or a department reports some missing/vandalized files, or notices some logs with audit warnings that are out of place, and raises the red flag. Next, you need to gather forensic information from every last piece of equipment in your entire organization, quickly, and move it to a sterile location. Whether that is possible or not will determine your ability to move forward strate

      • by frisket ( 149522 )

        Everyone's talking about DR saying that a server has mysteriously gone offline or some disk has gotten corrupted and we need to restore to the last known backup point.

        No-one seems to be thinking of a real disaster: 50' tidal surge, earthquake, or a fire destroying the entire IT setup.

        Backups? Onto what, pray?
        Use the cloud? There is no connectivity here.
        Rig some borrowed PCs? Powered by what, exactly?

        Unless you have a duplicate datacenter a long way away from your personal Ground Zero, no amount of drill on earth is going to prepare you for a real disaster. You'll be too busy shooting the guys who have come to take your food and fuel.

        • Everyone's talking about DR saying that a server has mysteriously gone offline or some disk has gotten corrupted and we need to restore to the last known backup point.

          No-one seems to be thinking of a real disaster: 50' tidal surge, earthquake, or a fire destroying the entire IT setup.

          Backups? Onto what, pray?
          Use the cloud? There is no connectivity here.
          Rig some borrowed PCs? Powered by what, exactly?

          Unless you have a duplicate datacenter a long way away from your personal Ground Zero, no amount of drill on earth is going to prepare you for a real disaster. You'll be too busy shooting the guys who have come to take your food and fuel.

          You make a good point, but indeed most medium-sized and up orgs do keep some sort of hot-spare facility at a distance, whether it's a privately owned building, colocation space, or cloud service. Traditional localized disasters (5 alarm blaze, earthquake, tornado, etc) are planned and drilled for, sometimes specifically down to which disaster has struck. If the entire eastern seaboard gets wiped out by a "real disaster", chances are your customers aren't going to be keen on getting online anyway, and ever

          • by swb ( 14022 )

            I sure run into a lot of medium sized organizations that do nothing of the sort.

            Most talk about it but when they see the price tag they get cold feet. The "better" ones will do some kind of off site setup, but it's often done with old equipment retired from production and some kind of copying/replication from the production site with little or no solid plan on how to actually bring up the remote site in a way that's useful.

            The ones that seem the best off are the ones running VMware SRM.

  • by Enry ( 630 )

    Not much more to be said about it. The staff will know how to react when there are real problems, rather than searching for passwords and documentation for some system they haven't touched in 6 months.

  • TFA is utterly void. I suspect it was written by a bot.
  • by raymorris ( 2726007 ) on Monday January 12, 2015 @09:33PM (#48799447) Journal

    Just a friendly reminder - test your backups TODAY.
    The MAJORITY of home and small business backups don't actually work when you try to restore. Often, it quit backing up 18 months ago and nobody noticed.

    Disaster recovery is part of security, so that's one security drill. To handle an intrusion, often the best course of action is to unplug the network cable and call your expert. Do not power down the machine. Do not delete anything. Do not try to fix it. Just unplug the network and call the guy. That shouldn't be hard, but it is hard if you don't know who to call. If you're shopping for somebody during a panic, you'll likely pay too much for somebody who isn't as expert as you'd like. So find your expert ahead of time and you're most of the way there.
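
    For the "test your backups" point, here is a minimal restore-test sketch in Python. The archive location, naming scheme and source path are assumptions made up for the example, and on a live tree some mismatches are just normal churn since the last backup; the real finding is whether the newest archive restores at all.

      #!/usr/bin/env python3
      """Minimal restore test: unpack the newest backup archive into a scratch
      directory and compare checksums against the live tree. Paths are hypothetical."""
      import glob
      import hashlib
      import os
      import tarfile
      import tempfile

      BACKUP_GLOB = "/backups/nightly-*.tar.gz"  # hypothetical archive location
      SOURCE_DIR = "/srv/data"                   # hypothetical data being protected

      def sha256(path):
          digest = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      def latest_backup():
          archives = sorted(glob.glob(BACKUP_GLOB))
          if not archives:
              raise SystemExit("no backup archives found - that is finding number one")
          return archives[-1]

      def verify(archive, source_dir):
          """Restore into a temp dir (never over live data) and diff against the source.
          Assumes the archive stores paths relative to SOURCE_DIR."""
          mismatches = []
          with tempfile.TemporaryDirectory() as scratch:
              with tarfile.open(archive) as tar:
                  tar.extractall(scratch)
              for dirpath, _, filenames in os.walk(scratch):
                  for name in filenames:
                      restored = os.path.join(dirpath, name)
                      rel = os.path.relpath(restored, scratch)
                      live = os.path.join(source_dir, rel)
                      if not os.path.exists(live) or sha256(restored) != sha256(live):
                          mismatches.append(rel)
          return mismatches

      if __name__ == "__main__":
          bad = verify(latest_backup(), SOURCE_DIR)
          print("restore test %s: %d files differ from the live tree"
                % ("needs attention" if bad else "completed cleanly", len(bad)))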

  • At a minimum, you should have a DR plan, you should periodically review your DR plan, and you should from time to time actually test your DR plan. There's a zillion other things you can do above and beyond that -- but many an organization has had their DR plan utterly fail them in the face of a real emergency because nobody took it seriously.

    No boom today; boom tomorrow. Always boom tomorrow. Plan for it, and you might come out of it fine. Don't, and you could be screwed.

    If the executive fails to under

  • by GoodNewsJimDotCom ( 2244874 ) on Monday January 12, 2015 @10:13PM (#48799617)
    They come in, test security via social engineering like if someone falls for phishing or whatnot. Then they educate based on what failed.

    I interviewed with a firm once and said, "Hey, maybe people don't even know they need your security product. How about sending phishing emails to all the companies you might want to work for :P" He got a laugh, and he said something like, "The window salesman doesn't go around throwing rocks through people's windows to stir up some business." I don't think the analogy is applicable, but my marketing suggestion was mostly a joke anyway.
    • by dbIII ( 701233 )

      They come in, test security via social engineering like if someone falls for phishing or whatnot

      Why bother with testing the social engineering angle at all? Have enough people in the place and somebody is going to fail. It's best to assume that some idiot will click on a link, IE will "helpfully" run it, and everything that user can connect to is potentially compromised.

    • by jaseuk ( 217780 )

      Nah, most penetration testers / ITHC etc. are more interested in breaches of confidentiality and integrity. I've never known a standard test to deal with availability. You certainly don't need those sorts of firms to help you test out your BC and DR plans.

      Insurers are quite keen on this stuff. Both on how you'd deal with lowering the risks (e.g. fire alarms, gas suppression, UPS etc.) as well as your plans in place for any recovery efforts. A lack of planning and preparation would push the costs up astronom

    • by dave420 ( 699308 )
      Judging by the illogical nature of the website in your sig I doubt anyone should take anything you say without a rather large pinch of salt... I know you probably think of having that site as spreading the word of the lord, but it just makes you look a bit loopy to those of us who think god is made up, which makes everything you write in the vicinity of that link sound far less credible...
  • Quick, cut the Internet connection! Ok, restore the connection, drill is done.
  • Does your company also do real-life physical security "fire drills," supported by executive management and conducted regularly, in order to understand the appropriate course of action in advance of a physical security breach? Including recovering evidence, identifying and resolving the root cause of the incident (not just the symptoms), and undertaking a forensic investigation?

    No? Then perhaps you don't need to do IT security fire drills for the same reason.

  • IT departments get plenty of field testing.
  • ... Very simply, either have someone in your IT department or an outside consultant hack your system or compromise it in some way.

    Then task the department to deal with it.

    Let us say your fake attacker gets hold of some admin passwords, or slips a remote access program through your security; something like that. Then task the department to solve the problem and then make the system harder to compromise.

    Ultimately what needs to happen is that systems need to be compartmentalized so that the compromising

  • by cwills ( 200262 ) on Tuesday January 13, 2015 @12:15AM (#48800075)

    What you described is nothing more than a full security / disaster recovery audit. If your data center (and management) is really serious about it, the company will need to invest both time and money to protect itself.

    • Create your security policies. This has to be directed from a management level that can put teeth into it, as well as people who understand what the real risks to the business are. Company lawyers and people with business continuity experience might be involved depending on the consequences of what a data breach or disaster might do to the business.
      • determine what risks your business has
      • determine what needs to be done to mitigate the identified risks
      • determine what needs to be logged in order to allow forensic analysis (assume that the compromised system(s) logs themselves may have been corrupted as part of the breach)
    • Make sure that the policies do not break the business. Also realize that security policies may require some processes to change.
    • Understand that implementing security policies can be expensive.
    • Employee education is a necessary step. Make sure employees understand what is being asked of them, and make sure that they understand what the policies are.
    • Ensure that you have a designated security focal point.
    • You will probably need an exception process. Make sure that any exceptions are documented with management, including what is being done to mitigate any risks the exception has exposed and how long the exception needs to be in place (a minimal tracking sketch appears at the end of this comment).

    Once you have your policies in place and everyone has "signed off" that they are in compliance, you can start with the auditing.

    • Have some level of auditing where it's a "friendly" review of the systems.
    • Audits should not instill fear; however, there may need to be real consequences for negligent audit failures (depending on the business and type of data).
    • Depending on the business, you may want to have an independent auditing group come in and review your systems and policies.
    • During an audit, system or process owners should only be held accountable for what is in the security policies. If the audit finds issues that are outside the policies, then management and the policy owner need to respond.

    One additional comment, depending on the size of the organization, there may be a security group. If there is one, then it should be the responsibility of this group to perform any security monitoring or testing. Individuals outside the group should not be performing their own security or intrusion testing of systems that they are not directly responsible for. If a vulnerability is uncovered, it should be documented and reported to the security focal point and management.
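
    As a toy illustration of the exception-tracking bullet above (the file name, columns and date format are invented for the example, not part of the comment): a small script that reads a register of documented exceptions and flags any whose agreed end date has passed.

      #!/usr/bin/env python3
      """Toy exception-register check: flag security policy exceptions whose
      agreed expiry date has passed. File name and columns are hypothetical."""
      import csv
      import sys
      from datetime import date

      # Expected columns: system, owner, mitigation, expires (YYYY-MM-DD)
      REGISTER = "security_exceptions.csv"

      def overdue_exceptions(path):
          overdue = []
          with open(path, newline="") as f:
              for row in csv.DictReader(f):
                  expires = date.fromisoformat(row["expires"])
                  if expires < date.today():
                      overdue.append((row["system"], row["owner"], expires))
          return overdue

      if __name__ == "__main__":
          late = overdue_exceptions(REGISTER)
          for system, owner, expires in late:
              print("exception for %s (owner: %s) expired on %s" % (system, owner, expires))
          sys.exit(1 if late else 0)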

  • I thought they would be awesome until I realized what they were. Mostly a way to show off to higher-ups. The bulk of them end up being about showing off pretty charts and dashboards, no matter how useless those charts are. How you can make these work is to tell your staff that management will be hiring a pen test sometime in the next six months, but they won't get any more detail. This allows you to test your staff while making them be more on their toes in case a real attack happens.
    • by BVis ( 267028 )

      Mostly a way to show off to higher ups.

      Or, once you expose the atrocious security (non)behavior of the "higher ups", and forget to leave that out of the report, you get fired.

  • 1. Find out which salesman caused it
    2. Fire them
  • http://www.vthreat.com/ [vthreat.com] was founded by Marcus Carey, accelerated by http://www.mach37.com/ [mach37.com], and recently funded to provide "IT fire drills" to organizations. I'd say if you can get funded and launch a product, it's an important thing to be doing. At the very least, have some tabletop exercises where you or others ask some what-ifs, then take the answers, or lack thereof, and fix them, and do it again.

  • Speaking from the viewpoint of someone high up in the echelon of LARGE U.S. corporation I.T.: middle management on up through top management are NOT even slightly interested in data retention, primarily because it would incriminate them. AND I know this because I've had that conversation WITH upper management.

  • What they do is prevent changes in security controls until the deadline, then panic! Add in a serious vulnerability that needs to be patched too, for good measure. Of course, this could just be bad management as well.

    Of course policies and procedures should be developed and tested. Otherwise, it's crap. Seen it my entire life. Untested code/procedures don't work.

"Being against torture ought to be sort of a multipartisan thing." -- Karl Lehenbauer, as amended by Jeff Daiell, a Libertarian

Working...