Do We Need Regular IT Security Fire Drills?
An anonymous reader writes: This article argues that organizations need to move beyond focusing purely on the prevention of security incidents, and start to concentrate on what they will do when an incident occurs. IT security "fire drills," supported by executive management, should be conducted regularly in organizations, in order to understand the appropriate course of action in advance of a security breach. This includes recovering evidence, identifying and resolving the root cause of the incident (not just the symptoms), and undertaking a forensic investigation.
Pro- vs Re- (Score:2, Insightful)
Re: (Score:3, Interesting)
I've seen several departments that made reactive approaches a policy. Proactive employees were criticized and repeat offenders let go. I don't get it at all. It costs more money and makes more work and stress. Who wants to keep patching the same problem over and over?
Re: (Score:2)
Why does this happen? Very, very low-IQ managers and executives.
Any place that is reactive only needs to be outed so others can be warned away.
Re: (Score:1)
Well, one of the places I'm thinking of was bought out years ago. It doesn't exist anymore.
Re: (Score:2)
We can only pray that the management there was fired and not promoted into the new company.
Re: (Score:1)
I worked in a multi-national Fortune 500 corp. People from headquarters would regularly drop in on local IT rooms unannounced, unplug a server, and say to the local manager "Bang! Your server is dead. What do you do now?" then evaluate the manager's actions.
Re: (Score:3, Funny)
Call the police, have the goon arrested, then walk over and plug the server back in. Easy as lyin'.
Re: (Score:2)
Continue working as if nothing happened because the server is mirrored in three copies on different sites, then bring it up again.
Re: (Score:3)
IMHO a backup is not a backup unless there is something preventing you from immediately changing it - preferably an air gap of some sort.
Re: (Score:3)
"There's no back-up, I quit, you're screwed".
Re: (Score:2)
Re: (Score:2)
Well, but if your "proactive" is doing a fake reactive to the point of doing a "forensics investigation"*... then you're just playing games.
*Imagine doing a fake murder investigation at work and invading everyone's privacy in the process the way a real investigation would.
Re: (Score:2)
provide anyone with a fake backstory first.
Fun and teambuilding for the whole office crew and training in deductive thinking and the general process of securing evidence for the IT crew.
Re: (Score:3)
Well, but if your "proactive" is doing a fake reactive to the point of doing a "forensics investigation"... then you're just playing games.
When your proactive penetration testing finds a vulnerability, or one of your vendors issues a critical patch, follow through as if it were for real.
Re: (Score:2)
Re: (Score:2)
Lots of sense in shutting the barn door after only half of your horses ran out. Probably still enough sense in shutting it, as long as more than one horse is still in.
And DEFINITELY more sense in shutting it immediately and not wasting any time by counting horses first.
Incident Response Plan (Score:2)
Re: (Score:2)
Seriously, this is IT 101. I am used to having drills every 6 months.
Re: (Score:2)
Seriously, doesn't everyone have contingency plans?
When in trouble,
Or in doubt,
Run in circles,
Scream and shout.
(R. Heinlein)
Re: (Score:3)
When in danger or in doubt,
Run in circles, scream and shout.
p. 101. Herman Wouk: THE CAINE MUTINY. Garden City, NY: Doubleday & Co., Inc. 1951. (p. 120 of the 1954 Doubleday pb ed.)
Heinlein lifted a lot of things. And it seems to be even older than that, if Google can be trusted.
Re: (Score:2)
No, that's Burma Shave.
Re: (Score:2)
If your problem is 20-year-old Solaris machines, perhaps a fire drill is just what you need to demonstrate to the executive level that they need to budget for new equipment. "According to the consultant, our machines failed the disaster recovery exercise, so if we had a real problem we'd be out of business."
Or maybe they already know that, and their business plan includes a suspicious lightning strike next fiscal quarter?
Re: (Score:2)
Life in a bureaucracy (Score:1)
Well, I do have my instant pop-up Blame Finger ready. (Careful, don't confuse those things with the Commute Finger.)
Re: (Score:3)
Oh, what's that, you say that I'm also the new graphic designer, and I have deadlines for that stuff? OK, I'll get to that first...
You say that they can't print in Accounting either? And someone is having issues with their mouse, but you aren't sure who it was or what the problem was, but it needs fixed right now, and all the guys we hired three months ago had their passwor
Re: (Score:2)
Seems like no department suffers from the "make 1 person do 4 jobs" phenomenon like IT does. Oddly enough, they don't pay you four salaries.
Re: (Score:2)
No. (Score:1)
If you ever worked in IT, not management, you would know that finding the "root cause" is most times a wild goose chase. Do you think doctors besides House ever find a "root cause"? No, you recognize the symptoms and fix accordingly. I post this at the same time Slashdot just gave me a 503 error; please tell me the "root cause". Your current "server instability" is not the answer management is looking for in this case.
Re:No. (Score:4, Funny)
That reminds me of one of those classic lists of airline mechanic log entries:
"Evidence of oil leak on landing gear. Signed, Joe Pilot"
"Evidence removed. Signed, Bob Mechanic"
That's a different skill-set (Score:4, Insightful)
This includes recovering evidence, identifying and resolving the root cause of the incident (not just the symptoms), and undertaking a forensic investigation.
That is not a skill set most IT departments have.
Re:That's a different skill-set (Score:5, Insightful)
Having a plan can be "we have a contract with these guys to do this sort of work," along with all the info they need, and all the paperwork and checking required.
Re:That's a different skill-set (Score:4, Insightful)
That is not a skill set most IT departments have.
I think that's the point.
Re: (Score:2)
Re:That's a different skill-set (Score:5, Funny)
90% of all IT departments can be driven bat shit crazy by installing a simple light timer on a router or switch and hiding it in the rat's nest of power and other cables. Set the timer to "anti-burglar" mode, where it adds randomness, and have it drop power to a piece of gear for only 10 minutes once a day, because by the time they get to the network closet 10 minutes later, it will be back on and running.
It will drive them nuts and it will take MONTHS for them to find it; bet you they replace the router/switch before they find the timer. Bonus points if you make a decoy cable so that the timer is in the center of the cable, hidden in the power tray, and both ends look factory-standard IEC.
Re: (Score:2)
Re: (Score:2, Funny)
Re: (Score:2)
If your do-it-yourself skills are a little weak, the annoyatron [amazon.com]
Re: (Score:2)
This includes recovering evidence, identifying and resolving the root cause of the incident (not just the symptoms), and undertaking a forensic investigation.
This message brought to you by the Unemployed Computer Forensics Investigators Institute, Placement Counselor's division
That is not a skill set most IT departments have.
I highlighted the space between the lines. HTH
Re:That's a different skill-set (Score:4, Informative)
That is not a skill set most IT departments have.
Many IT departments don't even have enough skill overage to deal with one guy being sick, much less have excess expert capacity.
Back in the 90's I watched a big medical center show the door to the guy who maintained the disaster recovery plan. He was "a cost center and never produced anything that anybody used."
That's about the timeframe when professional IT ended in the general population. Or maybe it's just when the general population got an IT staff.
Re: (Score:2)
lol. Yeop.
I work at a large org that still has a history of what might have been professional IT.
Today, it results in project managers running around asking who can fill out this disaster recovery document? Anyone? Anyone?
And it gets filled in somehow but no one really knows anything.
Re: (Score:2)
As late as 2000, we were still using an Apple ][e to log and print velocities in a test rig.
Answer.... (Score:5, Insightful)
Yes.... a million times YES
The "Be Prepared" motto isn't just for Boy Scouts, and it is not just about having what you need at hand, it's also about KNOWING what to do and being mentally prepared to do it quickly when required.
Re: (Score:2)
And documenting it all. Don't forget that.
And better yet, running it regularly lets you make sure the documentation is up to date (oh, the server is gone, it's been replaced by the new server and you need these new steps).
It's also good about figuring out what you don't know - you don't know what you don't know
Good luck with that (Score:2)
Re: (Score:2)
Well is that so bad? If an end user is not doing what they're supposed to, and it impacts security, they should be fired.
*wakes up*
Oh, you mean IT folks getting fired for the consequences of a bad management decision made without their input. What was I thinking, firing people for gross violations of IT policy allows the terrorists to win.
End users:
Got PCI scope on a machine, and put the password on a sticky note in a public area? Oh well, everyone makes mistakes.
Stored PII on a laptop that is subsequent
Yes. (Score:2)
next question?
Nope (Score:5, Interesting)
Just like real fire drills, they're pretty pointless and no one takes them seriously because there's no fire.
So you either have a fruitless exercise that costs money because of all the interruptions, or you have a semi-fruitful exercise that costs a lot of money because of the extended interruptions caused by trying to simulate a real event.
The latter will marginally improve the response to an actual incident. Neither will fly, because they cost money and aren't mandated by law.
Re: (Score:2)
Re: (Score:2)
A government regulation requiring a company to do something? Socialism! Communism! Totalitarian oppression! Kenya! Benghazi! Birth certificate! Secret gay marriage! Cold dead hands!
(all of the previous have been seriously argued by certain elements in the American Right.)
Re: (Score:2)
That's my fucking point. We do fire drills because they are required. And we do the bare fucking minimum, making them useless.
An "IT Security Fire Drill" will never be done until it is mandated by law. And when it is, we will do the bare fucking minimum, making them useless.
Re: (Score:2)
The point of fire drills is to test if your evacuation procedures are fast enough. If people somehow get out faster in a real fire, well, good for them.
Which brings us to this article's proposal... what's the point?
Re: (Score:2)
The point of fire drills is to test if your evacuation procedures are fast enough. If people somehow get out faster in a real fire, well, good for them.
Which brings us to this article's proposal... what's the point?
Fire drill:
Sally: Okay, everyone walk out of the front door and meet at the big tree in the courtyard so Steve can do a head count.
Joe: Single file, please.
Bob: Did you hear about Kelly? She's cheating on her husband!
Fire:
Sally: OH MY GOD I CAN'T SEE ANYTHING WHERE'S THE DOOR?!
Joe: It's covered in FIRE! We have to find another way out but I can't see through all this smoke!
Bob: I think I found Steve! I see his safety vest over there.
Sally: OMG it's just the vest! Where's Steve?!? Is he out today?
Yes (Score:2)
Re: (Score:2)
What are the consequences for not following correct procedures at any time? Basically none. IT policy is considered a list of suggestions at most companies.
As an IT worker, you don't want a high profile. The tall nail gets hammered down. You don't want to be easily visible when it's time to pick a scapegoat. An IT department is doing its job when nobody
Re: (Score:2)
What are the consequences for not following correct procedures at any time? Basically none.
Seriously? Major problems happen; this is backed up by your own points below.
IT policy is considered a list of suggestions at most companies.
This is part of the problem. It would also raise the profile of IT within the organisation.
As an IT worker, you don't want a high profile. The tall nail gets hammered down. You do
Hopeless (Score:3)
Re: (Score:3)
IT security won't get real respect until they actually know more than the people they annoy with their (literally) useless rules.
When you have some moron with a CISSP telling people who write network protocol stacks for a living what browsers they can use (this week), do you really expect to see a lot of "respect" flowing in that direction?
Modern InfoSec amounts to little more than snake-oil. AV vendors have admitted that their produ
Re:Hopeless (Score:5, Insightful)
As long as "IT" means 're-image the desktops and reboot the mailserver when it needs it, monkey!', you aren't exactly going to get the IT people whose prowess impresses you. On the plus side, you'll save money. On the minus side, it's going to be a bloodbath if you get unlucky in terms of hostile attention.
So long as 'IT' is handled as a cost-center, necessary-evil, bunch of obstructionist ethernet janitors, that's how it'll be. On the plus side, modern technology is actually pretty easy to use, so if nothing atypically bad happens you can get away with some fairly dubious expertise at the wheel, and save accordingly; but if that's the philosophy at work you probably won't end up with an IT group capable of rising very far to the occasion should things go to hell (either because something that shouldn't have been complex went bad, or because Lizard Squad is on you).
What is unclear, at present, is how, culturally and financially, any but the most zealously paranoid and deep-pocketed companies and state entities are going to have IT groups that are good for much more than the bare minimum. So long as you don't expect IT to be much better than a bunch of fuckups, there really isn't any reason to pay more or recruit more carefully (doing day-to-day IT is really more logistics and a little scripting than anything even remotely approaching CS or even code monkeying); but if that is how IT groups are recruited, no sane person will expect better of them; because why would they be capable of better?
(Please note, I freely acknowledge, as an institution's IT person, that I'd be up shit creek if something genuinely nontrivial came gunning for me. I'm a hell of a lot cheaper than a real expert, I have good rapport with the users, strong command of standard logistics and management tools, things go nice and smooth; but I'm hardly a guru, nor do I expect to be treated as one. However, that's why I'm skeptical about this 'drill' thing. If you want to know that We Are Fucked if things get serious, I can tell you that for free (though we do have backup tapes, and I am perfectly capable of restoring, were the hypothetical attack to stop); but if you aren't interested in doing anything that might actually make you less fucked, because that'd cost a whole lot more, upset users, or both, what's the drill for? Perhaps there are organizations that actually live in ignorance, believing that they have hardcore experts willing to do routine IT stuff at relatively low prices; but those are likely a delusional minority. Everyone else just knows that having a bulletproof IT team would be an eye-watering outlay (that would spend most of its time twiddling its thumbs and swapping the occasional toner cartridge until something actually happens), while having an adequate-for-daily-use IT team is markedly cheaper and you can always claim that you 'followed industry best practices' if something goes pear-shaped.)
I think the above poster nailed it (Score:2)
That's a very good point.
A separate issue is bare-metal restore drills for things with complex procedures, but that's a one-per-person-per-type-of-complex-system issue rather than a regular drill idea. If in three years' time the next version of whatever has a few differences, that's probably not enough to have to rerun the "drill".
Re: (Score:2)
Everyone else just knows that having a bulletproof IT team would be an eye-watering outlay (that would spend most of its time twiddling its thumbs and swapping the occasional toner cartridge until something actually happens), while having an adequate-for-daily-use IT team is markedly cheaper and you can always claim that you 'followed industry best practices' if something goes pear-shaped.
The same reason that small and medium businesses don't have full-time lawyers, but aren't totally fucked if they do get into a scrape with the law: you find a good one, start a working relationship, and keep them on retainer for a fraction of the cost of hiring them to work full time when you only need them three days a year. Security/risk firms that will do everything from forensics to auditing to physical penetration testing and "fire drills" are out there. Find one you like, give them a contract to
Re: (Score:2)
If you want to know that We Are Fucked if things get serious, I can tell you that for free (though we do have backup tapes, and I am perfectly capable of restoring, were the hypothetical attack to stop); but if you aren't interested in doing anything that might actually make you less fucked, because that'd cost a whole lot more, upset users, or both, what's the drill for?
Yeah, that's kind of my first thought. I've been doing this IT thing for a while, and I think doing an occasional fire drill is great. But the fire drill itself costs money, and there's no point in doing it if you're not committed to fixing the problems you've found. So if you do a test restore to make sure your backups can be restored successfully, that's great. But if you find your backups don't restore successfully, are you willing to put in whatever time and money are required to fix those problems,
Should be already (Score:2)
Part of me says yes, like DR (Score:2)
I think it would make a ton of sense for every organization to do a DR "drill" periodically where they attempt to actually use their DR plan (restore a group of servers, reload a switch configuration, etc).
This just seems like a sensible part of that.
What worries me, though, is how they will know when to actually implement a security plan and deal with the consequences. A lot of security breaches are subtle, and you don't know they've happened or at least not always with a definitive sign like a defacement page, etc.
Re: (Score:1)
At least with DR, the key is to exercise the plan as part of routine maintenance. That is, fail over to the backup (server/site/whatever), work on the primary, fail back. Since this provides immediate value, it'll actually get done. And since people do it regularly, they remember how to do it.
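A minimal sketch of the pre-flight sanity check before that kind of routine failover; the hostnames and ports here are made-up placeholders, and the actual failover/failback steps stay in your own runbook:

# failover_drill_precheck.py -- hypothetical sketch: before a scheduled failover
# drill, confirm the standby is actually reachable so the drill doesn't become
# an outage. Hosts and ports below are assumptions, not anyone's real setup.
import socket
import sys

PRIMARY = ("primary.example.internal", 5432)   # assumed primary service
STANDBY = ("standby.example.internal", 5432)   # assumed standby service

def reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if not reachable(*STANDBY):
        print("ABORT: standby is not reachable; fix that before drilling.")
        sys.exit(1)
    if not reachable(*PRIMARY):
        print("NOTE: primary is already down -- this is an incident, not a drill.")
    print("Standby reachable. Proceed with the documented failover steps,")
    print("do the maintenance on the primary, then fail back and verify.")

Run it as the first line item of the drill; if it aborts, the drill just found its first real finding.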
Re: (Score:2)
I think it would make a ton of sense for every organization to do a DR "drill" periodically where they attempt to actually use their DR plan (restore a group of servers, reload a switch configuration, etc).
This just seems like a sensible part of that.
What worries me, though, is how they will know when to actually implement a security plan and deal with the consequences. A lot of security breaches are subtle, and you don't know they've happened or at least not always with a definitive sign like a defacement page, etc.
I would assume a "real" security response would be something akin to putting a lot of resources "in lockdown" -- shutting down servers, cutting network links, etc, which could have major business consequences. I can see where uncertainty about a breach and hesitancy to isolate key systems (perhaps necessary to contain a breach) could lead to a real clusterfuck.
I think a key part of developing the plan is deciding when you know there is a real breach and making sure that the responses are well-known ahead of time to avoid a lot of head-scratching and internal conflict.
Treat it just like a DR exercise. The first phase would be confirming the breadth and depth of the incident. Your IDS goes off, or a department reports some missing/vandalized files, or notices some logs with audit warnings that are out of place, and raises the red flag. Next, you need to gather forensic information from every last piece of equipment in your entire organization, quickly, and move it to a sterile location. Whether that is possible or not will determine your ability to move forward strate
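As a rough illustration of the "gather forensic information and move it to a sterile location" step, here is a minimal sketch. The log paths and destination directory are hypothetical, and a real collection would cover far more sources, but copying plus hashing into a manifest is the part worth drilling:

# collect_evidence.py -- hypothetical sketch: copy selected log files to a
# collection directory and record SHA-256 hashes plus timestamps in a manifest,
# so you can later show the collected copies weren't altered.
import hashlib
import json
import shutil
import time
from pathlib import Path

SOURCES = [Path("/var/log/auth.log"), Path("/var/log/syslog")]  # example paths only
DEST = Path("/mnt/evidence/incident-001")                       # example destination

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    DEST.mkdir(parents=True, exist_ok=True)
    manifest = []
    for src in SOURCES:
        if not src.exists():
            continue
        copied = DEST / src.name
        shutil.copy2(src, copied)  # copy2 preserves timestamps where possible
        manifest.append({
            "source": str(src),
            "copy": str(copied),
            "sha256_source": sha256_of(src),
            "sha256_copy": sha256_of(copied),
            "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        })
    (DEST / "manifest.json").write_text(json.dumps(manifest, indent=2))
    print(f"Collected {len(manifest)} files into {DEST}")

The point of drilling it is less the script than knowing in advance where the "sterile location" is and who is allowed to write to it.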
Re: (Score:2)
Everyone's talking about DR saying that a server has mysteriously gone offline or some disk has gotten corrupted and we need to restore to the last known backup point.
No-one seems to be thinking of a real disaster: 50' tidal surge, earthquake, or a fire destroying the entire IT setup.
Backups? Onto what, pray?
Use the cloud? There is no connectivity here.
Rig some borrowed PCs? Powered by what, exactly?
Unless you have a duplicate datacenter a long way away from your personal Ground Zero, no amount of drill on earth is going to prepare you for a real disaster. You'll be too busy shooting the guys who have come to take your food and fuel.
Re: (Score:2)
Everyone's talking about DR saying that a server has mysteriously gone offline or some disk has gotten corrupted and we need to restore to the last known backup point.
No-one seems to be thinking of a real disaster: 50' tidal surge, earthquake, or a fire destroying the entire IT setup.
Backups? Onto what, pray?
Use the cloud? There is no connectivity here.
Rig some borrowed PCs? Powered by what, exactly?
Unless you have a duplicate datacenter a long way away from your personal Ground Zero, no amount of drill on earth is going to prepare you for a real disaster. You'll be too busy shooting the guys who have come to take your food and fuel.
You make a good point, but indeed most medium-sized and up orgs do keep some sort of hot-spare facility at a distance, whether it's a privately owned building, colocation space, or cloud service. Traditional localized disasters (5 alarm blaze, earthquake, tornado, etc) are planned and drilled for, sometimes specifically down to which disaster has struck. If the entire eastern seaboard gets wiped out by a "real disaster", chances are your customers aren't going to be keen on getting online anyway, and ever
Re: (Score:2)
I sure run into a lot of medium sized organizations that do nothing of the sort.
Most talk about it but when they see the price tag they get cold feet. The "better" ones will do some kind of off site setup, but it's often done with old equipment retired from production and some kind of copying/replication from the production site with little or no solid plan on how to actually bring up the remote site in a way that's useful.
The ones that seem the best off are the ones running VMware SRM.
Yes (Score:2)
Not much more to be said about it. The staff will know how to react when there are real problems, rather than searching for passwords and documentation for some system they haven't touched in 6 months.
You can skip that one (Score:2)
test your backups / disaster recovery TODAY (Score:3)
Just a friendly reminder - test your backups TODAY.
The MAJORITY of home and small business backups don't actually work when you try to restore. Often, it quit backing up 18 months ago and nobody noticed.
Disaster recovery is part of security, so that's one security drill. To handle an intrusion, often the best course of action is to unplug the network cable and call your expert. Do not power down the machine. Do not delete anything. Do not try to fix it. Just unplug the network and call the guy. That shouldn't be hard, but it is hard if you don't know who to call. If you're shopping for somebody during a panic, you'll likely pay too much for somebody who isn't as expert as you'd like. So find your expert ahead of time and you're most of the way there.
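A minimal sketch of the "did the backup quietly stop 18 months ago" check, assuming the backup job drops archive files into a single directory; the path and the age threshold are made up, and it does not replace actually restoring a sample file and opening it:

# check_backup_freshness.py -- hypothetical sketch: warn if the newest file in a
# backup directory is older than MAX_AGE_DAYS, i.e. the backup silently stopped.
import sys
import time
from pathlib import Path

BACKUP_DIR = Path("/backups/nightly")  # example location of backup archives
MAX_AGE_DAYS = 2                       # example threshold

if __name__ == "__main__":
    files = [p for p in BACKUP_DIR.glob("*") if p.is_file()]
    if not files:
        print(f"FAIL: no backup files found in {BACKUP_DIR}")
        sys.exit(1)
    newest = max(files, key=lambda p: p.stat().st_mtime)
    age_days = (time.time() - newest.stat().st_mtime) / 86400
    if age_days > MAX_AGE_DAYS:
        print(f"FAIL: newest backup {newest.name} is {age_days:.1f} days old")
        sys.exit(1)
    print(f"OK: newest backup {newest.name} is {age_days:.1f} days old")
    print("Now actually restore a sample file from it and open it.")

Something like this in cron, mailing you on failure, catches the "nobody noticed for 18 months" case; the periodic test restore catches the rest.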
DR at a minimum ... (Score:2)
At a minimum, you should have a DR plan, you should periodically review your DR plan, and you should from time to time actually test your DR plan. There's a zillion other things you can do above and beyond that -- but many an organization has had their DR plan utterly fail them in the face of a real emergency because nobody took it seriously.
No boom today; boom tomorrow. Always boom tomorrow. Plan for it, and you might come out of it fine. Don't, and you could be screwed.
If the executive fails to under
This is what security firms do (Score:3)
I interviewed with a firm once, and said, "Hey, maybe people don't even know they need your security product. How about sending phishing emails to all companies you might want to work for
Re: (Score:2)
Why bother with testing the social engineering angle at all? Have enough people in the place and somebody is going to fail. It's best to assume that some idiot will click on a link, IE will "helpfully" run it, and everything that user can connect to is potentially compromised.
Re: (Score:2)
Nah, most penetration testers / ITHC etc. are more interested in breaches of confidentiality and integrity. I've never known a standard test deal with availability. You certainly don't need those sort of firms to help you test out your BC and DR plans.
Insurers are quite keen on this stuff. Both on how you'd deal with lowering the risks (e.g. fire alarms, gas suppression, UPS etc.) as well as your plans in place for any recovery efforts. A lack of planning and preparation would push the costs up astronom
Re: (Score:2)
Data Breach Detected (Score:2)
Do you also do real life security drills? (Score:2)
Does your company also do real-life security "fire drills," supported by executive management and conducted regularly, in order to understand the appropriate course of action in advance of a physical security breach? Does this include recovering evidence, identifying and resolving the root cause of the incident (not just the symptoms), and undertaking a forensic investigation?
No? Then perhaps you don't need to do IT security fire drills for the same reason.
Apparently not. (Score:2)
It would be better if they hacked their own system (Score:2)
... Very simply, either have someone in your IT department or an outside consultant hack your system or compromise it in some way.
Then task the department to deal with it.
Let us say your fake attacker gets a hold of some admin passwords? Or they slip a remote access program through your security? Something like that. Then task the department to solve the problem and then make the system harder to compromise.
Ultimately what needs to happen is that systems need to be compartmentalized so that the compromising
These are simply audits (Score:4, Interesting)
What you described is nothing more than a full security / disaster recovery audit. If your data center (and management) is really serious about it, the company will need to invest both time and money to protect itself.
Once you have your policies in place and everyone has "signed off" that they are in compliance, you can start with the auditing.
One additional comment, depending on the size of the organization, there may be a security group. If there is one, then it should be the responsibility of this group to perform any security monitoring or testing. Individuals outside the group should not be performing their own security or intrusion testing of systems that they are not directly responsible for. If a vulnerability is uncovered, it should be documented and reported to the security focal point and management.
Re: (Score:2)
Re: (Score:2)
I have been in cyber security exercises (Score:2)
Re: (Score:2)
Or, once you expose the atrocious security (non)behavior of the "higher ups", and forget to leave that out of the report, you get fired.
good drill (Score:2)
2. Fire them
Definitely (Score:2)
http://www.vthreat.com/ [vthreat.com] was founded by Marcus Carey, accelerated by http://www.mach37.com/ [mach37.com], and recently funded to provide "IT fire drills" to organizations. I'd say if you can get funded & launch a product, it's an important thing to be doing. At the very least, have some tabletop exercises where you or others ask some what-ifs, then take the answers or lack thereof and fix them, and do it again.
executives are not interested in data retention. (Score:1)
Speaking from the viewpoint of someone high up in the echelon of a LARGE U.S. corporation's I.T.: middle management on up through top management are NOT even slightly interested in data retention, primarily because it would incriminate them. AND I know this because I've had that conversation WITH upper management.
Some places already do this (Score:1)
What they do is prevent changes in security controls until the deadline, then panic! Add in a serious vulnerability that needs to be patched too, for good measure. Of course, this could just be bad management as well.
Of course policies and procedures should be developed and tested. Otherwise, it's crap. Seen it my entire life. Untested code/procedures don't work.