What Hurricane Sandy Taught IT About Disaster Preparedness
StewBeans writes: The National Oceanic and Atmospheric Administration Climate Prediction Center is calling for calmer than normal storm activity this hurricane season, which runs through Nov. 30. But it's likely that data centers and IT companies in NYC are still taking disaster preparedness seriously. Three years ago, Hurricane Sandy devastated homes, businesses, transportation, and communication in New York, and taught many companies (the hard way) how to keep the lights on when the lights were literally off for weeks on end. Alphonzo Albright, former CIO of the Office of Information Technology in New York City, gives a behind-the-scenes account of what life and business were like in the dark, cold days following Hurricane Sandy in NYC. He also shares tips for other tech leaders to create their own Business Continuity Plan in case this year's storms take a turn for the worse.
Geographic diversity (Score:5, Informative)
First rule: have facilities capable of running your business in more than one location. Everywhere is susceptible to disaster of one sort or another, but if you pick areas far apart that aren't geographically similar they probably won't both suffer disasters at the same time.
Second rule: the probability of disaster taking out your main facilities is 100%. It will happen. The only question is exactly when it'll happen, and the only constant in the answer is that it won't be at a good time. If anyone in your organization doesn't like this, remind them that reality doesn't really care what they like.
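One way to sanity-check the "far apart" part of the first rule is just to compute the distance between candidate sites. A minimal sketch using the haversine formula; the coordinates and the idea that tens of kilometers is "too close" for hurricane purposes are illustrative, not an industry standard:

```python
import math

def km_between(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points via the haversine formula
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# NYC vs. Newark: close enough that one hurricane takes out both (~14 km)
print(km_between(40.71, -74.01, 40.74, -74.17))
# NYC vs. Denver: very unlikely to share a disaster (~2600 km)
print(km_between(40.71, -74.01, 39.74, -104.99))
```

Distance alone isn't sufficient, of course; two sites 1,000 km apart on the same coastline can still share a hurricane season, which is why "not geographically similar" matters as much as "far apart."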
Re:Geographic diversity (Score:5, Insightful)
I should add a rule zero then: Take your time to properly understand your costs and revenues so you can make a sensible investment. Maybe it ends up being cheaper just to close the doors for a week every 30 years than to maintain your A-bomb-proof continuity plan.
And then a rule zero-plus: Make sure you get business alignment in writing. Maybe the board member who agreed to your investment-sensible, less-than-A-bomb-proof continuity plan wants you as the scapegoat once the shit hits the fan.
Re:Geographic diversity (Score:5, Interesting)
A week? In most data disasters you are down for at least 30 days. Hell, you can't get an order of servers in from Dell, even on rush, faster than two weeks.
If your company can survive zero revenue and 100% loss for 30 days, either you are sitting on a mountain of money, or your business is more of a hobby than anything else.
Oh, and if you lose your accounting data due to lack of a bomb-proof plan, expect fines in the high six-figure range.
Re: (Score:1)
I see your low ID and I can't help but ask... are you really *that* obtuse?
Re: (Score:1)
Lumpy is actually completely right, and as usual you kiddies who have zero experience or education in managing IT just don't have a clue. You can't get replacement hardware instantly; no, corporations don't go to Best Buy. Also, I suggest you look up Sarbanes-Oxley, and the fines for not having a working backup plan.
Re: (Score:2)
C'mon, Lumpy, we all know it's you.
Re: (Score:1)
No, it ain't him!
Re: (Score:2)
A week? In most data disasters you are down for at least 30 days. Hell, you can't get an order of servers in from Dell, even on rush, faster than two weeks.
Do companies actually bulk-order from Dell any more? This is the most I've heard about Dell in months.
I know Google manufactures their own computers, for the most part. They do use Dells as build machines for things like Chrome and ChromeOS (they're a cheap way of throwing CPUs at the problem, instead of making the Ubuntu build process actually effective and efficient), but those are pretty specialized use cases.
Google also routinely runs "This data center got destroyed in an
Re:Geographic diversity (Score:4, Interesting)
Do companies actually bulk-order from Dell any more? This is the most I've heard about Dell in months.
I know Google manufactures their own computers, for the most part.
So you think just because Google builds their own servers, it must be that everyone else does the same? There are a few companies out there that aren't Google, and yes, many of them still buy from Dell or HP.
Re: (Score:3, Informative)
"I know Google manufactures their own computers, for the most part."
As a former Google employee, I must say you are full of shit.
Show me Google's manufacturing plants, please.
Re:Geographic diversity (Score:4, Funny)
Show me Google's manufacturing plants, please.
Aren't they on the North Pole? I hope they can float...
Re:Geographic diversity (Score:5, Insightful)
"I know Google manufactures their own computers, for the most part."
As a former Google employee, I must say you are full of shit.
Show me Google's manufacturing plants, please.
As a former Google employee myself, I'm bound by my NDA from naming the East Asia contractors who build the actual equipment. Google generally only provides the reference implementation.
Do you think Dell builds their own boards? They don't. The majority of their server-class motherboards are manufactured by ASUS, based on Intel reference designs (Intel also no longer manufactures desktop motherboards, as of Haswell -- yields were too low).
If you are curious about who made your motherboard, and run Windows, use the following command:
wmic baseboard get product,Manufacturer,version,serialnumber
(If you want a GUI version, download "Speccy", run it, and either look for the "Motherboard" section in the "Summary" view, or click on the "Motherboard" list item to get only that information by itself).
Other OSes have their own commands; finding them is left as an exercise for the student.
P.S.: If the information has been obfuscated, you can usually back-track by looking at the BIOS vendor and version information, and then using searches for updated/same versions of the BIOS based on that, to see which platforms the BIOS vendor says it's for. You are welcome.
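For what it's worth, on Linux the same board information is typically exposed as plain files under /sys/class/dmi/id (that path layout is common on modern kernels, but not guaranteed everywhere). A minimal sketch that reads whichever of those files exist:

```python
from pathlib import Path

def board_info(base="/sys/class/dmi/id"):
    """Read motherboard vendor/name/version from the DMI sysfs files, if present."""
    fields = ("board_vendor", "board_name", "board_version", "board_serial")
    info = {}
    for field in fields:
        path = Path(base) / field
        if path.exists():
            # board_serial is usually root-only; skip anything unreadable
            try:
                info[field] = path.read_text().strip()
            except PermissionError:
                pass
    return info

if __name__ == "__main__":
    print(board_info() or "No DMI information found at /sys/class/dmi/id")
```

`dmidecode -t baseboard` (run as root) reports the same data with more detail.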
Re: (Score:3)
wmic baseboard get product,Manufacturer,version,serialnumber
Re: (Score:2)
"Do you think Dell builds their own boards? They don't."
As a former HP and Dell engineer, uh, yes, they do.
They build the original design and then hand that off to a company for mass production.
Google does NOTHING OF THE SORT. They use pre-built designs that fit their particular form factor and desired specs.
Re:Geographic diversity (Score:4, Interesting)
Maybe true, but you can get a cloud server deployed in a matter of minutes, and you can use that as a temporary (expensive) alternative to servers under your complete control.
You're making a lot of assumptions that aren't necessarily valid. The amount of downtime and the impact depends heavily on the nature of the company, and in particular, whether sales/income depends on maintaining continuous operation of the business. Take, for example, a company that makes software:
So unless a software company requires critical server infrastructure beyond what it gets for free via iCloud, etc., it probably needs very little in the way of disaster preparedness, because the very nature of the work and the tools involved lends itself to being prepared for a disaster automatically.
On the opposite end of the spectrum, cloud service providers and Internet service companies must have disaster preparedness plans in place, or else everybody who uses their services is screwed. And if they're down for even a couple of days, they're probably going out of business. If Facebook went down for a week, Google+ would become the #1 social network.
Re: (Score:1)
Please tell me how you get a 120 TB database back onto that Amazon server quickly over the Internet. Most companies do not run very simple things with barely any data, and you are trusting the Amazon people a whole lot with your company's key information. Not many CSOs would allow what you suggest.
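Back-of-the-envelope on that point: even at a sustained 1 Gbps, restoring 120 TB over the wire takes on the order of weeks, not hours. A sketch (line rates are illustrative, and real transfers lose more to protocol overhead and retries):

```python
def transfer_days(terabytes, gigabits_per_sec):
    """Days to move a dataset at a sustained line rate (ignores protocol overhead)."""
    bits = terabytes * 1e12 * 8                  # decimal terabytes -> bits
    seconds = bits / (gigabits_per_sec * 1e9)    # line rate in bits per second
    return seconds / 86400

print(f"120 TB at 1 Gbps:  {transfer_days(120, 1):.1f} days")   # ~11 days
print(f"120 TB at 10 Gbps: {transfer_days(120, 10):.1f} days")  # ~1.1 days
```

Which is why large restores tend to involve shipping physical media rather than the Internet.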
Re: (Score:2)
As I said, most software companies these days can take advantage of infrastructure like iCloud to avoid keeping their own databases. This makes it somebody else's problem. If Apple loses all of the iCloud data, it would be an end-of-the-world-level crisis, as I said, but there's no feasible way for individual software companies to back up that data, making it entirely out of their control.
Amazon, of course, is different in that some of their cloud services are much closer to being servers under your cont
Re: (Score:2)
Indeed. It doesn't matter much if my wife's business has a hot spare data center or cloud installation to back up the local one... because a catastrophe that destroys local capacity will almost certainly destroy or seriously damage the physical premises - and
Re: (Score:3)
It does bring up a good point, though. There is a lot of space between the A-bomb-proof data center with a geographically diverse duplicate and hot cutover on one end, and "DR? What's that?" on the other.
As you point out, data backup is essential, but that doesn't imply a full duplicate data center. It may be that a very minimal setup is enough to limp along for a few weeks while things get back to normal. Limping doesn't necessarily mean no revenue.
It's also useful to note that downtime due to storms and such doesn't ne
Re: (Score:1)
Meh. After Hurricane Irene our whole company was shut down for a week and we survived just fine. We lost a server for some reason, but it took less than a day to rebuild it.
Re:Geographic diversity (Score:5, Insightful)
Take your time to properly understand your costs and revenues so you can make a sensible investment. Maybe it ends up being cheaper just to close door for a week every 30 years than your A-Bomb-proof continuity plan.
This is an amazingly difficult concept to get people to understand. I've had way too many conversations with people who are sure they need an instantaneous failure-proof disaster recovery plan. They believe their servers should be constantly in sync with multiple copies in various places, such that in the event of a short internet outage, their servers will fail over to an outside copy, and then fail back when the outage ends, automatically and without skipping a beat. Unfortunately, they're willing to spend approximately $0 to achieve this, but that should be fine, because "the cloud" is pretty much free, right?
It's a similar problem with security. Everyone wants all of their data to be completely secure without any possibility of being compromised under any circumstances, but they also want it to be as convenient as if the data is unsecured, and they don't expect to pay extra for any of it.
I always try to explain that it's about trade-offs. I can make your data much more secure than it is now, but it'll cost you money, and you'll have to jump through extra hoops to get access to your own data. I can replicate what you need to a remote server, yes, but then you have to pay for the remote server. Depending on exactly what we're talking about, it might not be a real-time sync, or it might not result in anything like an automatic failover. Those things might require special software or services or licenses. Pay enough, and yes, I can probably get you a real-time sync with automatic failover and fail-back, but even then, you could still have an outage. The system that keeps everything in sync and triggers the failover could be the component that fails. Or if there's a total blackout on the East Coast, it might not matter that there's a complete replica automatically started on the West Coast, if all your employees are on the East Coast and without power.
It's trade-offs. Spend enough money and put up with enough limitations, and you'll get something that does what you want, although imperfectly. Most of the time, for most businesses, it doesn't make sense. "Good enough" is good enough. But people don't like to be told, "A pretty secure network with a pretty good disaster recovery plan is appropriate for you." It makes them feel unimportant, which most executives and business owners can't live with. They want to know that they should have the best thing possible.
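The grandparent's "rule zero" trade-off can be put in rough numbers. A sketch with made-up figures, where the disaster probability, downtime, revenue, and DR prices are all placeholders rather than benchmarks:

```python
def expected_annual_loss(p_disaster_per_year, days_down, revenue_per_day):
    """Expected yearly cost of simply eating the outage when it happens."""
    return p_disaster_per_year * days_down * revenue_per_day

# One serious disaster every 30 years, a week of downtime, $20k/day in revenue
do_nothing = expected_annual_loss(1 / 30, 7, 20_000)

hot_site = 150_000      # hypothetical yearly cost of a duplicate hot data center
cold_backups = 10_000   # hypothetical yearly cost of offsite backups + slow rebuild

print(f"Do nothing:    ${do_nothing:,.0f}/yr expected loss")
print(f"Hot site:      ${hot_site:,}/yr")
print(f"Cold backups:  ${cold_backups:,}/yr (plus residual downtime risk)")
```

With these particular numbers, the A-bomb-proof hot site costs roughly thirty times the expected loss it prevents, which is exactly the point: the right answer depends on the business's actual revenue and tolerance for downtime, not on what feels important.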
Re: (Score:3)
"This is an amazingly difficult concept to get people to understand. I've had way too many conversations with people who are sure they need an instantaneous failure-proof disaster recovery plan."
Given your nickname it seems no wonder that you grasp the concept. Exactly yes to all you say.
"It makes them feel unimportant, which most executives and business owners can't live with."
That's true, but I'd say it's only half of the story. Business owners especially are quite sensitive to the money part and quickly
Re: (Score:2)
From my experience, with each of your anecdotes, I'd half expect a response like: "Look, I'm not interested in all the technical details. That's your job. Why can't you just give me what I'm asking for?"
For some reason, this comedy sketch [youtube.com] comes to mind.
Re: (Score:3)
What about software diversity? (Score:1)
You talk of geographic diversity, but that's only part of the picture. Software diversity is critical, too. Not all disasters are storms. Sometimes we have disasters of software architecture. Some will say that systemd is an example of this. Some of its architectural decisions, such as the use of binary logging and how it has subsumed so much unrelated functionality, prove to be very problematic for many users. That's totally separate from its implementation. Even a perfect implementation, which of course i
Not an option for High Frequency Traders (Score:2)
Not an option for High Frequency Traders. Geographic diversity means locating your fiber-optic connection farther away from the transatlantic fiber head ends that make HFT possible.
Re:Not an option for High Frequency Traders (Score:4, Insightful)
Terrible. How will we cope without them?
Re: (Score:3)
Not an option for High Frequency Traders. Geographic diversity means locating your fiber-optic connection farther away from the transatlantic fiber head ends that make HFT possible.
Nope -
HFTs will typically have a presence in two colos per exchange and, depending on where they trade, with multiple exchanges, each exchange being in a different country or region.
If the disaster is big enough to take out both colos for a given exchange it has probably taken out the exchange as well.
Re: (Score:2)
The optimum location for US High Frequency Traders is a few miles east of the transatlantic fiber head ends.
Final Rule: Test the Disaster Recovery Plan (Score:3)
Story time: A few years ago I was working on a web app for a US intel/LEO agency in Northern Virginia. The app had started as a demo, then kind of grew. Like a fungus. It was never really designed, much less designed to shut down and restart unexpectedly. There were some other similarly "designed" apps running in the data center.
The data center, being under the flight path for an airport, had a continuity of operations ("COOP") plan and hardware. The "UPS" was a big generator with a switch so that it
Sandy was not a hurricane (Score:2)
Re: (Score:2)
Re: (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
http://www.climatecentral.org/news/nws-confirms-sandy-was-not-a-hurricane-at-landfall-15589 [climatecentral.org]
New York (Score:1)
Sandy was a much bigger storm when it was hitting Cuba and fucking up the southern end of the Atlantic coast. It was an actual hurricane then, in fact.
But allllllllllll we fucking hear about is how New York was unprepared. New York isn't special and doesn't deserve special attention for being unprepared, but it sure turned into a fucking media event that's still going strong today.
I don't know if the media wanted another Katrina or simply wanted to pander to their favorite place in the world (NYC), but it
Re:New York (Score:4, Funny)
"And what the fuck is up with "The National Oceanic and Atmospheric Administration Climate Prediction Center is calling for calmer than normal storm activity this hurricane season"? You don't call for that, you predict it. Calls are to be answered (or not). Predictions are to be met (or not)."
Nononono... This is the United States of Almighty America. When NOAA calls, hurricanes abide! (or else, we send Chuck Norris).
Re: (Score:2)
Hey! That was uncalled-for!
Re: (Score:2)
What it should have taught (Score:4, Insightful)
Nothing. Disaster recovery plans are like backups: if you don't test them every so often, then you have to assume they don't work.
Companies should have already tested their plans and known that they worked, so that when any interruption from a storm kicks in, their backups take over as planned.
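A backup you've never restored is a hope, not a backup. A minimal sketch of an automated restore test; the copy-based "backup" and "restore" steps here are stand-ins for whatever real backup tooling you use, with checksum comparison doing the verification:

```python
import hashlib
import shutil
from pathlib import Path

def checksum(path):
    """SHA-256 of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def restore_test(source_dir, backup_dir, restore_dir):
    """'Back up' by copying, 'restore' from the backup, then verify every file matches."""
    shutil.copytree(source_dir, backup_dir)    # stand-in for the real backup job
    shutil.copytree(backup_dir, restore_dir)   # stand-in for the real restore job
    for original in Path(source_dir).rglob("*"):
        if original.is_file():
            restored = Path(restore_dir) / original.relative_to(source_dir)
            if not restored.is_file() or checksum(original) != checksum(restored):
                return False
    return True
```

The point is less the copying than the habit: run something like this on a schedule, restoring to scratch hardware, so a failed restore shows up in a report instead of mid-disaster.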
Re: (Score:2, Troll)
In these days of corporate douchebaggery, corporate recovery plans are like service and support, which are like reliability and security. Corporations spend more on executive bonuses when executives spend little or nothing on those things, because bigger profits now. Then it all fucks up and the executives wander off with those bonuses and golden parachutes. This is not by accident; it's because they do not give a fuck about anything but more greed now. You hire psychopaths as corporate executives and that is
Lengua Gato (Score:2)
Katrina & A Datacenter/ISP (Score:3)
Wasn't there a datacenter guy who posted here on /. when Katrina hit, about all the stuff they went through keeping things up and running at some sort of minimal level?
Been drinking and my google-fu is off, but perhaps someone can post it. IIRC it included a blog of what was going on, etc.
Re: (Score:3)
http://interdictor.livejournal... [livejournal.com]
https://en.wikipedia.org/wiki/... [wikipedia.org]
What it taught us? (Score:2)
Hurricane taught nothing (Score:2)
You can guarantee these IT idiots are going to leave the status quo intact for job security.
No Serious Precautions Taken (Score:3)
Nothing learned (Score:1)
The immense sums forwarded after 9/11 to harden the infrastructure in the NY area were wasted on largesse and patronage as usual, and a storm came along and proved it. What to do was known; they just didn't do it, and still haven't.
1920s electricity infrastructure (Score:3)
Funny thing is I know a transmission guy who said "I told you so" based on what he said in the 1960s. Fifty years later that shit was still in service and it burned.
History repeats itself (Score:3)
Eventually things pretty much go back to the way they were before. I remember seeing a discussion about the lessons learned from Hurricane Andrew (not just IT-specific) and how after seven years things that were important were forgotten or deemed less important. I'm sure the same happened with Hurricanes Katrina, Sandy, and many others. It seems to be our human nature that these things eventually wear off and become less important. I think Neil deGrasse Tyson was on Joe Rogan's podcast a few years ago and touched on the subject as well.
Test your backup generators .. (Score:3)
Re: (Score:3)
Backup generators stopped running when they ran out of fuel, because the tanks couldn't be refueled for various reasons: power outages, roads blocked, behind on schedule, etc. Nobody thought they would need an SLA on refills.
Re: (Score:3)
Or they thought having an SLA for refills meant their fuel trucks would get through... even when trees/power lines/National Guard soldiers were blocking the road.
Re: (Score:3)
Exactly. PHBs think an SLA is a guarantee. Even a "guarantee" isn't a guarantee in some circumstances. I live in "hurricane alley" and can tell you that when a storm hits, things don't usually go as planned.
Rule 1 for I.T. (Score:1)
Don't shut down the servers that have the emergency disaster plans saved on them.
That's what my I.T. department did. It took us 3 meetings to convince them to turn them back on.