What Hurricane Sandy Taught IT About Disaster Preparedness 68

StewBeans writes: The National Oceanic and Atmospheric Administration Climate Prediction Center is calling for calmer than normal storm activity this hurricane season, which runs through Nov. 30. But it's likely that data centers and IT companies in NYC are still taking disaster preparedness seriously. Three years ago, Hurricane Sandy devastated homes, businesses, transportation, and communication in New York, and taught many companies (the hard way) how to keep the lights on when the lights were literally off for weeks on end. Alphonzo Albright, former CIO of the Office of Information Technology in New York City, gives a behind-the-scenes account of what life and business were like in the dark, cold days following Hurricane Sandy in NYC. He also shares tips for other tech leaders to create their own Business Continuity Plan in case this year's storms take a turn for the worse.
This discussion has been archived. No new comments can be posted.

  • Geographic diversity (Score:5, Informative)

    by Todd Knarr ( 15451 ) on Monday September 21, 2015 @07:09PM (#50570869) Homepage

    First rule: have facilities capable of running your business in more than one location. Everywhere is susceptible to disaster of one sort or another, but if you pick areas far apart that aren't geographically similar they probably won't both suffer disasters at the same time.

    Second rule: the probability of disaster taking out your main facilities is 100%. It will happen. The only question is exactly when it'll happen, and the only constant in the answer is that it won't be at a good time. If anyone in your organization doesn't like this, remind them that reality doesn't really care what they like.

    • by turbidostato ( 878842 ) on Monday September 21, 2015 @07:17PM (#50570917)

      I should add a rule zero then: Take your time to properly understand your costs and revenues so you can make a sensible investment. Maybe it ends up being cheaper just to close the doors for a week every 30 years than your A-bomb-proof continuity plan.

      And then a rule zero-plus: Make sure you get business alignment in writing. Maybe the board member who agreed to your investment-sensible, less-than-A-bomb-proof continuity plan will want you as the scapegoat once the shit hits the fan.

      • by Lumpy ( 12016 ) on Monday September 21, 2015 @07:32PM (#50571045) Homepage

        A week? For most data disasters you're down for at least 30 days. Hell, you can't get an order of servers in from Dell, even on rush, faster than 2 weeks.

        If your company can survive zero revenue and 100% loss for 30 days, you either are sitting on a mountain of money, or your business is more of a hobby than anything else.

        Oh, and if you lose your accounting data due to the lack of a bomb-proof plan, expect fines in the high six-figure range.

        • I see your low ID and I can't help but ask... are you really *that* obtuse?

          • by Anonymous Coward

            Lumpy is actually completely right, and as usual you kiddies who have zero experience or education in managing IT just don't have a clue. You can't get replacement hardware instantly; no, corporations don't go to Best Buy. Also, I suggest you look up Sarbanes-Oxley, and the fines for not having a working backup plan.

        • A week? For most data disasters you're down for at least 30 days. Hell, you can't get an order of servers in from Dell, even on rush, faster than 2 weeks.

          Do companies actually bulk-order from Dell any more? This is actually the most I've heard about Dell for months.

          I know Google manufactures their own computers, for the most part. They do use Dells as build machines for things like Chrome and ChromeOS; they're a cheap way of throwing CPUs at the problem instead of making the Ubuntu build process actually effective and efficient. But those are pretty specialized use cases.

          Google also routinely runs "This data center got destroyed in an

          • by nine-times ( 778537 ) <nine.times@gmail.com> on Monday September 21, 2015 @09:02PM (#50571447) Homepage

            Do companies actually bulk-order from Dell any more? This is actually the most I've heard about Dell for months.

            I know Google manufactures their own computers, for the most part.

            So you think just because Google builds their own servers, it must be that everyone else does the same? There are a few companies out there that aren't Google, and yes, many of them still buy from Dell or HP.

          • Re: (Score:3, Informative)

            by Khyber ( 864651 )

            "I know Google manufactures their own computers, for the most part."

            As a former Google employee, I must say you are full of shit.

            Show me Google's manufacturing plants, please.

            • by fustakrakich ( 1673220 ) on Monday September 21, 2015 @10:30PM (#50571807) Journal

              Show me Google's manufacturing plants, please.

              Aren't they on the North Pole? I hope they can float...

            • by tlambert ( 566799 ) on Monday September 21, 2015 @10:38PM (#50571837)

              "I know Google manufactures their own computers, for the most part."

              As a former Google employee, I must say you are full of shit.

              Show me Google's manufacturing plants, please.

              As a former Google employee myself, I'm barred by my NDA from naming the East Asia contractors who build the actual equipment. Google generally only provides the reference implementation.

              Do you think Dell builds their own boards? They don't. The majority of their server class motherboards are manufactured by ASUS, based on Intel reference designs (Intel also no longer manufactures desktop motherboards, as of Haswell -- yields were too low).

              If you are curious about who made your motherboard, and run Windows, use the following command:
              wmic baseboard det product,Manufacturer,version,serialnumber

              (If you want a GUI version, download "Speccy", run it, and either look for the "Motherboard" section in the "Summary" view, or click on the "Motherboard" list item to get only that information by itself).

              Other OSes have their own commands; finding them is left as an exercise for the student.

              P.S.: If the information has been obfuscated, you can usually back-track by looking at the BIOS vendor and version information, and then using searches for updated/same versions of the BIOS based on that, to see which platforms the BIOS vendor says it's for. You are welcome.

              • That should be

                wmic baseboard get product,Manufacturer,version,serialnumber
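              For the Linux case left as an exercise above: the same baseboard fields the wmic query returns are exposed as plain files under sysfs. A minimal sketch (these are the standard kernel DMI paths; board_serial is usually readable only by root, so unreadable fields fall back to "unknown"):

```shell
# Linux counterpart to the wmic baseboard query above. The sysfs paths
# are the kernel's standard DMI locations; board_serial usually needs
# root, so anything unreadable prints "unknown" instead of failing.
for f in board_vendor board_name board_version board_serial; do
    printf '%s: %s\n' "$f" \
        "$(cat "/sys/class/dmi/id/$f" 2>/dev/null || echo unknown)"
done
# With dmidecode installed (needs root): sudo dmidecode -t baseboard
```

              On physical hardware this typically names the ODM; in a virtual machine it usually reports the hypervisor vendor instead.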

              • by Khyber ( 864651 )

                "Do you think Dell builds their own boards? They don't."

                As a former HP and Dell engineer, uh, yes, they do.

                They build the original design and then hand that off to a company for mass production.

                Google does NOTHING OF THE SORT. They use pre-built designs that fit their particular form factor and desired specs.

        • by dgatwood ( 11270 ) on Monday September 21, 2015 @09:31PM (#50571585) Homepage Journal

          A week? For most data disasters you're down for at least 30 days. Hell, you can't get an order of servers in from Dell, even on rush, faster than 2 weeks.

          Maybe true, but you can get a cloud server deployed in a matter of minutes, and you can use that as a temporary (expensive) alternative to servers under your complete control.

          If your company can survive zero revenue and 100% loss for 30 days, you either are sitting on a mountain of money, or your business is more of a hobby than anything else.

          You're making a lot of assumptions that aren't necessarily valid. The amount of downtime and the impact depends heavily on the nature of the company, and in particular, whether sales/income depends on maintaining continuous operation of the business. Take, for example, a company that makes software:

          • On the development side, even if a company's entire repository went away tomorrow, and even if half the development team died, a typical software company could still get back all but the last few days' work (and perhaps a few old branches) by configuring a GitHub instance on Amazon's Elastic Compute Cloud and having the remaining developers push all of the branches from their local checkouts. Downtime would be minimal.
          • On the distribution side, most software companies would be completely unaffected, because distribution is usually handled by a large third-party merchant (Apple, Google, etc.).
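          The push-from-local-checkouts recovery above is plain git. A minimal sketch, using throwaway mktemp paths so it runs end to end (in a real recovery the bare repo would sit on the freshly provisioned host, not in a temp directory):

```shell
# Sketch: rebuild a lost central repo from a surviving developer checkout.
# All paths are throwaway (mktemp); in a real recovery the bare repo
# would live on a newly provisioned host (e.g. an EC2 instance).
set -e
root=$(mktemp -d)

# Stand-in for a developer's local checkout with work on two branches.
git init -q "$root/checkout"
git -C "$root/checkout" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "surviving local work"
git -C "$root/checkout" branch feature

# 1. "New host": an empty bare repository to push into.
git init -q --bare "$root/central.git"

# 2. From each surviving checkout: add the new remote, then push every
#    local branch and tag it has.
git -C "$root/checkout" remote add recovery "$root/central.git"
git -C "$root/checkout" push -q recovery --all
git -C "$root/checkout" push -q recovery --tags

git --git-dir="$root/central.git" branch    # both branches recovered
```

          Each developer repeats step 2 from their own checkout; git refuses non-fast-forward pushes, so the most recent copy of each branch wins without clobbering anything newer.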

          So unless a software company requires critical server infrastructure beyond what it gets for free via iCloud, etc., it probably needs very little in the way of disaster preparedness, because the very nature of the work and the tools involved lends itself to being prepared for a disaster automatically.

          On the opposite end of the spectrum, cloud service providers and Internet service companies must have disaster preparedness plans in place, or else everybody who uses their services is screwed. And if they're down for even a couple of days, they're probably going out of business. If Facebook went down for a week, Google+ would become the #1 social network.

          • by Anonymous Coward

            Please tell me how you get a 120 TB database back onto that Amazon server, quickly, over the internet. Most companies do not run very simple things with barely any data, and you are trusting the Amazon people a whole lot with your company's key information. Not many CSOs would allow what you suggest.

            • by dgatwood ( 11270 )

              As I said, most software companies these days can take advantage of infrastructure like iCloud to avoid keeping their own databases. This makes it somebody else's problem. If Apple loses all of the iCloud data, it would be an end-of-the-world-level crisis, as I said, but there's no feasible way for individual software companies to back up that data, making it entirely out of their control.

              Amazon, of course, is different in that some of their cloud services are much closer to being servers under your cont

          • You're making a lot of assumptions that aren't necessarily valid. The amount of downtime and the impact depends heavily on the nature of the company, and in particular, whether sales/income depends on maintaining continuous operation of the business.

            Indeed. It doesn't matter much if my wife's business has a hot spare data center or cloud installation to back up the local one... because a catastrophe that destroys local capacity will almost certainly destroy or seriously damage the physical premises - and

        • by sjames ( 1099 )

          It does bring up a good point though. There is a lot of space between the A-bomb-proof data center, with its geographically diverse duplicate and hot cutover, and "DR, what's that?"

          As you point out, data backup is essential, but that doesn't imply a full duplicate data center. It may be that a very minimal setup is enough to limp along for a few weeks while things get back to normal. Limping doesn't necessarily mean no revenue.

          It's also useful to note that downtime due to storms and such doesn't ne

        • by Anonymous Coward

          Meh. After Hurricane Irene our whole company was shut down for a week and we survived just fine. We lost a server for some reason, but it took less than a day to rebuild it.

      • by nine-times ( 778537 ) <nine.times@gmail.com> on Monday September 21, 2015 @09:26PM (#50571567) Homepage

        Take your time to properly understand your costs and revenues so you can make a sensible investment. Maybe it ends up being cheaper just to close the doors for a week every 30 years than your A-bomb-proof continuity plan.

        This is an amazingly difficult concept to get people to understand. I've had way too many conversations with people who are sure they need an instantaneous failure-proof disaster recovery plan. They believe their servers should be constantly in sync with multiple copies in various places, such that in the event of a short internet outage, their servers will fail over to an outside copy, and then fail back when the outage ends, automatically and without skipping a beat. Unfortunately, they're willing to spend approximately $0 to achieve this, but that should be fine, because "the cloud" is pretty much free, right?

        It's a similar problem with security. Everyone wants all of their data to be completely secure, without any possibility of being compromised under any circumstances, but they also want it to be as convenient as if the data were unsecured, and they don't expect to pay extra for any of it.

        I always try to explain that it's about trade-offs. I can make your data much more secure than it is now, but it'll cost you money, and you'll have to jump through extra hoops to get access to your own data. I can replicate what you need to a remote server, yes, but then you have to pay for the remote server. Depending on exactly what we're talking about, it might not be a real-time sync, or it might not result in anything like an automatic failover. Those things might require special software or services or licenses. Pay enough, and yes, I can probably get you a real-time sync with automatic failover and fail-back, but even then, you could still have an outage. The system that keeps everything in sync and triggers the failover could be the component that fails. Or if there's a total blackout on the East Coast, it might not matter that there's a complete replica automatically started on the West Coast, if all your employees are on the East Coast and without power.

        It's trade-offs. Spend enough money and put up with enough limitations, and you'll get something that does what you want, although imperfectly. Most of the time, for most businesses, it doesn't make sense. "Good enough" is good enough. But people don't like to be told, "A pretty secure network with a pretty good disaster recovery plan is appropriate for you." It makes them feel unimportant, which most executives and business owners can't live with. They want to know that they should have the best thing possible.

        • "This is an amazingly difficult concept to get people to understand. I've had way too many conversations with people who are sure they need an instantaneous failure-proof disaster recovery plan."

          Given your nickname, it seems no wonder that you grasp the concept. Exactly: yes to all you say.

          "It makes them feel unimportant, which most executives and business owners can't live with."

          That's true, but I'd say it's only half of the story. Especially business owners are quite sensitive to the money part and quickly

          • From my experience, with each of your anecdotes, I'd half expect a response like: "Look, I'm not interested in all the technical details. That's your job. Why can't you just give me what I'm asking for?"

            For some reason, this comedy sketch [youtube.com] comes to mind.

            • That made my day, thanks. I'm familiar with the "We think we should have all the greatest and best services that Fortune 500 companies enjoy, even though we're funded like a corner convenience store... and can we implement this with fewer employees and a smaller budget than we had 5 to 10 years ago?"
    • by Anonymous Coward

      You talk of geographic diversity, but that's only part of the picture. Software diversity is critical, too. Not all disasters are storms. Sometimes we have disasters of software architecture. Some will say that systemd is an example of this. Some of its architectural decisions, such as the use of binary logging and how it has subsumed so much unrelated functionality, prove to be very problematic for many users. That's totally separate from its implementation. Even a perfect implementation, which of course i

    • Not an option for high-frequency traders. Geographic diversity means locating your fiber optic connection farther away from the transatlantic fiber head ends that make HFT possible.

    • Story time: A few years ago I was working on a web app for a US intel/LEO agency in northern Virginia. The app had started as a demo, then kind of grew. Like a fungus. It was never really designed, much less designed to shut down and restart unexpectedly. There were some other similarly "designed" apps running in the data center.

      The data center, being under the flight path of an airport, had a continuity of operations ("COOP") plan and hardware. The "UPS" was a big generator with a switch so that it

  • it was a storm, whatever.
  • Sandy was a much bigger storm when it was hitting Cuba and fucking up the southern end of the Atlantic coast. It was an actual hurricane then, in fact.

    But allllllllllll we fucking hear about is how New York was unprepared. New York isn't special and doesn't deserve special attention for being unprepared, but it sure turned into a fucking media event that's still going strong today.
    I don't know if the media wanted another Katrina or simply wanted to pander to their favorite place in the world (NYC), but it

    • Re:New York (Score:4, Funny)

      by turbidostato ( 878842 ) on Monday September 21, 2015 @07:32PM (#50571041)

      "And what the fuck is up with "The National Oceanic and Atmospheric Administration Climate Prediction Center is calling for calmer than normal storm activity this hurricane season"? You don't call for that, you predict it. Calls are to be answered (or not). Predictions are to be met (or not)."

      Nononono... This is the United States of Almighty America. When NOAA calls, hurricanes abide! (or else, we send Chuck Norris).

  • by CanadianMacFan ( 1900244 ) on Monday September 21, 2015 @07:31PM (#50571025)

    Nothing. Disaster recovery plans are like backups... if you don't test them every so often, then you should assume they don't work.

    Companies should have already tested their plans and known that they worked, so that when any interruption from the storm kicked in, their backups would take over as planned.
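    A restore test doesn't have to be elaborate to be better than nothing; even an automated restore-and-compare catches dead backups. A minimal sketch with tar and sha256 checksums (all paths are throwaway stand-ins for a real backup job):

```shell
# Minimal "test your backups" sketch: back up, restore to a scratch
# directory, and verify the restore byte-for-byte with checksums.
# All paths are throwaway (mktemp); substitute your real backup job.
set -e
root=$(mktemp -d)
mkdir "$root/data" "$root/restore"
echo "payroll" > "$root/data/important.txt"

# "Backup": archive the data directory.
tar -C "$root" -czf "$root/backup.tgz" data

# "Restore test": unpack elsewhere and compare checksums against the
# originals. Until this step passes, assume the backup doesn't work.
tar -C "$root/restore" -xzf "$root/backup.tgz"
(cd "$root/data" && find . -type f -exec sha256sum {} +) > "$root/orig.sum"
(cd "$root/restore/data" && sha256sum -c "$root/orig.sum")
```

    Run something like this on a schedule and alert on a nonzero exit, and "we assume the backups work" becomes "the backups restored cleanly last night."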

    • Re: (Score:2, Troll)

      by rtb61 ( 674572 )

      In these days of corporate douchebaggery, corporate recovery plans are like service and support, which are like reliability and security: corporations spend more on executive bonuses while executives spend little or nothing on those things, because bigger profits now. Then it all fucks up, and the executives wander off with those bonuses and golden parachutes. This is not by accident; this is because they do not give a fuck about anything but more greed now. You hire psychopaths as corporate executives and that is

  • Not at all irrelevant,about how much is taught, really, with not a bit of disaster..
  • by i.r.id10t ( 595143 ) on Monday September 21, 2015 @08:49PM (#50571383)

    Wasn't there a datacenter guy who posted here on /. when Katrina hit, about all the stuff they went through keeping things up and running at some sort of minimal level?

    Been drinking and my google-fu is off, but perhaps someone can post it. IIRC it included a blog of what was going on, etc.

  • Almost certainly nothing.
  • You can guarantee these IT idiots are going to leave the status quo intact for job security.

  • by Jim Sadler ( 3430529 ) on Monday September 21, 2015 @11:39PM (#50572043)
    First we must be willing to build very large facilities capable of storing thousands of tents and enough food to last people for many weeks. And we need to do this in regional centers, such that deliveries can take place the day after a storm leaves an area. Areas such as Miami simply cannot be evacuated, as the population is way too large.

    A strong storm can knock out roads and rails and put a large area into severe isolation. A few days' worth of groceries will simply not help much. In my area, our grocery stores were destroyed when three storms hit us back to back. Getting a car on the road was next to impossible and quite dangerous. Gasoline was not available at all, as the gas pumps were all flooded. I had no power for three solid weeks. A situation like that can get to the point at which people raid each other trying to keep from starving.

    I see no effective measures at all. If another Katrina hit New Orleans, the results would be very much like the original Katrina. The repairs made are not designed to stand up to a Category 5 hurricane, and it is only a matter of time before New Orleans gets hit by one. Miami is in the same boat. We roll the dice here constantly and count on good luck not to bring on a Category 5 storm.
  • by Anonymous Coward

    The immense sums forwarded after 9/11 to harden the infrastructure in the NY area were wasted on largesse and patronage, as usual, and a storm came along and proved it. What to do was known; they just didn't do it, and still haven't.

  • by dbIII ( 701233 ) on Tuesday September 22, 2015 @05:03AM (#50572879)
    It taught us that 1920s electricity infrastructure shouldn't be in use in one of the richest cities on the planet. Wet wood in contact with high voltage is a bad idea, and the inevitable fires happened.
    The funny thing is, I know a transmission guy who said "I told you so" based on what he said in the 1960s. Fifty years later that shit was still in service, and it burned.
  • by Monoman ( 8745 ) on Tuesday September 22, 2015 @05:03AM (#50572881) Homepage

    Eventually things pretty much go back to the way they were before. I remember seeing a discussion about the lessons learned from Hurricane Andrew (not just IT specific) and how after 7 years things that were important were forgotten or deemed less important. I'm sure the same happened with Hurricane Katrina, Sandy, and many others. It seems to be our human nature that these things eventually wear off and become less important. I think Neil deGrasse Tyson was on Joe Rogan's podcast a few years ago and touched on the subject as well.

  • by nickweller ( 4108905 ) on Tuesday September 22, 2015 @06:58AM (#50573359)
    The backup generators failed because there wasn't any electricity to run the fuel pumps ref [computerworld.com]. Don't site your critical infrastructure in the basement ref [facilitiesnet.com].
    • by Monoman ( 8745 )

      Backup generators stopped running when they ran out of fuel because the tanks couldn't be refueled, for various reasons: power outages, blocked roads, schedules running behind, etc. Nobody thought they would need an SLA on refills.

      • Or they thought having an SLA for refills meant their fuel trucks would get through... even when trees/power lines/National Guard soldiers were blocking the road.

        • by Monoman ( 8745 )

          Exactly. PHBs think an SLA is a guarantee. Even a "guarantee" isn't a guarantee in some circumstances. I live in "hurricane alley" and can tell you that when a storm hits, things don't usually go as planned.

  • by Anonymous Coward

    Don't shut down the servers that have the emergency disaster plans saved on them.

    That's what my IT department did. It took us three meetings to convince them to turn them back on.
