Security

ISP Recovers in 72 Hours After Leveling by Tornado

aldheorte writes "Amazing story of how an ISP in Jackson, TN, whose main facility was completely leveled by a tornado, recovered in 72 hours. The story is a great recounting of how they executed their disaster recovery plan, what they found they had left out of that plan, data recovery from destroyed hard drives, and perhaps the best argument ever for offsite backups. (Not affiliated with the ISP in question)"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Heh (Score:4, Funny)

    by B3ryllium ( 571199 ) on Thursday September 04, 2003 @01:15PM (#6871086) Homepage
    Hopefully no one was hurt when the trailer park got levelled.
    • Yea, that really blows.

    • But... (Score:5, Funny)

      by macshune ( 628296 ) on Thursday September 04, 2003 @01:56PM (#6871489) Journal
      Can they recover from the slashdot effect???

      The slashdot effect differs from a tornado in a few subtle ways:

      1) You can't see it coming (unless you pay money to be a subscriber)

      2) It doesn't hurt anything, except for webservers, the occasional OC line lit up like New Year's Eve, spammers, and the odd *IAA executive.

      3) A tornado doesn't typically smell like armpits, cheetos, empty 64oz soda cups, burning plastic, your parents' basement and/or too much cologne for that first date.

      4) It travels at the speed of light, a lot quicker than a tornado.

      5) Does not require specific atmospheric conditions to be present...just a link on the front page.

      Anything else?
      • Re:But... (Score:5, Interesting)

        by BMonger ( 68213 ) on Thursday September 04, 2003 @02:19PM (#6871751)
        Hmmm... what if a website admin did become a subscriber? Could they theoretically take the RSS feed to know when a new post was made, pull the article text, scan it for their domain, and if their domain was linked, just have a script auto-block referrers from Slashdot for like 24 hours or so? Somebody less lazy than me might look into that. Then you could sell it for like $100! It'd be like paying the mob not to beat you up! But only if somebody affiliated with Slashdot wrote it, I guess.
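A minimal sketch of what the parent comment describes, assuming an Apache-era host whose docroot .htaccess the script is allowed to rewrite; the feed URL, the protected domain, and the paths are placeholders, and a real version would merge these directives with whatever .htaccess already exists rather than overwrite it:

```python
# Rough sketch of the parent's idea, not a product: poll the Slashdot RSS
# feed, and if a story mentions our domain, write an .htaccess that turns
# away slashdot.org referrers for ~24 hours, then lift the block.
import re
import time
import urllib.request

FEED_URL = "https://rss.slashdot.org/Slashdot/slashdotMain"  # assumed feed location
OUR_DOMAIN = "www.example.com"                               # hypothetical site to protect
HTACCESS = "/var/www/html/.htaccess"                         # docroot of that site
BLOCK_SECONDS = 24 * 60 * 60
POLL_SECONDS = 15 * 60

# Apache 1.3/2.0-era directives: tag requests whose Referer is slashdot.org
# and deny them; .htaccess is re-read per request, so no reload is needed.
BLOCK_RULE = """SetEnvIfNoCase Referer "slashdot\\.org" slashdotted
Order Allow,Deny
Allow from all
Deny from env=slashdotted
"""

def feed_mentions_us() -> bool:
    """Fetch the feed and look for our domain anywhere in it."""
    with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
        feed = resp.read().decode("utf-8", errors="replace")
    return re.search(re.escape(OUR_DOMAIN), feed, re.IGNORECASE) is not None

def main() -> None:
    while True:
        try:
            if feed_mentions_us():
                with open(HTACCESS, "w") as f:
                    f.write(BLOCK_RULE)
                time.sleep(BLOCK_SECONDS)     # ride out the story for a day
                open(HTACCESS, "w").close()   # then welcome Slashdot readers back
            else:
                time.sleep(POLL_SECONDS)
        except OSError:
            time.sleep(POLL_SECONDS)          # network hiccup: just try again later

if __name__ == "__main__":
    main()
```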
  • by Anonymous Coward on Thursday September 04, 2003 @01:15PM (#6871087)
    when Munchkins overrun the web now that this ISP got relocated by the twister.
  • by Anonymous Coward on Thursday September 04, 2003 @01:16PM (#6871107)
    "So, ah, your ISP here.. what's your uptime for the last year?"

    "99.18% for our service, and 96.2% for our building."
  • by dswensen ( 252552 ) on Thursday September 04, 2003 @01:17PM (#6871114) Homepage
    And I'm sure every minute of those 72 hours was characterized by irate phone calls to tech support.

    "Are you guys down again? You're down more than you're up! I'm going to find another service... etc..."

    "Ma'am our facilities have been entirely leveled by a tornado, we'll be back up in 72 hours."

    "72 HOURS?! I have photos of my grandchildren I have to mail! Worst ISP ever! Let me speak to your supervisor!"

    "Ma'am our supervisor was also leveled by the tornado."

    *click*

    Not that I work tech support for an ISP and am bitter...
    • by koa ( 95614 ) on Thursday September 04, 2003 @02:50PM (#6872120)
      Actually.. I ran a technical support department for a small ISP for a couple years.

      It's amazing how accurate you are in regards to the customer viewpoint on downtime.

      After having done it myself, I actually have MUCH more respect for technical support engineers/supervisors, because within reason most "downtime" is fixed even before the customer knows about it (i.e. small blips in service).

      And the majority of people who purchase an ISP's services have absolutely no idea what it takes to respond to an outage.
  • by Trigun ( 685027 )
    Now that that's out of the way, it never ceases to amaze me how many companies have little to no severe disaster recovery plans, and how a little bit of ingenuity can go a long way in a company.
    Times of crisis and how one deals with them are the mark of successful businesses/employees/people. I don't think that we could recover so quickly should a disaster of that size hit my job, but it'd be fun to try.
  • Nice work! (Score:5, Insightful)

    by Tebriel ( 192168 ) on Thursday September 04, 2003 @01:17PM (#6871119)
    This is what happens when people make intelligent plans and then modify them as they see other plans work or fail. I'm glad to see that this was a work in progress rather than some arcane plan in a binder somewhere that no one ever looked at.
    • Re:Nice work! (Score:3, Insightful)

      by blackp ( 217413 )
      One of the problems with a plan in a binder somewhere is that the tornado would probably have taken out the binder as well.
    • Do you work for Utopia Corp?
    • Re:Nice work! (Score:3, Interesting)

      by Mooncaller ( 669824 )
      Right on! I have been involved with the design of at least 5 disaster recovery plans. The first one was while I was in the Air Force. I guess I was pretty lucky, learning at the age of 19 that preparing for successful disaster recovery is a continuous process. The main output of a disaster recovery development process is those binders. I guess that's why so many people confuse disaster recovery with the binders. But they are only the result of a process; just like a piece of software is created as part of
  • by JCMay ( 158033 )
    That is some very well thought out planning. Big props to those guys!

  • by Bob Vila's Hammer ( 614758 ) * on Thursday September 04, 2003 @01:18PM (#6871124) Homepage Journal
    When your business gets pelted with the equivalent force of 100,000 elephants [slashdot.org], you better have a friggin contingency plan.
  • "Move somewhere where the wind don't blow quite that much" =)

    However, it's amazing how soon after a 'total disaster' a system can be up and running again. I distinctly recall seeing a lot about just that in the paper (the one made from dead wood) after 9/11. Kudos, I say!

  • Twisters, hurricanes, floods (oh my)

    SEPTEMBER 03, 2003 ( CIO ) - The evening of Sunday, May 4, 2003, at Aeneas Internet and Telephone began as any previous Sunday evening had. The Jackson, Tenn.-based company that serves about 10,000 Internet and 2,500 telephone customers was closed for the weekend, awaiting the return of its 17 employees the next morning. Just before midnight, however, all hell broke loose. An F-4 category twister touched down just outside of town, then tore through Jackson's downtown are
  • Fire... (Score:5, Insightful)

    by Shut the fuck up! ( 572058 ) on Thursday September 04, 2003 @01:19PM (#6871131)
    ...is a good enough argument for off site backups. If you don't have them, your backup plan is not enough.
    • Very true, but like everything else, you don't realize how important it is until after the fact.
      • Re:Fire... (Score:3, Insightful)

        by Stargoat ( 658863 )
        Everyone should have off-site backups. It's not very expensive (>100 dollars for tapes). It's not very hard (drive tapes to site). It's not difficult to get the backups if you need them (drive to site with tapes). It just makes sense.
        • Re:Fire... (Score:5, Insightful)

          by Zathrus ( 232140 ) on Thursday September 04, 2003 @01:45PM (#6871381) Homepage
          Everyone should have off-site backups. It's not very expensive (>100 dollars for tapes)

          Er, for how much data? For your personal computer, maybe (but the tape drive will cost you considerably more than that $100), but I don't think you're going to back up a few hundred gigs of business data on ~$100 of tapes. And I suspect you meant <$100 rather than >$100... although if the latter then you're almost certainly correct!

          It's not very hard (drive tapes to site). It's not difficult to get the backups if you need them (drive to site with tapes)

          If your offsite backup is within convenient driving distance then odds are it's not far enough offsite. A flood, tornado, hurricane, earthquake, or other large-scale natural disaster could conceivably destroy both your onsite and offsite backups if they're within a few miles. The flipside is that the further the distance, the more the inconvenience on an ongoing basis and the more likely you are to stop doing backups.

          There's far more to be considered here, but I'm not the DR expert (my wife is... seriously). It does make sense to have offsite backups, but you have to have some sense about those too.
          • Not to mention that once you have those tapes in hand, they're not likely to be much good to you unless you have a contract with a business continuity services group that will allow you to load your data back onto the systems they have on hold.
          • You can use a fireproof safe for near-line backups and then move them offsite in batches, thus seriously minimizing the amount of trouble, while still more or less ensuring that they will be safe. Extra points for putting them in ziplock baggies.
    • yeah... what idiots... i've been keeping off sight backups for years in my closet. i can barely even see them when the door is open!
    • by DiveX ( 322721 ) <slashdotnewcontact@oasisofficepark.com> on Thursday September 04, 2003 @02:02PM (#6871547) Homepage
      Many companies in the World Trade Center thought that off-site backup meant the other building.
      • by hawkbug ( 94280 ) <.psx. .at. .fimble.com.> on Thursday September 04, 2003 @02:14PM (#6871689) Homepage
        Exactly, that statement is very true - I had a buddy who worked for a company there in tower 2. He worked offsite in Iowa, and one day couldn't VPN in to continue his programming. Turned on the news, and you know the rest. The problem was, he had all his Java source on their servers. Sure, they backed it up daily and had an offsite backup in the other tower... The bad news was he lost all his work, and a lot of coworkers. The good news is that the company survived, and simply contracted him on for another 2 years to complete the project. He had to start from scratch, but gets paid more as a result. I'm sure insurance covered the company's losses.
  • by Anonymous Coward
    Hah, they can recover from a tornado. That's no biggie. How 'bout a SLASHDOTTING, then!
    • by cindik ( 650476 ) <solidusfullstop@ ... m ['k.c' in gap]> on Thursday September 04, 2003 @01:37PM (#6871316) Homepage Journal
      That's actually interesting - how many sites have contingency plans for the /. effect? How many businesses? It's not just /.; just about any media outlet can refer people to a real business site. For small companies, this could bring them down for some time. Imagine the "Bruce Almighty" effect, only with some business with a small-to-medium capacity connection, bombarded just because someone used http://www.slashdotme.com/ or spam@.me.into.oblivion.org in their movie. The fact that so many sites are taken down by the /. effect leads me to believe that few sites, or those who run them, are truly prepared.

      • I think a lot of sites already have contingency plans for sudden traffic increases, and if not, they begin to think about them very seriously once they get a large spike in traffic that causes disruption of service. Even with traffic spike contingency plans, the level you establish as the maximum amount of traffic that you need to be able to sustain, and what amount of latency or down time is acceptable to business, can be and often is debated ad nauseum. It costs a lot of money to maintain readiness for
    • by Fishstick ( 150821 ) on Thursday September 04, 2003 @02:03PM (#6871559) Journal
      that's computerworld receiving the /.ing

      the isp is here [aeneas.net]

      picture of the aftermath here [aeneas.net]
  • by Doesn't_Comment_Code ( 692510 ) on Thursday September 04, 2003 @01:20PM (#6871145)
    A Tornado huh?

    Well that's what you casemodders get for installing twenty overpowered cooling fans in every one of your 1000 servers!
  • Tornad'oh! (Score:5, Funny)

    by AtariAmarok ( 451306 ) on Thursday September 04, 2003 @01:22PM (#6871160)
    Let the OZ jokes flow:

    "Bring me the router of the wicked switch of the Qwest!"

    Although, I am starting to wonder. Has anyone checked to see if this ISP has a record of resisting RIAA subpoenas? Perhaps the RIAA levelled it after acquiring cloudbuster [geocities.com] equipment.
  • by ptomblin ( 1378 ) <ptomblin@xcski.com> on Thursday September 04, 2003 @01:23PM (#6871169) Homepage Journal
    A couple of friends of mine were badly burned because the web hosting company they were using lost all their data (customer and their own) in one humungous crash, and didn't have any backups. They didn't even have a spare copy of their customer database, so they couldn't even contact their customers to tell them what was going on. Nor could they tell what customers they had and how much service they'd paid for, etc.
    • A couple of friends of mine were badly burned because the web hosting company they were using lost all their data
      It sounds like your friends got badly burned because they didn't back up their data, not because of their ISP. Always back up your data. That goes doubly so if your data is stored on someone else's computer.
    • A couple of friends of mine were badly burned because the web hosting company they were using lost all their data (customer and their own) in one humungous crash, and didn't have any backups.

      Err, burned or they got what they paid for?

      If your friends really cared about their data, they would still have it. Period.

      Who are their customers going to blame? Not the ISP. ISPs are a commodity item that can be hosted just about anywhere, and I'm sure that some of them provide backups/offsite backups as part
  • . . . how long will it take the article's host to recover from the slashdot effect?
  • by wo1verin3 ( 473094 ) on Thursday September 04, 2003 @01:29PM (#6871229) Homepage
    No, in Russia Tornado does not own you. Neither does ISP. It is not, step 1) tornado step 2) ??? step 3) ISP recovers. There is not a beowulf cluster of these, and the tornado doesn't run Linux.
    • by Trigun ( 685027 ) <evil.evilempire@ath@cx> on Thursday September 04, 2003 @01:37PM (#6871317)
      the tornado doesn't run Linux.

      No, it runs .NET. There's a lot of huffing and puffing, nobody knows too much about it, and in the end your business is in shambles and half your IT staff is no longer.

      -3 Stupid.
    • by MachineShedFred ( 621896 ) on Thursday September 04, 2003 @01:49PM (#6871418) Journal
      I, for one, welcome our new Tornado-beating ISP overlords.
    • How about (Score:3, Interesting)

      by phorm ( 591458 )
      1) Implement good disaster-recovery plan
      2) ??? (aka mad-scramble to initiate plan)
      3) Profit (or at least don't go under)


      This must have been a pretty in depth recovery plan though. I mean, even with backups and a redundant connection elsewhere... I think that for myself processing the fact that my office had just been bowled over by wind-on-steroids would faze me for a little while (office...tornado...holy...shit...must...recover. . .data)

      Now they're up and running, but what of their old office? It mu
  • Build a better building!
  • so... (Score:3, Insightful)

    by 2MuchC0ffeeMan ( 201987 ) on Thursday September 04, 2003 @01:36PM (#6871302) Homepage
    let me get this straight, all the houses around the isp have no power, no phone... but they still need to get online?
  • by MicroBerto ( 91055 ) on Thursday September 04, 2003 @01:36PM (#6871305)
    While that's awesome, I still think that small businesses and big ones should both have offsite tape backups. Even if this means the owner brings back and forth a case of tapes to his home once a week or so. That alone would have saved much of this trouble.

    Then I've seen the other end of the spectrum - a 6 Billion dollar corporation's world HQ IT center... wow. They have disaster recovery sessions and planning like I never would have imagined. Very cool facility, but it has to be like that. Some day if they get burned, it's all over.

  • Truly stunning (Score:5, Insightful)

    by dbarclay10 ( 70443 ) on Thursday September 04, 2003 @01:40PM (#6871341)

    What amazes me isn't that these people were able to restore service to their customers in 72 hours. They used standard systems administration techniques. BGP was specifically mentioned.

    No, what amazes me is that this is news. The IT industry is so full of idiots and morons and MCSEs that taking basic precautions earns you a six-figure salary and news coverage. These folks didn't even have off-site backups, it was luck that they were able to resume business operations (ie: billing) so soon.

    Moral of the story? When automobile manufacturers start getting press coverage for doing a great job because unlike their competition, they install brakes in their vehicles, you know that the top-tier IT managers and executives have switched industries.

    • Re:Truly stunning (Score:5, Interesting)

      by HardCase ( 14757 ) on Thursday September 04, 2003 @01:57PM (#6871495)
      No, what amazes me is that this is news. The IT industry is so full of idiots and morons and MCSEs that taking basic precautions earns you a six-figure salary and news coverage. These folks didn't even have off-site backups, it was luck that they were able to resume business operations (ie: billing) so soon.


      I agree, although maybe not so vehemently. For the IT managers who need a clue, the article is evidence that a sound disaster recovery plan works. Obviously, in the case of the ISP, the plan wasn't completely sound, but the other, possibly more important, point of the article is that the ISP's management recognized that their recovery plan was incomplete. Based on the lessons they learned, they made changes.


      I work for a large (~20,000 employees) company, with about 10,000 employees at one site. The IT department (actually the entire company as well) has a disaster recovery plan in place. But beyond having a plan, we also have drills. As an example, we are in the flight path of the local airport (possibly not the best place in the world for a manufacturing site). What happens if a plane crashes smack in the middle of the plant? Hopefully we'll never know for sure, but the drills that we've run showed strong and weak points of the disaster plan. The strong points were emphasized, the weak points were revised and the disaster plan continues as a work in progress.


      Specifics aside, and maybe this is just stating the obvious, but considering a disaster recovery plan to be a continuously evolving procedure could be one of its strongest points.


      -h-

    • In defense of the "idiots", many IT people and system administrators are hobbled by the lack of time, money, and equipment. There is the "right way" to do things and the "real world" way to do things. If management isn't willing to spend the money, and doesn't care, what can you do? At my last job, I had to bring a spare CD-RW drive and blank CDs in to work from my home to back up the critical files on my work PC.
  • OK I just may be jaded I work in a sector that thinks 5 minutes is earth shattering amounts of downtime. 72 hours would have me everybody that works for me and some C level guys fired at the companies I work for. First things first what did they do wrong backups stored on site this is page 2 of a disaster recovery howto backups need to be stored onsite and remote, they also need to be verified as functional (yes I am that manager that insists that servers be restored and checked for functionality on the bac
    • Yup... definitely a manager concerned about the minutes, rather than the details.

      Details like it not being one box or even one rack that went down, but ALL RACKS, ALL WIRES, ALL ELECTRICITY, ALL WALLS, FLOORS, AND CEILINGS.

      Also too busy to bother with details like punctuation or a proper paragraph from the look of it...
    • I think I speak for everybody when I say, "Uh, what?"
    • That's like saying it's unacceptable for a 747 to fly without wings - they're a midsized ISP, certainly large enough that offsite backups would be wise, but be reasonable for a second; the entire physical facility was completely obliterated, electronics, electrics, offices, building... 72 hours isn't THAT bad. 24 would be impressive enough to get a headline though, IMHO.
    • yes I am that manager
      So that's why your post is such a lovely formatted and readable text ;)
  • by EvilTwinSkippy ( 112490 ) <yoda.etoyoc@com> on Thursday September 04, 2003 @01:45PM (#6871378) Homepage Journal
    Our ISP was leveled in a Tornado.
  • but isn't the new moderation system leading to the first few good posts on any topic all getting modded up to 5 while the rest get ignored?
  • "Though the tape and hard drives were stored onsite at the Jackson location, Hart and Warren figured onsite backup was better than none."

    They had to recover the drives from the rubble and after numerous failed attempts, finally found a data extraction company that could retrieve the data.

    While their recovery and foresight are impressive, I don't think we should raise them up as the example, when they omitted something as simple as carrying a backup home every once in a while. They got lucky, with regar
    • I tend to agree. Hell, my insurance carrier questions me, every year, if we maintain offsite backups. YES (!)

      This isn't a debate on this backup method or that one -- just the fact that you NEED ONE. I personally gave up on tape and went to live hard drives for pure ease and speed, while at the same time cutting costs drastically. All servers, RAID-5, dump their data/configurations to a local RAID-1 IDE based system (encrypted of course).

      Daily it's running 35-40G currently. Dump that data to a portable dri
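A rough sketch of that nightly dump-and-carry pattern, under stated assumptions: encryption is GPG to a key that already exists, the RAID-1 staging volume and portable drive are mounted at made-up paths, and the source directories are placeholders rather than the poster's real setup:

```python
# Rough sketch of the nightly routine the parent describes: tar up the data,
# encrypt it, park it on the local RAID-1 staging volume, and, when the
# portable drive is mounted, copy it there for the trip offsite.
import datetime
import pathlib
import shutil
import subprocess

SOURCES = ["/etc", "/var/www", "/srv/data"]      # hypothetical data to protect
STAGING = pathlib.Path("/mnt/raid1/backups")     # local RAID-1 staging area
PORTABLE = pathlib.Path("/mnt/portable")         # portable drive, if mounted
GPG_RECIPIENT = "backup@example.com"             # assumed pre-existing key

def nightly_dump() -> pathlib.Path:
    """Write one encrypted tarball per night into the staging area."""
    STAGING.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    archive = STAGING / f"backup-{stamp}.tar.gz.gpg"
    # Stream tar through gpg so the unencrypted archive never hits disk.
    tar = subprocess.Popen(["tar", "czf", "-", *SOURCES], stdout=subprocess.PIPE)
    with open(archive, "wb") as out:
        subprocess.run(
            ["gpg", "--encrypt", "--recipient", GPG_RECIPIENT],
            stdin=tar.stdout, stdout=out, check=True,
        )
    tar.wait()
    return archive

def copy_for_offsite(archive: pathlib.Path) -> None:
    """Stage a second copy on the portable drive that gets carried home."""
    if PORTABLE.is_dir():
        shutil.copy2(archive, PORTABLE / archive.name)

if __name__ == "__main__":
    copy_for_offsite(nightly_dump())
```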
  • by Daniel Wood ( 531906 ) * on Thursday September 04, 2003 @02:14PM (#6871686) Homepage Journal
    I am also a former Aeneas customer.
    Unless Aeneas has made some major changes they are quite certainly the worst ISP I have ever worked with. Aeneas has contracts with the Jackson-Madison County School System to provide internet service district wide. The quality of such service is, bar none, the worst I have experienced.
    I did some volunteer work at a local Elementary school helping teachers work out any lingering computing problems they had (Virii, printer drivers, misconfigured IP settings, file transfer to a new computer, etc). The internet service I experienced while I was there led me to believe I was on a 128k ISDN line. Not until I went to the server room did I realize that I was, in fact, on a T1. Now this was during the middle of summer; maybe four other persons were in the building, three of which were in the same room as myself. The service was also intermittent, having several dead periods while I was working. Needless to say, I remained unimpressed by said experience.

    When I was an Aeneas dialup customer, in 1998, the service provided by Aeneas was also subpar. The dialup speeds averaged 21.6kbps, whereas when I switched to U.S. Internet (now owned by Earthlink) my dialup speeds were always above 26.4kbps (except on Mother's Day). There were frequent disconnections, and they had a limit of 150hrs/month.

    I'm not surprised how easy it is to restore subpar service. All they had to do was tie together the strings that are their backbone.
    • When I was an Aeneas dialup customer, in 1998, the service provided by Aeneas was also subpar. The dialup speeds averaged 21.6kbps, whereas when I switched to U.S. Internet (now owned by Earthlink) my dialup speeds were always above 26.4kbps (except on Mother's Day). There were frequent disconnections, and they had a limit of 150hrs/month.

      Have you never learned what line quality means? Not just from you to your local POP, but beyond the local loop, on the trunks that go across town (or further) to

  • Really ought to have all their hardware in a basement, with a diesel generator to run both it and the pumps (gotta keep the water out of the subbasement somehow.) Of course all the phone poles in the area would be down so unless all their communications wiring was buried (ha ha ha) then there's not much point to being up.

    Point is, if I had more than just a few thousand dollars worth of equipment, especially if I had a million's worth, I'd want to keep it safe. This is earthquake country (California) so he

  • if I were doing a bunch of levelling by a tornado. I mean, I usually do my levelling in locations such as the Royal Crypt south of Endor (Dragon Warrior IV), the Northern Crater (Final Fantasy VII), or Zeon's Lair (Shining Force II), but certainly not by a tornado.
  • by sllim ( 95682 ) <`achance' `at' `earthlink.net'> on Thursday September 04, 2003 @02:32PM (#6871908)
    The company I work for practices disaster recovery once a year on all our major systems.

    In the article the writer was talking about how much work it was to migrate the T1 connections, and how they hadn't foreseen that. That is exactly the sort of thing that a practice disaster recovery uncovers.

    If you want the model from the place I work it is simple enough:

    1. Run the disaster recovery during a 24 hour period
    2. Pat yourself on the back for what worked.
    3. Ignore what doesn't work.
    4. Repeat next year.

    Of course next year gets a new step:
    3.5 Act surprised that stuff didn't work.

  • by fuqqer ( 545069 ) on Thursday September 04, 2003 @02:37PM (#6871977) Homepage
    72 hours seems way too long to be out of business. That's 3 days the ISP is not pulling in dough. Unless the whole internet is crippled, I'd ditch an ISP that was out for three days. One of the main selling points for an ISP is connectivity: rain, snow, shine, OR rabid squirrels...

    The company (ISP/consulting/services hosting) I used to work for had a DR plan to be executed in 24 hours with 75% functionality. Offsite servers and backups of course...

    More impressive to me is the World Trade Center folks like American Express and other companies that had DR plans situated across the river. A lot of datacenters and information services were functional again within 18-24 hours. That's PPP PPP (prior planning prevents piss-poor performance).

    I write good sigs on my bathroom wall...but this is not a real sig.
    • My Dad worked in the IT department at one of those banks, across the street from the WTC. I found it interesting that according to him, the year-2000 bug scare turned out to be a big help when the real disaster struck. Of course, their systems were orders of magnitude more complex than this ISP's, but then they had that much more redundancy built in to everything.

      Prior to 2000, they built an entirely new system and ran it in parallel with the current one, for six months. Every transaction went through bo

  • this is exactly why i have my backup tapes stored offsite. they're actually on a two week rotation. the current week is onsite - too frequently i have to get something off yesterday's tape because someone hosed a project file or changed their mind after emptying the trash - and the previous/next week's tapes are stored in my secure, climate-controlled offsite facility.

    okay, it's my house, but it counts.

    if my house burns down, it's unlikely the office will suffer the same fate, and vice versa - it's a 20 m
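For illustration, a tiny sketch of that two-week rotation, assuming the tapes are split into two sets by ISO week parity; the set names and label format are invented, not the poster's actual scheme:

```python
# Illustrative only: a two-week tape rotation where the current week's set
# stays onsite for quick restores and last week's set sits offsite.
import datetime

def tape_set(day: datetime.date) -> str:
    """Even ISO weeks use set A, odd ISO weeks use set B."""
    return "A" if day.isocalendar()[1] % 2 == 0 else "B"

def rotation_status(today: datetime.date) -> dict:
    """Report which set stays onsite this week and which sits offsite."""
    onsite = tape_set(today)
    offsite = "B" if onsite == "A" else "A"
    return {
        "onsite (current week, quick restores)": f"set {onsite}",
        "offsite (previous week, disaster copy)": f"set {offsite}",
        "label for tonight's tape": f"{onsite}-{today.isoformat()}",
    }

if __name__ == "__main__":
    for role, value in rotation_status(datetime.date.today()).items():
        print(f"{role}: {value}")
```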
  • tape backups? (Score:3, Interesting)

    by Musashi Miyamoto ( 662091 ) on Thursday September 04, 2003 @03:03PM (#6872286)
    From the article, it looks as if the only thing they had to restore from tape/disk was their customer database, so that they could send out the next month's bills. So, the 72 hours was basically putting in new hardware and turning it on. They probably lost all their users' web sites and other "expendable" data.

    How about talking about disaster recovery for a REAL company with tens to hundreds of terabytes of data sitting on disk? The kind of data that you cannot lose and must have back on-line asap?

    This article is like congratulating them for putting up detour signs when a road is destroyed, or rerouting power when a power line goes down.

    Just about everything that was destroyed was not-unique, manufactured items that could be recreated and repurchased. The only exception was the user data, which was pulled off of a nearly destroyed drive by a data recovery company. (Lucky for them!)

    I would like to hear more about companies that lose tons of difficult to replace, unique items, such as TBs of user data, prototype designs, business records, etc.

    I would bet that if a company were to permanently lose these types of things, they would nearly go out of business.
  • by Tsu Dho Nimh ( 663417 ) <abacaxi@ho t m a il.com> on Thursday September 04, 2003 @03:06PM (#6872320)
    I was playing minute-person at a "disaster recovery" meeting (the first one) where high-level suits were figuring out what to do in case of a disaster at their multi-state bank. Their core assumptions were initially as follows:
    • They would all survive whatever it was. (I was looking out the window, and seeing jetliners coming in for a landing ... a few feet too low and the meeting would have been over).
    • All critical equipment would survive in repairable condition.
    • Public services would not be affected over a wide area or for a long time.
    • Critical personnel would be available as needed, as would the transportation to get them there.
    • The disaster plan only needed to be distributed to managers, who would instruct people what to do to recover.

    That was on a Monday. The next Monday was the Northridge quake.

    • One critical person woke up with his armoire on top of him, and a 40-foot chasm between him and the freeway.
    • One of their buildings was so badly damaged that they were banned from entering ... and there was mission-critical info on those desktop PCs. Had it not been a holiday, the casualty toll would have been horrendous.
    • The building with their backups was on the same power grid as the one with no power and the generators could only power the computers, not the AC they also needed.
    • None of the buildings had food or water for the staff who had to sleep over, nor did they have working toilets or even cots to nap on.
    • One of the local competitors was back in business Tuesday morning, because their disaster plan worked. They rolled up the trailers, swapped some cables and were going again.

    They came into the next meeting a couple of weeks after the quake with a whole new perspective on disaster planning and training:

    • Anyone who survives knows what the disaster plan is and copies of it are all over the place.
    • Critical equipment is redundant and "offsite" backups are out of the quake zone.
    • They have generators and fuel enough to last a couple of weeks for the critical equipment and its support, plus survival supplies for the critical staff. This is rotated regularly to keep it from going stale.
    • They cross-trained like mad.
    • They started testing the plan regularly.
    • by Zachary Kessin ( 1372 ) <zkessin@gmail.com> on Thursday September 04, 2003 @03:22PM (#6872568) Homepage Journal
      Well a solid disaster plan would (if you are big enough to afford it) have a second location far away. If you had a location in California and a second, say, in Boston, you would be OK. Of course that costs a lot of money and many small to mid-sized firms could not afford it in the first place.

      But one thing with disaster recovery is you need to figure out what is and is not a disaster you should worry about. I live in Jerusalem; terrorism is something very real here but mostly hits soft targets. On the other hand, major blizzards are a non-issue. In Boston we worried about Nor'easters and occasionally a hurricane. If you live in Utica, NY you probably don't have to worry too much about terrorism. Fire can happen anywhere.

      I don't know how you figure out what is or is not a probable event in your location. I suppose you talk to the insurance folks they have spent a lot of time figuring this out.

      The other question is how much recovery can you afford? If your disaster recovery plan puts your company into chapter 11 it was not a very good plan.

      I like saying "Utica"
  • by account_deleted ( 4530225 ) on Thursday September 04, 2003 @03:30PM (#6872664)
    Comment removed based on user account deletion
  • Hmmm... (Score:3, Funny)

    by EverDense ( 575518 ) on Thursday September 04, 2003 @03:55PM (#6873000) Homepage
    You should have posted a link to the ISP's website.
    Then we could've kicked a dog while it was down.
  • Not good enough (Score:3, Insightful)

    by vasqzr ( 619165 ) <vasqzr@@@netscape...net> on Thursday September 04, 2003 @04:01PM (#6873102)

    When you go to a DRP seminar, they make the claim that the majority of businesses that are knocked out for longer than 48 hours go out of business within 1 year.
  • by n7ytd ( 230708 ) on Thursday September 04, 2003 @04:36PM (#6873459)
    Miraculously, the vendor discovered a recent copy of the customer records database on all four computers and was able to recover all of the customer data and return it to Aeneas, delaying printing of its May bills only minimally.

    This was from a magazine for managers, after all. Now there's some good news that pointy-haired bosses can understand!

  • by jtheory ( 626492 ) on Thursday September 04, 2003 @05:05PM (#6873773) Homepage Journal
    Did anyone else read "Kroll OnTrack" as "Troll OnKrack"?

    Wait, did anyone else even read the article?
    Oh, never mind.
