Bug Media The Almighty Buck

How Would You Handle a $1,000,000 Coding Error? 878

theodp writes "The Chicago Tribune's efforts to upgrade its computer system over the weekend turned into a fiasco when the system crashed, halting all printing operations and leaving about half of the Trib's subscribers without papers. The software contained 'a coding error,' according to a spokesman who estimated the cost to resolve the problem at 'under $1 million.' Any advice for the poor schmuck who's going to get the blame?"
This discussion has been archived. No new comments can be posted.

How Would You Handle a $1,000,000 Coding Error?

  • by justanyone ( 308934 ) on Monday July 19, 2004 @11:45PM (#9744919) Homepage Journal
    I didn't get my paper this morning and was angry until I read this.

    I'm not angry anymore, I'm sympathetic for the poor schmuck as well as all the customer service people who probably got yelled at this morning.

    -- Kevin J. Rice
  • Fix it. (Score:5, Interesting)

    by wideBlueSkies ( 618979 ) * on Monday July 19, 2004 @11:52PM (#9744996) Journal
    Simple enough.

    Take responsibility and ownership of the problem. Don't make excuses, but give real reasons.

    Fix it. Do whatever it takes, even if it means working over a weekend.

    Write a good post-mortem, explaining how the fix differs from the original problem.

    And hope to god that your management is understanding enough to keep you on.

    This is coming from a guy who, in 1997, blew a $100,000 test weekend by kicking off the systems tests by loading the wrong generation of tapes.

    I took the blame, and expected to lose my job. But I knew that the right thing to do was to try to recover from the problem. I stayed in the office from 1:00AM Sunday to 10:00AM Monday morning rerunning every job and report and proving out the results.

    Not only did I keep my job, but I got promoted a year later. I made a name for myself that weekend....sure I could f*k up, but I work hard to keep things right for the company.

    wbs.
  • by Anonymous Coward on Monday July 19, 2004 @11:53PM (#9745003)
    Years ago, when the then-new FBI headquarters building was being built, the foundation reinforcing steel was installed incorrectly. The North/South design orientation had actually been installed in error as East/West. It was an error which cost the company thousands of dollars to correct.

    The foreman responsible for the error wasn't fired, to the surprise of almost everyone. The owner was asked why the guy wasn't fired. He answered something along the lines of,

    "That mistake cost the company $10,000. He's never going to make that mistake again. I paid for his education. If he's fired he'll go to work for another company. Why should I let another outfit get the benefit of the lesson I paid for?"
  • by TheTXLibra ( 781128 ) on Tuesday July 20, 2004 @12:02AM (#9745092) Homepage Journal
    True story. I was working an assignment as a tester for Microsoft. I apologize for the use of variables rather than names, but I don't want to get sued for breaking NDA. There was a deadline on the release, and if we missed it, there was a penalty of $1 per copy shipped. 20 million copies were due to be shipped on date X. The day of date "X", we realized there was a fatal bug that caused Product "Y" to crash after running any segment that lasted longer than "Z" minutes. Somehow, I'd completely missed this bug. I have no idea how, don't ask, but I completely missed it. We even checked back three months' worth of revs... the bug was still there in each one. Of course, the product was late, costing Microsoft a whopping $20 million. What did I do?

    I was "allowed" to resign gracefully and quietly, and I have learned a valuable lesson about software testing: it's not whether you miss something, it's whether or not someone else will find it in time to cost you your job. (nods sagely)

  • Re:Deployment? (Score:3, Interesting)

    by Alien Being ( 18488 ) on Tuesday July 20, 2004 @12:08AM (#9745150)
    The article said that the problem was in transmitting the pages from the newsroom to the printing facility across town. I wonder if they could have used a removable hd and a motorcycle as a backup plan.
  • Funny you should mention that. According to the Chicago Tribune [chicagotribune.com] (subscription required),

    ...technology crews started a planned upgrade to increase the newspaper's Sun Microsystems servers from so-called 10K models to 15K machines. To do this, experts from the company that makes the newspaper's core Windows-based publishing software, Denmark-based CCI Europe A/S, needed to install upgrades of its Newsdesk brand software that the Tribune and other clients use.

    So was it Sun or Microsoft?? Or maybe Apple?

    Frantic hours went by as deadline after deadline slipped while crews struggled to find a fix. Malone said he went so far as to start setting up the newspaper's pages on the art department's Macintosh desktops, hoping to get at least something printed.
  • Bah (Score:5, Interesting)

    by Sandman1971 ( 516283 ) on Tuesday July 20, 2004 @12:18AM (#9745238) Homepage Journal
    Bah, this is absolutely nothing compared to the coding error that brought down Canada's Royal Bank last month, leaving millions of customers without paychecks, access to their accounts, etc.... And this too was attributed to human error [globetechnology.com], but had far more drastic repercussions than not getting your morning paper, and cost RBC a heck of a lot more than a million dollars.
  • Re:Just one (Score:3, Interesting)

    by Rei ( 128717 ) on Tuesday July 20, 2004 @12:49AM (#9745460) Homepage
    I'd suggest that the coders ask the developers of the Therac-25 [vt.edu] how they dealt.
  • by theshowmecanuck ( 703852 ) on Tuesday July 20, 2004 @12:51AM (#9745469) Journal
    In my experience, it probably was not totally the QA department's... or the coder's... fault either. It was probably shitty managers paying too little attention to the need to allocate sufficient time for QA and realistic testing environments.

    Most project managers (especially ones with no technical experience... who shouldn't be let near a technical project) plan their project timelines through rose-coloured glasses. They assume there will be no coding issues discovered in testing. Or worse, they do, but then let scope creep come into it, and borrow time from testing for the new items introduced in the scope creep. Bye-bye testing time.

    Mind you, I have also seen QA managers who believe that the testers only need to understand the software, and not the business where the software is to be used. This sometimes leads to problems in end use. In any case, I tend to blame poor management before I blame the little guy. Projects like this are big enough that the process should have been able to catch things like this... unless the process was flawed.

    My opinion... ready, set, slag away!

  • Re:Just one (Score:5, Interesting)

    by TastyWords ( 640141 ) on Tuesday July 20, 2004 @12:56AM (#9745499)
    I believe there are apocryphal stories about the guy who made a $27M mistake and told his boss, "I guess I'm fired, huh?" and the response was, "No, I just spent $27M to educate you."

    That, and the story from one of Tom Peters' books about the guy who rented a helicopter on the fly (pun intended) to get up to the top of a mountain to restore client service. I consider these to be things we'll never see, only hear about.
  • by Anonymous Coward on Tuesday July 20, 2004 @12:56AM (#9745500)
    I have two million in errors-and-omissions coverage, and a five-million umbrella policy. If I make a million-dollar error... it's covered.

    These levels of coverage are pretty standard, so I'd be a bit surprised if the person responsible actually had a problem.
  • by kent.dickey ( 685796 ) on Tuesday July 20, 2004 @01:09AM (#9745554)
    I agree, $1 million is really not a big deal.

    The problem is that customers of software do not really understand how they need to treat software upgrades.

    Here's a useful analogy: A customer getting a software upgrade should treat it the same way they would treat being moved to a brand-new building. Sure, the building contractor might say the new building is exactly like the old one except for a minor change, and that they have installed exact copies of all the equipment from the first building.

    Software upgrades are like this except the new building has no warranty, and to save money, the customer burns down the old building before even inspecting the new one.

    So whose fault is this?
  • by Altus ( 1034 ) on Tuesday July 20, 2004 @01:11AM (#9745560) Homepage
    Absolutely.

    A few years ago I worked for a publishing company that sold software to newspapers and magazines for publishing (mostly ad layout stuff). We became the reseller of a piece of content management software that was being customized by us and installed (for the first time ever, anywhere) at one of the larger magazines published by one of the largest mega-media companies.

    We didn't just rush in headlong and try to install and run the software in production the first time. For a while the system ran in parallel with the production system as a proof of concept (just a few of the pages at the time). Then, when it was deemed ready, those few pages were published live out of the system (we still had other sources if it went bad).

    The system worked as designed and we were able to publish the pages out of it. Unfortunately the software wasn't very useful or cost-effective, so the project was ultimately scrapped. Still, this is obviously the way to handle something like this: don't just rush headlong and detach your old software and systems for the new ones. Run them in parallel in a production environment... it's really the only way to be sure.

  • by plover ( 150551 ) * on Tuesday July 20, 2004 @01:14AM (#9745572) Homepage Journal
    The Tribune acquired customized software for the upgrade from an outside provider, and it contained a "coding error," he said.

    As you pointed out, QA should have caught something this basic. There had to be a lot of careless decisions made here, and none of them are necessarily any one coder's fault. Blaming a "coding error" is simple, and makes people forget that a manager didn't do their job correctly. I've seen this particular scenario played out a dozen times before:

    Last Monday Suzy Manager shouted at her team, "The schedule says we install on July 18th, so this damned product damned well better be installed on July 18th, you all got that?!"

    But the vendor's ship dates slipped, and testing dates got pushed back, even though there was nothing particularly important about July 18th; except for Suzy Manager's promise to the CIO that she'd get WhizBang 2.0 installed by July 18th. And she would, too -- she had 25 points on her review riding on that very promise.

    By the 14th, when a new patched version arrived that fixed the bug they discovered on the 10th, Suzy was visibly distressed. "They damn well better have that transmit bug fixed, they've been dragging their feet long enough."

    Perhaps the testers just kept testing the version from the 10th instead of upgrading to the version of the 14th. It was beautiful on Saturday, so maybe the tester called in with a bad case of 'weekend flu.' Perhaps they got the patch late Friday afternoon, and the vendor swore up and down that it was just one little bug, our guy knows it's fixed, don't worry, it's better now. Whatever -- Suzy was under the gun, so she simply said "ship it."

    Regardless, some nameless coder is flapping in the breeze today. Suzy is probably running around the IT department at the Tribune screaming, "we'll never buy code from those bastards again, I swear!" in a vain attempt to deflect criticism from her department.

    But the CIO usually knows better, and Suzy knows the CIO knows better, and she's already sent out her interview suit to the cleaners. Even so, she'll feign total surprise to her department as she boxes up the little wooden carving she picked up during a drinking cruise to Mazatlan a couple years ago. A couple of tears later, she's interviewing over at Microsoft Consulting Services.

    Or, maybe I'm completely off the mark. Perhaps they've been testing the code for a month and it's worked fine, but they installed the new code with the old libraries, or the new libraries with the old code, or the destinations were SP2 with some new security turned on. Of course, the QA department should be testing the installation packages as well, but we all know that in hindsight, right? As Yogi Berra might once have said (were he an IT manager,) "In theory, there's no difference between the lab and production, but in production there is."

  • Been there (Score:5, Interesting)

    by Inthewire ( 521207 ) on Tuesday July 20, 2004 @01:21AM (#9745611)
    I write software for a company that handles $45,000,000+ of client cash every week.
    A mistake I made in May (discovered this very day, by yours truly) had backed up about $400,000 per week.

    Did I get stomped?
    No.

    A bottleneck had been identified, repaired, and eliminated!
    Behold the power of positive thinking.

  • by prockcore ( 543967 ) on Tuesday July 20, 2004 @01:25AM (#9745639)
    I'm a programmer for a large (US) national newspaper chain, and screwing up the publication cycle is somewhat more common than you might think.

    We had a reporter screw up and drag a folder into the trash instead of the volume it was in (MacOS is absofuckinglutely retarded for having you unmount volumes by dragging them to the trash).

    He went on with his business, and then around 5pm he emptied the trash. He suspected something was wrong when it was taking over 5 minutes to empty the trash.

    Turns out the folder he trashed contained *all* the quark documents for the paper (the next day's stories and advance stories).

    While there were backups, some people had to scramble to rewrite their stories. The paper was a little light the next day.

    That's the problem with OS9 and OSX. The users need permission to delete stories in order to have permission to modify stories.
  • by Anonymous Coward on Tuesday July 20, 2004 @01:36AM (#9745693)
    I was in charge of a very similar upgrade, albeit at a much smaller paper. We had plans in place to build the pages manually if the system didn't come up in time. It seems that the people at the Tribune have gotten so used to automation they forgot that they didn't use to have these systems. I am surprised that the second things started to go wrong they didn't have people instantly creating the paper the old-fashioned way. The real cost isn't in the code but in the advertising dollars they won't collect because the paper didn't get out on time. That hurts.
  • Re:Testing? (Score:2, Interesting)

    by ryen ( 684684 ) on Tuesday July 20, 2004 @01:49AM (#9745741)
    Let's step back for a moment and tell ourselves that it was more than just a "transmission" problem, and that simply driving a disc up to their north side plant wasn't the issue at all.
    Because if that's really the case, then the Tribune has bigger idiots working for it than the "outside providers" working on the software.

    I'm going to assume that's not the case, and that there is more to it than just a "transmission" problem. I'm thinking it had more to do with the way the software controls the machinery, if that's applicable.
  • Re:Dogbert Strategy (Score:4, Interesting)

    by ooze ( 307871 ) on Tuesday July 20, 2004 @03:11AM (#9746130)
    Wasn't that a Nietzsche quote? Sort of:

    Money lost is money best spent, since it directly pays off into wisdom.
  • Re:planning? (Score:5, Interesting)

    by geekoid ( 135745 ) <dadinportland&yahoo,com> on Tuesday July 20, 2004 @03:18AM (#9746172) Homepage Journal
    Exactly.
    I cannot count the number of battles I have fought just to get some time to design an emergency rollback plan.
    I wish I had the balls to jump up in an emergency meeting and scream "I TOLD YOU TO GIVE ME A FEW DAYS SO I COULD DESIGN A ROLLBACK PLAN, ASSHOLE. BUT NOW ALL THE DATA IS CORRUPTED, AND WE CAN'T DO ANYTHING ABOUT IT BECAUSE OF YOU!!"

    Instead, I just keep a copy of the emails where I made the request and was denied, and then forward them to the CTO.
  • by Zerikai ( 645450 ) on Tuesday July 20, 2004 @03:20AM (#9746184)
    Well, although I can't say the guy did a great job... if the DB was so important, why was there no regular backup?

    You are pointing out two problems taking place simultaneously.

    One is a minor human error, but it is obviously an unintended act.

    NOT having a recent-enough backup IS a serious issue. This issue has been pending for, as you say, 6 weeks, and it is a critical issue (if the data is valuable as you seem to imply).

    You do not go around deleting all entries in your DB for fun, but you know some software is going to go bananas on you one day and start messing with your DB, whether it is in such an obvious form as deleting all the records or simply altering them all in a subtle way that takes a while to notice... (change all prices from euros to dollars?).

    A successful project or business is much more than the sum of little individual acts. There is such a thing as planning for things going wrong. And in this day and age, a database backup is no longer a problem.
  • by nevets ( 39138 ) on Tuesday July 20, 2004 @03:32AM (#9746229) Homepage Journal
    This is pretty much what happened with the first launch of the shuttle. Remember when Columbia was set to lift off for the first time, and the mission was scrubbed just around the final 10-count due to a software error? Many programmers then searched for the problem, and it was finally found by the guy who made the mistake! Of course this guy got a huge bonus for finding it, although no one seemed to care that he was the one who made it. But that's the life of a programmer :-)
  • by carldot67 ( 678632 ) on Tuesday July 20, 2004 @03:51AM (#9746294)
    "The poor schmuck" will, in my experience, have spent the last 18 months hearing phrases like:

    "Time / Quality / Functionality: Choose Two"
    "You can't test quality into a system"
    "Measure twice, cut once"
    "We need to parallel run the UT system"
    "Engineers shouldn't be testing their own code!"
    "I wouldn't be using NT for that, mate"

    and so on.

    These are the words technical people use to warn management of impending doom. Managers on the other hand have other things to worry about like delivery dates, sales, penalty ratchets and so on. When the "go" decision was made it will have been made by senior managers who get paid the big bucks to take the big decisions and the big sh*t when it all goes pear shaped.

    The question is how the management handled mitigation by way of backups to manual processing, rollbacks to the old system or risk analysis during project planning.
    Automation of an entire printing plant is a big job and it is probable they planned for a failure as a worst case scenario and will just put the 1M loss down to experience.
  • by warrax_666 ( 144623 ) on Tuesday July 20, 2004 @04:52AM (#9746490)
    it's easier [...] to debug perl than many languages simply because you have less lines you have to look at to figure out whats going on.


    Take a look at some K [kx.com] code (there are examples in the user manual) and then come back and say that. If K is too exotic, then try looking at some macro-heavy LISP code -- it has the same problem, just slightly less so.

    Code density can be good when you're trying to see the big picture (fewer screenfuls of code is a good thing in this case), but it can work against you when you're trying to understand the little details.


    since you get to look at regex patterns to figure out what's going on (looks to hard to manage? get over it. Regex is a small, orthogonal set of commands).

    Regular expressions are nothing more than a hack to make up for the fact that generalized LR parsers were quite inefficient up until a few years ago. Just compare a reasonably complex regular expression to the BNF form of a grammar for parsing the same input to see how much easier GLR is to use -- you can see some examples of just how easy GLR parsing is to use here [sourceforge.net]. And it can actually handle more general patterns with nesting, etc. I really think regexes are just a case of premature optimization -- with GLR you start out with an incredibly readable and simple grammar, and if it proves to be slow (i.e. if there are lots of points of ambiguity along certain parse trees) you can optimize it towards a purely LR(k) grammar.
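
    The nesting point is easy to demonstrate with a toy example (Python here, purely for illustration): a single regex can only check parenthesis balance up to a fixed depth, while the trivial counter that a grammar rule like `expr := '(' expr* ')'` effectively maintains handles any depth:

```python
import re

# This regex handles nesting up to depth 2 only; every extra level of
# nesting would require making the pattern bigger.
DEPTH2 = re.compile(r"^\((?:[^()]|\([^()]*\))*\)$")

def balanced(s):
    """Balance check of arbitrary depth, as a grammar does naturally."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0
```

    The regex accepts "(())" but gives up on "((()))", while the grammar-style check accepts any balanced string; that is the "more general patterns with nesting" advantage in miniature.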
  • by HenchmenResources ( 763890 ) on Tuesday July 20, 2004 @04:54AM (#9746495)
    Working for a newspaper myself I can vouch for the stupidity of Management when it comes to new software.

    /rant

    The newspaper I work for recently purchased a production system (server, archive, workstations, etc.). The problems we saw came about because management went about looking for a new system the same way parents go about looking for their kid's first car, i.e. "How much is this going to cost us?" versus "Will this system be easy to migrate to, both from an IT standpoint and for end users, and will it cover all our needs?"

    The consequence of management's decision is that my newspaper now has a system parts of which are still being beta tested (at the expense of my work, not the company's we bought it from), a system that before hours of hard work by our IT staff just kind of randomly lost files, and still does on occasion, and a system whose archive database is limited by the number of entries and not by how big its hard drives are. What this means is that our archive will be full after a year and a half, two years if we're careful.

    All of this came after our IT department and all of their advisors let our management know that the wise decision would be to go for the next-least-expensive system, one that has shown itself to be a good, reliable system through use at multiple large newspapers. Did management listen? No, they just saw a price tag. What it got us is a system that, after all the extra work that had to be done by our IT staff, is mostly usable, and it cost us more than management's second-choice system. On top of all that, the new system almost kept us from putting out our paper. The final effect of putting in this system is that jobs had to be cut to cover the extra costs.

    /end rant

    So like theshowmecanuck says, yeah, the code may have been flawed, but that should not have been a problem had management gone about implementing the code properly. Had they done proper testing before implementation, this problem might have been avoided. The same goes for the system I'm forced to use: sure, there are bugs in the code, but we got what we paid for when my company purchased a system that still has parts of it in beta.

  • Re:UAT/QA anyone? (Score:3, Interesting)

    by MikeHunt69 ( 695265 ) on Tuesday July 20, 2004 @04:54AM (#9746496) Journal
    I work as a performance tester, and typically make more than your average coder (current rate is UKP50/hr or about US$90/hr, which is actually a little low at the moment but the job is walking distance from home).

    What's cheaper: getting me to write scripts for a couple of weeks and then simulate a 3000-user test, or paying 3000 users to come in on the weekend and test the system?
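
    The scripted half of that trade-off can be sketched in a few lines. This is a generic illustration, not the commenter's actual tooling: fan out simulated users over a thread pool, have each one perform a series of actions against whatever callable stands in for the system under test, and collect the results so failures can be counted afterwards:

```python
from concurrent.futures import ThreadPoolExecutor

def run_load_test(request_fn, n_users, actions_per_user):
    """Simulate n_users concurrent users, each doing several actions.

    request_fn(user_id, action_index) stands in for one request to the
    system under test.  Returns a list of per-user response lists so
    the caller can count errors, timeouts, etc.
    """
    def one_user(user_id):
        return [request_fn(user_id, i) for i in range(actions_per_user)]

    # Cap the worker count so a 3000-user test doesn't spawn 3000 threads.
    with ThreadPoolExecutor(max_workers=min(n_users, 64)) as pool:
        return list(pool.map(one_user, range(n_users)))
```

    Real tools (LoadRunner, JMeter, and the like) add ramp-up schedules, think times, and response-time percentiles on top of essentially this loop.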

  • by Scud ( 1607 ) on Tuesday July 20, 2004 @05:46AM (#9746635)
    Which time? I'm the guy who (unintentionally) wrecked the first Saturn ever wrecked (job #65). Since then I've wrecked one other (job 2 million and something), so my track record isn't that bad :)

    Most of the time you don't actually break something (be it product or be it equipment), but fixing the bug and getting everything rolling again takes time.

    And since the "value" of the product that is running on the line is about $5000 a minute, time is indeed money.

    I've probably had a couple 1+ hour breakdowns, but this doesn't even compare to the time my buddies plant went down for three days x 2 shifts per day ($14M).

    They were Lear-jetting parts in on a daily basis (they kept blowing up the new stuff and didn't seem to have the sense to order spares). Ron would show up at the service entrance at the airport to pick them up and it got to the point where the guys would just open the gates when he drove up :)

    My most recent one was when we changed the line speed of the skillet line and a faulty thumbwheel switch opened up the 8's bit in the ten's digit, so that instead of running at 42 jobs an hour it was trying to run at 80 JPH (it would have tried to run at 122, but it's limited in the software to 80 JPH).

    Zoom zoom.

    Oh wait, that's the other guys :)

    John

  • Re:Just one (Score:5, Interesting)

    by ray-auch ( 454705 ) on Tuesday July 20, 2004 @05:52AM (#9746643)
    Not true - plenty of jobs where people on the ground are working with kit worth more than that. Easy for a forklift or truck driver to cause a lot of damage when moving stuff around.

    Or say this incident [209.157.64.200] - blamed on technicians...

    Or say you were an air-traffic controller... how big a mistake do you want to make?

  • Re:Just one (Score:2, Interesting)

    by Nefarious Wheel ( 628136 ) on Tuesday July 20, 2004 @06:21AM (#9746729) Journal
    Specifically, Big Blues - The Unmaking of IBM [amazon.com] and it was Wall Street Journal, not NYT.
  • by Anonymous Coward on Tuesday July 20, 2004 @06:41AM (#9746824)
    I have worked as a system administrator for a newspaper for seven years. Five years ago we were outsourced to another company; my job stayed the same (save for extra work needed) but the decision paths and cost terms have changed a lot. More management, less money, cut corners, and less contact with customers have actually led to a 25% increase in costs for the newspaper.

    For 5 years we have worked on cutting costs instead of doing what we originally did; produce a newspaper. This has led to a lot of cut corners, patchy systems and above all stupid decisions. Now we have to spend most of our time with our hands tied behind our backs because there's no way to prove a _direct_ profit we can put on the price-tag we show to a (non-technical) customer when we are suggesting a change. It's always cost > functionality.

    Companies that only sell services to customers have no goal; it does not work. There has to be something you produce, something to live for, instead of just being a money-making machine.

    Management cannot be just management for the sake of being management. A good manager is someone involved in working on something they have a passion for. My boss didn't create this newspaper, nor did the boss of the actual newspaper, and they probably don't have a special interest in media; it's just a career-pushing, money-making machine for them.

    Oh, I guess this turned into a rant :)
  • Re:Just one (Score:2, Interesting)

    by orcrist ( 16312 ) on Tuesday July 20, 2004 @07:57AM (#9747150)
    Or say you were an air-traffic-controller... - how big a mistake do you want to make.

    Like this (late) traffic controller, for example:
    http://www.cnn.com/2004/WORLD/europe/02/25/swiss.stabbing/ [cnn.com]
    http://www.cnn.com/2004/WORLD/europe/03/16/swiss.stabbing/ [cnn.com]

    -chris
  • by Ungrounded Lightning ( 62228 ) on Tuesday July 20, 2004 @11:23AM (#9749443) Journal
    plenty of jobs where people on the ground are working with kit worth more than that. Easy for a forklift or truck driver to cause a lot of damage when moving stuff around.

    It happened where I worked once.

    Auto company using a "Sel 32" computer as the central tool for automating distributor testing-calibration. Systems Engineering Labs warned them that they were going to discontinue the line so they needed to stock any spares while production was still happening, so the company they were leasing it from jacked up the price to pay for stocking a bunch of spares.

    Company decided to buy their own to save a few bucks and be sure the plant kept running and wasn't hostage to future price rises on an irreplaceable, mission-critical machine. Cost was a few hundred short of a million. (The 98-cents pricing phenomenon, no doubt.)

    Box showed up on the loading dock. One rack, floor to ceiling. Forklift operator picked it up, took it down the aisle, took a corner too fast, and it fell off the forklift. Hit so hard it not only set off the tip/shock detectors but BENT THE RACK.

    SEL, of course, wouldn't warranty it. The auto company was self-insured. So they bought ANOTHER one (and kept the "clunker" for spare boards if anything failed in the future).

    Forklift driver was NOT fired. (Union, hadn't been notified he was toting a megabuck this time, and lift drivers are allowed a quota of oopsies.)
  • Re:Just one (Score:2, Interesting)

    by jcjewell ( 675426 ) on Tuesday July 20, 2004 @01:10PM (#9750235)
    I heard Chris Proctor (guitarist) tell a story that went something like this. I don't remember the exact words, but you'll have to trust that the punchline is what is important: "A quality guitar owes much of its sound to the large sheet of wood on the back of the guitar. A well-known guitar maker was saving a particularly pristine piece of such wood for one special client who was thus far unidentified. When I met with them to get my next guitar, that piece of wood was taken out of its special storage place in my honor, and when the guitar was finally complete, I picked it up and played it a bit. It was indeed an exquisite instrument, and I resolved to use it in public for the first time at my next concert. I rushed off to the airport with the guitar in its special case, checked it and my luggage, and we were off to our destination. Upon arriving, I went to retrieve my luggage and the guitar, only to be greeted by an airline representative saying that there had been a problem with my luggage."

    You guessed it. A forklift tine had been run right through the guitar, destroying it.

    By the way, check out Chris Proctor's music if you haven't.
