Risk Management - A Cautionary Tale 203
Mr. Ghost writes "By now many people have heard about the fiasco and financial blunder Comair had over the 2004 Christmas holiday. An article on CIO provides a timeline of the decisions that led up to the system failure costing the division of Delta Airlines $20 million. The article points out the need for proper risk management and what can occur when a risk analysis is not performed or ignored. It goes on to mention that although this was a very public failure, this type of system failure can occur in other companies." From the article: "The prospect of replacing the ever-maturing crew management system was floated again the following year, with plans laid out to select a vendor in 2000. But that didn't happen. Over the next several years, Comair's corporate leadership was distracted by a sequence of tumultuous events..."
Why didn't the CIO yell louder? (Score:5, Insightful)
Re:Why didn't the CIO yell louder? (Score:2, Informative)
Re:Why didn't the CIO yell louder? (Score:4, Funny)
How is this different from what a good PM or CIO does every day? Darrell Hamilton is a "Strategic Director"? Strategic Director of what? Blame avoidance & CYA?
Re:Why didn't the CIO yell louder? (Score:2)
The CIO's job is defined by the investors and management, not by a Slashdot post or even a standard definition. If the CIO is given many other things and told they are his priorities by the proper people, those things are his priorities. However, a good CIO would make risk management a priority with his company and if he could not...he would seek employment while he still had a
What if they won't listen? (Score:4, Informative)
When they do listen, they tend to reduce it to profit/loss and destroy the subtlety of the information and its meaning. CIOs that "push" issues, especially when they're expensive, tend to get canned as gadflies, big spenders or for not being "team players".
When it comes to technology, managers often don't care and don't want to know, except when it costs money.
Re:What if they won't listen? (Score:3, Insightful)
That's their job. Companies exist to make money - end of story. Technology for technology's sake is foolish and wasteful unless you're in an R&D department.
That being said, all technology spends (e.g. upgrades, redesigns, rewrites, replacements, etc.) can and should be boiled down to dollars that either fall into a profit/loss or risk/benefit category (hopefully over 3-5 years). If a CIO
CIOs are often very distant from such issues. (Score:2)
When one works in an environment with several hundred in-house applications, it's easy for something to get lost in the shuffle, particularly if the application in question isn't normally a source of issues or is using a technology which isn't "mainstream" for the company...
Re:Why didn't the CIO yell louder? (Score:4, Interesting)
Here's the situation. The company had an old green screen application that was working just fine. It was old, but it did what the company needed. There was no hint that there was any fault.
Now, one day the company had to cancel 90% of its flights - and whammo, some double-byte counter overflowed.
What's all this crap in the article about old software "getting brittle"? This wasn't brittle aging software, this was software that was hit by an event that took it outside of its design parameters.
How would *you* have judged the risk of this software failing? How would that risk compare with the risk of installing a new untested package?
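The "double byte counter" failure described above is just a 16-bit signed integer wrapping around. A minimal Python sketch (the actual Fortran representation is an assumption here) shows how silently it happens:

```python
import ctypes

def increment_16bit(counter: int) -> int:
    """Add one the way a 16-bit signed (two's complement)
    counter would, wrapping silently at the top of its range."""
    return ctypes.c_int16(counter + 1).value

# Years of normal operation look fine...
assert increment_16bit(100) == 101

# ...until one increment past the limit flips the sign.
assert increment_16bit(32767) == -32768
```

A risk analysis that only knows "the software is old" cannot see this cliff; only knowledge of the representation, or a documented limit, can.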
Re:Why didn't the CIO yell louder? (Score:5, Insightful)
Re:Why didn't the CIO yell louder? (Score:2)
I suspect this situation repeats itself in many companies.
Re:Why didn't the CIO yell louder? (Score:2)
Perhaps the docs for the software would indicate this problem. Did anyone RTFM at Comair?
Re:Why didn't the CIO yell louder? (Score:2)
Analyze all you want, (Score:3, Interesting)
We need an analogy here... (Score:4, Interesting)
Then again, even if you do, you're still going to regret it.
So, I guess the moral of the analogy is that it's better to patch your system and risk your hardware not working properly than having spyware or a virus on your system.
Interesting Technical Detail ... (Score:4, Insightful)
From the article:
Sounds like some sort of overflow problem. Hmmm....
The big issue is, of course, the business units and IT playing "After you, Alfonse..." but it's fun to seek out the pebble that set off the avalanche.
Re:Interesting Technical Detail ... (Score:2, Insightful)
"These systems are just like physical assets," says Mike Childress, former Delta CTO and now vice president of applications and industry frameworks for EDS. "They become brittle with age, and you have to take great care in maintaining them."
You can easily run software for 20 years and it will not fail so long as you don't exceed its operating parameters. That's also assuming you can source replacement kit for hardware failures.
Software does not age.
Re:Interesting Technical Detail ... (Score:5, Insightful)
Because NO ONE knew of the particular limit that was exceeded, those who were supposed to calculate risk never knew what the tipping point was.
All they could say was "our software is old, someday it may not work any more, but I cannot say for what reason, because I do not know FORTRAN."
How the hell can you calculate risk if your only input is the chronological age of a software system?
Re:Interesting Technical Detail ... (Score:4, Insightful)
That wasn't the only input in this case. In fact, you don't have to know the gory details of the implementation to determine risk, just the business impact of a problem to the system.
Re:Interesting Technical Detail ... (Score:5, Insightful)
With that, you've hit the heart of the matter, and what the article should have focused on rather than the "old software breaks down" BS. This was a bug which could have hit at ANY time since the software was installed; it was an overflow, not a rusting subroutine that fell off. I can't personally see any way that they could have foreseen this particular problem but when you have a system that is so critical to your operation, you don't look for problems it might have--you look for alternatives to fall back to when it DOES have problems.
You never see them coming. But you'd better plan for them anyway.
"Old software breaks down" is not BS (Score:3, Insightful)
Second, operating constraints change over time. If a piece of software meets its initial demands, greater and greater demands are placed on it over time. If a piece of software is kept in use for many years, it will likely f
Re:Interesting Technical Detail ... (Score:2)
>That's also assuming you can source replacement kit for hardware failures.
And how the hell is that different from what he said?
(Systems = hardware + software)
Re:Interesting Technical Detail ... (Score:5, Insightful)
Software does age. As a program grows older, people change it, its inputs and how it is used, and the older a program gets, the less the people making the changes are likely to understand it.
In addition, some bugs don't manifest themselves under usage patterns from 20 years ago, or when the software is run on hardware from 20 years ago, but they do manifest themselves under usage patterns or on hardware that's in use now. The more you change, especially without understanding all of the ramifications of that change, the greater the risk for error.
That's what software aging is.
Re:Interesting Technical Detail ... (Score:2)
And usage patterns are not conveniently tied to time, but rather, well, usage. The airline could have hit it big two months after this package was deployed and run into the exact same bug.
I think it is an easy to sell analogy to people who work on airplanes, but in fact software does not "age", and treating it as if it does is a fundamental risk factor in and of itself because doing so invites a complete misunderstanding of why
Re:Interesting Technical Detail ... (Score:2)
It is the system that changes when software ages; the system is composed of software, hardware, data, documentation, users and business practices. It's not necessary for every one of those to change in order for change to occur.
This is a commonly held myth, and it leads people to think that maintenance of software-based systems is not necessary. That's a big mistake.
Software doesn't we
Re:Interesting Technical Detail ... (Score:2)
It escapes me why people feel
Re:Interesting Technical Detail ... (Score:3, Insightful)
That's what software aging is. When people talk about software aging, they're not talking about something that doesn't exist, they're talking about the effect that I described: ongoing changes with less and less understanding of the system.
Change is inevitable. It is common and reasonable to expect change in the hardware, the input
Re: Interesting Technical Detail ... (Score:2, Insightful)
That depends. I suppose you could call the software involved here mission-critical. In that case one might expect limits like the ~32000/month to be documented (not in this case, if I read it right). If that limit had been documented, then the failure would not have been overflow but a failure to RTFM/using the system out of spec, which is management/operator error.
Also it matters how exceeding a limit is handled (graceful degradation). Did this system say: "I'
Graceful Degradation ... (Score:2)
>it matters how exceeding a limit is handled (graceful degradation)
Your point on correct software design is exceedingly well taken ...
... but I just love the term "Graceful Degradation". Is it from Faulkner, or a New Wave band?
Re:Interesting Technical Detail ... (Score:2)
Not necessarily. (Score:2)
Sometimes using an old 36-bit mainframe architecture (where an INT is 36-bits) is an advantage.
Article text (Score:5, Informative)
Bound To Fail
The crash of a critical legacy system at Comair is a classic risk management mistake that cost the airline $20 million and badly damaged its reputation.
BY STEPHANIE OVERBY
When Eric Bardes joined the Comair IT department in 1997, one of the very first meetings he attended was called to address the replacement of an aging legacy system the regional airline utilized to manage flight crews. The application, from SBS International, was one of the oldest in the company (11 years old at the time), was written in Fortran (which no one at Comair was fluent in) and was the only system left that ran on the airline's old IBM AIX platform (all other applications ran on HP Unix).
SBS came in to make a pitch for its new Maestro crew management software. One of the flight crew supervisors at the meeting had used Maestro, a first-generation Windows application, at a previous job. He found it clumsy, to put it kindly. "He said he wouldn't wish the application on his worst enemy," Bardes recalls. The existing crew management system wasn't exactly elegant, but all the business users had grown adept at operating it, and a great number of Comair's existing business processes had sprung from it. The consensus at the meeting was that if Comair was going to shoulder the expense of replacing the old crew management system, it should wait for a more satisfactory substitute to come along.
And wait they did. The prospect of replacing the ever-maturing crew management system was floated again the following year, with plans laid out to select a vendor in 2000. But that didn't happen. Over the next several years, Comair's corporate leadership was distracted by a sequence of tumultuous events: managing the approach of Y2K, the purchase of the independent carrier by Delta in 2000, a pilot strike that grounded the airline in 2001, and finally, 9/11 and the ensuing downturn that ravaged the airline industry.
A replacement system from Sabre Airline Solutions was finally approved last year, but the switch didn't happen soon enough. Over the holidays, the legacy system failed, bringing down the entire airline, canceling or delaying 3,900 flights, and stranding nearly 200,000 passengers. The network crash cost Comair and its parent company, Delta Air Lines, $20 million, damaged the airline's reputation and prompted an investigation by the Department of Transportation.
Chances are, the whole mess could have been avoided if Comair or Delta had done a comprehensive analysis of the risk that this critical system posed to the airline's daily operations and had taken steps to mitigate that risk. But a look inside Comair reveals that senior executives there did not consider a replacement system an urgent priority, and IT did little to disrupt that sense of complacency. Though everyone seemed to know that there was a need to deal with the aging applications and architecture that supported the growing regional carrier--and the company even created a five-year strategic plan for just that purpose--a lack of urgency prevailed.
After the acquisition by Delta, former employees say Comair IT executives didn't do the kind of thorough management analysis that might have persuaded the parent airline to invest in a replacement system before it was too late. Instead, Delta kept a lid on capital expenditures at Comair, with unfortunate consequences. The failure of the almost 20-year-old scheduling system not only saddled Delta with a plethora of customer service and financial headaches that the airline could ill afford but it also provides a cautionary tale for any company that thinks it can operate on its legacy systems for just...one...more...day.
The five-year plan that wasn't
Today, Cincinnati-based Comair is a regional airline that operates in 117 cities and carries about 30,000 passengers on 1,130 flights a day, with three or four crew members on each. But back in 1984, when Jim Dublikar joined the company as director of finance and risk management, Comair had
Blowing smoke up your donkey (Score:5, Funny)
Posts above this line have not RTFA.
Re:Blowing smoke up your donkey (Score:5, Funny)
--------------------- Cut Here ---------------------
Why did this system fail? (Score:4, Interesting)
I have a real question. Why did Comair's system fail in the first place? Was it due to a design flaw requiring its replacement in 2004? Was it an irreplaceable piece of hardware which died?
The article smacks of FUD, only because systems fail for a reason. The article conveniently leaves out the reason for the failure. I think this is critical to any risk analysis. For example, if I have a 20 year old system that I can't get parts for, that's a high risk system. However, if I can get parts for a 20 year old system, then the risk is lower.
I don't like the idea of making assumptions that just because a system is 20 years old, that it absolutely must be replaced. I also don't like the assumption in the article that I already know the facts, so here's the analysis for you. I want the facts to back it up so I can come to my own conclusion.
Re:Why did this system fail? (Score:2, Interesting)
If I understand the article correctly, the database could only handle 32,000-odd transactions in a month. In December 2004, rescheduling caused by bad weather caused the database to hit its limit exactly on Christmas Day, and everything shut down. It wasn't until December 29th that everything was back up again.
Oh, and they're still using the old system: they've divided the database up, with each half having its own 32,000-transaction limit, but that's about
Re:Why did this system fail? (Score:2)
The reaction most people are having is to say 'code is 20 years old, throw it out and redo it right!' which is a really bad philosophy for proven systems. In this case, for example, the prudent response is to examine the code and
Re:Why did this system fail? (Score:5, Informative)
No, the article conveniently explained that the sw had a limit of 32000 schedule changes per month. A severe winter storm necessitated enough changes to make the system fall over.
32767 + 1 = -32768, or maybe zero, or maybe NaN (Score:2)
Re:Why did this system fail? (Score:2)
So there were really two design/coding flaws that caused the crash. First, the limit on the number of changes. Second, the lack of proper error handling when the maximum number of changes was reached. So it took both of these
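The two flaws noted above suggest the defensive pattern that was missing: check the documented limit before committing a change, and fail loudly instead of wrapping. A sketch (the names and the exact limit are assumptions):

```python
MAX_CHANGES_PER_MONTH = 32_767  # assumed 16-bit signed ceiling

class CapacityExceeded(Exception):
    """Raised at the limit instead of silently overflowing."""

def record_change(monthly_count: int) -> int:
    """Increment the monthly change counter, refusing further
    changes at the ceiling rather than corrupting the count."""
    if monthly_count >= MAX_CHANGES_PER_MONTH:
        raise CapacityExceeded(
            f"monthly limit of {MAX_CHANGES_PER_MONTH} changes reached")
    return monthly_count + 1
```

An explicit error at the ceiling leaves operators with a clear message and a recoverable state instead of a silent wraparound and a multi-day outage.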
Bonehead, it's the same reason... (Score:2)
Re:Why did this system fail? (Score:2, Insightful)
>...if I have a 20 year old system that I can't get parts for, that's a high risk system.
> However, if I can get parts for a 20 year old system, then the risk is lower.
Good points. The article does contain some facts, though. The system was Fortran based, ran only on one aging hardware platform, and no one at Comair knew Fortran. Those are risk factors with older software.
Re:Why did this system fail? (Score:2)
Even if you do have a couple, they'll be older and likely not replaceable at retirement. Documentation is help
Re:Why did this system fail? (Score:2)
I don't like the idea of making assumptions that just because a system is 20 years old, that it absolutely must be replaced. I also don't like the assumption in the article that I already know the facts, so here's the analysis for you. I want the facts to back it up so I can come to my own conclusion.
How about this: every few years, reexamine the limitations and requirements of the system. Upgrade or replace the system when it gets too close to those limits.
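The periodic reexamination suggested above boils down to tracking headroom against documented limits. A sketch with illustrative numbers:

```python
def needs_review(peak_usage: int, design_limit: int,
                 threshold: float = 0.8) -> bool:
    """Flag a system for upgrade/replacement planning once its
    observed peak crosses a fraction of its design limit."""
    return peak_usage >= threshold * design_limit

# A December peak of 27,000 schedule changes against a
# 32,767-change ceiling is well past an 80% trigger.
assert needs_review(27_000, 32_767)
assert not needs_review(10_000, 32_767)
```

The hard part, as the thread notes, is that this only works for limits somebody has actually documented.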
Re:Why did this system fail? (Score:2)
What, you wanted it in decimal?
I have a friend that lived this nightmare. (Score:2, Insightful)
And this was for a federal agency.
Scary no?
Risk Management is Complex (Score:5, Interesting)
I used to work in the Risk Management department of the capital markets division of a large international bank [jpmorganchase.com] as a programmer.
When I started, 4 years ago, the reports generated were basically compilations by a cut-and-paste-monkey staff (despite being highly trained, very conscientious individuals) of reports generated by other departments. I was part of a team that reformed the IT basis for creating risk reporting, and found that while there was a lot of expertise and complex methods available, what was actually implemented was much, much smaller, for the simple reason that it was tough to get the right reports generated given the inputs the department was given.
The project I worked on parsed the input data from the Excel spreadsheet inputs and loaded it to a database, where it could then be queried intelligently and nice reports generated. These reports were growing very fast in complexity, building towards the best toolsets available for determining the actual risk the bank was taking.
Several points about this job were fascinating:
1. How many departments are so caught up in the minutiae of "getting the report out" that they don't have time to examine the contents of it;
2. How much money can be made by knowing what the actual risk is. If you don't know the risk, you estimate high, and put lots of dollars in a reserve account. If you do know the risk accurately, you usually can greatly lower reserves to accurately meet even very bad case estimated losses, and use the rest of the money to fund interest-generating ventures.
3. How much the banking consolidation trend is increasing, due to the repeal of Glass-Steagall allowing multi-state banks to gobble and grow. This makes a consumer's life better because of more resources being available (auto-bill-pay, check images, etc.
It was a fun job. Then I found another one where I get to play with Python!
-- Kevin
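Point 2 above -- that accurately measured risk lets you hold smaller reserves -- can be sketched as sizing the reserve to a high quantile of estimated losses instead of the worst case ever seen. All figures here are illustrative:

```python
def reserve_required(losses: list[float], quantile: float = 0.95) -> float:
    """Reserve sized to cover losses up to the given quantile of
    historical or simulated monthly outcomes."""
    ordered = sorted(losses)
    idx = min(len(ordered) - 1, int(quantile * len(ordered)))
    return ordered[idx]

# 99 ordinary months plus one catastrophe: "estimate high" would
# lock up 1000.0 in reserve; a measured 95% quantile frees most
# of that capital for interest-generating use.
losses = [float(n) for n in range(1, 100)] + [1000.0]
assert reserve_required(losses) == 96.0
assert max(losses) == 1000.0
```

Real risk departments use far richer models, but the capital argument is exactly this shape.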
Re:Risk Management is Complex (Score:3, Interesting)
>
>[...]
>
>It was a fun job. Then I found another one where I get to play with Python!
Huh? The story's supposed to end with the line "VAXen, my children, just don't belong some places." [syr.edu] :-)
Re:Risk Management is Complex (Score:2)
-Jay
Re:Risk Management is Complex (Score:2)
This approach usually defines risk independently (typically as variance around a mean) for each individual item. The items are then observed (or just a
Re:Risk Management is Complex (Score:2)
The reality of modern banking risk management (in my experience at Bank One, which became JPMC) was that there were many different measures of risk attached to each exposure. The popular ones are the standard short term 'delta', or DV01, which measures a specific 1-day interest rate risk, gamma, vega, etc.
There's also something called stress testing, and it usually involves lots of cycle time to run (we ran it over weekends). This would take several scenarios, includin
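The stress testing described above -- revaluing the book under several adverse scenarios -- can be sketched crudely. The portfolio, scenario names, and percentage shocks here are all hypothetical:

```python
def stress_test(positions: dict[str, float],
                scenarios: dict[str, dict[str, float]]) -> dict[str, float]:
    """Apply each scenario's per-asset fractional shocks to the
    positions and report the resulting P&L per scenario."""
    return {
        name: sum(positions[asset] * shock
                  for asset, shock in shocks.items())
        for name, shocks in scenarios.items()
    }

book = {"bonds": 1_000_000.0, "equities": 500_000.0}
shocks = {
    "rates_up_100bp": {"bonds": -0.05, "equities": -0.02},
    "equity_crash":   {"bonds": 0.02, "equities": -0.30},
}
pnl = stress_test(book, shocks)
```

Real stress runs eat the cycle time mentioned above because each scenario means fully revaluing every instrument, not applying a linear shock like this.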
software decays (Score:5, Interesting)
Unfortunately, you can't see a crew management system age the way you can see an airplane rust. But they do.
I find that an interesting if not slightly obvious insight. The interesting part is that you can know that software is decaying, but I don't know of any effective way to measure that decay. I don't even know of any particularly good ways to characterize the decay. It's not as if new defects are being introduced into code that's not changing. But the environment in which the software operates changes, and that change is analogous to weather corroding a piece of physical equipment. Every time the OS gets a patch, the filesystem changes, a shared library is upgraded, the underlying hardware changes, there's a chance of triggering a failure in the software.
Can it be proven, or should we otherwise reasonably believe, that the probability of catastrophic system failure approaches 1 as the age of the system increases? Maybe a good topic for a research paper...
Re:software decays (Score:2)
It sounds academic, but it's full of level-headed dissection of all kinds of software-related disasters, ranging from the hilarious, like the USS Yorktown dead in the water [ncl.ac.uk] after a divide by zero, to the horrifying. The contributors are skeptical but polite, and I learn new stuff with every issue.
Re:software decays (Score:4, Insightful)
But the environment in which the software operates changes, and that change is analogous to weather corroding a piece of physical equipment. Every time the OS gets a patch, the filesystem changes, a shared library is upgraded, the underlying hardware changes, there's a chance of triggering a failure in the software.
It's rather sad, to me, that we design these wonderful machines that can perform logical operations in great quantities with a high degree of repeatability and low occurrence of failure, then create a culture around them that encourages sloppiness, and ultimately introduces a large measure of uncertainty into the operation of these machines. I am baffled at the perverse desire-- nay, need-- that people seem to have to make software suffer from entropy.
The only "decay" in software should happen as a result of changing business requirements. There's no reason that, provided the business requirements don't change, that a well designed and properly implemented piece of software should not be usable in perpetuity. There may be changes in the underlying hardware and operating system software, but provided that the application is sufficiently abstracted from the underlying platform (or, provided that an emulation-layer for the original platform can be constructed) there's no reason other than changing business requirements for software to be "thrown away".
Let's put this a different way: How does a patch to the underlying operating system cause an application to fail? If the patch changes the behaviour of the underlying operating system in such a manner as to return unexepected values to the application, the patch is the cause of the failure. A flawed patch doesn't make an application "age" or "decay"-- it's simply a flawed patch. An application has to make assumptions about the underlying operating system. These assumptions are based on the API documentation-- the contact between the operating system and the application. When the OS violates the terms of the contract, that doesn't mean the application "decayed"-- it means some moron who coded the operating system patch messed up, and the operating system manufacturer/maintainer didn't perform good regression testing.
We should be designing software systems with 10 to 20 year usability goals. It would do a lot for the frustration level that the "suits" have with IT if we stopped being proponents of hugely expensive but "throwaway" systems, and started designing systems with an eye for longevity.
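The abstraction argued for above -- insulating the application from the platform so that longevity survives platform churn -- is the familiar ports-and-adapters shape. A minimal sketch with illustrative names:

```python
from abc import ABC, abstractmethod
import time

class Clock(ABC):
    """The contract the application codes against, instead of
    calling the OS directly."""
    @abstractmethod
    def now_ms(self) -> int: ...

class SystemClock(Clock):
    def now_ms(self) -> int:
        return int(time.time() * 1000)

class FixedClock(Clock):
    """Adapter for tests, or for emulating a retired platform."""
    def __init__(self, ms: int) -> None:
        self.ms = ms
    def now_ms(self) -> int:
        return self.ms

def is_expired(clock: Clock, deadline_ms: int) -> bool:
    # Application logic sees only the contract; porting to a new
    # platform means writing one new adapter, not a rewrite.
    return clock.now_ms() >= deadline_ms
```

When the platform changes, only the adapter is rewritten; the 10-to-20-year application logic stays put.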
Re:software decays (Score:5, Insightful)
Exactly. This software would have failed the month after it was installed if Comair had needed to do 32,001 changes in that month. But when it was installed, Comair wasn't that big, so having to do that many changes was not something that was considered. Now that Comair has grown considerably, the business requirement has changed but the application has not kept up.
Re:software decays (Score:2, Informative)
Throwaway systems are cost effective in the short term. That makes them popular with people who look at this quarter's stock price as both the goal and the duration of their attention.
Re:software decays (Score:2)
There's a relatively easy way to measure such "decay". When first designing the software, do proper requirements gathering and write a full formal requirements specification (there are specification languages specifically for this purpo
Re:software decays (Score:2)
Modern airplanes don't rust. They die of metal fatigue, which aluminum is much more prone to than 4130 steel.
Re:software decays (Score:2)
Assuming for the moment that the software program itself remains static, can
/.ed (Score:5, Funny)
Crew Scheduling system? How about Aircraft maint (Score:4, Insightful)
Maintenance manuals and procedures are written in blood. The next tragedy will be no different.
It's a legacy system (Score:3, Funny)
I told you so ... NOT! (Score:4, Insightful)
Even worse is when these types of failures happen; then the ole "policy and procedure" routine kicks in.
To tell a story: one time I went to a boarding school, and at the beginning of the year they had almost no rules, and then whenever something went wrong they added a new rule. Well, needless to say, at the end of the year there were so many rules, people could get reprimanded for flushing the toilet twice instead of once! Not having their shoes tied left over right, etc.
Well, I grew up and found the same is true in companies. How much you wanna bet they are gonna lose more than $20 million from too many piled-up policies and procedures that keep anyone from getting anything done?
Risk management (Score:3, Insightful)
OTOH, what does "Risk management" in IT really mean, besides drawing nice PowerPoints and putting a chapter "Risk analysis" into change request forms, that are normally filled in with "No risk, no fun!" or "If I make a very big mistake, it will extinguish mankind"?
A game of Jenga (Score:5, Insightful)
Legacy addiction is the big problem (Score:4, Interesting)
As the article says, a lot of resistance to upgrades comes from employees who know how to do things a certain way, and won't retool without much screaming and kicking. I suspect that this is often the problem, and other problems -- distractions like strikes and the Y2K bug, management that doesn't pay sufficient attention to the problem -- are just secondary.
Here's some personal experience that isn't nearly the same scale, but neatly illustrates what I mean. I once worked for a pubs department that delivered copy to printshops as raw Postscript. There was a push from management to upgrade to Acrobat-generated PDF. This should have been a no-brainer -- print shops hate dealing with raw Postscript, and the existing process relied on an ancient, unsupported printer driver that ran only on Windows 98. But the people who managed the process just totally balked, claiming that tight schedules left them no extra time to learn Acrobat. A lame excuse? Sure. But it took a new pubs manager, and escalation to the do-it-or-you're-fired level, to get the change made.
I think this kind of issue had a lot to do with the failure of IBM's famous plan to use Unix or Linux for all their internal bureaucratic needs. Too many people dug in their heels, claiming that they couldn't possibly retool their Windows-based workflow.
When you talk about this stuff, somebody always says, "If people can't get with the program, they should be fired!" Well, it often comes to that, as it almost did with the PDF issue. But you can't just arbitrarily fire everybody who resists policy and process changes. It's expensive, there are legal ramifications -- and you risk destroying the very corporate infrastructure you're trying to save.
Any application, once written, is "legacy". (Score:2)
If a rewrite effort requires 50,000 or 100,000 man years to complete, you're talking serious money...
Re:Any application, once written, is "legacy". (Score:2)
Am I? Consider this... (Score:2)
The software was written in a modern language on a modern platform, but the employer did not have any of its own expertise in that language. Some of the folks there took shots at making small changes, but for the most part the thing was a black box.
Was it a legacy application or not?
My point: there's a HUGE grey area.
Even the data supposedly "locked" on so-called legacy s
Unmaintained code (and shoddy work) is the problem (Score:2, Insightful)
And why is that a bad thing? If the software is a good tool for the task at hand, they should keep using it. In fact, the article clearly says that this program was in many ways superior to newer programs on the market - which is why they didn't upgrade earlier. They say they were able to create good workflows based around the software -
Hmm... (Score:3, Interesting)
Airlines have been using IT for 40+ years. (Score:3, Insightful)
One of the main problems with a "central IT shop" for the airlines is the fact that, operationally, each airline is somewhat unique in terms of the internal operational procedures they use, and many of the software applic
Old? (Score:3, Insightful)
This article rings more as a sales article than anything else - only it isn't selling anything. Which puts it squarely in the "wtf" category for me.
Re:Old? (Score:2)
What it should be emphasising is the importance of risk evaluation in the context of "disaster recovery". Had the business sat down to write a proper disaster recovery plan on the basis of "OK, what happens if this system goes completely kaput and all we have left are the offsite backups?" then it would have become clear that here was a business critical system which had no coherent DR plan.
Some flaws in the article... (Score:3, Insightful)
Now, first the article states: First off, the IBM AIX platform can be very new. Just because the application is old and possibly has bugs in it doesn't mean the OS and hardware aren't updated, or that HP Unix is any better.
Secondly, the following scenario makes perfect business sense: The article sets this up as the root of all their problems. Good grief!!! Don't waste resources on an inferior product, for goodness' sake! If the product doesn't perform any better, and there are no known issues with the current product, forget it, it's a waste of money.
Then a series of unfortunate events led to 4 more years of no funding for a replacement product. So what? The business is under a financial crunch; why go back and fix something that isn't broken (that they know of)? The business still needs to survive, doesn't it? I'm guessing they maintained the hardware and OS, otherwise we'd be here talking about how stupid they were for not updating maintenance contracts.
Application lifecycles (Score:3, Insightful)
The truly relevant cautionary tale... (Score:3, Insightful)
But after nearly 15 years in use, the business had grown accustomed to the SBS system, and much of Comair's crew management business processes had grown directly out of it.
(emphasis added)
Talk about putting the cart in front of the horse. This system would never have been replaced before it's crash--the cost of readjusting process and any other attached technology would have dwarfed simply updating the software. There was no business case you could make that would appear to justify the expense. Other than the little matter of "your company won't function if something goes wrong", of course...
Also, you'd never find a decent replacement product, since its functionality would have to mirror those same system-driven business processes.
The truly major oversight was in letting the package drive how Comair did this part of its business in the first place. Done otherwise, the meltdown might still have happened, for plenty of reasons outlined in the article. But left this way, this result was pre-ordained. No amount of planning or "risk assessment" was going to counter the inertia created by this process/technology inversion.
Great now what? (Score:2)
SDLC? (Score:2)
Re:Yep (Score:3, Interesting)
How could nobody in 11 years see that the changes were counted with a 16-bit signed integer? The company grows; I would think that making sure the software can keep up with the numbers would require very little foresight, yet from the article, it seems that the only considerations were in the UI? I won
Re:Yep (Score:5, Insightful)
If this company was run like a typical big company, somebody DID complain about this 16-bit signed integer. Chances are, they were told to shut up about it and not rock the boat. This frequently happens when someone points out a bug which would require a fundamental change to the system.
Most companies only like employees who think inside the box, despite telling people to think outside the box.
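For anyone who hasn't watched a signed 16-bit counter roll over, here's a minimal sketch (in Python, since the actual SBS code isn't public; the helper function is purely illustrative) of what happens at the 32,767th increment:

```python
def to_int16(n):
    """Truncate an integer to 16-bit two's-complement, the way a
    C short or Fortran INTEGER*2 field would store it."""
    n &= 0xFFFF
    return n - 0x10000 if n >= 0x8000 else n

counter = 32766
for _ in range(2):
    counter = to_int16(counter + 1)
    print(counter)
# prints 32767, then -32768: one more change and the counter goes negative
```

A system that compares such a counter against a limit, or uses it as an index, starts misbehaving the moment it wraps, which matches the "hard 32K ceiling" behavior described in the thread.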
Re:Yep (Score:5, Insightful)
I got irritated. I would find stuff that was just STUPID. Horrendously mangled logic. Algorithms from other parts of the code applied completely wrong. Whenever I tried to improve the code I got the "It's working. Don't change anything" line. I left, determined to find a job where I could actually write code.
That was several years ago. I've gotten smarter since. I've worked on several large-scale, 5-9's systems. After several major and minor fuck-ups, now I know....
If it's working, don't change anything.
Re:Yep (Score:2)
This is exactly why people who can "think outside the box" are hired. They know exactly what the box is, and exactly what thinking inside it consists of - and they are then informed that's exactly what they are to do - think inside it and not shake things up.
Those who can't think outside the box might accidentally do so, causing horrible things to happen like things get fixed. Obviously, thos
Re:Yep (Score:3, Funny)
I can assure you it was indeed a hardware or software limit.
Re:Yep (Score:2, Insightful)
But during the holidays, the storm forced a *lot* of crew *changes*, several times, and they went over the limit.
It seems that the feeling was "there is no way we have 32K *changes* in a single month", and
Re:Yep (Score:3, Insightful)
Then, no one looked at the code for 11 years.
That's how this happens. Not because people are stupid, but because people simply aren't looking at the old crufty code. They're too busy with new projects.
Re:Yep (Score:5, Insightful)
First thing: 32767 changes are a lot. A whole f*ck*ng lot. It averages to over 1310 changes per day. For a company that flies over 1300 flights a day, it means they averaged a change per flight per day. That's insanely high.
I'm personally getting sick of people asking about backup systems. It was a problem with the data: too much of it. Given the safety and government oversight that hinges on this data, you don't mess with it. Any backup system, whether one or one hundred of them, when presented with the same data, would also fail.
The DOT report issued back in March (sorry don't have karma link handy) said neither Comair nor SBS (the closed source vendor that supplied the application) were aware of the limit.
Eric Bardes (Yes, the one from TFA)
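The parent's arithmetic is easy to check. A rough sketch (the change-per-flight rates below are hypothetical, chosen only to show how fast the ceiling arrives during a disruption):

```python
LIMIT = 32767            # maximum value of a signed 16-bit integer
FLIGHTS_PER_DAY = 1300   # Comair's figure from the parent post

# Days until the counter overflows, at various (hypothetical)
# rates of crew changes per flight per day:
for changes_per_flight in (1, 5, 10):
    days = LIMIT / (FLIGHTS_PER_DAY * changes_per_flight)
    print(f"{changes_per_flight} change(s)/flight: overflow in {days:.1f} days")
```

At one change per flight per day the limit holds for about 25 days; at the multi-change-per-flight rates a week-long storm can force, it evaporates in a few days.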
Re:Yep (Score:3, Insightful)
One or two changes per flight is unlikely, but possible. Yeah, it's insanely high. Yeah, such a thing might only occur once every 15 years. However, the value should have been a 32-bit unsigned integer instead of a 16-bit signed i
Not a lot. At all. (Score:3, Interesting)
Re:Yep (Score:2)
Depends on what's meant by a "change." Sure, 1300 planes (100% of flights) rerouted per day for a month is insane, but if one flight attendant being replaced by another is a "change," then 1300 per day during/after a major snowstorm across half the US might not be all that insane. Just having to cancel/delay/reroute ~20% of your daily flights for a month might be enough to hit
Re:You made lots of people cry. (Score:2)
That last bit was sarcasm, by the way.
You're too young to understand (Score:3, Insightful)
So as a programmer, you make a choice. You either make the counter smaller, or you limit the system in some other way.
Computers today have 3 orders of magnitude more memory, and the choice between a short and a long is easy to make. But back then, it wasn't.
To help you understand, if a programmer from that era used a long int, he'd better have
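To put the parent's memory argument in numbers: on that era's hardware, the difference between a 16-bit and a 32-bit field, multiplied across every record, was a real cost. A back-of-the-envelope sketch (the record count is hypothetical):

```python
RECORDS = 500_000          # hypothetical number of scheduling records

bytes_short = RECORDS * 2  # one 16-bit counter field per record
bytes_long = RECORDS * 4   # one 32-bit counter field per record
saved_kb = (bytes_long - bytes_short) / 1024
print(f"Choosing 16-bit over 32-bit saves {saved_kb:.0f} KB")
```

Nearly a megabyte saved on a single field, on machines that might have had only a few megabytes of RAM total, is exactly the kind of tradeoff that made a short the "obvious" choice back then.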
Re:Yep (Score:5, Insightful)
This software had been working for over 20 years! What will your code look like in 20 years? I doubt it will have the same track record. I'm not sure foresight was the problem; I think they did the best they could with the language and hardware of the day.
The Comair meltdown wasn't a software problem, if you ask me; it was that the business changed.
Re:Yep (Score:2)
Then again, the Y2K expert digging through the code probably didn't know enough about the business to realize that the signed 16-bit value wasn't sufficient, so he probably glossed right over it without making any suggestions at all.
Re:Yep (Score:2, Informative)
Re:Yep (Score:5, Funny)
Or for another example of hindsight and the law of unanticipated consequences, just sing the first few bars of "Alice's Restaurant".
Re:Yep (Score:5, Insightful)
Re:risk management 101 (Score:4, Informative)
for the do it yourselfers : http://www.cse-cst.gc.ca/en/publications/gov_pubs
Re:risk management 101 (Score:2)
Just off the top of my head, I can recall three disasters that disrupted air travel and required mass crew reschedulings: '99 Midwest blizzard (75 planes stuck on the runway in Detroit, passenger
The airline industry has lots of Fortran systems. (Score:2)
I'm working mostly in F77 now. It's a good language for what it does.
Re:Risky business (Score:2)