Struggling With Major IT Projects

Ant writes "This article discusses the poor track record of IT projects undertaken by the U.S. government, and says experts blame poor planning, rapid industry advances and the massive scope of some complex projects whose price tags can run into billions of dollars at U.S. agencies with tens of thousands of employees. 'There are very few success stories,' said Paul Brubaker, former deputy chief information officer (CIO) at the Pentagon. 'Failures are very common, and they've been common for a long time.'... Seen on Blue's News."
This discussion has been archived. No new comments can be posted.

  • by Total_Wimp ( 564548 ) on Sunday January 30, 2005 @09:33PM (#11524653)
    Major IT projects touch a lot of people. If you can't get everyone on board, then the project is going to be very tough to complete successfully. For that reason, the only real blame for "most" IT projects failing is leadership problems.

    It's harsh, but true. If these agencies had had better leadership and management, the projects would have been delivered, or at least never started in favor of something better. Blaming anything else is an exercise in passing the buck.
    • by phamlen ( 304054 ) <phamlen&mail,com> on Sunday January 30, 2005 @11:11PM (#11525261) Homepage
      Some projects fail because they don't have enough resources, or not enough time has been allotted. Sometimes, the project planners choose the wrong implementation or try to solve the wrong problem. Sometimes, projects fail because the users don't really tell you what they want.

      Are all of these 'leadership problems'? Sure, you can blame the leader of the project (or his leader) for those problems - after all, they should have seen them and fixed them.

      But then all you've done is group a lot of problems together under "who to blame" and not tackled the harder problem of avoiding those pitfalls. So while I agree that leaders should stop projects from failing, the root causes of the failures are far more complex than just "leadership problems".

      -Peter
      • by Total_Wimp ( 564548 ) on Monday January 31, 2005 @12:09AM (#11525569)
        Some projects fail because they don't have enough resources, or not enough time has been allotted. Sometimes, the project planners choose the wrong implementation or try to solve the wrong problem. Sometimes, projects fail because the users don't really tell you what they want

        Amount of resources and time allotted are both directly related to leadership. Leaders decide both of these things. Choosing the wrong implementation could go to the tech folks, but solving the wrong problem is most certainly a leadership decision too.

        I'm not trying to lump every decision on the leader and I'm not trying to say others can't screw up, but the cold hard facts are that projects that reach a certain size tend to fail because of things leaders should be taking care of.

        Specifically:
        Q. Who decides how much time a project has?
        A. Management.
        But... what if the underlings give management a false impression about how long a task should take?
        A2. This is management 101: work with what you know about your people and statistics about past projects to determine if they're proposing adequate time. It's TOTALLY a management failure if the amount of time given a project is wrong.

        Q. Who decides how many resources should be thrown at a project?
        A. Once again, this is TOTALLY a management decision. Yes, your underlings may give you incorrect data to base this on, but if you're consistently getting BAD DATA then it is a leadership FAILURE if you continue to believe the bad data.

        Q. How do you make sure you're solving the right problem?
        A. If GM builds the wrong car for the wrong market, are you going to blame it on anyone but GM leadership? If not, then why are you going to give them a pass on implementing a network infrastructure that fails to meet their needs?

        Leaders solve the macro problems of the company. Large IT projects are part of this solution. A large IT project is about as complex as building a new HQ building. Leadership does not allow new HQ building projects to fail. Why on earth are you letting them get away with not managing IT projects (infrastructure by definition) with as much diligence?
        • mod parent up! (Score:3, Interesting)

          by PaulBu ( 473180 )
          Q. How do you make sure you're solving the right problem?

          A. If GM builds the wrong car for the wrong market, are you going to blame it on anyone but GM leadership? If not, then why are you going to give them a pass on implementing a network infrastructure that fails to meet their needs?


          Excellent point! Somehow when things fail in a private company, it is always the fault of the "suits", while when things fail in the Gov't -- well, it was just too tough, and not funded enough (please increase my tax rat
    • Just to add to this, while leadership is mostly to blame, the Government can be a very bad client as well.

      After an initial project is agreed upon, the Government is notorious for changing features, designs, due dates as well as budgets. While leadership should just put their foot down and either say "no" or renegotiate, they would rather keep on the good side of the "G" and kill the project than lose contracts later.
      • As someone working to develop a system that will be used by the UK government, I can tell you that statement is very true. Not only that, they often don't actually know what they want. They'll give you a vague specification and they'll tell you they want features X, Y and Z, but when you hand over the completed system, they turn round and say they didn't ask for that, they wanted something different.
      • by basingwerk ( 521105 ) on Monday January 31, 2005 @05:53AM (#11526821)
        We have the same problem in the UK, and I expect it is the same everywhere. I have worked on good projects backed by government money, but the problems seem to arise when the government is the provider of the detailed requirements. I think this is because the government can only provide top-level requirements, and the detail should be fleshed out by experienced requirements analysts. Unfortunately, government organisations do not consist of high achievers who have made their way to the top through good decisions. Government organisations consist of people who have made their way to the top by cautiously waiting long enough, and their natural instinct is to avoid tough decisions. Of course, tough trade-offs have to be faced and dealt with to reach a coherent set of requirements. Any project with the government will not be able to do this without a lot of futile hand wringing and back tracking. Coupled with all that is the fact that government people have an innate belief that they, not the technologists, are in control. They believe they can realise change faster than the technologists can provide it. Technology development has a pace that technologists understand, while government people have different objectives that may be changed in a hurry as public opinion sways this way and that.
  • Management? (Score:4, Informative)

    by loony ( 37622 ) on Sunday January 30, 2005 @09:33PM (#11524655)
    The only IT projects that failed that I know are the ones that have bad managers... Most of the time that means someone who doesn't listen to the people that do the actual work but there are other reasons...

    Peter.
    • Re:Management? (Score:5, Interesting)

      by superpulpsicle ( 533373 ) on Sunday January 30, 2005 @09:50PM (#11524784)
      Just like the sports analogy, the coach needs to put the players in a position to win. Bad management = bad coach. Unfortunately the techies are always the ones to get blamed and get fired.

      • Correct. Good management should replace bad techies long before the project fails. They rarely do, however, especially in big companies and government.
        • Re:Management? (Score:4, Informative)

          by Frymaster ( 171343 ) on Monday January 31, 2005 @12:01AM (#11525536) Homepage Journal
          Good management should replace bad techies long before the project fails

          great... but who replaces the bad managers?

          my experience is that projects fail because of managers who get caught between two opposing forces, clients and tech staff, and can't broker a compromise.

          the client wants a million features for next to no money and wants the product by thursday noon. since they're paying the bills, they can exert a lot of pressure on a manager. the techies want clear direction on technical issues that neither the client nor management really groks, tonnes of time and the right to overrule bad decisions made by the client. since the techies are the people who are actually doing the building, they have a lot of leverage.

          when management cannot broker a compromise between these two positions, the project fails. i've seen management say 'yes' to every demand and timeline from a client, then go to the techies and say the client is clueless and stubborn and insist that corners be cut to meet the deadline. when the project fails, the manager blames the techies and hands out some pink slips to mollify the client.

    • Re:Management? (Score:3, Interesting)

      by lottameez ( 816335 )
      As a manager, I agree with the first part but disagree with the second. It is indeed a failure of management when the IT project fails. However, it's usually because management has lost sight of what the original objective was and/or was unable to communicate it to the people doing the work.
    • by darnok ( 650458 ) on Monday January 31, 2005 @01:17AM (#11525851)
      Several times I've worked on projects where "the people doing the actual work" (i.e. techies) have been responsible for the ultimate failure; they've been given too much authority and made decisions that've ultimately sunk the project.

      Examples:
      - "we know it'll scale to 10k users, once we take the time to optimise it. We'll do that later" (project ultimately scaled to ~100-200 users max)
      - "upgrading to the new version of tool XXX will let us solve lots of our problems straight away" (maybe so, but it added lots of new problems and dependency issues that blew the project out of the water)
      - "we'll redo the crappy UI later" (not after you've made loads of incorrect assumptions about workflow based on a UI that you already knew was wrong, you won't!)
      - "we've just attended a MS Web development seminar that told us our objects should be stateless, so that's what we'll do" (...to the point of having to verify e.g. user identity multiple times for each page loaded. This particular project brought down a country wide intranet when it was deployed, without prior testing because the developers thought "it wasn't necessary")
      - "Bob's got that problem under control. We don't need to worry about it" (...until Bob, the single gun Tandem guy, left the company and we were left with a totally undocumented Tandem interface that nobody understood in the slightest)
      - "that's OK; Microsoft are sending out a consultant to deal with that problem" (...sigh... Does no-one understand that the main job of a field MS consultant is to sell more software licences, not to fix problems?)

      Every one of these issues was dealt with by a lead techo in the manner described above. Mgmt deferred to the lead techo in each case, and the projects suffered as a result.

      Yep, I understand that you could call all these people "managers" rather than "techos", but each of these decisions was made on a supposedly well thought out technical basis. If these people are "management", then so are 90% of the people on Slashdot.

      If we're going to characterise all management as PHBs, then why not also characterise all techos as:
      - making incorrect assumptions, then extrapolating endlessly without attempting to verify the original assumptions
      - assuming tools from Vendor X are golden (yep, I'm continually amazed how often this happens)
      - relying on vendor X to provide a solution to problems when they occur, and not investigating workarounds in a timely fashion
      - believing acceptable performance is always just a few code tweaks away
      - assuming they know more about usability than designated subject matter experts
      - endlessly reinventing wheels ("What? That object isn't a 100% fit for our problem? Better create a whole new one that works 1% differently and requires 1000 new lines of code to be maintained")

      After 20+ years in the industry, I firmly believe that the best IT managers are those who have worked in multiple technical areas, as they can then see through the tech crap as well as the mgmt/project crap.
      • Several times I've worked on projects where "the people doing the actual work" (i.e. techies) have been responsible for the ultimate failure; they've been given too much authority and made decisions that've ultimately sunk the project.
        - "Bob's got that problem under control. We don't need to worry about it" (...until Bob, the single gun Tandem guy, left the company and we were left with a totally undocumented Tandem interface that nobody understood in the slightest)

        That problem is entirely managerial. A
  • I concur (Score:5, Insightful)

    by countach ( 534280 ) on Sunday January 30, 2005 @09:36PM (#11524675)
    I can only agree that very few large IT projects are succeeding. I put the blame partly on managers in charge of the project that are too non-technical and distant from the nuts and bolts of what is going on. They push the freight train on with the theory that the project can be brought in through determination and hard work. It can't. It has to be brought in by clever people who know what they are doing. And these manager types will push the train on till it goes over the cliff when better people would have known much earlier that the bridge was out.
    • Re:I concur (Score:3, Interesting)

      by Otter ( 3800 )
      Out of curiosity, how do you define "succeeding"? Is it a failure when user satisfaction falls short of what had been hoped for, or only when the new project is scrapped entirely in favor of the old setup?

      • Usually you don't get to the point of quibbling over the definition of "succeeding". The whole thing is scrapped and consigned to the bit bucket.
    • Re:I concur (Score:4, Interesting)

      by Chanc_Gorkon ( 94133 ) <<moc.liamg> <ta> <nokrog>> on Sunday January 30, 2005 @10:15PM (#11524942)
      "Hit the nail on the head" fits here.....

      I would also say it's the management's fault entirely when the entire technical staff is saying that they should have dumped this POS and should have never started the project.

      Example:

      Where I work, we had a mainframe-based system with some web enhancements added on. 99.9 percent of the whole thing was custom written in COBOL using very specific specs, and used a non-industry-standard DB called DATACOM. It was developed over a 20-year period at least, with parts of it being in service for almost that long. It was very stable, it worked, and it was able to serve the company well. Then, some management muckety-muck who is not even there anymore, at the suggestion of the president, started looking for an entirely new system. One where the users would not be using a mainframe terminal session. One that was client/server based, in a period when the mainframe was, and still is at this point, getting a resurgence. This was about 6 years ago now. We are entering the third quarter with the new system, and after only one quarter of production we had to upgrade the production boxes, spending another 500,000 to a million with implementation time taken into account. Now that the system is generally faster, users are mostly satisfied, but the IT staff still isn't. A few examples:

      One end-of-quarter process took 30 minutes to an hour on the legacy mainframe, running in real time with users still logged in to the system. The new system? 2-3 hours or more, and the system it's running against must be locked out from changes during the process, because the realtime process can take DAYS!

      Patches as delivered by the vendor regularly don't work. I don't mean that they don't do what they are supposed to do after you get them in....they just won't install. The version of the product we have is a hybrid of what it should be. It still has its dependence on the older database layer being there, and that's just so it can translate its programming calls to Oracle SQL calls. The systems course they provide to train sysadmins on the specifics of the application does not cover about 60-80 percent of the new version of the product....and this product has been out for at least 4-5 YEARS.

      We went from a mainframe to a system that has about 4 times the power, yet the system STILL runs slower. This is even on VERY modern hardware, using 2GB fiber for the SAN. Sure, things are getting better, but this product we bought into absolutely blows, yet Gartner (they suck anyway) and others STILL laud it as a good system. We're not the ONLY ones to complain.

      The deal here is, I would not be surprised if we stuck with it, but it sucks so bad I would not be surprised if they finally canned it. We went from a very efficient system where, if there was a problem with it, the staff knew EXACTLY what to do, and now we have to make half a dozen phone calls to support...support that's only available for 8 hours a day on a system expected to run the company 24 hours a day. It could be weeks or months before we find a solution. The funny thing is, this thing was presented as software that would do everything, yet we had to write many custom modules and paid the vendor to write several custom modules, none of which worked on the INSTALL DAY! Even after doing post-install setup, the program was delivered bug-filled. We had a higher-up in the vendor's company tell the programmers who are working on our part of the system and WROTE the majority of the old system that programming was TRIAL AND ERROR! WHOO!

      And all of this was for the sake of dumping terminal screens in favor of nice purty client/server interfaces. We STILL are not to the point we were at 6 years ago. We're closer today than we were when we initially launched, but we are so far off that it's going to take another 3 years and likely another hardware upgrade to accomplish what's needed.

      Sometimes management should just leave well enough alone.

      You should get scared if your manager starts bringing in books like Tom Peters' Project '04 and Who Moved My Cheese...
      • Re:I concur (Score:3, Insightful)

        by crimoid ( 27373 )
        How much did the original system suck 20 years ago when it was first turned on? Did IT love it then? Would the original system still be supportable 20 years from now?

        My point is that from time to time groups are asked (if not on purpose) to eat dog food for a few years. While it may be true that your new system might be a steaming pile that is destined for the trash bin it may also be equally true that the new system is a few steps toward a flexible, modern system that will carry your organization throug
      • And all of this was for the sake of dumping terminal screens in favor of nice purty client/server interfaces.

        Why not just write a screen scraper that uses the mainframe and presents a shiny new interface? Bet it'd be cheaper and easier to support.
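The screen-scraper idea suggested here can be sketched minimally: read the fixed-width text of a terminal screen and map regions of it to named fields that a new front end can display. The field positions below are hypothetical; a real 3270/5250 scraper would take them from the mainframe application's actual screen layouts.

```python
# Minimal sketch of a terminal screen scraper. The screen map
# (row, starting column, field width) is invented for illustration;
# a real one would mirror the host application's screen definitions.

SCREEN_MAP = {
    "account": (0, 10, 8),    # row 0, cols 10-17
    "balance": (1, 10, 12),   # row 1, cols 10-21
}

def scrape(screen_lines, screen_map):
    """Extract named fields from a list of fixed-width screen rows."""
    fields = {}
    for name, (row, col, width) in screen_map.items():
        fields[name] = screen_lines[row][col:col + width].strip()
    return fields

# A captured terminal screen, as plain rows of text:
screen = [
    "ACCOUNT:  00123456          ",
    "BALANCE:  1,234.56          ",
]
print(scrape(screen, SCREEN_MAP))
# {'account': '00123456', 'balance': '1,234.56'}
```

The new interface only ever sees the scraped field dictionary, so the mainframe side stays untouched.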

        • Re:I concur (Score:2, Insightful)

          by danielrose ( 460523 )
          Because we have about 10 layers of screen scraper type things, which is not fun. One gets the info from the terminal session on a HP-UX box (actually I think its a largish cluster of boxes), writes it out to files on a network drive, which get imported into a datawarehouse which gets pulled into some access database at seemingly random intervals, and back again through a similar process!
          Weeee! It's also fun when the datawarehouse is around the other side of the world and it goes down..
          Its not an enjoyable s
          • Re:I concur (Score:3, Insightful)

            Because we have about 10 layers of screen scraper type things, which is not fun.

            It's not fun, but at least it works (in the scenario I responded to). I don't know your situation, but the other one looked like a political move more than anything.

    • Re:I concur (Score:4, Insightful)

      by Evil Pete ( 73279 ) on Sunday January 30, 2005 @10:24PM (#11524990) Homepage

      I put the blame partly on managers in charge of the project that are too non-technical and distant from the nuts and bolts of what is going on.

      Another factor is that the complexity of some of these projects is non-linear with respect to the 'size' (as say measured in the number of requirements). Government project managers should have a new mantra, something like "Small is achievable". The old 'divide-and-conquer' strategy, one of the first things you learn in programming. Break up the problem into achievable units and then use those to construct the solution. Sometimes the bleeding obvious is the first thing forgotten. Man, I'm just full of cliches today.
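The divide-and-conquer strategy mentioned above, in its textbook programming form: split the problem until each piece is trivially achievable, then combine the solved pieces. Merge sort is the classic illustration (a generic sketch, not tied to any project discussed here):

```python
# Divide-and-conquer in miniature: recursively split, solve the
# trivial base cases, then merge the solved halves back together.

def merge_sort(items):
    if len(items) <= 1:               # "small is achievable": base case
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # divide
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0           # combine the solved sub-problems
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```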

    • Re:I concur (Score:5, Insightful)

      by demachina ( 71715 ) on Monday January 31, 2005 @01:39AM (#11525968)
      Heh. Post on slashdot and ask why IT projects fail and surprise, surprise all the geeks blame it on the managers, venting angst they still feel from some recent IT project that was a disaster and they blame on their manager.

      Big government IT projects, and government contracts in general, fail because there is zero reason for them to succeed; they are designed to fail. Big companies who live to drain tax dollars out of the government, like Lockheed [counterpunch.org], CSC or ESD, can fail on project after project and they will still have a thoroughly good chance of winning new ones. The government rarely withholds payment even if the project craters. So if the government never punishes failure, why would contractors care if they succeed or fail? The worst that will happen is a little bad press; they won't get the next contract for the department they just cratered, but there are always plenty more. After CSC botches it, ESD gets it; they botch, and then they will try CSC again, etc. Same thing for Boeing and Lockheed. Thanks to merger mania in the 90's there are only two aerospace giants left, and it's not much different for every other big government contracting sector. The government has to pick one or the other of the big players no matter if they've cratered on contract after contract.

      You really need to understand how these companies are structured. They are well-oiled machines for identifying opportunities, submitting impressive proposals, using undue influence and landing contracts. They put their best people on winning contracts. Once they win, it's another story. Then they are just putting warm bodies in there to fill slots and bill hours as they march through milestones.

      The irony is a contractor will probably make more money if the project goes bad. If the project goes bad their contract will be extended year after year. The civil servants will just throw more money at the project in the hope that if they just put in a little more they will turn the corner and pull it through.

      If the project comes in on time and on budget, the contractor will make less on it than if it goes bad and overruns, so why should they come in on time and on budget?

      Consider Lockheed's F-22, as described in the link above. In some respects it's an impressive fighter, but at $300 million a copy it's ridiculously expensive. It was supposed to be operational a decade ago, but the government just keeps pouring more and more money into it, though the U.S. already completely dominates every other air force on the planet with the much cheaper planes it already has. Lockheed can continue to develop it for another 20 years and may never field an operational squadron. They were punished for their failure with another $200 billion contract for the Joint Strike Fighter. They were just given a contract to build 5 or 6 Presidential helicopters for $1.5 billion. That's $300 million each for a helicopter.

      Why does the government do it? Simple: the government/contractor system has devolved into a massively corrupt system. There is a giant revolving door between the government, the military and these big contractors. The ambitious and greedy are only taking government and civil service jobs so they can establish connections and influence, do favors for big companies, retire from government and cash in in a massive way with executive positions at those same contractors. Any Air Force general who has ever steered contracts to Boeing or Lockheed has a gold-plated job waiting when they retire, where they can continue to influence contracts, pulling strings with people who used to be below them in the chain of command.

      Darlene Druyan is another classic example. She was one of the Air Force's top civil servants for procurement. She steered a ridiculously lucrative contract to Boeing for 767 tankers, and before the ink was dry she got a lucrative senior executive position at Boeing. Only catch is, it was so blatant that Congress said enough is enough and demanded the
  • Corporations too. (Score:4, Interesting)

    by Anonymous Coward on Sunday January 30, 2005 @09:41PM (#11524706)
    The summary ignores that corporations are just as bad, but that corporations don't admit their problems. This is one area we find the Fed fessing up.
    • Exactly. Example [retrosoftware.com]
    • The summary ignores that corporations are just as bad, but that corporations don't admit their problems.

      Public corporations sure DO admit their problems. Then their stock takes a pummeling, people in charge of the failures might be fired, there's accountability for the failure in some way in almost all cases.

      Now compare that to the government. If the blunder is serious enough, someone might be fired. But then they are right back asking for more of my money to flush down the drain.
    • by mc6809e ( 214243 ) on Sunday January 30, 2005 @11:32PM (#11525382)
      The summary ignores that corporations are just as bad, but that corporations don't admit their problems. This is one area we find the Fed fessing up.

      This just isn't true. Corporations usually do admit their problems. They often have a legal obligation to inform shareholders about why their company lost $$$ dollars.

      And besides, the Fed can afford to fess up. What's it going to cost them? If anything they'll get MORE money, claiming they didn't have enough in the first place to make the project a success. Failure is almost always used to justify more spending. More money is the reward for failure.

      And it's not like the government is going to go out of business but companies that chronically screw-up will and people will lose their jobs.

      Seems to me companies have a much greater incentive to make sure projects succeed.
  • by originalhack ( 142366 ) on Sunday January 30, 2005 @09:41PM (#11524712)
    The government doesn't generally have programmers familiar with exactly how they really work. So, they outsource to big software companies. Instead of rapidly creating a prototype, trying it out, and making sure they are on the right track, they create a massive project and find out way too late that they did not ask for exactly what they needed.

    Businesses trying to outsource their business application development also learn this the hard way. If your programmers are not intimately familiar with how you operate, it doesn't matter how smart they are. Also, if you are trying to create a new way of operating, try experiments first.


    • Instead of rapidly creating a prototype and trying it out and making sure they are on the right track, they create a massive project and find out way too late that they did not ask for exactly what they needed.


      Sounds like the Feds need some Perl Hackers :)

      > The government doesn't generally have programmers familiar with exactly how they really work.

      This is true not just of governments, but of any large organisation bringing in an external resource to implement a project. I've been involved in projects more than once where the spec I've been given hasn't adequately covered the intent of the project.

      In bringing in external resources there's so much scope for misunderstanding that problems will always surface. I've found the key (for me at least) is to ge
  • by antdude ( 79039 ) on Sunday January 30, 2005 @09:42PM (#11524722) Homepage Journal
    Click here [wired.com].
  • by mosel-saar-ruwer ( 732341 ) on Sunday January 30, 2005 @09:45PM (#11524739)

    Our state Department of Transportation is rumored to have embarked on a PeopleSoft ERP/CRM project that has stumbled along for seven or eight years now, has cost the taxpayers something like $125M, and has all of zero PeopleSoft modules up and running and functioning properly.

    The "consultants" on the project are rumored to charge in excess of $200/hr.

    Boy, I sure could use me a gig like that.

    • Uh yeah, and now that Peoplesoft is bought out by Oracle, they'll probably have to restart the project before they can bring it in working.
    • The "consultants" on the project are rumored to charge in excess of $200/hr.

      In the broadcast industry I know a consultant who charges $2,000/hr. Consultants generally charge a lot for two reasons: 1) They are highly trained specialists with 20+ years of experience. 2) They don't always have a contract and are self-employed. (Taxes become much higher in that case.) On the other hand, this guy makes more in an hour than my boss does in a week.
  • by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Sunday January 30, 2005 @09:46PM (#11524751) Journal
    The basic idea of an "IT project" is to implement something that has never been implemented before, or to replace something wholesale.

    This flies in the face of every software engineering textbook. Software, like flora, grows in its environment. Trying to introduce something brand new into an ecosystem is asking for widescale decimation of existing services, as well as an increased likelihood that the introduced species dies.

    So the key to getting working "IT projects" to succeed is to build on past successes. It's never the "Start from nothing, plan, implement" projects that do well. These typically go way over budget and way past the deadlines. It is the little "I need a little tool" projects that start off small and then are brought together or have extra features added to them that succeed.

    Look at your bank's ATM system. When those machines first arrived, they didn't do half of what they can do now. It was through gradually building upon what works and weeding out what doesn't that we got the ease of personal banking we have today. Same with any system, even Linux. Linux started out as a small project to implement a Unix-like kernel. Now it is a huge business and the project itself is much larger in scope than Torvalds' original idea.

    Improvement, not creation, is the key to successful projects.
    • It's never the "Start from nothing, plan, implement" projects that do well. These typically go way over budget and way past the deadlines. It is the little "I need a little tool" projects that start off small and then are brought together or have extra features added to them that succeed.

      It's true that small projects are never huge failures (by definition). The problem with incrementalism is that eventually you wind up with a big hairball where trying to fix one thing breaks three other things.

      • This is true, but you can consider your incremental approach "phase 1" and plan for a from-scratch rebuild (of course it takes discipline to actually do it instead of continuing to hack phase 1). This way you've got a working, deployed model that the nicely architected "phase 2" can use as its base.
        • More blatant stupidity I say. The vast amount of people who do a from-scratch rebuild end up with something worse than what they had. Refactoring, documentation and maintenance of software is the only way to make systems that work.
          • In incremental projects, little attention is paid to long-term performance and architectural issues. Sometimes the only honest way to correct them is through a rewrite.

            I'm not saying you can't/shouldn't reuse segments of the code, but putting lipstick on the pig doesn't solve the problem either.
            • There's nothing that can be achieved from a rewrite that cannot be achieved from refactoring. Nothing. Starting again from scratch is a political act. It's a "blame it all on the other guy" technique and nothing more. You wanna change the underlying architecture? Cool, do it incrementally and maintain the current code base as you do so.
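              The "do it incrementally" approach this poster describes is often done by routing callers through a stable facade and swapping implementations one piece at a time. Here is a toy sketch (mine, not from the thread); the class, operation names, and numbers are all invented for illustration:

```python
# Toy sketch of incremental replacement behind a stable facade:
# callers keep using one entry point while implementations are
# swapped out piecemeal, so the "rewrite" never happens all at once.

class BillingFacade:
    """Stable interface; legacy and rewritten pieces coexist behind it."""

    def __init__(self):
        # Start with every operation routed to legacy code.
        self._handlers = {"invoice": self._legacy_invoice}

    def migrate(self, op, new_handler):
        # Replace one piece at a time, leaving the rest untouched.
        self._handlers[op] = new_handler

    def run(self, op, amount):
        return self._handlers[op](amount)

    def _legacy_invoice(self, amount):
        # Old behavior, kept until its replacement is proven.
        return round(amount * 1.05, 2)


def new_invoice(amount):
    # Rewritten piece; must reproduce the legacy behavior exactly.
    return round(amount * 1.05, 2)


facade = BillingFacade()
before = facade.run("invoice", 100)
facade.migrate("invoice", new_invoice)   # incremental swap
after = facade.run("invoice", 100)
assert before == after                   # behavior preserved across the swap
```

              The point of the sketch is that the architecture can change underneath a working system, which is what "maintain the current code base as you do so" amounts to in practice.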
          • I am in the incrementalist school and agree with you, and with Joel Spolsky's take on "clean-sheet-of-paper rewrites."

            One of the classic big software projects gone bad was the failed revamp of the air traffic control system, which was stitched together from creaking mainframes and old CRT terminals. The rewrite/redo/new system was a classic "Bridge too Far" (Operation Market Garden from WWII was itself a classic "massive rewrite" instead of an incremental approach). They were going to use paratroop

            • But what ever happened to ATC? Are they still using battleship CRT monitors driven by creaking mainframes? Did they come up with something that works using newer technology?

              As I recall, they threw that $3B loser of a system out the window and shifted to a more modular approach. Now, instead of trying to develop a huge monster system that will be all things to all people, they're upgrading the bits and pieces one by one. One I remember hearing about was a modern controller terminal that could emulate the

    • The basic idea of an "IT project" is to implement something that has never been implemented before, or to replace something wholesale.

      I believe that you can have success either way; it just depends on how it's implemented. There is some important groundwork to be laid either way. First and foremost, you need to have organizational "buy-in". In other words, if the entity you're working with is not on board (as in, unsure of what is going on, or resistant to the proposed changes), it's going to make things f
  • by trolluscressida ( 564353 ) on Sunday January 30, 2005 @09:46PM (#11524755)
    When a deputy CIO of the Dept. of Labor and then the Homeland Security Department has bogus degrees [fcw.com] and has never been officially questioned about her educational experience (or lack thereof) for years, it's not hard to see how gov't IT could be atrociously run.

    From other articles about her, she was notorious for promoting her cronies, many of whom were also incompetent, while passing over for promotion and bonuses those who knew what they were doing. Apparently Laura Callahan had a reputation for going ballistic when the occasional techie caught on to her and questioned some of her decisions. In hindsight, it's rather obvious why she was so insecure.
    • A degree means nothing---absolutely nothing---in determining whether someone can do a job. Educational experience means nothing in determining whether someone can do that job, either (unless said job is Education, but then how many Profs have you had that couldn't teach their way out of a wet paper sack? I thought so).
      • Re:Not really (Score:3, Insightful)

        by TykeClone ( 668449 ) *
        A degree means nothing---absolutely nothing---in determining whether someone can do a job. Educational experience means nothing in determining whether someone can do that job, either (unless said job is Education, but then how many Profs have you had that couldn't teach their way out of a wet paper sack? I thought so).

        I don't disagree - but using a false degree to obtain a job is fraud and does say something about the character of the holder.

      • Re: (Score:3, Insightful)

        Comment removed based on user account deletion
    • When a deputy CIO of the Dept. of Labor and then the Homeland Security Department has bogus degrees and has never been officially questioned about her educational experience (or lack thereof) for years, it's not hard to see how gov't IT could be atrociously run. From other articles about her, she was notorious for promoting her cronies, many of whom were also incompetent, while passing over for promotion and bonuses those who knew what they were doing. Apparently Laura Callahan had a reputation for going ballisti
  • On many projects I work on, managers tell us almost nothing about what they want. With the descriptions they give, it could be a game that involves you moving packages from one rabbit to another.

    It's not the grunts, it's the managers who have no idea what they are doing, and we wonder what the hell they are doing in IT.

  • by DelawareBoy ( 757170 ) on Sunday January 30, 2005 @09:50PM (#11524783)
    Most of my development experience tells me that IT development failures are almost always due to a lack of defined requirements. People *think* they know what they want, but they usually don't. And what they do know, they often cannot articulate. Kinda like the Supreme Court's view of pornography: "I can't define it, but I know it when I see it."

    Most of us in the Development / Software Engineering fields see this as a regular occurrence. The same thing is probably happening here.

    • Scope! (Score:5, Insightful)

      by steve_vmwx ( 824627 ) on Sunday January 30, 2005 @10:36PM (#11525049) Homepage Journal
      I thoroughly agree. I'm a project manager who used to be a tech. I became a PM because I'd worked under too many of them that had *no* idea.

      Projects are all about scope: defining what it is that you're doing. Everyone thinks that's bleedingly obvious... and they're right. But it ain't easy to do.

      Once you've got the scope, the rest should be easy. But it isn't. Another classic big-project blunder is the lack of realistic funding and scheduling. Nobody wants to say it's going to take megabucks and go for years.

      So instead you end up with "it won't cost much or take very long". Guess what... budgets and schedules "blow out"! More likely they take as much and as long as someone who understood the aforementioned scope would have said in the first place.

      Even when the basics are followed things go wrong. This is the final in the classic series of blunders. If something is starting to look bad - don't tell the project sponsor... we'll be right... maybe.

      Big no no! Tell the project sponsor *now*! What's wrong, why it went wrong and how you intend to fix it!! You'll get more respect and less stress. Both of which make it more likely you'll get it sorted.

      Ahhhh. I feel better now :) These lessons came to me through the development of a fair amount of scar tissue. These days I never employ a project manager that doesn't burst into tears when asked about their worst project.
    • by Usquebaugh ( 230216 ) on Sunday January 30, 2005 @10:40PM (#11525075)
      It's not that the requirements are undefined; that is a problem, but not the major one. Every project I've worked on has had requirement changes.

      The major problem is that management and techies do not fully comprehend what a lack of requirements means.

      If open source is showing anything, it is that release early, release often is the way to go. Let the users play with your system as early as possible, and be prepared to change everything, because it's doubtful they know what they want either.

      Techies should be writing code that can be easily ported/changed/rebuilt/removed: fluid software. I've lost count of the arguments I've had with other developers who want to build a system the One Right Way. Then they cry foul when the requirements change.

      Managers need to get on board with the change culture. Requirements are at best a guess; don't go planning a release date based on them. If you have to implement on a fixed date, then you'd better be ready to go live with a non-complete system. Plan for a release 2, etc.

      Of course, we've known this since the 70s. I wonder how long before people realise that software should be configured not custom built.

  • The real reason... (Score:4, Informative)

    by br00tus ( 528477 ) on Sunday January 30, 2005 @09:52PM (#11524795)
    The reason these projects are failures, or cost too much, is that they are not being done out of need, but from strings pulled to dole out corporate welfare. Every industry the US is internationally competitive in (except maybe Hollywood) has (or had) most of its R&D paid for by Uncle Sam: aerospace, the Internet, pharmaceutical companies and so forth. It's the old Keynesian thing of the government burying bills in old wine bottles and having some company come and dig them up. Government spending, which in the US usually means Pentagon spending, has been greasing the wheels of the US economy since FDR took office. The only difference between the two major parties is that Republicans tend to want to build rockets/lasers that can shoot down rockets and that sort of thing, while Democrats want the money to go towards biotechnology and the like. If you want to see what's going on, don't look at the end result and try to discern what went wrong; look at the legislative process and what pressures are in effect there. Billions of dollars were not really wasted: they made work for many people. Imagine what unemployment would have been otherwise. It's the old buried-bills-in-wine-bottles story. I mean, think of some of the ridiculous things proposed: billions for a "missile-defense shield"? It's just a way to spread money around. I don't like how the Democrats or Republicans do this; I have other ideas of how that money could be used for make-work.
  • They are trying to make all the solutions work in one fail swoop. When you build a comprehensive system to do it all from the get go expect comprehensive problems.

    I think projects of that scope should stage such large developments: start with a general specification for the system, the desired end result, and interoperability, then develop and roll out modules progressively. As you debug the core modules and define the additions, you can tune/revamp as you go. Then when you have an unexpected problem with your thousand clients, you are only dealing with a portion of the functionality, not some monolithic spaghetti code. By the time you get to the end of the development, you are working on a field-hardened platform.

    Would it take longer or cost more? Well, given the time and cost overruns of many of these big projects, it might be more economical, if not more timely.

    • Shamelessly picking on the innocent typo:

      They are trying to make all the solutions work in one fail swoop.

      Well, no wonder it isn't working! Maybe the Fed should try the much more effective 'success swoops'...

      -Peter "One swell foop" Hamlen
  • by Anonymous Coward
    The Navy-Marine Corps Intranet (NMCI) is an ongoing example of a federally-funded IT project that has failed to deliver. For over $6 billion, the Navy was promised a complete data, voice and wireless communications system that was fast, secure and reliable. The voice and wireless portions were vapor before the project even began. The data network is a discombobulated mess of machines with software that's already three generations behind and a level of bandwidth that's adequate at best and pitiful at worst.
    • The reason the NMCI is so bureaucratic: users are idiots.

      Yes, it's harsh. But as long as users want to install their own software "just like at home" or do other things that risk security, secure networks must be physically isolated from others. No other method will ensure that Bob down the hall won't accidentally share secret information on Kazaa.

      Stupid users cause stupid policies. Add to that the military-bureaucratic mindset, and you've got exactly what the parent describes.
  • What I've seen is that the government hires contractors to do the work, asks them for input, then totally ignores what the contractors recommend. As an example: recently, a group I work with was asked what it would take to improve the computer systems and build an identical setup for a hot backup. We did our research, talked with vendors, and put together a proposal that we presented to the government (and we didn't even gold-plate it).

    They took our research and gave it to someone who doesn't know much about
  • Primarily written for a UK audience, where we are well used to spectacular government IT project failure and runaway spending (ID cards? Erm, with your track record? You must be joking!)

    This book is excellent:

    http://www.iee.org/OnComms/PN/Management/Troubled_IT_Projects.cfm

    It contains much wisdom on the subject of major IT project failure and quite a lot of insightful material taken from notable historical project cock-ups.

    I like the approach of identifying 40 common root causes (a good proportion of w
  • by Futurepower(R) ( 558542 ) on Sunday January 30, 2005 @09:59PM (#11524842) Homepage

    'Failures are very common, and they've been common for a long time.'

    They aren't really failures. They are deliberate government corruption. Anything that has been "common for a long time", with no effective corrective action, is deliberate.

    The IT corruption is small compared to the military procurement corruption, where the dishonesty can be kept more secret. The U.S. government is the world leader in buying equipment to kill people and destroy their property. (The least socially skilled way of relating to other people is killing them.)

    U.S. citizens, it's 7 PM. Do you know what your government is doing? Unfortunately, you don't.
  • by Kwil ( 53679 )
    ..I wonder why? [slashdot.org]
  • Sure- large, monolithic projects often fail. For the same reasons that large, monolithic anything [coldwar.org] often fails. IT projects, development projects, governments, all have this in common. Despite all the highly paid consultants and theorists (who seem to specialize in incompetence and high billing rates), this basic fact of human nature still keeps keepin on.

    Smaller projects [google.com] succeed more often, involve smaller risks, shorter schedules, and faster results. Open source software development projects are an

  • Business analysts don't get enough time with business experts because the experts are doing their jobs and can't spare enough time to pass on their experience. They're experts because they do their job well and can't be spared, and infinitely more so in a life-or-death job like law enforcement or counter-terrorism. And, because of this dilemma, things get messy. System architects can't plan the infrastructure accurately, developers can't code the application that the business experts want, and testers can't
  • by Tablizer ( 95088 ) on Sunday January 30, 2005 @10:39PM (#11525069) Journal
    My observation is that it's a combination of territorial office politics, automating bad processes instead of fixing them, and not learning from past projects.

    The second is a case where there are all kinds of intertangled, unnecessarily complicated business rules that are required or requested. Often these are dictated by legislation or attempts to "satisfy all stake-holders".

    There should be some kind of bidding process on features, such that features which gum up the works are charged to the customer somehow. Perhaps have a cost/benefit analysis/estimate done on each feature, and chop off the ones that rank low (by being either too low priority or too costly).

    Another thing I find totally lacking is any documentation of the design decisions. Before spending gazillion dollars on a fat project, the design and architecture should be seen and/or suggested by several expert eyes and every one of their written critiques and evaluations should be saved, whether used or not.

    Then when a project succeeds or fails, one can see which ideas and/or which consultant/expert seemed to have the best insight or vision. Otherwise you keep reinventing the same mistakes over and over again.
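    The cost/benefit triage this poster suggests can be sketched in a few lines: score each feature, rank by benefit-per-cost, and chop whatever falls below a bar. A toy illustration (mine, not from the thread); all feature names, benefit scores, and costs are invented:

```python
# Toy sketch of cost/benefit feature triage: rank features by their
# estimated benefit/cost ratio and drop the ones that "gum up the works".

def triage(features, min_ratio=1.0):
    """Return feature names ranked by benefit/cost, dropping low ratios."""
    ranked = sorted(features, key=lambda f: f["benefit"] / f["cost"],
                    reverse=True)
    return [f["name"] for f in ranked
            if f["benefit"] / f["cost"] >= min_ratio]

# All numbers below are made-up estimates, as a real analysis would produce.
features = [
    {"name": "core case tracking",      "benefit": 90, "cost": 30},
    {"name": "single sign-on",          "benefit": 40, "cost": 20},
    {"name": "animated dashboard",      "benefit": 10, "cost": 25},
    {"name": "legacy-report emulation", "benefit": 15, "cost": 60},
]

print(triage(features))  # → ['core case tracking', 'single sign-on']
```

    The interesting part is not the arithmetic but the forcing function: every requested feature has to show up with an estimate attached, so the low-value ones become visible instead of quietly inflating the scope.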
  • There are large IT project failures all over the planet, not just in the US. The UK has had several big implosions, and here in Germany we've had bad projects like the Maut (satellite truck-tolling system), the job database for the Bundesagentur für Arbeit (the government unemployment agency), and a big criminal database for the BKA (like the FBI).

    Most of them go down big time, because companies take the government/taxpayer for a ride:

    (sorry all links in german)

    http://www.heise.de/newsticker/meldung/45522

    htt
  • It seems to me that a major problem is accountability. If a major IT project fails, I don't see any of the people in charge in the government fired. I don't see the contractors put on the list of "people we are *never* hiring again". In fact I think in the UK at least the same contractor responsible for most of the major government IT failures is hired over and over again. Why on earth *should* these projects succeed when everyone knows that if they fail in the end they will not be personally held accou
  • ...is the 'sunk cost' fallacy.

    Any major public project that gets off track tends to stumble on too long because nobody is game to pull the plug - the political consequences are too great. For example, if a project is clearly hopeless, and has absorbed, say, $100M, it's difficult for a politician to shitcan it, because people will be up in arms at having spent all that money for nothing. They'll expect the government to throw more money at the project to finish it, rather than "waste" the money already spen
  • by Raul654 ( 453029 ) on Sunday January 30, 2005 @10:52PM (#11525136) Homepage
    With BlueGene, the US gov't approached IBM and told them "We want the fastest supercomputers in the world. We want to eventually reach the Petaflop range. Here's some money. Do it," and IBM happily complied. Late last year, the BlueGene/L prototype recaptured the title of world's fastest computer from the Japanese. The BlueGene/C design is due (on time) in June and should be available from the foundry in August (full disclosure: my grad work involves testing & verification of this). The lesson? Where IT is concerned, the Government has a legitimate interest in outsourcing it to reliable companies (preferably US based for security reasons).
    • (preferably US based for security reasons)

      US based for political reasons, perhaps. But for security reasons? Why would the developers need access to real data? They wouldn't. So you are proposing building a secure system via obscurity, and keeping that obscure information to citizens? Why do you assume that one of those citizens with obscure knowledge won't want to get fucked by some hot spy someday? Or want a pile of cash?

      Is this the same IBM that just sold its PC business to a Chinese company? The government is currently scrambling to evaluate the security implications of letting a Chinese company deliver personal computers on a bunch of government and military contracts IBM has already landed. Not sure it really gives them great street cred as a pillar of trustworthiness in government contracting.

      Not sure you can really cite any of the big supercomputers as illustrative of how a company will perform on a big and
  • Mismanagement (Score:3, Insightful)

    by OverflowingBitBucket ( 464177 ) on Sunday January 30, 2005 @11:01PM (#11525181) Homepage Journal
    Mismanagement. Same as large projects in commercial organisations.

    Basically, layers upon layers of people produced by a hierarchical system that encourages mediocrity in ability and excellence in deception. Liars with superb brown-nosing skills and minimal ability leading skilled developers without those career-climbing skills. Recommendations from the knowledgeable ignored for reasons as petty as favouritism and wounded pride. Uninformed directives passed down. Valuable feedback distorted or disregarded going upwards. Career employees who couldn't care less doing the minimum they can and maximising the amount they seem to be doing. Burnt-out and defeated employees who believed once but can no longer care. Code and credit theft. Incompetents hiding their errors by sabotaging the work of others. Narcissistic managers who simply don't want negative feedback. Accomplishments delayed or distorted to fit the drip-feed system of delusional managers. Sacrifices of the innocent to protect the careers of the guilty. All held in place by a system that encourages all the negative aspects and hides it away in a nice, neat, convoluted bundle.

    Did I miss anything? ;)

    And people are surprised that large projects frequently fail.

  • Hey, the $170 million the FBI "lost" in funding the overhaul made SAIC (the vendors) $170 million richer. So I guess that's a good thing, right?
  • I think this mindset tends to miss the obvious point. Industry-wide (pick any industry), large IT projects fail. The same is true of medium and small projects too. Arbitrarily focusing on U.S. Government projects is pointless.

    Having worked for a large private sector business that sold its soul up the SAP river I know a little about how systemic this problem is. Managers foolishly believe marketing material and expect miracles. From what I have seen your best hope is to start a bunch of projects and hope a

  • I actually live and work in the Washington, DC metro area, and I have been involved in many government computer projects, both with my current company and with other companies. A serious source of problems is the military's personnel system that assigns an officer for two years in a particular position, and then he or she moves on.

    This is just long enough to figure out that what's there is not working, figuring out what needs to be done, writing up the necessary paperwork, slashing through the procurement
  • It seems to me (as well as half the other posters!) that a big part of the problem is the propensity of government IT to go for "big bang" projects, with huge scope/requirements and huge cost. By virtue of the project size, these contracts can only go to large companies, and they tend to be hard to manage... on both sides of the fence.

    If the wheels come off a big project, the gov't typically pays much more than it anticipated, and/or ends up with problematic software. The problems are things like:

  • I'm really not sure that private business does much better. I've been involved in a lot of projects where we get done and then, for a variety of reasons, the new software doesn't get fully used. I've had an entire year where, if I had bluffed and done nothing the whole time, it would have had the same effect. The only difference is the scale. I suspect that if every IT project undertaken by corporations succeeded 100% of the time, the average IT staff could be at least halved.
  • The problems of a terrorist tracking database have already been well discussed in Trevanian's classic Shibumi [mysteryinkonline.com]:

    The value of color-coding came under criticism when the system was applied to more intricate problems. For instance, active supporters of the Provisional IRA and of the various Ulster defense organizations were randomly assigned green or orange cards, because Fat Boy's review of the tactics, philosophy, and effectiveness of the two groups made them indistinguishable from one another.

    Another m

  • I have said this before, and at the risk of sounding like a broken record will say it again: if you sold used cars the way software is sold, you would be in prison. If you sold real estate the way software is sold, you would be in prison. 'No' is not an effective sales tool, so sales droids end up selling vaporware with poor requirements. Six months seems to be the magic number: whatever it is, it can be done in six months.

    Add in the rampant anti-intellectualism in IT and management and you have a recipe for disaster
