
Everyone Cites That 'Bugs Are 100x More Expensive To Fix in Production' Research, But the Study Might Not Even Exist (theregister.com)

"Software research is a train wreck," says Hillel Wayne, a Chicago-based software consultant who specialises in formal methods, instancing the received wisdom that bugs are way more expensive to fix once software is deployed. Wayne did some research, noting that "if you Google 'cost of a software bug' you will get tons of articles that say 'bugs found in requirements are 100x cheaper than bugs found in implementations.' They all use this chart from the 'IBM Systems Sciences Institute'... There's one tiny problem with the IBM Systems Sciences Institute study: it doesn't exist." The Register: Laurent Bossavit, an Agile methodology expert and technical advisor at software consultancy CodeWorks in Paris, has dedicated some time to this matter, and has a post on GitHub called "Degrees of intellectual dishonesty". Bossavit referenced a successful 1987 book by Roger S Pressman called Software Engineering: a Practitioner's Approach, which states: "To illustrate the cost impact of early error detection, we consider a series of relative costs that are based on actual cost data collected for large software projects [IBM81]." The reference to [IBM81] notes that the information comes from "course notes" at the IBM Systems Sciences Institute. Bossavit discovered, though, that many other publications have referenced Pressman's book as the authoritative source for this research, disguising its tentative nature.

Bossavit took the time to investigate the existence of the IBM Systems Science Institute, concluding that it was "an internal training program for employees." No data was available to support the figures in the chart, which shows a neat 100x the cost of fixing a bug once software is in maintenance. "The original project data, if any exist, are not more recent than 1981, and probably older; and could be as old as 1967," said Bossavit, who also described "wanting to crawl into a hole when I encounter bullshit masquerading as empirical support for a claim, such as 'defects cost more to fix the later you fix them'."

This discussion has been archived. No new comments can be posted.

  • by luvirini ( 753157 ) on Friday July 23, 2021 @01:44PM (#61612763)

    I mean if you look at the control software for 737 MAX, I am sure that making the thing properly would have been a LOT cheaper than the billions lost to the problems.

    • by mykepredko ( 40154 ) on Friday July 23, 2021 @02:03PM (#61612907) Homepage

      From everything I've read on the issue, the software was well written and worked exactly as specified.

      The requirements for the software (i.e., paying extra for the software to use the inputs from two AOA sensors) were the problem.

      • by jellomizer ( 103300 ) on Friday July 23, 2021 @02:19PM (#61613011)

        I remember getting in a lot of trouble in my MBA Software Engineering class. The biggest issue I had was that they assumed the end users/actors knew what the requirements were, and that we should focus on building the objects based on what the user requirements were.
        I was the only developer in the class, and it was frustrating, because the other class members thought that we could just get the requirements by asking people what they need.

        Here is the thing: they don't know what they need, and they don't know what they want. What they think they need may not be what someone else needs, but that someone else cannot get the data they need without the first person.

        UML and OOP fall apart very quickly in the real world. They are good as a general guideline, but strictly following them will lead you to make a POS product (even if it is not a Point of Sale system).

        Experience has taught me to put hooks into my program to deal with what I feel may be a new requirement. Rather than using an Object, perhaps an enumerated array or List would work better. Because if a new data entry point is needed, all I need to do is add that column to the database, and boom, I can use it without a scary recompile and redeploy of the product; plus, if it fails, I can roll it back.
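        The "hooks" idea described above can be sketched roughly as follows. This is an illustrative sketch, not the commenter's actual code; the table layout, function names, and column names are all hypothetical:

        ```python
        # Sketch of the "enumerated list instead of fixed object fields" idea:
        # treat a record as a mapping of column name -> value rather than a
        # class with hard-coded attributes.  A column added to the database
        # after deployment then flows through without a recompile; only the
        # query/schema changes.  (All names here are hypothetical.)

        def load_record(row, column_names):
            """Build a record from whatever columns the query returned."""
            return dict(zip(column_names, row))

        def render_record(record):
            """Display every column, including ones added after deployment."""
            return ", ".join(f"{name}={value}" for name, value in record.items())

        # Simulated result set: the second row comes from a schema that
        # gained an extra "priority" column after the code shipped.
        columns_v1 = ["id", "name"]
        columns_v2 = ["id", "name", "priority"]

        print(render_record(load_record((1, "widget"), columns_v1)))
        print(render_record(load_record((2, "gadget", "high"), columns_v2)))
        ```

        The trade-off, as the replies below note, is that code which must act on the new column (rather than merely store or display it) still has to be changed.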
           

        • Here is the thing: they don't know what they need, and they don't know what they want. What they think they need may not be what someone else needs, but that someone else cannot get the data they need without the first person.

          Sounds a bit like marketing. Supposedly those customers don't know what they need, or even that they need it, till someone tells them.

        • UML and OOP fall apart very quickly in the real world.
          Perhaps you are living in a very sad world then?

          In my world they work perfectly fine.

          I can use it without a scary recompile and redeploy of the product
          No, you can't, as your "product" knows nothing about that new column.

        • by jythie ( 914043 )
          I would not say they fall apart, but they have gotten overused. A lot of that software engineering methodology was based on the idea that you had customers with known requirements: they had a system they were building and they needed something else to interface with it in a specific way, because they were also designing things out. There, it is extremely useful. .. but applying it to consumer or business systems gets a lot less helpful, since consumers and businesses only care about the final f
        • Comment removed based on user account deletion
          • Read a software engineering text some years ago that had a chapter on experimentation on software engineering, an almost unheard-of thing for a S-E text to do. They prefaced their later chapter on OO with a paragraph that said, in effect, "there's been no real evidence that this stuff makes things any better, but we're including it here because it's the current fashion".

            So there are two answers to your question, firstly a lot of this stuff is fashion-based rather than science-based so there's as much empi

        • ARH! I remember when I wrote the property management software for one company: the instructions from the owners were nothing like the information the people who processed the data needed. Management had the wrong idea of how their own company worked. For another company I was to write the software for monitoring the making of concrete, but the higher-ups I was forced to work with could not give me the needed details of what and when different items were entered into the furnaces. And worse, they did not let
      • But creating the requirements is part of the development process! Don't look just at coding; coding is the easy part.

        But in practical terms, I have seen the huge expenses that have come from major bugs being shipped; customer contracts get canceled, prospective customers back off, everything is put on hold while the team rapidly tries to figure out the bug and fix. And often the same issues are at fault when I've seen (or created) these issues: lack of design reviews (or even design), lack of proper testing

      • This is a fallacious way to measure the cost/benefit of fixing bugs.

        It is like saying that lottery tickets are a good investment because you get a million times your money back.

        Sure, the 737MAX sensor debacle cost billions and would have been cheap to fix. If Boeing had spent money upfront to prevent the problem, they would have won the lottery.

        But there are many other bugs that went unfixed and made no difference. Identifying and fixing them would have incurred costs for no benefit, like buying losing lottery tickets.

        • by Zak3056 ( 69287 )

          But there are many other bugs that went unfixed and made no difference. Identifying and fixing them would have incurred costs for no benefit, like buying losing lottery tickets.

          I think you're making the same mistake the author of the article is, here, by conflating the idea that "fixing errors earlier is cheaper" (true) with "exhaustive QA is always justified to find errors early in the process, because it's cheaper" (which is false--it MAY or may not be true for a given application, depending on a host of factors).

          The idea is not QA at any cost, rather it's stating a simple truism: fixing a mistake is always more expensive than never having made the mistake, and fixing your mistake as early as possible is less expensive than fixing it later.

          • by dgatwood ( 11270 )

            The idea is not QA at any cost, rather it's stating a simple truism: fixing a mistake is always more expensive than never having made the mistake, and fixing your mistake as early as possible is less expensive than fixing it later.

            Except even that isn't true. Often, fixing a problem later, after you've thoroughly debugged everything else, is far easier (and thus less expensive) than fixing that bug earlier in the process, when you're fighting with unrelated problems while trying to debug it, or worse, when aspects of the overall design are still in flux.

            Whether a bug is cheaper to fix earlier or later largely depends on whether fixing the bug requires a significant design change or not. If so, then it will probably be cheaper to fi

        • Since they had no crystal ball to tell them which bugs would cause future problems, they would have had to make a broad investment in better quality, and that would have cost far more than just fixing one bug.

          Assuming a retarded point of view: suppose the guys working on "fixing" bugs early all earn a million per year, because they are so important. How many of them can you hire and let them work for, let's say, 10 years?

          You have complete misunderstanding about "fixing bugs". It is not the fixing, it is th

          • by cusco ( 717999 )

            Anyone with a history of working in the aircraft industry, or for that matter anyone with experience writing mission-critical real-time never-fail software, would have sent the design specs for the 737 MAX MCAS back with some thoroughly acerbic comments. Unfortunately those sorts of people are expensive, and Boeing had laid all their experienced programmers off. Instead the company contracted with the lowest bidders, who normally wrote software for (IIRC) banks. The reliance on a single sensor was bad enough,

    • The problem was not the software.

      The problem was that Boeing claimed no additional pilot training was required, and did not mention the existence of the software assistance in any manuals.

      Then it compounded the error by not mandating additional redundant angle-of-attack sensors. Some airlines ordered additional sensors as optional equipment, but not all did.

      Further, we have repeatedly cut funding for the FAA, and it is not capable of doing any independent verification of the airplanes. All airplanes are not basicall

      • by uncqual ( 836337 )

        There clearly was a problem with pilot training. However, existing pilot training should have worked fine -- the situation looked just like the uncontrolled trim changes that it was and the answer was always "Shut the damned automatic trim adjustment off and use the manual controls", but pilots didn't do so or do so quickly enough.

        However, if the problem lingered for a while before a poorly trained pilot figured it out, it became impossible to use the recovery procedure because the manual trim wheel became impossible to turn due to aerodynamic forces on the aero surfaces.

        • it became impossible to use the recovery procedure because the manual trim wheel became impossible to turn due to aerodynamic forces on the aero surfaces.
          Nope.
          It became impossible to trim, because the trimming system was not switched off.
          So 1 or 2 pilots had to fight the engines that actually wanted to counter their manual trimming.
          It has absolutely nothing to do with aerodynamics.

          However, if the problem lingered for a while before a poorly trained pilot figured it out
          That is insulting. The pilots were not "p

          • by uncqual ( 836337 )

            Investigation and simulator trials showed that the trim wheels could not be turned enough by pilots once the planes got into a dive beyond some angle and/or an overspeed condition (both of which existed in the latter stages of flight in these crashes), even with the automatic trimming system turned off. In other words, well before "uncontrolled flight into terrain", it would have been impossible for the pilots to recover. Yes, earlier, they could obviously have recovered if they quickly figured out to follow the "

        • by cusco ( 717999 )

          Actually one of the flights found the switch to disconnect the MCAS, turned it off, and between the three cockpit crew managed to crank the elevator trim wheel back. They thought they had turned off the automatic trim, so after they had recovered at a lower altitude turned it back on. The switch didn't actually reset the MCAS though, just disconnected it, so when they turned it back on the MCAS was still at its max setting and flew them into the ground before they could recover.

      • by cusco ( 717999 )

          Oh, the problem was most certainly the software. I posted this link in the post above, but it's worth a repost here. It was incredibly badly written; it contained errors that should have been laughed out of BASIC Programming 101.
        https://spectrum.ieee.org/aero... [ieee.org]

    • Years ago, IBM published the number 12X, not 100X. To me this sounds like the school game of passing a message from one person to the next, then comparing the original to the one that arrives at the last student. People can't help but exaggerate. There's another reason that this study is meaningless - nobody cares about bugs. Most s/w is provided free on the internet and nobody cares if your user experience sucks, because you are not the customer. You are the product. The advertisers who fund the s/w deve
    • I mean if you look at the control software for 737 MAX, I am sure that making the thing properly would have been a LOT cheaper than the billions lost to the problems.

      There's a difference between a bug and a braindead design flaw that anyone who has even taken a 2 hour corporate workshop on safety dumbed down for management could have told you is a bad idea.

    • by ras ( 84108 )

      I am sure that making the thing properly would have been a LOT cheaper than the billions lost to the problems.

      True, but that's not what the headline says. The headline uses weasel words. I've not seen anyone claim this:

      Everyone Cites That 'Bugs Are 100x More Expensive To Fix in Production' Research,

      Many of software's problems are caused because everyone knows the reverse is true. For the software developer, it's far cheaper to ship it, let your customers do the work of finding the problems, and fix the

    • by rtb61 ( 674572 )

      Well, you see, it's like this. All software corporations customers, well they pay 10,000 times the costs in the losses associated with bugs disrupting their operations, than 100 times the cost of fixing bugs that way, as in, fuck the customer make those bastards pay for it, MWA HAH HAH, seriously they laugh like that, I am not kidding, they really do. They do not give one fuck about the cost of the bug upon you, no shits given as in FUCK YOU PAY FOR THE NEXT UPGRADE with more fucking bugs, yes you moron, yo

  • by UnknowingFool ( 672806 ) on Friday July 23, 2021 @01:45PM (#61612769)
    In my experience, bugs that cause production down issues are more expensive. I cannot quantify that they are 100x more expensive.
    • by Entrope ( 68843 ) on Friday July 23, 2021 @01:51PM (#61612821) Homepage

      Yeah, just about everyone who develops software that gets delivered to, and used by, someone else understands that the basic idea -- of roughly geometric growth in defect cost as a function of stage -- is true.

      There is room to argue about the exact scale, but when a single-character typo can keep millions of people from logging into their Chromebooks, 100x may also be an understatement.
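      The "roughly geometric growth" idea above can be made concrete with a toy model. To be clear, the base cost and per-stage ratio below are arbitrary illustrative numbers, not empirical data; as the article notes, no rigorous measurement backs any particular figure:

      ```python
      # Toy model: defect cost multiplies by a fixed ratio at each later
      # lifecycle stage.  The ratio of 3.0 is an arbitrary illustration,
      # not a measured value.

      STAGES = ["requirements", "design", "implementation", "testing", "production"]

      def defect_cost(stage, base=1.0, ratio=3.0):
          """Relative cost to fix a defect found at the given stage."""
          return base * ratio ** STAGES.index(stage)

      for stage in STAGES:
          print(f"{stage:>14}: {defect_cost(stage):g}x")

      # With ratio=3 and five stages, production works out to 3**4 = 81x
      # requirements; a slightly larger ratio reproduces the oft-quoted 100x.
      ```

      The point of the model is only that a modest per-stage multiplier compounds quickly, which is why the exact headline number matters less than the shape of the curve.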

    • There is more than one study supporting this point. As you say, quantifying it is difficult, but it's large.

    • I'm not sure anyone can prove this, but I don't think it's an exaggeration. It sounds about right in my experience. I've seen bugs that certainly cost way more than 100x what it would have taken to avoid them. Not 100x to fix, specifically, but 100x the total cost including the cost of resulting downtime, loss of reputation, and loss of sales. And even the occasional lawsuit.

      Good software may be costly, but, in my experience at least, bad software is WAY more so.

      • by jythie ( 914043 )
        There is also the 'technical debt' issue of fixing bugs: any error that affects a system will probably have other parts of the system built around it, meaning if you fix the bug, you break everything that depended on the bug being there.
    • Possibly 1000x more expensive. I have seen millions of dollars being lost due to a trivial bug, possibly more with knock-on effects (bad word of mouth causing loss of future sales).

      Sure, the development and testing is expensive. Lots of testers, lots of time, lots of documentation, etc. But on the other side, when it's all-hands-on-deck with a customer emergency you have all that expense plus the added expense of half the company wanting hourly status updates, the manufacturing lines are halted, contra

      • It also depends on the nature of the production-down event and whether it can be mitigated temporarily before a code fix, i.e. switching to manual. Sometimes what appears to be a bug is not a bug. One time my company's software failed over a weekend and all production was halted at a client site. They did not lose any product; work was just stopped until our software was "fixed". The problem turned out not to be our software but a major database update that had changed settings that needed to be restored. We were not informed that
    • Well,
      I know bugs that cost a company I worked for 600.000.000 Euro.
      Per year. For more than 10 years or so.

      It was however a conglomerate of inappropriate tools (Excel), slightly flawed algorithms (not too much), and programming mistakes.

      Main problem: that Excel solution was too slow to run the calculations several times with slightly varying input parameters so that an expert could judge among 3 or 4 possible results.

      So: something we fixed in a few months, for development costs of about 4 million Euro, had c

  • Maybe the initial inspiration for this "wisdom" comes from book publishing / printing. I imagine it is a LOT cheaper to fix errors in development than production ... Just a SWAG though.

  • Maybe, but a lot has also changed in forty years, both in how we create software and how we fix it. So even if there was a study, would its results be applicable now?

    • Agreed.
      I have done some coding in FORTRAN 77. If I had to fix the code, I would first print out the code and trace the flow with a pencil (they loved to use GOTOs). After I had traced out the code, I would see where I could put in the fix, and then I would see what else that would affect. It took me a whole day to do a line of code.
      Whereas a Python program that does the same thing is much easier to fix, because the code changes are better isolated, in functions or classes, with less goto spaghetti

    • Software is often just a tiny part of the full product development cycle. A disruption from the bug affects far more than the team who are diverted to fix it. And even though things have changed in forty years, the costs have also gotten bigger. Instead of just a few customers, you may have thousands, or if it's a consumer item it could be millions of affected items. If you told me back in 1980 that I would have a bug that potentially affected tens of millions of devices I would have laughed; but I wa

  • by 140Mandak262Jamuna ( 970587 ) on Friday July 23, 2021 @01:53PM (#61612831) Journal

    bugs found in requirements are 100x cheaper than bugs found in implementations.'

    The Agile method, when correctly implemented using Agile developers, will save many millions of dollars in software development. The technique depends on the astute observation that bugs found cost money. So it makes bugs impossible to find. If you don't test it, you won't find it. Save money on both testing and fixing! Since the developers are Agile, they will have fled the scene of the crime long before the bugs are found in implementations.

    • Following on your lead, our team has gone all-in on microservices. Each line of code is its own service.

      As you can imagine, our bug count per module is extremely low. We don't worry about integration bugs.

      • by ffkom ( 3519199 )
        Yes, and should there be any concerns about the performance, then that is not your fault, latency is the responsibility of all those fancy "web frameworks", load-balancers etc., and throughput is achieved by having some cloud provider auto-scale your micro-services to 100,000 or so instances, each.
        • Since computer time is cheaper than programmer time, no effort should ever be expended to make code more efficient. Never.

  • by SuperKendall ( 25149 ) on Friday July 23, 2021 @01:56PM (#61612855)

    There's one tiny problem with the IBM Systems Sciences Institute study: it doesn't exist."

    Well that stands to reason, since production bugs sometimes don't exist when you try to reproduce them on development systems... :-)

    • Pfft. IBM never has production bugs. They are features, which you can find in the 5th paragraph on page 2458 of the product specification. You did read the product specification, didn't you? Well, now that you want to deviate from the spec, that is going to cost you. I'll have Bill get in touch with you with a quote.

  • Oftentimes, software anomalies identified as bugs by neophytes are not errors at all, but a failure of the designer or the end user to accurately convey what they wanted or needed to begin with.

    • It's still a bug though. Doesn't mean the coder's job begins and ends with the code; they need to read and understand all the specifications and requirements, create a design, properly convey the necessary information to test the code and its implementation and its integration, and so forth. If the coder saw that there was a problem, then it should have been reported. I have seen a dev saying effectively "I only did what I was told" despite being the senior person on the team who should have stepped up and said

      • It's still a bug though.
        Strictly speaking: nope.
        It is a defect in the design.

        Doesn't mean the coder's job begins and ends with the code;
        Define coder then. If it is a low-level programmer who converts specs into code: then it is most certainly not his job to grasp that the business is demanding something of him which is not useful for said business!! A coder is not required to be a business expert in the area his code is supposed to work in.

        A requirements engineer, yes. A high level expert programmer, yes, mo

        • As far as a product goes, the development effort includes everything. Software and firmware developers are not some special case sitting off in a silo. If the design is wrong and the product does not work as expected, then it's a bug. The customer thinks that it's a bug and that is what matters. That does not mean pointing fingers at the software people. But the software people need to get out of their silo and raise issues with the design if they see them; they are not automata who should just shut up

          • If the design is wrong and the product does not work as expected, then it's a bug.
            No it is not.

            The customer thinks that it's a bug and that is what matters.
            A smart customer will realize he specced it wrong. There is really nothing a "programmer" can do about that.
            And fixing it is a change request, not a bug fix.

            But the software people need to get out of their silo and raise issues with the design if they see them
            Of course: if they see them

  • by Zak3056 ( 69287 ) on Friday July 23, 2021 @02:02PM (#61612903) Journal

    The idea that fixing errors earlier is less costly is not confined to software--it applies to ANY process. Adding labor on top of your mistakes results in even more errors that have to be fixed, ergo cost is higher to fix errors later in the process.

    It's also easily falsifiable (all you need to do is provide counter examples where fixing something later in the process is less expensive than sooner). I'm going to go out on a limb and suggest you will fail in this endeavor (unless you build a strawman by doing something like comparing the cost of exhaustive code reviews vs seat of the pants "fuck it, we'll do it live" mentality).

    • I see this sort of attitude with web development, products that are essentially just web sites. Then you just have continuous integration and rollout, with developers and operators being literally the same people. So who cares if it's a bug? We can fix it the next day! The end result is buggy web sites, and web sites that continuously change and baffle the users (including products that are web based but masquerade as applications, such as Microsoft Teams).

      It's still expensive though when customers start l

  • by sheph ( 955019 ) on Friday July 23, 2021 @02:04PM (#61612921)
    It may not be 100x the cost, but basic logic should tell you it's more advantageous to find bugs in the design phase. Once software has been built and shipped to customers, fixing problems becomes exponentially more expensive. I had to deal with it working in software development back in the late 90s. The company gets tons of calls and buries support. Support spends hours trying to resolve the issue. Development finally gets involved and goes "oh, didn't think about that". A developer works late to fix the issue. Re-compile, retesting, extra testing to make sure we didn't miss anything else, lots of meetings with managers, and when you finally get a gold image, ship that off to create CDs to send out to all of our customers. Everyone is frustrated, lots of time has been spent, and if someone had put a meeting together and really spent a few hours of quality time thinking about the implementation of the design, it could all have been avoided.
  • I have never claimed this 100x cost difference in any of the papers I have published, and I have never cited Software Engineering by Roger S Pressman. So the claim that everyone asserts/cites/claims this is not correct.
  • "wanting to crawl into a hole when I encounter bullshit masquerading as empirical support for a claim, such as 'defects cost more to fix the later you fix them'."

    People who look for excuses to not fix/improve their code inevitably have bad code. It's a cause/effect relationship.

  • by WaffleMonster ( 969671 ) on Friday July 23, 2021 @02:12PM (#61612953)

    Leave it to an "Agile methodology expert" to go out of their way to claim bugs later in the cycle are cheaper than common dogma holds.

    • by deKernel ( 65640 )

      I came here to say the same darn thing.

    • by bsolar ( 1176767 )

      You don't need to go that much "out of the way"; the issue is pretty simple, and it's not calculating the cost of finding the bug earlier. You know the cost of the bug found later in the cycle, but not how much it would have cost to find it earlier. Many underestimate that cost for some classes of issues.

      If we're talking of simple programming errors which can be found with e.g. better code reviews or testing, of course they should be found earlier in a relatively cheap fashion, so the cost of finding them later

    • Perhaps you should look into the two most common Agile Methods:
      * Scrum
      * Extreme Programming

      Both emphasize bug free code!
      In Scrum the team is supposed to literally stop all work on the current sprint and fix the newfound bug together.

      A proper agile team has a completely empty list of defects, bugs, or smells.

      No idea what this "agile bashing" on /. is about.

      You most likely never did anything agile, never met an agile expert or at least have read the simplest summaries about the two agile methods mentioned ab

      • Both emphasize bug free code!
        In Scrum the team is supposed to literally stop all work on the current sprint and fix the newfound bug together.

        Setting aside time to apply Band-Aids to make up for lack of proper design as demanded by Fragile's iterative nature does not compensate for predictable piss poor outcomes.

        If you regularly find bugs in "finished sprints" - then you are not agile, but do it (everything?) wrong!!

        No idea what this "agile bashing" on /. is about.

        This is precisely why Agile is never taken seriously. All of its failings are excused by the phrase "you're doing it wrong". It's effectively an unfalsifiable religion.

        • This is precisely why Agile is never taken seriously. All of its failings are excused by the phrase "you're doing it wrong".

          If you think that, you are plain stupid.

          Can't be that hard to do one of the simplest software development methods correctly for once.
          And then lean back and say: wow, it is so easy.

          You are caught in your own stupid vicious circle. If you do not do it right, you are obviously doing it wrong. So simple. If you have no success: then you are doing it wrong. So: figure out what you are doing wrong, an

          • If you think that, you are plain stupid.

            It doesn't matter what I think or what others think about me. My opinions on this topic are widely shared and won't be changed by pointless ad hominem.

            Just look at the totality of your response; it is primarily name-calling: "plain stupid", "extremely stupid", "obviously utterly incompetent", "stupid vicious circle".

            Followed by "If you do not do it right, you are obviously doing it wrong. So simple. If you have no success: then you are doing it wrong. So: figure out what you are doing wrong, and fix it."

            The core

            • Again, for you as well:
              https://en.wikipedia.org/wiki/... [wikipedia.org]

              You claim something to be an ad hominem, when it clearly is not.

              That basically describes your "knowledge" about agile methods.

              The core problem with Agile it encourages piecemeal design and iteration with predictably poor results.
              Care to point out - pick whatever agile method you are "comfortable" with - which principle encourages that?

              Which agile method encourages "piecemeal design"?
              Which agile method has "iteration(s) with predictably poor results"?

              • You claim something to be an ad hominem, when it clearly is not.

                It clearly is. Your derisive commentary coupled with lack of any substantive merit based argument makes your remarks entirely ad hominem. A trend continued with baseless assertions of my "knowledge".

                Care to point out - pick whatever agile method you are "comfortable" with - which principle encourages that?

                The whole point of Agile is piecemeal predictable deliverables at a set frequency. In Agile parlance it's called an iteration / sprint / cycle.

                Strange that I have had no bug in production in ~20 years; how about you?

                Strange you can't answer my question "How does one verify whether or not the Agile methodology had culpability in the failure of a project?"

                You did after all claim Agil

                • So you did not read the ad hominem link.

                  Just like you have no clue about agile methods.

                  Point: I did not "ad hominem" you in any way. And as you have not grasped that by now (since you did not read the link), likewise you never learned anything about agile methods. Good luck.

                  "How does one verify whether or not the Agile methodology had culpability in the failure of a project?"
                  Simple: you can't. There is no way to prove or disprove whether the method was the problem. How would that even be possible?

                  But you can find what they

  • The entire premise of the quote is based on the cost of fixing a bug. The cost can be small or great and varies wildly. But the impact of the bug, and the cost and complexity of the fix, are pretty easy to determine. We can see from a simple logical deduction below that production bugs are always more expensive to fix.

    - The cost of the bug is the impact to the business due to the bug being in production ("Did we lose money due to production being down", "Did we lose money as a result of our brand be

    • The statement in the article isn't that fixing a bug in production is less expensive than in development. The statement is that there is nothing approaching rigorous evidence that fixing a bug in production is 100x more expensive than fixing that bug in development.

      I agree that production bugs are more expensive to fix than those caught in development. However, there is a strong business case for getting something into production so that you can A) scare out more bugs and design flaws with production use;

    • The article is not about the question of how expensive a bug is to fix - early, late, or in production.

      The article is about the fact that the most-cited study comparing such costs was never made!

      No idea if the study was ever made, but I learned in university that it came from IBM - who knows. Perhaps a worldwide conspiracy :P

      Obviously you are completely right with your reasoning, but that was not really the point of the summary.

  • by flyingfsck ( 986395 ) on Friday July 23, 2021 @02:15PM (#61612975)
    Getting a product to market faster and fixing it later can be better for cash flow. The pursuit of perfection can bankrupt a company. All the companies that achieved 6 Sigma have gone the way of the Dodo.
    • You should know better than to try to inject rudimentary economics into a discussion of software bugs.

    • Getting a product to market faster and fixing it later can be better for cash flow. The pursuit of perfection can bankrupt a company. All the companies that achieved 6 Sigma, have gone the way of the Dodo.

      True, there is a point at which testing and bug hunting starts to cost more than it saves, but very few projects actually hit that part of the spectrum.

      During a project, short-term incentives rule the day. Deadlines need to be hit and requirements need to be crossed off; buggy code and robust code look the same by those metrics.

      The whole reason for unit tests, test code coverage, QA departments, etc, is an attempt to turn a low number of bugs into a requirement because they'll make it through to production o

  • Most actual software bugs are easily fixed. The problem is when the software faithfully implements a defective design.

    I'm working on final testing of a project that is 6 years old. The requirements documents have been updated many times. At no time did the people writing the documents know that the source data contained vertical bar characters that the final destination could not deal with cleanly.

    The fix took 10 minutes to implement. Finding that a fix was necessary took 6 years. Getting authorization to make the fix took 2 months.
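
    A requirements-stage check like the one that was missing here can be a few lines. A hypothetical sketch (the forbidden character set and record format are assumptions for illustration, not the actual project's):

```python
# Characters the downstream system cannot handle cleanly (assumption).
FORBIDDEN = {"|"}

def find_bad_records(records):
    """Return (index, record) pairs containing forbidden characters."""
    return [
        (i, rec)
        for i, rec in enumerate(records)
        if any(ch in rec for ch in FORBIDDEN)
    ]

# Example source rows; the second contains a vertical bar.
rows = ["alice,ok", "bob|smith,bad", "carol,ok"]
print(find_bad_records(rows))  # → [(1, 'bob|smith,bad')]
```

    Running a scan like this against real source data while the requirements were being written would have surfaced the vertical-bar problem in minutes rather than years.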

    • Most actual software bugs are easily fixed. The problem is when the software faithfully implements a defective design.

      I'm working on final testing of a project that is 6 years old. The requirements documents have been updated many times. At no time did the people writing the documents know that the source data contained vertical bar characters that the final destination could not deal with cleanly.

      The fix took 10 minutes to implement. Finding that a fix was necessary took 6 years. Getting authorization to make the fix took 2 months.

      True, but once that project makes it out into the field every bug, not just the design ones, is going to get arduous.

      First there's the bug report(s), and the wasted support time talking with customers figuring out the problem.

      Then there's scheduling time for the fix internally.

      Then there's setting up the release, patch notes and all, including a specific QA plan to test that bug.

      And of course the 10 minutes to fix the bug.

      The 100x "study" has survived not because people though it reflected a well done and g

  • by HotNeedleOfInquiry ( 598897 ) on Friday July 23, 2021 @02:28PM (#61613059)
    Fixing a design flaw in engineering - $10
    Fixing a design flaw in manufacturing - $100
    Fixing a design flaw at the customer's site - $1000

    Numbers represent scaled values, but were generally close for my business.
    • by BenBoy ( 615230 )

      Fixing a design flaw in engineering - $10
      Fixing a design flaw in manufacturing - $100
      Fixing a design flaw at the customer's site - $1000

      Fixing your reputation with that customer from step 3: more expensive still.

      • Absolutely. Part of the 1000x is stuff like working overtime to come up with a fix and overnight shipping to mitigate the problem quickly. Those aspects are expensive, but necessary to preserve reputation.
  • Times have changed (Score:5, Insightful)

    by RobinH ( 124750 ) on Friday July 23, 2021 @02:30PM (#61613063) Homepage
    The arduousness of delivering software updates (such as mailing floppy disks around) would have made such statements more likely to be correct in the '80s than now. The internet and automatic updates make it less true.
    • The internet and automatic updates make it less true.
      There is plenty of software that is never - and definitely not automatically - updated, but instead replaced by a new version. Most often the old and new versions are run in parallel, so you have a fallback if the new one has an ugly quirk in a corner case that only one or two users regularly hit.

      This concerns mostly software that is only used "in house". E.g. a power company calculating an offer for a customer with 2000 metering points. The calcul

    • "The internet and automatic updates makes it less true."

      This reduces the cost of updating each enduser, but increases the number of endusers that receive the update before any feedback on operation can be returned. It's plausible that the total cost for a bad update is the same order of magnitude.

  • An ounce of prevention is worth a pound of cure.

    If PG&E were to replace overhead powerlines with buried cables in high wildfire risk zones it might cost $15B-$30B, for the 10,000 miles or so they'd have to do. This is almost certainly cheaper than the settlements they face for multiple wildfires, just the Camp Fire settlement alone was around $12B (cash+stock).

    Now is there a difference between software and powerlines? It boils down to what it costs for a business and how that business might justify those costs. Be

  • by neilo_1701D ( 2765337 ) on Friday July 23, 2021 @02:47PM (#61613117)

    Years ago, before remote connections to clients were a thing, I would sit at my desk and simulate the machine I was coding for. Then, when the simulation said the code was good enough I'd go down to the test rig and test on actual hardware. The sales guys would also test on the actual hardware. All was good, and the machine with the code would be shipped to a customer.

    When bugs were found, because a mechanical integration wasn't quite right or there was some hard-to-reproduce bug lurking in the system, it was my butt on a flight to the customer site to debug and fix.

    A day in the office might not cost 1/100th of a flight to a customer site, but the analogy works well.

  • by david.emery ( 127135 ) on Friday July 23, 2021 @02:48PM (#61613123)

    https://www.amazon.com/Softwar... [amazon.com]

    WTF? In the 1990s this was a must-read for -anyone serious about software or software-intensive systems-. Are we all now so collectively ignorant that no one (else) has read this book?

  • by nucrash ( 549705 ) on Friday July 23, 2021 @02:50PM (#61613127)

    Somewhere out there, a long time ago, some manager probably made up this statistic to push his team to address issues in the present rather than waiting until after production.

    Was what he said accurate? Probably not, but as others have stated, with something like the Boeing 737 MAX, 100x the cost could be an understatement.

    While the research in this field is apparently lacking, that doesn't mean that other research in product development and marketing can't apply to software as well. Think of a product deployed in the field, such as a car. A recall can easily cost millions. On top of that, if you have software you aren't locked into, shifting to a new platform might be lucrative. I know of several software packages that I have been asked to find alternatives to due to bugs. Enough of them, and that is costly in the long term.

  • On top of that, I'm sure production bugs were a hell of a lot more expensive when fixing them meant sending engineers out to every single mainframe client in the pre-internet era.
  • Comment removed based on user account deletion
  • by bustinbrains ( 6800166 ) on Friday July 23, 2021 @06:35PM (#61613995)

    The Win32 CreateProcess() API is how processes get created on Windows. Here is its hefty parameter list:

    BOOL CreateProcessW(
      LPCWSTR lpApplicationName,
      LPWSTR lpCommandLine,
      LPSECURITY_ATTRIBUTES lpProcessAttributes,
      LPSECURITY_ATTRIBUTES lpThreadAttributes,
      BOOL bInheritHandles,
      DWORD dwCreationFlags,
      LPVOID lpEnvironment,
      LPCWSTR lpCurrentDirectory,
      LPSTARTUPINFOW lpStartupInfo,
      LPPROCESS_INFORMATION lpProcessInformation
    );

    4 strings + 4 major structures + awkward bitfields + tons of gotchas. No other OS I know of has such an overly complex API just to start a process.

    Documentation:

    https://docs.microsoft.com/en-... [microsoft.com]

    Microsoft, simply put, can't change this function...ever. The cost to change it would be astronomical and would likely end both Windows as an OS and Microsoft as a company. And there are notable design bugs in the function that they can't fix either. For example, stdin, stdout, and stderr passed to the target process have to be blocking anonymous pipes, which makes deadlocking single-threaded console applications extremely simple to do. Any substantial change would require rewriting large sections of the kernel from the ground up, which has obvious performance and security implications. Substantially changing that function would break every software application ever written for Windows. The world is therefore, unfortunately, stuck with a bug-ridden, parameter-laden function for starting processes on a major OS. Not that Microsoft could ever have foreseen Windows becoming as popular and widely used as it has.
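
    The blocking-pipe deadlock hazard is not Windows-specific and is easy to reproduce in any language. A minimal sketch using Python's subprocess module for portability (the echo child program and payload size are assumptions for illustration), contrasting the safe pattern with the deadlock-prone one:

```python
import subprocess
import sys

# Child program: echoes all of stdin back to stdout.
child_code = "import sys; sys.stdout.write(sys.stdin.read())"

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)

# Large enough to overflow a typical 64 KiB OS pipe buffer.
payload = "x" * 1_000_000

# Safe pattern: communicate() pumps stdin and stdout concurrently,
# so neither process stalls waiting on a full pipe buffer.
out, _ = proc.communicate(payload.encode())

# Deadlock-prone pattern a single-threaded parent might naively use:
#   proc.stdin.write(payload.encode())  # blocks once the child's stdout
#   data = proc.stdout.read()           # pipe fills and the child stops
#                                       # reading stdin
```

    The commented-out pattern is exactly the trap the blocking anonymous pipes set for single-threaded console applications: each process ends up blocked writing to a pipe the other is not reading.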

  • There's no question in my mind, from years of building enterprise infrastructure software/hardware, that there are some bugs where the cost factor is 10,000x even between avoiding a bug during architecture design (i.e., well after requirements) and resolving it after it got into production across hundreds of installations.

    On the other hand there are a lot of "bugs" that never get fixed, or even discovered, and are rarely encountered and almost cosmetic in nature where the cost factor may be even less than 1

  • You fix ALL the bugs in production - I can see why he'd want to trash the report :-)

    (FWIW I joined IBM in 1975. At the time, IBM collected a heck of a lot of statistics and had expert statisticians analyse the results; they probably still do.)

  • There's also no study to be found that paying off your mortgage at above minimum repayment will result in paying less interest.
  • We all know that 72% of statistics are pulled straight out of the ass of the person citing them. :D I'm grateful someone is pushing for attribution, or the proving of lack of credibility.
  • Experience is merely the name men gave to their mistakes --Oscar Wilde (b. 1854)
