
Stop Fixing All Security Vulnerabilities, Say B-Sides Security Presenters

PMcGovern writes "At BSidesLV in Las Vegas, Ed Bellis and Data Scientist Michael Roytman gave a talk explaining how security vulnerability statistics should be done: 'Don't fix all security issues. Fix the security issues that matter, based on statistical relevance.' They looked at 23,000,000 live vulnerabilities across 1,000,000 real assets, which belonged to 9,500 clients, to explain their thesis."
  • How about you fix what you can?

    • Re:erm, no? (Score:5, Informative)

      by MacTO ( 1161105 ) on Thursday August 08, 2013 @01:03PM (#44511351)

      The article is talking about fixing what you can. It simply outlines how to prioritize the issues in order to figure out what you can fix with limited resources.

      • Re: erm, no? (Score:2, Insightful)

        by Anonymous Coward

        They say stop when they mean prioritize. Theoretically, there should be some computer scientists who know how to use English.

      • Re:erm, no? (Score:5, Interesting)

        by Anonymous Coward on Thursday August 08, 2013 @01:15PM (#44511485)

        The article is talking about fixing what you can. It simply outlines how to prioritize the issues in order to figure out what you can fix with limited resources.

        That's a pretty damn weak model. It doesn't take a genius to understand that if you use statistics to prioritize security issues to address (or more to your point, cull out ones you won't address due to limited resources), then it's only a matter of time before attackers start figuring out ways to use those statistical models against you, ultimately learning about the "can't-get-to-it" threat list and focusing attack vectors there.

        Not to mention management being "sold" on this model and cutting 20% of your IT support staff next year due to the "increased efficiencies of patch management". Have fun doing more work.

        • by bberens ( 965711 )
          This model is already largely in place. Companies will focus on patching the vulnerabilities that are already being exploited in the wild. Then after that they will focus on some amalgamation of lowest hanging fruit and most likely to be exploited.
        • it's only a matter of time before attackers start figuring out ways to use those statistical models

          They already do. They use attacks that hit the largest number of targets. Using uncommon vulnerabilities would be wasteful when you could attack more common ones.

          • by ediron2 ( 246908 )

            No, GP is right. It's a different scenario, but it's valid:

            If 1 in a thousand users installs X, find a way to target X across a corporation. One hit gets you in. Beachhead there, figure out where to go next or what you can collect.

            **THAT** is how to discreetly pwn a corporate net.

            Some attackers go big, because their payout is # of machines taken. Some attackers are after a narrow niche: what'll company X be announcing, how their stock is likely to perform, data of value to competitors, etc. Their payou

        • Re:erm, no? (Score:4, Interesting)

          by martas ( 1439879 ) on Thursday August 08, 2013 @03:06PM (#44512631)
          That's why you need a game-theoretic, adversarial model instead of a simple statistical model based on past observations. Regret minimization, multi-arm bandits, etc.
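
        For what that suggestion might look like in the simplest case, here is a minimal epsilon-greedy bandit sketch. It is purely illustrative: the vulnerability classes, the reward signal and the exploration rate below are all invented, not taken from the talk.

import random

# Purely illustrative: the "arms" are vulnerability classes a remediation
# slot can be spent on, and the reward is whatever defensive signal you
# trust (e.g. blocked attack attempts). Names and numbers are invented.
ARMS = ["sqli", "xss", "deserialization", "misconfig"]

counts = {a: 0 for a in ARMS}    # times each class was chosen
values = {a: 0.0 for a in ARMS}  # running mean reward per class
EPSILON = 0.1                    # fraction of effort spent exploring

def choose_arm():
    """Mostly exploit the best-looking class, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ARMS)
    return max(ARMS, key=lambda a: values[a])

def update(arm, reward):
    """Incremental running-mean update after observing one reward."""
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

# One simulated year of weekly remediation decisions.
for week in range(52):
    arm = choose_arm()
    reward = random.random()  # stand-in for a real measurement
    update(arm, reward)

print(sorted(values.items(), key=lambda kv: -kv[1]))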
          • You should read Mark Twain's The McWilliamses and The Burglar Alarm. Your suggestion is peddling an overly complex burglar alarm that will take more time and effort and resources than just fixing the bugs as they come in.
      • Re: (Score:3, Insightful)

I believe the word is 'triage'.

      • by amorsen ( 7485 )

        Yes the article is wrong. It assumes that software vendors would leave security vulnerabilities with entries in Metasploit unfixed for days or even weeks. Surely no vendor would be that irresponsible. Right? Right???

      • Sorting bugs into "security vulnerabilities" and "other" is prioritizing.

        Security people talk as if the start point is security vulnerabilities. It's not. From a functional view, there's not much difference between a bug that breaks crucial functionality, and a DoS attack.

        It's amazing how many security vulnerabilities rely on age-old bugs such as buffer overruns and dirty memory that are easily fixed if we're willing to live with slightly slower computers. We can program the OS to blank memory whenev

    • Re:erm, no? (Score:5, Insightful)

      by ackthpt ( 218170 ) on Thursday August 08, 2013 @01:28PM (#44511607) Homepage Journal

      How about you fix what you can?

      That's the fly-swatter approach - you hit the flies you can and ignore those you can't get to.

      'Don't fix all security issues. Fix the security issues that matter, based on statistical relevance.'

      That line reminds me of the old TQM that was run past us decades ago (and then promptly forgotten by 90% of the Franklin Planner-toting crowd): fix what really needs fixing first. I'm sure this bit of wisdom didn't require TQM to come along (you can probably find it in Hamlet if you know where to look); you fix your most grievous wound first and worry about your bruises later. Still, we (in my department) felt rather put-upon when these TQM zombies came around and told us what a sea-change it would be for our practices and productivity when we embraced what we already knew.

    • by sjames ( 1099 )

      because some things just aren't worth fixing, even if you can.

  • by intermodal ( 534361 ) on Thursday August 08, 2013 @12:57PM (#44511297) Homepage Journal

    Prioritize on the important vulnerabilities. But that should in no way discourage people from fixing the less important ones.

    Don't let perfect become the enemy of good.

    • by Joce640k ( 829181 ) on Thursday August 08, 2013 @01:05PM (#44511385) Homepage

      Everybody knows hackers will just shrug and give up after you fix 90% of your vulnerabilities.

      • by SirGarlon ( 845873 ) on Thursday August 08, 2013 @01:43PM (#44511743)

        If the attacker's objective is something fungible like credit-card data, then he may, indeed, shrug and move on to an easier target after his first several attacks fail. Why would he waste time on a locked door when there is probably an unlocked house next door? (Figuratively speaking, of course.)

        If the attacker's motivation is specifically against *you*, say, politically motivated attacks like those Anonymous mounts, or industrial espionage, then the bar for the defender is a lot higher, because the attacker can't advance his goals by attacking someone else instead.

        So how much effort you should expend on defense depends on your threat model.

        • move on to an easier target after his first several attacks fail

          Of course, it's simply a matter of a lucky attacker choosing one of the "low priority fix" vulnerabilities as an attack vector and figuring out how to use it. Suddenly, that unfixed vulnerability made that difficult target into an easy one.

          In terms of your analogy, the lock may be exceedingly difficult to pick until the thief realizes they can crawl into the open window on the second story. They just needed a ladder.

          • by SlashV ( 1069110 )
            Why get a ladder when you can just walk in next door without one?

            The argument is moot.

            It's like it is with bike locks. Getting a better one does not guarantee that your bike won't get stolen, but it does help! And 100% security is always unattainable.
        • Feynman went on to say something disparaging about religion:

          It doesn't seem to me that this fantastically marvelous universe, this tremendous range of time and space and different kinds of animals, and all the different planets, and all these atoms with all their motions, and so on, all this complicated thing can merely be a stage so that God can watch human beings struggle for good and evil — which is the view that religion has. The stage is too big for the drama.

          The problem here is, Feynman was thin

    • There's also priority based on ease of fix or mitigation. If you can mitigate a problem and then fix the core of it later, that should be done. This is nothing but basic risk management that any security or system administration professional already does.

    • by fermion ( 181285 )
      Before profilers became common, developers would just waste time optimizing functions that were only run once instead of leaving seldom used functions alone and spending most of their time optimizing the functions that actually took up 80% of the run time. Cost benefit.
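
    The measure-before-optimizing point can be made concrete with Python's standard cProfile module; the functions below are invented for the example.

import cProfile

def rarely_called():
    # Called once; optimizing this first would be wasted effort.
    return sum(range(10))

def hot_path():
    # Roughly where 80% of the runtime actually goes in this toy example.
    return sum(i * i for i in range(200_000))

def main():
    rarely_called()
    for _ in range(50):
        hot_path()

# Measure first, then spend optimization effort on whatever dominates.
cProfile.run("main()", sort="cumulative")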
    • by oGMo ( 379 )

      Don't let perfect become the enemy of good.

      How is ignoring the lesser issues in favor of the glaring issues "perfect" over "good"? This is not about twiddling with the colors of the buttons and the size of fonts. Those aren't the big issues, unless you're a bad manager. This is about fixing the critical vulnerabilities and terrible bugs and ignoring the trivial, perfectionist stuff.

  • How about (Score:5, Insightful)

    by Monoman ( 8745 ) on Thursday August 08, 2013 @01:01PM (#44511337) Homepage

    Important items get fixed first. Easy items usually come next. Everything else gets fixed after that.

    • That's exactly what they're saying, and providing a method for rating importance.

      I know this is /., but did you at least read the summary?

      • Re: (Score:3, Funny)

        by Anonymous Coward

        That's exactly what

        Sorry, but maybe you should know by now, this is /., so that's all I had time to read before my self-centered attention span waned and drifted back to myself. Now, since I'm more important than you, I'm going to lecture you on why my opinion is better than yours, based on the amount of your post I was able to read before I got bored looking at something that isn't me. First...

        Oh, wait, I found something more important. Someone's being WRONG about my favoritest cartoon in the whole wide world evar, so I need t

      • by Monoman ( 8745 )

        Yes I read the article. The hard part is defining what is important. The authors felt the likelihood of something happening should be given more weight when determining importance. Not everyone is going to agree if you have a group of people deciding which things should get fixed first.

    • by davidwr ( 791652 )

      How about: everything else being equal, important items get fixed first. Easy items usually come next. Everything else gets fixed after that.

      If I have an important item that will take 2 weeks and a team of 2 developers to fix, or 5 items that are only half as important but which take 1 developer 1 day to fix, well, you do the math.

      If I have a defect that's affecting 100M customers of an end-of-life, low-revenue product only used by relatively-unimportant customers but it's hurting them in a pretty bad way an

  • by Samantha Wright ( 1324923 ) on Thursday August 08, 2013 @01:04PM (#44511367) Homepage Journal

    Their real point is, if you have limited resources, prioritize the vulnerabilities that are (a) currently being exploited and (b) most likely to be exploited given the habits of your favourite boogeyman. Sometimes that means not starting on vulnerabilities as soon as they come in, because you're saving your resources for the chance there's a bigger problem later. Their thesis is about saving your money and time for the most important stuff, and assumes that threats only come from lazy blackhats who prefer certain classes/types of vulnerabilities. Buried in this is the assumption that a given piece of software has an infinite number of vulnerabilities that are discovered at random.

    Statistically, what they're saying is sound if organized crime is your biggest enemy, assuming organized crime's habits don't change any time soon. It's obviously not good enough if you're concerned about, say, a malicious government organization with an absurd budget.
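
    As a rough sketch of the kind of likelihood-weighted ordering being described here (this is not the presenters' actual model; every field and weight below is invented for the example):

# Illustrative scoring only; the presenters' actual model is not reproduced
# here. Fields and weights are invented for the example.
vulns = [
    {"id": "CVE-A", "exploited_in_wild": True,  "exploit_probability": 0.60, "cvss": 7.5},
    {"id": "CVE-B", "exploited_in_wild": False, "exploit_probability": 0.90, "cvss": 9.8},
    {"id": "CVE-C", "exploited_in_wild": False, "exploit_probability": 0.05, "cvss": 10.0},
]

def priority(v):
    # Weight "already being exploited" heavily, then likelihood, then raw severity.
    return (3.0 * v["exploited_in_wild"]
            + 2.0 * v["exploit_probability"]
            + 0.1 * v["cvss"])

for v in sorted(vulns, key=priority, reverse=True):
    print(v["id"], round(priority(v), 2))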

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Their real point is, if you have limited resources, prioritize the vulnerabilities that are (a) currently being exploited and (b) most likely to be exploited given the habits of your favourite boogeyman.

      Sounds good! So, everyone who has UNLIMITED resources can ignore this article. It only applies to the VERY SMALL NUMBER of people who have limited resources.

    • Yeah, the road to hell is paved with statistical good intentions.

    • by msobkow ( 48369 )

      As most assaults are performed by mindless script kiddies running the hack tools of the week, it's sound advice. There are very few actual black hats creating those tools, but thousands upon thousands of ignorant kids who think themselves l33t because they can download something and click "run".

    • by khasim ( 1285 )

      Buried in this is the assumption that a given piece of software has an infinite number of vulnerabilities that are discovered at random.

      That's the part that I found to be the weirdest bit in there. And then they put a sensationalistic title on it.

      Instead, I'd prioritize work based on my own categorization.

      1. A remote attack that gains root access that does NOT require human intervention or other app running.

      2. A remote attack that gains non-root access that does NOT require human intervention or other app

      • I'd adjust that. 9 and 10 in particular can be used in a DoS attack, which can be just as damaging as an attack that gains access to the data behind an application. I'd tend to prioritize it 1, 2, 5, 6, 9, 10, 3, 4, 7, 8, 11, 12. And the first 6 would all be high-priority items that need to be fixed soonest, the prioritization would be strictly relative (if I don't have enough people I need to decide which to put people on first, but all of them take priority over anything else).

        • by khasim ( 1285 )

          I'd adjust that.

          No problem. Everyone will have their own idea of which are most important.

          My rationale for that order is the possibility that other apps with similar exploit levels (or even lower, in some cases) can be "chained" together to get root access (whether local or remote).

          Looking at the order you placed them in, I'd guess that you prioritized exploits for remote access over root access.

  • by Joining Yet Again ( 2992179 ) on Thursday August 08, 2013 @01:12PM (#44511453)

    How does software like djbdns seem to be nearly free of discovered vulnerabilities? Is it a popularity/type-of-user thing? Or has the code genuinely been written to be almost impenetrable?

    tl;dr Why do so many things need fixing in popular pieces of software which could easily command the most competent developers?

    • Re:djbdns (Score:5, Informative)

      by Todd Knarr ( 15451 ) on Thursday August 08, 2013 @01:22PM (#44511555) Homepage

      Attitude. Some software is written by anal-retentive paranoid cynical bastards who make sure every bit of code is iron-clad and air-tight, who take any flaw as a personal insult to be exterminated. Flaw? Forget flaw, even a slight deviation from what they've determined to be correct operation is hunted down mercilessly no matter how long it takes. Any cruft in the design, anything that's not clean and perfect, is lopped off and re-done until everything fits together correctly. If that results in a delay, so be it. The only work that's discarded is work that doesn't contribute to the correctness of the result.

      Other code is produced by people who're fine with leaving cruft and ugly bits in as long as they don't detect any errors coming from it. Rework and clean-up is fine, as long as it doesn't impact the delivery schedule.

      3 guesses which kind of developer produces which kind of software.

      • One gets little or no money for their code; the others get paid to ignore errors and add features. That about sums it up?
    • by Keruo ( 771880 )
      DNS is something which should be easy to document by providing a bunch of examples.
      There aren't that many ways to configure it, if you consider the variations you can do.
      For some reason djbdns does not do this; it gives vague hints and makes you read 50 man pages, followed by 100 blog posts and 200 websites with obsolete or only slightly relevant info on what you're trying to accomplish, and if the position of the moon is decent, your tinkering will eventually work.
      When you reach the "oh it works" phase, you follow "if
      • The code is simple in a way which reminds me of some early cisco code I've seen for stuff like switches and routers.

        Really? Where did you see this? Is there some available online? I want to see it......

    • by rlh100 ( 695725 )

      Because the crackers have not turned their full attention to it. Bind used to be able to sneer at sendmail, but now we are seeing problems with bind. If the target is tempting enough, or is the ultimate pinnacle of a secure popular server application, the crackers will devise completely new strategies to compromise the program. Not a new example of a known flaw, but something completely new. Sendmail saw this over and over again. I don't think postfix has gotten the same honor since most servers do not li

  • The author is assuming that the opposition is dumb. It used to be, back when it was a kid in their parents' basement. Now the serious opposition is the Russian Business Network and the People's Liberation Army.

    Detected breaches tend to come from the dumb opposition. Those are the ones that put fake login sites on Wordpress blogs.

    • by rlh100 ( 695725 )

      Truth is that most of the opposition is dumb. Fix the bugs that are easiest to exploit or are most likely to be exploited, then work on the rest. No, it does not fix all the vulnerabilities, but it does tend to minimize your risk footprint.

      It would be silly to be slogging through the vulnerability list working on hard-to-fix, obscure problems while simple-to-fix, easy-to-exploit ones just sat in the queue.

      And I don't think the talk said don't fix vulnerabilities. I think it brought up the point that in real enviro

    • The author is assuming that the opposition is dumb. It used to be, back when it was a kid in their parents' basement.

      Actually, the "kid in the basement" usually wasn't dumb. Typically they'd be far above the average for their school.

      Callow, yes.

      Further, they had an advantage over the professionals: They could spend a LOT of time, in long, unbroken, sessions, pursuing a problem of their choosing down to the nitty-gritty-bits, until it fell before their persistence. Not having to earn a living, meet a sche

  • I have no intrinsic problems with what they say, but a lot of this prioritizing is reactionary guesswork based on past experience.

    Where that can cause problems is that they don't look at it from a logical perspective; rather, they try to package it as a simple calculation, with statistics and all. The fact is that these stats could be off by orders of magnitude: they are based on real data, but you have no idea how that data really applies to you. It may just as well prepare you for the previous war, instead

  • by nuckfuts ( 690967 ) on Thursday August 08, 2013 @01:32PM (#44511633)
    OpenBSD takes the approach of proactive code audits and of fixing all bugs found [openbsd.org], even those that have no apparent potential for exploitation. This has really paid off over the years. Often when vulnerabilities came to light, they were found to not affect OpenBSD because the underlying bug had already been fixed.
    • Fixing all bugs is great if you have the resources for it. But how many organizations have that kind of resources? I suspect even OpenBSD does not.

      And are you saying that OpenBSD performs no prioritization whatsoever on their bugfix efforts -- that everything is done in a first-in-first-out order?

      • My impression (based on a talk by Theo years ago) was that in their initial audit they went through the entire source tree and fixed every bug as they found them. I don't know about subsequent audits or current practice.
  • Disclaimer: I have not read the paper.
    Once upon a time I did software documentation for a fast-moving product. I was never given updates and worked basically in the dark. One brilliant manager asked me to "document all the bug fixes for this product." There were over 2,000. At 15 minutes each, that is roughly 500 hours, or 12.5 uninterrupted forty-hour weeks, rather more than the one week I was given. At half-time efficiency (a better estimate), it would have been about half a year. One week is not 25.
    I requ
    • You must also assess the likelihood of someone finding/using that hole.

      You must also take into account that fixing the hole means the "someone" will just MOVE ON TO THE NEXT HOLE, raising its probability of being found.

      Unless you fix enough that a substantial fraction of the attackers give up and move on to different targets or a different line of work, you've engaged in a futile effort.

      This "fix the big, findable problems" approach is an obfuscated form of a familiar system design pathology: Pushing the p

  • Corporations focus the most attention on the greatest number of the most trivial problems, because more is always better when it comes to management metrics. The actual problems are turned into projects that linger for lack of funding and political turf. See, no one wants a problem they have to spend money on and then explain; better to foist it off on someone else. Whereas getting funding to paint all the switchplates green is a slam dunk because it's easy to do, easy to measure, gains turf, and has no do

  • A cybernetician knows that everything flows. After Sensing, Decision leads to Action, and the whole process begins again with Sensing. You will sense the environment change and thus change your actions; however, sometimes the correct decision comes a bit too late...

    If we were to Sense the statistical prevalence of exploits, then decide which bugs to fix only based on it, then act to fix those bugs: What do you think we would sense afterwards? The obvious conclusion wo

  • "First you go through all the bugs we know--then you work on the bugs we don't know."

  • Has anyone modelled the potential impact of XP end of support over time?

  • This is common knowledge already. A vulnerability is not the same as a risk. A risk is the chance of the vulnerability being exploited, multiplied by the damage you'll sustain if it is exploited: chance * damage = risk.

    You could very well have a vulnerability that is real, and that will be exploited, but that will not lead to any damage. Since there is no economic case for fixing it, you leave it be. You could have a vulnerability that is very unlikely to be exploited. However, if it should be exploited, your business will inst
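
    A minimal sketch of that chance-times-damage arithmetic, with made-up figures for the two cases described above:

# chance * damage = risk; accept or fix based on a simple cost comparison.
# All figures below are hypothetical.
def risk(chance_per_year, damage_dollars):
    return chance_per_year * damage_dollars

findings = [
    # (name, chance per year, damage if exploited, cost to fix)
    ("exploitable but harmless", 0.90, 100,       5_000),
    ("unlikely but fatal",       0.01, 5_000_000, 20_000),
]

for name, chance, damage, fix_cost in findings:
    expected_loss = risk(chance, damage)
    decision = "fix" if expected_loss > fix_cost else "accept"
    print(f"{name}: expected loss ${expected_loss:,.0f}/yr -> {decision}")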
