Security Technology

Rethinking Security Advisory Severities

An anonymous reader writes: The recent OpenSSL vulnerability got the internet all hyped up over a security issue that, in the end, turned out to have a very limited impact. This is good news, of course; we don't need another Heartbleed. But it raises the question: should security advisories be clearer about the impact and possible ramifications of such a vulnerability, to avoid unnecessary panic? Developer Mattias Geniar says, "The Heartbleed vulnerability got the same severity as the one from last night. Heartbleed was a disaster; CVE-2015-1793 will probably go by unnoticed. ... Why? Because CVE-2015-1793, no matter how dangerous it was in theory, concerned code that only a very small portion of OpenSSL users were using. But pretty much every major technology site jumped on the OpenSSL advisory. ... The OpenSSL team is in a particularly tricky situation, though. On the one hand, their advisories are meant to warn people without giving away the real vulnerability. It's a warning sign, so everyone can keep resources at hand for quick patching, should it be needed. At the same time, they need to warn their users of the actual severity."
  • by Anonymous Coward

    I don't disagree, but I think the Heartbleed bug caused a lot of trauma, so now people jump whenever there is a vulnerability in OpenSSL.

  • It's going to be difficult for a library developer to know precisely how many people are using a particular feature, even if they have a general sense of feature or version popularity. Moreover, to the relatively few people affected by this, this absolutely WAS a critical bug. Unless you are clear and candid about the seriousness of the bug, those people that need to patch may not hear about it. It's probably best for them not to make wild guesses about who they think are affected. Stick to the facts, a

    • by bondsbw ( 888959 ) on Friday July 10, 2015 @07:27PM (#50085933)

      But isn't the point to try to match the level of panic with the level of practical danger?

      • Ideally, yes, but... do they actually know the level of practical danger? That depends on knowing how many organizations are using a specific version of the library, or a feature within the library. I think they probably have a vague idea about these things, but I'm not sure it's a good idea to be trying to scale the level of urgency based on their own interpretation of what may be very incomplete data.

  • By trying to not say too much, the advisories are inherently vague and therefore can be interpreted as insignificant or a dire emergency depending on the day.

    That's not useful to anyone.

    Because the NSA and GCHQ have effectively eliminated all network security, thanks to their backdoors in things like Cisco devices, it should be automatically assumed that all the bad guys capable of exploiting the issue already have all the information they need and the bad guys not capable of exploiting the issue aren't an

    • The advance notification was extremely useful to me. It allowed us to catalog our use of OpenSSL and to start planning our maintenance. This significantly improved our responsiveness on the 9th.

      There needs to be (and is) a distinction between advance notification and full disclosure. The advisory of July 6th was advance notification. The July 9th advisory has the details you desire. It came out at the same time as the fix.

    • This is wrong. We're not trying to protect against National-Scale Adversaries, who probably have all the traffic they want anyway. Immediate full disclosure means that any script kiddie or criminal gets access immediately. That would be bad.

      • by laffer1 ( 701823 )

        That's not true. Script kiddies have to wait for someone to write a tool for them to use to actually exploit it. It takes a few days for these things to get out there en masse.

        When an upstream has a security advisory, I have to run around in circles to get the patch out to my users and then they have to run around patching everything. That's just how it works. When you don't get enough information to make a decision, it makes it hard to know if you should risk patching. For some folks, they're in system free

  • watch out. something could happen. any day now.
  • Oh wait. It's called the CVSS. Only your system admins and security folks will know how vulnerabilities apply to your organization. Temporal and environmental factors can only be assessed by people in the know. Windows shops obviously don't care about Linux vulnerabilities and vice versa. The base ratings are strictly focused on the vulnerability. Other factors you need to determine yourself... And there's already a system for that.
    • by mx+b ( 2078162 )

      Temporal and environmental factors can only be assessed by people in the know. Windows shops obviously don't care about Linux vulnerabilities and vice versa. The base ratings are strictly focused on the vulnerability. Other factors you need to determine yourself... And there's already a system for that.

      Yeah, that's kind of the problem: most companies don't use temporal or especially environmental factors. If you base everything on the base score only, you're not getting a really accurate picture of the severity of the vulnerability.

      The other problem is that CVEs tend to be treated in the researcher community as gold. You list CVEs on your resume, for example. CVEs are not meant to indicate severe vulnerabilities, or even all types of vulnerabilities -- many things that are important don't get CVEs, while
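The base score being discussed here is pure arithmetic once you have the vector: CVSS v2 defines a fixed weight for each metric value. A minimal sketch of that formula (constants transcribed from the v2 specification; `base_score` is just an illustrative helper, not a real library call):

```python
# CVSS v2 base-score weights, as given in the v2 specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}    # Conf./Integ./Avail. impact

def base_score(vector):
    """Score a v2 vector like 'AV:N/AC:L/Au:N/C:P/I:N/A:N'."""
    m = dict(part.split(":") for part in vector.split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Heartbleed (CVE-2014-0160) was scored AV:N/AC:L/Au:N/C:P/I:N/A:N:
print(base_score("AV:N/AC:L/Au:N/C:P/I:N/A:N"))  # 5.0
```

Plugging in Heartbleed's published vector reproduces its 5.0 base score, which is exactly the point above: nothing in that arithmetic says anything about how widely the vulnerable code is actually deployed.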

  • From what I read of the vulnerability, it was severe enough to merit the severity level given to it. If you were affected by it. That's the catch. This is the canonical "severe but unlikely" scenario, somewhat like one where cars are known to randomly explode, killing everyone within a 10-mile radius, but only the Ford Focus will do this, and only if it's got the metallic purple paint job that was a custom order and there were only a couple dozen sold. You can't rate it low severity because losing that big a c

    • "the notification is of no use to me." Then ignore/killfile the notifications. But many others see benefit from advance notice and starting to crank up their patch machinery. Even if they end up not patching, they seem to find it worthwhile as "disaster prep."

  • CVSSv2 is a perfectly usable metric by which to classify risk of security incidents.

    From what I have seen, Mitre and NIST often show inaccurate CVSS scores on the CVE pages. In order for the metric to be truly useful, every organization has to localize measurement to their environment and each vendor needs to measure impact against their use or non-use of the underlying code. At the end of the day, it's all about risk measurement, but with those steps you end up with a reasonably accurate assessment.

    I speak

    • From what I have seen, Mitre and NIST often show inaccurate CVSS scores on the CVE pages.

      Have to stop you there, sorry for perhaps being a bit pedantic, but the NIST score is more or less the "official" score of a vulnerability, given how closely they work with organizations like MITRE. The CVSS scoring rules have some nuance to them, and in some scenarios the official rules for scoring a vector are not what you'd expect. NIST tries to follow the official scoring rules as strictly as possible. You may not agree with the rules (and many people don't, I'm not trying to knock you), but technically t

      • by radicimo ( 33693 )

        Fair enough on your pedantry, so what I should have said is perhaps "often enough". Case in point:

        https://web.nvd.nist.gov/view/... [nist.gov]

        There is no logical reason that should be a 10, unless I am missing something. I presume that Hanno is the guy who found it ... "The bug does not crash less, it can only be made visible by running less with valgrind or compiling it with Address Sanitizer. The security impact is likely minor as it is only an invalid read access."

        https://blog.fuzzing-project.o... [fuzzing-project.org]

        That was just the
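The "localize measurement to their environment" step described above maps onto CVSS v2's temporal metrics, which only ever discount the base score as exploit maturity, fix availability, and report confidence become known. A sketch of that adjustment (multipliers from the v2 specification; `temporal_score` is an illustrative helper):

```python
# CVSS v2 temporal multipliers ("ND" = not defined, leaves the score unchanged).
EXPLOITABILITY = {"U": 0.85, "POC": 0.9, "F": 0.95, "H": 1.0, "ND": 1.0}
REMEDIATION = {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.0, "ND": 1.0}
CONFIDENCE = {"UC": 0.90, "UR": 0.95, "C": 1.0, "ND": 1.0}

def temporal_score(base, e="ND", rl="ND", rc="ND"):
    """Discount a base score by exploit maturity, fix availability, confidence."""
    return round(base * EXPLOITABILITY[e] * REMEDIATION[rl] * CONFIDENCE[rc], 1)

# A 10.0 base score drops once an official fix exists and no exploit is known:
print(temporal_score(10.0, e="U", rl="OF", rc="C"))
```

Since every multiplier is at most 1.0, the temporal score can only fall from the base, which fits the complaint that the headline base number is the worst-case view.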

  • You mean, can we somehow keep our users from needing to RTFM? There is an easy answer for stupidity or laziness: you make bad decisions. The severity is the severity; it is up to the user to know if they are running the code or not. If they can't be bothered to know that, then they should not be in charge of security, since they'll be making shitty decisions anyway.

    I like Einstein's quote: Make everything as simple as possible, but no simpler. We usually miss that last part.
  • This latest episode was announced as if it had serious and broad impact. It did not, but that didn't help those of us who, on less than two days' notice, moved things around to prepare for another round of mitigation of a "severe" security issue. Yes, we're all glad it's not as bad as it might have been, but the point is that somebody was aware of that when the announcements went out. They should have been more forthcoming.
  • The recent OpenSSL vulnerability got the internet all hyped up for a security issue that, in the end, turned out to have a very limited impact.

    Oh right, just like the hype around Y2K turned out to be nothing right? The point is that some big problems would have resulted if the problem hadn't been hyped and fixed beforehand or in the early hours of the problem being exposed. Whenever I hear someone say "That Y2K thing turned out to be nothing" I just shake my head. Why is this concept of prevention so hard for the general public to understand?

  • I think the level of severity is something that should be determined by the individual or organization.

    Disclosing severity gives some idea of what we're up against. But the actual damage is dependent on environment.
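That environment-dependence is what CVSS v2's environmental metric group is for. A simplified sketch, assuming the security-requirement modifiers (CR/IR/AR) are left "not defined", of how collateral damage potential and target distribution rescale a temporal score:

```python
# CVSS v2 environmental multipliers (security requirements CR/IR/AR assumed ND).
COLLATERAL = {"N": 0.0, "L": 0.1, "LM": 0.3, "MH": 0.4, "H": 0.5, "ND": 0.0}
TARGETS = {"N": 0.0, "L": 0.25, "M": 0.75, "H": 1.0, "ND": 1.0}

def environmental_score(temporal, cdp="ND", td="ND"):
    """Damage potential pushes the score up; low deployment pulls it down."""
    return round((temporal + (10 - temporal) * COLLATERAL[cdp]) * TARGETS[td], 1)

# The same temporal score, seen by two different organizations:
print(environmental_score(6.8, cdp="H", td="H"))  # widely deployed, high damage
print(environmental_score(6.8, cdp="N", td="L"))  # barely deployed
```

The same advisory lands very differently for a shop where the vulnerable feature is everywhere versus one where it is barely deployed, which is the point of leaving this determination to the individual organization.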
