Thinking of Security Vulnerabilities As Defects

SecureThroughObscure writes "ZDNet Zero-Day blogger Nate McFeters has asked the question, 'Should vulnerabilities be treated as defects?' McFeters claims that if vulnerabilities were treated as product defects, companies would have an effective way of forcing developers and business units to focus on security issues. McFeters suggests providing bonuses for good developers, and taking away from the bonuses of those who can't keep up. It's an interesting approach that, if used, might force companies to take a stronger stance on security-related issues."
  • by ardle ( 523599 ) on Saturday June 28, 2008 @04:53PM (#23984733)
    If they weren't, they would be in the program design.
  • Intentional misuse (Score:3, Insightful)

    by asc99c ( 938635 ) on Saturday June 28, 2008 @05:13PM (#23984955)

    If a user was intentionally misusing software I had written, I wouldn't consider it a bug. Although a vulnerability is generally misuse by someone other than the owner of that piece of software, I'd still have to conclude it's not a bug. If I'd built a car, I would be more than a little annoyed if I got the blame when someone broke into it and ran someone else over with it.

    I think it needs to be left to the market to decide what is acceptably secure software. Many Ford cars from the early 90s had locks that were far too easy to break - just stick a screwdriver in and it opens - I even did it myself once when I locked the keys in the car. They got a bad reputation, and Ford improved the security to a level the market was happier with.

    The market in software doesn't work quite as well as for cars unfortunately, but that's another issue.

  • by jonaskoelker ( 922170 ) <jonaskoelkerNO@SPAMyahoo.com> on Saturday June 28, 2008 @05:25PM (#23985055)

    Everybody, please laugh at the subject of my post which has no relation to its contents ;)

    What I meant to say in the subject is that, from a point of view external to the organization developing insecure software, you are, according to the wisdom of the /. masses, supposed to vote with your wallet.

    Yet, how's that expected to take place? To apply some of Schneier's observations, you have multiple parties, each with their own security agenda; the sysadmin might want the most secure option because anything less will be a nightmare to maintain, whereas the PHB will want the cheapest because that'll make him look good in the eyes of those who set his salary.

    Guess who makes the purchasing decision. Guess which security agenda will be reflected in that decision. Sometimes, the insecure option will be the cheapest even when the cost of bad security has been factored in.

    Also, consider the fact that writing "perfectly secure code" is hard and time-consuming, and thus expensive. Given that it's hard enough to write reasonably non-buggy code when there are enough of us, what does that predict for security issues? Now add in the variability in the skill level of the developers, and their varying experience with the particular code base they work on.

  • by awitod ( 453754 ) on Saturday June 28, 2008 @05:25PM (#23985059)

    "The problem of course is I'm saying how the companies should handle them, and I have no authority at any of these places, save people actually valuing my ideas. Personally, I've done some development in the past, and there was the concept of defects. Your bonus would depend on how many defects were in your application at delivery time. These were feature-based defects, but shouldn't vulnerabilities be considered defects as well?"

    So, the author freely admits he is neither a developer nor a manager. If he were a developer, he'd know that these are defects and everyone treats them as such.

    If he were a manager, he'd know that one of the surest ways to wreck a good shop is to start doing comp based on defects. Here is what invariably (in my experience) happens when a shop includes defect counts in their comp plans.

    1. Relationships between Dev, QA, Product Management and Operations get worse because the terms 'defect' and 'bug' become toxic. In reality these things always exist in software. The last thing you want to do is create barriers to dealing with them. Making the acknowledgment of a defect cost someone money means you will have arguments over every one of them unless they cause an outright crash.

    2. Culture becomes overly risk-averse - No one wants to take on difficult problems or blaze new territory. The smartest people will naturally pick the easiest work to minimize the risk of defects.

    3. Over-dependence on consultants - More CYA behavior. If it's too complex, people will outsource to keep the defects away. This is a very bad thing if the nasty problems stem from business rather than technical challenges. Now the people who know enough about the problem domain to understand the risk are hiring proxies who know nothing, to avoid responsibility for 'defects'.

  • by SamP2 ( 1097897 ) on Saturday June 28, 2008 @05:29PM (#23985087)

    ...the nature of the security issue.

    A defect, by definition, is an unintended behavior of a program. Something was designed to work, but for whatever reason, doesn't. Compare this to a lack of a feature, which means that something doesn't work because there was never the intention for it to work in the first place.

    A buffer overflow or SQL-injection-related issue is almost certainly a defect, since there is a dedicated, designed parsing mechanism to process input, and if some types of input are not processed as intended, it is a defect of the software (a small sketch of the SQL-injection case follows this comment).

    On the other hand, a security issue arising from plaintext transmission of sensitive data over the net, for example, is not necessarily a defect. If the site in question was never designed to use SSL or another encryption mechanism, then it's a lack of a feature. If the site in question is an online banking site, then it is a blatantly poor and inexcusable design shortcoming, but nonetheless not a defect. (Of course, if the site DID intend SSL to work properly, but for whatever reason there is a hole that allows the encryption to be cracked or circumvented, then it IS a defect.)

    Besides, assigning a "defect" status to a security issue is not necessarily useful for its own sake. The understanding is that a responsible company should treat a security issue with much higher priority than a non-security-related one, defect or not (compare "we released an emergency hotfix to download" to "we'll ship the patch in the next release cycle"). Saying a security issue is a defect is like saying that a cardiac arrest is "organ numbness" - true, but not very useful.
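    A minimal sketch, in Python with sqlite3, of the SQL-injection case described above (the table and data are made up): the query-building code is the designed parsing mechanism, and certain inputs change the meaning of the query instead of being treated as data, which is exactly the "not processed as intended" defect.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
      conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

      def find_user_vulnerable(name):
          # Defect: user input is spliced into the SQL text, so input such as
          # "alice' OR '1'='1" rewrites the query instead of being matched as a name.
          query = "SELECT name, is_admin FROM users WHERE name = '%s'" % name
          return conn.execute(query).fetchall()

      def find_user_fixed(name):
          # The input travels as a bound parameter and is never parsed as SQL.
          return conn.execute(
              "SELECT name, is_admin FROM users WHERE name = ?", (name,)
          ).fetchall()

      print(find_user_vulnerable("alice' OR '1'='1"))  # every row comes back: unintended behavior
      print(find_user_fixed("alice' OR '1'='1"))       # empty list: the input stays data

    The fix is less about policing the input and more about keeping it out of the SQL text entirely.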

  • Sometimes they are. Remember that there are times when a vulnerability is a technical exploit of something subtle, and those are always bugs and defects and should be treated as such. But there are many other times when there is a trade-off to get that additional security. There is very often a balancing act between usability and security.

    This example is certainly not ideal as it does not involve software design, but it is analogous and one that I have personally seen happen. Consider a small company where literally everyone knows everyone, and let us say it is a highly technical company so we can assume that everyone knows the basics of what they are doing. They may choose to give everyone system administrator privileges on the database and give everyone a domain admin account, even if they normally log in on a lower-privileged account. They have absolutely no security in the sense that any user can mess up the system in any way they choose. But they also have none of the usability problems that come with security. They never have to wait for the network admin to be available if some setting needs to be changed, and never have to worry about a user not being able to get a document they need.

    As the company grows, this will become unacceptable. But once security is laid on, now you have to make sure that everyone has the right permissions to read the documents they need. It adds layers of overhead and costs usability. It increases security, but that security comes at the price of tremendous man-hours for a select few domain admins, and often forces users to wait for a designated admin when they need basic things like software installed.

    Was the lack of security and the potential vulnerabilities a defect or a design flaw for the small company?

    For something more immediately applicable, there can also be a trade-off between security and efficiency. For instance, I write a lot of SQL scripts that use dynamic SQL. Adding code to protect against SQL injection requires more of my time to write, more of the computer's time to process, and makes the code more complex for someone else who has to maintain it. It comes down to a trade-off. When I write something that will be used by a broad audience, I always favor security, but when I am writing scripts that will only be used by my internal team, I often favor efficiency and readability (a sketch of what that extra code looks like follows this comment).

    Clearly, there are cases where a vulnerability is a definite defect, but there are other times when vulnerabilities are consciously accepted for usability, performance, or code maintainability reasons. I will agree that performance and code maintainability become less compelling when it is a commercial product being sold, but they can be major factors within a company where you can often give the user at least a certain minimum level of trust, and usability can be a valid reason to consciously permit small security risks in even a commercial package.
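    A hedged sketch of the "extra code" the dynamic-SQL trade-off above refers to, with made-up table and column names: identifiers such as a sort column cannot be bound as parameters, so the defensive additions are a whitelist check for the identifier plus parameter binding for the values.

      import sqlite3

      ALLOWED_SORT_COLUMNS = {"name", "status", "created_at"}  # assumed schema

      def build_report_query(sort_column, status):
          # Column names cannot be bound as parameters, so they get a whitelist check;
          # the value still travels as a bound parameter.
          if sort_column not in ALLOWED_SORT_COLUMNS:
              raise ValueError("unexpected sort column: %r" % sort_column)
          sql = "SELECT name, status FROM tickets WHERE status = ? ORDER BY " + sort_column
          return sql, (status,)

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE tickets (name TEXT, status TEXT, created_at TEXT)")
      sql, params = build_report_query("created_at", "open")
      rows = conn.execute(sql, params).fetchall()

    A handful of lines like these are what is being weighed against readability and speed in an internal-only script.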
  • by fastest fascist ( 1086001 ) on Saturday June 28, 2008 @05:33PM (#23985117)

    Also, if anything external to the way you work (i.e. the promise of more money) can make you work better, you're slacking off in your daily work: why don't you deliver peak performance without the extra money?

    There are two ways to look at performance vs. compensation. Employees, ideally (at least from the employer's viewpoint), will look at it the way you do: you're being paid to do your best, so you should need no extra incentive to do so. Project management, on the other hand, should be pragmatic about it. Sure, employees SHOULD do their best no matter what, but maybe cash incentives can add motivation. If that is found to be the case, a good manager will choose results over principles.

  • by John Hasler ( 414242 ) on Saturday June 28, 2008 @05:34PM (#23985119) Homepage
    You might want to read up on merchantability [findlaw.com], implied warranty [wikipedia.org], and fitness for use. These legal concepts apply to cars and other tangible goods but not to software. They should.
  • by darkPHi3er ( 215047 ) on Saturday June 28, 2008 @05:49PM (#23985223) Homepage

    In the RW, I'd suggest that we consider the following:

    You are Programmer Sian (notice the trendily androgynous name); you work for a gigantic software company, or conglomerate, or industrial concern that does all its own major development in-house. You are potentially confronted with:

    1. Antiquated Developer Tools -- in general, in larger development environments, unless you're Digesting Your Own Pet's Nutrition, you are very likely to be using multi-year and/or multi-generation-old development platforms and tools.

    The question here, then, is how can you effectively hold poor Sian accountable for vulnerabilities that are intrinsic to many older tools?

    Who's more accountable here? Sian or the managers who make the procurement decisions?

    2. "Science Fiction" Application Programming Interfaces - depending on whether you are programming on a well-established product or not, if you are, Poor Sian is probably stuck with API's that were developed many years before and have been the victim of Design Creep, and its, Lunatic Cousin, Design Implosion.

    In many instances the APIs, while they may once have had a large degree of Paradigmatic and Philosophic Design Integrity, as their initial Designers and Implementers have moved on to other; products, companies or, Worst Case, Inpatient Mental Health Facilities. Many New Designers have come in to add "Their Own Programming Uniqueness" to the APIs, frequently rendering the API's a jumble of radically different approaches to similar algorithms.

    Should Sian be subjected to having their pay docked because 9/10 Functions implement a Library Call one way, and some "Johnny-Come-Lately" API function implements a similar looking, but substantially different in output function?

    Shouldn't the API Designers/Architects be held more responsible for this one?

    3. PHB Stupidity - As QC forwards endless application/OS defect notices to the Development/Maintenance Team, these defects are reviewed by the Team Managers and Supervisors. It's understandable, given the 11 hours per day of Absolutely Vital Meetings that most PHBs love to, I mean are forced to, attend, that Defect Prioritization will suffer.

    Sian can't choose what defects to repair, and in what order to repair them.

    This is a management function, and one, in my experience, that Mgt usually jealously and zealously guards.

    SOOOO, it's been the case in every Development project that I've worked on and know about that PHBs have a well-understood tendency to prioritize Defect repair according to external pressures, especially from Sales and Marketing.

    Sales and Marketing organizations are usually likely to prioritize according to the immediate impact on quarterly projections.

    Vulnerabilities are only likely to affect quarterly results when they are CATASTROPHIC defects, i.e. App or OS Killers. Otherwise, the majority of vulnerabilities, which are usually well submerged in the Defect numbers, tend to get shoved aside for the higher-priority defects that S&M believe impact immediate sales.

    There are numerous other considerations here, including Contract Programmers, Legacy Compatibility (ask the Vista Team about that one), Vendor Driver Teams that don't even know what to do with a new code base, etc., etc.

    But it seems to me that, while financial incentives CAN BE useful as a Mgt tool for improving product quality, they should, to be even-handed, be applied across the entire product team, with specific ***POSITIVE*** incentives used to take care of limited, high-priority problems across the product line.

    There's already a tendency to "blame the programmer", and my Best Guess is that any attempt to lay the responsibility for vulnerabilities, THAT AREN'T CLEARLY THE RESULT OF SLOPPY/POOR/INCOMPETENT CODE PRODUCTION, at the feet of the programmer will merely increase the employee turnover in the Development Team. Something that is already a problem most places.

    from my experience: "The Fault, Horatio, Usually Lies Not In Our Code, But In Our Process"

  • by dvice_null ( 981029 ) on Saturday June 28, 2008 @05:52PM (#23985247)

    Actually, you really need just one person in the company with "haxor" skills to test the security of the products that others make. A single person can very quickly find a lot of common holes. That person doesn't need to be a developer. He/she can be there just for testing, or even just for supervising the others who do the testing, to make sure that they test for security vulnerabilities as well (a small sketch of that kind of quick check follows this comment).
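    A minimal sketch of the kind of quick, automated probing a single in-house tester could run: feed a few classic payloads to whatever renders user input and flag anything that comes back unescaped. render_comment() here is a hypothetical stand-in for the code under test, not any particular product's API.

      import html

      def render_comment(text):
          # Hypothetical application code under test: forgets to escape its input.
          return "<p>" + text + "</p>"

      COMMON_PAYLOADS = [
          "<script>alert(1)</script>",        # classic reflected/stored XSS probe
          "\"><img src=x onerror=alert(1)>",  # attribute-breaking XSS probe
      ]

      def check_for_unescaped_output(render):
          # Flag any payload that survives verbatim, i.e. unescaped.
          return [p for p in COMMON_PAYLOADS if p in render(p)]

      print(check_for_unescaped_output(render_comment))                             # both payloads flagged
      print(check_for_unescaped_output(lambda t: "<p>" + html.escape(t) + "</p>"))  # [] -- escaped output passes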

  • It's the other way round Dammit!

    The vast majority of security vulnerabilities are merely exploits of defects!

    How do you hack a system? Find a bug, that's usually pretty easy....

    Then you have the system operating already, "not as the designer intended" and you're more than halfway there...just add a bit of creativity and malice aforethought.

  • by TubeSteak ( 669689 ) on Saturday June 28, 2008 @06:21PM (#23985449) Journal

    Of course vulnerabilities are defects

    If they were defects in the eyes of USA law, they'd be considered a material defect or design defect under existing contract or product liability law respectively.

    There are a few possible outcomes from such a scenario
    A) Nobody writes software anymore because they'd be sued into oblivion
    B) Prices go up because coders & companies have to buy software defect insurance
    C) Prices go up because companies spend more in labor to produce defect free code
    D) The EULA lists every possible failure scenario (plausible or not) in the interests of full disclosure and business continues as usual

  • by Heembo ( 916647 ) on Saturday June 28, 2008 @06:25PM (#23985469) Journal
    Functionality tests are easy to prove through unit and integration testing. Normal users spot functionality bugs quickly during normal product cycles.

    However, security bugs are not easy to test for or discover. In fact, it's very expensive to do the testing needed to uncover even some easy classes of security vulnerabilities. Normal users do not stumble on security problems like they do with functionality issues (a small illustration follows this comment).

    Also, none of your developers were ever taught anything about application security in college. The professors are clueless. Even Michael Howard from MS, who is hiring out of the best universities in the world, cannot find a new grad who has any clue how to build secure software.

    Functionality bugs and security bugs are apples and oranges and deserve very different consideration (like measurement of risk, etc.).

    Last, you can make a piece of software work. But you can never make a piece of software secure; you can only reduce risk to an acceptable level.
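    A small illustration, with a hypothetical file-serving function and document root, of the gap described above: the functional check passes and would keep passing for every normal user, while the security hole only shows up when someone deliberately probes for it.

      import posixpath

      BASE_DIR = "/srv/app/public"  # assumed document root

      def serve_path(requested_name):
          # Functionally correct for ordinary names, but "../" sequences walk out of BASE_DIR.
          return posixpath.normpath(posixpath.join(BASE_DIR, requested_name))

      def test_functionality():
          # The kind of check an ordinary user (or a unit test) exercises every day.
          return serve_path("readme.txt") == "/srv/app/public/readme.txt"

      def test_security():
          # The check nobody stumbles into by accident: does a traversal stay inside the root?
          return serve_path("../../etc/passwd").startswith(BASE_DIR + "/")

      print(test_functionality())  # True  -- normal use and functional testing find nothing wrong
      print(test_security())       # False -- the hole appears only when someone goes looking for it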
  • by turbidostato ( 878842 ) on Saturday June 28, 2008 @07:45PM (#23985977)

    "Was the lack of security and the potential vulnerabilities a defect or a design flaw for the small company?"

    How can somebody twist a simple concept into such a contorted one?

    A defect is nothing more and nothing less than something not working as expected. If something is there by a conscious decision, it's a feature; if something is misbehaving, it's a defect. It's as simple as that. Really.

    Now, on defects: if something works as designed, but the designers didn't think in advance of a given (misbehaving) situation, it's a design defect (in your example, if somebody misuses his admin rights and the boss feels it is unacceptable *now*, that means his security design was flawed. If he answers "well, these things happen, let's move on", then it's a feature). If something doesn't work as designed, and it's misbehaving, it's an implementation defect. If something is working as designed and the designer doesn't feel some behaviour to be misbehaving, then it's a systemic defect (either an unethical seller or an idiot/uninformed buyer).

    And that's all.

  • by turbidostato ( 878842 ) on Saturday June 28, 2008 @07:48PM (#23985987)

    If problems on cars were defects in the eyes of USA law, they'd be considered a material defect or design defect under existing contract or product liability law respectively.
    There are a few possible outcomes from such a scenario
    A) Nobody builds cars anymore because they'd be sued into oblivion
    B) Car prices go up because builders & sellers have to buy car defect insurance
    C) Prices go up because companies spend more in labor to produce defect free cars
    D) The EULA lists every possible failure scenario (plausible or not) in the interests of full disclosure and business continues as usual

    Well, I don't see the car business being in bad shape lately.

  • by tchuladdiass ( 174342 ) on Saturday June 28, 2008 @08:20PM (#23986187) Homepage

    Except, when I bought a new car, there was a small defect in the paint job -- a nearly unnoticeable paint bubble. I'm sure that every car that comes off the lot has a blemish somewhere. Doesn't cause the car to crash, and life goes on. Same thing ain't true with software -- that same "blemish" could easily be turned around and allow someone to break into the software.

    The only way we could get comparable results with software vs. physical objects is if computer systems develop the ability to withstand a certain percentage of defects without adverse effects. Kind of like how a few bolts on a bridge can be bad, because of the way it is engineered: if a section calls for 5 bolts, they put 8 bolts in there just to be safe. Not a lot of practical ways to do that with software, at least not without a performance trade-off.

  • by spidr_mnky ( 1236668 ) on Saturday June 28, 2008 @11:15PM (#23987195)
    If the code works as designed, and that's how it's designed, then that's a mental defect.
  • by KGIII ( 973947 ) <uninvolved@outlook.com> on Sunday June 29, 2008 @03:42AM (#23988325) Journal

    As the company grows, this will become unacceptable. But once security is laid on, now you have to make sure that everyone has the right permissions to read the documents they need. It adds layers of overhead and costs usability. It increases security, but that security comes at the price of tremendous man-hours for a select few domain admins, and often forces users to wait for a designated admin when they need basic things like software installed.

    Two things... The first is that therein lies the rub. Permissions are a flaked-out aspect. Why? See the second point. The unfortunate reality is that when there are permissions, there are also methods to enforce them. Second? You're describing, to the letter, DRM, and we can't have a discussion about DRM as a viable tool. The reality is that CHMOD is a basic form of DRM.

    So, as you state, it's a matter of usability vs. security. No online computer is secure. To deny that is just blind. If it is online, it is vulnerable. If one is more of a realist, then ANY computer that is powered up is vulnerable in some manner. We geeks love to have a secure OS and then use a simple lock on our basement door. Nothing, in truth, is secure. If you tell a secret to one person, it is no longer a secret. If a computer turns on and connects to the internet, it is even less secure; turning one on at all is a vulnerability. Security is not, and never will be, an absolute. It can't be, because information, even secret information, will be shared. One can even argue that information needs to be shared before it is of any value at all, but this isn't the place for that.

  • But then what do you call design features like Windows networking telling you if you got the first letter of your password right, even without the rest of the password, and then letting you do that for the next letter, and so on and so on?

    Seems to me that is still a defect. Not in the software itself, but in the software design (a sketch of that pattern follows this comment).
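    A sketch of the general pattern the parent describes, not of the actual Windows networking protocol: a check that reports how much of a guess is correct is a design defect, because the secret can be recovered one character at a time; a sounder design gives a single pass/fail answer. The secret and function names here are made up.

      import hmac

      SECRET = "hunter2"  # hypothetical stored password

      def prefix_oracle_check(guess):
          # Design defect: the response reveals how many leading characters matched.
          matched = 0
          for g, s in zip(guess, SECRET):
              if g != s:
                  break
              matched += 1
          return matched == len(SECRET), matched

      def yes_no_check(guess):
          # Sounder design: one pass/fail answer, compared without an early exit.
          return hmac.compare_digest(guess.encode(), SECRET.encode())

      print(prefix_oracle_check("hux"))  # (False, 2) -- leaks that "hu" is correct
      print(yes_no_check("hux"))         # False      -- leaks nothing beyond the failure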

  • by KGIII ( 973947 ) <uninvolved@outlook.com> on Sunday June 29, 2008 @12:51PM (#23991407) Journal
    In very basic form one could call passwords DRM, albeit an aspect of digital rights management and not a whole solution. DRM and "copy protection" overlap and the term DRM has some really awful connotations. There is more to DRM than just copy protection however.

    The current restrictions (and methods) of the widely used copy protection are barbaric and disturb me greatly. The way the term DRM has been used has pretty much tarnished its reputation forever, and some people have come to the conclusion that all things DRM are bad without actually realizing how much they already rely on it. DRM is just access control technology; that's all it is. It is a generic term that some people have associated more closely with the copy protection aspects, when it is, in reality, just another layer of security - not just for things with a copyright.

    So, yes, to answer your question again: I'd call that ol' password DRM at its finest. It is an example of DRM being used properly and effectively. DRM != copy protection. Copy protection is one aspect of DRM; it is the portion that gets the most noise from the vocal minority, but it is just one element of DRM.

"If anything can go wrong, it will." -- Edsel Murphy

Working...