Security IT

The Rise of Software Security (79 comments)

Gunkerty Jeb writes with an article in Threatpost. From the article: "Perhaps no segment of the security industry has evolved more in the last decade than the discipline of software security. At the start of the 2000s, software security was a small, arcane field that often was confused with security software. But several things happened in the early part of the decade that set in motion a major shift in the way people built software ... To get some perspective on how far things have come, Threatpost spoke with Gary McGraw of Cigital about the evolution of software security since 2001."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    Finally

  • by Hognoxious ( 631665 ) on Tuesday September 13, 2011 @04:02PM (#37391736) Homepage Journal

    Advertisement much?

    • Mod parent up. (Score:2, Interesting)

      by khasim ( 1285 )

      TFA is nothing but name dropping and unsupportable claims.

      So in 2001, C was a disaster, C++ was a disaster, Java was getting better, .NET was getting even better.

      Yeah. Right. Check your dates. If you were using .NET in 2001 ...

      You can write secure code in almost any language. It is up to the skill of the coder. Look at the various *BSDs out there.

      • by Anonymous Coward

        Yeah. Right. Check your dates. If you were using .NET in 2001 ...

        I personally started programming .NET in late 1999 (it was called something like Next Gen Services then) as a pre-release Alpha tester. It was released publicly as a Beta in 2000.

      • Re:Mod parent up. (Score:5, Informative)

        by lennier ( 44736 ) on Tuesday September 13, 2011 @05:04PM (#37392318) Homepage

        You can write secure code in almost any language.

        Perhaps you want to believe that claim.

        And yet, the ongoing real-world persistence of privately reported array out-of-bounds errors [microsoft.com] in critical security-dependent code continues to show that even the best programmers can't reliably write secure code, even when their professional reputations depend on it.

        At least, they may occasionally be capable of writing secure code, but they're not capable of never writing any insecure code, or even of testing for the existence of insecure code in the code they have released. Third parties have that privilege. We don't know how many of the third parties who find these bugs are black hats, because we only hear from the white hats. But a 50/50 split between white-hat and black-hat security researchers seems like a good wild-ass guess. So figure one zero-day for every reported monthly security bug. Are you scared yet? You should be.

        Is this ongoing security massacre the fault of the language programmers are using? Absolutely yes. The point of security is that 99% correct isn't good enough when that 1% of errors your toolchain didn't automatically detect can get your entire customer base simultaneously rooted. And array out-of-bounds errors have been a solved problem in some languages since 1970 [wikipedia.org].

        In 2011, insisting on using a language, or any other tool, that doesn't solve a forty-one-year-old, already-solved problem is simply an error.

        • My only question is: why would you assume a 50/50 split for white hat vs. black hat? Certainly the general population is not split 50/50 between good and evil. If that were the case, people would be a whole lot worse off than they are now. I would say that a much larger percentage of hackers are white hat. That being said, even a small number of black-hat hackers can do a serious amount of damage.
          • by Pieroxy ( 222434 )

            My only question is: why would you assume a 50/50 split for white hat vs. black hat? Certainly the general population is not split 50/50 between good and evil. If that were the case, people would be a whole lot worse off than they are now. I would say that a much larger percentage of hackers are white hat. That being said, even a small number of black-hat hackers can do a serious amount of damage.

            It's not 50% bad and 50% the rest of the world. It's 50% black hat, 50% white hat. But 99% of the population is still in neither category.

          • Rose-colored glasses much? The general population is 0.01% white hat and 99.99% black hat. The cool thing about humans... they use ingenious psychological tricks on themselves so that they can ignore reality and logic when needed. One of the places humans need to toss objectivity and logic is when evaluating Self, close family, and close friends. Once logic and objectivity are out of the picture it's perfectly possible to believe that you, your family, and your friends are all good people; it's those

        • C is not insecure per se; it simply requires that all the securing be done in the source code. One of the most important principles of C, as I understand it, is that no hidden actions are performed in the binary that are not written by the programmer in the source code, i.e. the compiler never adds code on its own. Bounds checking would be just that: silent code that executes in the binary but is not there in the source code. Destructors and a garbage collector would fall in the same category.

          I understand th
        • by kmoser ( 1469707 )

          And yet, the ongoing real-world persistence of privately reported array out-of-bounds errors [microsoft.com] in critical security-dependent code continues to show that even the best programmers can't reliably write secure code, even when their professional reputations depend on it.

          If they persistently write such buggy code then I wouldn't consider them the "best" programmers. And that's not even considering that we're talking about Microsoft to begin with.

    • by kafka47 ( 801886 )
      Exactly my thought. What was that supposed to convey?
  • by hesaigo999ca ( 786966 ) on Tuesday September 13, 2011 @04:05PM (#37391766) Homepage Journal

    Let's ask a nobody, as compared to, say, a full-fledged AV engineer who has been in the field since day 1....nothing like getting your information from the source....
    oh wait, I get it now, this was just blatant publicity for this upcoming software security firm that needs to make a name for itself....remind me next time I need to become someone, I should get someone to post a story on /. about me and my company....

    Dont look for a sig, I aint got one...

    • by Dunbal ( 464142 ) *
      That's why most slashdotters never bother with TFA.
      • LOLOLOLOLOLOLOLOLOL

        Of course, even if we RTFA, we would have to take their word that whatever is in it is 100% fact and truth...... ^_^

    • by Anonymous Coward

      "AV engineer"?! You clearly have no idea what the article is talking about do you? Software security and end point security are barely distant cousins.

    • Multiple published books (useful ones) aren't credentials?

      Anyway, an AV engineer wouldn't necessarily be the person to listen to about SDLC.

    • Gary McGraw is a very well-known and well-respected individual in the field of software/application security.
  • by 93 Escort Wagon ( 326346 ) on Tuesday September 13, 2011 @04:05PM (#37391772)

    It's been so nice watching those weekly SANS vulnerability emails get shorter and shorter over the past decade.

    /sarcasm

  • I guess banner ads are not enough.

  • by Anonymous Coward on Tuesday September 13, 2011 @04:35PM (#37392048)

    Claiming that C and C++ are disasters from the standpoint of security is like saying that the IP protocol is a disaster from the standpoint of reliable transmission of data streams, or that HTTP is a disaster from the standpoint of security. Arguably so, but only if you don't supplement them with any other layers or tools.

    • by FormOfActionBanana ( 966779 ) <slashdot2@douglasheld.net> on Tuesday September 13, 2011 @04:44PM (#37392148) Homepage

      C and C++ ARE disasters. gets() and >> can NOT be used safely. Period. Tons of functions in the standard libraries have been rewritten with secure variants, to try to make it vaguely possible for developers to keep track of buffer lengths. Still, some APIs screw it up and it's nearly impossible for an intelligent human to get it right every time without static analysis tools to back him up.
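
      A minimal sketch of the gets() problem described above, with the usual fgets() replacement; the buffer and its size are illustrative (gets() was eventually removed from the standards entirely, in C11 and C++14):

          #include <cstdio>

          int main() {
              char buf[16];

              // gets(buf);   // no size argument at all: any input longer
              //              // than 15 characters overflows buf

              // fgets() takes the buffer size and stops reading accordingly.
              if (std::fgets(buf, sizeof buf, stdin) != nullptr) {
                  std::printf("read: %s", buf);
              }
              return 0;
          }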

      HTTP is not a disaster, but it clearly was not envisioned with security in mind. All attacker-provided data is strings, input data comes from a variety of sources, and there are way more HTTP verbs than strictly necessary. The authentication provided with the spec is "encrypted" with Base64. If this protocol were designed today in its original form, it would be laughed out of its security architecture review.

      • by nzac ( 1822298 ) on Tuesday September 13, 2011 @06:17PM (#37392870)

        C and C++ ARE disasters. gets() and >> can NOT be used safely. Period. Tons of functions in the standard libraries have been rewritten with secure variants, to try to make it vaguely possible for developers to keep track of buffer lengths. Still, some APIs screw it up and it's nearly impossible for an intelligent human to get it right every time without static analysis tools to back him up.

        So don't use gets() and >>; as you said, there are a number of alternatives. You can stuff up an API in any language if you aren't careful, and everyone has access to static analysis tools. Yes, the record is poor, but there are no other alternatives to compare it against. Once you build the checking into the language no one will want to use the slower executables it produces.

        OpenBSD and the other BSDs prove you can make a reasonably secure OS in C. People relying on C/C++ to be intrinsically secure is the disaster.

        • "Once you build the checking into the language no one will want to use the slower executables it produces."

          Except they do. Languages such as Ada, and really any language that allows the compiler to understand what is happening in the code, can optimise away many of the bounds checks. The cost of bounds checking isn't as great as you may think, and if it eliminates -all- buffer overflows, wouldn't you choose it?
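
          As a small illustration of checked versus unchecked access, here is a C++ sketch standing in for the Ada behavior discussed above (the container and index are made up); the checked form turns a silent out-of-bounds write into a defined, catchable failure:

              #include <iostream>
              #include <stdexcept>
              #include <vector>

              int main() {
                  std::vector<int> v(8, 0);

                  // v[100] = 1;   // unchecked access: undefined behavior,
                  //               // the classic out-of-bounds write

                  try {
                      v.at(100) = 1;   // checked access: fails loudly instead
                  } catch (const std::out_of_range& e) {
                      std::cerr << "caught: " << e.what() << '\n';
                  }
                  return 0;
              }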

          • by nzac ( 1822298 )

            Except they do. Languages such as Ada, and really any language that allows the compiler to understand what is happening in the code, can optimise away many of the bounds checks.

            Excuse my ignorance, but where are these benchmarks? All the ones I see have Ada as marginally slower (though I guess not statistically significantly) and everything else not even close. Coding in C you can manually put the bounds checking back in at your discretion, and if the coder does not know where it's needed then the program can hardly be trusted to be secure.

            http://shootout.alioth.debian.org/u64/benchmark.php?test=all&lang=gnat&lang2=gcc [debian.org]
            If you want to quote me a better/more relevant benchmark feel

      • Hmm....I'm aware of the problems of gets(), but haven't heard of possible issues with >> before. Can you give an example or link to an article that explains the issues, so I know what patterns to avoid in the future?

        • Hmm, this operator is super difficult to Google for, that's for sure. The >> operator is unsafe to use when reading into a character buffer because it does not perform bounds checking on the size of its input. An attacker can easily send arbitrarily-sized input to the >> operator and overflow the destination buffer.
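
          A short sketch of that failure mode and two common mitigations, assuming pre-C++20 semantics (C++20 finally caps the array overload at the array size); the buffer size is illustrative:

              #include <iomanip>
              #include <iostream>
              #include <string>

              int main() {
                  char buf[16];

                  // std::cin >> buf;   // pre-C++20: reads until whitespace
                  //                    // with no bounds check, so long
                  //                    // input overflows buf

                  std::cin >> std::setw(sizeof buf) >> buf;   // at most 15 chars + NUL

                  std::string s;
                  std::cin >> s;   // or read into std::string, which grows as needed

                  std::cout << buf << ' ' << s << '\n';
                  return 0;
              }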

  • The vast majority of commenters I've seen, on both /. and the article itself, are all kinds of cynical. This does not help /., and it doesn't help the community. It makes me sad.

    Yes, we realize that you are an amazing h4X0r capable of creating code devoid of buffer overflows, race conditions, (all sorts of) injection attacks, etc. Perhaps you've forgotten there is a spectrum of programmers and, like it or not, you are probably an AVERAGE coder. (They don't call it average because everyone thinks they a

    • Solution: don't let "average coders" write software. If standards were that low in medicine, people would drop dead right and left thanks to "average doctors" not being able to tell flu from a hole in the ground. Fortunately, "average doctors" like those are simply not allowed to practice, and only competent ones actually treat sick people.

      Software development has exactly the same problem -- decades ago all software had lax requirements, so programmers with abilities of an average shaman from a primit

    • There is exactly one possible situation where security is a non-issue: when the person who wrote the code is also the only one running it, with no other code running on the hardware used. In other words, the only application I can think of is some kind of IC firmware with no connection to any kind of network or other form of communication.

      As soon as code has to interact either with unknown other programs or unknown users, or worse, other, possibly unknown machines, security becomes a topic. I

  • real disaster (Score:4, Interesting)

    by roman_mir ( 125474 ) on Tuesday September 13, 2011 @05:22PM (#37392448) Homepage Journal

    The disasters of the beginning of this century include XML in everything. "Agile" development. IE6 and ActiveX controls. IBM Lotus Notes. "Visual" programming, especially mixed with UML and RUP. Passing parameters from URLs directly to database layers as input without sanitization (a sketch of the standard fix follows this comment). Not checking data structure boundaries and sizes. Using root for everything, which is especially nice when combined with a single password that is used for everything in a corporation, especially when that password is some variation of "adminpass1". Buying more and more BEA/Oracle licenses to set up more and more nodes where the real speed problem could be solved with very little code on a single machine, but obviously that's not sexy and doesn't buy perks. Having no testing cycles, never having enough testing, doing irrelevant testing (this even includes automated testing, which can be huge, but still irrelevant).

    Producing huge meaningless documents that end up copied in email to everybody but never get read by anybody they are supposed to address. Having a template "Architecture" where past documents are copied, the names are replaced, no thought is given to the project, and all the details are left to the team at implementation time. This, especially when combined with timelines that give 80% of the time to meetings/architecture, 12% to all of the development combined, and then whatever remains to running around like chickens with their heads cut off, from users to testers to admins, trying to get any of it working.

    All of the above and more, much more are disasters.

    But the real disaster here is that pathetic article that this story refers to.
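
    On the "URL parameters passed straight to the database layer" disaster above, a minimal sketch of the standard fix, using the SQLite C API as a stand-in (the table, column, and function names are illustrative): bind the untrusted value as a parameter instead of splicing it into the SQL string.

        #include <cstdio>
        #include <sqlite3.h>

        // user_id arrives straight from a URL query string: treat it as hostile.
        int lookup_user(sqlite3* db, const char* user_id) {
            // Never splice the value into the SQL text itself: an input like
            // ' OR '1'='1 would rewrite the query.

            sqlite3_stmt* stmt = nullptr;
            const char* sql = "SELECT name FROM users WHERE id = ?";
            if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
                return -1;

            // A bound parameter is passed as pure data, never parsed as SQL.
            sqlite3_bind_text(stmt, 1, user_id, -1, SQLITE_TRANSIENT);

            while (sqlite3_step(stmt) == SQLITE_ROW)
                std::printf("%s\n",
                            reinterpret_cast<const char*>(sqlite3_column_text(stmt, 0)));

            sqlite3_finalize(stmt);
            return 0;
        }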

    • "Having no testing cycles, never having enough testing, doing irrelevant testing"

      Not properly testing is the worst flaw. We barely test whether a package works as designed and under the planned circumstances. Testing for security just does not happen.

      The cost of testing properly is likely nowhere near as large as the potential loss. The compromise against great and secure code in exchange for profit is made anyway. There is no business sense in making the existing code faster or more efficient either, as the leased hardware will be swapped out for faster stuff in the next couple of years.

  • by ka9dgx ( 72702 ) on Tuesday September 13, 2011 @07:26PM (#37393364) Homepage Journal

    I keep watch on "security" threads like this one, hoping to find sanity in at least one answer prior to mine.... and keep getting disappointed.

    You're all wrong, so far.

    Why? It's simple: it's not an application programming issue, it's an operating system design issue.

    The default-permit environment present in everything except IBM's VM is the root cause of 99% of our problems.

    Instead of giving each PROCESS a list of resources and permissions, Linux, OS X, Windows, and pretty much everything else do it at the USER level. (Yes, I know about AppArmor, but that's a special case.)

    This means that all of the defenses are pointed in the wrong direction. (Imagine building a fort with a 10-foot-thick perimeter wall as its sole defense in the age of paratroopers and helicopters to get an idea of the scale of the problem.)

    It doesn't matter how careful or professionally trained the application programmers are, nor how safe the programming language used to write the application is, when the OS isn't even designed to limit what those programs can do. All programs have bugs; you shouldn't have to trust them not to.

    Now, those skills and language enhancements are useful for building the operating system, especially when constructing the micro-kernel to run everything, so it's not wasted effort.

    I predict we'll see stories like this for at least 10 more years, regardless of the effort or money put in, because we haven't changed our approach yet. It's going to take a few more years until the cognitive dissonance gets loud enough in people's heads to prompt them to find a better OS, and a few more years after that to actually have something reasonably solid available. Until then, buckle up... it's going to be a VERY bumpy ride.

    • SELinux addresses that. I like the idea of capability-based OSes, but worry that there may be a reason projects like CapROS haven't caught on in the market.

      • Re:CapROS (Score:4, Interesting)

        by ka9dgx ( 72702 ) on Tuesday September 13, 2011 @07:58PM (#37393606) Homepage Journal

        The reason things like CapROS haven't caught on is inertia... it's a huge pain in the ass to move things to a different desktop, let alone a completely different operating system, where you have to rewrite things, and adjust to a whole new set of annoyances.

        The IBM VM model is a pure capability model: you give RAM, disk, and I/O to a virtual machine, and it can't exceed its authority. Of course, the granularity is a bit rough when you have to grant things in terms of disk systems instead of specific files, folders, etc. You also need a system admin to set things up, which isn't very user-friendly either, so it's clearly not the exact way things will end up when capabilities go mainstream.

        I see the easiest way as looking just like Windows, Linux, etc., except the shortcut to an application also includes a list of files, resources, quotas, etc. This would allow things like accounting applications that can't access the internet (unless you change their settings).
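
        A rough sketch of what such a capability-carrying shortcut could look like as data. This is entirely hypothetical; the types and fields below correspond to no real OS API:

            #include <cstdint>
            #include <string>
            #include <vector>

            // Hypothetical data model: the launcher, not the application,
            // owns this list, and the process receives handles to exactly
            // these resources and nothing else.
            struct Capability {
                std::string resource;   // e.g. "/home/alice/books/ledger.dat"
                bool read  = true;
                bool write = false;
            };

            struct Shortcut {
                std::string executable;                  // e.g. "/opt/accounting/app"
                std::vector<Capability> files;           // explicit file grants
                std::vector<std::string> network_hosts;  // empty = no internet access
                std::uint64_t disk_quota_bytes = 0;
                std::uint64_t ram_quota_bytes  = 0;
            };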

        Eventually, people will figure it out, but it's going to be a LONG wait. In the meantime, the insecurity of everything will get used to sell a lot of software, firewalls, etc., and the worst part is that it offers the perfect excuse for filtering and censoring the internet.

        I'm glad to see you here... now there are at least 2 of us. ;-)

      • by cras ( 91254 )

        SELinux doesn't address the problem. I agree with the grandparent, although I think the focus should be more on the UI side. The really low-level implementation could perhaps be addressed with SELinux, but it's not a practical solution for any GUI app currently. For example, how would you prevent OpenOffice from deleting everything in your home dir with SELinux, while still allowing it to read and write arbitrary documents? Yeah, you can't, unless you manually go changing the labels every time you want to

        • by Pieroxy ( 222434 )

          I read (part of) your paper. This seems overly naive to me. If a program is hidden from the taskbar, it is effectively running in the background and can do any kind of access. If this program has a security flaw, it is a gateway to malware just as much as current operating systems are. Granted, the malware will run in userspace. But if your program is an FTP server (for example) it will have RW access to all your drives. Hence, the malware will feel right at home.

          Now, what you propose is likely to reduce ma

          • by cras ( 91254 )

            I wasn't planning on fixing all security problems, but the typical case of opening random email attachments or running random programs from the internet should and could be made safe, while still keeping the user interface user-friendly. Those are the causes of most of today's security problems.

        • by ka9dgx ( 72702 )

          Thanks for giving me a different viewpoint to consider... you've covered a lot of ground in that write-up.

          I like the idea of giving access to write to an email address as a privilege supplied to a program, instead of letting it just spam the world.

          I'd do the same thing when it came to reading from a website, perhaps filtering to the domain *.slashdot.org, for example. If the user wanted to follow a link outside the domain, they would have to drag/drop the link into something outside the browser's contro

    • Very interesting point. But a bit of a one trick pony.
      You really don't think cross-site scripting or SQL injection are a big deal?

      • Cross-site data leakage IS a problem, of course...

        One of the good design ideas that comes up when describing or building capability based security is the idea of moving file dialogs OUTSIDE of the control of the application. If done properly, the user might not even notice the difference, which is a very good thing.

        If we move the selection of web sites outside the control of the browser in the same fashion, you could then add facilities to filter, log, proxy, etc... without the need to do anything differen

    • by Anonymous Coward

      I completely agree that, as far as operating system security goes, you are spot on.

      It's helpful for people to realise that this isn't because OS designers of the past were stupid; they were just building protection for a completely different environment and threat model.

      A world where many users shared one computer, had almost no network connectivity, and only the Sys admin ever installed software. Things are different now...

  • "Perhaps no segment of the security industry has evolved more in the last decade than the discipline of software security"

    The only thing that's evolved is the amount of money lost to the "software security" sector, and I stopped reading after seeing Microsoft in the same sentence as "secure code":

    `We've also published books like "Writing Secure Code," by Michael Howard and David LeBlanc, which gives all developers the tools they need to build secure software from the ground up' - Bill Gates Jan 15 2002
  • Buffer overflows were known, and formal techniques and specialist languages were created, decades ago. Bill Gates wrote the Trustworthy Computing memo because of the bad press MS was getting; it was pure marketing. Systems from other vendors have evolved to present smaller attack targets on install than they did a decade ago. That is system design, not software technique.
    This guy is a knob blowing his own trumpet.

  • by Skapare ( 16644 ) on Wednesday September 14, 2011 @02:11PM (#37402186) Homepage

    The author clearly does not understand programming languages if he says that, or else he's one of those kinds of programmers that just needs to stay away from languages that let you do whatever you want.

    C and its somewhat object-oriented cousin C++ are just tools to let programmers do whatever it is they want to do. If you know the languages and how they work, you can make whatever you like. You can make insecure programs. Or you can make secure programs. Your choice. What C and C++ suffer from is the stain of blood splatters from programmers who simply do not know what they are doing. Too many programs were written insecurely, not because the language forced them to be, but because these programmers didn't know how to write code that would make them secure.

    I do admit that C has a few flaws, like certain functions that fail to properly test for string overflow conditions. One example is sprintf. Use snprintf in its place so the size of the target is known and checked in the function's own logic. But these are things good C programmers already know about.
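
    A minimal illustration of the sprintf()/snprintf() point above; the buffer and input string are made up:

        #include <cstdio>

        int main() {
            char buf[8];
            const char* name = "a string much longer than eight bytes";

            // std::sprintf(buf, "%s", name);   // no size argument: writes
            //                                  // past the end of buf

            // snprintf() knows the target size: it truncates instead of
            // overflowing, and returns the length the full output needed.
            int n = std::snprintf(buf, sizeof buf, "%s", name);
            std::printf("kept \"%s\", needed %d bytes\n", buf, n);
            return 0;
        }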

    I have my doubts about any other language. If a language is flexible enough to develop a major application in, then it is also flexible enough to let an idiot write an insecure program. However, other programming languages might be useful to C and C++ by drawing away the bad programmers. Let those other languages suffer from them for a while. Java certainly has in more recent years.

    • You know, Skapare, even Kernighan and Ritchie admit that C has serious security problems. I've talked to them both about it. As for C++, it is a complete dog of a language that would be best stricken from the planet.

      You may snicker, but I am a Scheme programmer at heart.

      gem
