Michal Zalewski On Security's Broken Promises (125 comments)

Lipton-Arena writes "In a thought-provoking guest editorial on ZDNet, Google security guru Michal Zalewski laments the IT security industry's broken promises and argues that little has been done over the years to improve the situation. From the article: 'We have in essence completely failed to come up with even the most rudimentary, usable frameworks for understanding and assessing the security of modern software; and spare for several brilliant treatises and limited-scale experiments, we do not even have any real-world success stories to share. The focus is almost exclusively on reactive, secondary security measures: vulnerability management, malware and attack detection, sandboxing, and so forth; and perhaps on selectively pointing out flaws in somebody else's code. The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected.'"
This discussion has been archived. No new comments can be posted.
  • It's true. (Score:3, Informative)

    by Securityemo ( 1407943 ) on Friday May 21, 2010 @03:25PM (#32297354) Journal
    Computer security will kill itself.
  • IT & PC security companies will never "fix" things or come up with a solid and secure foundation for computer security - because it would put them out of business.
    • by fuzzyfuzzyfungus ( 1223518 ) on Friday May 21, 2010 @03:39PM (#32297590) Journal
      Do you actually think that all IT and PC security companies have a giant cartel going, where they all secretly agree to suck? Somehow including all the "independent security researchers", which includes anybody with a computer, a clue, and some free software?

      Seriously? If there were some magic bullet, the temptation for one cartel member to make a giant pile of cash on it would be overwhelming.

      Much more troublesome, for security, is the fact that there are no known methods of secure computing that are economically competitive with insecure ones, not to mention the issue of legacy systems.

      You can buy a lot of low end sysadmins re-imaging infected machines for what it would cost to write a fully proven OS and application collection that matches people's expectations.
      • by Jurily ( 900488 )

        If there were some magic bullet

        Eliminate users?

      • Re: (Score:2, Funny)

        by syousef ( 465911 )

        Do you actually think that all IT and PC security companies have a giant cartel going, where they all secretly agree to suck?

They are called security conferences and 'best practice' documents.

        Seriously? If there were some magic bullet, the temptation for one cartel member to make a giant pile of cash on it would be overwhelming.

They appear to have found the magic bullet. It is called "the principle of least privilege". Basically they take away your ability to do anything but log on. Then when you shout loudly enough that you can no longer do your job, they make you fill out so much paperwork that you'll never want to ask for access again. Finally when you have just enough access to do enough of your job that you don't get fired (ineffectively and poorly) they conti

      • With the resounding success of Win3.0, Microsoft demonstrated that you don't need to provide a secure computing platform if you market your product to customers who know nothing about the technology. Things have gone downhill from there.

      • by DdJ ( 10790 )

        Much more troublesome, for security, is the fact that there are no known methods of secure computing that are economically competitive with insecure ones, not to mention the issue of legacy systems.

You hit the nail on the head right there, with the "economically competitive" part. That's the problem.

        Sure, if you've got a bunch of custom hardware running custom software that's thoroughly engineered and audited, and that never exchanges data with the rest of the world, you can have considerably higher security tha

      • Oh yeah, we also write our own malware, or else we'd go out of business. Didn't you get the memo?

      • Do you actually think that all IT and PC security companies have a giant cartel going, where they all secretly agree to suck? Somehow including all the "independent security researchers", which includes anybody with a computer, a clue, and some free software?

        No. And no one is saying that.

        Seriously? If there were some magic bullet, the temptation for one cartel member to make a giant pile of cash on it would be overwhelming.

        You might want to look at this article.
        http://www.ranum.com/security/computer_securit [ranum.com]

    • I would gladly go out of business and do something useful. Maybe design a slick database. Or write a cool game. Instead I'm sitting here, improving the ability of my VM at detecting "pointless" loops in malware.

      Allow me to tell you something: AV people tend to be amongst the best in the business. We know more about the Intel architecture than maybe most people at Intel. We know quirks of Windows that even people at MS don't know about (how I know? If they knew they wouldn't have put that crap in there!).

      Do

    • we do not even have any real-world success stories to share.

      "We didn't get hacked or release our entire customer database this month"

  • by Monkeedude1212 ( 1560403 ) on Friday May 21, 2010 @03:27PM (#32297392) Journal

When Virtual Security mirrors Physical Security - people should expect more from virtual security? How is a night watchman not a form of "vulnerability management" and "attack detection"?

    All security in general is reactive. You can't proactively solve every problem - this philosophy goes beyond security. The proactive solution is to plan on how to handle the situation when a vulnerability gets exploited, something I think virtual security has managed to handle a lot better than physical security.

    • by fuzzyfuzzyfungus ( 1223518 ) on Friday May 21, 2010 @03:45PM (#32297668) Journal
      Probably because, at least in theory, the rules of Virtual security are more favorable?

In the real world, security is hard because matter is malleable. When an armored vehicle gets blown up, we don't say that it "failed to validate its inputs". It just didn't have enough armor. Even in cases where it survives, all it would have taken is a larger projectile, or one moving a bit faster... When somebody pulls an SQL injection or something, though, it is because the targeted program did something wrong, not because of the inescapable limitations of matter.

      The only real class of security issues that mirror real-world attacks are DOS attacks and the like, because computational capacity, memory, and bandwidth are finite.
      • In the real world, security is hard because matter is malleable. When an armored vehicle gets blown up, we don't say that it "failed to validate its inputs". It just didn't have enough armor. Even in cases where it survives, all it would have taken is larger projectile, or one moving a bit faster... When somebody pulls an SQL injection or something, though, it is because the targeted program did something wrong, not because of the inescapable limitations of matter.

        Not all real life attacks are blowing somet

But that's where virtual security is LESS favourable than physical security.

There was a time when SQL injection wasn't even a conceived idea - so how do you protect against that kind of threat?

With physical security, the number of things involved is very small. It basically boils down to keeping bullets out or keeping people out. And both of those get a bonus with the more armour you add or the more people you hire.

        With virtual security, you can take a million computer scientists and tell them to get cracking b

        • Re: (Score:3, Insightful)

The difference is that programs are mathematical constructs and thus (if you are willing to take the time, and possibly constrain the set of programs it is possible for you to write) you can prove their behavior.
In reply to this, I'd like to quote a rather famous computer scientist, Donald Knuth:

            Beware of bugs in the above code; I have only proved it correct, not tried it.

          • The most correct program in existence consists of exactly one instruction:

            NO-OP

            and it is unfortunately not correct in all contexts.

        • by greed ( 112493 )

          Many attackable flaws--like SQL injection--are also bugs. That is, unsanitized data is put into something that's parsed for meaning.

          (This is a long-known problem, at least in UNIX circles, as it is the SQL equivalent of command quoting problems.)

          These bugs show up as crashes and odd behaviour with incorrect user input, or unanticipated user input. (Ask Apple how much it cost for an incorrectly quoted "rm" command in the iTunes update script.)

          You test for this stuff by feeding your program the whole range
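The parent's point (unsanitized data handed to something that parses it for meaning) can be sketched in a few lines. This is a minimal illustration using Python's standard-library `sqlite3`, assuming a trivial one-table schema; the placeholder form is the standard fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

evil = "' OR '1'='1"  # attacker-controlled "name"

# Vulnerable: user input is spliced into text that the SQL engine
# parses for meaning -- the classic quoting problem.
unsafe = f"SELECT secret FROM users WHERE name = '{evil}'"
leaked = conn.execute(unsafe).fetchall()   # matches every row

# Safe: placeholders keep data as data; the parser never sees it as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (evil,)
).fetchall()

print(leaked)  # [('hunter2',)]
print(safe)    # []
```

The same discipline applies to shell commands, which is why the parent calls it the SQL equivalent of command quoting problems.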

        • That particular example is a bad one for the point you're making.

          Things happen when you have control logic and peripherals.

          By "peripherals" I mean anything the code can control. It could be a database, or a Space Shuttle main engine.

          Dan Bernstein's theory, which he sharply distinguishes from least privilege, is to ruthlessly eliminate the code's control over anything not actually required. No matter how complex the code, it can't do anything that the computer can't. No compromise of my laptop could damage a

      • Comment removed based on user account deletion
    • Re: (Score:3, Insightful)

      by melikamp ( 631205 )

When Virtual Security mirrors Physical Security - people should expect more from virtual security? How is a night watchman not a form of "vulnerability management" and "attack detection"?

      I agree about the physical security: with software, we are confronted with a very similar set of problems.

      All security in general is reactive.

I am not sure what that means. If I have a safe, for example, as a solution to my policy of restricting access, then I have something that is both proactive and reactive. The safe is proactive because it makes unauthorized access via a blowtorch much more expensive than authorized access via a combination. It is reactive because it makes undetectable unauthorized access prohibitively expensive. I don't

    • by curunir ( 98273 ) *

      One way in which virtual security does not mirror physical security is in jurisdiction. Physical security can rely, to some extent, on local law enforcement, be it truly local or on a national level. On the internet, the ability of law enforcement to aid in security is limited because those attempting to break in can do so without ever setting foot in the country containing the target machine.

      The whole situation feels similar to the depression era where bank robbers successfully used jurisdictional limitati

  • Reactive security is not necessarily a bad thing. Only by challenging today's security can we seek to inspire people to improve security for tomorrow.

    I do, however, feel that security in the digital age is laughable at best. It turns out telling a computer not to do what it's told is significantly harder than telling it to do what it's told.

    • Reactive security is not necessarily a bad thing.

      It's indeed a very good thing. When coupled with Preemptive security. To give an example in a "traditional" realm, Preemptive security would be locking your doors when you leave the house (and setting an alarm/installing bars on windows, etc). Reactive security would be having the police come to your house with guns drawn because someone is inside (and then later figuring out how they got in and closing that hole). Neither on their own would be sufficien

That should have been "not click links from non-reputable sources"... Damn lack of an edit button...
      • If you ask a normal (non-geek) person if they would leave their house unlocked during the day in a bad neighborhood, what do you think they'd say? But if you ask them to not click links from reputable sources (or those that look suspicious), they look at you like you're crazy. THAT's the problem with preemptive security today. Not that it's hard or costly, but that the normal users are not convinced that they should care at all...

        Not clicking suspicious links is not equivalent to not locking your door. It's

      • by Maarx ( 1794262 )

        I think the lack of preemptive security is self-balancing. Those who do not believe preemptive security is necessary are those who fall victim and promptly change their tune.

        It is unfortunate that such victims traditionally pay an exorbitantly disproportional price for their misconception (such as identity theft victims), but again, I believe the system to be self-balancing. If the thousands and thousands of bone-chilling horror stories about identity theft aren't enough to get you to take it seriously, wha

  • I just get this feeling like the approach is all wrong to security.

    At the heart of the security concept is that CPUs generally aren't designed with security in mind. I blame Intel, ARM, Motorola, IBM, and anyone else I can. CPUs are just executing code they're told to execute. NX, ASLR, and other "security" features don't work. Particularly when the underlying architecture itself is flawed.

    Well, no, IBM gets a pass. Given that the PS3 has yet to see a major exploit, I believe that the Cell may have sec

    • Re: (Score:2, Informative)

      by mrnobo1024 ( 464702 )

      The underlying architecture is fine. Ever since the 286 it's been possible to run code while limiting it to accessing only a specified set of memory addresses. What more is it supposed to do? It's not the CPUs' fault that OSes are failing so hard at the principle of least privilege.

      They're just "executing code they're told to execute"? Well, of course - do you want them to refuse to execute "bad" code? If so, please show me an implementation of an IsCodeBad() function.
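The parent's rhetorical `IsCodeBad()` is provably impossible in general, not merely hard. A hypothetical sketch of why (the names `IsCodeBad` and `do_something_bad` are the parent's rhetorical device, not a real API): if "bad" means "eventually runs some known-bad payload", then a working `IsCodeBad` would decide the halting problem.

```python
# Hypothetical reduction: if IsCodeBad(src) existed, this wrapper would
# turn it into a halting-problem solver, which Turing proved impossible.
def gadget_source(program_src: str) -> str:
    """Return source that runs the 'bad' payload iff program_src halts."""
    return (
        f"exec({program_src!r})\n"  # run the arbitrary program to completion
        "do_something_bad()\n"      # reached only if the program halts
    )

# IsCodeBad(gadget_source(p)) would answer "does p halt?" for any p,
# so no total, always-correct IsCodeBad can exist.
```

This is why real-world defenses settle for heuristics, signatures, and privilege containment instead of a perfect bad-code oracle.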

      • I've read through all the comments, and this is the only sane one that stands out. The principle of least privilege, as I see it, is the idea of letting the user give privileges to a program at run time, and they would chose the least possible set of resources to get the job done.

The main thing is that with cabsec (capability-based security), you NEVER trust a program with the full resources of a user, and thus it never has enough resources to take out your system.

        Consider if Outlook were only allowed to talk to a mail server, and a d

    • by reiisi ( 1211052 )

      So, your suggestion is to get rid of CPUs?

      Should I translate that to, let's convert all of our software to run on dedicated finite-state machines? One machine per program?

      • My suggestion is to dump *crappy* CPUs.

        Which unfortunately, is a lot of them.

        We've got the Processing part down, now we need to make sure that they play nice when talking to foreign machines. Unfortunately, that's not happening.

        Common CPU exploits we see now weren't new when Intel designed the 386. It's just that well, when Intel designed the 386, I don't think anyone would've expected that architecture to stay the same and be used in mission critical, 24/7 support situations.

  • Security is hard (Score:3, Insightful)

    by moderatorrater ( 1095745 ) on Friday May 21, 2010 @03:35PM (#32297518)
    Computer security is roughly equivalent to real-world security, only the malicious agents are extremely fast, can copy themselves at will, and can hit as many targets as they want simultaneously. When considered from the point of view of real-life security, our software security problems seem almost inevitable.

    The central insecurity of software stems from the fact that security requires time and effort, which makes it hard to get management to fully commit to it, and there's nothing in the world that can make a bad or ignorant programmer churn out secure code. There have been solid steps taken that have helped a lot, and programmers are getting more educated, but at the end of the day security requires a lot of effort.
  • Motivation (Score:3, Interesting)

    by 99BottlesOfBeerInMyF ( 813746 ) on Friday May 21, 2010 @03:37PM (#32297554)

Security can be widely deployed by enterprise IT, OS vendors, and possibly some hardware OEMs. The larger the footprint, the easier it is for such real security to be rolled out. The thing is, while some IT departments have very good security, just as many have terrible security. Hardware vendors are unlikely to have the expertise and are unlikely to be able to profit using an integrated security platform as a differentiator. This pretty much leaves OS vendors. MS has a monopoly so they don't have much financial motivation to dump money into it. Apple doesn't really have a malware problem, with most users never seeing any malware let alone making a purchasing decision based upon the fear of OS insecurity. Linux is fragmented, has little in the way of malware problems, and has niche versions for those worried about it.

I'm convinced malware is largely solvable. It will never be completely eliminated, but the vast majority could be filtered out if we implemented some of the cool new security schemes used in high security environments. But who's going to do it? Maybe Apple or a Linux vendor if somehow they grow large enough or their platform is targeted enough. Maybe if MS were broken up into multiple companies with the IP rights to Windows, they'd start competing to make a more secure product than their new rival. Other than that, we just have to sit in the mess we've made.

    • I'm convinced malware is largely solvable.

      Not as long as social engineering is possible.

      • I'm convinced malware is largely solvable.

        Not as long as social engineering is possible.

        Social engineering relies upon deceiving a user and getting the user to authorize someone they don't know to do something they don't want. By making sure the user is informed of who is contacting them and what exactly that person is doing, as well as making sure something very similar is never required, yes we can eliminate pretty much all cases of social engineering as it is generally understood.

        • A big part of social engineering is that users don't have the patience for the sorts of full explanations required to implement that. Consider Microsoft's new UAC system, for example—that's close to what you described, but users tend to either just hit "yes" as quickly as possible to get on with their work, or disable it entirely.

          • Re:Motivation (Score:4, Insightful)

            by 99BottlesOfBeerInMyF ( 813746 ) on Friday May 21, 2010 @05:15PM (#32299134)

            A big part of social engineering is that users don't have the patience for the sorts of full explanations required to implement that.

            Why would they need patience if you provide them with immediate verification of who they're talking to, if they're affiliated with who they claim, and if what they are doing is a normal procedure or something strange?

            Consider Microsoft's new UAC system, for example—that's close to what you described,

            No, not really.

            but users tend to either just hit "yes" as quickly as possible to get on with their work

UAC is a study in how operant conditioning can be used to undermine the purpose of a user interface. It's a classic example of the OK/Cancel pitfall documented in numerous UI design books. If you force users to click a button, the same button, in the same place, over and over and over again when there is no real need to do so, all you do is condition them to click a button and ignore the useless UI.

Dialogue boxes should be for the very rare occasion when default security settings are being overridden; otherwise the false positive rate undermines their usefulness. Dialogue boxes should be fairly unique and the buttons should change based upon the action being taken. If your dialogue box says "yes" or anything other than an action verb, you've already failed.

Further, UAC is still a failure of control. Users don't want to authorize a program to either have complete control of their computer or not run. Those are shitastic options. They need to be told how much trust to put in an application and want the option to run a program but not let it screw up their computer. Where's the "this program is from an untrusted source and has not been screened: (run it in a sandbox and don't let it see my personal data)(don't run it)(view advanced options)" dialogue box?

            • I'm convinced that the software companies intentionally fuck up the interfaces like that. That way they are not responsible if the user installs something bad.

And, exactly like you posted, the user will NOT read the pop-ups after the first few. All they will see is a "click 'yes' to continue", the same as they see on EULAs and every other pop-up. The same as "do you want to run this".

              A basic white list would be better for the users than the current situation. And pop-up a DIFFERENT box when the user is

            • by drsmithy ( 35869 )

              Why would they need patience if you provide them with immediate verification of who they're talking to, if they're affiliated with who they claim, and if what they are doing is a normal procedure or something strange?

              How do you propose we identify whether the software is doing "something normal" or "something strange" ? How are you going to define these things ? How are you going to account for situations where the definitions are wrong ?

              • How do you propose we identify whether the software is doing "something normal" or "something strange" ?

First of all, require ACLs as part of applications; second, profile normal use cases for common types of applications and make sure users can see those profiles to make sure they make sense; third, require apps to be signed; and fourth, vet those ACLs either as the OS vendor, using third party security greylists, or a combination of both in a weighted manner. But that's just a quick outline, there have been a number of very good papers written on this topic, and demonstrations of working code, just not moved i
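The outline above can be sketched as a toy check. Everything here is hypothetical (the category name, the permission strings, and the profile table are invented for illustration): an app declares an ACL, and anything outside the normal profile for its category gets flagged before being granted.

```python
# Toy sketch of the vetting step: compare an app's declared ACL against
# a profiled "normal" permission set for its application category.
# All names and permission strings below are illustrative only.
NORMAL_PROFILE = {
    "mail_client": {"net:mail_server", "fs:own_data"},
}

def vet_acl(category: str, declared: set[str]) -> set[str]:
    """Return declared permissions that fall outside the normal profile."""
    return declared - NORMAL_PROFILE.get(category, set())

# A mail client asking for blanket home-directory access is flagged.
suspicious = vet_acl("mail_client", {"net:mail_server", "fs:home_dir_all"})
print(suspicious)  # {'fs:home_dir_all'}
```

A real system would layer signing and weighted third-party greylists on top, as the comment describes; this only shows the profile-comparison idea.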

            • by Bungie ( 192858 )

              If you force users to click a button, the same button, in the same place, over and over and over again when there is no real need to do so, all you do is condition them to click a button and ignore the useless UI.

              Every day I launch Visual Basic 6.0 from my start menu, and every time it gives me the same UAC warning that it will be run with Admin privileges. If I had to stop and determine the correct button every time it would be a total pita. If UAC presents me with a different dialog or presents it under

    • Bill Gates and Steve Ballmer made the mess, and I'm doing my best not to sit in it.

  • Considering that the x86 platform is inherently insecure, I don't understand why this is surprising to people. Until we move away from the architecture, I don't think someone who says they takes PC's security seriously is being as serious as they could be. And yes, I do realize that a new architecture is a huge change, and one that's going to be a long time coming... But it's something that WILL happen. We will eventually need to overcome the shortcomings of x86, and it's at that point that we can really
    • The ISA has nothing to do with it.
      We're not talking about low level attacks, we're talking about the overall landscape at the top level.

      We couldn't even get that shit right if we were given ideal hardware.

    • Considering that the x86 platform is inherently insecure

      How is it any more insecure than any other CPU architecture?

    • by mikazo ( 1028930 )
      While placing a function's return address right next to its local variables and arguments on the stack is kind of a dumb idea, there are many higher-level security issues to work out that aren't specific to x86. For example, phishing, cross-site scripting, input validation, side-channel attacks, in-band attacks, and the fact that it's safe to assume the user is an idiot that will click on anything he's told to. The list goes on.
      • by reiisi ( 1211052 )

        The other things are at least a little bit easier to deal with when the underlying execution model is stable.

  • Too Expensive (Score:3, Interesting)

    by bill_mcgonigle ( 4333 ) * on Friday May 21, 2010 @03:42PM (#32297626) Homepage Journal

    It may be that a secure and convenient system is possible, but it's too expensive for anybody to sit down and write.

    Rather, we're slowly and incrementally making improvements. There's quite a bit of momentum to overcome (witness the uproar when somebody suggests replacing Unix DAC with SELinux MAC) in any installed base, but that's where the change needs to come from, since it's too expensive to do otherwise.

    If time and money were no object, everything would be different. More efficient allocation of the available time and money is happening as a result of Internet collaboration.

    So, 'we're getting there' seems to be as good an answer as any.

    • Re: (Score:3, Insightful)

      by 0xABADC0DA ( 867955 )

      witness the uproar when somebody suggests replacing Unix DAC with SELinux MAC

The uproar is because SELinux is a complete pain and tons of work to set up correctly and completely. The SELinux policy for Fedora is ~10 MB compiled, although it does work pretty well at preventing escalation.

      But finally you get the system locked down with SELinux and still it does nothing to prevent BadAddOn.js from reading mydiary.txt if it's owned by the current user.

      What's really needed is:

      - A hardware device to authenticate the user. Put it on your keychain, shoe, watch, whatever.

      - OS that grant pe

      • The uproar is because SELinux is a complete pain and tons of work to set up correctly and completely.

        Right, but I think this is largely the case because Unix DAC and SELinux MAC are mixed in an unholy matrimony. This causes things to get complicated, and frankly not many people care, so there's not enough work done to do SELinux right. An experimental distro that ripped out Linux DAC would be an interesting project.

        The SELinux policy for Fedora is ~10mb compiled

        For what, 14,000 packages?

        OS that grant perm

If you put a lock on a box and leave the box in the middle of the highway, is the box secure?

          I'm inclined less to access control lists (vectors, whatever) and more to ephemeral users (kind of like sandboxes for everything, but not really).

        • Right, but I think this is largely the case because Unix DAC and SELinux MAC are mixed in an unholy matrimony. This causes things to get complicated, and frankly not many people care, so there's not enough work done to do SELinux right. An experimental distro that ripped out Linux DAC would be an interesting project.

          It has much more to do with SELinux exploding when it has to deal with shared resources.

          Take shared libraries for instance. All the policies label everything in /usr/lib with the same type, so any program can link any library. Obviously this is bad, but the thought of specifically labeling each library and then having rules for every program so they can link the ones they use is madness. They don't do it because in practice, in the real world, it just can't be done.

          And then how do you protect mydiary.txt

The problem is two-fold. Under traditional Unix access systems, a process has the same privileges as the user who runs it. Otherwise every binary (or, to be more general, call it an application context) would need to have its own privileges, its own list of resources it can or can't access. Which is what SELinux does, which is why your policy files are so large.

        But even then, even if you do THAT, you need a way to elevate. Occasionally you'll want your browser to have read access to your files, say you
We're not getting there. As an example, McAfee has an on-access scan. Any file read or written gets scanned.

      A virus can disable that, so the workaround is to have a monitor program ensure the on-access scan is enabled.

      That can be stopped, so you make the service un-stoppable.

That can be worked around, so the current solution is to have another monitor connect to the policy server (for lack of a better term), download what the settings should be, and re-set all of the settings and re-start any stopped servi

      • I don't seem to have these problems on my Fedora systems. My parents don't seem to have these problems on their Macintosh.

        Windows would be a poor example of making any progress.

  • Biology might provide useful metaphors -- in particular, I wonder if the parasite/host relationships might provide insights into attacker/defender models.

    Parasites evolve in response to defense mechanisms of their hosts. Examples of host defenses include the toxins produced by plants to deter parasitic fungi and bacteria, the complex vertebrate immune system, which can target parasites through contact with bodily fluids, and behavioral defenses. An example of the latter is the avoidance by sheep of open p

    • Biology might provide useful metaphors -- in particular, I wonder if the parasite/host relationships might provide insights into attacker/defender models.

      Of note is that we are seeing some more sophisticated 'ecologies' of malware coming into view. Botnets that don't 'kill' the victim. Malware that kicks off other malware. However, evolution will 'accept' a less than perfect approach to an infection (ie, getting entirely rid of the thing) as long as the organism can get to successfully reproduce. I just

  • Attacks are cheap (Score:3, Insightful)

    by ballwall ( 629887 ) on Friday May 21, 2010 @04:24PM (#32298280)

I think the issue is not that we're bad at security, it's just that attacks are cheap, so you need the virtual equivalent of Fort Knox security on every webserver. That sort of thing isn't feasible.

    The lock on my house isn't 100% secure, but a random script kiddie isn't pounding on it 24/7, so it's good enough.

    • by jd ( 1658 )

Not every webserver, but you do need that level on every point of access (physical and virtual). So, this would mean the gateway firewall and/or router, plus the web proxy server(s), plus whatever proxy(ies) the users used to connect to the outside world. The web proxies should be on a network that can only talk to the webservers and maybe a NIDS. This isn't overly painful because proxies are relatively simple. They don't involve dynamic database connections, they don't need oodles of complexity, the rights ne

  • In my experience, developers don't want Security anywhere near their products. We insist that they fix these "theoretical" and "academic" security problems, ruining their schedules and complicating their architectures.

    Fine! Whatever. We will continue cleaning up your messes and pointing out the errors in your coding (which we warned you about). You can continue stonewalling us and doing everything you can to avoid us. We still get paid and you still get embarrassed.

    • by Jaime2 ( 824950 )
      You're generally right, but I make it a point to write code with a watchful eye on things like limiting attack surface and granting least privilege. I'm usually foiled by the people who implement my projects. For example, it drives me nuts that our implementers are willing to put passwords in plain text in config files when my platform offers a command line utility to encrypt them that is transparent to the application. Every time I'm involved in an implementation, the passwords get encrypted. By the t
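The poster's platform-specific encryption utility isn't named, but the general principle of keeping secrets out of plaintext config files can be sketched generically: pull the secret from the environment (or a secrets store) and fail loudly rather than fall back to a plaintext value. A minimal sketch, assuming an environment variable named `DB_PASSWORD`:

```python
import os

def get_db_password():
    """Read the database password from the environment instead of a
    plaintext config file. Refuse to start if it is missing, rather
    than silently falling back to an insecure default."""
    pw = os.environ.get("DB_PASSWORD")
    if not pw:
        raise RuntimeError(
            "DB_PASSWORD not set; refusing to fall back to a "
            "plaintext config value")
    return pw
```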
  • by istartedi ( 132515 ) on Friday May 21, 2010 @04:54PM (#32298798) Journal

    If you define security as being able to determine whether or not a program will reach an undesired state given arbitrary input, isn't that equivalent to the halting problem? Isn't that NP-hard? I know that I generally force programs to halt when they're behaving badly, if they don't halt on their own.

    • Re: (Score:1, Informative)

      by Anonymous Coward

      [...] isn't that equivalent to the halting problem? Isn't that NP-hard?

      The halting problem is undecidable.
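Since deciding halting is impossible, the parent's instinct ("I generally force programs to halt") is the practical workaround: bound execution and treat exceeding the bound as failure. A minimal sketch using a child interpreter and a timeout (illustrative; the code string and timeout values are arbitrary):

```python
import subprocess
import sys

def runs_within(code, timeout):
    """Approximate 'does it halt?' the only way we can: run `code` in a
    child Python interpreter and report whether it finished before
    `timeout` seconds elapsed. subprocess.run kills the child on
    timeout, so a non-halting program is forced to halt."""
    try:
        subprocess.run([sys.executable, "-c", code], timeout=timeout)
        return True
    except subprocess.TimeoutExpired:
        return False
```

This decides nothing about the program in general, of course; it only answers the bounded question "did it halt within the budget?", which is all a watchdog ever needs.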

  • by dltaylor ( 7510 ) on Friday May 21, 2010 @05:04PM (#32298954)

    Until there are negative consequences for the execs, there will never be IT security because it costs money. If the execs of companies that have IT breaches were jailed for a couple of years (hard time, not some R&R farm) and personally fined millions of dollars, they would insist on proper security, rather than blowing it off. 'Course, these are the same guys who schmooze with, and pay bribes to, "our" elected representatives, so that's never gonna happen.

    "Security is not a product, it's a process", and, since there's no easily calculated ROI on the time spent on securing IT, even when there's a paper process, it is so frequently bypassed that it might as well not exist.

    • by trims ( 10010 )

      We don't even need to go this far.

      The solution to both the Quality and Security issue in software is Strict Liability.

      We need to make software accountable to the same standard we require for ordinary goods: no more disclaimers of harm, avoiding Warranty of Fitness, or any of the other tricks we currently accept as par-for-the-course in software.

      Sure, it will slow down software development. But, we're no longer the Wild West here - we have a reasonable infrastructure in place, and it would help soci

  • #1. Getting management to say "OK we'll let the deadline slide, max out the budget and reduce some functionality/ease-of-use so we can fix the security flaws".

    #2. Getting minimum wage Java programmers to understand/care about securing their code.

    Things are not helped by the sad state of so-called security products by the likes of Symantec that seem very popular with PHB's, they must have a lot of sales reps hanging around golf courses.

    It's also a bit much for Google to be bitching about other people's security - wardr

  • I would hate to work in an environment where "it's hopeless, nothing we do today works" was the prevailing theme.

    • by reiisi ( 1211052 )

      That's one of the reasons I quit the industry 4 years ago.

      Not sure why I think I want to return to the industry now, I'm sure I'm not going to get anybody to listen to me this time.

  • The principles of secure software are almost the same as those of bug-free software. No silver bullet -- again.

  • Even in shops I have worked in where there was an attempt at high security at the network level, there are other things the admins must do to get their jobs done that undermine said security, such as using portable media and shared user IDs. This is because security has only been done to the extent that it costs no more money, where a complete infrastructure of servers should have been put in place to allow people to work in the environment; safe staging servers and the like. Furthermore I see less 'securi
  • Mr. Zalewski lives in his little insecure world, not understanding the big picture. IBM Power Systems running i5/OS (formerly iSeries and AS/400) keep data and programs in separate memory areas. You cannot execute data as a program unless you go through an IBM compiler. To my knowledge there has never been a virus in the wild on this platform. We don't even need firewalls! This basic concept is not used in any other computer I know of, but surely the patents have expired by now and others can use this conce
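The architectural point, keeping data and programs in separate spaces, has a small-scale analogue at the application level: parse untrusted input strictly as data instead of evaluating it as code. A hedged Python sketch (nothing i5/OS-specific; the payload strings are made up for illustration):

```python
import json

def parse_untrusted(payload):
    """Treat attacker-supplied text strictly as data: a JSON parser can
    only ever produce values, never run code. Contrast this with
    eval(payload), which would turn the same string into a program
    running with the application's full privileges."""
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        return None  # malformed input stays inert data
```

A string like `'__import__("os").getcwd()'` would execute under `eval()` but is simply rejected as malformed JSON here; the data never gets a chance to become a program.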
