
Michal Zalewski On Security's Broken Promises

Lipton-Arena writes "In a thought-provoking guest editorial on ZDNet, Google security guru Michal Zalewski laments the IT security industry's broken promises and argues that little has been done over the years to improve the situation. From the article: 'We have in essence completely failed to come up with even the most rudimentary, usable frameworks for understanding and assessing the security of modern software; and spare for several brilliant treatises and limited-scale experiments, we do not even have any real-world success stories to share. The focus is almost exclusively on reactive, secondary security measures: vulnerability management, malware and attack detection, sandboxing, and so forth; and perhaps on selectively pointing out flaws in somebody else's code. The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected.'"
  • by Monkeedude1212 ( 1560403 ) on Friday May 21, 2010 @03:27PM (#32297392) Journal

    When virtual security mirrors physical security, should people expect more from virtual security? How is a night watchman not a form of "vulnerability management" and "attack detection"?

    All security in general is reactive. You can't proactively solve every problem - that philosophy goes beyond security. The proactive solution is to plan for how to handle the situation when a vulnerability is exploited, something I think virtual security has managed to handle a lot better than physical security.

  • Security is hard (Score:3, Insightful)

    by moderatorrater ( 1095745 ) on Friday May 21, 2010 @03:35PM (#32297518)
    Computer security is roughly equivalent to real-world security, only the malicious agents are extremely fast, can copy themselves at will, and can hit as many targets as they want simultaneously. When considered from the point of view of real-life security, our software security problems seem almost inevitable.

    The central insecurity of software stems from the fact that security requires time and effort, which makes it hard to get management to fully commit to it, and there's nothing in the world that can make a bad or ignorant programmer churn out secure code. There have been solid steps taken that have helped a lot, and programmers are getting more educated, but at the end of the day security requires a lot of effort.
  • by fuzzyfuzzyfungus ( 1223518 ) on Friday May 21, 2010 @03:39PM (#32297590) Journal
    Do you actually think that all IT and PC security companies have a giant cartel going, where they all secretly agree to suck? Somehow including all the "independent security researchers", which includes anybody with a computer, a clue, and some free software?

    Seriously? If there were some magic bullet, the temptation for one cartel member to make a giant pile of cash on it would be overwhelming.

    Much more troublesome, for security, is the fact that there are no known methods of secure computing that are economically competitive with insecure ones, not to mention the issue of legacy systems.

    You can buy a lot of low end sysadmins re-imaging infected machines for what it would cost to write a fully proven OS and application collection that matches people's expectations.
  • by fuzzyfuzzyfungus ( 1223518 ) on Friday May 21, 2010 @04:19PM (#32298208) Journal
    The difference is that programs are mathematical constructs and thus (if you are willing to take the time, and possibly constrain the set of programs it is possible for you to write) you can prove their behavior.
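
    A toy instance of that claim, in Lean (invented for illustration; `double` and its spec are not from the thread): rather than testing a program on some inputs, you state its intended behavior as a theorem and prove it for all inputs.

        def double (n : Nat) : Nat := n + n

        -- Specification: double really is multiplication by two, for every input.
        theorem double_spec (n : Nat) : double n = 2 * n := by
          unfold double   -- replace `double n` with its definition, n + n
          omega           -- linear arithmetic closes the goal n + n = 2 * n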
  • Attacks are cheap (Score:3, Insightful)

    by ballwall ( 629887 ) on Friday May 21, 2010 @04:24PM (#32298280)

    I think the issue is not that we're bad at security; it's just that attacks are cheap, so you need the virtual equivalent of Fort Knox security on every webserver. That sort of thing isn't feasible.

    The lock on my house isn't 100% secure, but a random script kiddie isn't pounding on it 24/7, so it's good enough.

  • by melikamp ( 631205 ) on Friday May 21, 2010 @04:54PM (#32298792) Homepage Journal

    When virtual security mirrors physical security, should people expect more from virtual security? How is a night watchman not a form of "vulnerability management" and "attack detection"?

    I agree about the physical security: with software, we are confronted with a very similar set of problems.

    All security in general is reactive.

    I am not sure what that means. If I have a safe, for example, as a solution to my policy of restricting access, then I have something that is both proactive and reactive. The safe is proactive because it makes unauthorized access via a blowtorch much more expensive than authorized access via a combination. It is reactive because it makes undetectable unauthorized access prohibitively expensive. I don't see why software security is different.

    I am not a professional security specialist, but, with all due respect, I think that I have a clearer understanding of security philosophy than the author of TFA. At times, he seems to be completely lost.

    He spends a lot of time attacking strawmen. He analyzes some definitions, for example: "A system is secure if it behaves precisely in the manner intended - and does nothing more." I would not dignify this with a comment, because this is the definition of bug-free software, nothing else. "A system is secure if and only if it starts in a secure state and cannot enter an insecure state." Does this even mean anything, unless we define "secure state"? He is right about one thing: these are bad definitions. In fact, they are so bad, I can hardly see what they have to do with software security.

    The focus is almost exclusively on reactive, secondary security measures: vulnerability management, malware and attack detection, sandboxing

    He disses the reactive approach, even though it is one of the cornerstones of physical security. A system that cannot be compromised surreptitiously is often a less attractive target than one that can be, making it more secure in practice. And why is sandboxing in this list? Correct me, but it is the poster child of the proactive approach. If your hypervisor or interpreter or whatever sandbox you are using is bug-free and is effective at enforcing your security policy, then the entire process is completely secure.

    Which brings me to my next point. I'll go ahead and try to give a reasonable definition of software security. The software is secure if it is effective at enforcing the given security policy. I don't have to say that it is bug-free: it's an underlying assumption, because if the software has a bug which allows for violation of your policy, then the software is not effective at enforcing it.

    I am perplexed by the omission of the notion of policy from TFA. How can we start talking about security if we have not defined what we are trying to secure ourselves from? Let's take one very popular policy, say, restriction of access to data. Despite all the complaints in TFA, the problem is largely solved. To be more specific, let us imagine that we have a policy as follows:

    (1) Data has to reside on a networked host (otherwise the problem would be trivial).

    (2) Data has to be available upon an authorized request over the network.

    (3) Data has to be available upon an authorized local request.

    (4) Data should not even be detectable by an unauthorized agent.

    (5) The same networked host has to be able to service unrelated public requests (e.g., HTTP).

    I am not a professional, but even I can probably slam together a system, over a weekend, to implement this policy: OpenBSD, one Apache instance under a restricted account to serve public requests, another Apache instance under a separate restricted account with SSL to serve the data, and reasonable file permissions. Good luck compromising me without social engineering.
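
    As a minimal sketch of the policy-enforcement idea (hypothetical code, not the poster's actual setup; the path, port, and credentials are invented, and TLS termination by the SSL-enabled instance is omitted): authorized requests get the data, while unauthorized ones get the same 404 a nonexistent path would, which is what item (4) asks for.

        import base64
        import hmac
        from http.server import BaseHTTPRequestHandler, HTTPServer

        SECRET_PATH = "/data/mydiary.txt"                         # protected resource
        CREDENTIALS = base64.b64encode(b"alice:s3cret").decode()  # demo only

        class PolicyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                auth = self.headers.get("Authorization", "")
                expected = ("Basic " + CREDENTIALS).encode()
                authorized = hmac.compare_digest(auth.encode("latin-1"), expected)
                if self.path == SECRET_PATH and authorized:
                    body = b"the secret data\n"
                    self.send_response(200)
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
                else:
                    # Item (4): no 401 challenge that would reveal the resource
                    # exists -- unauthorized agents see an ordinary 404.
                    self.send_error(404)

        if __name__ == "__main__":
            HTTPServer(("", 8443), PolicyHandler).serve_forever()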

    I guess what I am trying to say is, there is nothing wrong with our understanding of software security. The reason the field looks so bad is that people design overly complicated, contradictory, or outright brain-dead security policies.

  • Re:Too Expensive (Score:3, Insightful)

    by 0xABADC0DA ( 867955 ) on Friday May 21, 2010 @04:56PM (#32298828)

    witness the uproar when somebody suggests replacing Unix DAC with SELinux MAC

    The uproar is because SELinux is a complete pain and tons of work to set up correctly and completely. The SELinux policy for Fedora is ~10 MB compiled. Although it does work pretty well at preventing escalation.

    But even once you finally get the system locked down with SELinux, it still does nothing to prevent BadAddOn.js from reading mydiary.txt if it's owned by the current user.

    What's really needed is:

    - A hardware device to authenticate the user. Put it on your keychain, shoe, watch, whatever.

    - An OS that grants permissions for specific objects based on user input, rather than to processes. If the user selected mydiary.txt from the trusted input dialog, then the browser can read it. Otherwise it can't, or it has to ask permission to do so (the OS puts up a dialog).

    These two things could reliably cover the vast, vast majority of all actual security needs, without hassles to the user, and without remote automated attacks. It still wouldn't be perfect, but it would be orders of magnitude better than what we have now. Unfortunately there's no mass market to provide a general-purpose hardware device like that, and software would have to be modified slightly.
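
    A rough sketch of the second idea in Python (hypothetical; a real implementation would live in the OS, with Tk's file picker standing in for the trusted dialog): the application never gets blanket filesystem access, only an open handle for the one file the user actually picked.

        import tkinter as tk
        from tkinter import filedialog

        def powerbox_open():
            """Stand-in for an OS-level trusted dialog: the user's selection,
            not the requesting process, is what grants read access to a file."""
            root = tk.Tk()
            root.withdraw()                      # no main window, just the dialog
            path = filedialog.askopenfilename()  # selection happens in trusted code
            root.destroy()
            if not path:                         # user cancelled: no access granted
                return None
            return open(path, "rb")              # hand back a capability, not a path

        # The untrusted application only ever sees the handle:
        f = powerbox_open()
        if f is not None:
            print(f.read())
            f.close()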

  • by dltaylor ( 7510 ) on Friday May 21, 2010 @05:04PM (#32298954)

    Until there are negative consequences for the execs, there will never be IT security because it costs money. If the execs of companies that have IT breaches were jailed for a couple of years (hard time, not some R&R farm) and personally fined millions of dollars, they would insist on proper security, rather than blowing it off. 'Course, these are the same guys who schmooze with, and pay bribes to, "our" elected representatives, so that's never gonna happen.

    "Security is not a product, it's a process", and, since there's no easily calculated ROI on the time spent on securing IT, even when there's a paper process, it is so frequently bypassed that it might as well not exist.

  • Re:Motivation (Score:4, Insightful)

    by 99BottlesOfBeerInMyF ( 813746 ) on Friday May 21, 2010 @05:15PM (#32299134)

    A big part of social engineering is that users don't have the patience for the sorts of full explanations required to implement that.

    Why would they need patience if you provide them with immediate verification of who they're talking to, if they're affiliated with who they claim, and if what they are doing is a normal procedure or something strange?

    Consider Microsoft's new UAC system, for example—that's close to what you described,

    No, not really.

    but users tend to either just hit "yes" as quickly as possible to get on with their work

    UAC is a study in how operant conditioning can be used to undermine the purpose of a user interface. It's a classic example of the OK/Cancel pitfall documented in numerous UI design books. If you force users to click a button, the same button, in the same place, over and over and over again when there is no real need to do so, all you do is condition them to click a button and ignore the useless UI.

    Dialogue boxes should be for the very rare occasion when default security settings are being overridden; otherwise the false positive rate undermines their usefulness. Dialogue boxes should be fairly unique, and the buttons should change based upon the action being taken. If your dialogue box says "yes" or anything other than an action verb, you've already failed.

    Further, UAC is still a failure of control. Users don't want to authorize a program to either have complete control of their computer or not run. Those are shitastic options. They need to be told how much trust to put in an application, and they want the option to run a program but not let it screw up their computer. Where's the "this program is from an untrusted source and has not been screened: (run it in a sandbox and don't let it see my personal data)(don't run it)(view advanced options)" dialogue box?
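
    A quick mock-up of what such a dialogue could look like (illustrative Python/Tk sketch invented for this example; the choices are the ones named above, with action verbs instead of a reflexive Yes/No):

        import tkinter as tk

        def untrusted_program_dialog(name):
            """Mock-up of a dialog with action-verb buttons and sandboxing
            offered as the prominent safe choice."""
            result = {"choice": None}
            root = tk.Tk()
            root.title("Untrusted program")
            tk.Label(root, text=name + " is from an untrusted source and has "
                     "not been screened.", wraplength=360).pack(padx=12, pady=8)

            def choose(choice):
                result["choice"] = choice
                root.destroy()

            for label, choice in [
                ("Run it in a sandbox (no access to my personal data)", "sandbox"),
                ("Don't run it", "deny"),
                ("View advanced options...", "advanced"),
            ]:
                tk.Button(root, text=label,
                          command=lambda c=choice: choose(c)).pack(
                              fill="x", padx=12, pady=2)
            root.mainloop()
            return result["choice"]

        print(untrusted_program_dialog("frobnicator.exe"))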

  • Re:Too Expensive (Score:1, Insightful)

    by Anonymous Coward on Friday May 21, 2010 @06:35PM (#32300174)

    Oh yes, that's a brilliant idea; we all know that when a user is bombarded with Yes/No pop-ups he takes the time to read and understand what is happening instead of just hitting yes to make it go away.

    That's exactly the point though. If you have a secure file selection dialog and a secure Finder that grants permission to the files selected by the user, when would a user ever get a permission dialog? They'd never get one, so if one came up they would actually read it. It prevents malicious code from accessing a user's files, but doesn't get in the user's way.

    Can you name a single real instance where a program needs to access a user's files that the user did not select?
