Michal Zalewski On Security's Broken Promises 125
Lipton-Arena writes "In a thought-provoking guest editorial on ZDNet, Google security guru Michal Zalewski laments the IT security industry's broken promises and argues that little has been done over the years to improve the situation. From the article: 'We have in essence completely failed to come up with even the most rudimentary, usable frameworks for understanding and assessing the security of modern software; and spare for several brilliant treatises and limited-scale experiments, we do not even have any real-world success stories to share. The focus is almost exclusively on reactive, secondary security measures: vulnerability management, malware and attack detection, sandboxing, and so forth; and perhaps on selectively pointing out flaws in somebody else's code. The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected.'"
It's true. (Score:3, Informative)
Re: (Score:1)
That's crazy talk. Microsoft has led the way in principles of application security, secure web frameworks.... they've not exactly blazed the trail with managed language runtimes and secure-by-default, but it's an understatement to say they have certainly caught up.
Sarcasm becomes you. (Score:2)
nt;
Re: (Score:2)
On Linux and OS X, apps that want to edit non-user files either need to be started as root or they won't run, period. When MS Windows reaches the same point, Microsoft will have finally caught up.
Right now I have mission-critical accounting software that won't run as anything but administrator, and I don't feel like we're winning the security war at all.
Re: (Score:2)
The same is true on any NT-based Windows operating system. Set up a limited user account in Windows XP and try to run software that makes changes to the system... it won't work! Unless you use runas [microsoft.com] to run the software as an administrative user, it cannot change the system files. The only exception is if they are on a
Re: (Score:1)
Oh yeah, I sort of did forget about the zombies... I work in application security and probably have kind of a narrow view.
It'll Never Happen (Score:2)
Re:It'll Never Happen (Score:5, Insightful)
Seriously? If there were some magic bullet, the temptation for one cartel member to make a giant pile of cash on it would be overwhelming.
Much more troublesome, for security, is the fact that there are no known methods of secure computing that are economically competitive with insecure ones, not to mention the issue of legacy systems.
You can buy a lot of low end sysadmins re-imaging infected machines for what it would cost to write a fully proven OS and application collection that matches people's expectations.
Re: (Score:2)
If there were some magic bullet
Eliminate users?
Re: (Score:3, Funny)
I think normal bullets are sufficient for that. Unless some of the users are wizards, of course.
Re: (Score:2, Funny)
Do you actually think that all IT and PC security companies have a giant cartel going, where they all secretly agree to suck?
They're called security conferences and 'best practice' documents.
Seriously? If there were some magic bullet, the temptation for one cartel member to make a giant pile of cash on it would be overwhelming.
They appear to have found the magic bullet. It's called "the principle of least privilege". Basically, they take away your ability to do anything but log on. Then, when you shout loudly enough that you can no longer do your job, they make you fill out so much paperwork that you'll never want to ask for access again. Finally, when you have just enough access to do enough of your job that you don't get fired (ineffectively and poorly), they conti
Re: (Score:2)
With the resounding success of Win3.0, Microsoft demonstrated that you don't need to provide a secure computing platform if you market your product to customers who know nothing about the technology. Things have gone downhill from there.
Re: (Score:2)
I installed Win3.0 on machines on several different networks: Novell (sometimes), 3Com (mostly), and Lantastic (occasionally). Security was often an issue, even back in those days. Remember, the first malware for PCs travelled from business to business by sneakernet.
Security became more of an issue as many Win3.xx machines were linked to the Internet before Win98 finally came along and most everybody upgraded. Yeah, there was WinNT before Win98, for the companies that could afford its massive support costs.
Re: (Score:2)
You hit the nail on the head right there, with the "economically competitive" part. That's the problem.
Sure, if you've got a bunch of custom hardware running custom software that's thoroughly engineered and audited, and that never exchanges data with the rest of the world, you can have considerably higher security tha
Dedicated machines, tying the hardware down. (Score:2)
I'm wondering how the license issues will fall out on locked-down Android based devices, and that is part of the problem.
(Locked-down and tied-down are slightly different things.)
Re: (Score:2)
Oh yeah, we also write our own malware, or else we'd go out of business. Didn't you get the memo?
I will disagree. (Score:2)
No. And no one is saying that.
You might want to look at this article.
http://www.ranum.com/security/computer_securit [ranum.com]
Re: (Score:2)
You miss the grandparent's point... if it were already possible to fix the problem, the first company that did so would kill off the competition and earn ****loads of money. All the profit from selling the solution would go to that company, so it would be very high, while the loss from shrinking the market would be evenly distributed between it and its competitors, so it would be comparatively low. Since competitors have no reason to be loyal to each other, the only way to prevent it would be if all the securi
Re: (Score:2)
I would gladly go out of business and do something useful. Maybe design a slick database. Or write a cool game. Instead I'm sitting here, improving my VM's ability to detect "pointless" loops in malware.
Allow me to tell you something: AV people tend to be amongst the best in the business. We know more about the Intel architecture than maybe most people at Intel. We know quirks of Windows that even people at MS don't know about. (How do I know? If they knew, they wouldn't have put that crap in there!)
Do
Re: (Score:2)
we do not even have any real-world success stories to share.
"We didn't get hacked or release our entire customer database this month"
So let me get this straight (Score:5, Insightful)
When virtual security mirrors physical security, why should people expect more from virtual security? How is a night watchman not a form of "vulnerability management" and "attack detection"?
All security in general is reactive. You can't proactively solve every problem - this philosophy goes beyond security. The proactive solution is to plan on how to handle the situation when a vulnerability gets exploited, something I think virtual security has managed to handle a lot better than physical security.
Re:So let me get this straight (Score:4, Informative)
In the real world, security is hard because matter is malleable. When an armored vehicle gets blown up, we don't say that it "failed to validate its inputs". It just didn't have enough armor. Even in cases where it survives, all it would have taken is a larger projectile, or one moving a bit faster... When somebody pulls off an SQL injection or something, though, it is because the targeted program did something wrong, not because of the inescapable limitations of matter.
The only real class of security issues that mirrors real-world attacks is DoS attacks and the like, because computational capacity, memory, and bandwidth are finite.
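And because those resources are finite, the defenses end up being quantitative too. The usual first line against resource exhaustion is rate limiting; a minimal token-bucket sketch (the rate and capacity numbers are invented for illustration):

```python
import time

class TokenBucket:
    """Admit at most `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=5)
admitted = sum(bucket.allow() for _ in range(100))
print(admitted)  # a burst of 100 is clamped to roughly the bucket capacity
```

It doesn't stop a determined flood, but it converts "attacker sends N requests" into "attacker gets at most rate * time of service", which is the best a finite resource can promise.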
Re: (Score:1)
Not all real life attacks are blowing somet
Re: (Score:2)
But that's where virtual security is LESS favourable than physical security.
There was a time when SQL injection wasn't even a conceived idea - so how do you protect against that kind of threat?
With physical security, the number of things involved is very small. It basically boils down to keeping bullets out or keeping people out. And both of those get a bonus with the more armour you add or the more people you hire.
With virtual security, you can take a million computer scientists and tell them to get cracking b
Re: (Score:3, Insightful)
Re: (Score:2)
Beware of bugs in the above code; I have only proved it correct, not tried it.
correctness (Score:2)
The most correct program in existence consists of exactly one instruction:
NO-OP
and it is unfortunately not correct in all contexts.
Re: (Score:1)
Many attackable flaws--like SQL injection--are also bugs. That is, unsanitized data is put into something that's parsed for meaning.
(This is a long-known problem, at least in UNIX circles, as it is the SQL equivalent of command quoting problems.)
These bugs show up as crashes and odd behaviour with incorrect user input, or unanticipated user input. (Ask Apple how much it cost for an incorrectly quoted "rm" command in the iTunes update script.)
You test for this stuff by feeding your program the whole range
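The fix for that bug class is well understood: keep the data out of the parser entirely. A minimal, self-contained sketch using Python's sqlite3 (the table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

hostile = "' OR '1'='1"

# Unsafe: the input is spliced into the query text, so it is parsed as SQL.
unsafe = "SELECT * FROM users WHERE name = '%s'" % hostile
print(conn.execute(unsafe).fetchall())  # every row leaks: [('alice', 'admin')]

# Safe: a bound parameter is always treated as data, never as SQL.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (hostile,))
print(safe.fetchall())  # [] -- the hostile string matches no name
```

Exactly the same discipline that fixes shell quoting bugs: one code path where untrusted bytes can never be reinterpreted as syntax.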
Yes, but SQL injection was predictable (Score:2)
That particular example is a bad one for the point you're making.
Things happen when you have control logic and peripherals.
By "peripherals" I mean anything the code can control. It could be a database, or a Space Shuttle main engine.
Dan Bernstein's theory, which he sharply distinguishes from least privilege, is to ruthlessly eliminate the code's control over anything not actually required. No matter how complex the code, it can't do anything that the computer can't. No compromise of my laptop could damage a
Re: (Score:2)
Re: (Score:3, Insightful)
When virtual security mirrors physical security, why should people expect more from virtual security? How is a night watchman not a form of "vulnerability management" and "attack detection"?
I agree about the physical security: with software, we are confronted with a very similar set of problems.
All security in general is reactive.
I am not sure what that means. If I have a safe, for example, as a solution to my policy of restricting access, then I have something that is both proactive and reactive. The safe is proactive because it make unauthorized access via a blowtorch much more expensive than authorized access via a combination. It is reactive because it makes undetectable unauthorized access prohibitively expensive. I don't
Re: (Score:2)
One way in which virtual security does not mirror physical security is in jurisdiction. Physical security can rely, to some extent, on local law enforcement, be it truly local or on a national level. On the internet, the ability of law enforcement to aid in security is limited because those attempting to break in can do so without ever setting foot in the country containing the target machine.
The whole situation feels similar to the Depression era, when bank robbers successfully used jurisdictional limitati
timing? (Score:2)
You mean the race condition between the marketing department's release schedule and the engineering department's bugzilla?
Reactive Security Not Necessarily Bad (Score:1)
Reactive security is not necessarily a bad thing. Only by challenging today's security can we seek to inspire people to improve security for tomorrow.
I do, however, feel that security in the digital age is laughable at best. It turns out telling a computer not to do what it's told is significantly harder than telling it to do what it's told.
Re: (Score:2)
It's indeed a very good thing, when coupled with preemptive security. To give an example in a "traditional" realm, preemptive security would be locking your doors when you leave the house (and setting an alarm, installing bars on windows, etc.). Reactive security would be having the police come to your house with guns drawn because someone is inside (and then later figuring out how they got in and closing that hole). Neither on their own would be sufficien
Re: (Score:2)
Re: (Score:1)
Not clicking suspicious links is not equivalent to not locking your door. It's
Re: (Score:1)
I think the lack of preemptive security is self-balancing. Those who do not believe preemptive security is necessary are those who fall victim and promptly change their tune.
It is unfortunate that such victims traditionally pay an exorbitantly disproportional price for their misconception (such as identity theft victims), but again, I believe the system to be self-balancing. If the thousands and thousands of bone-chilling horror stories about identity theft aren't enough to get you to take it seriously, wha
Wrong approach? (Score:2)
I just get this feeling like the approach is all wrong to security.
At the heart of the security problem is that CPUs generally aren't designed with security in mind. I blame Intel, ARM, Motorola, IBM, and anyone else I can. CPUs are just executing code they're told to execute. NX, ASLR, and other "security" features don't work, particularly when the underlying architecture itself is flawed.
Well, no, IBM gets a pass. Given that the PS3 has yet to see a major exploit, I believe that the Cell may have sec
Re: (Score:2, Informative)
The underlying architecture is fine. Ever since the 286 it's been possible to run code while limiting it to accessing only a specified set of memory addresses. What more is it supposed to do? It's not the CPUs' fault that OSes are failing so hard at the principle of least privilege.
They're just "executing code they're told to execute"? Well, of course - do you want them to refuse to execute "bad" code? If so, please show me an implementation of an IsCodeBad() function.
Re Cabsec - Capability Based Security (Score:3, Informative)
I've read through all the comments, and this is the only sane one that stands out. The principle of least privilege, as I see it, is the idea of letting the user give privileges to a program at run time, choosing the least possible set of resources to get the job done.
The main thing is that with cabsec, you NEVER trust a program with the full resources of a user, and thus it never has enough resources to take out your system.
Consider if Outlook were only allowed to talk to a mail server, and a d
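The Outlook thought experiment can be sketched in miniature with object capabilities: the untrusted code is handed an object representing exactly one grant of authority, and nothing else. All names below are invented for illustration; this is not any real cabsec framework:

```python
class MailCapability:
    """A capability granting send access to exactly one mail server."""
    def __init__(self, host):
        self._host = host

    def send(self, message):
        # A real capability would wrap a socket; this stub just records intent.
        return "sent to %s: %s" % (self._host, message)

def mail_client(mail_cap):
    # The client holds only this object: no filesystem, no other hosts,
    # no ambient authority inherited from the user who launched it.
    return mail_cap.send("hello")

print(mail_client(MailCapability("mail.example.com")))
```

The security argument is structural: the client can't contact a host it was never handed a capability for, so a compromise of the client is bounded by what the user chose to grant.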
Re: (Score:2)
I'm assuming I'm misreading something here? The article is a little light on details but it sounds like all they're doing is constructing automated unit tests by brute-force running each possible input through an existing correct routine which can then be applied to later modifications of said routine. Which strikes me as remarkably bad in several ways:
- You need to know that the routine is correct in order for the "diagnostic subprogram" to create a correct set of tests. But if you already know its corr
Re: (Score:2)
So, your suggestion is to get rid of CPUs?
Should I translate that to, let's convert all of our software to run on dedicated finite-state machines? One machine per program?
Re: (Score:2)
My suggestion is to dump *crappy* CPUs.
Which unfortunately, is a lot of them.
We've got the Processing part down, now we need to make sure that they play nice when talking to foreign machines. Unfortunately, that's not happening.
Common CPU exploits we see now weren't new when Intel designed the 386. It's just that, well, when Intel designed the 386, I don't think anyone expected that architecture to stay the same and be used in mission-critical, 24/7 support situations.
Re: (Score:2)
Getting rid of illusions is a valuable service even without proposing a fix, and it's not the job of the fire alarm to put out the fire.
Sometimes saying there is no solution frees up resources to adapt and cope. If you call the fire department and say "my potassium stockpile has caught fire", the best thing you can tell them is that they need to fall back, protect other buildings, and let it burn itself out.
I still try to protect my clients, but part of that is to warn them that certain problems are unso
It's about time, really. (Score:2)
Bosses keep saying, why re-invent the wheel?
If our wheels are triangular and Microsoft keeps selling us on the idea that wheels are supposed to be triangular, then we need more people to tell it like it is.
Security is hard (Score:3, Insightful)
The central insecurity of software stems from the fact that security requires time and effort, which makes it hard to get management to fully commit to it, and there's nothing in the world that can make a bad or ignorant programmer churn out secure code. There have been solid steps taken that have helped a lot, and programmers are getting more educated, but at the end of the day security requires a lot of effort.
Re: (Score:1)
No, no, no! You've got it all wrong. It isn't the infatuation with Turing Machines that is the problem. It is the infatuation with *Networks*. Once we eliminate all means by which one computer can communicate with another we'll have the perfectly secure computer that we've all always dreamed of.
Actually, that might not go far enough. Some evil hacker terrorist-type might still be able to infect systems through software loaded from a disc or something! Better to just do away with the whole I/O sys
Re: (Score:2)
You keep harping about the turing model.
What do you suggest to replace it? Magic?
A decision machine is going to behave like a Turing machine.
Period.
Analyzing where and how decisions are made is useful. Getting rid of the decision step itself is not.
Turing is not the problem.
The problem is the real world, and the fact that models are never the real thing, or there would be no reason to build models.
Inventing a new programing model that somehow avoids the Turing bottleneck (instead of postponing it or spreadi
Motivation (Score:3, Interesting)
Security can be widely deployed by enterprise IT, OS vendors, and possibly some hardware OEMs. The larger the footprint, the easier it is for real security to be rolled out. The thing is, while some IT departments have very good security, just as many have terrible security. Hardware vendors are unlikely to have the expertise and are unlikely to be able to profit using an integrated security platform as a differentiator. This pretty much leaves OS vendors. MS has a monopoly, so they don't have much financial motivation to dump money into it. Apple doesn't really have a malware problem, with most users never seeing any malware, let alone making a purchasing decision based upon fear of OS insecurity. Linux is fragmented, has little in the way of malware problems, and has niche versions for those worried about it.
I'm convinced malware is largely solvable. It will never be completely eliminated, but the vast majority could be filtered out if we implemented some of the cool new security schemes used in high-security environments. But who's going to do it? Maybe Apple or a Linux vendor, if somehow they grow large enough or their platform is targeted enough. Maybe if MS were broken up into multiple companies with the IP rights to Windows, they'd start competing to make a more secure product than their new rivals. Other than that, we just have to sit in the mess we've made.
Re: (Score:2)
Not as long as social engineering is possible.
Re: (Score:2)
I'm convinced malware is largely solvable.
Not as long as social engineering is possible.
Social engineering relies upon deceiving a user and getting the user to authorize someone they don't know to do something they don't want. By making sure the user is informed of who is contacting them and what exactly that person is doing, as well as making sure something very similar is never required, yes we can eliminate pretty much all cases of social engineering as it is generally understood.
Re: (Score:2)
A big part of social engineering is that users don't have the patience for the sorts of full explanations required to implement that. Consider Microsoft's new UAC system, for example—that's close to what you described, but users tend to either just hit "yes" as quickly as possible to get on with their work, or disable it entirely.
Re:Motivation (Score:4, Insightful)
A big part of social engineering is that users don't have the patience for the sorts of full explanations required to implement that.
Why would they need patience if you provide them with immediate verification of who they're talking to, if they're affiliated with who they claim, and if what they are doing is a normal procedure or something strange?
Consider Microsoft's new UAC system, for example—that's close to what you described,
No, not really.
but users tend to either just hit "yes" as quickly as possible to get on with their work
UAC is a study in how operant conditioning can be used to undermine the purpose of a user interface. It's a classic example of the OK/Cancel pitfall documented in numerous UI design books. If you force users to click the same button, in the same place, over and over and over again when there is no real need to do so, all you do is condition them to click a button and ignore the useless UI.

Dialogue boxes should be for the very rare occasion when default security settings are being overridden; otherwise the false-positive rate undermines their usefulness. Dialogue boxes should be fairly unique, and the buttons should change based upon the action being taken. If your dialogue box says "yes" or anything other than an action verb, you've already failed.

Further, UAC is still a failure of control. Users don't want to authorize a program to either have complete control of their computer or not run at all. Those are shitastic options. They need to be told how much trust to put in an application, and they want the option to run a program without letting it screw up their computer. Where's the "this program is from an untrusted source and has not been screened: (run it in a sandbox and don't let it see my personal data) (don't run it) (view advanced options)" dialogue box?
Transference of responsibility (Score:2)
I'm convinced that the software companies intentionally fuck up the interfaces like that. That way they are not responsible if the user installs something bad.
And, exactly like you posted, the user will NOT read the pop-ups after the first few. All they will see is "click 'yes' to continue", the same as they see on the EULAs and every other pop-up. The same as "do you want to run this".
A basic whitelist would be better for the users than the current situation. And pop up a DIFFERENT box when the user is
Re: (Score:2)
Why would they need patience if you provide them with immediate verification of who they're talking to, if they're affiliated with who they claim, and if what they are doing is a normal procedure or something strange?
How do you propose we identify whether the software is doing "something normal" or "something strange" ? How are you going to define these things ? How are you going to account for situations where the definitions are wrong ?
Re: (Score:2)
How do you propose we identify whether the software is doing "something normal" or "something strange" ?
First, require ACLs as part of applications; second, profile normal use cases for common types of applications, and make sure users can see those profiles and that they make sense; third, require apps to be signed; and fourth, vet those ACLs either as the OS vendor, using third-party security greylists, or a combination of both in a weighted manner. But that's just a quick outline; there have been a number of very good papers written on this topic, and demonstrations of working code, just not moved i
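A toy version of the first and fourth steps, with a made-up manifest format (deny by default; the only grants are the ones the app declared and the vendor vetted):

```python
# Hypothetical per-app manifest; the format and names are invented for
# illustration, not taken from any shipping OS.
MANIFEST = {
    "app": "mail-client",
    "acl": {
        "network": ["mail.example.com:993"],
        "files": ["~/Mail"],
    },
}

def authorized(manifest, kind, resource):
    """Deny by default: only resources the manifest declares are granted."""
    return resource in manifest["acl"].get(kind, [])

print(authorized(MANIFEST, "network", "mail.example.com:993"))  # True
print(authorized(MANIFEST, "files", "~/.ssh/id_rsa"))           # False
```

The vetting step is what makes the declaration meaningful: a mail client asking for your SSH keys should fail review before it ever reaches the user.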
Re: (Score:2)
Every day I launch Visual Basic 6.0 from my start menu, and every time it gives me the same UAC warning that it will be run with admin privileges. If I had to stop and determine the correct button every time, it would be a total PITA. If UAC presented me with a different dialog or presented it under
mess who made? (Score:2)
Bill Gates and Steve Ballmer made the mess, and I'm doing my best not to sit in it.
x86 (Score:2)
Re: (Score:1)
The ISA has nothing to do with it.
We're not talking about low level attacks, we're talking about the overall landscape at the top level.
We couldn't even get that shit right if we were given ideal hardware.
Re: (Score:1)
Considering that the x86 platform is inherently insecure
How is it any more insecure than any other CPU architecture?
Re: (Score:1)
Re: (Score:2)
The other things are at least a little bit easier to deal with when the underlying execution model is stable.
Too Expensive (Score:3, Interesting)
It may be that a secure and convenient system is possible, but it's too expensive for anybody to sit down and write.
Rather, we're slowly and incrementally making improvements. There's quite a bit of momentum to overcome (witness the uproar when somebody suggests replacing Unix DAC with SELinux MAC) in any installed base, but that's where the change needs to come from, since it's too expensive to do otherwise.
If time and money were no object, everything would be different. More efficient allocation of the available time and money is happening as a result of Internet collaboration.
So, 'we're getting there' seems to be as good an answer as any.
Re: (Score:3, Insightful)
witness the uproar when somebody suggests replacing Unix DAC with SELinux MAC
The uproar is because SELinux is a complete pain and tons of work to set up correctly and completely. The SELinux policy for Fedora is ~10 MB compiled. Although it does work pretty well at preventing escalation.
But finally you get the system locked down with SELinux and still it does nothing to prevent BadAddOn.js from reading mydiary.txt if it's owned by the current user.
What's really needed is:
- A hardware device to authenticate the user. Put it on your keychain, shoe, watch, whatever.
- OS that grant pe
Re: (Score:2)
The uproar is because SELinux is a complete pain and tons of work to set up correctly and completely.
Right, but I think this is largely the case because Unix DAC and SELinux MAC are mixed in an unholy matrimony. This causes things to get complicated, and frankly not many people care, so there's not enough work done to do SELinux right. An experimental distro that ripped out Linux DAC would be an interesting project.
The SELinux policy for Fedora is ~10mb compiled
For what, 14,000 packages?
OS that grant perm
random thoughts from way out in left field (Score:2)
If you put a lock on a box and leave the box in the middle of the highway, is the box secure?
I'm inclined less to access control lists (vectors, whatever) and more to ephemeral users (kind of like sandboxes for everything, but not really).
Re: (Score:2)
Do you let users read and write data across their sandbox boundaries? If so, how do you control that?
Re: (Score:2)
Right, but I think this is largely the case because Unix DAC and SELinux MAC are mixed in an unholy matrimony. This causes things to get complicated, and frankly not many people care, so there's not enough work done to do SELinux right. An experimental distro that ripped out Linux DAC would be an interesting project.
It has much more to do with SELinux exploding when it has to deal with shared resources.
Take shared libraries for instance. All the policies label everything in /usr/lib with the same type, so any program can link any library. Obviously this is bad, but the thought of specifically labeling each library and then having rules for every program so they can link the ones they use is madness. They don't do it because in practice, in the real world, it just can't be done.
And then how do you protect mydiary.txt
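For concreteness, this is roughly what the combinatorial blow-up looks like at the policy-source level. The type names below are illustrative only, not taken from the actual Fedora reference policy:

```
# Today: one blanket type for /usr/lib, so one rule per program suffices.
allow myapp_t lib_t:file { read open execute };

# Per-library labeling: one type per library, one rule per
# (program, library) pair...
allow myapp_t libc_t:file   { read open execute };
allow myapp_t libssl_t:file { read open execute };
# ...times every library each program links, times every program
# on the system -- the "madness" described above.
```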
Re: (Score:2)
But even then, even if you do THAT, you need a way to elevate. Occasionally you'll want your browser to have read access to your files, say you
Re: (Score:1, Insightful)
Oh yes, that's a brilliant idea; we all know that when a user is bombarded with Yes/No pop-ups he takes the time to read and understand what is happening instead of just hitting yes to make it go away.
That's exactly the point though. If you have a secure file selection dialog and a secure Finder that grants permission to the files selected by the user, when would a user ever get a permission dialog? They'd never get one, so if one came up they would actually read it. It prevents malicious code from accessing a user's files, but doesn't get in the user's way.
Can you name a single real instance where a program needs to access a user's files that the user did not select?
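That design is often called a "powerbox": a trusted picker runs outside the app and hands back a handle for exactly the file the user chose. A rough sketch, with the picker simulated by a plain function (a real one would be a privileged OS component, and these names are invented):

```python
import os
import tempfile

def powerbox_pick(user_choice):
    """Stand-in for the trusted system file dialog: only it handles paths,
    and it returns an open handle for just the file the user selected."""
    return open(user_choice, "r")

def untrusted_app(handle):
    # The app never sees a path or a filesystem API, only this one handle,
    # so "BadAddOn" code inside it cannot wander off to other files.
    return handle.read()

path = os.path.join(tempfile.mkdtemp(), "mydiary.txt")
with open(path, "w") as f:
    f.write("dear diary")

with powerbox_pick(path) as handle:
    print(untrusted_app(handle))  # dear diary
```

The user experience is unchanged (they already pick files in a dialog); only the authority flow is different.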
Re: (Score:2)
we're not getting there. As an example, McAfee has an on-access scan. Any file read or written gets scanned.
A virus can disable that, so the workaround is to have a monitor program ensure the on-access scan is enabled.
That can be stopped, so you make the service un-stoppable.
That can be worked around, so the current solution is to have another monitor connect to the policy server (for lack of a better term), download what the settings should be, and re-set all of the settings and re-start any stopped servi
Re: (Score:2)
I don't seem to have these problems on my Fedora systems. My parents don't seem to have these problems on their Macintosh.
Windows would be a poor example of making any progress.
fueling global warming, hey? (Score:2)
Come to think of it, we had less automotive pollution (overall, not in certain specific areas) in the '70s, too.
Parasites and Hosts (Score:2)
Re: (Score:2)
Of note is that we are seeing some more sophisticated 'ecologies' of malware coming into view. Botnets that don't 'kill' the victim. Malware that kicks off other malware. However, evolution will 'accept' a less than perfect approach to an infection (ie, getting entirely rid of the thing) as long as the organism can get to successfully reproduce. I just
Re:Not so much in ix and ux environments (Score:4, Interesting)
Modern Microsoft OSs aren't really any more "inherently vulnerable" than anyone else that might be viable in the consumer space. At this point it's more about getting the apps onboard with the security model. In the server space, Win2008 r2 gets most things right - just about everything is off by default, the kernel itself is quite secure, there's a good model for running as a non-admin and escalating when needed.
The biggest problems with Windows right now are apps that pointlessly need to run as admin, and apps that don't sandbox even narrower than "all the current user's data". All OSs are equally vulnerable to social engineering trojans - if you can trick the user into giving you the root password, you win - but outside of that Windows itself is only particularly weak in that a lot of the code is still new.
The real trick for security - for Windows and everyone else - is to adopt a model more like SE Linux where you just aggressively limit what each app has access to. SE Linux is too hard to configure for the broad market, but a simpler approach where each app is sandboxed in a VM with just the resources it needs will shut down the "drive-by" attacks involving Flash, PDF, and similar apps. You can't do much about social engineering trojans, but you can fix the rest with sandboxing/jailing that doesn't require the end user to configure stuff.
The Web browser shouldn't be special in this regard - every app should be jailed automatically, requiring effort from app developers to broaden an app's scope, instead of the current model where app developers are asked to do extra work to narrow an app's scope.
Attacks are cheap (Score:3, Insightful)
I think the issue is not that we're bad at security; it's that attacks are cheap, so you need the virtual equivalent of Fort Knox security on every webserver. That sort of thing isn't feasible.
The lock on my house isn't 100% secure, but a random script kiddie isn't pounding on it 24/7, so it's good enough.
Re: (Score:2)
Not every webserver, but you do need that level on every point of access (physical and virtual). So, this would mean the gateway firewall and/or router, plus the web proxy server(s), plus whatever proxies the users use to connect to the outside world. The web proxies should be on a network that can only talk to the webservers and maybe a NIDS. This isn't overly painful because proxies are relatively simple. They don't involve dynamic database connections, they don't need oodles of complexity, the rights ne
developers' fault (Score:2)
In my experience, developers don't want Security anywhere near their products. We insist that they fix these "theoretical" and "academic" security problems, ruining their schedules and complicating their architectures.
Fine! Whatever. We will continue cleaning up your messes and pointing out the errors in your coding (which we warned you about). You can continue stonewalling us and doing everything you can to avoid us. We still get paid and you still get embarrassed.
Re: (Score:2)
Security is NP-hard? (Score:3)
If you define security as being able to determine whether a program will reach an undesired state given arbitrary input, isn't that equivalent to the halting problem? Isn't that NP-hard? I know I generally force programs to halt when they're behaving badly, if they don't halt on their own.
Re: (Score:1, Informative)
[...] isn't that equivalent to the halting problem? Isn't that NP-hard?
The halting problem is undecidable, which is much worse than NP-hard: NP-hard problems are merely expensive to solve in general, while no algorithm can decide halting at all, no matter how much time you give it.
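For anyone wondering what undecidability means in practice: since no program can decide "does this halt?" (or "does this reach a bad state?") in general, real tools fall back on a decidable approximation - "does it misbehave within N steps?" - and honestly answer "unknown" past the budget. A toy sketch in Python, using generators to stand in for single-stepped programs (all names are illustrative):

```python
def halts_within(program, budget):
    """Decidable stand-in for the undecidable halting problem:
    single-step `program` (a generator) at most `budget` times.
    Returns True if it halted within the budget, or None for
    "unknown" - it might halt on step budget+1, or run forever."""
    for _ in range(budget):
        try:
            next(program)
        except StopIteration:
            return True
    return None

def terminates():
    for i in range(3):
        yield i          # does a little work, then stops

def loops_forever():
    while True:
        yield            # never stops

assert halts_within(terminates(), 100) is True
assert halts_within(loops_forever(), 100) is None  # not False: we can't know
```

This is exactly why fuzzers, scanners, and watchdog timeouts exist: they trade the impossible exact question for a cheap, bounded one.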
jail+fine the execs (Score:3, Insightful)
Until there are negative consequences for the execs, there will never be IT security because it costs money. If the execs of companies that have IT breaches were jailed for a couple of years (hard time, not some R&R farm) and personally fined millions of dollars, they would insist on proper security, rather than blowing it off. 'Course, these are the same guys who schmooze with, and pay bribes to, "our" elected representatives, so that's never gonna happen.
"Security is not a product, it's a process", and, since there's no easily calculated ROI on the time spent on securing IT, even when there's a paper process, it is so frequently bypassed that it might as well not exist.
Re: (Score:2)
We don't even need to go this far.
The solution to both the Quality and Security issue in software is Strict Liability.
We need to hold software to the same standard we require of ordinary goods: no more disclaiming liability for harm, no more waiving the warranty of fitness, none of the other tricks we currently accept as par for the course in software.
Sure, it will slow down software development. But, we're no longer the Wild West here - we have a reasonable infrastructure in place, and it would help soci
mod parent up! (Score:2)
I wonder if my sock puppet has mod points.
Two problems fixing security (Score:2)
#1. Getting management to say "OK, we'll let the deadline slide, max out the budget, and cut some functionality/ease-of-use so we can fix the security flaws."
#2. Getting minimum-wage Java programmers to understand, or care about, securing their code.
Things are not helped by the sad state of the so-called security products from the likes of Symantec, which seem very popular with PHBs; they must have a lot of sales reps hanging around golf courses.
It's also a bit much, Google bitching about other people's security - wardr
Completely depressing article (Score:1)
I would hate to work in an environment where "it's hopeless, nothing we do today works" was the prevailing theme.
Re: (Score:2)
That's one of the reasons I quit the industry 4 years ago.
Not sure why I think I want to return to the industry now, I'm sure I'm not going to get anybody to listen to me this time.
No bug-free software either (Score:1)
The principles of secure software are almost the same as those of bug-free software. No silver bullet - again.
Simple answer: Security is EXPENSIVE (Score:2)
Power Systems security: separate prog. from data (Score:1)
Re: (Score:1)
Interesting points, but you are just a horrible person. Do you put on a balaclava and talk this way at dinner parties?
While I agree that avoiding Microsoft is good, (Score:2)
it only postpones the problem, as long as the industry itself is pushing impossible deadlines and weighing them down (and fueling them) with impossible feature lists.
Sandboxing is good in theory, but nobody really does it right yet. The present version is more of a low wall than a speedbump, for now - but today's low walls are tomorrow's speedbumps.
Re: (Score:2)
> The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected.
The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, NOBODY WANTS TO PAY AN EXTRA BUCK, UNTIL THE DAMAGE IS DONE.