The Six Dumbest Ideas in Computer Security
Frater 219 writes "The IT industry spends a huge amount of money on security -- and yet worms, spyware, and other relatively mindless attacks are still able to create massive havoc. Why? Marcus Ranum suggests that we've all been spending far too much time and effort on provably ineffective security measures. It may come as a surprise that anti-virus software, penetration testing, and user education are three of 'The Six Dumbest Ideas in Computer Security.'"
zerg (Score:5, Funny)
Re:zerg (Score:5, Funny)
Come to think of it, why the hell isn't the UN trying to do this already? Won't somebody PLEASE think of the children?
Re:zerg (Score:5, Interesting)
Don't forget Sneakers, which was way cooler (IMNSHO) than Hackers.
Re:zerg (Score:3, Insightful)
Cool by default because it was a movie about hacking before the world at large even knew about hacking (and phreaking, and blue boxes...)
Re:zerg (Score:3, Insightful)
No, no. Before Sneakers there was "War Games."
Cool by default because it was a movie about hacking before the world at large even knew about hacking (and phreaking, and blue boxes...)
Not to mention the fact that, unlike so many other movies about hacking, War Games involved actual research on the system being targeted on the part of the main character in the movie. Sure, most of the research was done as a montage because otherwise it's boring, but it was strongly implied that he spent weeks trying to
Re:zerg (Score:3, Informative)
There were precisely two cool things about Hackers [imdb.com].
1. Angelina Jolie.
2. Airbrushed keyboards.
Sneakers [imdb.com], on the other hand, Hollywoodified an already absurd idea.
A much bigger problem (Score:5, Insightful)
These people get the crap and then bring it into the cocoon, thus negating the hundreds of thousands of dollars of security infrastructure
Re:A much bigger problem (Score:5, Interesting)
Re:A much bigger problem (Score:4, Informative)
They didn't negate it. The stateful firewall still stopped traffic at its border, etc... What they did was expose the lack of hours spent planning the security. Here is what I do, and you are free to use it, improve it or ignore it (that makes it free). In my company, every network jack that does not have a directly attached device on it is plugged into a bank of switches that are separated from my network by a PIX firewall. The firewall has rules that allow basic e-mail, web and specific application data to go across. Most traffic is denied. If anyone plugs a laptop in they are able to do those things but are unable to do Windows file sharing, domain login etc... If they need to use those, I have to be given control of the box and it does not leave the building.
Re:A much bigger problem (Score:3, Insightful)
As well, any network that can get completely owned by a road warrior is inherently brittle. It needs more defense in depth.
Re:A much bigger problem (Score:4, Interesting)
My first frustration with them started when they put up the internet content filter. This I had to bypass by turning on my apache proxy at home and accessing the internet through my home machine (using ssh, of course). The local helpdesk guy just rolled his eyes at me when I showed him playboy.com. I wasn't just being a pain, though - they had the filter tuned so tightly that even some of our vendor websites were filtered.
The next thing they did was run this horrid agent via the login script that lets them do whatever they want. On the surface, it seemed okay because they were just using it to make sure your machine was patched and running the latest anti-virus. However, it seemed to crash or seriously affect the performance of most machines that were still running 95 or 98. Their solution? Put 2000 on all of those machines. Ever run 2000 on a 200 MHz machine with 32-64MB of RAM with Norton running? Unusable. So, I figured out that you could easily trip up their startup script by strategically placing a single text file. The IT guys know this and leave me alone, and in fact refer people to me (with a wink and a nod) when they have this problem. :)
Password management is a disaster. If you use Outlook or webmail, occasionally you might get a warning that your password will expire in n days. One of the options is to change your password. Almost everyone does. Uh-oh, now you can't log in to the network... why? I don't pretend to know. All I know is that you must make the password change when you first log in to Windows, and never when prompted after login. I'd ridicule the people that haven't grasped this - but really, they are just following directions, aren't they?
What is next? I don't know, but there is a reason that us Luzers find the IT management to be an obstacle rather than a help.
Re:A much bigger problem (Score:3, Interesting)
It really is neat-o when I read about personal stories about hell-desk or being that "luzer" (when we know you arent... luzers dont even know what ssh is).
Thanks. (no, this isnt satire, I really am pleased that slashdot can still generate what it originally did years ago.. real people commenting about their problems.)
Re:A much bigger problem (Score:5, Interesting)
[snip]
Their solution? Put 2000 on all of those machines. Ever run 2000 on a 200 MHz machine with 32-64MB of RAM with Norton running?
Well, if you read between the lines here, it's clear that at least one reason that your IT department does stupid things is because there isn't a proper capital budget for replacing old machines. In fact I'd bet they don't have a proper operating budget either. It's typical enough: not enough resources to prevent problems, barely enough resources to mount a pantomime of a response to them when they arise. The only thing you'd need to get a perfect trifecta of dysfunctional management is a culture of scapegoating masquerading as "accountability".
The typical game plan:
(1) Willful ignorance
(2) Wishful thinking
(3) Make a show of responding
(4) Look for somebody to blame.
IT is overhead, and overhead is the devil when you run a company. That means in a well run company you seldom can expect everything you might wish for. But you can't just wish overhead away: you have to be smart enough to know when spending less on one piece of overhead means you spend more in ten other places. Sounds like your senior management fails this test.
Dumbest security policies? (Score:5, Interesting)
I worked for a firm earlier where we had to change our passwords every week, and the password had to 1) be exactly 14 characters and 2) be ~60% different from the previous four passwords.
The result was of course that almost every user had their passwords on post-it notes.
Re:Dumbest security policies? (Score:5, Interesting)
I still had to change my password every two weeks, with conditions similar to what you describe -- IIRC ten or more characters, a mix of numbers and letters, and it had to be substantially different from the one before. I eventually got a system down for remembering what it was, but I'll be the first to admit I was using my Mac's "stickies" to keep track of the password for the first six months. Considering they were dealing primarily with graphic designers, not programmers, I can only imagine what some of the other employees were doing. Since they also weren't the easiest employers to deal with, I can only imagine that the lack of give-a-shit factor kept many employees from trying too hard to keep that ever-changing password a closely guarded secret. Let me stress that the damage that could be done if my password was compromised was completely negligible -- maybe someone could have inserted a dirty message in a greeting card, but it still had another check to go through before it went online!
Basically my point is, there's a point where security for security's sake is an annoyance. I'm certainly not an expert in these matters, but IMO making low-level users jump through hoops is just going to foster ill will; better to lock down their privileges in the first place and make sure no damage can be done if that account is compromised. Frequently changing admin passwords is of course another matter, but that's part of the responsibility that comes with the job.
Re:Dumbest security policies? (Score:3, Insightful)
Re:Dumbest security policies? (Score:3, Insightful)
For real effectiveness, though, you have to implement this the way we have it at work -- every webapp, from travel reservations to sexual harassment training, has a different account with a different login name and mandatory strong, rotated passwords.
Re:Dumbest security policies? (Score:5, Funny)
Man, you had it easy. My current place uses iris scans for authentication. We have to swap out our eyeballs every 30 days, and our new eyes can't be the same colour as the last pair.
Re:Dumbest security policies? (Score:3, Interesting)
A. User allowed to use simple passwords that they can easily remember such as 'password', or 'abc123'. This user doesn't have to write their password down to be able to remember it.
B. User with a complex password, but writes it on a post it note because they don't stand a chance in hell of remembering it.
If user B is also requested to take the simple step of placing the
Re:Dumbest security policies? (Score:5, Interesting)
Needless to say the course was less than effective and illustrates what should be the seventh dumbest idea - "Security policies have no effect on productivity". The amount of grief caused to companies by rigid, pedantic security nazis is astounding.
MOD PARENT UP (Score:3, Insightful)
Re:Dumbest security policies? (Score:5, Funny)
I don't understand why people bother with post-it notes
Re:Dumbest security policies? (Score:3, Funny)
Better not do that on your girlfriend's PC.
Make that your ex-girlfriend's PC.
Re:String comparison? (Score:3, Interesting)
Why the hell would writing your password on post-its be a stupid idea? Everywhere I've worked, the IT people didn't give a shit about the guy in the next room or cube getting your password. It was the people outside the building that mattered.
You are telling me that you could come up with a unique 14 character password every week and not have to write it down? Listen, I'm a pretty fucking smart guy, and I don't have that ability. Wit
Real security has to be built into the foundation (Score:5, Insightful)
To illustrate, ask yourself this question: why do most corporate computer users have permissions on their computer to download and execute arbitrary programs?
Now, it should be noted that even Linux gives the average user this capability. But that needn't be so.
Antivirus programs are a bandaid, not a solution. But most people treat them as a solution, and therein lies the problem.
If you really want to take care of security issues, you have to do so at the foundation.
Re:Real security has to be built into the foundati (Score:5, Interesting)
Relevant example:
alex@joker:/tmp# mount | grep tmp
/dev/hda7 on /tmp type ext2 (rw,noexec,nosuid,nodev)
alex@joker:/tmp# ./date
bash: ./date: Permission denied
alex@joker:/tmp# /bin/date
Sun Dec 3 17:49:23 CET 2000
Uninstalling right now (Score:5, Funny)
I'm gonna stop using condoms too while I'm at it.
Re:Uninstalling right now (Score:5, Funny)
What does making water balloons have to do with preventing a computer infection? I don't get it.
You bet! (Score:3)
I also stopped using condoms, since I limit my activities to my wife. I'm also free from those sorts of
Re:Uninstalling right now (Score:3)
If you're careful about what you install, stay away from Kazaa and warez, and keep an eye on your windows\currentversion\run registry entries, and for god's sake do not open file attachments, you can stay saf
Highly applicable (Score:5, Informative)
Woah, he's not talking about Slashdot?
He mixed up hacking and cracking (Score:5, Insightful)
Re:He mixed up hacking and cracking (Score:5, Informative)
Re:He mixed up hacking and cracking (Score:3, Insightful)
There's little point fighting battles that you can't win, unless you mean to make an example in your loss. In this case, you can't possibly win and there's no example to make (except perhaps that language evolves - big deal); I'd suggest saving your effort for something you *can* make a difference to.
Poor Article (Score:5, Interesting)
Re:Poor Article (Score:5, Insightful)
I guess that people who comment like this have never done any serious security work in their life.
If you had, you'd acknowledge all the points (plus the extras) easily...
On my webservers... (Score:5, Interesting)
I patch PHP to set a constant in the namespace of the script whenever a 'dangerous' function is called (e.g. system(), shell_exec(), the backtick operator, and others).
If the script is allowed to call the functions, all well and good, it's just logged. If not, the offending IP address is automatically firewalled. I purloined some scripts [ibm.com] from the 'net that allow shell-level access to manipulate the firewall.
So, now I had a different problem - the webserver wasn't running anywhere near the privilege needed to alter the firewall, and I didn't want to just run it under sudo in case anyone broke in. I wrote a setuid program (in Java, for the bounds-checking, compiled with gcj) that takes a command string to run, an MD5-like digest of the command, and a set of areas to ignore within the command when checking the digest. The number of areas is encoded into the digest to prevent extra areas being added. If the digest doesn't match, the program doesn't run. This is a bit more secure than 'sudo' because it places controls over exactly what can be in the arguments, as well as what command can be run. It's not possible to append ' | my_hack' as a shell-injection.
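(For the curious, here's a stripped-down sketch of the same idea in C -- not my actual Java/gcj program: the "areas to ignore" feature is left out, and the digest value and the fwctl helper path are placeholders. Compile with -lcrypto.)

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <openssl/md5.h>

/* Hex MD5 digests of the only command lines this wrapper may run
   (placeholder value -- you'd generate these for your real commands). */
static const char *allowed[] = {
    "0123456789abcdef0123456789abcdef",
    NULL
};

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s 'command'\n", argv[0]);
        return 1;
    }

    /* Digest the whole command string and render it as hex. */
    unsigned char md[MD5_DIGEST_LENGTH];
    char hex[2 * MD5_DIGEST_LENGTH + 1];
    MD5((const unsigned char *)argv[1], strlen(argv[1]), md);
    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02x", md[i]);

    for (const char **p = allowed; *p; p++) {
        if (strcmp(*p, hex) == 0) {
            /* Matched: exec the helper directly with a fixed argv --
               never through a shell, so ' | my_hack' has nowhere to go. */
            execl("/usr/local/sbin/fwctl", "fwctl", argv[1], (char *)NULL);
            perror("execl");
            return 1;
        }
    }
    fprintf(stderr, "command digest not in whitelist; refusing\n");
    return 1;
}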
So, now if by some as-yet-unknown method, you can write your own scripts on my server (it has happened before, [sigh]), you're immediately firewalled after the first attempt - which typically is *not* 'rm -rf
Well, PHP and SQL injection of course, but the same script is used there - if the variables being sent to the page are odd in some way (typically I look for spaces after urldecoding them as a first step - SQL tends to have spaces in it
What would be nice would be a register within a PHP script that simply identified which functions were called. In the meantime, this works well for me...
Just thought I'd share, because it's similar to what the author is saying regarding only trusting what you know to work, and everything else gets the kick (squeaky wheel-like
Simon
DRM (Score:4, Interesting)
Re:DRM (Score:4, Insightful)
DailyDave (Score:3, Interesting)
http://lists.immunitysec.com/pipermail/dailydave/2005-September/002347.html [immunitysec.com]
Dave's "Exactly 500 word essay on 'Why hacking is cool, so that Marcus changes his web site'": http://lists.immunitysec.com/pipermail/dailydave/2005-September/002366.html [immunitysec.com]
One good point this article makes (Score:5, Interesting)
OTOH, it is a colossal pain in the arse to deny all traffic and only allow what you want, because so much code is network aware these days and designed to talk to some place across the net. Then again, it does tell you which apps are communicating in the first place.
On my Windows boxes I use Sygate Personal Firewall to create a specific list of allowed executables and block everything else with a block-all entry at the bottom of the fall-through list. No match, no talk. Inbound and out. Combined with NAT it makes for very little traffic reaching my internal network. When I leave my desk for the night and Windows is running, I remove a few check marks and save, and it only allows the file sharing app to talk - and I keep that updated and locked down at all times.
It also can be set to approve or deny execution of code that may have changed since last allow/deny challenge.
That which is not forbidden is not only not compulsory, but probably suspicious.
Dumbest Ideas in Corporate Email Security (Score:5, Funny)
1) Complex password requirements
Password must be 10+ characters in length, contain upper and lower case letters, 3 numbers and 2 special characters.
Result:
Users keep their passwords on post-it notes stuck to their monitors.
2) Constant password expiration
Passwords expire every 3 months. New passwords can not resemble old passwords.
Result:
Users keep their passwords on post-it notes stuck to their monitors.
#4) Hacking is Cool (Score:5, Interesting)
Crime as a problem of context is studied in Gregory Bateson's seminal book Mind and Nature: A Necessary Unity [amazon.com]. Bateson addresses two flaws in our court system. One is to treat a crime as something isolated and somehow measurable in penal terms. Taking a crime out of context, i.e., the makeup of the criminal, is blind to the forces that generate criminal actions.
Bateson speaks of (crime) "...as not the name of an act or action; it is the name of a frame for action. ...(he suggests)... we look for integrations of behavior which a) do not define the actions which are their content; and b) do not obey ordinary reinforcement rules." In this context he suggests play, crime and exploration fit the description. As long as we are only able to punish according to some sort of arbitrary eye-for-an-eye method of bookkeeping we will be unable to root out crime.
Bateson's second criticism of our judicial system addresses its adversarial nature. He writes... "adversarial systems are notoriously subject to irrelevant determinism. The relative 'strength' of the adversaries is likely to rule the decision regardless of the relative strength of their arguments."
He further goes on to a brilliant analysis of the Pavlovian study of dogs in terms of the dog's view of the context, and how the dog's context is violated when the dog's view of a "game" of distinction is morphed into a game of guessing without there being any markers to tell the dog the context of the game has been changed. This switch in context drives neurotic and violent behaviour in the dog. I suspect much antisocial behaviour is driven by the criminal's inability to read society's context markers.
No "default permit" for application launch in OSX (Score:5, Insightful)
Try OSX. As of some update about a year ago, OSX stopped having "default permit" for launching applications by double-clicking. If you double-click and that leads to launching an executable that hasn't been run before, it pops up a dialog to ask you about it.
Thus, no more executables bearing viruses disguised as documents.
Re:No "default permit" for application launch in O (Score:3, Insightful)
Re:No "default permit" for application launch in O (Score:4, Insightful)
The Final Solution (Score:4, Interesting)
The solution is to keep the operating system and applications on read-only media. The end-user operating system of the future should be designed around this idea, and machines should reboot from read-only media on a regular basis; this way viruses cannot spread and worms cannot get a foothold.
It's doable. It's feasible. It's the future, once engineers really decide to solve the problem.
I disagree on part of default permit (Score:5, Insightful)
"On my computer here I run about 15 different applications on a regular basis. There are probably another 20 or 30 installed that I use every couple of months or so. I still don't understand why operating systems are so dumb that they let any old virus or piece of spyware execute without even asking me."
The author has a point here, but the answer to his question is very simple - his computer doesn't ask for permission to execute most programs because most users would absolutely panic if their computer regularly asked for their input.
I base this on my own experience as a college tech, which is necessarily limited. That said, two points to consider:
I have never, ever seen a student running in a non-administrator account on their Windows PC, even though XP supports this feature. This would prevent much malicious software from running, and avoids the "default permit" behavior that the article author finds so odious. However, users do *not* want to see error messages when they try to run things, nor do they want to log into a different account to install their p2p flavor of the week. They want things to "just work". So, non-administrator accounts are fantastically unpopular.
Another example: ZoneAlarm. My school encourages students - in fact, tells students they are *required* - to install ZoneAlarm. So what happens? ZoneAlarm asks them if they want to let, say, AIM or Weatherbug access their network connection - and the user freaks out. They think it's an error, that their computer is busted, etc.
In short- desktop machines tend to be default-permit because desktop users are completely unwilling to deal with an alternative arrangement.
Locking down users (Score:5, Interesting)
I have a pretty strong personality and a thick skin, but after a while, I gave up. Even brand-new interns complained that they were not able to install their "favourite software", or about the blocked ports at the corporate firewall.
After a while, the HR manager came to me and said that, in four years, half of the employees had complained about me. Whenever I tried to change something (firewall, user rights,
All of the users work as administrators on their computers at home - I know that, because most of them told me about the troubles they have with spyware and viruses - but they would never accept having lower permissions at work. The common belief is that the computer at work is actually theirs.
The same with company laptops. Everyone connects them to insecure networks at home, at friends' places, in hotel rooms, at other companies and so on, and after a business trip you have to either reinstall the machine or remove spyware/malware.
It's just the lack of understanding, the habit to always work with admin rights at home and the lack of respect for the job of an IT administrator/manager.
Re:Locking down users (Score:3, Informative)
Here's why:
(1) Response times. When I made a request for installation of, or permission to install, software needed for my work responsibilities, response times ranged from 45 minutes to a couple days. 45 minutes is little enough time to find something else to do in. A period
Re:Locking down users (Score:4, Insightful)
Re:Locking down users (Score:4, Interesting)
See? You're the best example. I/We am/are talking about account restrictions for average users (no admin access) in business environments and you're calling me a "power tripping network Nazi". That's exactly what I mean. At work, it's not your computer, and it's not your responsibility when something really bad happens.
Just go on with your administrator account at home.
they missed a big one... (Score:3, Insightful)
Well said (Score:5, Interesting)
I worked in "security research" field for 10 years. I loved it.
Then companies got involved, certifications/courses/books appeared, pentesting became a business...
I moved to another field, for the very reasons MJR explained in his editorial.
Everyone wanted to be "secure", but no one wanted to invest time or brains in order to achieve that goal.
In 4 years of pentesting (and I'm talking about BIG players and companies with bright people, big budgets), I have only ONCE seen a company that actually took SERIOUS measures in order to improve its security. I'm not talking about adding another layer of firewalls or installing new toys, but actually redesigning their security infrastructure/thinking.
All the others wanted a signed paper which says "You are secure now".
I ended up pointing all of them to MJR's Ultimate Firewall [ranum.com]
Whose ideas are the dumb ones? (Score:5, Insightful)
The price of Default Deny is loss of flexibility. If it is easy to avoid denial (e.g. automatic addition to a whitelist), it's just Default Permit by another name. If it's really hard, it will keep you from doing everything except that which you already know you want to do--in other words, nothing new, nothing clever, just the same stuff over and over. This would turn computers into the equivalent of a stereo system: they do those narrowly-defined tasks that they were engineered to do, and nothing else.
People are going to occasionally want to do something new. When they do, there are certain things that they almost certainly *don't* want to do. Thus, you enumerate badness to help protect them when they want to use their computer as a flexible general-purpose device.
It's better to have systems that are secure by design. Duh. The point is, though, that even systems that are secure by design are likely to have flaws. If you look for flaws, and fix them, then you have a chance of staying ahead of other people who are looking for flaws to exploit them.
The coolness of hacking has nothing to do with security. Hacking is cool because it demonstrates our ability to manipulate our environment, to do things that are supposed to be impossible through ingenuity. In a factory of mindless corporate drones, hacking is not cool. But if you live in the real world where programs have flaws, there is even a security use for people who enjoy finding ways to use the flaws to accomplish things that the creators didn't intend.
Educating users is ridiculous--his point is that you shouldn't have to educate users because they should already be educated before you hire them. Okay, and how did *they* get educated? What happens if you have to hire real people who are talented but haven't all gone to this magical security training school? His point *should* have been that there are only some things that can be taught, and that you shouldn't assume you can teach completely counterintuitive behavior. But you might be able to teach someone enough to avoid clicking on strange attachments without deleting photos in
I don't want a secure, useless system. I want a secure, *useful* system. And that means compromises need to be made between security and usability. Reading this article gives very little clue as to how to construct a good balance.
Educating users... (Score:4, Insightful)
To make a point using the author's own analogy... while flying on an airplane, it's basically common knowledge that you don't want to walk up to the door and pull the big silver lever. Bad things happen if you do. However, if the plane has crashed and you need to get out, that's exactly the action you want to take. We don't have fire sensors that only enable the handles if the plane cabin exceeds a certain temperature... we rely on user education to make people only use this option at the right time.
Even the author's own solution, of scraping off all email attachments and saving them via url doesn't help. If someone sends out a virus, and it gets saved to a remote server, the user can still copy it to their system and run it. But if the user is educated about the kinds of thing that can happen when they do this, and about the dangers of running software from unknown or even partially untrusted sources...
The #1 dumbest idea in computer security? (Score:3, Informative)
It isn't. Sure, certain engineering and design principles can help security a great deal, but when it comes down to it, security is about the human brain. If you don't run the system intelligently, it doesn't matter how well designed it is, or how well the design is implemented. You will get p0wned.
I'd trust an all Windows 98 network without a firewall, run by someone who knows what they are doing, over an OpenBSD network locked down against everything run by my mom.
Why M. Ranum is an idiot (Score:3, Insightful)
Guess what. You've just pretty much gone back to the dark ages. Everyone has a set of programs installed on their computer by the priesthood, and that's all they can run. Might do something about viruses. Definitely reduces the utility of the machines.
#3: Hacking worthless
Holding your adversary's skills in contempt is generally not a good idea. Refusing to learn them is just plain stupid. And, of course, hacking (even the black-hat sort the politically correct prefer to call "cracking") isn't what he says it is. Learn a particular exploit? Any script kiddie can do that. Figuring out how to identify holes and develop exploits, that's another thing entirely, and as useful for a security professional as lock-bypassing is for Medeco.
#6: Sit on your duff and let the other guy take the lumps.
Sure, you CAN do that. But there's reward as well as risk in adopting the new stuff. And consider that if everyone took that strategy, progress would be entirely stifled. His IT exec who waited two years to put in wireless may have saved money -- but he also had two years without wireless, which may have cost him more.
Re:Why M. Ranum is an idiot (Score:4, Insightful)
Of course, then there's the developers who (should) know what they are up to, and will need to be able to install things without having to go through the IT department for every scripting tool they need to get their job done. So you put those guys on a separate network segment, firewalled off from the rest of the office workers - so if a developer manages to clobber the network, they don't clobber the entire company.
Weird mixture of stupid and trivial (Score:3, Insightful)
The second is just too overreaching: would you like a computer which can run 30 programs from a master list and nothing else? There are many cases where "enumerating goodness" is exactly the right thing to do, and - guess what - that's exactly how such cases are done: for example, sudo.
The rest of the article basically boils down to this: if you don't want your system to be hacked, don't make it hackable. Sure thing. Don't debug your programs, just write them correctly. Don't install airbags into cars, just avoid crashes. Stupid us, taking all these precautions and safety measures for years. Just don't make mistakes, see how easy it is?
Late but... (Score:3, Insightful)
This guy has a couple good 'no duh' points and several really stupid ones. Let me elaborate:
#1) Default Permit
This I agree with, in the case of firewalls in a corporate environment, where the input/output can be predetermined and controlled. Everything should be blocked except for the handful of things that need to get through.
#2) Enumerating Badness
This idea BLOWS for desktop applications, which is what he advocates. Why is it bad? Because while he only has 30 or so applications he uses, as most people do, those 30 are different for most users. You can't enumerate all legit software; it can't be done. You can enumerate most of it, but then you end up with a list comparable to the 70,000 virus signatures you were trying to leave behind. Besides, if I write my own application, my anti-virus software would need an accurate, detailed signature of what the application looks and acts like to be able to identify and allow it... something I cannot reasonably do. Which is why we have companies create the signatures, for the (comparably) finite number of viruses and trojans. Default Deny on a desktop, especially a personal one, is a broken, unmaintainable, BAD idea.
Even in a corporate environment, which has more home-grown apps, you would need custom signatures for each internal app to function. That is not practical for an IT department to create. The idea just doesn't hold on a PC.
#3) Penetrate and Patch
His argument: if you had designed it securely, you wouldn't need to pentest it.
Ok, but how do you know your implementation was complete to the design, or that your design didn't have a hole in it? Well, you have to test it... pentest it, that is.
Yes, it is a great idea to securely design your apps, with secure-by-design principles. Afterwards, you STILL need to test it in a live environment to ensure you didn't forget or miss any steps. That is only a logical step. Pentesting even the most secure of networks is critical, to be able to PROVE they are secure. You can't just say 'because I said it was!' and expect that to fly.
#5) Educating Users
He contradicts himself. He says that you shouldn't have to educate users because they should already be educated... Which is a chicken/egg problem he never admits to. You should do both: hire competent, smart people, AND train them in the policies and guidelines of their environment.
Other ways... (Score:4, Funny)
http://www.comics.com/comics/dilbert/archive/imag
Re:Here it comes... (Score:5, Funny)
Joke? (Score:4, Insightful)
If you install Windows, you are making a conscious decision to open yourself up to a plethora of attacks that simply aren't possible on any other platform. Maybe the benefits outweigh the risks, but don't pretend that the risk isn't there or that it's some outdated joke.
Re:Joke? (Score:5, Insightful)
Where does the Macintosh OS fit into your scheme of things? By all measurements it seems to have been built with user friendliness in mind; however, it's also generally regarded as being pretty secure by design.
Is it secure *only* because it's less popular than Windows? I.e., if it had Windows' marketshare, would it be regarded as insecure? Call me biased, but somehow I don't think it would.
User friendliness versus security is not necessarily a one-to-one tradeoff. It's possible to have something of both, although perhaps at the expense of some third quality (speed, or efficiency perhaps?).
Anyway, I'm not disagreeing with you outright as much as I'm just wondering where some other operating systems fit in on your continuum, if Windows is "user friendly" but insecure and *nix is "secure by design" but not user friendly.
Re:Joke? (Score:3, Informative)
Re:Joke? (Score:3, Insightful)
If so, then it's *not* secure by design, it's secure because of market share. The single biggest security problem with desktop Windows is having system administrator be the default user
Re:Joke? (Score:4, Interesting)
On a standard Mac OS X box (not sure about Server), the root user isn't even enabled by default. You need to go pretty deep into the preferences in order to enable it.
The first user you create during the install process is an "Administrator," which means you can 'sudo -s' on the commandline and become root temporarily, but only by re-authenticating. I'm not sure if that meets your criteria for 'root-like entity,' but it seems a pretty good compromise to me.
Anything you run through the GUI (and anything you run through the CLI unless you specifically sudo and become root) executes as a non-root user. So email attachments, etc., cannot execute as root unless the user takes the very unlikely steps of enabling the root user, and then logging in as it.
There were a few privilege escalation bugs in past versions of the OS which allowed an Administrator to become root without properly authenticating again after login, but they were in early versions and I haven't heard of any recently.
Re:Joke? (Score:5, Insightful)
Macs have 'administrator' accounts which are actually just members of a 'wheel'-like group for sudoing. There is a 'root' account on OSX, which you can't even log into by default. You can set a password for it by doing a 'sudo -s' and then 'passwd'. This account can't be logged into in the GUI, merely on the command line. The vast majority of users will not use this functionality.
Whenever a program needs root-like privileges to install software (which is rare, as Macs use app-folders) or to do system maintenance, the OS requires you to actually type in your password. This is the 'wheel'-like functionality.
This security model is more secure than having only 'root' and 'user' accounts, which is why many Linux distributions, like Ubuntu, now default to this exact behavior.
So in OSX anyway, there is NO user account with root or root-like privileges.
Re:Joke? (Score:4, Interesting)
You don't get a root login by default, but any user in the admin group has rw privileges in the Applications directory. If, for the sake of argument, you replace some common application such as the Safari web browser with a trojan substitute, it can then run with the privs of any user who starts it. If you replace an app which normally requests authentication to run as root, you can get full privileges by getting the user to enter their password exactly as they are expecting to do. Although the default user is not the Unix root, this hole means that there is little difference between the security of Windows and Mac.
There is an easy fix: create an account which has admin privileges, then remove these privileges from your normal account. This works almost as easily as the default installation. For a few operations (such as dragging an app into the Applications folder) you will be asked for the user name and password of an administrator, and for these you supply the details of the new admin account that you created. There really is no other down-side that I've come across in running MacOSX like this (unlike using a non-admin user in Windows).
Re:Joke? (Score:5, Insightful)
"Partially this is because Windows is the predominant desktop OS, but it is also because *nix is generally secure by design, whereas Windows is user friendly by design."
Why do I get the feeling that the basis for your belief here is simply because you have to type in a password before you can boot into your Linux system?
I think there's way too much complacency among Linux and Mac advocates. As far as I'm concerned, they are both Katrinas waiting to happen. Neither of these systems is very popular, but because of the rampant advocacy, fans of both systems come up with the fallacious assumption that just because Macs and Linux systems almost never get hit by viruses or other forms of attack, they must be more secure by design. No! No! No! And if I were a manager for a small to large business, I'd prepare for such attacks *before* they happen and ignore all of this fanboy buzz.
Re:Joke? (Score:3, Insightful)
Re:Joke? (Score:4, Insightful)
Enumerating badness... virus scanners and default permit firewalls... these damn things are the bane of Windows. Instead of blocking unknowns, or at least asking for permission, Windows and Windows apps tend to rely on blacklists to tell them what is unsafe. With thousands of apps being released daily, and probably thousands of hacks too, that is a pretty tall order. IMO greylisting unknowns while blacklisting known threats is a good solution: that way the user can't easily screw up and allow through known threats, and they're prompted before allowing possible threats through.
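(A sketch of what that greylist policy looks like in code -- the names and lists here are made up for illustration; a real implementation would key off file hashes and a signature database, not filenames:)

#include <stdio.h>
#include <string.h>

/* Greylisting: known-bad is blocked outright, known-good runs,
   and anything unknown is bumped to the user for an explicit decision. */
typedef enum { ALLOW, BLOCK, PROMPT } verdict;

static const char *blacklist[] = { "knownworm.exe", NULL };             /* enumerated badness */
static const char *whitelist[] = { "winword.exe", "excel.exe", NULL };  /* enumerated goodness */

static int on_list(const char *name, const char **list)
{
    for (; *list; list++)
        if (strcmp(name, *list) == 0)
            return 1;
    return 0;
}

static verdict classify(const char *name)
{
    if (on_list(name, blacklist))
        return BLOCK;   /* the user never gets a chance to say yes */
    if (on_list(name, whitelist))
        return ALLOW;
    return PROMPT;      /* the grey middle: ask before running */
}

int main(void)
{
    const char *tests[] = { "winword.exe", "knownworm.exe", "newgame.exe" };
    for (int i = 0; i < 3; i++)
        printf("%s -> %d\n", tests[i], classify(tests[i]));
    return 0;
}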
Penetrating and Patching is mostly only a problem in Windows because Microsoft and other companies release beta (or less) quality software as final releases and use paying customers to do the testing. Any program can have flaws and it is wise to test them and patch them. Sometimes those flaws are small errors in an otherwise good design and a patch will fix them. Other times those flaws are huge design errors that require whole features or even applications to be rewritten and then patching is useless. Either way it isn't a problem except when you've sold the broken useless crap to some unsuspecting consumer before doing the testing.
Hacking is cool. The guy is an idiot on this point. Knowing your enemy is a good lesson in security. So is knowing your own weaknesses. You learn those things by first copying your enemy and then by stepping ahead to guess what your enemy may do next. You're not a real engineer if you don't understand ways in which your creations can go wrong either by bad luck or by ill intent.
Educating users is a must. That doesn't mean you need to educate users on every single threat. It means that you don't dumb users down in the MacOS/Windows way and that you teach them basics of what is expected and unexpected behavior of their computer.
Inaction is cheaper than action, but action can be a better defense so long as you're willing to keep changing as you find out more. Microsoft often takes the route of inaction, which is cheaper. They wait to see what happens, again using customers as test subjects, and then buy or copy the strongest response. This has led them to bad designs in general, though. If they'd taken action they could have designed better software to begin with. They can afford to make early actions in defense of their customers, so there is no excuse for them not to. On the other hand, the customers may not have that kind of money, so for them inactivity can be a better idea... or would be if Microsoft was doing its job.
Overall, Microsoft has again and again proved itself asleep at the wheel when it comes to security (and most other things). Fortunately they are finally starting to take action, having reached the point where customers were looking at better options. Smoke and mirrors works for a while (sometimes a long while), but eventually people get tired of always being victims. This is the situation Microsoft has put itself into, and one that most other software vendors are close to. With the industry maturing and customers becoming more savvy, they'll finally have to start paying attention to these things. Five years ago customers thought I was weird for mentioning the security of the systems they were using. Now they ask about it. BIG DIFFERENCE.
A Fundamental Linux Security Flaw (Score:3, Insightful)
With Linux, installation is endless. How I so wish that I could just get one fricking package from online for KDevelop or any other tool I use and run one installation process.
Instead I have dependency hell.
KDevelop wants a package called Graphviz, something called Arts (the SUSE version isn't good enough), a new kind of source control system. I have to go to fifty different web sites tha
Re:A Fundamental Linux Security Flaw (Score:4, Interesting)
b) packages should have a list of certified sites for their dependencies. OR, there should be an https repository for ALL packages.
You appear to be using SuSE, yet you say you have to go hunting around for packages. This doesn't make sense.
If you use YaST to install packages, you can do so from one of the official mirrors. These contain all of the dependencies, so you don't need to go hunting. I've got the latest KDevelop, and everything it needed was installed automatically, so I'm wondering what on earth you did to have problems. The machine here has KDevelop 3.2.2 on KDE 3.4.2b, all installed via YaST with SuSE's own packages, and no googling for anything.
Furthermore, SuSE do appear to sign their packages. I'm not sure when this is checked though, so it may or may not be OK to rely on that. Using https for transfers won't really change anything; it would stop eavesdroppers, but I don't think anyone is interested in eavesdropping on transfers of publicly available packages.
Your point is otherwise valid, and installing random packages from random/untrusted locations is an accident waiting to happen. Major distributions, however, do take steps to ensure that their packages are safe. Any distribution which provides a package which is dependent on an external package (ie: not provided by that distribution) is providing you with a bug, and it should be reported as such.
-- Steve
Re:Here it comes... (Score:5, Insightful)
Re:Here it comes... (Score:3)
I don't disagree while the program is in development, but there comes a time when the code should be bug free. Humans make mistakes, but they also need to correct them. Sloppy code is not acceptable.
"skip the testing, it looks fine" (Score:5, Insightful)
Have you ever written code for idiots?
When I'm creating software I have to hide my work in progress from management. By that I mean, show them chunks only. I can never let them see something that looks like an operational product till it's been up and running and tested six ways from Sunday, because if they see a working prototype, they'll try to force me to roll it out as production immediately. Telling them it's "not done" doesn't work either - I've come in to work and found a demo project distributed as production. I mean, wtf? Some PHBs just don't get it at all. You tell them it's running against a test database and needs 3 more weeks of work, and bang, it's out the door. It's not on fire right now, so it must be done, right?
In those circumstances, I don't really give a sh*t if it fails and costs them money, except that the blame (and 3 am phone calls) falls to the team that wrote it.
You're 100% right, there is no excuse for buggy code, but there are tonnes of it out there being used in production that was never really finished. Sometimes it's got less to do with lazy developers than with managers who don't listen.
Re:Here it comes... (Score:5, Insightful)
Re:users are teh greatest security problem (Score:5, Funny)
# ./virus
* Entering phase 1
Removing /home/user/file1... done
Removing /home/user/file2... done
Removing /home/user/file3... done
Removing /home/user/file4... done
Removing /home/user/file5... done
Error: cannot remove /etc/passwd: permission denied
* Entering phase 2
Scanning ports for viral spreading:
No suitable ports available.
* Entering phase 3
Accessing sendmail...
Mailing...
Mailing...
Mailing...
Error: mail blocked: too many recipients. Wait ten minutes and try again.
In short, users aren't a major problem because they should only be able to hurt themselves. The problem is that they often can and do hurt others. This is the result of poor design.
Nope (Score:3, Informative)
And I quote:
"Dealing with things like attachments and phishing is another case of "Default Permit" - our favorite dumb idea. After all, if you're letting all of your users get attachments in their E-mail you're "Default Permit"ing anything that gets sent to them. A better idea m
Re:users are teh greatest security problem (Score:3, Insightful)
SELinux [redhat.com] is the solution.
Re:Dumber Article... (Score:3, Insightful)
One of the points basically comes down to "write perfect code". Well, duh, why didn't I think of that before? Jeez. Patching is bad because your code should have been perfect in the first place? That's the dumbest thing I ever heard.
His argument that an OS should ask you before running something is also stupid. How many users do you know who would actually read & understand such a question? Never mind actually giving a sensible answer. Let's say I just downloa
Scoop (Score:3, Funny)
Maybe he's a friend of ESR's or RMS's. Trying for his own elevation to 3 char alias fame...
Re:Dumber Article... (Score:3, Interesting)
If we limit the issue down to a corporate network, then refusing to run that infested screensaver because
Re:Dumber Article... (Score:5, Insightful)
Just how many applications do you propose to whitelist? I would say that an average Linux desktop would need to plan for at least 10000 entry checks for each application that would start up.
Now, as for bash scripts or Perl.....
Are you suggesting that users of the computer should be unable to write their own scripts to automate boring stuff?
IMO, a better way of doing things is to define a good security perimeter and attempt to balance security with usability on both sides. Then you can aggressively filter what comes through.
Trying to download that great screensaver from your web browser. Nope... Ain't gonna work. Trying to open that attachment? Not on the approved types. Sorry.
Note that this is pretty much doable today with current technology. Indeed, I don't see why one cannot arbitrarily decide that users cannot have the executable bit set in their home directories (it is a mount option you know). It certainly makes sense not to allow the suid bit set (another mount option). And this will get you 99% of the way there with only 1% of the management overhead and a lot less computing overhead.....
Re:Dumber Article... (Score:3, Interesting)
I'm talking about your average office person that uses Word, Excel, Powerpoint, maybe a couple of other applications. The people that can barely operate a computer beyond what their job entails. People that are the number one cause of the propagation of worms and viruses and spyware, because they click Yes on everything that pops up, because it is a computer, and computers are giant brains that know everything. Okay, I exaggerate, but you must get my point?
Ok, then substitute macros in office documents for P
One more thing (Score:5, Insightful)
Now in this case, with SE-Linux, you can even specify what files a given application can load. This can be used to limit scripting languages to known good scripts, or to prevent confidentail information from being sent via email.
The SE-Linux information is stored in the inode, so it is specified by the administrator at file creation time or inherits properties according to policies. This avoids the issues you see with trying to maintain a whitelist of hashes and apps.
The point is that the user cannot be given something like the pointless SSL certificate browser warnings that allow a user to click "I don't care, let me in anyway". Default Deny, not Default No.
And if someone in AR forgets to pay Thawte for your SSL cert and it expires on a critical server (say, the internal app for credit card processing), users will be locked out. Cute. I am a firm believer in manual override capabilities. That will never happen, you say? All I have to say is: domain name registration expiration for Hotmail...
Here is the problem. People think of security in a vacuum. Real security is a piece of a larger availability/security/usability problem. You have to tackle all three at once and ensure that one does not preclude the others within reasonable parameters.
Re:Dumber Article... (Score:5, Informative)
Re:Dumber Article... (Score:3, Insightful)
Re:Dumber Article... (Score:3, Interesting)
This one makes the Halting Problem look like a walk in the park.
Whitelisting should work fine, in situations where the user i
Re:Zone Alarm (Score:4, Interesting)
They won't read the box that comes up. They'll mindlessly click "Allow" even if the message said "This program would like to kill your wife and rape your dog. Would you like to allow it?"**
Whatever it takes so they can get on teh intarweb!!1
**Just like not reading EULAs. A while back a company (don't remember who) made a EULA that actually said you get money if you call them. Several THOUSAND people installed the program before one guy actually called.
Re:OK. (Score:3, Interesting)
Further, points 1 and 2 are essentially the same thing, just reworded.
Point 4 is somewhat mistitled. I do think learning the basics of how exploits work is important to creating sturdier code. Otherwise, you'd just write stuff that's vulnerable to buffer overflow constantly.
Point 5... Where do I begin? The problem is NOT self-correcting. I work for a university, and every year we get students asking us how some bank got their university e-mail address and "Should I respond to them?" For every o
Re:Either stupid or obvious (Score:4, Informative)
Think of it this way:
int isPrime(long primeSuspect)
{
    /* "Enumerating goodness": only the primes we happened to list pass. */
    if (primeSuspect == 2 || primeSuspect == 3 || primeSuspect == 5)
        return 1;
    return 0;
}
How would you patch it? Test it for every prime and then add them to the check list? Or would you realise that the design is crap and change the design?
He wants you to change the design, rather than just fix the apparent flaw that 7 returned false.
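(For contrast, here's what "changing the design" means for that toy example -- actually test for primality with simple trial division instead of enumerating the primes you happen to know about:)

int isPrime(long primeSuspect)
{
    if (primeSuspect < 2)
        return 0;
    for (long d = 2; d * d <= primeSuspect; d++)
        if (primeSuspect % d == 0)
            return 0;   /* found a divisor: composite */
    return 1;           /* no divisor up to sqrt(n): prime */
}

Now 7 -- and every other prime -- returns true without a hard-coded list: the whole class of flaw is gone, not just the one reported symptom.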
Re:Either stupid or obvious (Score:3, Interesting)
Actually, I take that back. It's an accurate representation of the article. Which was bad.
The example implies that the only application of 'penetrate and patch' is for idiots to check a design that's so obviously flawed you could simply correct it by thinking about it. And it assumes that if that flaw emerged, the developer would be sufficiently dumb to just fix the flaw as related to the specific test data and not anything else related, like, say, the underly
Re:Either stupid or obvious (Score:5, Insightful)
1) Default deny instead of default allow. Actually, default deny is just as stupid as default allow, as if you have default deny, people just get sick of being asked if they want to allow something, and end up clicking "yes" on every box they see.
Why on earth would you allow your users to select what is acceptable? I believe his proposition was stating that you as the systems admin should set what people can use, and block everything else, otherwise if users could specify what was allowed, then you're back to square one like you say.
2) Enumerating Badness So you want to write a virus scanner that somehow can recognise viruses without being told which programs are viruses. Modern virus checkers already mostly do this. With spyware it's very hard for a computer to tell the difference between a program you wanted installing and one you didn't. How do you expect it to tell?
Simple, you have a fixed set of programs that are allowed to run, and you don't allow users to install additional programs. Anything not designated as allowed to run therefore gets stopped in its tracks before harm can be done.
3) Penetrate and Patch So you are saying we should write code without bugs and holes? What a great idea that is? why did no-one think of saying that before?
Actually I think his point is that code is being written insecurely when it really could be written securely. Look at how things are now: the buffer overflow is a security flaw that has been known about for quite some time, and there are very easy ways to protect against it, yet buffer overflow exploits are still quite common. The point is we shouldn't just be trying to understand the flaw and patch it; we should try to understand how the flaw ever came to exist, and fix that!
4) Hacking is cool You think people should learn how to stop hacking and intrusion without learning how existing hacks work? Then you are stupid. Shush.
As I explained in an above post, his point is that time could be better spent learning about the root cause of the security exploits (things like buffer overflows) and how to prevent them, rather than spending the rest of your life trying to guard against the countless flaws that the various programs you'll run may have.
5) Educating Users So you are saying that we have to do security without teaching users how to do it. That just isn't going to work unless you never let users install their own applications or plug-ins. Yes teaching users is hard, but it has to be a vital part.
His point here was that users shouldn't even be able to cause harm in the first place, and if they can, then no amount of education is likely to prevent them from inadvertently harming others. That said, I do believe users should be educated, but I agree with his point as well.
6) Action is better than Inaction So, after saying the state we are in is rubbish, you now say we shouldn't actually change anything. Eh? Or are you saying "don't try something new without testing it first"? Well that's more than a little obvious.
It should be obvious, but how many companies got burned because they switched to very insecure wireless networks early on?
All up, the points he raises are interesting, if idealistic at times. Next time you should try reading better.
The Microsoft Way (Score:5, Insightful)
When users are annoyed by questions they don't understand, support costs go up. Windows users really can't answer questions about whether to allow various TCP connections. Since only programs we approve can be installed on the "users" machine, there is no point in default deny.
Just like currency security doesn't try to identify all the different kinds of forgery, so the idea of "trusted" computing is that all programs are bad except the ones signed directly or indirectly by Microsoft.
To be effective, "trusted" computing must be airtight against workarounds by end users. That is why hardware enforcement is an integral part of the picture. The XBox project has been very effective in eliminating holes in the "trusted" computing hardware, thanks to the many volunteer hackers attacking it.
Currency security experts don't spend time on basement printing presses. They spend time on creating currency features that are expensive to reproduce on a small scale. End-user freedom is not an issue in the "trusted" computing paradigm. We simply want an airtight system that allows *only* Microsoft approved programs to execute, and a hardware enforced way to retroactively delete content when Microsoft makes a "mistake".
We want to ensure that defeating the hardware interlock on our machines requires resources way beyond what an individual or small company can muster. It doesn't matter if organized crime or Chinese corporations have the resources. Their exploits give us justification to tighten the screws on our captive users.
One of the main real selling points of our software is that we aim it at users who don't know or care about computing. They just want to use some applications. If our users had any desire or aptitude to learn about security, they would have defected to that "competitor" that shall not be named. Once we succeed in legally banning un-"trusted" hardware, any talk of user "education" will be banished to dark alleyways.
You say, "never let users install their own applications or plug-ins". Darn tootin. The whole point of "trusted" computing is to prevent users from installing their own applications or plug-ins. That is 99% of the security problem with Windows. If a user doesn't know whether to allow a TCP connection, they certainly have no idea whether some no-name (i.e. non-Microsoft) program is safe to install.
We have 100s of millions of machines running our software in the field. We have a nearly complete monopoly on desktop software. Knee-jerk actions are simply out of the question. The damage done by an insufficiently tested patch is far worse than the damage done by the nastiest malware - because our users will blame it on *us*. (The rebels blame the malware on us, but that is irrelevant.)
Neither stupid nor obvious (Score:5, Insightful)
Default deny makes more sense when you think of it at the organizational level -- like a firewall. Both default deny and allow mean that you have to respond to new needs ... but default allow means you have to respond to new attacks (by blocking them) whereas default deny means you have to respond to new user needs (by allowing them). I've operated both sorts of firewalls -- and when you are in good communication with your user base, default deny is both more reliable and MUCH LESS WORK.
Ah ... you didn't read the article, did you? Every program that's running on your system that you didn't authorize to be there, is a problem. It doesn't matter if it's a "virus" or not, or if it's on Symantec's bad-guy list yet. Consider the following dialogue I had with a Windows technician:
Me: Windows host foo.example.org is cracked. It's portscanning out and trying to break into things. I've blocked it off the network.
Tech: I just ran an anti-virus scan on foo, and it didn't find anything. The user wants to get back to work; please put it back on the network.
Me: I didn't say it had a virus; I said it was scanning out and trying to break into things. It's still trying to scan out. I'm not going to put it back on the network.
Tech: Antivirus software says clean!
Me: snort says scanning out!
Tech: Antivirus software says clean!
Me: tcpdump says scanning out! Go get Clueful Tech to look at it.
Clueful Tech: Oh yeah, it's got all these processes called "fuck.exe" running. It's hosed. I'm reinstalling it.
Me: Thank you, Clueful Tech.
If you need antivirus software, your problem is not viruses -- it is that you don't have any control over what programs are getting to run on your computer. Get that control, and you don't need antivirus software.
Anyone who tells you that all software has bugs is being honest. Anyone who tells you that all software is equally buggy is trying to sell you Microsoft IIS. We can go a long way towards "code without bugs" just by observing the history of software and going with those options which have proven to need much less patching in the past.
We can also -- and more importantly, I think! -- favor software that is architected in such a way as to minimize security exposure. That means privilege separation and least privilege. Running your Web server as root is a brain-dead idea. It means not using more complicated software than you need -- if boa or publicfile serves your needs, don't use Apache.
It's interesting, but it isn't essential to the job. What you need to know is that attacks work by exploiting mistakes in the design and implementation of programs. What you need to know about buffer overflows, for instance, isn't how to exploit one for fun and profit -- but rather, that any C program that uses gets() is broken ... and that programs written in higher-level languages that have checked strings can't suffer from them.
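(To make that concrete, a toy example -- mine, not the parent's:)

#include <stdio.h>

int main(void)
{
    char buf[64];

    /* gets(buf);  -- broken by design: gets() doesn't know how big buf is,
       so any input line longer than 63 characters writes past the end of
       the buffer (the classic stack-smashing overflow). */

    /* fgets() takes the buffer size, so it physically can't overflow: */
    if (fgets(buf, sizeof buf, stdin))
        printf("read: %s", buf);
    return 0;
}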
There is a place that I've found that "hacking knowledge" is useful -- in demonstrating incontrovertibly that a problem exists. Joe Moron has a Windows-based embedded print server that's vulnerable
Re:Um wtf (Score:5, Insightful)
Re:sigh (Score:3, Insightful)
You know, I've heard this hogwash from the MS camp many times, so let's just examine it for a minute, shall we?
Usually when someone drags out this tired old argument, they are referring to the number of *desktop* machines. I'll grant that the vast majority of desktops run Windows. Anyone would be a fool to argue o