Schneier: Security Awareness Training 'a Waste of Time'
An anonymous reader writes "Security guru Bruce Schneier contends that money spent on user awareness training could be better spent and that the real failings lie in security design. 'The whole concept of security awareness training demonstrates how the computer industry has failed. We should be designing systems that won't let users choose lousy passwords and don't care what links a user clicks on,' Schneier writes in a blog post on Dark Reading. He says organizations should invest in security training for developers. He goes on, '... computer security is an abstract benefit that gets in the way of enjoying the Internet. Good practices might protect me from a theoretical attack at some time in the future, but they’re a bother right now, and I have more fun things to think about. This is the same trick Facebook uses to get people to give away their privacy. No one reads through new privacy policies; it's much easier to just click "OK" and start chatting with your friends. In short: Security is never salient.'"
Obligatory car analogy (Score:5, Insightful)
Re:Obligatory car analogy (Score:5, Insightful)
Re:Obligatory car analogy (Score:5, Interesting)
No. TFS is a terrible representation of TFA.
This is a more fitting excerpt:
The whole concept of security awareness training demonstrates how the computer industry has failed. We should be designing systems that won't let users choose lousy passwords and don't care what links a user clicks on. We should be designing systems that conform to their folk beliefs of security, rather than forcing them to learn new ones.
Even though TFA is pretty crappy itself with its myriad of bad analogies, the idea of trying to craft effective simplified 'folksy' models makes sense. My favourite metaphor for internet security is to regard the internet as a square in a foreign city center. It gets the message of what to trust and what not to trust across a lot better than trying to explain JavaScript, cross-site scripting, or what an executable is.
In addition to this approach to raising security awareness, a case is (sort of) made for designing systems to support users in security-related decisions in a way consistent with the above. I'd say that a green-colored address bar in a browser is an example of how to do it the right way, and the blanket statement 'this file may harm your computer' an example of how to do it the wrong way.
Re: (Score:2)
And Schneier isn't asking for companies to stop teaching their drivers to drive safely before the seatbelts, airbags, and automatic cars are ready - he's just outlining that as the better goal than only relying on safe driving.
Re:Obligatory car analogy (Score:5, Funny)
What's so fun about driving? That's like saying a Roomba takes the fun out of sweeping the floor.
Re: (Score:2)
Some androids enjoy ironing.
Re: (Score:2)
Re: (Score:2)
Driving can be fun. When you get away from it all, and there's an open road, through nice scenery, with lots of nice swooping corners.
However, most of the time, for most people, it's being stuck in heavy traffic, on a dull road. The same road, every day. And few people find that fun.
Re: (Score:2)
Actually it's closer to a taxi. Which most people have cause to use from time to time. They're particularly useful in cities.
Re: (Score:2)
Actually it's closer to a taxi. Which most people have cause to use from time to time. They're particularly useful in cities.
Yes, good point - but extending the benefits of taxis beyond the cities. Every wealthy man has a driver, or hires one on demand, but the non-wealthy people don't get that.
Heck, when I have a self-driving car, I'll see my extended family more often. They're "only" 8 hours away, but that's two full days of driving with a stack of children in the back seat. When I can tuck the kids in
Re:Obligatory car analogy (Score:4, Insightful)
Driving a car is a far more focused task, with more salient dangers. Even without safety training, people understand that driving erratically or at high speed can be dangerous. Using a computer or the internet is more like watching TV or reading an article and determining whether what you're watching is fact or fiction; it requires judgement and motivation.
Given that many adults don't have these skills, and, importantly, that the effects of failure extend beyond the individual involved, what Schneier is proposing makes sense.
Re: (Score:2)
Here's a better car analogy. You're driving down the street on four bald tires, and a guy driving a truck for a tire shop happens to pull up next to you at a red light. The guy remarks on your crap tires, and now you have two choices. You can listen to him because he probably knows what he's talking about when he tells you you're running a serious risk of dying on the highway when one of those tires fails catastrophically, or irrationally ignore him because you perceive that he's just trying to sell you som
Re: (Score:2)
Re: (Score:2)
What use is it to build a closed environment with restricted access and two-factor authentication, if some CxO gives his RSA token and password to his unvetted summer intern to do some trivial task without supervision?
Is security awareness training the end all of IT security? Of course not. But frankly, it is a tri
Re: (Score:2)
In many ways computing today is like not having seatbelts.
Passwords are just not good for security anymore. Most hacks go around them, or just take someone else's password list and give it a shot, or the person simply keeps the password in plain view. Passwords are more like the anti-lock mechanism on your brakes than like seat belts: they will prevent some of the minor problems but won't protect you in a major one, and sometimes they cause problems where none needed to happen.
The real issue there is little
Re: (Score:2)
It demonstrates that the car industry has failed. We should be designing systems that don't need seatbelts and don't care if the user decides to slam into a tree at 100 km/h.
Considering the close to 2 million deaths per year due to automobiles, you could say the car industry has failed by relying too much on individual training and responsibility. Not that available technology gave them much of a choice. The solution, of course, is to completely take humans out of the equation, which Google is working on.
Re:Obligatory car analogy (Score:4, Insightful)
It demonstrates that the car industry has failed.
I would say that the car industry had failed if listening to the wrong radio station - tuning 92.3 instead of 92.5, say - allowed a malicious broadcaster to arbitrarily incinerate the contents of my trunk or assume remote control of my vehicle.
Re: (Score:3)
Re: (Score:2)
Automatic breaking
Yeah, that does describe a lot of IT deployments from a security perspective.
Re:Obligatory car analogy (Score:5, Insightful)
To stay closer to the original analogy...
Would you drive a car randomly left by the side of the road with big stickers on it saying "You may be eligible to win $1mln if you drive this car!!!" (paid for by Soylent Green Corp.)?
Re: (Score:3)
Only if the blinkers were on.
Re: (Score:2)
free car!!!
Re: (Score:3)
Re: (Score:3, Insightful)
Re: (Score:2)
The training itself may be inexpensive, but the lost time for "all" employees forced to take the course/class/lecture is not. Not to mention the burden on your staff in tracking attendance, compliance, etc.
And, compared to the cost of cleaning up an incident, it's STILL infinitesimally small.
Re:Obligatory car analogy (Score:5, Insightful)
And, compared to the cost of cleaning up an incident, it's STILL infinitesimally small.
All it takes is one single employee to ignore or disregard the training, and you'll still be paying that cleanup cost. That's Bruce Schneier's point: it's a structural problem, not one that can be fixed by placing more burdens on end users.
Re:Obligatory car analogy (Score:5, Insightful)
Bruce is right. In many environments, information awareness training is an attempt to solve the problem at entirely the wrong end of the failure chain, and is frequently ineffective. It may be difficult for some to hear, but the fact is that such training simply doesn't have a great track record of producing significant overall gains in organizational security, largely owing to the difficulty of mitigating widespread stupidity on the part of human operators. Most companies are not wholly staffed by information security experts, and any perceived near-term security gains following training sessions quickly erode as employees revert to an attitude of "I just want to do X, Y, and Z, and I'm too busy to keep thinking about those scary stories portrayed in last week's training."
Even military environments suffer from these training challenges. The difference in a military unit is the very real possibility of going to prison for merely mishandling cryptographic material by accident. On the "low" end of the punishment scale, there are more than a few senior enlisted military comms folks out of a job because of such process failures. I served with one such person.
It's worth noting in closing that you might want to spend a bit of time looking into who Bruce Schneier [wikipedia.org] is before framing him in any additional snarky quote marks. To say this is a man who typically knows what he's talking about is an understatement.
Re: (Score:3)
anyone who knows anything about computers and has ever been forced to take the DOD IA Awareness Training online course thing (it's Shockwave Flash!! *shudder*) knows just what a joke and waste of time it is.
Re:Obligatory car analogy (Score:5, Insightful)
Lazy bums aside, employees are most concerned with getting their work done. Security is usually one of the things that gets in the way of this. I'm often appalled by the way quite a few companies handle security (I tend to see more than my fair share, being a security consultant); it often seems they have some CISO who needs to build a monument to himself, making sure EVERYONE knows how much he works by the sheer number of hoops they have to jump through. That's how you get amazingly stupid password setups like "at least 12 characters, no 3 characters of the alphabet in consecutive order, at least 2 numbers not at start or end and not next to each other, with at least 2 special characters ...yaddayadda".
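For illustration, here's a minimal Python sketch of roughly what such an over-engineered policy check amounts to. The rules are paraphrased from the quote above, so treat the details as hypothetical:

import re

def passes_policy(pw: str) -> bool:
    """Hypothetical CISO-monument policy, paraphrased from the quote above."""
    if len(pw) < 12:
        return False
    # no 3 characters of the alphabet in consecutive order (e.g. "abc")
    lowered = pw.lower()
    for i in range(len(lowered) - 2):
        a, b, c = lowered[i:i + 3]
        if (a.isalpha() and b.isalpha() and c.isalpha()
                and ord(b) == ord(a) + 1 and ord(c) == ord(b) + 1):
            return False
    # at least 2 digits, not at the start or end, not next to each other
    digits = [i for i, ch in enumerate(pw) if ch.isdigit()]
    if len(digits) < 2 or 0 in digits or len(pw) - 1 in digits:
        return False
    if any(q - p == 1 for p, q in zip(digits, digits[1:])):
        return False
    # at least 2 special characters
    return len(re.findall(r"[^A-Za-z0-9]", pw)) >= 2

# The predictable user response: satisfy every rule, stay trivially weak.
print(passes_policy("Password#1#2x"))  # True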
If you see anything like this, start flipping keyboards and count the ones that contain post-it notes with the passwords du jour (because of course they need to change every other nanosecond, too).
This has nothing to do with security, people; this is what I dubbed "Monkey Island Security". You remember Monkey Island? Where Guybrush gets jailed by those cannibals and they put up more and more elaborate doors every time you escape through the wall? That's what some do in IT security: more and more elaborate and time-consuming hoops for our employees to jump through, while those who want to bypass security can easily ignore them, because the problem lies elsewhere.
NO, and I mean ZERO, security breaches that I have been aware of in the last two decades can be traced to password guessing. It is amazing, though, how more and more breaches that get blamed on personnel blunders can eventually be traced to people trying to cope (yes, cope) with security. Post-its containing passwords. Security measures unhinged or bypassed by employees because they actually kept them from doing their work. And so on, so forth.
Security does NOT mean annoying your employees. Perfect security would actually be nearly invisible to your employees. Because that would also include them not being part of the security system, hence, not being able to fuck it up!
Re:Obligatory car analogy (Score:5, Insightful)
This isn't merely a problem of specialization limiting perception. You're expecting average users to consistently conduct themselves in a manner they're demonstrably incapable of, at least the majority of them. Terminating the employment of those who fall victim to attacks through their own inaction or outright carelessness isn't a long term solution either, as it merely results in churn and a significantly higher bar in terms of what sort of person may be employed at a company. Money is limited, and organizations have to make decisions on the most effective ways to spend that capital with an aim to improving overall organizational security. That money is best spent on incrementally improved and frequently reevaluated security infrastructure and processes that inhibit improper access or information disclosure without overt reliance on human operators to make correct choices in terms of security posture, because those operators will often fail.
I've spent years dealing with problems in this area, and I strongly dislike the reality of the situation. Unfortunately, my disliking it doesn't make it less true.
Re: (Score:3)
When I was responsible for application development security a while ago, one of the first things I made sure of was that our users knew that I would not be pissed at them but HAPPY if they managed to fuck the system up somehow. Because they should not be able to: them being able to fuck it up accidentally meant that someone could fuck it up deliberately at least as easily.
Never blame the user for fucking something up. It is not his job to use your tool correctly, it is your job to create a tool he cannot use
Comment removed (Score:5, Interesting)
Re: (Score:2)
It would be like trying to teach me how to rebuild cars. I don't like cars, never cared about what model I drove; I just don't give a damn as long as it gets me from A to B, and THAT is how many of your employees see the PC. They don't want to know about the thing, couldn't care less what it's doing as long as they can get their work done and punch out. They have not the slightest interest in PCs, and if you don't have any desire to really learn? Not gonna stick.
Back to the car analogy - new/inexperienced drivers are given a restricted license that prohibits them from going on dangerous roads or driving alone. Such drivers are highly prone to make common mistakes, including attempting to drive with the parking brake on. They also do silly things, like alternating between moving and braking at a flashing green light (and the proper procedure is shown in any official driving manual),
They only receive permission to drive by themselves after they've proven the
Re:Obligatory car analogy (Score:4, Insightful)
Training only goes so far. Even the best-trained user will make a mistake.
"Oh, I didn't mean to click that".
Re: (Score:3)
Training only goes so far. Even the best-trained user will make a mistake.
"Oh, I didn't mean to click that".
It will happen - but it doesn't have to. There are three factors at play here: the training, the setup, and the users themselves. The right kind of user doesn't need training as such, just some basics on a piece of paper (or the electronic equivalent).
I will use myself as an example.
I have been in IT since before the first virus or worm. I have been exchanging emails for several decades. I've pirated PC-games and downloaded cracks and keygens. I've used (among others) Windows since version 3.11 daily. I websurf
Re: (Score:2)
http://www.schneierfacts.com/facts/371/ [schneierfacts.com]
Re:Obligatory car analogy (Score:5, Insightful)
Moreover, you really couldn't even be bothered to do a simple Google search before you shot your mouth off, could you?
In a way, you're actually making Schneier's point. Posting a snarky Slashdot comment is easy and instantly gratifying; doing the least bit of research is a little bit harder and doesn't pay off immediately -- so you can see which happens more often.
Well, duh.. (Score:2, Insightful)
Users can screw up because they are just as human as you. So live with it. Design around it. Make it safe regardless.
I've only been saying that since, mwah, 1999 or so.
Policies are OK, but rules that assume perfect compliance in order to work are really only there to cloak the failure to engineer some fault tolerance into system architecture and UI design. Glad someone finally caught on...
Re: (Score:2)
Sure, humans can screw up, but can't the people doing the engineering make mistakes as well?
Most software designers don't leave security holes in their software by design, one would hope.
Re:Well, duh.. (Score:5, Insightful)
Re: (Score:2)
Obligatory quote (Score:5, Insightful)
A common mistake that people make when trying to design something completely foolproof was to underestimate the ingenuity of complete fools.
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
Yes, but many corporate networks *still* require a user to enter a password containing alpha, numeric, and special characters, and have the passwords expire after 2-3 months. Eventually, the users get beaten down by the boss or IT for writing it on a post-it stuck on their monitor. IT has therefore successfully trained most users to write down passwords in a notebook or a desk calendar. Indeed! The users have grokked the corporate mantra of information security: Security through Obscurity.
Invalid comparison (Score:5, Insightful)
He's comparing security with health and driving to 'prove his point'. Security is not the same as health or driving. So, any conclusion from making a comparison is a false one.
Second, you don't have to choose between completely ignoring security awareness training and spending lots and lots of money and time on it. There is a very good choice somewhere in between. I agree with him that information systems have to be secure and shouldn't offer dangerous actions, but no matter how secure you make your information system, it will all fail if the user has no clue about what he or she is doing. And giving employees a basic level of security awareness doesn't have to cost a lot of money but will still help you prevent a lot of trouble.
Re: (Score:3)
Also in computer security there's a lot of false-flag type attacks going on: in the modern day, something tends not to look obviously unsafe but winds up being so (browsing the web "safely" shouldn't even be a problem; when you get down to it, the browser should be keeping things thoroughly "on the web only").
I totally agree with Bruce here (Score:3)
I totally agree with Bruce here
We should be designing systems that won't let users choose lousy passwords
It reduces the search space I have to look at in order to brute force things, and that's a good thing...
Re:I totally agree with Bruce here (Score:4, Insightful)
Sorry, but your approach is inefficient. Since the system now requires users to choose passwords that aren't memorable (and probably to change them regularly as well) a large number of them will have them written down on post-it notes stuck to their monitors. That reduces the search space even more. :D
Re:I totally agree with Bruce here (Score:5, Insightful)
Damn my lack of mod points today. +1
Force users to choose complex passwords and they write them down, or they learn what the minimum requirement is and create something stupidly simple anyway. Or they constantly forget their complex passwords and bug the admins for a reset every 5 minutes. The final variant is that they use the same complex password for all systems. So it's fairly secure against brute force or random guessing, but once a hacker has one password, he has them all... one password to rule them all, etc.
I've used systems with ridiculous requirements where I've not been able to remember 1 hour later what the hell I used. Something like requiring at least one capital, one number, one punctuation mark, no more than 2 consecutive characters, and no less than 12 characters. I ended up with something like this: Aabbaabbaabb1!
Re: (Score:2)
Re:I totally agree with Bruce here (Score:4, Insightful)
Re: (Score:2)
That would be the 99.99% of us whose offices are never cleaned, have no windows, and have rigorous security preventing anyone who isn't cleared from entering the building?
Re: (Score:2)
Consider the threat model. Written down passwords are better for many people. Even BS says so. So it must be true.
Re: (Score:3)
This [xkcd.com] is typically sufficient.
Create a random, nonsensical but memorable phrase composed of *gasp* dictionary words, pepper in a couple of punctuation marks if you like (is a space considered punctuation for password verification? IDK), and vary the capitalization if that's one of the requirements. e.g., "correct, horse...battery staPLE!" I'm sure as you read that, the associated verbal emphasis resounded in your cranium, making it easy to remember where the punctuation and capitals
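For the skeptical, a minimal Python sketch of the xkcd-936 approach; the word-list filename and its size are placeholders, and any list of common words will do:

import math
import secrets

# Hypothetical word list: one common word per line.
with open("wordlist.txt") as f:
    words = [w.strip() for w in f if w.strip()]

# Four words drawn with a cryptographic RNG, as in xkcd 936.
passphrase = " ".join(secrets.choice(words) for _ in range(4))
print(passphrase)

# Entropy assuming the attacker knows the scheme: 4 draws from len(words).
print(f"~{4 * math.log2(len(words)):.0f} bits from a {len(words)}-word list")

A 2048-word list gives about 44 bits even when the attacker knows exactly how the passphrase was built, which compares favourably with rule-gaming passwords like the "Aabbaabbaabb1!" example above.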
Tick the box exercise for auditors (Score:5, Insightful)
Security awareness training is a tick-the-box exercise most companies do to get auditors off their backs.
Apparently, users are supposed to be "trained to recognise phishing emails and other Internet frauds". IT has enough trouble these days trying to recognise them, and somehow our ordinary users are supposed to recognise them too?
Users have to be "trained to pick good passwords". The system should be designed to prevent users from picking bad passwords in the first place.
Users should be advised to "pick strong passwords and change them regularly". Two contradictory requirements: no one can remember a new complex password that changes regularly unless they write it down. Oh, users should be told "not to write down passwords".
Awareness training is pushed because there are a number of so-called "security consultants" who have no real technical skills, yet have made a living pushing this snakeoil. They unfortunately are also good self-promoters and have the ear of regulators and auditors.
If you are relying on security awareness to protect your infrastructure, you're screwed. Most users don't care, and even those who do care cannot possibly be expected to remain aware of the myriad of threats that exist. Often, their attempts to remain secure achieve the opposite purpose ("I heard you tell me email was insecure, so I use dropbox now to transmit files to customers").
What galls me most is that I have to spend part of my IT budget this year on this stupid notion because it is expected by auditors. This means I have to cut back on the security projects that make a real difference.
Re: (Score:2)
Re:Tick the box exercise for auditors (Score:5, Insightful)
Yes, I do. The problem is that passwords are fundamentally broken. They are broken in several ways.
1) The password must be hard to guess. This, generally, makes it hard to remember.
2) Many implementations restrict the number of characters that I can use for a password. This is downright stupid, as it prevents xkcd/936 compliance.
3) Every service which uses a password must have a different password to prevent password reuse attacks. This exacerbates 1).
4) I need a way to recover the password if I lose it. This exposes a secondary attack vector on my password.
5) There needs to be a guarantee that the password will never be transmitted or stored unencrypted.
OAuth fixes 3) and mitigates 5) and 2).
Two-factor authentication fixes 1): guessing my password can be allowed to be easy, provided that attacks on my service provider are slow and that I can report my token lost/stolen in a time several orders of magnitude shorter than the time required to search the whole solution space.
Biometrics can be used to mitigate 1) and 4), but they expose additional flaws, such as lack of revocation. If someone ever gets your fingerprint, they have access to all your fingerprint secured data/possessions, unless they are additionally secured by something else.
Using most OAuth vendors, however, exposes an additional security hole: tracking by the OAuth vendor (see Google, Facebook privacy concerns).
Ultimately, it seems to me that the solution is probably private OAuth vendors with support for smartphone-based secure keys. The problem is getting service providers, such as banks, to implement OAuth via a username + domain (OAuth vendor) + token approach.
This should allow users to choose their OAuth vendor, thereby allowing flexibility in the market when a particular OAuth vendor does Bad Things with users' data. This makes the required password complexity minimal. If the engine which processes the token and password were rolled into a secure smartphone application and transmitted to the OAuth vendor via a back-channel, it would also prevent password scraping.
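As a concrete anchor for the smartphone-based secure keys mentioned above, here is a minimal sketch of TOTP (RFC 6238), the scheme most authenticator apps use. The base32 secret below is a placeholder; real deployments provision one per user/device:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # RFC 6238: HMAC-SHA1 over the current 30-second time counter.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; six digits every 30s

Because the code changes every 30 seconds, a phished or guessed password alone is useless, which is the sense in which the second factor "fixes 1)" above.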
Re: (Score:2)
Security engineering and awareness training aren't mutually exclusive: what's needed is a pragmatic balance between the two. Never try to use technology to solve people problems.
For instance, fraud detection is something people always will have an edge in, thanks to several millennia of social evolutionary pressures. But they won't be infallible, and will be more efficient if technology can filter out the worst distractions. Neither is complete without the other. The question is where we get the most bang
Re: (Score:2)
I'm interested to know how you design a system that works around the user being the weakest link in security. Every system that has been envisaged has been designed to authorise the user. The attacks on security aren't attacks on the system itself but rather attacks on the common sense of the user not to let others in.
The only way around this system is to prevent the user from being able to log someone else in, and the typical way that happens is at the incredible inconvenience to the user, i.e. tying his login to t
Re: (Score:2)
Oh, users should be told "not to write down passwords".
I disagree, they should pick a strong password, write it down, and keep it somewhere secure, like their wallet.
Re: (Score:2)
However, this means you should not be requiring them to change the password without good cause. Weekly/monthly/quarterly schedules are not a good enough reason to force a password reset.
We give our users the instructions to put the password on a folded s
Re: (Score:3)
Apparently, users are supposed to be "trained to recognise phishing emails and other Internet frauds". IT has enough trouble these days trying to recognise them, and somehow our ordinary users are supposed to recognise them too?
That's because your users should have the one thing that the best malware filter/firewall/virus scanner hasn't: Common sense!
Re: (Score:2)
Potentially unwanted programs (Score:2)
we should be investing in technologies that detect password misuse or the unauthorised install of software on a user's PC.
Unauthorized by whom? There are plenty of tools for web development, remote assistance, and accessibility that show up as "potentially unwanted programs" in certain spyware checkers. A web server could have been installed by a web developer testing his own web application, or it could have been installed by an intruder to serve up kid porn.
Re: (Score:2)
If your input field doesn't accept ); anymore, the probability of a user launching an SQL injection attack, intentionally or involuntarily, sinks drastically.
If you replace the door to the secure vault with a man trap, inadvertently leaving the vault door open can't happen anymore, and neither can tailgating.
You rightly identified the user as the weakest link, but your solution is disputed by Bruce Schneier.
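To make the ); point concrete: the robust fix is not rejecting characters but parameterized queries, which keep user input as data rather than SQL syntax. A minimal sketch using Python's sqlite3; the table and input are invented for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# A classic injection payload, passed through a placeholder.
user_input = "alice'); DROP TABLE users; --"
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload is just an oddly named user matching nothing

The user can type ); all day long and neither trigger nor fake an attack, which is exactly the "design it out" approach Schneier argues for.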
Re: (Score:2)
I'm also not convinced that a man trap is a secure alternative to a door as it's easy enough to fashion a hook on a pole to remove items without setting off the man trap. I'd rather have a reasonably
Not news (Score:4, Informative)
Nice to hear it from someone with a big name. I'm an IT security specialist, giving talks every now and then, and I've basically been saying the same for years now. It is one of the topics where I face the fiercest opposition, usually from (big surprise) consultants and other people who offer security awareness training.
I've been doing this for so long that I can sum it up in one sentence by now: if security awareness training worked, don't you think we would be seeing SOME effect after doing it for 20 years?
Of course, I am exaggerating a bit to make the point. I do think that training to make users familiar with specific security protocols is useful. I don't think general security awareness is. There is a plethora of reasons why it's a failure, from the context-specific nature of the human mind to the abstract level, but the main reason is that we have enough experience to show that it really is a waste of time and resources. Putting the same amount of money and effort into almost any other security program is going to give you a better ROI.
Re: (Score:2)
While I agree that the system should do what it can to prevent intrusions and bad passwords, there are some things that users are just going to have to know not to do, such as writing their passwords on a sticky note or replying to some random email with their bank login or social security number.
Re: (Score:2)
And I believe that even these "simple" seeming user mistakes have underlying root causes.
For example (because I gave a talk about that, I've done the research) - why do people write down passwords? Could it be, at least in part, because we ask them to remember crap like [|+DU%:,9}v2 -- actual output from an online password generator!
Nobody who has other hobbies can remember that, much less 20 of those (because we also tell people to not re-use passwords).
Solution: Write it down.
Here's how I solved this prob
Re: (Score:2)
Maybe a better picture is this: user awareness is the very last line of defence. If the terrorist is on the plane and armed, the passengers are the last line. But it was the failure of everything before that point that's to blame. Gee, we really should increase passenger awareness of how to spot terrorists -- he has a big beard, no wait, he doesn't have a beard, no wait, he's dressed ordinarily but is reaching into his bag, no wait, he's taking off his shoe, no wait, he's actually a she and young, etc.
We all know there are "b
Re:Not news (Score:5, Insightful)
Nice to hear it from someone with a big name. I'm an IT security specialist, giving talks every now and then, and I've basically been saying the same for years now. It is one of the topics where I face the fiercest opposition, usually from (big surprise) consultants and other people who offer security awareness training.
Of course, I am exaggerating a bit to make the point. I do think that training to make users familiar with specific security protocols is useful. I don't think general security awareness is. There is a plethora of reasons why it's a failure, from the context-specific nature of the human mind to the abstract level, but the main reason is that we have enough experience to show that it really is a waste of time and resources. Putting the same amount of money and effort into almost any other security program is going to give you a better ROI.
I am honestly surprised by this. I really do not see how you can avoid security awareness training.
Forcing the users to pick non-lousy passwords is simply not enough if the users will happily respond to an email from email.admin@scamsite.ru (Re: YO'RE ACCOUNT IS SOON EXPiRE!1) with their username, password, SSN, date of birth, and random security questions.
OK, that's a bit of an exaggeration, but users do happily respond to really poor phishing attacks and will tell their password to someone they assume is an email admin because the email comes from an account with admin in the name.
Security is as much a social problem as a technical one, and you simply cannot ignore the social aspect. And for that, people have to have some understanding of basic security protocols: e.g. the admins will never ask for your password.
In fact, I would go as far as to say that security is very much a social problem. Technology will only get you half way. If your system is not easily hackable from the outside, you have reached the minimum standard. The trouble is that "social engineering" is really easy.
Even if you switch to two-factor authentication it won't help enough: if the user believes that an admin has contacted them, then they will do ANYTHING to help that admin and will even follow detailed instructions to bypass as much security as possible. For some reason people being scammed are way better at following instructions than when they're not being scammed.
As someone else quoted earlier: never underestimate the ingenuity of complete fools.
Re:Not news (Score:4, Insightful)
I really do not see how you can avoid security awareness training.
To use a metaphor from my most recent talk: If you need to write "push" and "pull" on your doors, then they are designed badly. Same for security awareness. Improving the security tools is better than telling people how to safely handle broken tools.
but users do happily respond to really poor phishing attacks
Yes, they do.
And all the security awareness training we've been doing for two decades has made what sustained change, exactly? That is the point. Not that we don't have a security problem, but that security awareness training is not a good way to solve it.
Security is as much a social problem as a technical one, and you simply cannot ignore the social aspect.
I don't. On the contrary, I believe the security awareness training advocates do. They think that just telling someone solves the problem, when overwhelming evidence proves them wrong.
I believe the solution lies in asking a) why and b) how the users break security protocols and then tackling those issues, instead of telling them "don't do it" and thinking you've solved the problem.
As someone else quoted earlier: never underestimate the ingenuity of complete fools.
I believe calling the users dumb and fools and "lusers" and such is a cop-out. It's an easy pseudo-solution to avoid the real problem, which is not so trivial. Redesigning your concepts, protocols, hardware and software to be fail-safe (or idiot-proof, if you want) is hard. Much harder than shoving everyone into a room to listen to a boring lecture, 90% of which they'll have forgotten as soon as they're out the door.
Re: (Score:3)
Re: (Score:2)
I believe calling the users dumb and fools and "lusers" and such is a cop-out.
The full quote is more or less: the trouble with making something foolproof is that one underestimates the ingenuity of fools.
It's not so much calling users fools as calling into question the concept of foolproof. Users can and will do all sorts of strange things half of which you would never imagine. It is very hard to defend against things which you cannot think of.
Redesigning your concepts, protocols, hardware and software to
consequences (Score:2)
Telling people what they need to be doing, and then never punishing them, won't work. If people started getting fired for failing to follow security practice, it would stick more. And communicating good security practice doesn't require a consultant or speaker. There are videos out there; examples of what to look for. I agree hiring a big name to train everyone at your company who uses a computer is a waste of funds better spent, but ignoring the human element is willful ignorance. It is disingenuous for
Fine to a certain point... (Score:2)
While I agree with him to a certain point, there is a limit to how far security can be imposed on a user. Security always introduces overhead to doing a job. A user will accept that to a certain point if the reason is explained; however, there is a point where putting more onerous security restrictions on a user becomes counterproductive.
For example, if the IT policy is that passwords must be changed every week, be 80% different, be a combination of letters, numbers, upper and lower case and cannot contain any pa
Yes but no (Score:2)
I think I understand his point, and I agree in part, but I also disagree. I think security awareness is good, but I think relying on it is bad.
First of all, I think there will always be situations where the security technology fails - social engineering is an obvious example - and ultimately the final barrier is the security smarts of the target. Anything which raises that barrier, even a little, is a good thing. The question, obviously, is whether the benefit is worth the cost of the training.
And secondly
Security training is more than systems (Score:3)
Security training is very important, but it needn't concentrate on systems.
Re: (Score:2)
It's about the middle-level IT manager who gets a call from a very annoyed board director who says his password doesn't work and you'd better reset it now or heads will roll.
Similar situation in a previous job: I was a tech for a secondary (high) school. The Headteacher (Principal) called while off site and asked for the local admin password for the laptop as he'd forgotten the password he'd set on the user account he was given. I, being an employee, gave it to him and thought nothing of it.
The next day I explain the situation to the network manager and he goes MENTAL at me about data security and all manner of other policies, stating that the local admin password was also use
The worst thing (Score:5, Insightful)
is that many companies are too lazy to even get the most fundamental things right. Why on earth would you not distribute your own CA for your internal web services? Do you really want to train your employees that clicking on the "accept certificate" button is an everyday thing to do? Why don't you manage the security settings so that "content from an unknown source" is not "content from your own file server"? How the hell should the office assistant know that this is dangerous and theoretically unusual, if in everyday work the instructions say to accept it several times per day? Why would you enable macros in office documents for no reason and not sign the document?
All security training hints like "be careful when opening attachments from unknown sources" are annihilated if you train your employees every day to do the exact opposite, namely by constructing workflows and selecting toolsets that require exactly that.
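To make the internal-CA point concrete, a minimal client-side sketch in Python (the bundle path and intranet host are hypothetical): once your own CA is distributed to every workstation, internal HTTPS validates silently and nobody gets trained to click through certificate warnings.

import ssl
import urllib.request

# Hypothetical CA bundle pushed to every workstation by IT.
ctx = ssl.create_default_context(cafile="/etc/pki/internal-ca.pem")

with urllib.request.urlopen("https://intranet.example.local/", context=ctx) as resp:
    print(resp.status)  # 200, with no "accept certificate" dialog in sight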
My 2 cents on this
a) If there is a "do not use/do x" in your security education, then something is wrong. The right way is "use/do y"
b) Construct your standard processes in a way that your users/employees can work securely *AND* efficiently.
c) If there are new tools and your users demand these, keep an open ear! Note to management: reserve some budget for it. If users find Dropbox an efficient service, the right way is not to forbid it but to ask yourself why you can't provide any decent file sharing on your own servers.
Re: (Score:3)
James Lyne [sophos.com] once said that he changed the standard security certificate dialog to say "by clicking this you kill 1000 kittens".
No one raised an issue, not even IT.
Which goes to show how pointless the dialog is and how little it actually adds to security
Re: (Score:2)
The dialog is pointless because nobody does it right. People would pretty quickly learn that it does not kill 1000 kittens on average.
Correct would be to write: one in a hundred times, clicking on this will cause a malware infection. If it does, the IT department will send 1000 killed kittens via in-house mail to your desk. That's 10 kittens per click on average. Good luck.
I am sure after one or two times of burying the desktop of some office assistant under dead kittens and posting it on the company's homepa
Re: (Score:2)
d) if you set up security policies, ENFORCE THEM!
Or hire "security aware" people and trust them.
That's related to your point where employees are used to processing files from untrusted sources, but receive training not to do so.
Tools are a good example of that. 2 out of 3 companies I worked for had a whitelisted set of tools you were allowed to install. It never contained either the full set of tools you needed to do your work nor the newest versions. So you were completely left in the dark if you were
That balance between usability and security (Score:2)
These are invariably give and take.
People simply need to be smarter. They aren't. No amount of precautions which do not inhibit functionality will help. People want to do what they want to do. The weak link is almost always the people and you can't control them with computers. You can limit what they do, but now you're encroaching on usability.
Exactly correct (Score:3)
He is correct. User training is largely a waste of time, and in both development and deployment, systems are not designed or set up for security. So yes, users clicking a link is not safe, and it should be. Users opening an application and reading data should be safe, but it isn't.
These problems have to be engineered out. They cannot be socially controlled out; the audience has neither the inclination, knowledge, nor interest to resolve this. And even after training, once it's established how you've trained your monkeys, a new method will be established that undoes the training.
The whole industry is still in its infancy. It's building bridges made from cardboard, without any form of certification or regime. This will only be resolved when it becomes apparent that software providers cannot ship things like 'our software cannot be held accountable for anything, have a nice day'. Nobody in the world building bridges gets away with 'if this bridge falls down, we are not accountable'.
The Adobe and Java scenario is exactly like this. Both are wholly unaccountable, and yet frankly directly responsible for perhaps billions upon billions of dollars of data loss, theft, security breaches, and so on.
There is no _fundamental_ reason why people should even bother to make their software secure - so they apply only a baseline effort to the task. Until this is addressed, the rinse, shampoo, rinse, shampoo cycle will repeat. And it's actually why the security landscape is degrading. Things like Metasploit may have seemed to help, but fundamentally the white-hat hacking and infosec folks have ultimately not helped. They're only highlighting how bad things are, putting guns in hands that should not have them, and making things globally worse. The vendors have not changed very much.
Re: (Score:2)
It's a process and that process must be taught.
If users are taught that giving their passwords away is wrong on every level, even to IT professionals who are upgrading their work PC (happened!), and yet they still do it, they need more training. If that training involves sleeping rough for a week because they lost their job because they're too stupid to learn and follow a simple rule, so be it.
So in a perfect world we wouldn't need it? (Score:2)
I read the points he is making, and I respect Schneier, especially in terms of the work he did earlier.
He makes some interesting discussion points, but it mostly seems to boil down to that we have to fix things from an engineering perspective, and let the rules of thumb about security spread by osmosis.
I would say that, while there are still gains to be made at the engineering level, for many organizations serious about security the low-hanging fruit has mostly already been taken care of. Going further would oft
Partially Disagree (Score:2)
Security training is a necessity, but it's almost always done incorrectly. As much as it shocks us, there are still hordes of workers who have no idea what spearphishing is or why anti-virus doesn't wholly protect their computer.... My belief is that once a year, and at the employee's start date, you have an online brief going over basic security/what to look for, reinforce the fact that the network and individual systems are monitored and let them know what the penalties can be for not practicing what they ar
Targeted Ads at their best (Score:4, Funny)
It's all about presentation (Score:2)
If we could just..... (Score:2)
This point of view smacks of "if we just worked a bit harder/longer we'll be able to build a perfectly secure system".
It ain't gonna happen. Not for a system as sprawling as the internet, not for a system with requirements as complex as an operating system's.
The more you know about security, the easier it seems to do what is required to improve security - but you have to have very tight control of platforms to be able to follow through on implementing that security. And tight control prevents innovation.
Yes, and no. (Score:2)
systems that don't care what links a user clicks on
Definitely. As far as possible we should stop users accidentally doing something stupid by making sure that they can only do the right things. This is not always practical, though, as for a start there are factors outside our control (for the password example, we can't control how the user might store and potentially distribute their credentials in other services (password managers) or in the real world (bits of paper)).
systems that won't let users choose lousy passwords
I can't see a way that could be implemented which is not essentially an attempt to enume
What I don't get... (Score:2)
What I don't get... is why we even still have passwords. Why don't we all have Read only USB security dongles that confirm our identity? For banks, for work, for your medical records? The rest of the sites... Slashdot for example, who gives a crap. But a universal HARDWARE standard for sensitive info seems like a rather simple solution to do away with all this password nonsense.
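A minimal sketch of the challenge-response idea behind such dongles, assuming a shared secret provisioned on the device (illustrative only; real standards like U2F/FIDO use per-site public-key signatures instead of a shared secret):

import hashlib
import hmac
import secrets

# Provisioned once: dongle and server share this secret; it never travels.
device_secret = secrets.token_bytes(32)

def dongle_respond(secret: bytes, challenge: bytes) -> bytes:
    # What the dongle computes internally when plugged in.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# Login: a fresh random challenge makes replayed responses worthless,
# which is exactly what a static password can't offer.
challenge = secrets.token_bytes(16)
response = dongle_respond(device_secret, challenge)
expected = hmac.new(device_secret, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(response, expected))  # True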
Re: (Score:2)
Why don't we all have Read only USB security dongles that confirm our identity?
Because it's probably easier to steal your identity dongle than find a good wrench for $5.
The ID-10-T problem (Score:2)
The security of a computer is only as strong as its weakest link, and that weakest link is almost always the 6 inch gap between the ears of the computer user. And because the compromise of an entire network is easier to achieve once a single computer on the network is compromised, that makes the security of the corporate network only as strong as the weakest link... and every time you think you have found your company's dumbest user, you find another one who makes your previous candidate look like an IT gee
Institutional security doesn't work either.. (Score:2)
Because the problem with IT security in most organizations isn't training the rank and file, or building more-secure systems. The problem is that you can have all the IT policies in the world (coding standards, complex passwords, granular access), but if they're not enforced with real consequences for ignoring/avoiding them, then it's all useless. Case in point: I once worked in a Fortune 500 company that had a pretty strict password policy (change password every 90 days, upper/lowercase/special characters r
Other Wastes of Time (Score:2)
Other wastes of time:
Driving School
Hunter Safety Class
Swimming Lessons
First Aid Course
Condoms
I was with him until... (Score:2)
It sounds to me like they are using bad training. (Score:3)
1) Tell people about social hacking/engineering.
2) Tell people about common tricks like infected flash drives being dropped in parking lots, calls requesting a password, etc.
3) Warn them that sometime during the year, YOU WILL TRY TO HACK THEM.
4) Tell them if they fall for the hack, they will not get a bonus that year. (It helps if you actually give out yearly bonuses - even $100 will be fine)
5) Actually test them two months later.
6) If they fail the test, send them an email and require that they take your 10 minute class again.
I have found that if you do this, then people learn. The threat of losing even a $100 bonus a year is more than enough to get people to stop being stupid.
Note, this will not stop people from downloading things from the internet and/or playing games. But it will stop them from picking up random flash drives and using them - as well as stop them from giving out passwords over the phone.