Survey: Most IT Staff Don't Communicate Security Risks
CowboyRobot writes "A Tripwire survey of 1,320 IT personnel from the U.S. and U.K. showed that most staff 'don't communicate security risk with senior executives or only communicate when a serious security risk is revealed.' The reason is that staff have resigned themselves to staying mum due to an environment in which 'collaboration between security risk management and business is poor, nonexistent or adversarial,' or at best, just isn't effective at getting risk concerns up to senior management."
one-way street (Score:5, Insightful)
IT would love to, but upper management doesn't want to hear it.
Re:one-way street (Score:5, Insightful)
Or, more to the point, they don't understand it even if you try to tell them. And many in upper management, if you communicate the problem, will immediately turn it on you, wanting to know why you haven't fixed it already.
Re:one-way street (Score:4, Interesting)
"Why haven't you fixed it yet?"
- Because we're coming to you right now to get authorization to spend the money required to fix it.
"Rarglkebargle that's too expensive, find a free solution instead. Now where's the intern for my morning blowjob?"
- There is no free solution. It takes time, hours, and a certain amount of training for the staff to get them to understand and help them comply with the security policies.
"Rargle I'll just find someone else then. Fuck you, you're fired. Time for my powerlunch with the other cocaine-addled executives! Hey, I just saved the company your salary! I think I'll award myself some stock options for my brilliance and frugality!"
Re:one-way street (Score:5, Insightful)
"What!? There's IIS vulnerability on serverXYZ ?! Uninstall all IIS on all systems immediately!"
Re:one-way street (Score:5, Insightful)
That sounds like it would help productivity.
Re: (Score:2)
It probably will, that's the scary thing.
Re: (Score:3)
In some companies (mainly seen this in educational institutions), there can be fault finding, "What, there is a vulnerability? Who was the last man in charge? Fire them!"
I've seen many people in IT who stepped up and reported security issues, only to get a target painted squarely on their backs and, pretty soon after, get shown the door with a black mark on their resume of "communicating to others about bypassing company security controls" or some other tale.
A lot of places will not hesitate to shoot the m
Re:one-way street (Score:5, Insightful)
For my own experience, having brought security concerns to 'responsible' adults during my formative years in school, I was trained that doing so instantly results in demonization of the messenger. NEVER EVER point out that the emperor has no clothes.
This is fairly common in schools, and other organizations. How much does this behaviour train people to silently ignore security issues when discovered for fear (often well earned fear) of unjust reprisals for bringing them to the attention of those who are 1) most affected 2) responsible to prevent/fix these issues?
Re: (Score:2)
This is somewhat how I got started in IT. Back when Novell Networks were still a thing, I was in middle school.
My F500 company still runs on Netware you insensitive clod!
Re: (Score:3)
Or, more to the point, they don't understand it even if you try to tell them.
I call BS. I know this is contrary to widely held Slashdot opinion, but for the most part people don't get into upper management without knowing which side of the bread to butter. Sure, there are cases where you have the "Vice President of Being the CEO's Stepson" and "Chief Flirt with the Ownership", and it's true lots of people are promoted to their level of incompetence; but upper management is mostly as smart as you probably are, and with better social skills.
If they don't understand its because you talking t
Re:one-way street (Score:5, Insightful)
Management doesn't want to hear about it.
Management doesn't understand it.
Management doesn't want to spend money on it.
Nothing happens until it becomes an "issue" and then it's somebody in IT who gets the axe while everyone above is covering their asses.
Re: (Score:2)
Management that hears it is put in the position of either using their budget to fix it to a standard they should have been following but weren't, which gains them nothing, or admitting that THEY screwed up and asking for additional funds from their manager, who would be in a similar position.
Re:one-way street (Score:5, Interesting)
IT would love to, but upper management doesn't want to hear it.
Partially true, but not universally so. The problem is more that technical staff speaks in terms of technical risks, while upper management thinks in terms of business risk, and the two are not obviously aligned. It's like a patient who wants to know "how bad it is," and the doctor answers in terms of the probability of a given outcome due to a given cause. The key is to be more proactive about it, and to qualify where a business/organization is strong or weak in terms of security, while providing a plan to improve things down the road. It's impossible to tell someone what the odds are of X being compromised due to Y risk, resulting in Z cost; the best you can do is look for weaknesses and then come up with a plan to prioritize and fix them. Upper management understands the need to be secure, but they need to be given something they can understand and act on or approve. They won't make decisions based on things they don't understand (if they're smart).
Of course, if compliance comes into the picture, then the risk definition changes. It no longer becomes about risk of compromise, but risk of fines due to noncompliance. This makes it very easy to categorize the risk and communicate it...and as a result, compliance-based security spending is very high compared to security-based security spending.
Re: (Score:2)
It's impossible to tell someone what the odds are of X being compromised due to Y risk, resulting in Z cost; the best you can do is look for weaknesses and then come up with a plan to prioritize and fix them.
The thing is, there are often ways of quantifying it. For instance, let's say there's a risk of exposing N customer credit cards. Look at what it cost TJ Maxx and some other high-profile victims. That's the Z variable in your equation. Then you can evaluate the difficulty of exploiting the weakness: if you can find it easily on your website with Google, the odds are high; if it takes some obscure combination of weird parameters done just right, the risk is lower. That gives you the odds. Multiply the odds by the cost,
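As a sketch, the parent's multiply-the-odds-by-the-cost idea looks like this (all figures are invented for illustration, not real breach data):

```python
# Back-of-envelope expected-loss sketch for the approach above.
# Every number here is a made-up assumption, not a real statistic.

def expected_loss(odds: float, cost: float) -> float:
    """Multiply the odds of a breach by its cost to get the exposure."""
    return odds * cost

breach_cost = 5_000_000      # assumed cleanup cost for N exposed cards
googleable_flaw = 0.10       # easy to find on the public site: higher odds
obscure_params = 0.01        # weird parameter combo done just right: lower odds

print(expected_loss(googleable_flaw, breach_cost))   # 500000.0
print(expected_loss(obscure_params, breach_cost))    # 50000.0
```

The point isn't the precision of the inputs; it's that the output is in dollars, which is the unit management actually reasons in.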
Re:one-way street (Score:4, Insightful)
"Partially true, but not universally so. The problem is more that technical staff speaks in terms of technical risks, while upper management thinks in terms of business risk, and the two are not obviously aligned."
Balls.
If your upper IT management is not also business-savvy, you have the wrong people.
I have run into this personally, and also seen colleagues go through it. It tends to go something like this:
IT: "Mr. Manager, sir: the login system I inherited from my predecessor stores passwords in plain text. This is unacceptable, because it puts the company at risk of liability should we ever be hacked."
Manager: "Haha. Who would bother to hack us?"
IT: "You never know. That's the problem. But in the unlikely event that we ARE hacked, we could be liable because the system is not properly secured."
Manager: "How much will that cost?"
IT: "Mmmmm.... let's see. 40 man-hours to make the code changes system-wide, and 20 man-hours to roll out the database changes. Part of that is to set up a system to send out a mailer to all the users to change their passwords, pages to handle that, and to deal with the traffic that will generate. Say, roughly, $8000 realistically, over a period of two weeks."
Manager: "Haha. Not bloody likely."
IT: "But the company could be liable for millions."
Manager: "It's simply not a problem. Go away."
Re:one-way street (Score:5, Funny)
The risk of this vulnerability is 2.5 Snowdens.
Re:one-way street (Score:5, Funny)
They're CEOs which means they are Fox-addled GOP types. Quantify it in Obamas and all of a sudden they'll spend everything in the world to get rid of it.
Re: (Score:2)
The risk of this vulnerability is 2.5 Snowdens.
2.5 Snowdens is what? Nuking a major population center or creating a virus that wipes out a fifth of the world's population?
Normal businesses deal with risk of at most 0.01 Snowden, and that would be accidental death of their entire work force.
Re: (Score:2)
It's a log scale
Re: (Score:2)
Well done sir, well done.
Re: (Score:3)
And that is the crux of the matter. Risk must be quantified in the units in which business decisions are made - dollars. Beyond that, risk needs to be accurately assessed in terms of what is likely, not what is possible. Once we know the likelihood and the cost, decision makers will be able to make their decisions.
Ah, but here's the problem: It can't be done.
Explain to me how you will take risk and quantify it in dollars, when the attacks, the attackers and the vulnerabilities are changing over time. Explain to me how you will take the complexity of an environment with multiple critical paths...which will have changed by the time you're done mapping all of them, by the way...and map the vulnerabilities (all of them...you'll need to know this, obviously, and good luck with that) against those, in combination with a
Re: (Score:2)
And that is the crux of the matter. Risk must be quantified in the units in which business decisions are made - dollars. Beyond that, risk needs to be accurately assessed in terms of what is likely, not what is possible. Once we know the likelihood and the cost, decision makers will be able to make their decisions.
Ah, but here's the problem: It can't be done.
Explain to me how you will take risk and quantify it in dollars, when the attacks, the attackers and the vulnerabilities are changing over time. Explain to me how you will take the complexity of an environment with multiple critical paths...which will have changed by the time you're done mapping all of them, by the way...and map the vulnerabilities (all of them...you'll need to know this, obviously, and good luck with that) against those, in combination with a full on threat assessment of all the threat actors who may be interested in the organization as a target. Explain to me how you'll actually come up with a probability of compromise for every threat and vulnerability, and a cost for each possible kind of breach. Oh, and since capital planning will be determined using this, you need to predict, with a fair degree of accuracy, how all of this will change over the next 36 months (including guessing correctly about which capital budgets for other business functions will be approved).
This has been tried; it does not work. It costs an insane amount of money to do it, and this is why none of the security frameworks (CMMI, ITIL's security subset, COBIT, NIST SP800-53, etc.) try to do it. That's why you have to instead look at where you are weak overall, and work on improvement in general terms. There's no way to get to discrete numbers when it comes to this form of risk, because there are actual people on the other end of the equation, trying to change the numbers. It's not like most other forms of risk, where the outside cause is non-sentient and fairly quantifiable with actuarial means.
Just do like they did for the bank bailouts: pick a really big number.
Re: (Score:2)
Yeah, they did a great job with that historically right?
Mortgage crisis mean anything to you?
Re: (Score:2)
Yeah, they did a great job with that historically right?
Mortgage crisis mean anything to you?
Exactly...and when it comes to leveraged debt and securities, there actually aren't people trying to make them all fail. It's nothing like security; it's far, far simpler. And yet...look what happened even so when the quants were set loose.
Re: (Score:2)
Isn't management supposed to be in charge of figuring out dollar amounts? Aren't they the ones who are supposed to have the mad communication skills?
Re:one-way street (Score:5, Insightful)
This, this, a thousand times this. Upper management are always deliberately clueless about security, unless the company is in the business of security.
Actually having security means:
- Management has to bother complying with it.
- Management has to NOT constantly carve out exceptions to it ("I'm the CEO, I'm too important to have to remember my own goddamn password or take 5 seconds entering it into a computer in the morning! Now where's my intern to deliver my coffee and morning blowjob!")
- Management has to spend the money on the maintenance and monitoring of it.
- Management, who have the purchasing / decisionmaking power, have to step away from getting blowjobs from pretty interns long enough to actually look at the competing products/options and make a decision.
- Upper Management will always privilege Middle Management over those whose job it is to deal with security. See point 2 about exceptions: middle management complains "security makes it impossible to get our work done" and the response from Upper Management is never to have the staff spend some time training and understanding the security and why it's there and how to work WITH it, it's "fuck you security why are you getting in the way of business? Shit, I'm taking time off from my two-blowjob lunch to deal with this!"
And just TRY to talk to them about two-factor authentication (via cellphones or a swipe-card or something). You will get nowhere, because the brainless, Peter Principle, fail-upwards recipients of CEO/CTO/CFO jobs will say it's "too much work" for them to comply with.
Re: (Score:2)
See point 2 about exceptions: middle management complains "security makes it impossible to get our work done" and the response from Upper Management is never to have the staff spend some time training and understanding the security and why it's there and how to work WITH it
There are many organizations where it really isn't possible to "work with" security because security policy is implemented by a group of people who don't care what the business needs to get done to make money. There are also some organizations where "security" gives lip service to communicating and working with the users, but the reality is that the rules are created with CYA as the primary driving force. In other words, if something bad happens, the security group gets to say "obeying our rules would hav
Re: (Score:2)
As an example, we're trying to get a data transfer application that uses a non-standard port to work through our firewall. The current test setup has no data that can even remotely be considered "sensitive" (e.g., test files are "lorem ipsum" or similar). But before the port could be opened to see if the protocol would work at all, we needed to recompile some libraries to force the use of higher-strength encryption. Now our testing is hampered by the "too many changes" problem: is the config file for the app correct on both ends, does the encryption sync up, is the port open, is any IDS/traffic shaper/etc. causing a problem, and so on. The correct way to test would have been to just open the port with a restriction on the outside IP address; then we could have used the app with its default config (no security, etc.) and made changes to get to a production config that met the security requirements. At that point, the firewall rule could be changed to allow the connections from arbitrary IP addresses we will eventually need.
But, because security has a veto on everything, we're spending a lot more time trying to figure out what is causing issues. A proper security group would understand when rules can be bent or broken (and even allow rules to be permanently changed), instead of blindly applying rules that they might not even have had a hand in creating (depending on turnover within the organization).
It's the wrong way to go about configuring/testing the application... You should be testing the application in an isolated test environment, not on the production network. In fact, you could have gone to the firewall guys to install a test firewall in the test environment to iron all of these things out. The application should be fully developed, tested, and configured before you even think about connecting it to the Internet. It sounds like the implementation team wanted to take a shortcut and "just deploy it" without any consideration for security, etc... My opinion is that you're blaming the firewall guys because you didn't do your homework...
Re: (Score:2)
There are many organizations where it really isn't possible to "work with" security because security policy is implemented by a group of people who don't care what the business needs to get done to make money.
Or where security is trying to do their damn job, while shitwits who don't understand the first thing about security claim they know "what the business needs to get done to make money" while they are really claiming they want their password to be "god" or just plain blank.
In addition, I have never been
Re: (Score:2)
Upper management are always deliberately clueless about security, unless the company is in the business of security.
This is more true than you know. Being ignorant of something protects them. They don't want to know, because with knowledge comes responsibility. If you know you're vulnerable, and you did nothing, it's far worse than being able to say that you didn't know.
Is it right? Of course not. But I have more than a few times encountered people who did not want to know something because of culpability implications.
Re: (Score:2)
Wouldn't you be too, if those in charge were also the problem?
Re: (Score:3)
He just wants a blowjob.
Re: one-way street (Score:2)
"IT would love to, but upper management doesn't want to hear it."
Exactly! Nobody wants to hear it. The security people also don't report that the locks are crappy, the fingerprint reader is laughable, and the cameras are so lame that a mother wouldn't recognize her kids.
If it works, don't bother us, especially not if it costs money to fix.
Re: (Score:2)
Mark Twain said something along the lines of 'It's easier to fool people than convince them they've been fooled.'
Re: (Score:2)
Get the person next up in the chain of command to sign off the risk, then they pass it up the chain of command and so on and eventually you get the money you want.
Works every time as no one wants the responsibility.
Re: (Score:2)
My personal experience has been that I could stand on a chair and wave my arms while shouting about it, and if it either costs them money or inconveniences them in the slightest (even if that's just not being able to use 'god' as their password), then they refuse to listen. Then, if there is a security issue, they blame you for not 'fixing' it.
Re: (Score:2)
Worse than that, upper (and middle, and even lower) management orders you to do things that go against security, like opening access to the intranet from the whole Internet so they can reach it from wherever they are, asking for full access for their portables no matter what they have installed or where they use them, transferring remote access passwords by unencrypted mail, and of course, their phones. And any recommendation to make any of this a bit more secure gets scrapped because it's "complicated".
Also
Re: (Score:2)
They don't want to hear it, because no executive is willing to put their name on a piece of paper that says they understand there is a risk, and are willing to live with it. We live in a society of blame now, where even the most carefully examined issue and well thought out justification of acceptable risk can be turned into a breach of fiduciary duty and gross negligence.
Re: (Score:2)
IT would love to, but upper management doesn't want to hear it.
I find management don't understand these things and will either ignore what they are told or go off the deep end and demand ridiculous fixes.
It's always been up to IT to refuse to do anything that comes with a huge security risk and to compromise on the small security risks.
Don't ask, don't tell. (Score:2)
It isn't that management doesn't care, or doesn't understand (though that probably happens a lot anyway); it's the fact that the things they DO care about and DO understand are all negatively affected by "security" issues.
Basically application development becomes more complex, expensive, cumbersome, requires more approvals, documentation, oversight, etc...
All things that a manager doesn't like to hear all summed up in a word. Combine this with FOI and privacy, well he is in for a bad day.
Oh and it has to be host
Re: (Score:2)
Allow me to offer a counterexample from real life to your aptly crafted strawman bullshit.
I worked part-time a few years back as IT (the lone guy) for a construction company. Not full time because they weren't willing to hire anyone for full-time, just "on call" hourly rates and a few hours of "maintenance work" each week. They kept the main company server, with all the technical drawings and blueprints and scanned contracts and everything else, on a rolly cart in an open closet area that had a back-access door wit
Holy buzzword Batman! (Score:2)
Is it possible to cram any more buzzwords into that paragraph?
Re: (Score:2)
They forgot "synergy" and "best practices".
Re: (Score:2)
"I think we should discuss this offline..."
For some reason, that statement bothers me above all others.
Re: (Score:3, Funny)
Oh god, when they say that in person, to your face, and mean use email to discuss it, it's time to shrivel up and die.
Re: (Score:2)
Don't forget the neo-classic, "vertical integration"
Re: (Score:3)
"However, it's clear from this report that most organizations fail to properly consider security risks when making day-to-day business decisions. Changing this will require security professionals to talk to upper management about security risks in terms that are clearly relevant to overall business goals."
Spoon fed (Score:4, Interesting)
Send them out too often, and you risk being ignored. Send them out infrequently, and people say they weren't warned. Once a month seems to do the trick where I work. Management actually encourages this since it keeps people aware without becoming annoying.
What management says is: (Score:2)
"Don't worry about it, it's not that serious."
Well, you are wrong, your head is up your ass, and this kind of stuff is why guys like you hire guys like me, even if you don't know it. So let the IT dept. do its job, dammit!
Shoot the messenger (Score:2, Informative)
Yes, I did stop communicating security risks eventually. I'd say I stopped after the 10 or 20 thousandth 'So what?' from management.
Security = Liability (Score:5, Insightful)
Security = Liability. There is no other way to look at this from the bean-counter point of view. This is why all organizations need a CIO, someone who is capable of translating "if we don't do X, we're going to get pwned" into "if we don't spend $X and Y man-hours, we are exposing our business to a $Z,000,000-sized liability".
This problem boils down to techies and suits not speaking the same language. So someone has to translate.
Re: (Score:2, Informative)
No, it's not a language barrier. The problem is that techies cannot tell management what management does not want to hear. Even if the techies translate perfectly, the message "this will cost you $$$ but it MIGHT save you $$$$$!" simply doesn't work, no matter how true it really is.
Re: (Score:2)
"Well, look, I already told you. I deal with the goddamn customers so the engineers don't have to!! I have people skills!! I am good at dealing with people!!! Can't you understand that?!?
WHAT THE HELL IS WRONG WITH YOU PEOPLE?!!!!!!!"
-- Tom Smykowski
Re: (Score:2)
"if we don't do X, we're going to get pwned" into "if we don't spend $X and Y man-hours, we are exposing our business to a $Z,000,000-sized liability".
Um.
This sounds a lot like risk management.
Risk management is for COMMUNISTS.
Never do a risk assessment when you start a new project, it will just bring up uncomfortable information and make everyone feel sad. :(
Re: (Score:2)
This is why all organizations need a CIO, someone who is capable of translating "if we don't do X, we're going to get pwned" into "if we don't spend $X and Y man-hours, we are exposing our business to a $Z,000,000-sized liability".
Unfortunately, the average security person will vastly overestimate both the severity and the chance of a particular threat coming to pass, and thus will always suggest that X, Y, Z, A, B, C, and the entire alphabet including lower case simply must be done to avoid billions of dollars of damage.
Re: (Score:2)
RIAA, is that you?
Re: (Score:2)
Yes, but this site is read by "average security persons" and I am tailoring my language to something that can be understood. Saying "this costs more to fix than to insure", or "we are covered by contracts from this liability", or any other response that does not include "let's fix it all, right now" is usually not well-received. :)
Oblig Dilbert (Score:3)
Re: (Score:2)
We actually did something similar to a boss back in the days. Oh yes, good times.
ROFL (Score:2)
Is....is super lie...
There are many issues that IT attempts to communicate to Senior Management but that, for a variety of reasons, go unhandled. We've tried communicating before...and people said, "Shut up, you're talking too technical, you need to speak business." Then it became, "Shut up, every week there is some sort of thing that needs attending to..."; so, after a while, those reports start getting filed in the garbage can immediately after they are printed, since that's where they end up anyway.
It's only la
It's just in the way you present the problem (Score:2)
This usually works;
"Em, sorry to interrupt, but there are some policemen here?
They say they need to speak to you about some irregularities in the pension fund."
http://www.youtube.com/watch?v=UxVivkXUfdU [youtube.com]
Terminology (Score:2)
Seeing that these various "attacks" have various goals it can be hard to even
Re: (Score:2)
What constitutes a security problem is actually very concrete: Whatever your CISO defined. That's part of his job and he better do it. He will also have procedures ready to deal with security problems.
If he does not, fire him and hire someone else. He's worthless.
Of course not. (Score:5, Insightful)
As someone who has been working in IT for almost two decades, I'm not the least bit surprised. There are all kinds of things that we've given up on trying to communicate. People don't want to hear it. They don't understand what you're saying, they don't want to figure it out, and if you can get them to understand, they still don't care.
In the case of security, it falls into the classification of 'technical things nobody even wants to understand' and also into the classification of 'preventative measures that people will not recognize the importance of until after it bites them in the ass.' You tell people that it's a bad idea to use "password" as their password, and they'll blow you off. The more you stress the point, the more annoyed they'll become-- all the way up until someone malicious gains access to their accounts. Once they've been hacked, they'll come back angry, demanding, "Why didn't anyone tell me it was a bad idea?"
Until there's an actual security breach, people think you're chicken little. They'll tell you, "I've been using 'password' for my password for 10 years and I've never had a problem."
Face that kind of attitude for several years, and you get awfully tired of warning people.
Mod parent up. (Score:2)
Exactly what I was going to say, though I only had 5 years in IT, long ago. Most of the nature of IT is unlikely to change; the big issues are rarely technical in nature... people and culture change so slowly they seem almost static relative to technology.
I would add the problem with management is they often are short sighted (except the founder) and do not want to invest in the hypothetical. They don't want to comprehend enough to actually be able to weigh the risks in their "thinking" on such matters - if you ma
Re: (Score:2)
In the case of security, it falls into the classification of 'technical things nobody even wants to understand' and also into the classification of 'preventative measures that people will not recognize the importance of until after it bites them in the ass.' You tell people that it's a bad idea to use "password" as their password, and they'll blow you off. The more you stress the point, the more annoyed they'll become-- all the way up until someone malicious gains access to their accounts. Once they've been hacked, they'll come back angry, demanding, "Why didn't anyone tell me it was a bad idea?"
Until there's an actual security breach, people think you're chicken little. They'll tell you, "I've been using 'password' for my password for 10 years and I've never had a problem."
Face that kind of attitude for several years, and you get awfully tired of warning people.
Exactly right.
Security professionals have had to be budget-minded for a while now. We're not telling you this because we want to bankrupt the business, we're telling you this because it is a reasonable precaution to take, in line with standards and industry norms, and will save your ass and pay for itself 100x over if there is a breach. People view their own internal security department as the enemy, rather than someone who is on the same side trying to get people to do things properly. We get that there
Re: (Score:2)
there's another option to the two you present: IT-related legal risk and liability (it isn't always technical, even if it falls under IT security). I got an out-of-band compliment for an explanation I gave to management after they insisted we do something wrong/illegal/risky and I laid out the reasons why we should not -- apparently I got the point across, but *management just didn't care* and we were instructed to proceed.
Seriously: I was complimented for clearly communicating the risk and liability, and at
Re: (Score:2)
The example you gave, if true, is a classic demonstration that IT management does not understand their business, not the other way around.
First, while you may want to approach a person directly to give them a friendly heads-up as a first step, the basic thing IT management is supposed to understand is that a user having weak passwords is not so much a risk to that user but a risk to the business. If a user ignores your friendly heads-up, or the problem is more widespread than 1 person, the next step is to
almost all said "too technical". Wrong words, then (Score:4, Insightful)
6x% said there was a communication problem. 61%, or almost all of those reporting a problem, said it was too technical for management to understand.
One commenter talked about trying to explain escalation attacks and ssl issues to the boss. Yeah, my boss wouldn't understand that either. He does understand BUSINESS RISKS. If I point to a WSJ or Forbes article about a company that got owned and say "we are vulnerable to the same thing" he'll understand that. He doesn't understand SSL ciphers, he's not supposed to. He does understand "PR nightmare" and "noncompliance".
If I want business managers to do something, should I maybe explain the business case for what I'm proposing? Maybe point to a line in the WSJ article that says "the attack is estimated to have cost the company $2.4 million so far. No word yet on when their services will be back online". Perhaps that's what management understands better than the technical details?
"6% of $1M loss = $60K, can be avoid for $4K" (Score:5, Insightful)
To take that a step further, it would be interesting to see what happened if those complaining of poor communication emailed their boss saying:
You may have seen the Forbes and WSJ articles related to the security breach at XYZ Corp.
We are currently at risk for the same type of issue. I estimate a 6% chance of a breach in the next three years which would cost the company around $1 million,
so we have an actuarial liability of $60,000. If we secure the system, I estimate the risk would be reduced to 3%, eliminating $30,000 of the liability. I estimate the cost at $4,000 to eliminate that $30,000 liability and much of the $1M risk.
That way you are presenting management with the decision "do we want to save $30,000 by spending $4,000?" That's not too technical; that's exactly the kind of decision they are trained to make.
Looking at it that way can also teach us engineers something. We might estimate the cost of a breach at $30,000 with a 1% chance of it happening. That's a $300 liability. If it would require 10 man-hours to fix, including meetings and such, the company would lose money trying to fix it. (Remember people cost approximately double their salary, once you pay for health insurance, taxes, their office space, etc.) Management would be "right" to simply accept the risk, knowing that something bad might happen at a cost of $30K. Better to risk a $30,000 problem that probably won't happen than to spend $2,000 to avoid it. (Best would be to make a note to fix it in the next version / rewrite, when the _extra_ cost is only 1 man-hour.)
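The arithmetic above can be sketched in a few lines. All figures are the hypothetical estimates from the comment, not real data:

```python
# Expected-loss ("actuarial liability") comparison, using the comment's
# hypothetical numbers: 6% chance of a $1M breach, reducible to 3% for $4K.

def expected_loss(probability, impact):
    """Actuarial liability: chance of a breach times its estimated cost."""
    return probability * impact

BREACH_COST = 1_000_000   # estimated cost of a breach
P_UNPATCHED = 0.06        # 6% chance over three years, unpatched
P_PATCHED = 0.03          # 3% chance after securing the system
FIX_COST = 4_000          # estimated cost to secure the system

liability_now = expected_loss(P_UNPATCHED, BREACH_COST)    # $60,000
liability_fixed = expected_loss(P_PATCHED, BREACH_COST)    # $30,000
savings = liability_now - liability_fixed                  # $30,000

print(f"Spend ${FIX_COST:,} to save ${savings:,.0f}? "
      f"{'Yes' if savings > FIX_COST else 'No'}")
# prints: Spend $4,000 to save $30,000? Yes
```

The same three-line calculation also covers the engineer's counter-example in the comment: with a $30,000 breach at 1% probability, the liability is only $300, so a $2,000 fix is not worth it.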
Re: (Score:2)
But how can we assess the probability of a successful attack? Since most companies choose not to disclose breaches, we don't have meaningful statistics to base our estimates on.
Re: (Score:2)
That is a real problem. And, honestly, I don't think there's a large enough sample to make such statistics all that meaningful to begin with. But what you can do is rate against other risks. Don't claim an exact percentage; break it down into something more like 99% (almost certain), 75% (likely), 25% (possible) and 1% (unlikely). Notice I left out 50%... It might be worth adding a (0.01%, highly unlikely) but the point is to emphasize the label, not the percentage. Don't claim it as an actual percentage
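A minimal sketch of the coarse-bucket idea, with hypothetical labels and risks; the numbers exist only to anchor the sort order, and the labels are what you report:

```python
# Coarse likelihood buckets: the label is the message for management,
# the number is only a rough internal anchor (all values illustrative).

LIKELIHOOD = {
    "almost certain": 0.99,
    "likely": 0.75,
    "possible": 0.25,
    "unlikely": 0.01,
}

def rank_risks(risks):
    """Sort (name, label) risk pairs from most to least likely for a report."""
    return sorted(risks, key=lambda r: LIKELIHOOD[r[1]], reverse=True)

# Hypothetical risk register entries:
report = rank_risks([
    ("SQL injection on public web server", "likely"),
    ("WAP compromise", "unlikely"),
    ("phishing of executive credentials", "almost certain"),
])
for name, label in report:
    print(f"{label:>14}: {name}")
```

The point is relative ordering against other risks, not the made-up percentages themselves.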
based on professional knowledge or desired outcome (Score:3)
If you are asking for resources to be spent to avoid a particular risk, you either have the professional knowledge to discuss the level of risk, or you're talking out your ass.
How can you get that knowledge? We logged just over 10,000 brute-force attacks last year on the x,000 sites we monitor. I can query those logs to provide various numbers. So logging is one way. The major security lists get several reports per day. Monitoring those lists will help you understand the threats - how common they are
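As a hypothetical illustration of backing estimates with your own logs, here is a sketch that tallies failed SSH logins per source address. The regex follows common sshd log messages, and the sample lines are invented:

```python
# Sketch: count failed-login (brute-force) attempts per source IP from an
# auth log, so risk estimates can cite your own data instead of guesses.
import re
from collections import Counter

# Matches typical sshd failure lines, with or without "invalid user".
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def count_attacks(lines):
    """Tally failed SSH logins per source address."""
    hits = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

# Invented sample lines in the common sshd format:
sample = [
    "sshd[101]: Failed password for root from 203.0.113.9 port 50000 ssh2",
    "sshd[102]: Failed password for invalid user admin from 203.0.113.9 port 50001 ssh2",
    "sshd[103]: Accepted password for alice from 198.51.100.7 port 22 ssh2",
]
print(count_attacks(sample))   # Counter({'203.0.113.9': 2})
```

In practice you would feed this from /var/log/auth.log or your SIEM; a "10,000 attacks last year" figure from such a query is the kind of hard number management can weigh.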
Re: (Score:3)
The truth is that those probabilities are just fabricated out of whole cloth. Now on the one hand, it's true that business managers go through the day making decisions in exactly that way all the time. But engineers are typically trained to base decisions and declarations on actual hard data (with several places of accuracy), and the cognitive dissonance of that same person just inventing numbers to win an argument may be too much to bear.
We logged over 10,000 attacks last month. Data. (Score:2)
I don't know about you, but I HAVE hard data to base my estimates on. If you don't, a professional opinion giving a rough estimate isn't "made of whole cloth". If you're making recommendations, you should be able to say with some confidence that an SQL injection attack on a public web server is at least 100X more LIKELY than having your WAP cracked. Management may not know that, but somebody in IT should know it and be able to communicate it to management.
Re: (Score:2)
A smart boss would ask you to explain the significance of the "too technical" stuff that you are trying to explain. If the boss still doesn't understand and doesn't ask why you think
what does blame buy you? (Score:2)
> If the boss still doesn't understand and doesn't ask why you think something is important then
> he is just as much to blame for the communication failure
That's true for ANY communication failure. What does blame get you?
If I'd like to get something done, I can either communicate it in a way that gets it done, or not.
It does me no good to go about it such that it fails and I can blame the other guy.
Blame and $2 will buy a cup of coffee ($8 in California).
Re: (Score:2)
Should it not be management's job to know that security vulnerabilities lead to business problems? Why is it IT's responsibility to learn business management, legal requirements, etc?
If I go to management and say "the fire escape is broken" it is their responsibility to deal with it and understand the consequences (that is what they're paid the big bucks for, after all). I don't have to go through case law, reports and news stories and provide them with a powerpoint documenting that company X got fined 50
management isn't reading this thread (Score:2)
Perhaps they should do this and that. They aren't reading this thread, so talking about what they should do is not helpful. ...
What can we nerds do to help the situation? If speaking in terms of business risks solves the problem, then that's what we can do.
You see relevant news stories on CNN / MSNBC / Fox. How hard is it, really, to send your boss the link with a note saying "I noticed we're vulnerable to this. I'd like to discuss securing our systems from this type of problem"?
Re: (Score:2)
You see relevant news stories on CNN / MSNBC / Fox.
Not necessarily (I certainly don't anyway). How many news articles do people see in reality? Especially ones with enough technical detail that an IT admin could say "we're vulnerable to that exact situation".
Talking in business risks isn't really an IT person's job, nor is it their expertise (so they wouldn't be able to do it well). If management is already ignoring their senior IT staff, then if IT came along and tried to sound "businessy", management is likely to pat them on the head and reply with "you let me w
Re: (Score:2)
Yes, but that would require the techie to understand the management speak in the article.
There's the problem again.
It takes HIPAA or similar regulation (Score:2, Interesting)
Management won't listen to anything regarding security until there's a personal fine associated with it. In fact, ignoring IT's comments allows them to claim ignorance. If you want upper management to pay attention to security risks, make them liable. Until then, IT is just another fall-guy when stuff breaks.
Grass roots (Score:2)
The only way I've been able to implement proper security at any site has been from the ground up. You find a couple of developers or application support folks with a clue, and get their systems and processes into shape. At the same time, streamline and increase stability. Hopefully other teams will see the benefits of your changes and follow suit. The only security that comes from on high is security theater, e.g. PCI compliance auditing, which never addresses any real security issues, only check boxes
Yes, we do but we do it like this (Score:2)
"This e-mail is about my _________ concerns. It is my understanding that I am not funded to replace _________ with the latest version due to budget concerns. As such, I will leave _________ at its current version, ___ until it is reviewed during _________. If this is incorrect, please reply."
Anyone wondering why? (Score:5, Insightful)
I've been in IT security for about a decade now. I've had my share of consulting jobs, and inevitably poor security communication comes down to one of three reasons:
1. Ignorance at management levels
2. Blame-shifting
3. Blinkered management
Let's shed some light on them.
The first is easily explained, and I guess everyone can tell at least one tale of noticing something horribly wrong in their IT setup, dashing to their superior, reporting the finding and being met with a blank stare and a "huh? Erh... ooookay... we ... I mean, I will look into it...", leaving you with the feeling that entrusting your superior with a problem is like dumping a baby into a trash can. When this happens more than once, IT becomes complacent as well. Management doesn't give a fuck, so why should we?
The second is actually worse, but rather common around Europe in my experience: the person who reports the finding gets the blame, directly or indirectly. Either they get chewed out over how they could let that happen (whether it is actually their responsibility or not), or they are now seen as some sort of management snitch by their peers because they ratted them out and now someone gets the blame. This is usually the case in companies where finding a culprit has a higher priority than finding the person who can fix the problem. It's amazing how often that is actually the case.
And finally, management that just doesn't give a fuck. It is usually somehow tied to the first case: ignorance of the importance and size of a problem is tightly coupled with the willingness to ignore it altogether and wish it away.
In a culture like that, NOBODY is keen to report problems. It's time management started to understand that problems are part of the game and not something that can easily be avoided. The human factor is always in play when work is done, and humans err. By definition. Anyone claiming they don't make mistakes simply does not work; only if you don't work can you make no mistakes. So mistakes will happen and problems will arise.

It is pointless to start pointing fingers and spending resources finding the culprit, because after we've found them we still have the problem on the table! We can do that AFTER the problem is solved. That not only gives the person responsible the chance to fix it themselves, it is also the sensible order of doing things: first get the problem fixed, then find a strategy to avoid repeating the mistake. Yes, that may include replacing the person responsible, but first of all we should find out just WHY they made that mistake, WHY it was possible for them to make it (actually, 9 out of 10 times it's NOT the person's mistake, it's a mistake in the process, but it's easier to fire some easily replaceable worker than the process manager...) and HOW we can avoid making it again. Just replacing someone does NOT fix a problem if the process behind it is broken, because the next person will make the SAME mistake again.
But I ramble, back onto security reporting.
Companies need to establish a culture of security awareness amongst their workers. Security is the minimum of technical security and staff security. The MINIMUM. Not the average. I can have the tightest security system in the world and it won't matter if the users hand out their passwords to anyone who calls. Of course, preferably the human factor would be taken out of security altogether, but that is not easily possible. Security reporting must be a process, and a process that is rewarding for the person reporting. Someone reporting a security risk must not be seen as a "problem maker", as they often are. They upset the apple cart, they put sand in the gears, they make the machine run wobbly. Everything went smoothly and then that idiot comes along and says we're insecure. So what, anyone see anything bad happening? This is, sadly, often the approach taken to ITSEC. We have to understand that someone who reports a security problem is not "making" the problem but actually helping us avoid a much bigger one.
Re: (Score:3)
The person who reports the finding gets the blame.
Also known as "shoot the messenger". It's a common problem throughout the world that the person who reports a problem (security issue, software bug, licence lapse, theft) gets tarred with it. A lot of managers actually promote this way of dealing with issues because it keeps the number of fault reports down - a number they are measured against and rewarded for keeping low.
The only way this can ever, in my experience, get resolved is by having QA as an entirely different management structure: outside of software devel
Adversarial (Score:2)
Adversarial is the key word here. Business doesn't view security as an entity trying to protect them from liability, get them on par with industry norms, and maybe even create some efficiency and ease support burdens; they view security as an impediment to signing the contract. Your own security team is just trying to save you from yourself... arguing with them as a proxy for the customer doesn't get you anywhere but into even more trouble.
So I must be blessed (Score:2)
Around here, security rules. Adds and changes to apps go through security review, separate standards are published and enforced, and all this lives inside a secured perimeter that is well monitored and regularly improved.
My own workstation refuses most removable media, and if I can get one attached, my senior and not-so-senior managers get email alerts that this was done. Yes, this impacts an old app that expects to save to a floppy; even the SUBST command trips this alert. Flash drives etc. don't work any
Re: (Score:2, Funny)
DAMMIT wrong thread!
This was supposed to go in the helicopter RV kills guy thread.
nothing to see here, move along folks.
Re: (Score:2)
I don't recall being asked what I thought about that. I remember when more than half of Americans thought getting involved with Iraq was a good idea. Or when they thought that Obamacare was a bad idea.
Just because more than half of Americans want something, doesn't mean it's a good idea. What's more, thanks to gerrymandering, it can take nearly 75% or so to actually ensure anything gets done.
Re: (Score:2)
My profession isn't sysadmin, but I take care of that at my office (small office, 6-8 people).
Both my boss and my colleagues use super weak passwords (tom101) in spite of my asking them to be serious.
I warned the system was insecure, but was never given a moment to work on it.
At some point I just had to wash my hands of it, I'm not even paid to be responsible for it.
There is a limit to how many times you will tell people the same thing, especially when they don't care or get annoyed because it requires an effort from them.
It seems management doesn't want to spend resources on a problem they don't (want to) understand, preferring to close their eyes.
But in fact, you and they may be liable for anything that goes wrong.
Re: (Score:2)
Both my boss and colleagues use super weak passwords (tom101) in spite of me asking them to be serious.
Why is this an issue?
Seriously, the point of security rules is to keep data safe while still allowing the business to function. If it's a small office with no access from outside the local network, then maybe password strength isn't important. Maybe the real threat would be someone who is already inside the company, knows nothing about hacking, but could type in the long password they find written on the sticky note because the user couldn't remember it.
Until the entire system (by which I mean the whole c
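As an illustrative sketch, not any particular policy, here is a check for the kind of trivially weak password the grandparent describes (a name plus a few digits, e.g. "tom101"). The length threshold and the name+digits pattern are arbitrary assumptions:

```python
# Hypothetical weak-password screen: flags short passwords and
# "known name plus up to four digits" patterns. Thresholds are arbitrary.
import re

def is_trivially_weak(password, known_names=()):
    """Flag short passwords or name+digits patterns like 'tom101'."""
    if len(password) < 8:
        return True
    for name in known_names:
        # e.g. "jonathan12" matches jonathan + up to 4 digits
        if re.fullmatch(re.escape(name) + r"\d{0,4}", password, re.IGNORECASE):
            return True
    return False

print(is_trivially_weak("tom101", known_names=["tom"]))         # True (too short)
print(is_trivially_weak("jonathan12", known_names=["jonathan"]))  # True (name+digits)
print(is_trivially_weak("correct-horse-battery", ["tom"]))      # False
```

Whether this matters at all is exactly the parent's point: on an isolated small-office network, the sticky note on the monitor may be the bigger threat than the weak password itself.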
Re: (Score:2)
Security does have incredible short-term effects. If the R&D papers you worked on for months get out a week before you have them wrapped up to file a patent, you may rest assured that the stock market will feel that earthquake.
Re: (Score:2)
If upper management is worth a dime they'll play that ball back into your field and ask you what's to be done.
That's basically what I do when IT comes with a problem I don't instantly understand. What is the problem? What is the implication? What can be done against it? What can you do about it? What can I do to give you what you need to do it?
Re: (Score:2)
A lot of managers at the top level are there because they have a big mouth, not because they have competence. Not all, though - and changes in process are a very scary thing for those that lack competence, because it means they either have to work for their salary or look for a new job.
Sooner or later you will see the difference between managers, and you should watch out for those that change jobs at a breakneck pace and show up with a "new cool gadget" every other week paid for by the company, while you as
Re: (Score:2)
There is a big difference there. IT staff does one thing and IT security staff does another, but they must work together. As a malware remediation consultant, I can confidently say that 95% of organizations out there (mostly small to medium sized ones) have no comprehensive understanding of good security procedures or of what to do when they are compromised
Most small to medium sized businesses don't have separate IT staff and IT security staff either...