Survey: Most IT Staff Don't Communicate Security Risks

CowboyRobot writes "A Tripwire survey of 1,320 IT personnel from the U.S. and U.K. showed that most staff 'don't communicate security risk with senior executives or only communicate when a serious security risk is revealed.' The reason is that staff have resigned themselves to staying mum due to an environment in which 'collaboration between security risk management and business is poor, nonexistent or adversarial,' or at best, just isn't effective at getting risk concerns up to senior management."

  • one-way street (Score:5, Insightful)

    by X0563511 ( 793323 ) on Friday September 06, 2013 @11:07AM (#44775645) Homepage Journal

    IT would love to, but upper management doesn't want to hear it.

    • Re:one-way street (Score:5, Insightful)

      by intermodal ( 534361 ) on Friday September 06, 2013 @11:13AM (#44775711) Homepage Journal

      Or, more to the point, they don't understand it even if you try to tell them. And many in upper management, if you communicate the problem, will immediately turn it on you, wanting to know why you haven't fixed it already.

      • Re:one-way street (Score:4, Interesting)

        by Moryath ( 553296 ) on Friday September 06, 2013 @11:35AM (#44776011)

        "Why haven't you fixed it yet?"

        - Because we're coming to you right now to get authorization to spend the money required to fix it.

        "Rarglkebargle that's too expensive, find a free solution instead. Now where's the intern for my morning blowjob?"

        - There is no free solution. It takes time, man-hours, and a certain amount of training for the staff to get them to understand and comply with the security policies.

        "Rargle I'll just find someone else then. Fuck you, you're fired. Time for my powerlunch with the other cocaine-addled executives! Hey, I just saved the company your salary! I think I'll award myself some stock options for my brilliance and frugality!"

      • Re:one-way street (Score:5, Insightful)

        by Feyshtey ( 1523799 ) on Friday September 06, 2013 @11:39AM (#44776071)
        Or worse, their ignorance spawns knee-jerk reactions that cripple wide swaths of the workforce's productivity.

        "What!? There's an IIS vulnerability on serverXYZ?! Uninstall all IIS on all systems immediately!"
        • Re:one-way street (Score:5, Insightful)

          by NatasRevol ( 731260 ) on Friday September 06, 2013 @12:02PM (#44776349) Journal

          That sounds like it would help productivity.

          • by Z00L00K ( 682162 )

            It probably will, that's the scary thing.

          • Not if you're a webfarm. Should you use something else? Yes. Should you yank the plug and then try to figure out how to bring your entire operation back online after the fact because a patch was missing? Probably not.
        • by mlts ( 1038732 ) *

          In some companies (I've mainly seen this in educational institutions), there can be fault-finding: "What, there is a vulnerability? Who was the last man in charge? Fire them!"

          I've seen many people in IT who stepped up and reported security issues, only to get a target painted squarely on their backs and, pretty soon after, be shown the door with a black mark on their résumé of "communicating to others about bypassing company security controls" or some other tale.

          A lot of places will not hesitate to shoot the messenger.

      • Re:one-way street (Score:5, Insightful)

        by Anonymous Coward on Friday September 06, 2013 @11:50AM (#44776215)

        From my own experience, having brought security concerns to 'responsible' adults during my formative years in school, I was trained that doing so instantly results in demonization of the messenger. NEVER EVER point out that the emperor has no clothes.

        This is fairly common in schools and other organizations. How much does this behaviour train people to silently ignore security issues when discovered, for (often well-earned) fear of unjust reprisals for bringing them to the attention of those who are 1) most affected and 2) responsible for preventing and fixing these issues?

      • True. And even in the best case scenario, you've only managed to create more work for yourself on a hypothetical/speculative issue, when you're already overloaded with more immediate problems.
      • by DarkOx ( 621550 )

        Or, more to the point, they don't understand it even if you try to tell them.

        I call BS. I know this is contrary to widely held Slashdot opinion, but for the most part people don't get into upper management without knowing which side their bread is buttered on. Sure, there are cases where you have the "Vice President of Being the CEO's Stepson" and the "Chief Flirt with the Ownership," and it's true lots of people are promoted to their level of incompetence; but upper management is mostly as smart as you probably are, and with better social skills.

        If they don't understand, it's because you're talking t

    • Re:one-way street (Score:5, Insightful)

      by robinsonne ( 952701 ) on Friday September 06, 2013 @11:15AM (#44775737)
      Exactly.
      Management doesn't want to hear about it.
      Management doesn't understand it.
      Management doesn't want to spend money on it.

      Nothing happens until it becomes an "issue" and then it's somebody in IT who gets the axe while everyone above is covering their asses.
      • Management that hears it is put in the position of either using their budget to fix it to a standard they should have been following all along (gaining them nothing), or admitting that THEY screwed up and asking for additional funds from their own manager, who is in a similar position.

    • Re:one-way street (Score:5, Interesting)

      by Shoten ( 260439 ) on Friday September 06, 2013 @11:21AM (#44775817)

      IT would love to, but upper management doesn't want to hear it.

      Partially true, but not universally so. The problem is more that technical staff speaks in terms of technical risks, while upper management thinks in terms of business risk, and the two are not obviously aligned. It's like a patient who wants to know "how bad it is," and the doctor answers in terms of the probability of a given outcome due to a given cause. The key is to be more proactive about it, and to qualify where a business/organization is strong or weak in terms of security, while providing a plan to improve things down the road. It's impossible to tell someone what the odds are of X being compromised due to Y risk, resulting in Z cost; the best you can do is look for weaknesses and then come up with a plan to prioritize and fix them. Upper management understands the need to be secure, but they need to be given something they can understand and act on or approve. They won't make decisions based on things they don't understand (if they're smart).

      Of course, if compliance comes into the picture, then the risk definition changes. It no longer becomes about risk of compromise, but risk of fines due to noncompliance. This makes it very easy to categorize the risk and communicate it...and as a result, compliance-based security spending is very high compared to security-based security spending.

      • It's impossible to tell someone what the odds are of X being compromised due to Y risk, resulting in Z cost; the best you can do is look for weaknesses and then come up with a plan to prioritize and fix them.

        The thing is, there are often ways of quantifying it. For instance, let's say there's a risk of exposing N customer credit cards. Look at what it cost TJ Maxx and some other high-profile victims; that's the Z variable in your equation. Then you can evaluate the difficulty of exploiting the weakness: if you can find it easily on your website with Google, the odds are high; if it takes some obscure combination of weird parameters done just right, the odds are lower. Multiply the odds by the cost, and you have an expected loss to weigh against the cost of a fix.
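
        A minimal sketch of that arithmetic in Python (the breach cost and the two probability guesses are hypothetical placeholders for illustration, not figures from the survey or this thread):

            # Expected loss for one weakness: odds of exploitation times cost.
            # All figures below are hypothetical placeholders.
            def expected_loss(probability: float, cost: float) -> float:
                return probability * cost

            breach_cost = 1_000_000  # cleanup, fines, PR after exposing the cards
            easy = 0.25              # weakness turns up with a simple Google search
            obscure = 0.01           # needs a weird parameter combination done just right

            print(f"Easy to find: ${expected_loss(easy, breach_cost):,.0f}")
            print(f"Obscure:      ${expected_loss(obscure, breach_cost):,.0f}")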

      • Re:one-way street (Score:4, Insightful)

        by Jane Q. Public ( 1010737 ) on Friday September 06, 2013 @01:48PM (#44777661)

        "Partially true, but not universally so. The problem is more that technical staff speaks in terms of technical risks, while upper management thinks in terms of business risk, and the two are not obviously aligned."

        Balls.

        If your upper IT management is not also business-savvy, you have the wrong people.

        I have run into this personally, and also seen colleagues go through it. It tends to go something like this:

        IT: "Mr. Manager, sir: the login system I inherited from my predecessor stores passwords in plain text. This is unacceptable, because it puts the company at risk of liability should we ever be hacked."

        Manager: "Haha. Who would bother to hack us?"

        IT: "You never know. That's the problem. But in the unlikely event that we ARE hacked, we could be liable because the system is not properly secured."

        Manager: "How much will that cost?"

        IT: "Mmmmm.... let's see. 40 man-hours to make the code changes system-wide, and 20 man-hours to roll out the database changes. Part of that is to set up a system to send out a mailer to all the users to change their passwords, pages to handle that, and to deal with the traffic that will generate. Say, roughly, $8000 realistically, over a period of two weeks."

        Manager: "Haha. Not bloody likely."

        IT: "But the company could be liable for millions."

        Manager: "It's simply not a problem. Go away."

    • Re:one-way street (Score:5, Insightful)

      by Moryath ( 553296 ) on Friday September 06, 2013 @11:28AM (#44775911)

      This, this, a thousand times this. Upper management are always deliberately clueless about security, unless the company is in the business of security.

      Actually having security means:

      - Management has to bother complying with it.

      - Management has to NOT constantly carve out exceptions to it ("I'm the CEO, I'm too important to have to remember my own goddamn password or take 5 seconds entering it into a computer in the morning! Now where's my intern to deliver my coffee and morning blowjob!")

      - Management has to spend the money on the maintenance and monitoring of it.

      - Management, who have the purchasing / decisionmaking power, have to step away from getting blowjobs from pretty interns long enough to actually look at the competing products/options and make a decision.

      - Upper Management will always privilege Middle Management over those whose job it is to deal with security. See point 2 about exceptions: middle management complains "security makes it impossible to get our work done" and the response from Upper Management is never to have the staff spend some time training and understanding the security and why it's there and how to work WITH it; it's "fuck you security why are you getting in the way of business? Shit, I'm taking time off from my two-blowjob lunch to deal with this!"

      And just TRY to talk to them about two-factor authentication (via cellphones or a swipe card or something). You will get nowhere, because the brainless, Peter Principle, fail-upwards recipients of CEO/CTO/CFO jobs will say it's "too much work" for them to comply with.

      • See point 2 about exceptions: middle management complains "security makes it impossible to get our work done" and the response from Upper Management is never to have the staff spend some time training and understanding the security and why it's there and how to work WITH it

        There are many organizations where it really isn't possible to "work with" security, because security policy is implemented by a group of people who don't care what the business needs to get done to make money. There are also some organizations where "security" gives lip service to communicating and working with the users, but the reality is that the rules are created with CYA as the primary driving force. In other words, if something bad happens, the security group gets to say "obeying our rules would have prevented this."

        • As an example, we're trying to get a data transfer application that uses a non-standard port to work through our firewall. The current test setup has no data that can even remotely be considered "sensitive" (e.g., test files are "lorem ipsum" or similar). But before the port could be opened to see if the protocol would work at all, we needed to recompile some libraries to force the use of higher-strength encryption. Now our testing is hampered by the "too many changes" problem: is the config file for the app on both ends correct, does the encryption sync up, is the port open, is an IDS/traffic shaper/etc. causing a problem, and so on. The correct way to test would have been to just open the port with a restriction on the outside IP address; then we could have used the app with its default config (no security, etc.) and made changes to get to a production config that met the security requirements. At that point, the firewall rule could be changed to allow connections from the arbitrary IP addresses we will eventually need.

          But, because security has a veto on everything, we're spending a lot more time trying to figure out what is causing issues. A proper security group would understand when rules can be bent or broken (and even allow rules to be permanently changed), instead of blindly applying rules that they might not even have had a hand in creating (depending on turnover within the organization).

          It's the wrong way to go about configuring/testing the application... You should be testing the application in an isolated test environment, not on the production network. In fact, you could have gone to the firewall guys to install a test firewall in the test environment to iron all of these things out. The application should be fully developed, tested, and configured before you even think about connecting it to the Internet. It sounds like the implementation team wanted to take a shortcut and "just deploy it" without any consideration for security... My opinion is that you're blaming the firewall guys because you didn't do your homework...

        • by Moryath ( 553296 )

          There are many organizations where it really isn't possible to "work with" security because security policy is implemented by a group of people who don't care what the business needs to get done to make money.

          Or where security is trying to do their damn job, while shitwits who don't understand the first thing about security claim they know "what the business needs to get done to make money" while they are really claiming they want their password to be "god" or just plain blank.

          In addition, I have never been

      • by arth1 ( 260657 )

        Upper management are always deliberately clueless about security, unless the company is in the business of security.

        This is more true than you know. Being ignorant of something protects them. They don't want to know, because with knowledge comes responsibility. If you know you're vulnerable, and you did nothing, it's far worse than being able to say that you didn't know.
        Is it right? Of course not. But I have more than a few times encountered people who did not want to know something because of culpability implications.

    • "IT would love to, but upper management doesn't want to hear it."

      Exactly! Nobody wants to hear it. The security people also don't report that the locks are crappy, the fingerprint reader laughable and the cameras are so lame that a mother wouldn't recognize her kids.

      If it works, don't bother us, especially not if it costs money to fix.

    • Mark Twain said something along the lines of 'It's easier to fool people than convince them they've been fooled.'

    • Get the person next up in the chain of command to sign off the risk, then they pass it up the chain of command and so on and eventually you get the money you want.

      Works every time as no one wants the responsibility.

    • My personal experience has been that I could stand on a chair and wave my arms as I shouted about it, and if it either costs them money or inconveniences them in the slightest (even if that is just not being able to use 'god' as their password), they refuse to listen. Then if there is a security issue they blame you for not 'fixing' it.

    • by gmuslera ( 3436 )

      Worse than that, upper (and middle, and even lower) management orders you to do things that go against security: opening access to the intranet from the whole Internet so they can reach it from wherever they are, demanding full access for their portables no matter what they have installed or where they use them, transferring remote-access passwords by unencrypted mail, and of course their phones. And any recommendation to make any of this a bit more secure gets scrapped because it's "complicated".

      Also

    • Comment removed based on user account deletion
    • by Casca ( 4032 )

      They don't want to hear it, because no executive is willing to put their name on a piece of paper that says they understand there is a risk, and are willing to live with it. We live in a society of blame now, where even the most carefully examined issue and well thought out justification of acceptable risk can be turned into a breach of fiduciary duty and gross negligence.

    • by 1s44c ( 552956 )

      IT would love to, but upper management doesn't want to hear it.

      I find management don't understand these things and will either ignore what they are told or go off the deep end and demand ridiculous fixes.

      It's always been up to IT to refuse to do anything that comes with a huge security risk and to compromise on the small security risks.

    • It isn't that management doesn't care, or doesn't understand (which probably happens a lot anyway); it's that the things they DO care about and DO understand are all negatively affected by "security" issues.

      Basically application development becomes more complex, expensive, cumbersome, requires more approvals, documentation, oversight, etc...

      All things that a manager doesn't like to hear, all summed up in a word. Combine this with FOI and privacy, and he is in for a bad day.

      Oh and it has to be host

  • "However, it's clear from this report that most organizations are missing the majority of opportunities to integrate security risks into day-to-day business decisions. Changing this paradigm will require security professionals to develop new communication skills so they can talk about security risks in terms that are clearly relevant to the top-level business goals."

    Is it possible to cram any more buzzwords into that paragraph?
    • They forgot "synergy" and "best practices".

  • Comment removed based on user account deletion
  • Spoon fed (Score:4, Interesting)

    by barista ( 587936 ) on Friday September 06, 2013 @11:16AM (#44775755) Homepage
    I send out security risk info to our employees every so often, but not all the time.

    Send them out too often, and you risk being ignored. Send them out infrequently, and people say they weren't warned. Once a month seems to do the trick where I work. Management actually encourages this since it keeps people aware without becoming annoying.
  • "Don't worry about it, it's not that serious."

    Well, you are wrong, your head is up your ass, and this kind of stuff is why guys like you hire guys like me, even if you don't know it. So let the IT dept. do its job, dammit!

  • Shoot the messenger (Score:2, Informative)

    by Anonymous Coward

    Yes, I did stop communicating security risks eventually. I'd say I stopped after the 10 or 20 thousandth 'So what?' from management.

  • by sinij ( 911942 ) on Friday September 06, 2013 @11:19AM (#44775805)

    Security = liability. There is no other way to look at this from the bean-counter point of view. This is why all organizations need a CIO: someone who is capable of translating "if we don't do X, we're going to get pwned" into "if we don't spend $X and Y man-hours, we are exposing our business to a $Z,000,000-sized liability".

    This problem boils down to techies and suits not speaking the same language. So someone has to translate.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      No, it's not a language barrier. The problem is that techies cannot tell management what management does not want to hear. Even if the techies translate perfectly, the message "this will cost you $$$ but it MIGHT save you $$$$$!" simply doesn't work, no matter how true it really is.

    • "Well, look, I already told you. I deal with the goddamn customers so the engineers don't have to!! I have people skills!! I am good at dealing with people!!! Can't you understand that?!?

      WHAT THE HELL IS WRONG WITH YOU PEOPLE?!!!!!!!"

      -- Tom Smykowski

    • by endus ( 698588 )

      "if we don't do X, we're going to get pwned" into "if we don't spend $X and Y man-hours, we are exposing our business to a $Z,000,000-sized liability".

      Um.

      This sounds a lot like risk management.

      Risk management is for COMMUNISTS.

      Never do a risk assessment when you start a new project, it will just bring up uncomfortable information and make everyone feel sad. :(

    • This is why all organizations need a CIO: someone who is capable of translating "if we don't do X, we're going to get pwned" into "if we don't spend $X and Y man-hours, we are exposing our business to a $Z,000,000-sized liability".

      Unfortunately, the average security person will vastly overestimate both the severity and the chance of a particular threat coming to pass, and thus will always suggest that X, Y, Z, A, B, C, and the entire alphabet including lower case simply must be done to avoid billions of dollars of damage.

      • by imikem ( 767509 )

        RIAA, is that you?

      • by sinij ( 911942 )

        Yes, but this site is read by "average security persons" and I am tailoring my language to something that can be understood. Saying "this costs more to fix than to insure against", or "we are covered by contracts for this liability", or any other response that does not include "let's fix it all, right now" is usually not well received. :)

  • by PPH ( 736903 ) on Friday September 06, 2013 @11:25AM (#44775853)
    here [dilbert.com].
  • Is....is super lie...

    There are many issues that IT attempts to communicate to senior management but that, for a variety of reasons, go unhandled. We've tried communicating before... and people said "Shut up, you're talking too technical, you need to speak business"; then it became "Shut up, every week there is some sort of thing that needs attending to...". So, after a while, those reports start getting filed in the garbage can immediately after they are printed, since that's where they end up anyway.

    It's only la

  • This usually works:
    "Em, sorry to interrupt, but there are some policemen here?
    They say they need to speak to you about some irregularities in the pension fund."

    http://www.youtube.com/watch?v=UxVivkXUfdU [youtube.com]

  • Every day I look at my server logs and see all kinds of "attacks". I'm not sure 5 minutes go by without another wp-admin attempt. I suspect these are probing known easy exploits. So if I were running IT for a company, I could make it sound like trench warfare, WWI style. It would almost be funny to have an air-raid siren going off every time one of these attacks came in, and a fire pole for the admins to slide down.

    Seeing that these various "attacks" have various goals it can be hard to even
    • What constitutes a security problem is actually very concrete: whatever your CISO defines. That's part of his job, and he'd better do it. He will also have procedures ready to deal with security problems.

      If he does not, take him, fire him, hire someone else. He's worthless.

  • Of course not. (Score:5, Insightful)

    by nine-times ( 778537 ) <nine.times@gmail.com> on Friday September 06, 2013 @11:34AM (#44775993) Homepage

    As someone who has been working in IT for almost two decades, I'm not the least bit surprised. There are all kinds of things that we've given up on trying to communicate. People don't want to hear it. They don't understand what you're saying, they don't want to figure it out, and if you can get them to understand, they still don't care.

    In the case of security, it falls into this classification of 'technical things nobody even wants to understand' and also into the classification of 'preventative measures that people will not recognize the importance of until after it bites them in the ass.' You tell people that it's a bad idea to use "password" as your password, and they'll blow you off. The more you stress the point, the more annoyed they'll become -- all the way up until someone malicious gains access to their accounts. Once they've been hacked, they'll come back angry, demanding, "Why didn't anyone tell me it was a bad idea?"

    Until there's an actual security breach, people think you're Chicken Little. They'll tell you, "I've been using 'password' for my password for 10 years and I've never had a problem."

    Face that kind of attitude for several years, and you get awfully tired of warning people.

    • Exactly what I was going to say, though I only had 5 years in IT, long ago. The nature of IT is unlikely to change; the big issues are rarely technical in nature... people and culture change so slowly they seem almost static relative to technology.

      I would add that the problem with management is they often are short-sighted (except the founder) and do not want to invest in the hypothetical. They don't want to comprehend enough to actually be able to weigh the risks in their "thinking" on such matters - if you ma

    • by endus ( 698588 )

      In the case of security, it falls into this classification of 'technical things nobody even wants to understand' and also into the classification of 'preventative measures that people will not recognize the importance of until after it bites them in the ass.' You tell people that it's a bad idea to use "password" as your password, and they'll blow you off. The more you stress the point, the more annoyed they'll become -- all the way up until someone malicious gains access to their accounts. Once they've been hacked, they'll come back angry, demanding, "Why didn't anyone tell me it was a bad idea?"

      Until there's an actual security breach, people think you're Chicken Little. They'll tell you, "I've been using 'password' for my password for 10 years and I've never had a problem."

      Face that kind of attitude for several years, and you get awfully tired of warning people.

      Exactly right.

      Security professionals have had to be budget-minded for a while now. We're not telling you this because we want to bankrupt the business; we're telling you because it is a reasonable precaution, in line with standards and industry norms, and it will save your ass and pay for itself 100x over if there is a breach. People view their own internal security department as the enemy, rather than as someone on the same side trying to get people to do things properly. We get that there

    • there's another option besides the two you present: IT-related legal risk and liability (it isn't always technical, even if it falls under IT security). I got an out-of-band compliment for an explanation I gave to management after they insisted we do something wrong/illegal/risky and I laid out the reasons why we should not -- apparently I got the point across, but *management just didn't care* and we were instructed to proceed.

      Seriously: I was complimented for clearly communicating the risk and liability, and at

    • The example you gave, if true, is a classic demonstration that IT management does not understand their business, not the other way around.

      First, while you may want to approach a person directly to give them a friendly heads-up as a first step, the basic thing IT management is supposed to understand is that a user having weak passwords is not so much a risk to that user as a risk to the business. If a user ignores your friendly heads-up, or the problem is more widespread than one person, the next step is to

  • by raymorris ( 2726007 ) on Friday September 06, 2013 @11:35AM (#44776013) Journal

    6x% said there was a communication problem. 61%, or almost all of those with a problem, said it was too technical for management to understand.

    One commenter talked about trying to explain escalation attacks and ssl issues to the boss. Yeah, my boss wouldn't understand that either. He does understand BUSINESS RISKS. If I point to a WSJ or Forbes article about a company that got owned and say "we are vulnerable to the same thing" he'll understand that. He doesn't understand SSL ciphers, he's not supposed to. He does understand "PR nightmare" and "noncompliance".

    If I want business managers to do something, should I maybe explain the business case for what I'm proposing? Maybe point to a line in the WSJ article that says "the attack is estimated to have cost the company $2.4 million so far. No word yet on when their services will be back online". Perhaps that's what management understands better than the technical details?

    • by raymorris ( 2726007 ) on Friday September 06, 2013 @12:02PM (#44776355) Journal

      To take that a step further, it would be interesting to see what happened if those complaining of poor communication emailed their boss saying:

      You may have seen the Forbes and WSJ articles related to the security breach at XYX Corp.
      We are currently at risk for the same type of issue. I estimate a 6% chance of a breach in the next three years which would cost the company around $1 million,
      so we have an actuarial liability of $60,000. If we secure the system, I estimate the risk would be reduced to 3%, eliminating $30,000 of the liability. I estimate the cost as $4,000 to eliminate that $30,000 liability and much of the $1M risk.

      That way you are presenting management with the decision "do we want to save $30,000 by spending $4,000?" That's not too technical; that's exactly the kind of decision they are trained to make.

      Looking at it that way can also teach us engineers something. We might estimate the cost of a breach at $30,000 with a 1% chance of it happening. That's a $300 liability. If it would require 10 man-hours to fix, including meetings and stuff, the company would lose money trying to fix it. (Remember, people cost approximately double their salary once you pay for health insurance, taxes, their office space, etc.) Management would be "right" to simply accept the risk, knowing that something bad might happen, at a cost of $30K. Better to risk a $30,000 problem that probably won't happen than to spend $2,000 to avoid it. (Best would be to make a note to fix it in the next version / rewrite, when the _extra_ cost is only 1 man-hour.)
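
      Written out as a minimal sketch, using only the figures from the two scenarios above:

          # Expected-value check on both scenarios from the comment.
          def liability(probability: float, cost: float) -> float:
              return probability * cost

          # Scenario 1: 6% chance of a ~$1M breach, halved to 3% by a $4,000 fix.
          saved = liability(0.06, 1_000_000) - liability(0.03, 1_000_000)
          print(f"${saved:,.0f} of liability removed for $4,000")  # $30,000

          # Scenario 2: 1% chance of a $30K problem vs. 10 loaded man-hours.
          exposure = liability(0.01, 30_000)  # $300
          fix_cost = 10 * 200                 # hours x loaded rate (~2x salary)
          print(f"${exposure:,.0f} exposure vs. ${fix_cost:,} fix: accept the risk")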

      • by jeti ( 105266 )

        But how can we assess the probability of a successful attack? Since most companies choose not to disclose breaches, we don't have meaningful statistics to base our estimates on.

        • That is a real problem. And honestly, I don't think there's a large enough sample to make such statistics all that meaningful to begin with. But what you can do is rate against other risks. Don't say it's an exact percentage; break it down into something more like 99% (almost certain), 75% (likely), 25% (possible) and 1% (unlikely). Notice I left out 50%... It might be worth adding a (0.01%, highly unlikely), but the point is to emphasize the label, not the percentage. Don't claim it as an actual percentage
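
          One way to encode that advice, as a minimal Python sketch (the labels and cutoffs are the ones suggested above; the snapping helper is illustrative):

              # Coarse, labeled bands instead of fake precision; lead with the label.
              RISK_BANDS = {
                  "almost certain": 0.99,
                  "likely": 0.75,
                  "possible": 0.25,
                  "unlikely": 0.01,
                  # deliberately no 50% band; it would dodge the call
              }

              def band_label(estimate: float) -> str:
                  # Snap a rough estimate to the nearest named band.
                  return min(RISK_BANDS, key=lambda name: abs(RISK_BANDS[name] - estimate))

              print(band_label(0.3))  # "possible"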

        • If you are asking for resources to be spent to avoid a particular risk, you either have the professional knowledge to discuss the level of risk, or you're talking out your ass.

          How can you get that knowledge? We logged just over 10,000 brute-force attacks last year on the x,000 sites we monitor. I can query those logs to provide various numbers, so logging is one way. The major security lists get several reports per day. Monitoring those lists will help you understand the threats - how common they are
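
          A minimal sketch of that kind of log query (the log path and the "Failed password" pattern assume a stock Linux sshd auth log; adjust both for whatever your systems actually emit):

              # Count brute-force indicators in an ssh auth log, by source IP.
              import re
              from collections import Counter

              attempts = Counter()
              with open("/var/log/auth.log") as log:
                  for line in log:
                      match = re.search(r"Failed password .* from (\S+)", line)
                      if match:
                          attempts[match.group(1)] += 1

              for ip, count in attempts.most_common(10):
                  print(f"{count:6d}  {ip}")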

      • The truth is that those probabilities are just totally fabricated from whole cloth. Now, on the one hand, it's true that business managers go through the day making decisions in exactly that way all the time. But engineers are typically trained to base decisions and declarations on actual hard data (to several decimal places of accuracy), and the cognitive dissonance of that same person just inventing numbers to win an argument may be too much to bear.

        • I don't know about you, but I HAVE hard data to base my estimates on. If you don't, a professional opinion giving a rough estimate isn't "made of whole cloth". If you're making recommendations, you should be able to say with some confidence that an SQL injection attack on a public web server is at least 100X more LIKELY than having your WAP cracked. Management may not know that, but somebody in IT should know it and be able to communicate it to management.

    • Yeah, my boss wouldn't understand that either. He does understand BUSINESS RISKS. If I point to a WSJ or Forbes article about a company that got owned and say "we are vulnerable to the same thing" he'll understand that. He doesn't understand SSL ciphers, he's not supposed to. He does understand "PR nightmare" and "noncompliance".

      A smart boss would ask you to explain the significance of the "too technical" stuff that you are trying to explain. If the boss still doesn't understand and doesn't ask why you think something is important, then he is just as much to blame for the communication failure.

      • > If the boss still doesn't understand and doesn't ask why you think something is important, then
        > he is just as much to blame for the communication failure

        That's true for ANY communication failure. What does blame get you?

        If I'd like to get something done, I can either communicate it in a way that gets it done, or not.
        It does me no good to go about it such that it fails and I can blame the other guy.
        Blame and $2 will buy a cup of coffee ($8 in California).

    • Should it not be management's job to know that security vulnerabilities lead to business problems? Why is it IT's responsibility to learn business management, legal requirements, etc.?

      If I go to management and say "the fire escape is broken," it is their responsibility to deal with it and understand the consequences (that is what they're paid the big bucks for, after all). I shouldn't have to go through case law, reports and news stories and provide them with a PowerPoint documenting that company X got fined 50

      • Perhaps they should do this and that. They aren't reading this thread, so talking about what they should do is not helpful.
        What can we nerds do to help the situation? If speaking in terms of business risks solves the problem ...

        You see relevant news stories on CNN / MSNBC / Fox. How hard is it, really, to send your boss the link with a note saying "I noticed we're vulnerable to this. I'd like to discuss securing our systems from this type of problem"?

        • You see relevant news stories on CNN / MSNBC / Fox.

          Not necessarily (I certainly don't anyway). How many news articles do people see in reality? Especially ones with enough technical detail that an IT admin could say "we're vulnerable to that exact situation".

          Talking in terms of business risks isn't really an IT person's job, nor is it their expertise (so they wouldn't be able to do it well). If management is already ignoring their senior IT staff, then if IT came along and tried to sound "businessy", management is likely to pat them on the head and reply with "you let me w

    • by Spudley ( 171066 )

      Maybe point to a line in the WSJ article that says...

      Yes, but that would require the techie to understand the management speak in the article.

      There's the problem again.

  • Management won't listen to anything regarding security until there's a personal fine associated with it. In fact, ignoring IT's comments allows them to claim ignorance. If you want upper management to pay attention to security risks, make them liable. Until then, IT is just another fall-guy when stuff breaks.

  • The only way I've been able to implement proper security at any site has been from the ground up. You find a couple of developers or application-support folks with a clue, and get their systems and processes into shape. At the same time, streamline and increase stability. Hopefully other teams will see the benefits of your changes and follow suit. The only security that comes from on high is security theater, e.g. PCI compliance auditing, which never addresses any real security issues, only check boxes

  • "This e-mail is about my _________ concerns. It is my understanding that I am not funded to replace _________ with the latest version due to budget concerns. As such, I will leave _________ at its current version, ___ until it is reviewed during _________. If this is incorrect, please reply."

  • by Opportunist ( 166417 ) on Friday September 06, 2013 @12:05PM (#44776387)

    I've been in IT security for about a decade now. I've had my share of consulting jobs, and inevitably poor security communication comes down to one of three reasons:

    1. Ignorance at management levels
    2. Blame-shifting
    3. Blinkered management

    Let's shed some light on them.

    The first is easily explained, and I guess everyone can tell at least one tale of noticing something horribly wrong in their IT setup, dashing to their superior, reporting the finding, and being met with a blank stare and a "huh? Erh... ooookay... we... I mean, I will look into it...", leaving you with the feeling that entrusting your superior with a problem is like dumping a baby into a trash can. When this happens more than once, IT becomes complacent as well. Management doesn't give a fuck, so why should we?

    The second is actually worse, but rather common around Europe in my experience: the person who reports the finding gets the blame, directly or indirectly. Either they get chewed out over how they could let that happen (whether it is actually their responsibility or not), or they are now seen by their peers as some sort of management snitch because they ratted someone out and now someone gets the blame. This is usually the case in companies where finding a culprit has a higher priority than finding the person who can fix the problem. It's amazing how often that is actually the case.

    And finally, management that just doesn't give a fuck. This is usually tied to the first case; ignorance of the importance and size of a problem is tightly coupled with the willingness to ignore it altogether and wish it away.

    In a culture like that, NOBODY is very keen to report problems. It's time management started to understand that problems are part of the game and not something that can easily be avoided. The human factor is always in play when work is done, and humans err. By definition. Anyone claiming he doesn't make mistakes simply does not work; it is that simple. Only if you don't work can you make no mistakes. So mistakes will happen and problems will arise. It is quite pointless to start pointing fingers and spending resources finding the culprit, because after we find him we still have the problem on the table! We can do that AFTER the problem is solved. That not only gives the person responsible the chance to fix it themselves, but it is also the sensible order of doing things: first get the problem fixed, then find a strategy to avoid repeating the mistake. Yes, that may include replacing the person responsible, but first of all we should find out just WHY he made that mistake, WHY it was possible for him to make it (actually, 9 out of 10 times it's NOT the person's mistake, it's a mistake in the process, but it's just easier to fire some easily replaceable worker than the process manager...) and HOW we can avoid making it again. Just replacing someone does NOT fix a problem if the process behind it is shot, because the next person will make the SAME mistake again.

    But I ramble, back onto security reporting.

    Companies need to establish a culture of security awareness amongst their workers. Security is the minimum of technical security and staff security. The MINIMUM. Not the average. I can have the tightest security system in the world, and it means nothing if the users hand out their passwords to anyone who calls. Of course, preferably the human factor would be taken out of security altogether, but that is not easily possible. Security reporting must be a process, and a process that is rewarding for the person reporting. Someone reporting a security risk must not be seen as a "problem maker", as he so often is. He upset the apple cart, he put sand in the gears, he makes the machine run wobbly. Everything went smoothly, and then that idiot comes along and says we're insecure. So what, anyone see anything bad happening? This is, sadly, often the approach taken to ITSEC. We have to understand that someone who reports a security problem is not "making" the problem but actually helping us avoid a much bigger one.

    • The person who reports the finding gets the blame.

      Also known as "shoot the messenger". It's a common problem throughout the world that the person who reports a problem (security issue, software bug, licence lapse, theft) gets tarred with it. A lot of managers actually promote this way of dealing with issues, as it keeps down the number of fault reports - which they are measured against and rewarded for.

      The only way this can ever, in my experience, get resolved is by having QA as an entirely different management structure: outside of software devel

  • Adversarial is the key word here. The business doesn't view security as an entity trying to protect it from liability, get it on par with industry norms, and maybe even create some efficiency and ease support burdens; it views security as an impediment to signing the contract. Your own security team is just trying to save you from yourself... arguing with them as a proxy for the customer doesn't get you anywhere but into even more trouble.

  • Around here, security rules. Adds and changes to apps go through security review, separate standards are published and enforced, and all this lives inside a secured perimeter that is well monitored and regularly improved.

    My own workstation refuses most removable media, and if I can get one attached, my senior and not-so-senior managers get email alerts that this was done. Yes, this impacts an old app that expects to save to a floppy, even the SUBST command trips this alert. Flash drives etc don't work any
