Security

Insurer Won't Pay Out For Security Breach Because of Lax Security 119

chicksdaddy writes: In what may become a trend, an insurance company is denying a claim from a California healthcare provider following the leak of data on more than 32,000 patients. The insurer, Columbia Casualty, charges that Cottage Health System did an inadequate job of protecting patient data. In a complaint filed in U.S. District Court in California, Columbia alleges that the breach occurred because Cottage and a third-party vendor, INSYNC Computer Solution, Inc., failed to follow "minimum required practices," as spelled out in the policy. Among other things, Cottage "stored medical records on a system that was fully accessible to the internet but failed to install encryption or take other security measures to protect patient information from becoming available to anyone who 'surfed' the Internet," the complaint alleges. Disputes like this may become more common, as insurers anxious to get into a cyber insurance market that's growing by about 40% annually use liberally written exclusions to hedge against "known unknowns" like lax IT practices, pre-existing conditions (such as prior compromises), and so on.
This discussion has been archived. No new comments can be posted.


  • Seems reasonable (Score:5, Insightful)

    by Bruce66423 ( 1678196 ) on Wednesday May 27, 2015 @04:31AM (#49780709)
    If a company cuts corners on security, it shouldn't be able to make a claim, in the same way that I can't make a claim if I leave my door unlocked and get burgled. There's going to be a good living for lawyers establishing what the required level of security is. But if this incentivises senior managers to ask the right questions, then it's probably a good development.
    • Re:Seems reasonable (Score:5, Interesting)

      by JaredOfEuropa ( 526365 ) on Wednesday May 27, 2015 @04:37AM (#49780727) Journal
      The hard part is indeed establishing what the right level of security is and how to evaluate companies against that. At least over here, the exclusions for burglary are pretty clear cut: leaving your door or a window open, and for insuring more valuable stuff there are often extra provisions like requiring "x" star locks and bolt, or a class "y" safe or class "z" alarm system and so on. With IT security, it's not just about what stuff you have installed and what systems you have left open or not; IT security is about people and process, as much or more than it is about systems.
      • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday May 27, 2015 @05:06AM (#49780817) Journal
        Not that real world IT systems often ascend to this level of security; but the issue is not going to be clarified by the fact that the analogy to physical security is only partially accurate: everyone accepts that (for a given purpose; bank vaults and nuclear installations get judged differently than houses) there is some level of 'reasonable security', which reflects appropriate caution on the policyholder's part; but is known to be breakable. Materials have limited strength, police have nonzero response time, sensors generate false negatives.

        With IT systems (at least at the level of software attacks; if they break in at the silicon level it's another story), there is a platonic essence of 'the secure' floating out there, though generally far, far, far too expensive, cumbersome, and slow to build to ever see the light of day; and there really isn't the same degree of agreement about what counts as 'secure enough for X' or 'incompetent'. Gross incompetence is something you can identify, and there are various formally proven systems in existence, mostly for the constrained use cases of cost-insensitive customers; but the stuff in the middle is very much up in the air.
        • Re:Seems reasonable (Score:4, Interesting)

          by jbolden ( 176878 ) on Wednesday May 27, 2015 @06:18AM (#49781011) Homepage

          Industry handles this in other areas, and for that matter in security as well, by having auditing firms and engaging in a "best practices" audit. "Best practices" doesn't actually mean best practice, but rather not doing stupid or dangerous stuff. The audit is how that gets determined.

          • by Anonymous Coward on Wednesday May 27, 2015 @06:50AM (#49781121)

            I'm not so sure about that. I've had the misfortune of dealing with auditors whose definition of best practices amounted to complete non-deviation from things they obviously read out of a college textbook and did not understand at all.

            The notion of actual risk and threat analysis and applying practices to suit situations was completely alien to them.

            I've also dealt with very competent auditors. I rather miss dealing with them. I imagine the incompetent ones cost less, and that kind of thing is going to be a problem as security audits become more prevalent.

            That, and we must never forget that as much as we may applaud the insurance company in this particular story for calling out poor practices, the primary purpose of a modern insurance company is to take your money and give you nothing in return. Everybody needs to be very aware of that, and be untrusting in all your dealings with anyone in the insurance business.

            • by jbolden ( 176878 )

              I agree there are terrible auditors who don't understand what they are doing. But in most companies you can push back against that; it's just that the burden then switches to you. You have to verify and certify that alternative approach X is better than industry-standard approach Y.

              As far as the rest, the purpose of an insurance company is to pool risk. The person being insured should likely not want to have to file a claim because that means something bad happened. The company doesn't want to give no

            • by mlts ( 1038732 )

              For a while after 2001, there were auditors and "security consultants" (described best by another /. poster as "suit wearing chatter monkeys") who would do their job by chucking existing solutions and installing Windows, saying that Linux isn't "Sarbanes-Oxley compliant." Thankfully this has died down to a dull roar... but in general, it still remains that an OS with FIPS, Common Criteria, EAL 3, and other certifications is going to be a lot more auditor-friendly than one without them.

              I probably would say tha

              • >If proper routers are too expensive, a PC with a bunch of NICs and PfSense can do the job for a smaller installation.

                I've come across several individuals recommending that one buy a cheap laptop, configure SmoothWall or a similar Linux distro on it, and use that as a firewall, router, and DNS server. One end is connected to the pre-pwned junk from your ISP, and the other end is connected to your system.

            • That, and we must never forget that as much as we may applaud the insurance company in this particular story for calling out poor practices, the primary purpose of a modern insurance company is to take your money and give you nothing in return. Everybody needs to be very aware of that, and be untrusting in all your dealings with anyone in the insurance business.

              As in all industries, there are the good and the bad. I would posit that you are speaking about "bad" insurance companies, not good ones. Not eve

          • "Best practices audit" is usually code for "how many corners can we cut and still be profitable if we get sued for X?" Auto manufacturers are notorious for this, but all industries do it to some extent. A certain ignition switch case comes to mind.
            • by jbolden ( 176878 )

              Yes, that's a fair characterization. For companies below the line, i.e. those that would lose a lawsuit easily, this is helpful.

        • by Rich0 ( 548339 ) on Wednesday May 27, 2015 @06:33AM (#49781061) Homepage

          everyone accepts that (for a given purpose; bank vaults and nuclear installations get judged differently than houses) there is some level of 'reasonable security', which reflects appropriate caution on the policyholder's part; but is known to be breakable.

          I agree with your post. I'll just add that a big problem with IT security is that companies cannot rely on the same level of protection from governments in preventing intrusion.

          For example, if I have a safe in my house, the means an attacker would have to penetrate it are going to be limited. Since my township has police and neighbors that wander around, attackers can only spend so much time there before they're likely to be detected. They can generally only carry in stuff that will fit in the doors and is man-portable, since if they have to cut a hole in the house and lower their equipment using a giant crane somebody is likely to notice. If they want to use explosives they will have to defeat numerous regulatory and border controls designed to prevent criminals from gaining access to them, and of course they will be detected quickly. Some destructive devices like nuclear weapons are theoretically capable of cracking a safe, but in practice are so tightly controlled that no common thief will have them. If the criminal is detected at any point, the police will respond and will escalate force as necessary - it is extremely unlikely that the intruder will actually be able to defeat the police. If the criminal attempted to bring a platoon of tanks along to support their getaway, the US would mobilize its considerable military and destroy them.

          On the other hand, if somebody wants to break into my computer over the internet, most likely nobody is going to be looking for their intrusion attempts but me, and if they succeed there will be no immediate response unless I beg for a response from the FBI/etc. An intruder can attack me from a foreign country without ever having to go through a customs control point. They can use the absolute latest technology to pull off their intrusion. Indeed, a foreign military might even sponsor the intrusion using the resources of a major state, and most likely the military of my own state will not do anything to resist them.

          The only reason our homes and businesses have physical security is that we have built governments that provide a reasonable assurance of physical security. Sure, we need to make small efforts like locking our doors to sufficiently deter an attacker, but these measures are very inexpensive because taxpayers are spending the necessary billions to build all the other infrastructure.

          When it comes to computer security, for various reasons that secure environment does not exist.

          • by jbolden ( 176878 )

            Centcom is interested in starting to build a government infrastructure for defense. They agree this needs more collective action and government assistance. Right now the public is pulling in the opposite direction however.

          • by Archangel Michael ( 180766 ) on Wednesday May 27, 2015 @10:25AM (#49782813) Journal

            I agree with your post. I'll just add that a big problem with IT security is that companies cannot rely on the same level of protection from governments in preventing intrusion.

            I am in IT, but not in Security. However, I don't need to know security to know that a large part of the problem is that money fixes problems, and nobody wants to spend the money needed to fix the problems. Further, problems are pushed down to the people least able to fix them (consumers) more often than not.

            These security breaches are going to become even more prevalent, and no amount of security will ever resolve them completely. The real fix, IMHO, is to assume that all this info is publicly traded, even when it shouldn't be, and work the problem from there. If systems were in place that made assumptions like this, the problem would be much easier to define and fix.

            • Those security breaches will get fixed once courts start ordering companies whose data was breached to pay each individual whose data was taken a minimum of US$1,000,000. IOW, if 250 people are affected, the payout is US$250,000,000; if 10,000 people's data is taken, the payout is US$10,000,000,000.

          • by mlts ( 1038732 )

            To boot, with physical security, intruders can get shot. However, if an attacker is going after your stuff via the Internet, there isn't much one can do back to hurt them, especially if they are in a country that doesn't like your home nation. It is a purely defensive war, in which victory can't be obtained, only mitigation or avoidance of defeat.

            However, we do have one thing on our side when it comes to computer security... the air gap. Not 100% secure (as Stuxnet showed), but it forces an attacker to

        • The people who know how to determine a reasonable level of security will be the ones left standing in the business in the coming years. Time to either understand the various audit levels or get out. I know I won't use cloud services yet, because there is no agreed-upon minimum level of security. That makes all of them Mickey Mouse in my mind.
        • by Anonymous Coward

          Like my dentist, who has a Comcast-provided gateway device (modem, router, firewall) and all of the patient records on a server whose admin password every staff member knows and which has never been changed, ever, and on which three different vendors' remote access programs are installed. Not difficult to recognize gross negligence there at all.

        • If the insurers got together and set a minimum requirement for that middle ground, the costs would go down quickly. I'd say that, at a minimum, records should only be accessible via the intranet, with all machines able to access the intranet being company-issued (BYOD is moronic). If internet access is allowed on the same machine, it should only be through a virtual machine.

          For financial and medical institutions the cost of a scheme like this should be negligible. My sister works for a bank and at least they have

          • People want to work from home and companies recognise that this is desirable, so I think you're hoping for too much in trying to ban it. However, making it safer - and getting insurance companies to impose the right constraints - may be the best way forward. 'If your system is hacked because an unauthorised laptop was attached to it, we don't pay out' should be a standard insurance clause. Similarly, trying to separate the email system from the rest of the system to sandbox spear phishing attacks should be require
      • It can be about systems - what policies you have, and whether you have been audited for security shortcomings. People and process are important factors, but they don't count if you have no security system in place and no way of knowing whether it's been configured to work.

        Hopefully this will drive more established standards for IT security, covering both the world-class 'lock' and the 'you left the key under the mat, so it doesn't count' cases.

        • My previous employer (in the UK) wanted to be able to store credit card details of customers for automatic payment processing. Unfortunately for us, a law came out that essentially meant that to get certified, we'd have to switch to MS Windows servers. That was the only platform for which there were guidelines and which could be audited. In the end we gave up and had a 3rd party process payments for us. The law pretty much caused the monopoly of sagepay in the UK.

          • Which law is that?
            • by Anonymous Coward

              Not a law, an industry standard. The first versions (1.0/1.1) of PCI DSS, an industry nonsense standard for credit card processing, basically didn't have a Linux/BSD component. Version 2.0 mandated a Windows antivirus package on the Linux machine performing credit processing, creating a market for non-working antivirus for Linux to meet checkbox compliance. Version 3.0 has brought some sanity.

              Basically, the entire standard is a list of nonsense rules (like ISO 9000) and doesn't provide security.

      • by Anonymous Coward

        Lawyers and judges have been doing this for years. Eventually a case will go to a court and it'll be argued by both sides and a judge will make a ruling. That ruling will establish precedent.

        What the right level of security is will be measured by risk, damages, cost mitigation, and legal responses, all of which will be handled by lawyers. The technical side will just be to define the boundaries.

        • So the lawyers win again...
          • If you haven't been paying attention, they always win. Even the losers get paid well.

          • by Anonymous Coward

            Pretty much. Lawyers' business is to charge you money to navigate a complex maze of legal stuff that they defined. Lawyers built in the associated fees and damages so they can charge you, under those damages, to make sure you're protected from the system they created. Sounds unethical? Doesn't matter. Sounds illegal, kind of like a protection racket? It's not, because lawyers decide what's legal.

            Get this: during the economic downturn in 2008, pay scales in all industries went down, unemployment went up

            • Since we're just talking anecdotes, I heard quotes for felony cases go down considerably from big-brand and smaller-brand lawyers in my city during and after the recession (since it's really still ongoing, even though they say it's the "new normal" or whatever).
      • Typically a company has to undergo an assessment to qualify for the insurance and then reassess annually. At least that has been the case for every information security insurance policy with which I have been involved. Where companies can veer off track is if they are not consistent in their application of the assessment. For example, a new system or process goes online and a senior manager just wants it done, NOW! The new system or process may never be considered under that annual assessme

      • by Anonymous Coward

        The hard part is indeed establishing what the right level of security is and how to evaluate companies against that.

        No, that's the easy part. The insurer in this case has already made that assessment, albeit after the fact.

        What needs to happen now is for companies to make that same assessment when taking out the policy in the first place -- get the insurer to sign off on your security practices up front, and you won't get this kind of dispute.

      • Re:Seems reasonable (Score:5, Informative)

        by luis_a_espinal ( 1810296 ) on Wednesday May 27, 2015 @09:13AM (#49782165)

        The hard part is indeed establishing what the right level of security is and how to evaluate companies against that. At least over here, the exclusions for burglary are pretty clear cut: leaving your door or a window open, and for insuring more valuable stuff there are often extra provisions like requiring "x" star locks and bolt, or a class "y" safe or class "z" alarm system and so on. With IT security, it's not just about what stuff you have installed and what systems you have left open or not; IT security is about people and process, as much or more than it is about systems.

        I would disagree with you on this (somewhat). There are well-established practices on how to build secure systems, for each major development platform (JEE, .NET, RoR, etc.) and also for general decision-making.

        Any organization, big or small, needs to be able to come up with scenarios and questions for the things that need care, and for which it might need to provide evidence of attention. The important thing is to exercise due diligence when it comes to defending your business against attacks, and to be able to provide evidence of such due diligence.

        If we are in e-business or are bound by PCI, HIPAA and/or SOX compliance, the following questions would come to mind (just an example):

        1. Are we addressing the top 10 risks identified by OWASP?
          1. If so, can we quickly identify how we address them?
          2. What other risks identified by OWASP do we address, and how?
        2. How do we address CERT alerts and advisories?
        3. Are we on top of security patches?
        4. Are the underlying systems' security patches up to date?
          1. If so, can we quickly provide evidence of this?
        5. If we are bound by HIPAA and/or SOX, how do we address security concerns that stem from these regulations?
          1. How do we quickly provide evidence (evidence of process and assurance)?
        6. Do we have a multi-tiered architecture, or do we run everything co-located?
        7. Are back-end databases on their own machines, in their own subnets outside of the DMZ?
        8. Are "mid-tier" services on their own machines, separated from databases?
        9. Are they in a DMZ? Are they proxied by an HTTP server on different machines?
        10. Do we have firewalls? If so, do we keep an inventory of their rules?
        11. Are we up to date with patches for network assets (firewalls, SSL appliances, etc.)?
        12. Are we still on SSL 3.0 or outdated TLS versions?
        13. Do we specifically disable anonymous ciphers?
        14. If we use LDAP, do we disable anonymous binds?
        15. Do we use IPSec to secure all communication channels (even internal ones; a requirement for banking in several countries)?
        16. If not, why? How do we compensate?
        17. If we are in e-commerce, how do we demonstrate that we are PCI-compliant?

        In my opinion and experience, these questions are the starting point for a framework to determine the right level of security in a system. More should obviously be piled onto this list, but anything less would leave a system open to preventable vulnerabilities.
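
        As a concrete example of the kind of evidence items 12 and 13 ask for, here is a minimal Python sketch; "example.com" is just a placeholder, and this only reports what a modern default client negotiates rather than performing a full protocol scan:

        import socket
        import ssl

        def negotiated_tls_version(host: str, port: int = 443) -> str:
            """Report the TLS version a server negotiates with a modern default client context."""
            context = ssl.create_default_context()  # the default context refuses SSLv3 and other legacy protocols
            with socket.create_connection((host, port), timeout=5) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.version()

        if __name__ == "__main__":
            print(negotiated_tls_version("example.com"))  # e.g. "TLSv1.3"

        Run periodically against your own endpoints and archived, output like that is exactly the "quickly provide evidence" the list keeps asking for.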

        And that is the thing. The right level of security is the one that helps you deal with preventable vulnerabilities that you, the generic you, should know about well in advance, vulnerabilities that are well documented. How costly the prevention is, that is a different topic, and any business will be hard pressed to justify to an insurer that it forwent dealing with a vulnerability because it was too expensive.

        Answers to those questions, and evidence backing them, would constitute proof that an organization exercised reasonable due diligence in establishing the right level of security. Moreover, they give it a much greater chance of disarming an insurer trying to find a way to avoid covering damages.

        Notwithstanding the ongoing abuses in the insurance business, insurers have rights too. My general health and life insurance is not going to pay my family if I kill myself while BASE jumping with blood alcohol levels up the wazoo.

        • While I agree 100% with what you're saying, I think the problem lies in the fact that there is no consistent, *external* measure to indicate your security level, and that's where things fly off the rails.

          There are things like SOX compliance (in the US, anyway), but that's more for auditability than security. What are the minimum required aspects your infrastructure has to have to be able to say that you're considered reasonably "secure"? Encryption of all data stores using an officially recognised encrypti

      • by Bengie ( 1121981 )
        Follow best practices: two-factor auth; only whitelisted executables can run; all non-system programs run in separate VMs/jails; minimum permissions; systems that store sensitive data have no direct internet access and are partitioned into a separate network behind a firewall that only allows the ports that are absolutely required; any program that can access the internet cannot also access sensitive data; etc.
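
        As a rough illustration of the "only whitelisted executables can run" point, here is a minimal Python sketch; the digest set is a placeholder, and real deployments would enforce this at the OS level (AppLocker, SELinux, and the like) rather than in a wrapper script:

        import hashlib
        import subprocess
        from pathlib import Path

        # Placeholder whitelist of SHA-256 digests for approved binaries.
        APPROVED_SHA256 = {"0" * 64}

        def run_if_whitelisted(path: str) -> None:
            """Hash the binary, compare against the approved set, and deny by default."""
            digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
            if digest not in APPROVED_SHA256:
                raise PermissionError(f"{path} is not on the executable whitelist")
            subprocess.run([path], check=True)

        The enforcement mechanism differs by platform, but the decision logic is the same everywhere: hash, compare, deny by default.
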
      • by mlts ( 1038732 )

        One has to be more specific than "firewalls" and "encryption" as well. I can put up a Linux box, use LUKS for all partitions except the kernel, stuff a rule in nftables, and I can claim I have both "firewalls" and "encryption".

        However, does that mean an intruder from the outside is locked out? Hardly. The disk encryption means nothing when the volume is mounted and the data is being copied off remotely.

        What is really needed is to go beyond generalities and have a specific set of guidelines. FISMA come

      • The hard part is indeed establishing what the right level of security is and how to evaluate companies against that.

        That will be an issue, but I get the impression that many organizations will have to make significant improvements before it becomes a matter of immediate practical concern.

      • Their shit was being indexed by search engines. I'm willing to bet they didn't do anything right. This is more than likely a major HIPAA violation. You're right; it is about people and process. Their people and processes apparently suck.

      • The hard part is indeed establishing what the right level of security is and how to evaluate companies against that. At least over here, the exclusions for burglary are pretty clear cut: leaving your door or a window open, and for insuring more valuable stuff there are often extra provisions like requiring "x" star locks and bolt, or a class "y" safe or class "z" alarm system and so on. With IT security, it's not just about what stuff you have installed and what systems you have left open or not; IT security is about people and process, as much or more than it is about systems.

        It's fairly simple and done in just about every other industry. The insurance companies will come up with standards. Then 3rd party "Security experts" will pop up offering certification. "We're Security level blackwatch plaid certified! We get a $20k discount on our policy!" etc... Microsoft finds a bug and doesn't patch it? It's hard for your local bank to sue them... but the entire insurance industry?

        This is a good thing.

      • Not much different from the right level of maintenance for an airplane, a ship, a bridge, a tower, or any other complex engineered device. There will always be gray zones where a court will have to rule; however, I believe you can perfectly well require a certain number of things to be done in order to keep your business and IT infrastructure covered in case of a security breach.
    • by Rich0 ( 548339 )

      If a company cuts corners on security, it shouldn't be able to make a claim, in the same way that I can't make a claim if I leave my door unlocked and get burgled. There's going to be a good living for lawyers establishing what the required level of security is. But if this incentivises senior managers to ask the right questions, then it's probably a good development.

      Maybe. If you're buying an insurance policy to cover leaks of information, then almost by definition any claim is going to be the result of lax security. So, why bother buying insurance at all if the insurer can get out of it? The likely result is that those harmed won't be able to collect damages since there will be no insurance, and the company that lost the data will simply declare bankruptcy.

      I think there are better precedents. For example, my company is routinely audited by its insurers or other ce

      • Technically, insurance companies never pay; they keep working down the chain of liability to get the next guy to pay.

        How do you insure against hardware failures if the shrink-wrap EULA washes its hands of any liability? It will be a major change in the industry.

        • That's why the grandparent's post mentions audits that are defined by the insurance company. If the insurance company believes that you've taken all reasonable precautions, then the buck stops there. As the insurance client, your responsibility is to meet the insurer's requirements. If something *still* goes bad, then the insurer gives you money.

          How the insurer reclaims that money is a different question altogether, and is generally irrelevant to you as the client (with the obvious exception of raisin

    • If a company cuts corners on security, it shouldn't be able to make a claim, in the same way that I can't make a claim if I leave my door unlocked and get burgled.

      I agree with the second portion of your comment. It's an entirely different matter when it's personal property versus protected information. There is, or should be, a certain level of security afforded to one's private property regardless of the level of security maintained. Meaning I don't care if the door is wide open; it's still wrong to have someone come in and take what isn'

    • How many IT consultants have Professional Liability Insurance? What would the premiums be - 3-5%?!

      This is going to be a big problem in the industry.

    • by Anonymous Coward

      If a company cuts corners on security, it shouldn't be able to make a claim, in the same way that I can't make a claim if I leave my door unlocked and get burgled.

      To a point I can agree with that. BUT it also sounds to me like a typical weaselly insurance company squirming out of a valid claim. "We sell security breach insurance. You had a breach? By being breached you must not have taken reasonable security measures. Claim denied."

  • In a similar way, most homeowner's insurance will also not pay out if there is no sign of forced entry. I also foresee patient litigation for allowing publicly accessible records on the internet.
    • This. It's important to understand your policy. You can purchase a policy that covers that situation, but it will cost more.

      If you're buying insurance to cover security breaches, your most likely risk is that a breach happens through some level of negligence (negligence in the legal sense), a poorly crafted firewall ruleset, an unpatched server, etc. So it makes sense to purchase a policy that covers negligence. It will be more expensive; but a policy that doesn't cover negligence is probably not very us
  • Ahh..a pity. (Score:5, Interesting)

    by fuzzyfuzzyfungus ( 1223518 ) on Wednesday May 27, 2015 @04:57AM (#49780787) Journal
    For one brief shining moment, I thought that this story was about a health insurance company being dragged into court and beaten on by their insurance company; and my heart leapt and sang with the unalloyed joy of a Norman Rockwell puppy; because that would just be so beautiful.

    Alas, 'Cottage Health' is a medical provider of some sort, so such feelings swiftly evaporated.

    That aside, this seems like a situation that is simultaneously common sense (obviously you won't be able to buy 'cyber insurance' that covers egregious negligence, at least not for any price that doesn't reflect an essentially 100% chance of payout, plus the insurer's profit margins and transaction cost) and likely to be an endless nightmare of quibbling about what 'security' is.

    We've all seen the long, long history of attempts to do security-by-checklist, most of which allow you to say that you 'followed industry best practices' by closing the barn door after the horse is long gone, so long as the barn door was constructed with galvanized nails of suitable gauge and is running any antivirus product, efficacy irrelevant. It's not as though 'security' is fundamentally unknowable and intersubjective, man; but it sure isn't something you'd want a lawyer or a layman attempting to boil down into a chunk of contractual language. Barring some miracle of clarity, I suspect that we'll see quite a few dustups that basically involve the insurer's expert witnesses smearing the policyholder's security measures (if they did it by the checklist, the expert witnesses will be snide grey hats who eat 'best practices' for lunch; if they deviated from the checklist, it'll be hardasses on loan from the PCI compliance auditing process; if they implemented a mathematically proven exotic microkernel, it'll be somebody asking why Windows Updates weren't being applied in a timely manner) and the policyholder's expert witnesses puffing like salesmen about how strong the security was, and how it must have been an 'advanced persistent threat' to have hacked through such durable code walls.

    The fundamental question of 'did you fail to lock the door, or did somebody take a crowbar to it?' is sensible enough in the context of an insurance claim; but rigorously defining what 'locking the door' means in a complex IT operation, and where the boundary between 'incompetence' and 'unavoidable imperfection' lies, is not going to be pretty. My only hope is that if any of these go to a jury, the lawyers decide to strike anyone who sounds like they might know something about computers; because it's going to be a long, boring, slugging match of a case.
    • by AbRASiON ( 589899 ) * on Wednesday May 27, 2015 @06:03AM (#49780951) Journal

      You think that's bad? I very briefly thought this was due to some laptop / hardware being forced open by LAX airport security staff.

    • Actually, security by checklist is the way to go for writing an insurance policy. An underwriter should be able to work out actuarial tables based on which security best practices a company follows, and then price policies accordingly. For instance, if you pass a PCI scan, have virus scanners installed, don't give your users admin rights, have Websense installed, and you have data of $X value, you have a Y% chance of getting jacked, so your policy costs $Z.
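
      The underwriting arithmetic itself is trivial; the hard part is the actuarial inputs. Here is a toy Python sketch, with every number invented purely for illustration:

      def annual_premium(data_value: float, breach_probability: float,
                         loading_factor: float = 1.3) -> float:
          """Expected loss (probability x exposure) plus the insurer's loading."""
          return breach_probability * data_value * loading_factor

      # e.g. $5M of data at risk and a 2% annual breach probability given the controls above
      print(annual_premium(5_000_000, 0.02))  # 130000.0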

      I'm not saying that the checklist should

  • All you need to do is take some scissors to your Ethernet cable and put your server in a locked room. You now have unhackable data. There are absolutely lax security practices out there, but being connected inherently carries risk. Our job is to minimize that risk as much as possible through best practices, but nothing is absolutely bulletproof. Security isn't a destination either, it's an investment that never goes away. The day a business stops making that investment is the day their risk goes up.
    • Stuxnet got onto Iranian centrifuges disconnected from the Internet and in locked and secured facilities. The problem is that at some point, someone has to communicate with these systems, so perfect security isn't possible... even just talking to them runs into the "little Bobby tables" problem [xkcd.com].

      • I was being facetious - all systems have to be accessed in order to be useful. Cord-cutting is meant to illustrate the point that they are rendered effectively useless in the pursuit of obtaining absolute security. Some people think jokes are less funny when they are taken literally, over analyzed, and explained in explicit terms. Even that can be entertaining in its own way, thanks.
    • by jbolden ( 176878 )

      A server not connected to a network in a physically secure location was the situation for the computer that Bradley Manning stole from.

  • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday May 27, 2015 @06:03AM (#49780953) Journal
    In thinking about it, and how much of a clusterfuck this is likely to be; it struck me that there might actually be a way to restructure the incentives to provide some kind of hope:

    Historically, 'retail' insurance, for individuals and little stuff, was mostly statistical with a side of adversarial: Aside from a few token offers of a free fitbit or whatever, the insurer basically calculates your expected cost as best they can based on your demographics and history and charges you accordingly, and tries to weasel out of anything too unexpectedly expensive.

    However, for larger endeavors, (the ones I'm most familiar with are utility and public works projects, there may well be others), sometimes a more collaborative model reigned: the insurer would agree to pay out in the event of accidents, jobsite deaths, and so on, as usual, and the client would pay them for that; but the insurer would also provide guidance to the project, best practices, risk management, specialist expertise on how to minimize the number of expensive fuckups on a given type of project, expertise that the customer might not have, or have at the same level. This was mutually beneficial, since the customer didn't want accidents, the insurer didn't want to pay for accidents, and everyone was happiest if the project went smoothly.

    In a case like this, the incentives might align better if the contractor were delivering both the security and the breach insurance: this would immediately resolve the argument over whether the policyholder was negligent or the insurer needs to pay up. If the IT contractor got the systems hacked through negligence, that's their fault; and if they secured the systems but a hack was still pulled off, that's where the insurance policy comes in.

    This scheme would run the risk of encouraging the vendor to attempt to hide breaches small enough to sweep under the rug; but it would otherwise align incentives reasonably neatly: an IT management/insurance hybrid entity would internalize the cost of the level of security it manages to provide (more secure presumably means greater expenditures on good IT people; but more secure also means a lower effective cost of providing insurance, since you can expect fewer, smaller breaches and fewer, smaller claims). If the equilibrium turns out to be 'slack off, pay the claims', that suggests that the fines for shoddy data protection need to be larger; but the arrangement would induce the vendor to keep investing in security until the marginal cost of extra work on IT was higher than the marginal gain from lower expected costs in claims; so the knob to turn to get better security is relatively accessible.
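
    To make that marginal-cost argument concrete, here is a toy Python model; the breach-probability curve and every dollar figure are invented, and the only point is the stopping rule:

    def expected_claims(security_spend: float, exposure: float = 10_000_000.0) -> float:
        """Toy assumption: breach probability falls off as security spend rises."""
        probability = 0.2 / (1.0 + security_spend / 100_000.0)
        return probability * exposure

    def optimal_spend(step: float = 10_000.0) -> float:
        """Keep spending while each extra step saves more in expected claims than it costs."""
        spend = 0.0
        while expected_claims(spend) - expected_claims(spend + step) > step:
            spend += step
        return spend

    print(optimal_spend())  # the hybrid vendor's equilibrium under these made-up numbers

    If that equilibrium comes out too low, the knob mentioned above (larger fines, i.e. a larger effective exposure) pushes it back up.
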
    • Both levels need insurance, but I think you are right that the consultant needs the E&O coverage. One failure in the parallel though is time; if a bridge is built, the claim period lasts about 5-10% of its life. Same goes for most nonresidential buildings. What happens when a consultant is replaced on an account? Where does the new consultant assume liability for existing changes? Software bugs? Unpatched systems?

    • by BVis ( 267028 )

      if the IT contractor got the systems hacked through negligence, that's their fault; and if they secured the systems but a hack was still pulled off, that's where the insurance policy comes in.

      The IT contractor can't stay on-site 24/7 and monitor all the employees. The biggest security problems come from inside the organization: from idiots writing down their passwords to double-clicking on every single attachment they get, users will never stop creating new and interesting ways to be complete fucking

  • Gotta love it (Score:1, Offtopic)

    by JRV31 ( 2962911 )
    An insurance company trying to screw an insurance company. Gotta love it.
  • by Anonymous Coward

    This insurer should be jailed. The nerve.

    • Since when is it un-American to screw people over in the name of keeping profits for yourself? I've come to know that as the very definition of American.
  • Insurance is the biggest scam ever perpetrated in the history of mankind. You pay and pay and pay some more; then, when you need to use it, you're given every excuse possible why the coverage you've been paying for doesn't apply.

    When one takes into consideration the thousands of dollars each year the average person pours down the drain for insurance, it's no wonder people are going broke. That money could be used for more productive endeavors such as food, housing, education or transportation.

    Instead, the

    • I'll listen to your complaint after you have collected $40,000 to replace that car you just totaled. Doesn't matter whose fault it was, you will still collect. I'll agree that it seems like they are trying to screw everyone, but in the majority of cases they do pay up. Maybe not as much as you think you should get, but they do. Same for health insurance. I'm going to be donating a kidney in a few weeks to someone who needs it. The total cost of both operations (the one to take it from me and the one t
      • A) Insurance NEVER pays to replace a car, even if it is totaled. They give you 80% of the current value.

        B) It costs $40K to replace MY car, which the other guy totaled, but HIS insurance won't pay anything near that. I could sue him in court for damages to recover the rest of the money, but insurance companies have seen to it that you really can't sue in court any more, because then they would have to do what people are paying them to do.

        C) Your donation of a kidney is your choice. That is complete

        • by Anonymous Coward

          I had a baby who required NICU care for 7 days. His lungs weren't fully matured and he was having difficulty breathing, so he had to be placed on CPAP and eventually a tiny little ventilator. The cost of the life-saving care that he needed totaled well over $200,000, of which I paid $0. So tell me again how the couple hundred dollars a month I pay in health insurance isn't worth it?

  • by gstoddart ( 321705 ) on Wednesday May 27, 2015 @08:03AM (#49781587) Homepage

    because Cottage and a third-party vendor, INSYNC Computer Solution, Inc., failed to follow "minimum required practices," as spelled out in the policy. Among other things, Cottage "stored medical records on a system that was fully accessible to the internet but failed to install encryption or take other security measures to protect patient information from becoming available to anyone who 'surfed' the Internet," the complaint alleges

    And now what we need is criminal/financial penalties for companies who are so blindingly inept at security.

    If your business model involves confidential personal information, and you are this incompetent, you have no business being in the business you're in.

    This just screams someone was lazy, stupid, indifferent, or cheap ... possibly all of these things.

    I can completely see insurance companies saying "hell no we're not paying".

    When companies start having actual liability for being that terrible at security, they'll do something. Right now, they can mostly just say "wow, we wish we were sorry".

  • An insurance company that wants to start insuring new things should understand what they are insuring. They need to run tests beforehand to determine if the level of security is acceptable prior to issuing said policy. This would be similar to my getting a physical before obtaining a large dollar amount life insurance policy. The company wants to be fairly certain that my health is at an acceptable level and I am not expected to die soon from natural causes. Issuing policies solely as a means to profit, wit
    • The claim is that the insured made a bunch of medical records available to the public on the internet, presumably by accident. Unless that happened before the policy was in place, due diligence wouldn't have revealed a mistake that was going to be made later.

      There does seem to be a valid business need for "my employee screwed up" insurance. Companies that want that kind of coverage should probably make sure that's what they're buying.

  • Don't really wanna make it tough
    I just wanna tell you that I had enough
    Might sound crazy,
    But it ain't no lie,
    Bye, bye, bye

  • I'm guessing they made their employees change their passwords every 2-3 months. That's pretty much all the precaution any company I've ever worked for has taken.
  • by Anonymous Coward

    Let's see them apply this to IT support outsourced overseas.

    Let's see:

    Outsourcing = $0.002 per call

    Insurance for outsourcing = $200,000.00 per call

    Nope, the numbers don't add up.

  • They weren't hacked. They accidentally put a bunch of medical records online according to http://www.noozhawk.com/articl... [noozhawk.com] (linked from the article). If they bought an insurance policy that only pays out if they follow "minimum required practices", and they didn't, then of course it shouldn't pay out.

  • Ah, the brutal irony of these medical insurers not being covered by THEIR insurance. It harkens back to the days when the medical insurance industry claimed that patients’ pre-existing conditions disqualified them from coverage. I guess a pre-existing condition is now considered to be a compromise or a malware infestation. Unfortunately, their deep pockets will be able to cover their losses, while individuals oftentimes couldn’t cover the medical care or went bankrupt because of the exorbitant costs.
  • Finally the security offenders are forced to pay. It's weird how coverage gets all hung up on finding and punishing the perps.

    It's also weird how we're very comfortable with self-regulating systems like The Market or Evolution, but don't seem to think that these systems require feedback. How many security breaches would be avoided if there was consistent (negative) feedback?

  • I am at the point of saying that almost all insurance should in itself be illegal. Often employees fail to do as they are asked in a business. Sometimes the people at the top of the chain of command must simply accept employees' statements that work has been properly completed. I have been in a situation in which the president of a substantial company went a bit senile as he was past 80 years of age. Frankly, many things could slide by the old guy without him understanding that something was wrong. So even
  • This is the only thing that will get us better enterprise IT security. Insurers actually care about not paying, and hence they care about actual risk. They do not care about "compliance" unless it actually decreases their risk.

    (Side note: Anything you cannot get insured, like a nuclear reactor, is a very bad idea in the first place.)

  • The insurance would only kick in where the insured doesn't do their job of securing the data. There is no in-between in computer security: either you implement security and it works, or you don't and you take out insurance for when the inevitable happens.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...