Security

Is There a Lack of Market Incentives for Cybersecurity? (acm.org)

Computer science professor Moshe Y. Vardi is the Senior Editor of Communications of the ACM.

And he's concerned about the state of cybersecurity today: In 2017, I wrote: "So here we are, 70 years into the computer age and after three ACM Turing Awards in the area of cryptography (but none in cybersecurity), and we still do not seem to know how to build secure information systems." What would I write today? Clearly, I would write: "75 years," but I would not change a word in the rest of the sentence....

The slow progress in cybersecurity is leading many to conclude the problem is not due to just a lack of technical solutions but reflects a market failure, which disincentivizes those who may be able to fix serious security vulnerabilities from doing so. As I argued in 2020, the computing fields tend to focus on efficiency at the expense of resilience. Security usually comes at a cost in terms of performance, a cost that market players seem reluctant to pay. To discuss the market-failure issue and how to address it, the Computing Community Consortium organized a visioning workshop in August of this year on Mechanism Design for Improving Hardware Security. The opening talk was given by Paul Rosenzweig, an attorney who specializes in national security law. He argued that technological development is founded, in the end, on human behavior.

So, the key to good cybersecurity is to incentivize humans. Thus, the answer lies in the economics of cybersecurity, which is, mostly, a private domain with lots of externalities, where prices do not capture all costs.... As the philosopher Helen Nissenbaum pointed out in a 1996 article, while computing vendors are responsible for the reliability and safety of their product, the lack of liability results in lack of accountability. She warned us more than 25 years ago about eroding accountability in computerized societies. The development of the "move-fast-and-break-things" culture in this century shows that her warning was on the mark....

If we want to address the cyber-insecurity issue, we should start by welcoming liability into computing.

Thanks to long-time Slashdot reader shanen for sharing the article.
  • by The Evil Atheist ( 2484676 ) on Sunday October 23, 2022 @07:46AM (#62990305)
    Short-term profit. Whether it is security or waste reduction, neither is conducive to short-term profits.

    Regulations are supposed to exist to simulate market incentives for not screwing up. No market, left to its own devices, would ever conjure up such mechanisms. Companies that try to do the right thing will always be a few steps behind those that take short-cuts (or rather, simply do nothing).

    Fines and jail sentences should be strict, enforced, and heavy enough to simulate consequences, as if they arose in a market that took into account long-term costs to the entire system.
    • Don't need fines. Remove the liability restrictions shielding IT companies. Hold IT companies liable for hacks, cracks, etc., and watch security tighten right up.

      • Don't need fines. Remove the liability restrictions shielding IT companies. Hold IT companies liable for hacks, cracks, etc., and watch security tighten right up.

        You really think holding police officers personally responsible for the level of crime that exists in an area will magically make crime disappear?

        Let me clarify further with that idea of yours: good fucking luck hiring anyone to work in IT. You gonna put your dick on the legal chopping block, Mr. CEO of IT Company? Have fun finding insurance. I wouldn't post my resume within IP networking range of that liability nightmare.

        • This is a big part of why C-suite paychecks are so big. It's not really because they're visionaries or anything, it's risk acceptance. They're personally financially responsible and can go (and have gone) to jail for the malfeasance of their underlings, regardless of whether they knew about it. Juries are notoriously hostile to C-level folks, so if you want someone to take on that kind of risk, "fuck you, pay me."
          • Re: (Score:2, Informative)

            by Anonymous Coward

            Yeah, happens so often: https://www.washingtonpost.com... [washingtonpost.com]

          • Except less than 1% of C-suite execs are ever held financially responsible for the crimes.

          • by jvkjvk ( 102057 ) on Sunday October 23, 2022 @11:48AM (#62990793)

            >This is a big part of why C-suite paychecks are so big.
            >It's not really because they're visionaries or anything, it's risk acceptance.

            Sorry, what?

            The big part of why C-suite pay is so big is that we have cult worship of "leaders" and "leadership positions". There is no way that the majority of C-suite execs earn their paychecks when measured against the rest of the staff's salaries.

            "Risk acceptance" ? For every cite you can give of a C-suite exec going to jail, there are 100's or 1000's that don't. So, no, I don't find that an acceptable excuse for their paychecks.

            >so if you want someone to take on that kind of risk, "fuck you, pay me."

            What kind of risk? You have a better chance of getting hit by lightning. And the only risk you will be taking is if you are *personally* involved in illegal behaviour, knew about it, and didn't stop it. If you can cite one instance of where C-suite execs went to jail where they weren't culpable or knowledgeable about the criminal activity, I would be impressed.

              There is no way that the majority of C-suite execs earn their paychecks when measured against the rest of the staff's salaries.

              That's your (and certainly others') opinion, but that's it.

              For every cite you can give of a C-suite exec going to jail, there are 100's or 1000's that don't.

              Do you have data that shows that? Someone else made a similar argument, but unless it's borne out by sentencing statistics, it just sounds like there's an assumption of guilt rather than a presumption of innocence. I get this is /. and everyone just wants to see every CEO frogmarched into a federal pen, but...

              If you can cite one instance of where C-suite execs went to jail where they weren't culpable or knowledgeable about the criminal activity, I would be impressed.

              Maybe not quite what you're looking for, but this guy was acquitted [abovethelaw.com] after being charged with a crime that was never committed. In this case, ove

        • Not police officers, though; they should be held to a higher, not lower, standard.

          Police officers who commit crimes get lighter sentences than regular criminals.

          But yes, IT companies should be held liable when their software fails. Microsoft can release a patch that deletes every network share it connects to and nothing will be done to them.

        • You might understand why your analogy is faulty in three simple questions: Can a police officer influence who lives in their neighborhood? Can they invade everyone's privacy to the point where they can actually know what people do with the resources they have? Can they set policies that ensure that everyone does only what they are supposed to do?

    • by gweihir ( 88907 )

      I completely agree. There is no replacement for _personal_ fines and prison time for intentionally not doing what needs to be done in a leadership position. Nothing else will fix this mess.

    • by Z80a ( 971949 )

      If consumers were smart, those wasteful things would not succeed and money wouldn't be made.
      Of course, we don't teach people to be smart consumers at school or at home, etc., and we end up with this absolute mess that you need a government to solve.

    • Personal liability of corporate officers, even after they have gone elsewhere. Realistic values of the damages.

      Making up numbers: data breach affects a million people, damages $50/person = $50M. If there were 10 corporate officers at the time, they pay $5 million each. This in addition to the company's liability.

      Of course, they are welcome to get insurance. If they screw up often, or dramatically, their premiums will reflect the risk, and no one will hire them.

  • by Shadow of Eternity ( 795165 ) on Sunday October 23, 2022 @07:49AM (#62990315)

    It's consequences. Companies hoard vast quantities of extremely sensitive data and when they're inevitably breached, or even just outright leak it themselves through sheer incompetence, they face no meaningful long-term consequences. Why would any for-profit, shareholder-beholden corporation ever accept the significant and permanent expense of meaningful cybersecurity?

    They're ALREADY cutting corners and lying to people. Whatever incentive scheme you come up with, they'll just cut corners and lie about that too. If you want to fix this, the solution is to make shareholders absolutely fucking TERRIFIED of a cybersecurity breach. The solution here isn't incentives, it's COSTS. Start jailing executives and seizing corporate assets to directly compensate the victims of the breach. After the first company gets its most recent stock buyback seized and liquidated while the board faces 5+ years in jail, you'll never see another breach again unless it's a state-sponsored attack.

    • There is no accountability or transparency. The big names pay a lot to paper over the cracks and downplay matters through less-than-full disclosure. Perhaps half the big CVEs are reworks or variations on bugs 'already patched'. Your data and your photograph are personal information - they uniquely belong to you. However, we know otherwise. Mandatory fines for breaches, including full disclosure by independents, are needed.
    • I doubt bigger fines & harsh criminal prosecutions would make much impact on day-to-day considerations for implementing adequate security. Just like with building regulations, they need to pass certain basic thresholds before they can release software &/or expose systems to the interwebs pipes. We need to make sure companies, i.e. their executives, take action BEFORE there's a security breach, not years down the line when they may have moved on to another position or company. Is this a case for requiring certificates of compliance?
      • I doubt bigger fines & harsh criminal prosecutions would make much impact on day-to-day considerations for implementing adequate security.

        Is this a case for requiring certificates of compliance?

        The fines and harsh criminal prosecutions would be the outcome of failing to be compliant.

        Just like with building regulations

        And just like with building regulations, there are (or there should be) harsh punishments if it fails after supposedly, but not actually, complying with them.

      • by znrt ( 2424692 )

        I doubt bigger fines & harsh criminal prosecutions would make much impact

        no, they don't. it's not the severity of the consequence that acts as a deterrent, but the assurance that the consequences are inescapable. this is how it works in regular crime, and this is why corporations are so used to rampant negligence if not directly illegal behavior: they seldom face any consequences at all.

        but there are consequences foreseen in most legal systems, and the fines and punishments are probably strong enough; they just need to be consistently applied. the problem is that a judicial system th

        • ...because harsher sentencing & more vigorous enforcement has worked so well in the war on drugs, right?
          • by znrt ( 2424692 )

            if you read carefully i did recommend against contemplating harsher sentencing as a solution so ... insert [wat?] meme here.

            as for "more vigorous enforcement", then yes, of course. what about some enforcement at all? that's the whole point: corps and their decisionmakers regularly get away with anything. worst-case scenario is some million-dollar fine decades after the fact, which will most likely be watered down or negotiated away after the headlines, or may even be considered a marginal c

    • by jd ( 1658 )

      In part, sure, but no company expects to get hacked, even if their password file is in plain text and posted publicly.

      The minimum standard expected must therefore be raised. Maybe stronger requirements for credit card processing, or for holding personal data of any kind, so that it matters less that CEOs are oblivious.

    • by gweihir ( 88907 )

      After the first company gets its most recent stock buyback seized and liquidated while the board faces 5+ years in jail, you'll never see another breach again unless it's a state-sponsored attack.

      Indeed. Personal _criminal_ responsibility is a must. You screw this up, you pay. Personally. The CEO and the board must be held accountable for neglecting things. To be fair, we may still see the occasional successful attack for a long time; the technical debt in the IT security space is just too high. But we will see only rare cases where people were not trying to get things fixed and just hoped for the best. And that would be good enough.

    • It's consequences. Companies hoard vast quantities of extremely sensitive data and when they're inevitably breached, or even just outright leak it themselves through sheer incompetence, they face no meaningful long-term consequences.

      This. This right here is the answer. Equifax exposed financial data for over a hundred million people who were not its customers and could not in any meaningful sense opt out of its data collection in the first place. That it continues to exist as a corporate entity at all, and that its stock is worth more now than it was the day before the 2017 breach, is the sum total of everything you need to know about the problem.

  • Liability (Score:4, Interesting)

    by Retired Chemist ( 5039029 ) on Sunday October 23, 2022 @07:50AM (#62990317)
    No one in our society wants to accept liability or responsibility for anything. Make money fast, have good lawyers, and do not worry about the details is the theme. This is hardly limited to computing. See, for example, Ford and their SUV rollover issues. They and their lawyers decided it was cheaper to pay off the victims than to fix the problem, and they were killing people. In a situation where many of the players are in foreign countries and under multiple legal systems, it becomes even more difficult. If you want to incentivize computer security, you first have to have a customer base that is willing to pay for it, and I do not see that existing. All the government regulation in the world can only act after the fact and if they can even figure out who is responsible. At a minimum, we would have to start holding corporate officers responsible for the actions of their organizations, and I do not see that happening any time soon.
    • Re:Liability (Score:4, Interesting)

      by Archtech ( 159117 ) on Sunday October 23, 2022 @08:16AM (#62990365)

      No one in our society wants to accept liability or responsibility for anything. Make money fast, have good lawyers, and do not worry about the details is the theme. This is hardly limited to computing.

      "The values of a society totally preoccupied with making money are not altogether reassuring".
      - John Kenneth Galbraith, “The Great Crash, 1929”, Chapter V, The Twilight of Illusion, Section IV, p. 76

      "As the sociologist Georg Simmel wrote over a century ago, if you make money the center of your value system, then finally you have no value system, because money is not a value".
      – Morris Berman, “The Moral Order”, Counterpunch 8-10 February 2013. http://www.counterpunch.org/20... [counterpunch.org]

      • by gweihir ( 88907 )

        Very much so, and well said. If money is all you care about, professional standards for your products are something you will try to avoid like the plague. Money corrupts, and money is the mechanism that scales greed up to incredible heights.

    • Re:Liability (Score:5, Insightful)

      by geekmux ( 1040042 ) on Sunday October 23, 2022 @08:55AM (#62990423)

      ...All the government regulation in the world can only act after the fact and if they can even figure out who is responsible. At a minimum, we would have to start holding corporate officers responsible for the actions of their organizations, and I do not see that happening any time soon.

      It's even worse than that when the only dick that ends up on the chopping block at the end of the hacked day is the poor scapegoat in charge of security. Yeah, there's an "executive" being held responsible, alright. It just so happens to be the same person and position we're actually wondering about here when we ask why there's no incentive anymore.

      Not only do people (a.k.a. users and management) generally not give a shit about privacy or security, but now you have companies blaming their security leads when the breach happens, or firing them when they start chirping a bit too loudly about how cheap the company is with IT/Security, or how non-compliant they are with security policies or mandates.

      The term whistleblower has become downright evil now, and insider threat will soon be re-classified as domestic terrorism to highlight the real risk and liability anyone takes on in assuming an IT/Security management position. The CEO soon won't have to wonder why the CSO earns twice as much; half that salary will have to go towards a personal liability policy and a permanent legal retainer.

      • by jvkjvk ( 102057 )

        So what? As long as security becomes a meaningful part of keeping a business running it doesn't really matter.

        They will pay to increase security to decrease liability, as long as that is less expensive. And this would be a good thing.

        We already have liability for all kinds of stuff. Autos to fridges to bridges. Why should software get a pass?

    • by gweihir ( 88907 )

      Indeed. It is a form of corruption and the people doing it are certainly corrupt. These things have to be stopped or society goes to hell. You have to be able to trust that a professionally made product meets professional standards, that exceptions are rare, and that the ones who screwed up are made liable. Of course, greed, stupidity, arrogance, and a fundamentally broken legal system do not ensure that at all. And hence society slowly crumbles.

    • by godrik ( 1287354 )

      Civil liability may be opening too many doors. But at some point, I think we need a (high bar) criminal negligence standard.
      Bad code/operation can wreck lives and get people killed.

  • by daten ( 575013 ) on Sunday October 23, 2022 @08:07AM (#62990349)
    First, liability will be the end of open source software. It will also drive up the cost of all commercial software. Companies like Microsoft have been building the same product for decades, with billions in revenue and still can't get security right. There are still new vulnerabilities every month.

    Second, I've worked in the cybersecurity industry for the last 15 years and studied security for twice that. I've attended all of the conferences, traveled the world to help build global solutions to monitor and protect important networks, and worked with world-class incident response and threat analysts. Plenty of smart people understand the problem very well, but we still don't have a fix for it. Even air-gapped networks get compromised.

    There is obviously room for improvement, all companies fall short of what they could be doing to build more secure software, but no one has proven they can build perfectly secure software. Calling for liabilities and large penalties without a proven solution is naive.

    • Re: (Score:3, Insightful)

      Shill harder. What we're calling for is liability for actual negligence and incompetence.

      • by Entrope ( 68843 ) on Sunday October 23, 2022 @09:27AM (#62990489) Homepage

        Which security vulnerabilities or exploits are, at this point, not arguably "actual negligence" or incompetence?

        As Bruce Schneier points out, computer security is an asymmetric problem: A defender needs to get everything right all the time, whereas an attacker only needs to get lucky once. That means it's a lot easier to attack a computer system than to defend it. Even so, almost all the security failures are repeats of things that happened before. That suggests that almost all security exploits would lead to liability under your framework.

        • There is a lot of stuff we can make liable. For example, people are still writing code vulnerable to SQL injection. That only happens because of negligence. It should never happen.

          Login systems allowing anyone to log in with an empty password (you'd be surprised how common that is). This is just negligence.

          Overall, with good developer training and even a basic checklist, exploits should be rare.
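
          To make that concrete, here's a minimal sketch in Python of both checklist items, using the standard-library sqlite3 module (the users table and its columns are made up for illustration):

          ```python
          import sqlite3

          def find_user_vulnerable(db: sqlite3.Connection, username: str):
              # NEGLIGENT: attacker-controlled text is spliced into the SQL.
              # username = "x' OR '1'='1" turns this into a match-everything query.
              query = "SELECT id, username FROM users WHERE username = '%s'" % username
              return db.execute(query).fetchall()

          def find_user_parameterized(db: sqlite3.Connection, username: str):
              # The checklist fix: a bound parameter is passed as data, never
              # parsed as SQL, so the same hostile input simply matches nothing.
              query = "SELECT id, username FROM users WHERE username = ?"
              return db.execute(query, (username,)).fetchall()

          def credentials_plausible(username: str, password: str) -> bool:
              # The empty-password item: reject blank credentials up front
              # instead of hoping the backend handles them sensibly.
              return bool(username.strip()) and bool(password.strip())
          ```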

          • by Entrope ( 68843 )

            Overall, with good developer training and even a basic checklist, exploits should be rare.

            [citation needed]

            What training and "basic checklist" do you think are appropriate? The infosec checklists used in a lot of Western countries have hundreds of items, and they don't even start to enumerate types of security bugs in software -- only in how a computer system is deployed, configured and used. Good SAST and DAST scanners cost thousands of dollars per year per developer, generate enormous numbers of false positives, and still miss simple security errors.

            • What training and "basic checklist" do you think are appropriate?

              To begin with, don't write code that's open to SQL injection. That is item 1.

              The checklist depends on your language and frameworks. Here's one example: https://github.com/eliotsykes/... [github.com]

              But seriously, it's not like the OpenBSD team is doing anything particularly genius level. They just do things that should be common sense.

              • by Entrope ( 68843 )

                So you really mean that people could mitigate liability by selecting from an almost endless list of context-dependent and debatable checklists.

                There's a reason that the real world doesn't work that way.

            • OWASP Top 10. Learn it. Read it. Use it.

              Without fail, if we find something, it's on that list.

          • by nasch ( 598556 )

            Who decides what is negligence, and how? Courts, on a case by case basis? Given how little most judges understand about IT issues, that could go badly, and would not result in any certainty about the situation. Someone else?

            • I don't know about your courts; in ours here, judges don't assume that they know everything, and they have experts on practically every possible topic at hand. Do you think a judge can determine whether a building that came down had a solid foundation, and whether that was the reason, like the plaintiff said? Of course not; he has an expert architect at his disposal who checks out the site and gives him his expertise on the subject.

              Of course, other parties in the case can bring their own expertise to the table, an

              • I'm not that familiar with the legal system but I have heard about some pretty poorly reasoned decisions around tech. Hopefully they're the exceptions.

                • by jvkjvk ( 102057 )

                  You can find poorly reasoned decisions around *anything*. The question is whether it is greater in tech than in other types of industry. But then you have to really wonder, what industry isn't "tech" these days?

        • That means it's a lot easier to attack a computer system than to defend it.

          One should think so, but that's not the case. By your logic, football games should routinely end with three-digit scores. In general, they do not.

          Sitting on the offensive side of security, I can tell you that attacking systems has its own problems. And the defender has a lot of tools in his arsenal that make my life miserable if he knows how to use them. A sensible security system is built like an onion, with multiple layers between you and your target, and they ALL have to fail for you, as the attacker, to suc

      • Shill harder. What we're calling for is liability for actual negligence and incompetence.

        Think harder. Any good lawyer will easily spend every dollar a mega-corp wants to write off as a business expense endlessly arguing/defending what is and isn't negligence.

        As if you really have to question that shit with current delusions in leadership that actually support any legal concept of "hate" speech.

    • by The Evil Atheist ( 2484676 ) on Sunday October 23, 2022 @08:31AM (#62990391)

      liability will be the end of open source software.

      Not really. The most common and costly exploits aren't cases of some super-intelligent hacker finding a rare 0-day in some C program.

      It's the easy stuff: badly configured security policies, accounts, authentication, etc. STILL storing passwords in plaintext. Hardcoded passwords. Taking untrusted user input as direct input to things like SQL transactions.

      Most of these C buffer-exploit vulnerabilities would simply not be reachable in any decently secured system, and time and time again it is demonstrated that there is more money in simply going after the easy stuff.
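
      As a hedged sketch of fixing just the plaintext-password item, here's what storing and checking a password digest looks like with only Python's standard library (the scrypt cost parameters are illustrative, not a vetted policy):

      ```python
      import hashlib
      import hmac
      import os

      def hash_password(password: str) -> tuple[bytes, bytes]:
          # Store (salt, digest), never the password itself.
          salt = os.urandom(16)  # per-user salt defeats rainbow tables
          digest = hashlib.scrypt(password.encode(), salt=salt,
                                  n=2**14, r=8, p=1)  # illustrative cost params
          return salt, digest

      def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
          candidate = hashlib.scrypt(password.encode(), salt=salt,
                                     n=2**14, r=8, p=1)
          # Constant-time comparison avoids a timing side channel.
          return hmac.compare_digest(candidate, digest)
      ```

      Purpose-built libraries like bcrypt or argon2 are the usual production choice; the point is only that the plaintext version is never necessary.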

      • This right here; someone mod this insigh... oh, already happened.

        Whenever there's a big breach being reported, almost every time it's some old shit. An ancient exploit that worked because nobody bothered to patch the server, a faulty configuration that has been on the "do not do this, dumb shit!" list of configuration recommendations since its inception, certificates with ancient and broken encryption schemes (or, even better, no encryption at all because, hey, we offload at the proxy for performance reaso

    • Air gaps remove entire classes of intrusion types, much like, say, memory safety removes entire classes of exploit types in software development.

      • by gweihir ( 88907 )

        You cannot really do air-gaps these days anymore. Even fully separated networks have been successfully attacked, e.g. the Iranian uranium centrifuges. As to memory safety, it has its place, but it is not a panacea either. You still have drivers, libraries, whole OS kernels that cannot be done with memory safety unless you are willing to accept huge performance penalties.

        On the other hand, a professional coder should know how to handle a situation without memory safety or stay the hell away from it. But when

        • You still have drivers, libraries, whole OS kernels that cannot be done with memory safety unless you are willing to accept huge performance penalties.

          Maybe it's time to start considering accepting those penalties, at least for cases where sensitive information is being handled.

          • by gweihir ( 88907 )

            You still have drivers, libraries, whole OS kernels that cannot be done with memory safety unless you are willing to accept huge performance penalties.

            Maybe it's time to start considering accepting those penalties, at least for cases where sensitive information is being handled.

            I think you are not clear on how large these penalties would be. Anyway, the right way is to require that qualified people do the work, not to stop doing it. Or would you really abandon, say, regular electrical wiring and mandate pre-made cables with safe connectors only, at maybe 10x the cost of the result? Or abandon car mechanics and mandate a full car replacement on every problem?

            There are tons of situations in engineering where screwing up has dire consequences. The way to handle them is always to make sure the people dealing with them have the skills to do things right. And that is missing in software: We have far too many incompetent, and often outright clueless people write software for professional use. And that has to stop.

            • I think you are not clear on how large these penalties would be.

              I'm just not concerned. How much of the performance would it eat, half? That's totally acceptable. It doesn't need to happen to my desktop, where I play games and shit, but it probably should happen to servers that carry other people's PII, and workstations doing the same.

              There are tons of situations in engineering where screwing up has dire consequences. The way to handle them is always to make sure the people dealing with them have the skills to do things right. And that is missing in software: We have far too many incompetent, and often outright clueless people write software for professional use. And that has to stop.

              Even smart people make mistakes. Why not require that the valuable data go through systems designed to keep it safe?

        • The truism that air-gapped networks have been successfully attacked is like C defenders saying memory-safe languages still have exploits. That kind of arguing has thankfully stopped working for C, a couple decades too late. It's not a good argument; at best it's poorly thought out, at worst it's disingenuous.

          De facto, all programmers are the wrong people to let use C; we're all gardeners here.

    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday October 23, 2022 @10:06AM (#62990543) Homepage Journal

      First, liability will be the end of open source software.

      No. You make the users liable. Then they will demand software which is either secure, or provides indemnification.

      It will also drive up the cost of all commercial software.

      Great.

      Plenty of smart people understand the problem very well, but we still don't have a fix for it.

      We're not even using most of the fixes we have now, we prioritize performance over security from the kernel (and the fact that there's a monolithic one) up.

      There is obviously room for improvement, all companies fall short of what they could be doing to build more secure software, but no one has proven they can build perfectly secure software.

      Who's trying?

    • The liability does not fall on the makers of software but on their users. Also, it is quite possible to formulate laws sensibly; you needn't get the junk we often have today. It's quite possible to have laws account for things you cannot influence or predict.

      We already have a difference between accidents and neglect. It applies not only to traffic on the street, it can also apply to traffic on a network.

    • by jvkjvk ( 102057 )

      >First, liability will be the end of open source software. It will also drive up the cost of all commercial software.

      If it will be the end of open source software, it will be the end of commercial software. There is no case where they would not be the same in terms of cost of liability. There would be insurance for each, and it may just cost more for closed-source commercial software because of the inherent risk of the unknown.

      Also, there is no law against spinning up a company creating a commercial ve

  • If the perceived risk for the business due to financial loss is not great enough, then the government must protect the business's users by making the perceived risk due to legal punishments higher.

    And any payment of ransoms must be illegal, and punishments enforced.
  • Stealing information is big business.
    Holes in security systems are used for all kinds of purposes, from crime, to (different) crime fighting, to finding and solving problems.

    In this regard it is much like the oil industry.
    Too much money is made by the status quo to fix it properly.
  • Cloudflare and others are just now implementing everything the whole industry should have been implementing 3 decades ago. I don't even blame management and market forces for not forcing through all the onerous shit necessary for working securely, it's the industry which has been pretending onerous structural solutions were not necessary and everyone could just keep muddling on and add ever more invisible heuristic detectors in the background to patch over the holes. The people who should have known better

  • As I wrote in my 2005 book High-Assurance Design, developers know next to nothing about secure coding and are not interested in it, and executives don't demand it - no surprise, that is why programmers are not interested! There needs to be higher liability for the cost of users' security-related losses; otherwise, makers of software will continue not to care.
    • Indeed, rarely at work has anyone asked if my code is secure. They only ask if it's done.

      • by gweihir ( 88907 )

        I have been asked. I have also had my code undergo a full code and architecture security review in one instance. Considering where it ended up being used, that was the right thing. Of course, any updates, I just coded, compiled, and pushed to deployment in binary form (!) myself, with no oversight whatsoever and using my own development system off-site, kind of defeating the whole thing. To be fair, this was C code in a Java-shop and they had nobody among their 5k or so IT people that could have reviewed th

  • I work in a highly regulated industry that _already_ has penalties for insufficient security and breaches. They do their best with security, but there are still gaps.

    Legacy software running on old (sometimes unsupported) operating systems is a problem. The data has to remain online; even with outside help, the data mapping/transfer and decommissioning can't maintain pace with acquisitions. Subject matter experts often leave or transfer when their company is acquired, so institutional knowledge needed for fa

    • Same here, our C-Levels are personally responsible for anything that could be considered neglect when it comes to security, so you can pretty much assume that we're not only very well funded but also incredibly well staffed in security here.

      And yes, there's legacy systems that you can't get rid of. It's the simple reality of doing business, some systems cannot be replaced. But what you can do, and what they do, is to identify those systems and build the security equivalent of a nuclear bunker around them. S

  • by david.emery ( 127135 ) on Sunday October 23, 2022 @09:09AM (#62990463)

    Should individuals have any legal liability for bad code? Professional Structural engineers do have individual professional liability, as well as their employers, if a building falls down. My father, who was a structural engineer, paid a lot of $ for professional liability insurance. And for a couple of years, he was called in as an expert witness on engineering liability lawsuits. We talked a lot about how the courts viewed professional liability.

    I've argued since the mid 1980s for engineering liability for software engineers. That doesn't mean that every person writing code should have a 'Software Engineering license.' But I'll note that ACM and IEEE have had substantial institutional opposition to that notion whenever I'd raise it in a professional society meeting. So is Vardi arguing for only corporate, but no individual, responsibility and liability?

    • by Entrope ( 68843 )

      When would you require a software engineer to be licensed and liable?

      I can write software very carefully. I can spend a lot of effort to make sure my code is almost entirely free of known classes of security bugs, that it doesn't rely on third party software with known bugs, and so forth. That doesn't guarantee it will be secure -- either when I deliver it or after people deploy it and start to change versions of related software.

      People can follow high-assurance development methods (like RTCA/DO-178C for

      • When would you require a software engineer to be licensed and liable?

        The specifics could be argued about forever and ever, but the answer should have something to do with criticality. If you're working on a component that could cause vulnerabilities even when used correctly, then you probably should be held to a high standard. You do need some kind of exception for research projects and such, and there's a lot of devilish details there, but once a product is paid for then the restrictions certainly should apply.

      • When would you require a software engineer to be licensed and liable?

        That's a good question. The answer from drinkypoo below "something to do with criticality" is a good functional approach.

        But I'll give you an economic answer: Hold the company liable and allow the company to hold employees liable! Now as I understand civil engineering liability, a PE CE is liable if s/he has not followed "accepted engineering practices." Of course, that places the onus on establishing "accepted engineering practices," but at the macro level, this is a -limit on liability- for the indivi

    • by gweihir ( 88907 )

      Should individuals have any legal liability for bad code?

      Eventually we need to get there when that code is written professionally. Every other engineering discipline has it and software needs it as well. The current state of things is massively expensive to society and this cannot go on.

      Of course, this does not mean that the individual coder needs to become liable, but then their employer will. Of course, code is "full custom design", so some limits should apply. I would place the level at simple negligence, basically if the state-of-the-art was not followed or t

    • by jvkjvk ( 102057 )

      >Should individuals have any legal liability for bad code?

      No, but companies definitely should. You could argue for a PE *role* in software engineering, but it generally wouldn't be the people writing the code. It would be a separate role that "signed off" on the deliverable.

      For gross malfeasance, this is the *one* person you would go after first (but the company would also be sued, along with their grandmothers). For security issues that are more subtle than "we shipped with a hardcoded default password" i

  • by Eunomion ( 8640039 ) on Sunday October 23, 2022 @09:11AM (#62990467)
    Good luck incentivizing security with trillions of sovereign dollars focused against it.
  • It's fair to say that the end users of corporate and commercial systems see security measures as getting in the way. Jeebus, just getting "people" to understand that your email password is not a system password is met with blank stares. Also met with anger and frustration when, for some reason, the email password doesn't work everywhere. "People" can't work with concepts.
    • Welcome to security. Your job is to make sure that your users having no fucking clue about security doesn't matter to the quality of your security.

      Of course, if you do what a lot of security officers do, i.e. try to offload their job to the users, that will fail.

  • Is There a Lack of Market Incentives for Cybersecurity?

    The current levels of cybercrime alone should be a massive incentive to invest in cybersecurity. If there is no market for cybersecurity, the invisible hand of the free market is an idiot.

    • by gweihir ( 88907 )

      If there is no market for cybersecurity, the invisible hand of the free market is an idiot.

      The "invisible hand of the free market" has always been a short-sighted, greedy idiot. Markets cannot self-regulate outside of a small area.

    • The only thing the invisible hand has ever been able to do is rake everything in while yelling "gimme that it's mine!"

      Don't expect that greedy bastard to understand anything about long term sustainability.

    • by jvkjvk ( 102057 )

      >The current levels of cybercrime alone should be a massive incentive to invest in cybersecurity.

      It's becoming so prevalent that anything less than ransomware or complete system compromise is ignored. Data breach? Meh. Been there, done that. Did it affect the stock price much / did the price go back up? Did it cost us much in terms of fines? Yeah, thought so.

      Now, if you can't do business and are actually losing income, or they are actively in your systems, people care. But they see it as like a lightning

  • by GuB-42 ( 2483988 ) on Sunday October 23, 2022 @10:04AM (#62990537)

    I work for a large consulting company and the cybersecurity branch is by far the most profitable.

    Also, I don't know of any corporate IT department that will not piss everyone off in the name of "security".

    Does it mean it is done well? Generally not, but people pay a high price for it, both in terms of money (for example, by hiring overpriced consultants) and in inefficiency (for example, by restricting useful software).

    The thing with security (not just cybersecurity) is that it is adversarial; there is no limit. You could spend 99% of your budget on it and still be vulnerable if your adversary does the same, which is essentially what happens during wartime. For me the ideal amount is zero, because it means we trust each other. In real life, there is a balance to be found, but I don't think focusing too much on security is a thing to celebrate.

    Small caveat for cybersecurity specifically: a large part of it is just fixing bugs, which is unambiguously a good thing; exploitable or not, something like data corruption is a terrible user experience.

  • The market is short-sighted and dumb these days. CEOs are mercenaries who try to get the largest bonus they can and move on after a few years, no matter how much scorched earth they leave. They simply hope that rare events will not happen and do not prepare for them. To be fair, with ransomware you are pretty much at a probability of 1x/year now if you are not prepared. Then there are no legal incentives: CEOs do not get punished for gross negligence or even intent to be insecure beyond all reason.

    So wha

  • The moment you mention the term 'liability', lawyers all over the world hear "cha-ching!!!". Furthermore, government regulators start thinking, "Oh, goodie, we get to test and license programmers."

    • by jvkjvk ( 102057 )

      Software made its debut sometime in 1948, but wasn't called that until 1954 or so.

      I think that now, after 70 years, we are due for Professional Engineers. Not every person creating software or having to do with software (architects, devops, QA, etc.) needs to be a PE, but reliability and fitness for purpose (including security) are things we should be striving for. Having the equivalent of a PE framework will go a long way toward stabilizing that. Would you want a car that the manufacturer says isn't warrant

  • There is a lack of market incentive for private cybersecurity much as there is a lack of market incentive for private fire stations or police departments. From a paper I wrote earlier this year:

    This emphasis on expediency over security (when, in reality, both are attainable with careful design) is similar to Carr's description of how companies' commitment to cybersecurity is strictly a matter of revenue maximization relative to the perceived risk associated with being compromised. Marcus Willet’s reco

  • Government and lawyers collect a lot of cash; rinse and repeat.
  • The road to better cybersecurity is made of better tools and accepted trade-offs. Maybe you need to give up some convenience or some performance as a trade-off for security. But the biggest barrier to better security is the very tools we use to build software. The software we build needs to be more provable; we need ways of ensuring that our software avoids unexpected behavior. It is easier to assume liability when you have confidence in what you produce.

    If you think about how a bridge gets built

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Sunday October 23, 2022 @10:31AM (#62990617)

    IT security is fundamentally based on culture.

    Let's use the car analogy: In Germany (where I live) we have the TÜV (Technischer Überwachungsverein), the ABE (Allgemeine Betriebserlaubnis), and the German driver's license. All of those ensure and enforce high standards in vehicle safety and driving skills. Every grandma in Germany can easily drive a stick shift way better than most Americans, for instance. A German driver's license takes roughly 30 hours of training and two tests, one theory and one practical, and sets you back 1,500+ euros. The result of all that is a German "car culture" that enables the government to say: "Yeah, there are stretches on the Autobahn where our citizens themselves can decide how fast they want to drive," and yet the number of traffic deaths per capita is still quite low compared to other nations, including the US. You could say that Germany has a superior "car culture" compared to other nations.

    IT in Germany (and other places) has no such regulations. As such, IT security is quite shitty, as you would expect. If people can't tell the difference between a client and a server, and don't know such basics as what a user account is or how to use the clipboard, you get home folders from CEOs who should know better that look like someone from a mental asylum has been using them for 10 years.

    Until there is a universal IT "driver's license" and alphabet, and people are forced to learn and use it, IT security will continue to be as abysmal and primitive as all the rest. You can read what I'm writing because you and I learned the same letters, know punctuation, and know how to read and write English. Regular IT usage would be the equivalent of a fresh first-grader trying to express a complex thought like this one. Not pretty, and quite dangerous if critical decisions are based on such a lack of cultural finesse. You all know failed software projects and the dimwits that drove them against the wall, right?

    Bottom line: This is not a market problem; it's larger than that. It's a culture problem, and until society sees it as such and aims to change it, these problems will remain.

  • by Tony Isaac ( 1301187 ) on Sunday October 23, 2022 @10:55AM (#62990671) Homepage

    When you secure your home, you generally have a deadbolt on each exterior door, and latch all your windows. But if a burglar wants in, all they need to do is kick in the door or break a window. You *could* build with bullet-proof glass and use a vault door to secure your entrance. But the cost would be enormous, and wouldn't be justified by the level of risk. It's better and less expensive to improve security by getting to know your neighbors or having a dog.

    Cybersecurity is no different. Sure, you can lock down your network like Fort Knox, but the price tag would be enormous and require cybersecurity experts that many smaller companies simply can't afford. It's better to offload as much security as you can, such as using a reputable third-party credit card processor rather than writing your own credit card processing system.

    Security that is too strict can backfire. A highly secure door might get propped open with a book. Requiring frequent password changes causes people to use the same password they would have used, but with a sequential number added at the end. What exactly was gained? Security has to keep unauthorized people out, but it also must let authorized people in without hampering their work unnecessarily.
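
    To see how little forced rotation buys, here's a hedged sketch in Python (a hypothetical helper, not any real system's policy engine) of how mechanical the "same password plus a number" dodge is to detect:

    ```python
    import re

    def is_lazy_rotation(old: str, new: str) -> bool:
        # Flag the classic dodge: the same password with a bumped numeric suffix.
        def stem(p: str) -> str:
            return re.sub(r"\d+$", "", p)  # drop trailing digits
        return new != old and stem(old) != "" and stem(new) == stem(old)

    # "hunter2" -> "hunter3" is flagged; a genuinely new password is not.
    assert is_lazy_rotation("hunter2", "hunter3")
    assert not is_lazy_rotation("hunter2", "correct horse battery staple")
    ```

    If users can route around the policy this predictably, the rotation requirement added friction without adding security.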

  • Every physical business has security concerns, such as shoplifting. They take measures to reduce shoplifting, but taking measures to eliminate it completely would require extreme measures such as armed guards at the door, inspecting every customer's purchases, and more importantly, ensuring that employees don't walk off with merchandise. These types of measures would be so off-putting to customers that they would shop elsewhere. So stores strike a balance between security and convenience, knowing that they

  • The big problem with market incentives is that companies can all promise perfect security until they're caught in a breach.

    One solution to this is to add third-party agencies who come in, evaluate the security, and assign the organization a rating. It's a pretty common model used when organizations get certifications.

    It certainly adds a bit of cost, both in hiring the agencies and in actually adding security that's requested, but it's a way for security to actually be made visible to the consumer before a b

  • by jd ( 1658 ) <[moc.oohay] [ta] [kapimi]> on Sunday October 23, 2022 @12:35PM (#62990945) Homepage Journal

    1. There's no incentive for companies to be secure
    2. Security produces slower software
    3. Security is slow and expensive to develop and install
    4. There really aren't enough experts to develop secure software
    5. Certification for HIPAA/PCI/etc. doesn't require any actual security to be in place
    6. There's no revenue stream from being secure
    7. Companies assume they won't be hacked
    8. Customers assume the company won't be hacked UNLESS security is considered important
    9. Customers hate overt security measures
    10. Security is difficult to maintain, due to inevitable turnover

  • In 2017, I wrote: "So here we are, 70 years into the computer age and after three ACM Turing Awards in the area of cryptography (but none in cybersecurity), and we still do not seem to know how to build secure information systems."

    What does "cryptography" have to do with "cybersecurity"? As soon as the writer conflated those two topics, I knew it was going to be a pointless whine about his imagined issues unrelated to anything.

    What does a Turing Award mean, really? It's about fundamental research, right? So what is there in "cybersecurity" that is such a profound research area? Don't misunderstand me, cybersecurity is important, and it is a definite skill, but I liken it to system administration and database administration - necessary SK

  • by kenh ( 9056 )

    while computing vendors are responsible for the reliability and safety of their product, the lack of liability results in lack of accountability

    OK, I'll bite. A company buys some HP servers, contracts with AT&T for internet service, implements a website built on an Oracle database, and serves up webpages via an Apache web server, all running behind a Cisco firewall.

    When the company's website is hacked, and customer information is stolen, which company is responsible?

    Answer: the company that put it all together, not HP, AT&T, Oracle, Apache, or Cisco - so where should all those valuable cybersecurity investments be made, by the retail compan

  • For money.
    Nice world you got there; be a shame if something should happen to it.
  • If a company isn't rewarded for providing better security (or punished for a lack of security) then security will get very little attention. It's created a market for lemons. See https://en.wikipedia.org/wiki/... [wikipedia.org]

  • It's difficult for people to remember complex passwords, so they tend to use simple ones. Even if a complex password is used, every system has a "forgot password" feature that can still potentially be exploited. People still write passwords on post-it notes stuck to their desks. You can only secure a system so much before you end up inadvertently locking your own users out. Even if you managed to make the most secure system ever, you could still just bribe someone who has access.
