Security Books

Is Cybersecurity an Unsolvable Problem? (arstechnica.com)

Ars Technica profiles Scott Shapiro, the co-author of a new book, Fancy Bear Goes Phishing: The Dark History of the Information Age in Five Extraordinary Hacks.

Shapiro points out that computer science "is only a century old, and hacking, or cybersecurity, is maybe a few decades old. It's a very young field, and part of the problem is that people haven't thought it through from first principles." Telling the stories of five major breaches in depth, Shapiro ultimately concludes that "the very principles that make hacking possible are the ones that make general computing possible. So you can't get rid of one without the other because you cannot patch metacode."

Shapiro also brings some penetrating insight into why the Internet remains so insecure decades after its invention, as well as how and why hackers do what they do. And his conclusion about what can be done about it might prove a bit controversial: there is no permanent solution to the cybersecurity problem. "Cybersecurity is not a primarily technological problem that requires a primarily engineering solution," Shapiro writes. "It is a human problem that requires an understanding of human behavior." That's his mantra throughout the book: "Hacking is about humans." And it portends, for Shapiro, "the death of 'solutionism.'"
An excerpt from the interview:

Ars Technica: The scientific community in various disciplines has struggled with this in the past. There's an attitude of, "We're just doing the research. It's just a tool. It's morally neutral." Hacking might be a prime example of a subject that you cannot teach outside the broader context of morality.

Scott Shapiro: I couldn't agree more. I'm a philosopher, so my day job is teaching that. But it's a problem throughout all of STEM: this idea that tools are morally neutral, that you're just making them, and that it's up to the end user to use them in the right way. That is a reasonable attitude to have if you live in a culture that is doing the work of explaining why these tools ought to be used in one way rather than another. But when we have a culture that doesn't do that, then it becomes a very morally problematic activity.

  • by chill ( 34294 ) on Sunday May 28, 2023 @02:31PM (#63557561) Journal

    As someone working in this field for the last couple decades, all I got to say is job security baby!

    • by gweihir ( 88907 )

      Well, yes. But having been active in this area for something like 35 years now, I have to say my disappointment with the industry is very, very deep. Most, if not all, of the security problems relevant today were known back then, and there _are_ solutions. Instead, most software is still made "cheaper than possible" and sucks with regard to security. There also seems to be a trend toward making actual use of software less and less efficient (MS, I am looking at you), for a pretty bad showing overall.

  • by Big Hairy Gorilla ( 9839972 ) on Sunday May 28, 2023 @02:32PM (#63557565)
    People are the problem. Make it convenient, and people will hand over their first born.
    • If we get to the point where spear-phishing is the only attack vector, then we've won.
    • by gweihir ( 88907 )

      People are part of the problem. Tech is the other half of the problem. Current tech is cheaply made and generally insecure. It makes it far too easy for users to make security-critical mistakes. At the same time, users are incompetent and too easy to fool. Any real solution must fix both aspects or it will fail.

      • Ok, true. There is no doubt that security is an afterthought with developers. I've been a developer.

        We've had respectable password security for 1000 years, but most people still use 12345.

        And that same person finds it too mentally taxing to enter a more random password, and resists any attempt to train them to cut and paste a long password (from an encrypted text file). Users ARE idiots, who will not do more than the barest of minimums.

        I'm not convinced developers are the REAL problem.
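The long-random-password advice above can be sketched with Python's standard `secrets` module. The length and character set chosen here are illustrative defaults, not recommendations from the thread:

```python
import secrets
import string

def random_password(length: int = 20) -> str:
    """Draw characters from a CSPRNG; suitable for pasting from a password store."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # a different 20-character string every run
```

The point of `secrets` over `random` is that it is backed by the operating system's cryptographic randomness source, so the output is unpredictable to an attacker.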
        • by gweihir ( 88907 ) on Sunday May 28, 2023 @05:52PM (#63557883)

          I'm not convinced developers are the REAL problem.

          My reasoning goes as follows: A tool for general use must be safe to use for a non-expert. That does not mean it cannot be misused. That means it must be reasonably obvious to a non-expert how to use it right and using it wrong must be significantly harder than using it right or come with some obvious high cost if the occasional failure is acceptable. That is basic safety-engineering. (I had a mandatory section on safety-engineering in my CS studies 35 years back. I only later learned that this is apparently _not_ standard.)

          For example, a hammer or knife or kitchen stove is dangerous. But the failure modes are obvious to non-experts and obviously unpleasant. Sure, there will be the occasional smashed or cut finger and the occasional kitchen fire. But you need to really work at it and ignore very concrete, directly and personally threatening danger signs obvious to the average person. Compare that to software: Given the right circumstances, you can still often blow up your company with a single click on a reassuringly calm blue pop-up. You can still use remote access without 2FA, or on the same device the 2nd factor is on, making it useless. And so on. Non-experts cannot behave safely when it is so easy to behave unsafely.

          Now, where the blame is when a developer has no clue about IT security, safety engineering or user behavior is a point that can be debated. You can say it is the fault of the person because this is an unfinished immature discipline and everybody needs to learn more than required to be competent. You can say it is the fault of the people hiring somebody like that. You can say it is the education system, but then most developers actually do not have a real engineering education at all. And you can say it is a lack of regulation where people with a lack of skill are allowed to do things that require those skills.

          Whatever you prefer here regarding blame, in the end the one building the insecure mechanism is the one that has failed. And that means developers are at the root of the problem and are where this problem needs to be fixed. If an electrician wires up your house so it catches fire, that electrician needs to be fixed, regardless of how he became incompetent.

        • That's amazing, I've got the same combination on my luggage!
  • How about we "tools down" on new stuff for a few years and just harden what's out there?

    If we just keep building fresh, new attack vectors, then, yes.

  • by Morpeth ( 577066 ) on Sunday May 28, 2023 @02:38PM (#63557581)

    "Cybersecurity is not a primarily technological problem that requires a primarily engineering solution," Shapiro writes. "It is a human problem that requires an understanding of human behavior." That's his mantra throughout the book: "Hacking is about humans." And it portends, for Shapiro, "the death of 'solutionism.'"

    It requires both -- understanding why and how humans hack, AND using that info to inform your engineering and tech solutions. There will always be hackers and the need for cybersecurity; I'm not sure why the article claims he has "some penetrating insight" -- it's patently obvious.

    It's the same as saying theft or murder are an unsolvable problem, duh. There will sadly always be theft and murder, but it doesn't mean you don't keep upping your game to combat/address/mitigate it.

    • There are two competing forces at work causing a gap. The first is that the security engineers designing security also want privacy, and therefore do not trust the service to hold biometric data or anything else that can positively identify you. They also will not trust a third party to handle the authentication. The other force is that people fail to understand where they should be giving their credentials and where they shouldn't -- and fail at remembering the credentials. Things are secure to a cybersecurity expert who careful
    • by gweihir ( 88907 ) on Sunday May 28, 2023 @03:52PM (#63557719)

      Indeed. It is only about making it hard enough for attackers that the residual risk is low enough and most attackers starve and hence go out of business. That is entirely possible. And, of course IT security is just as much a technological problem as it is a people problem. Like all engineering really. The brakes in your car have to be both technologically reliable and effective (for example, brakes in ordinary cars are _always_ designed to be stronger than the motor, no matter what, and for obvious reasons) and designed so that a human can use them reasonably well. Omit either aspect and they become dangerous.

      Hacking is both about tech and humans. Anybody denying that is simply incompetent. Focusing on the human angle to the detriment of the tech aspects is in no way better than focusing on the tech angle to the detriment of the human aspects. Of course, keeping two aspects equally in view is more difficult. But it is the only thing that works.

  • It's only "unsolvable" if you let the perfect be the enemy of the good.

    • It's only "unsolvable" if you let the perfect be the enemy of the good.

      Bingo. With security, as with so many other things in life, a partial solution can be "good enough."

      You can mitigate computer security risks by doing the things we are already doing, like (imperfect) access control systems. You can mitigate the damage done when the bad guy gets through your security systems by having things like good backups and a way to restore them in an acceptable period of time at an acceptable cost. You can mitigate some business risks - like "what happens when your supplier gets ha

  • There is value in automating human tasks. A human figures out what to do then you automate that thing to make humans more effective. Even if it is only a game of whack a mole.

    This is a little dated but the concept is not... the most commonly exploited vulnerabilities are not the most current vulnerabilities.
    https://www.cisa.gov/news-even... [cisa.gov]

    If you automate vulnerability detection and prevention then you've given yourself a security boost even if you have not 'solved' the potential for future problems.
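The automation idea above can be sketched as a toy version of what a vulnerability scanner does: match an inventory against an advisory list. The package names, versions, and advisory data below are entirely made up for illustration; a real scanner would consume a live feed such as CISA's Known Exploited Vulnerabilities catalog:

```python
# Hypothetical advisory data -- names and versions are invented for this sketch.
KNOWN_VULNERABLE = {
    "examplelib": ["1.0.0", "1.0.1"],
    "otherlib": ["2.3.0"],
}

def find_vulnerable(installed: dict[str, str]) -> list[tuple[str, str]]:
    """Return (name, version) pairs from the inventory that match an advisory."""
    return [(name, ver) for name, ver in installed.items()
            if ver in KNOWN_VULNERABLE.get(name, [])]

inventory = {"examplelib": "1.0.1", "otherlib": "2.4.0"}
print(find_vulnerable(inventory))  # [('examplelib', '1.0.1')]
```

Even this trivial exact-match check captures the comment's point: detection of *known-bad* versions is mechanical and cheap to automate, independent of whether future vulnerabilities are "solved."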

  • It seems like if you really wanted maximum security, you'd be designing systems for that first and foremost and ground up, hardware and software. The first such systems that you could reasonably prove were secure would by necessity be very simple, and programming them would probably be an agony for the foreseeable future, but is anyone in industry actually prepared to even try to use such systems to get work done? We all used to get work done on single-digit-MHz computers back in the day — indeed, one such system might serve many users. Last time I looked at one of their screens, the CA DMV was still using a primitive AF mainframe app via 3270 emulator :)

    • You can't get secure code by tacking it on as an afterthought.
    • by gweihir ( 88907 )

      Really not needed. Just take a simple hardened Linux distribution and you are already deep in the area where attacks become way too expensive for most attackers. People that can still afford to do it in that case can also afford to break into your systems physically and then it becomes a different problem.

      We do not need "maximum security". "Reasonable security" is already quite enough. But what MS crap, "APPs", incompetently configured Linux servers and cloud systems, etc. give us is "pathetic security" at

    • Hack into a CA DMV terminal session - yeah, it requires some network access.

      Designing systems for security means redesigning network protocols and security features, taking the current OSI model from layer 2 up. The Internet poses a somewhat more complex problem; baking security into a new Internet would require, I think, protections against address manipulation and forgeries.

      All this would indeed require redesigning from scratch. And I, for one, would be cautious about adopting these new, 'secure', systems

    • It doesn't sell.

      Companies want to hear that you can make BYOD secure. Just put more lipstick on the pig and get the mascara ready for the next time.

  • If man can make it then man can break it. It's simple really.

    • by gweihir ( 88907 )

      That is one of those easy answers that are plausible, convincing, easy to understand, and wrong.

      • Prove it. Make something unbreakable. I dare you.

        • by gweihir ( 88907 )

          So that is what you get from this? If somebody can break it, then "man can break it"? Well, true, but completely irrelevant nonsense.

          Of course the systems administrator of a system can break it. Of course that is completely irrelevant to the discussion at hand. Circumstances do matter.

          • Solvable means a resolution. Cyber security is a never ending problem. It has no final solution. That is my point. That is the wording in the headline. Unsolvable. The answer is Yes. Period.

  • Unsolvable (Score:5, Interesting)

    by SuperDre ( 982372 ) on Sunday May 28, 2023 @02:53PM (#63557615) Homepage
    It is unsolvable as long as humans need to be able to interact with it. Yeah, you can make it super secure, but then it won't be usable anymore, as it isn't feasible for a human to do so many things before they can use the system. And the biggest security problem IS the human. Using biometrics can also be broken.
    • If phishing is the only way hackers can get into the system, then we've won.
      • No, we haven't won, as it's the most commonly used way to hack into secure systems. So it's unsolvable: you cannot solve that problem, because humans need to be able to interact with the system, and as long as humans need to do that, it's insecure.
    • Re:Unsolvable (Score:4, Interesting)

      by gweihir ( 88907 ) on Sunday May 28, 2023 @03:39PM (#63557697)

      No, it is not. The problem today is a market failure where software and systems are cheaply designed, cheaply made, and customers do not know that doing it better is entirely possible. For example, attacks by email are only a thing because of the abysmal stupidity of Microsoft and others. Of course email attachments should never be easy to open automatically or with a single click. Of course frigging documents should not be executable code with system access and should not be able to attack you. But no, they had to turn email readers into frigging web-browsers and make everything "easy".

      What you find when you actually look is that the problem is not only solvable, it is basically solved. It is just commercial mainstream crap that cannot get there. Note that 100% security is in no way needed. It is quite enough to make attacks unprofitable and come with a high risk of detection. At the moment, attacks are _very_ profitable (ransomware) because security standards of mainstream systems are abysmally bad compared to what they could be.

        • You say "of course"; I say the software becomes unusable if I have to go through too many hoops -- especially, in this example, if you receive a lot of mails and expect them to have attachments you need to open. That's the problem with security: it can and must never prevent the user from being able to use the software on a regular basis. For instance, having to put in your credentials every single time you need to access certain data/services -- no problem if you only have to access the data/service a few times a
        • by gweihir ( 88907 )

          You did not understand what I wrote. The problem arises from the combination of insecure document formats with insecure email practices -- both, in this case, from the same perpetrator, so they cannot claim innocence. For example, do you really need a text document or spreadsheet that was attached to an email to be able to write to your file system or call other programs? If you do, then there will never be any security for you. But the fact of the matter is that this is only really needed as an absolute exception a

        • by jsonn ( 792303 )
          Microsoft's file formats are fundamentally misdesigned by the lack of sand-boxing in the default configuration. Note: I'm not saying that macros should be off by default, but that the default should not allow macros access to anything outside the document unless explicitly allowed by the user/administrator. Such access should, in a well-defined system, also be done using defined resources and controlled by proper proof-of-origin. But Microsoft has failed time and time again to make any steps in that direction a
    • "It is unsolvable as long as humans need to be able to interact with it" ...also as long as humans are involved in the creation of IT systems.

      IMHO, nothing can be truly secure if humans mess with it in any way...

  • Same as it ever was (Score:5, Interesting)

    by cellocgw ( 617879 ) <cellocgw.gmail@com> on Sunday May 28, 2023 @02:59PM (#63557629) Journal

    Every time a new mechanical, or even partially mechanical, lock comes out, one craftsman or another finds a way to build the mechanical key -- or to bypass the key mech (see, e.g. "bump key" for standard tumbler door locks).

    Software's even worse, because it's damn hard just to make software do what you want it to do, let alone NOT do everything else in the universe. Ultimately it comes down to a cost-benefit ratio. We don't bother with DoD-class crypto phones for everyday use for that reason. We don't install bank-vault quality timelocks on our home doors for that reason.
    At some point, the best you can do is air-gap the systems that need total security, vet the crap out of all users, and hope & pray spies don't get in. So far, not a single government in the world has managed to keep spies from getting jobs/assignments in top levels of gov't management.

    • by Jeremi ( 14640 )

      Every time a new mechanical, or even partially mechanical, lock comes out, one craftsman or another finds a way to build the mechanical key -- or to bypass the key mech

      That was my view also, which is why I figured that Bitcoin's algorithm would be hacked within a year or two after it became profitable to do so, at which point the value of Bitcoins would promptly fall to near-zero as counterfeiters took over and people lost faith in the reliability of the blockchain algorithm.

      And yet, here we are, 14 years later, and Bitcoins are still valued at about $27k apiece; it seems this particular lock has remained largely unpicked, despite an enormous financial incentive to do so.

    • by jsonn ( 792303 )
      Military grade encryption is entirely overrated. There is little evidence that the NSA has significantly smarter cryptanalysts than the (public) security researchers. The main problem is that people are bad at designing secure protocols and even worse at correctly implementing them. Most actual security breaches are a direct result of that. I recommend everyone look at the history of Kerberos for one of the better non-trivial algorithms, especially the infamous drama version: https:/ [mit.edu]
  • I've never had a good experience with our cybersecurity groups. Everything they want to do is "implemented" by other groups. The operational fallout is handled by groups that aren't them, and the cybersecurity groups are terrible at communicating and collaborating. Every experience I have makes me wish the whole sector didn't exist. Give the resources to the local subject matter experts to figure things out themselves and provide REALISTIC guidelines.
    • by gweihir ( 88907 )

      IT security has its large share of incompetents and semi-competents, just like software making and IT operations. It is really quite pathetic overall. One thing that helps is making sure an IT security person actually has some real-world engineering experience: writing code, configuring and operating systems, applying cryptography, etc. There are far too many IT security people who have no real-world engineering skills and hence can only stand in the way of others but cannot help to secure things.

  • by jmccue ( 834797 ) on Sunday May 28, 2023 @03:07PM (#63557655) Homepage
    we all know AI will fix the problem, and cure the common code too.
  • by gweihir ( 88907 ) on Sunday May 28, 2023 @03:26PM (#63557677)

    There are a few things that are generally done really badly today and that make for the mess we have:
    1. Use of insecure software and operating systems that are not up to the state of the art (MS Windows, MS Office, and many, many "Apps" are main offenders here)
    2. Incompetent configuration and maintenance (open cloud containers, lack of timely updates, etc.), usually due to incompetent and/or inexperienced personnel
    3. Software development that ignores security or by people that do not understand security. Basically "cheaper than possible" developers.
    4. Lack of use of known secure mechanisms (2FA, still active old protocols, etc.)
    5. Sabotage by "surveillance fascists", i.e. people that cannot stand citizens having secure communication mechanisms. These can be found in basically all governments.
    6. Bad applied CS/IT/SW-Engineering education. You can still get a degree in these fields without a single mandatory lecture on software security, for example.
    7. Applied CS/IT/SW-Engineering are still not engineering disciplines with general standards and liability for violating the state-of-the-art.
    8. A few other things.

    The thing is, secure software, secure system operation, etc. are understood and entirely possible. Not 100% secure, but that is not required. Making things for attackers very expensive and with a high risk of successful attack detection is quite enough. But the industry does not have the maturity to use what is known. Instead everything IT and software is done cheaply, typically far too cheaply, and there is no competent risk assessment. That there is no meaningful liability system, unlike established engineering disciplines, contributes to the problem.
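On point 4, the 2FA mechanisms mentioned are well standardized. As a sketch, a minimal time-based one-time password (TOTP) generator per RFC 6238 fits in a few lines; the base32 secret below is the RFC's own published test key, not something anyone should deploy:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Minimal RFC 6238 TOTP using HMAC-SHA1 (the RFC's reference mode)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: time 59, 8 digits, SHA-1 -> "94287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
```

That the entire mechanism is this small is part of gweihir's point: the known secure mechanisms are cheap to adopt, and not using them is a choice.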
     

  • Humans are fickle beasts that will commit crimes regardless of the level of security.
  • A completely perfectly secure system on the wrong hands would be pretty terrible.

  • by ka9dgx ( 72702 ) on Sunday May 28, 2023 @04:02PM (#63557733) Homepage Journal

    Of course cybersecurity can be solved... the solution was worked out in the 1970s, and there are commercially available secure systems. The Operating Systems most of us use daily, on the other hand, do not support multi-level security, nor the Bell-LaPadula model.

    If we did use such systems, the user interface would be almost identical, but our applications would only be able to open the files we fed them, and not everything, by default. The world would be a much more secure place, but that would have made the NSA's job a lot harder, so such systems aren't talked about much.
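As an illustration of the Bell-LaPadula rules referenced above -- "no read up" (the simple security property) and "no write down" (the *-property) -- here is a minimal sketch. The level names and their ordering are the textbook ones, not taken from any particular system:

```python
# Illustrative clearance lattice; higher number = more sensitive.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: a subject may read only at or below its level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-property: a subject may write only at or above its level,
    so information can never flow downward to a lower classification."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "confidential"), can_write("secret", "confidential"))  # True False
```

The counter-intuitive part is `can_write`: a "secret" subject may append to a "top_secret" file but not to an "unclassified" one, which is exactly what prevents leaks by construction rather than by vigilance.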

  • Can you make it 100% impossible to physically break into a building? Nope. Someone is always smarter, has a bigger team of crooks, more money, etc. Can you make a prison no one can escape from? They keep saying they can and then people figure out a way.

    Apply the same to your cybersecurity.

  • Your home, your car, your store, your bank, your office, prisons, military installations, you name it. Every "secure" place of any kind can be broken into, if an attacker is determined enough. Just because cybersecurity is security "on a computer" doesn't make it a different category of problem. Security will always have to be designed, monitored, and enforced by a varying set of mechanisms. It will always be an arms race.

  • by chas.williams ( 6256556 ) on Sunday May 28, 2023 @05:06PM (#63557817)
    Theft has been around for thousands and thousands of years. Why haven't we solved that problem yet? The idea that time spent studying the problem or that restricting tools can prevent a particular crime is somewhat specious.
  • by ctilsie242 ( 4841247 ) on Sunday May 28, 2023 @05:55PM (#63557893)

    First, airgapping is not a 100% thing, as Stuxnet has shown, but it will at least force physical intervention to attack a target.

    We need to be asking why devices need to be on the Internet in the first place. If we need monitoring, that is doable in a read-only way (for example, using light pipes to pass the LED readout from an air-gapped appliance's display to a Raspberry Pi). Data diodes are not new. I've made those out of serial cables by cutting the Rx line, ensuring that data could only go one way. Of course, this wasn't a fast way of sending data, but it worked well enough.
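The one-way-link idea can be mimicked in software with a fire-and-forget UDP sender. A real data diode enforces one-way flow in hardware; this sketch only reproduces the no-reply-path property, and the host/port defaults are placeholders:

```python
import socket

def send_reading(payload: bytes, host: str = "127.0.0.1", port: int = 9999) -> None:
    """Send a datagram and never read a response: the monitored side
    pushes data out, and nothing on the network can push data back in
    through this code path."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (host, port))
```

The connectionless, acknowledgement-free nature of UDP is what makes it the natural fit here; TCP would require a return channel just to complete the handshake.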

    We need to go back to least privilege, defense in depth, and maybe even demanding makers of IoT devices provide manifest files for their devices, so firewalls can be configured to only let those specific sites out (although most IoT makers would just do wildcards... but it is a start.)

    From least privilege and defense in depth, we need a UL-like organization that works like Europe's Sold Secure, with gold/silver/bronze/etc. ratings. For example, a "silver" appliance would have had black-box testing done. A "gold" appliance would have had the source code scanned and built, compared with the shipping executables. A "platinum" appliance would have even more testing -- maybe even a tier requiring a language like Ada or SPARK, which ensures that all states of the software are provable.

    Security costs money, and the problem is the "security has no ROI" issue. We are in a free-fall economic downturn with no bottom in sight, so companies don't really care about security, even if it means flirting with bankruptcy. They are just looking to stay afloat. So, companies need to consider other models, or even donate so that tested and vetted open source solutions become something that can be adopted if no commercial provider is trustworthy enough (since top-notch security doesn't mean good profits, because the time it takes to do it right isn't profitable... thus this needs to be done by organizations and governments.)

    It might be wise to look at different physical networks with a networking protocol designed from the ground up to be secure, be it hardware doing signed and encrypted frames, to having key IDs instead of MACs used, with allow lists, to creating virtual tunnels as part of a machine-to-machine handshake, so that UDP-like data transfers with sliding windows are possible and fast, yet authenticated and encrypted. Then have packets carry a network ID, so routers don't ever throw a packet meant for one network onto another.

    Leave TCP/IP for the Internet, work on a network protocol for B2B applications that can either be used with a web of trust like PGP, a root hierarchy like TLS, or both.
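A minimal sketch of the signed-frame-with-network-ID idea, assuming a pre-shared per-link key and a 16-bit network ID (both assumptions for illustration; a real protocol would also need key provisioning, replay protection, and encryption on top of the integrity tag):

```python
import hashlib
import hmac
import struct

KEY = b"per-link-shared-secret"  # placeholder; real links would provision unique keys

def seal_frame(network_id: int, payload: bytes, key: bytes = KEY) -> bytes:
    """Prepend a 2-byte network ID and a 32-byte HMAC-SHA256 tag, so
    receivers can drop forged frames and routers can refuse to forward
    a frame onto the wrong network."""
    header = struct.pack(">H", network_id)
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + tag + payload

def open_frame(frame: bytes, key: bytes = KEY):
    """Return (network_id, payload), or None if the tag does not verify."""
    header, tag, payload = frame[:2], frame[2:34], frame[34:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted frame
    return struct.unpack(">H", header)[0], payload

print(open_frame(seal_frame(7, b"hello")))  # (7, b'hello')
```

Because the network ID is covered by the tag, an attacker cannot re-address a captured frame to another network without invalidating it, which is the property the comment asks routers to enforce.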

  • It is easy to show that CyberSecurity is unsolvable. There are multiple easy proofs. They include:
    • Proof 1: We can't know all attacks. We can't defend against unknown attacks.
    • Proof 2: Even if we could know all attacks, we can't afford to defend against all attacks.
    • Proof 3: Even if we could afford to defend against all attacks, if risky CyberSecurity behavior is more profitable, then we won't eliminate failure.

    A more useful question is, how can we make things better? After a couple decades of doing the things they currently call CyberSecurity, I have found several much more interesting questions. They include:

    0) Can we more accurately measure effective CyberSecurity success? Currently we are measuring the failures. Shouldn't we measure the successes?
    1) Can we do a better job of measuring the complete costs (to ourselves and society) of failure?
    2) Can we do meaningful epidemiology of CyberSecurity? Can we more accurately determine what helps, and how much it helps?
    3) Can we be more accurate and complete in distributing responsibility for failure and improvement?
    4) Can we create and sustain meaningful positive incentives that favor CyberSecurity over insecurity?

    I have found that when I improve these areas, I improve security.

  • Physical security has been an unsolved problem for as long as humanity has existed. Cybersecurity is likely to be the same.

    That's because it is an arms race, and also because you don't just have to keep the bad guys out, you also have to let the good guys in. Conflicting requirements and conflicting goals, depending on which side you are on.

  • ... Scott Adams:

    The pursuit of security has no finish line ...
    so technically, it's more like a death march.

  • Why are code segments writable after load? We could bake segment fixups into the CPU segment load process and use bitmasks rather than the hack of inserting interrupts into the code for debugging. Immutable code segments alone would greatly improve computer security, but the use of this "mutable code feature" is baked into way too many processes in our current architectures. I come from the era when you created a new kernel by hand-patching the existing running kernel and saving the modified kernel to a n
  • Like all engineering disciplines, every "solution" is actually a set of tradeoffs. We optimize for the features that are important to us. Tradeoffs include things like speed, cost, quality, durability, ease of use, complexity, configurability, and so on.

    Security too requires tradeoffs. You can build your security like Fort Knox, but that would be enormously expensive, and very few could pay the price. And a tradeoff for such strict security is that it severely limits usability by those who are *authorized*

  • If the LockPickingLawyer has taught me anything, it's that physical security is an unsolvable problem, and that's been around a lot longer.

    • People can buy cars and drive them into the ground, never checking or changing fluids or lubricants. When people buy computers, they do the same thing: drive them into the ground after loading them with weights. The typical business computer -- the boss rants and screams: security mechanics are a bigger ripoff than divorce lawyers. Reasonably secure computers have people paid to perform preventative maintenance.
  • https://xkcd.com/538/

    I'm surprised no one posted this before...

  • Isn't it essentially already solved? I mean Chromebooks are essentially malware free. Is it really malware if you have to install it yourself?

"When the going gets tough, the tough get empirical." -- Jon Carroll

Working...