Security / Bug / Programming

Does Code Reuse Endanger Secure Software Development? (threatpost.com) 148

msm1267 quotes ThreatPost: The amount of insecure software tied to reused third-party libraries and lingering in applications long after patches have been deployed is staggering. It's a habitual problem perpetuated by developers failing to vet third-party code for vulnerabilities, and some repositories taking a hands-off approach with the code they host. This scenario allows attackers to target one overlooked component flaw used in millions of applications instead of focusing on a single application security vulnerability.

The real-world consequences have been demonstrated in the past few years with the Heartbleed vulnerability in OpenSSL, Shellshock in GNU Bash, and a deserialization vulnerability exploited in a recent high-profile attack against the San Francisco Municipal Transportation Agency. These are three instances where developers reused libraries and frameworks containing unpatched flaws in production applications... According to security experts, the problem is two-fold. First, developers use reliable code that at a later date is found to have a vulnerability. Second, insecure code is used by a developer who doesn't exercise due diligence on the software libraries used in their project.

That seems like a one-sided take, so I'm curious what Slashdot readers think. Does code reuse endanger secure software development?
  • by Place a name here ( 4508093 ) on Saturday December 17, 2016 @08:41PM (#53505589)
    If you use a third-party library that has a bug in it, you'll be exposed to the same bugs as everybody else using that library. On the other hand, if you go it alone, your implementation will have bugs of its own. And if the library is well-maintained, it'll have fewer bugs than the thing you make from scratch.

    Implementing the common functionality from scratch can easily become another kind of "not exercising due diligence", particularly when dealing with complex code. Or to put it another way: code reuse may endanger secure software development, but not reusing code may also endanger secure software development.
    • by beelsebob ( 529313 ) on Saturday December 17, 2016 @08:50PM (#53505633)

      The issue isn't even "using 3rd party code", it's "statically linking 3rd party code". If people learned to dynamically link libraries, rather than compile them in, then this wouldn't be a problem at all. If the OpenSSL guys learned that distributing only as a static library is a bad thing, and learned to make their ABI stable, then Heartbleed would be a lot less of an issue.
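
      With dynamic linking, the fix arrives with the OS update. A minimal illustration, assuming a CPython that is dynamically linked against the distro's libssl (the common case on Linux):

          # Reports whichever libssl the interpreter loaded at startup; after
          # the OS vendor ships a patched library, this changes on restart,
          # with no rebuild of the application itself.
          import ssl
          print(ssl.OPENSSL_VERSION)   # e.g. "OpenSSL 1.0.2g  1 Mar 2016"

      A statically linked binary, by contrast, keeps its baked-in copy until someone rebuilds and redistributes it.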

      • In many environments you can't dynamically link; small embedded systems, for instance. Library makers MUST make APIs and ABIs stable, because the difficulty of adapting to changing libraries means that projects will be slow to update to newer versions.

        • by Z00L00K ( 682162 )

          A lot of small home routers fall into this category. Whenever you look at them, they are usually built on a very old kernel with busybox and some quickly thrown-together web interface with questionable security.

    • by Kjella ( 173770 )

      If you use a third-party library that has a bug in it, you'll be exposed to the same bugs as everybody else using that library. On the other hand, if you go it alone, your implementation will have bugs of its own.

      But the value of the target will be proportional too; the value of compromising "every server using OpenSSL" is huge compared to a custom hack that only works against your little company because of your home-grown library. There's no doubt that the main reason you use libraries is resource constraints, not security. If you're small enough not to matter, spending a man-year re-implementing what's already done is a no-go. If you have the resources to seriously consider going it alone, you're probably big enough...

  • by guruevi ( 827432 ) on Saturday December 17, 2016 @08:43PM (#53505599)

    Most people just install software like they would a washing machine: once it's in, they don't touch it anymore except to run it. After all, a washing machine has a strong motor, some controls, valves and a belt; there is virtually no maintenance necessary, and if any is ever needed, the machine is cheap enough and long-lasting enough that you can just throw it out and replace it.

    Maintenance is a necessary part of any complex machine that is not protected. A car is a good example: it sits outside, so it needs to be protected against burglary but also against the elements, salt and accidents, as well as given regular maintenance to replace worn and outdated parts.

    Computer systems are as complex and vulnerable as cars. They sit 'outside' on the Internet, where they have to be protected against 'bad people' and against plain obsolescence, but often they are treated as dumb appliances (if not actually called that by a vendor).

    People need to pay more attention and fund their "computer-mechanics", not just the manufacturers of their "car" and demand that they are able to inspect and repair their own stuff regardless of who invented it.

    • by murdocj ( 543661 ) on Saturday December 17, 2016 @08:54PM (#53505647)

      The problem is that if software is working, and you "upgrade" it, two things can happen: it can continue to work, or it can fail, often mysteriously. Given that, it's not shocking that people tend to leave software alone if it is currently functioning.

      • by guruevi ( 827432 )

        Same goes for a car: you can try fixing it yourself, and often you end up with a problem where your car 'mysteriously' doesn't start because you left the spark plugs out or something stupid like that. Even mechanics at garages have this problem, although less often, and they are able to fix it quicker.

        If you absolutely need a car while it's being fixed, rent one or get a second one, and don't try to fix it yourself if you don't know what you're doing. Would you leave your car without an oil change just because you can...

        • Same goes for a car: you can try fixing it yourself, and often you end up with a problem where your car 'mysteriously' doesn't start because you left the spark plugs out or something stupid like that.

          Speak for yourself. If I service my car, I know what I need to do, and I find out what is likely to be needed in the near future. If the dealer services the car, I find out which jobs are easiest and/or most profitable. Why would I leave my car with a mechanic for an oil change when I can do it myself in less...

      • The problem is that if software is working, and you "upgrade" it, two things can happen: it can continue to work, or it can fail, often mysteriously. Given that, it's not shocking that people tend to leave software alone if it is currently functioning.

        This! It is a huge issue for vendors, writers, and users alike. One OS vendor with a track record of updates breaking systems, a record that led many people to refuse updates, recently went no-choice on us. With predictably bad results, which just reinforces the idea of not updating if it works.

        I did a data transfer on a computer a couple of years ago that had a never-updated Windows XP OS on it. It was in use every day, and it was not updated once. Not the OS, not Flash, not the browser, nothing. What brought...

    • by nmb3000 ( 741169 ) on Saturday December 17, 2016 @11:18PM (#53506021) Journal

      Disagree. Software is not a washing machine nor a car. It does not break down over time, it is not susceptible to the elements, and it does not age in any notable way. There is literally no reason a program written and working in 1970 cannot continue to execute as well today as the day it was written. And it does! Industrial control systems, ancient government and finance mainframes, and primitive vehicle control systems all do it every day. Software doesn't rust and bit-rot is not a thing. Telling people that they need to keep their programs polished to prevent tarnish sounds like something a sketchy Geek Squad-esque computer shop might do to squeeze a hundred bucks out of naive customers.

      I update my software sparingly and with caution. Generally speaking, it's much more likely that usability will be lost or features broken than that a serious security issue will be fixed. If it's a mobile app, it's much more likely that ads were added or made worse, that a feature I've used for 2 years was removed or horribly changed, or that increased permissions are requested so my personal info can be sent away to some third party, than that any features I actually want were added or bugs fixed.

      Today's always-updated model has some advantages, but every single one is counterbalanced by the negatives. Auto-updating browsers help prevent the mire of zombies that was IE6, but they also mean you're at the mercy of Microsoft, Mozilla, and Google when it comes to feature removal and their incessant need to screw around with the UI for no valid reason. Or that addon you really like and rely on suddenly stops working because the author hasn't updated it yet.

      Yes, updates that address security problems are an important topic, but all too often those updates are bundled up with all sorts of crap that few people want. It would be real nice if software companies would keep the two separate, and make it clear just what has changed between versions.

      • by guruevi ( 827432 )

        This doesn't mean that software updates aren't necessary.

        Your mainframe software is probably rife with bugs and issues and would be very insecure if it were connected to the Internet in the same way it was in the 1970's. Back then, we "knew" the person (often by name) who was responsible simply based on IP address, and thus vandalism/criminal intent wasn't as much of an issue. The bits don't age, indeed, but the methods, underlying hardware platforms and security models behind them do.

        The only reason we have code...

        • This doesn't mean that software updates aren't necessary.

          Your mainframe software is probably rife with bugs and issues and would be very insecure if it were connected to the Internet in the same way it was in the 1970's.

          Ahh, the 1970's internet. Those were good times, were they not?

      • by Dutch Gun ( 899105 ) on Sunday December 18, 2016 @12:11AM (#53506197)

        I want to agree with you in theory (especially about how apps often get made worse for no good reason), but in practice, it's simply not practical to leave most software alone - at least, not if you want it to have any sort of reasonable lifetime. The difference is that modern software rarely lives in isolation. The ecosystem on which it runs... the OS, its system libraries, third-party libraries, the tools with which the software was developed... these are all moving forward in time.

        If you leave a piece of software alone, it experiences "bit rot" NOT because it's changing, but because everything around it is changing. More importantly, the more time passes between updates, the more difficult those updates tend to be, until it becomes easier to just rewrite the damned thing, since the original development system on which it was written may not even exist in the same state anymore. You may argue that software shouldn't always be changing, but you might as well ask for the Earth to stop spinning. Security issues alone will force a minimal level of change to occur.

        Updating continuously has its pain points, but any issues that come up tend to be smaller issues, and can be dealt with more quickly. For example, just the other day I realized MacOS's system Cocoa libraries slightly changed something which broke my code in a number of places, even though I wasn't doing anything sketchy with the API. But a slight change in definition meant I needed to cast some interfaces explicitly, and add new interface functions to retrieve those explicit interfaces. It was a bit of work to track this down and solve it, even for the relatively small amount of code I was dealing with.

        I saw one person on StackExchange say they "solved" it by linking their project against the older version of the library. That "solution" just stacked some technical debt on some poor future programmer, even though it works 100% for now. It may even allow such code to propagate in the future, making the eventual conversion even worse when it happens.

        Moreover, leaving functionality alone and patching only security issues becomes a game of maintaining a *very* long history of supported versions of the software. How long does support last? Yes, long-term support is the correct answer for some software, but remember that companies generally pay very well for these long-term support versions, even for Linux, because maintaining a current build is expensive (I have some recent experience with this). For most consumers, the simplest and most economical option is just to keep everything up to date, and yeah, that means taking the bad with the good.

      • bit-rot is not a thing

        Bit-rot does happen. You are obviously not a Windows user, and never experienced DOS 4.1 or even early versions of the extX file systems. I have also had hard disks that seem to lose bits over a period of years.

        Having said that, I have run OpenBSD on Sparc machines (not Internet connected) for over 5 years without an update of any kind (aside from down-time to clean the air filters annually).

        I totally agree that auto-update is a festering can of worms, and bundling makes it worse.

  • by mthamil ( 1166787 ) on Saturday December 17, 2016 @08:44PM (#53505603)
    The alternative, which is everyone writing their own mostly terrible implementations, is far worse.
    • If it means that it would be impossible to create stuff like Twitter, perhaps the positives outweigh the negatives.
  • by davidwr ( 791652 ) on Saturday December 17, 2016 @08:50PM (#53505629) Homepage Journal

    Granted, if it's closed-source you have to trust the library vendor. If it's open-source, you either have to do due diligence or trust someone else who claims to have done so.

    I assume we are talking about re-using source code, linking with statically-linked libraries, and using "private copies" of shared library binaries (e.g. /usr/local/bin/applicationname/lib/lib1.so or C:\Program Files\Application\DLLs\MyDll.dll). With "public" shared binaries (/usr/lib/sharedlib.so or C:\Windows\...\MSDLL.DLL), you are relying on the library or OS vendor to keep things patched.

    Here's an example:

    I know of a popular product that uses its own private copy of Java. If the vendor doesn't update their customers' versions of Java on a regular basis, an attacker can exploit it, even if the user is updating the "Oracle" version of Java on a regular basis. That's bad. On the other hand, they would probably be in a worse position if the vendor re-wrote the functionality of Java in-house, as that code would have its own set of bugs and would likely NOT be as well maintained as Java is. The solution is to use the "Oracle" version of Java instead of a private copy, OR push out updates to the private copy within a day or two of Oracle pushing out theirs.
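
    As a sketch of how a vendor (or an admin) might catch that "stale private copy" drift, something like the following could compare the bundled runtime against the system one. The install path is made up for illustration, and "java -version" prints to stderr on most JDKs:

        import subprocess

        def java_version(java_binary):
            # "java -version" reports on stderr on most JDKs
            result = subprocess.run([java_binary, "-version"],
                                    capture_output=True, text=True)
            return result.stderr.splitlines()[0] if result.stderr else "unknown"

        bundled = java_version("/opt/someapp/jre/bin/java")  # hypothetical private copy
        system = java_version("java")                        # whatever is on PATH
        if bundled != system:
            print("private JRE differs from system Java:")
            print("  bundled:", bundled)
            print("  system: ", system)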

  • ... upon whether or not the reused code is written in a manner that is proper and secure. If insecure code is reused, well, all bets are off. If secure code is properly reused, then the results can be good.
    • by Jeremi ( 14640 )

      ... and the problem there is, nobody can really tell whether a given library is secure or insecure (exception: after an exploit is found, we can say with certainty that it was/is insecure ;) ).

      So the question becomes, how do we know which third party libraries we can afford to trust the security of our application to?

      • It's only 'secure' until someone finds a way to exploit it.

        Always assume that they will find a vulnerability eventually, but it's still your job to eliminate all of the vulnerabilities you can, and make it as hard as feasible for any attacker to get in.

        Of course there's the issue of resources to deal with. Sure, if you had infinite time and other resources to work on the software before release/implementation, you could get something that would take longer than the heat death of the universe to become vulnerable...
        • True, but I think the implied point is that once that exploit is discovered, the attackers automatically have access to all software that uses the library. If people wrote their own, the flaws would have to be individually discovered. I'm not convinced which is better.
  • Secure Software (Score:5, Interesting)

    by ledow ( 319597 ) on Saturday December 17, 2016 @08:53PM (#53505645) Homepage

    What's needed is better operating system management, not better development practices.

    Once a piece of software is patched, the problem is fixed. That's not the issue at hand. The issue is that that fix then does not make it back to production systems in a decent time.

    What's needed - and I've posited this a number of times for a number of things - is a central repository which lists which, say, Linux packages are secure and which are not, and which algorithms, hashes and cryptosystems are compromised or not.

    Then there needs to be an API. Running a production system live on the Internet? It will check its version numbers and package hashes against the centralised "uncompromised versions" service. If there's a discrepancy - a package that's been marked as potentially compromisable, but which has an updated or patched version available - the OS is tainted, much like the kernel is tainted. If MD5 is retired and any software on the machine still utilises it, the system is marked as tainted as soon as it checks into the centralised API.

    We've needed this for hashes and cryptosystems for a long time. SHA-1 is retired, but how do you KNOW that? And how do you know what uses it? Nobody would recommend building a system using WEP or MD5 in this day and age, but nowhere is that listed in a queryable manner.

    And then you start saying "Why weren't Facebook checking their systems against the Secure Software Database? Their own fault if they were compromised.", "Why did Yahoo not re-hash with a listed-good algorithm as soon as their existing hash was obsolete?", "Why were they compromised? Because they turned off database checks and updates? Idiots".

    There needs to be a way for production systems to algorithmically say "This is no longer acceptable practice" and start making a fuss such that the system maintainers are forced to start upgrading, with specified timescales (the API could easily obsolete stuff on a set timescale, with warning enough to test changes to newer algorithms).

    Then, if you're compromised because you ignored this, or because you hard-coded MD5 instead of using libraries, all the fault will lie in your third-party, unlisted libraries. And then you might be able to actually start forcing vendors to publicly state "All our software uses the latest database-compatible algorithms, software and patches" rather than just hope that someone at Google isn't just running Slackware 2.0.

    The software can be fixed in a trice. The problem is getting that fix out to production systems in good time, and not being able to sufficiently shame those who don't manage their systems (it's easy to blame a hack on the software, rather than your lax update practices).
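
    To make the client side of that check-in concrete, here is a minimal sketch; the endpoint and JSON shape are hypothetical rather than any real service's API (public advisory databases along these lines do exist now, OSV.dev being one):

        import json
        import urllib.request

        ADVISORY_API = "https://advisories.example.org/v1/query"  # hypothetical

        def is_tainted(package, version):
            # Ask the central service whether this exact version is marked
            # as potentially compromisable.
            payload = json.dumps({"package": package, "version": version}).encode()
            req = urllib.request.Request(ADVISORY_API, data=payload,
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp).get("vulnerable", False)

    A host that finds any of its installed packages flagged would mark itself tainted and start making a fuss, exactly as proposed above.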

    • > central repository which lists which, say, linux packages are secure and which are not. Which algorithms, hashes and cryptosystems are compromised or not.
      >
      > Then there needs to be an API - running a production system live on the Internet? It will check its version numbers and package hashes against the centralised "uncompromised" versions service

      That's precisely what I spend 40 hours a week building and maintaining. It's a very helpful part of a comprehensive security strategy. Other good parts...

  • by bsDaemon ( 87307 ) on Saturday December 17, 2016 @08:59PM (#53505669)

    Code reuse is a fundamental tenet of secure software development lifecycles. You reduce the chance of introducing new vulnerabilities by limiting the amount of new code per project to the core business logic and leveraging existing modules for the support infrastructure.

    That said, if the module you reuse has problems then you aren't necessarily better off. The modules need to be vetted and maintained appropriately. Code reuse isn't the problem so much as taking random crap from the internet that solves your problem without assessing its suitability for inclusion given your threat model or properly assessing it for vulnerabilities.

    Monoculture can be an issue from certain perspectives -- flaws in the libssl portion of OpenSSL affect a huge percentage of the internet. However, they only need to be fixed once and consumers of the library can all receive the update, assuming proper patch management in the environment. If your company uses 15 different libraries to perform a specific software function across different product lines without a basis in engineering requirements constraints, you're doing it wrong.

    Security being a subset of correctness, I think overall it is b.s. to say code reuse is a problem. You just need to make sure you are reusing correct, vetted and maintained code. I.e., don't take strange code from someone's github to use in your enterprise software without reviewing it.

    • by Anonymous Coward

      Yes. And that is why we (distro developers) fight so hard against "embedded library copies" in Linux distros: when one of those escapes tracking, a supposedly fixed bug will linger on.

      Developers cannot be trusted to keep their embedded libraries current. Ever. They won't; it is not in their best interest to do so: it takes time and effort, it breaks their build, and it is uninteresting and dull. Issues start in ecosystems where this kind of crap is the rule (the JavaScript scene, for example), and get truly ridiculous...

    • by skids ( 119237 )

      This. It also helps if codebases "do one thing and do it well" and when they feel mission creep seeping in, break off extra functionality into a well separated module. That helps confine bugs to more easily audited units.

  • You're responsible for the code you write, and if you are using existing libraries, you are responsible for tracking the packages you use. If they update, and your installer includes them, you need to update your installer. You may not feel this justifies pushing updates, especially if the change is to functions you did not use, but the program really should be checking for library updates and asking the user if they should be updated - and sometimes there are reasons why they cannot be. At that point, it...
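
    A minimal sketch of such an update check, using PyPI's JSON API (that endpoint is real; the package name is just a stand-in for whatever your installer bundles):

        import json
        import urllib.request
        from importlib.metadata import version  # Python 3.8+

        def latest_on_pypi(name):
            with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as r:
                return json.load(r)["info"]["version"]

        name = "requests"  # stand-in for a bundled dependency
        installed, latest = version(name), latest_on_pypi(name)
        if installed != latest:
            print(f"{name} {installed} is behind {latest}; ask the user about updating.")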

  • Dearth of independent audits endangers secure software development.

    • by skids ( 119237 )

      Yup. Everyone wants a philosophical fix for the security (and general QA) problem. Nobody wants to admit the need for many hours of less-than-prestigious, methodical work.

  • ..reinventing the wheel. Which can easily be done badly.

    Even for the most common search / sort algorithms, there's a good chance you won't code them perfectly the first time.

    Code re-use, code-from scratch. Both have their place. Both require intelligent thought.
  • "developers failing to vet third-party code for vulnerabilities"

    LOL. What would you suggest? Code inspection by magically infallible developers, who have their own work to do creating new features and probably wouldn't recognize vulnerabilities in their own code?

    Most companies and projects do not employ security researchers and specialists, not for their own code and certainly not for anyone else's, and they are not about to do so. And even if they could afford it, I wouldn't be too hopeful of the bugs actually...

    • Code inspection by magically infallible developers

      This ... A thousand times this!

  • by FlyingGuy ( 989135 ) <flyingguy&gmail,com> on Saturday December 17, 2016 @09:49PM (#53505787)
    Like:
    • Developers who just say, "we have to use <framework X>", because that is all they know how to use and they cannot take the time to learn the basics of good software design principles.
    • Development Managers who could not code their way out of a wet paper bag, making decisions for the development team when they know very little about anything.
    • The coders who write "tools" so they can make databases look like Java, then hate the DBAs when they point out that the database is a vertical beast and not a horizontal one. And yes, I am speaking about the fools who write Hibernate / Spring, which is so old, so wrong, so wrong-headed as to make it practically useless in the day of a modern DB engine. Hibernate has been broken into more times than I can count.
    • SOFD, Software Of The Day... So a guy writes something he thinks is cool, and some fool decides it's great and uses it in a forward-facing system. It is never strenuously tested, and as likely as not never developed further, yet there it is in some critical code.
    • Using libs or frameworks that are so badly coded that they include tens of thousands of lines of code that are never seen or tested, because some 24-year-old kid grabs a function that he thinks is going to save him an hour or two.
    • Because Java! Not the worst programming language ever, but the problems that come from garbage collection and programmers who THINK they understand it.

    • Because JavaScript, which is the worst period, ever written period, kludged upon period, ever written period!!

    We need to get off of the HTML/CSS/JS crazy drunken bandwagon and get back to basics! Re-boot and re-tool the entire process, because if we don't, we are just screwed, and more break-ins will happen as things become more and more complex. You need to let DBAs build the database portions of things and secure access. You need to let systems people write HARDENED code. Let web guys make things pretty. You need to stop demanding a single-point tool and go back to individual interworking parts, written by competent coders, which are then put through a very severe hardening process.

  • by AuMatar ( 183847 ) on Saturday December 17, 2016 @09:53PM (#53505799)

    A well-known, maintained library such as OpenSSL? You're far more secure using the open source library. Not only do you need to be an expert to correctly and securely implement that level of cryptography, but your own version can contain all sorts of subtle bugs you're unlikely to catch.

    Now if you're talking about some random library you found on github because some guy on stackoverflow said to use it? That makes you less secure. Don't put random things you found on the internet into your program without reading the code, understanding what it does, and doing a full audit on it first.

    And there's a special place in hell for anyone who uses gradle, nvm, or anything else that automatically downloads a library without specifying an exact version. You're just asking to be screwed by a trojan horse. Leftpad was about the best-case scenario; imagine if leftpad had changed its code to be a backdoor instead.
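
    One way to pin things down is to record a content hash when you vet a library and refuse to build if it ever changes. A sketch (the digest is a placeholder and the archive path is made up; lockfile mechanisms such as pip's --require-hashes mode do this natively):

        import hashlib

        PINNED_SHA256 = "0123456789abcdef..."  # placeholder recorded at vetting time

        def verify(path):
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest != PINNED_SHA256:
                raise RuntimeError(f"{path} does not match the vetted hash")

        verify("vendor/leftpad-1.0.0.tar.gz")  # hypothetical vendored archive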

    • by skids ( 119237 )

      Don't put random things you found on the internet into your program without reading the code, understanding what it does, and doing a full audit on it first.

      Nobody would get anything done if they had to fully audit all their dependencies.

      However, they should spend some time auditing some portion of it. Especially if they have specialized domain knowledge which allows them to audit that section better.

      In other words, improve, even if slightly, some of the things you use. Contribute. Don't just take. If everyone did that, then security would improve.

      • by AuMatar ( 183847 )

        Sure they would. They'd just get things done slightly slower. But the big, well known dependencies aren't the problem- they may find an occasional security bug, but they'll be quickly patched. The problem is when you use some random library that doesn't have a large user base and contributor base. If you find something on github and use it without reading and understanding every line of it, you're incompetent.

  • by hey! ( 33014 ) on Saturday December 17, 2016 @09:58PM (#53505813) Homepage Journal

    So does reinventing the wheel. So does cut-and-paste coding.

    Code reuse leads to insecure software in much the same way that breathing leads to cancer.

  • Code Re-use and open source software lead to super-reliable, robust, and secure code that is the foundation of the Internet. With all the eyes constantly looking over the code it continues to get better and better. And since no program or company has time to write everything from scratch, code will be reused until we can teach computers how to write code.

  • Good code is good code. Bad code is bad code. Knowing the difference is what makes you a "programmer".
  • As has already been stated, you generally want to prefer a third-party library over a custom implementation for most security-related code. This is doubly true for well-defined algorithms, which are implemented in well-tested (and preferably open source) libraries.

    However... there's an inherent danger in adopting third-party libraries based on uninformed assumptions about quality, as I'm personally well acquainted with. If you have a manager who is prone to making baseless assumptions, and downloading...

  • Hackers will typically target the most popular libraries, because these will be found on the largest number of computers. If you want your software to be more secure, use the #2 or #3 library, assuming they have appropriate functionality. Hackers are less likely to attack these.

    This principle is beneficial in other ways as well. If you are using commercial libraries, the #2 or #3 brand will try harder to support you, the customer, because they want to catch up with #1. The #1 brand, on the other hand, tends...

  • Analogy time: Imagine homes with no Circuit Breakers. Any short circuit anywhere could burn down a house. Lawyers and lawmakers arrive on the scene and declare that everything you want to plug in needs to be short proof. Every product has to be certified not to burn down houses, no matter what failure happens. The designers of even a simple lamp can end up being charged with murder, and as a result nobody really wants to use electricity.

    We have circuit breakers, which limit the amount of current to be supplied...

    • That's not going to fix things, and here's why:

      Imagine a system where Apache is running as a web server, can only access the database, and nothing else. In fact, limit it further: it can't even write/read to the database, just forwards requests to an application server.

      A hacker who manages to break in to this Apache instance still has all the user data that is streaming through the server, which is quite a lot.
      • by ka9dgx ( 72702 )

        Yes, being able to copy the flow of data to a user would be bad, but not system-compromising bad. And why would an instance of Apache be able to connect to more than one IP address? Each thread would be isolated from each other, further limiting the information leakage.

        • Yes, being able to copy the flow of data to a user would be bad, but not system-compromising bad.

          Probably the most valuable thing on that system is the flow of data.

    • by ledow ( 319597 )

      In the UK, every device has to be supplied with a mains plug pre-wired.

      Every such mains plug has an individual fuse in it, of the correct rating for the appliance.

      And every circuit is on an RCD / breaker on the fuse board.

      And every fuse board has an RCD / breaker.

      And the house has a fuse for the fuse board.

      Don't lay the blame at one point or one component. Isolate them all.

      As pointed out, an application with permissions to private data is vulnerable no matter what you do - a compromise is a compromise and...

  • One of the best examples of Betteridge's Law of headlines [wikipedia.org] I've seen in quite a while! :)

    No, code reuse obviously does not endanger secure software development. It was hard enough for the experts to get ssh right, and you think you're just going to whip one up from scratch? Yer a freakin' idiot if you think that!

    Code reuse (like pretty much everything else associated with software development) has risks and benefits. Learn what those risks and benefits are, and stop searching for magical "silver bullets" that...

    • by Xtifr ( 1323 )

      Correction to my previous post. A real silver bullet fired directly through your CPU will solve your bug problems, because you'll no longer be able to run software--hence, no bugs. Aside from that, silver bullets that fix all your problems are imaginary. :D

  • There is (conceivably) a remedy available under Copyright Law. Many "Internet of Things" devices (in particular, network cameras) run (at least some) libraries that were licensed under the GNU LGPL. One of the conditions of the LGPL is that users be able to - at will - replace the device's LGPL'd libraries with their own version (with the same API). If these devices do not have such an 'upgrade' mechanism available (and I suspect that few, if any, do), then they could find themselves legally liable.

    If the...

    • by GuB-42 ( 2483988 )

      That's the case with LGPLv3, not LGPLv2.
      And the "anti-tivoisation" clause as it is called only applies to consumer products.

  • The reality is most companies don't care about security.

    When was the last time your boss added a security audit to your sprint? When was the last time someone said, "make sure you add enough time on this task to make it secure"? Security is not a priority for companies, so we don't spend time thinking about it.

    For these reasons I advocate irresponsible disclosure [medium.com]: we need to give companies motivation to improve their code.
    • I'm not really going to answer those questions, for we're a very special case with very high security requirements (security is one of the major parts of our specs; usually it's about the same length as the feature demands), so I can't complain about not being asked enough. Actually, it would be nice to at least be left out of the meetings that discuss the color of the user interfaces...

      My problem is on the other hand that I cannot outsource anything. I cannot find any partners that can actually comply...

  • My argument is that many programmers design needless complexity into things because they believe they can just "outsource" their problems.
    For example, people design systems with complex file formats they could not parse themselves, then they load a script interpreter which will parse the format for them... and, as a side effect, execute any code in that file.
    If they had chosen a simpler file format, a few lines of code would have been able to parse it perfectly well.
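
    For instance, a deliberately simple key=value format needs only a few lines to parse, and parsing it cannot execute anything. A sketch:

        def parse_config(text):
            config = {}
            for line in text.splitlines():
                line = line.strip()
                if line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    config[key.strip()] = value.strip()
            return config

        # Contrast with handing the file to an interpreter, which runs
        # whatever the file contains:
        #   exec(open("settings.py").read())  # any code in the file executes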

    Also there is one particularly toxic way of...

  • It's not a problem when a programmer pulls code out of his archive to put it to new use. What is a problem, though, is people googling their problem du jour that they cannot solve, finding code that more or less does what they want, adapting it to their specs and considering that programming.

    What you will usually find among the first hits that way is tutorial code that showcases the function you might be after, but without any sanity checks and without even the barest minimum of security in mind, simply because...

  • It is indeed simplistic and one-sided to assert that "using libraries exposes systems to bugs". The problem is far deeper, and affects both library and bespoke code bases. First, consider the somewhat psychological problem that often arises when using libraries. The various flavors of public code (e.g., Open, Free, etc.) have as an implicit or explicit basis the presumption that since the source is freely available, those using the libraries will peruse the source, either to fix shortcomings or to correct...
  • by nuc1e0n ( 924639 )
    But more accurately, code reuse permits more software development in general, including insecure software development.
  • Put a string of password-like alphabet soup in a routine. Then, when that routine becomes ensconced in some other application, it can still be searched for and flagged for replacement or removal. It would only take about 128 bytes to clearly note application, programmer, usage, date, time, etc. It used to be a size issue, but with today's gigabyte memory sizes, that issue has diminished dramatically.
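
    A sketch of what such a marker might look like (all of the field values are made up for illustration):

        # A fixed, grep-able string embedded alongside the routine itself, so
        # the copy can be found wherever it was pasted.
        PROVENANCE = ("MARKER:app=billing;module=tax_calc;author=jdoe;"
                      "date=2016-12-17;note=replace-by-2018")

        def tax_calc(amount):
            # ... the routine that gets copied around ...
            return amount * 0.20

    Later, grep -r "MARKER:app=billing" /srv finds every embedded copy that is due for replacement or removal.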
  • The big issue I see in my daily work life is that management acts as if using a third-party solution, be it proprietary or open-source, means we will receive perfect code at the beginning and never have to update it. We lock versions early in the dev cycle, but if a new version comes out mid-development, there is a general distrust of changing to the new one.

    And then, when the inevitable critical issue is discovered after we have released, we have no efficient plan for how to update. At least GPL solves that...

  • The problem with unbounded pointer vulnerabilities (stack smashing, return value changing, parameter changing) is the unboundedness of the pointer. ONLY the programmer and (for some languages) the compiler know what values are legal for a pointer offset.

    Programmers aren't enough.

    So I use GoLang (but Java, Rust, and Node are all similar in this regard) because I know that all my pure-Go 3rd-party libs cannot have unbounded pointer errors. This means Go's SSL, not Go's OpenSSL wrapper. A userspace written in...
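
    The guarantee is the same in any bounds-checked runtime; a tiny illustration (shown in Python here, though Go panics the same way on an out-of-range index):

        buf = bytearray(8)
        try:
            buf[9] = 0x41           # one past the end, and then some
        except IndexError as err:
            print("caught:", err)   # no smashed stack, no altered return address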

  • If you manually install libraries, this is also going to be problematic. I've often found myself acting as my own package maintainer. It's hard to keep track of everything, and some libraries or packages can break with each version, needing code changes. In niche cases the libraries are being built not simply because the system's copies are out of date and newer ones are needed; sometimes they need to be patched in a way that won't be accepted upstream, which raises the maintenance cost of upgrading. VCS helps, but it's still a big...
