Report: Aging Java Components To Blame For Massively Buggy Open-Source Software

itwbennett writes: The problem isn't new, but a report released Tuesday by Sonatype, the company that manages one of the largest repositories of open-source Java components, sheds some light on poor inventory practices that are all too common in software development. To wit: 'Sonatype has determined that over 6 percent of the download requests from the Central Repository in 2014 were for component versions that included known vulnerabilities, and the company's review of over 1,500 applications showed that by the time they were developed and released, each of them had an average of 24 severe or critical flaws inherited from their components.'
  • by Dr_Barnowl ( 709838 ) on Tuesday June 16, 2015 @11:48AM (#49922329)

    Why?

    Because if you don't test your code, you don't know if changes to it break it.

    Changing the components your code is composed of is a big change.

    Therefore: people get nervous about changing the components they have used (even changing the version).

    What should be happening: when you're planning a new release, raise the component versions to the latest and run your test suite. If it passes, good job, release it.

    What is actually happening: the version numbers never get edited, because that version worked, and if you change it, OMG, it might stop working.
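
    A minimal sketch of that release-time workflow, assuming a Maven build and the versions-maven-plugin (the commons-net coordinates are just an illustrative dependency):

      <!-- pom.xml: keep each dependency version in one property so a release-time bump is a one-line edit -->
      <properties>
        <commons-net.version>3.3</commons-net.version>
      </properties>
      <dependencies>
        <dependency>
          <groupId>commons-net</groupId>
          <artifactId>commons-net</artifactId>
          <version>${commons-net.version}</version>
        </dependency>
      </dependencies>

    Running 'mvn versions:display-dependency-updates' then reports which dependencies have newer releases; bump the property, run the test suite, and ship if it's green.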

    • Testing is no cure for bad design or bad coding, which are the -root cause-. The specific design and code techniques to prevent vulnerabilities need to be better communicated and enforced (by open source code reviewers, as well as commercial developers).

      That's not to argue testing is unimportant. But it's not the root cause of vulnerabilities, and it's not clear to me that we know how to test for a lot of vulnerabilities.

      • Oh, yes, testing doesn't fix bad design. But it helps to avoid the problem mentioned: projects keep using component versions with known problems, problems that have already been fixed in newer versions.

        • ONLY if there are tests to catch the problems that exist in the earlier version!

          Either (a) there was a test contemporaneous with the faulty component that wasn't run; or (b) a fault was discovered later, a test for it was developed, and that test was then associated with that component version.

          • He's not talking about vulnerabilities in components. He's talking about testing the application that uses the component to see if the application functions properly. The test on the application is to see if the latest version of the component will break the application. That was his point - people tend not to use the latest version of the component for fear it will break the application that uses the component. You are talking about security testing the component. That's a different test.
        • I am not going to argue that having better automated tests is not a good thing. It is a good thing.

          The problem is more about how these third party components are maintained. With the majority of third party components I have worked with, upgrading to a newer version of the component meant rewriting large sections of code just to get the project to compile. The interface to the component changed. The tests would cover the happy paths and some bad paths, but a lot of manual testing and mitigation of new bugs would still be required.
        • Testing won't fix bad design, but it might give you a handle on how bad the problem is.

      • by DarkOx ( 621550 )

        Root cause or not, tests are what let you 'fix' the vulnerabilities, refactor to correct design issues, etc.

        I have to agree with the parent. Having good test coverage is the difference

        between: We are going to be exposed for weeks while I 'try' to understand all the impacts of this change and hope QA spots any potentially disastrous bugs before we go to production.

        and:
        Cool, fix is in, tests are passing. Let's let QA run the build for a day or so and we can get this out the door before it hits Slashdot.

        • by lgw ( 121541 )

          These are flawed Java components, not complete systems. What kind of component-level testing is generally useful for avoiding security issues? Most of the issues I've seen have either been from each component assuming the other was checking for something, or from anti-patterns like depending on "string cleaning" to avoid injection attacks - implementation choices that are bad practice, but have no flaw you can point to at the time the component is written.

          Plus, in general, insecure code tends to be the result...

    • Well tested code is best. So the few unit tests you have should be run many times to ensure code is well tested.
    • by DickBreath ( 207180 ) on Tuesday June 16, 2015 @12:53PM (#49922853) Homepage
      The root cause: poor management (in most cases)

      The root cause is not poor unit testing. Not bad developers. It is managers who won't allow the change to be made. It ultimately will always come down to money. They are unwilling to spend on having a reasonable staging environment that closely mimics the production system such that making these changes could be done safely and receive proper testing. And people to do that work also cost money.

      In short: management doesn't care, due to money. So the product can just self-destruct. (like SourceForge)
    • by Yaztromo ( 655250 ) on Tuesday June 16, 2015 @01:06PM (#49922957) Homepage Journal

      What should be happening: when you're planning a new release, raise the component versions to the latest and run your test suite. If it passes, good job, release it.

      What is actually happening: the version numbers never get edited, because that version worked, and if you change it, OMG, it might stop working.

      Part of the problem I run into with this is that sometimes projects stick with old dependencies because at some point, some major version came along that significantly changed the organization of the API, in such a way that the latest component version can't just be dropped in but requires significant resources to refactor your code to use it. Getting management buy-in for that when there aren't any big customers breathing down their neck to get a flaw fixed can be nigh on impossible.

      I ran into this recently myself. During internal testing, I discovered a flaw in our product when accessing any of our web resources using an IPv6 destination IP in the URL (i.e.: a URL of the form http://[<IPv6 address>]:18080/). A quick bit of debugging showed that an external library we had been using for several years was doing some brain-dead parsing of the URL to pull out the port number; it was just doing a string split after the first colon it found, and presumed the rest was the port number.
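
      To illustrate the failure mode (a hedged sketch, not the actual library's code; the address and class name are made up):

        import java.net.URI;

        public class PortParse {
            // Naive parse, as the buggy library was described: everything after
            // the first colon is presumed to be the port. For an IPv6 literal the
            // first colon is inside the address itself, so this returns garbage.
            static String naivePortField(String url) {
                return url.substring(url.indexOf(':') + 1);
            }

            public static void main(String[] args) {
                String url = "http://[2001:db8::1]:18080/";
                System.out.println(naivePortField(url));       // "//[2001:db8::1]:18080/" - nonsense
                System.out.println(URI.create(url).getPort()); // 18080 - java.net.URI handles bracketed IPv6
            }
        }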

      Modifying the Maven POM to use a newer version of the API in question was initially difficult because the project had since reorganized their own library structure, breaking things into multiple smaller JARs. Except that some of the functionality was actually _removed_, and isn't available at the latest API revision (functionality we had been using, naturally). Classes had moved around to different packages than where they were previously, and various interfaces appear to have been completely rewritten.

      Upgrading to a version of the library that actually fixed the flaw was going to be akin to opening Pandora's Box. Unfortunately, our former architect (from whom I inherited this code) was the type of guy who just liked to throw external libraries at every problem. In the end we had to document the fault for all current versions of the product, and now I'm trying to get management buy-in to do the work necessary to upgrade the library in question for the next version of our product. And this is for just one library out of over 100 that need similar attention.

      Suffice to say, I'm not happy about this state of affairs. Unlike the previous architect, I push against using third-party libraries as our solution to everything. If I were allowed to rewrite everything from scratch, we could avoid these problems. Things are unfortunately messy out here in the real world, and when libraries decide to significantly change their interfaces your program uses to access their functionality, no amount of unit tests is going to make upgrading those libraries any easier.

      Yaz

      • Hit the nail on the head with the problems of external libs.

        > Unlike the previous architect, I push against using third-party libraries as our solution to everything.

        You're not alone; my boss and I are of the same mindset, and I've noticed the same pattern amongst coders:

        * The better programmers minimize the number of external libs.

        * The inexperienced / junior programmers are so gung-ho to include every library under the sun that it is almost dizzying. And then they complain about why their projects take...

        • by Anonymous Coward

          * The better programmers minimize the number of external libs.

          Yes. Good programmers know when to use an external library. Great programmers know when not to use an external library.

      • Wait, what?

        If you write code, part of the documentation before you start should be a "risks" statement, where you state that a dependency on external, third-party library X exists, and that any vulnerability in it could cause issues in your application. Also, that substantial upgrades to the library will affect maintainability if any interfaces are changed or deprecated.

        When someone throws a pile of libraries at a problem, that risk statement gets lengthy.

        Rewriting from scratch is not the best solution...

        • Wait, what?

          If you write code, part of the documentation before you start should be a "risks" statement, where you state that a dependency on external, third-party library X exists, and that any vulnerability in it could cause issues in your application. Also, that substantial upgrades to the library will affect maintainability if any interfaces are changed or deprecated.

          When someone throws a pile of libraries at a problem, that risk statement gets lengthy.

          Which is all well and good if you're doing greenfield development. It's not so good when you've inherited a codebase where none of this was done in the first place, and you're tasked to keep it going. As I said, the real world can be a messy place. In my case, the previous lead architect just threw immature libraries at every problem willy-nilly, at a time before I worked for the company. I get to inherit the problems this lack of foresight caused, and don't have the benefit of going backwards to fix it.

          • by hattig ( 47930 )

            Just maintain a visible Risk document with all the issues. Document the estimated fix time. Let it have visibility.

            Then when the shit hits the fan, you have your arse covered. Not that this will protect you against particularly nasty management scum...

            Using Apache libraries, or ${largeCompany} libraries is one thing. But random crap found on Github?

            All you can do is overestimate work, and use the time to kill off the libraries one by one.

      • What should be happening: when you're planning a new release, raise the component versions to the latest and run your test suite. If it passes, good job, release it.

        What is actually happening: the version numbers never get edited, because that version worked, and if you change it, OMG, it might stop working.

        Part of the problem I run into with this is that sometimes projects stick with old dependencies because at some point, some major version came along that significantly changed the organization of the API, in such a way that the latest component version can't just be dropped in but requires significant resources to refactor your code to use it. Getting management buy-in for that when there aren't any big customers breathing down their neck to get a flaw fixed can be nigh on impossible.

        I ran into this recently myself. During internal testing, I discovered a flaw in our product when accessing any of our web resources using an IPv6 destination IP in the URL (i.e.: a URL of the form http://[<IPv6 address>]:18080/). A quick bit of debugging showed that an external library we had been using for several years was doing some brain-dead parsing of the URL to pull out the port number; it was just doing a string split after the first colon it found, and presumed the rest was the port number.

        Modifying the Maven POM to use a newer version of the API in question was initially difficult because the project had since reorganized their own library structure, breaking things into multiple smaller JARs. Except that some of the functionality was actually _removed_, and isn't available at the latest API revision (functionality we had been using, naturally). Classes had moved around to different packages than where they were previously, and various interfaces appear to have been completely rewritten.

        Upgrading to a version of the library that actually fixed the flaw was going to be akin to opening Pandora's Box. Unfortunately, our former architect (from whom I inherited this code) was the type of guy who just liked to throw external libraries at every problem. In the end we had to document the fault for all current versions of the product, and now I'm trying to get management buy-in to do the work necessary to upgrade the library in question for the next version of our product.

        It might not be worth it - this scenario will no doubt play out again. You'll move to version X, and make thousands of code changes to make the move. In about five years your successor will find some bug in version X that is fixed in version X+5. Unfortunately, architectural changes would be once again required, much like now. It's a vicious cycle.

        Your best bet is to continue using the current library and simply not use its URL parsing functionality - write your own small function to do just that one thing.

    • if you don't test your code, you don't know if changes to it break it.

      And if you do test your code, you still don't know. But you'll catch some of them.

    • by RabidReindeer ( 2625839 ) on Tuesday June 16, 2015 @03:30PM (#49924369)

      This is somewhat deceptive. Sonatype supports Maven component archives.

      One of Maven's chief claims to fame is that when you build a project, it doesn't grab "the latest" versions of dependencies, it grabs the selected versions of dependencies. On the grounds that "If it ain't broke, don't fix it".

      This ensures a predictable product because everyone who does a build, no matter when, no matter where, will be pulling in the same resources to build with.

      The problem arises when one (or more) of those selected component versions turns out to have issues. The build ensures that the product will be consistent, and thus will pass its own tests, but as the old observation goes, testing cannot prove the absence of bugs, only their presence. So if there was a vulnerability, an old project's tests wouldn't see it. And because you're asking for a specific library release version, later fixes don't get automatically included (of course, neither do later breakages, but they ignored that aspect).

      In theory, then, this is simple to fix. Just update the project (POM) to pull in newer, better dependencies.

      And the NEXT version of Windows will fix all your problems, and I've got a very nice bridge in NYC for sale cheap.

      If you're working on a project, you generally have all you can do to keep up with issues in your own code, let alone some supposedly trustworthy third-party libraries. You cannot afford to be constantly updating the dependency versions, and even if you could, there's the issue of "dependency hell", where changing the version of Hibernate can conflict with the version of slf4j, which can conflict with junit, which can conflict with... I usually like to budget 2 or 3 DAYS when I'm ready to start upgrading dependencies.

      Sonatype doesn't get a pass here, though. If they/Maven supported a mechanism that could flag builds that have known weak dependencies, it would help a lot. Management, of course, would promptly command it to be turned off to ensure "productivity", but at least we'd have some help short of periodically manually auditing every library in a complex project (like that's ever going to happen).
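
      For what it's worth, tooling in this vein does exist as a build plugin rather than in Maven core. A hedged sketch using the OWASP dependency-check plugin, which can fail the build when a dependency matches a known CVE (plugin version illustrative):

        <plugin>
          <groupId>org.owasp</groupId>
          <artifactId>dependency-check-maven</artifactId>
          <version>1.2.11</version> <!-- illustrative version -->
          <configuration>
            <!-- fail the build if any dependency carries a CVE scored at or above CVSS 7 -->
            <failBuildOnCVSS>7</failBuildOnCVSS>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>check</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

      Management can still command it be turned off, of course, but at least the audit becomes a build step instead of a periodic manual chore.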

      • I wonder why there are so many articles busting programmers' balls (only guys are coders, apparently (wtf?!)) for not participating in code reuse.

        Hell, if I had to go through all of those hoops during the maintenance phase, which is most of the time spent on the project, I would reinvent the wheel every damned time.

        But, whatever.

      • by hattig ( 47930 )

        Ah, the good old Junit / Hamcrest / Mockito Maven Pom upgrade joy!

    • by Kjella ( 173770 )

      No, the problem is that you have no knowledge or control over the quality of work done on the component. There are lots of ways to subtly break it, through failing on a particular set of inputs or a particular sequence of events that is combinatoric in nature and essentially impossible to predict. For example, imagine you have functions a() through z() and it'll crash if you call c() after g() unless you've called v() in between. How can you unit test for that? You can't. What if the new component has a thread...

    • People don't like to test third party products, much less look inside them. The whole point is that it's like going to the store and buying a box of something from the shelf. To do any more than that is to admit that the adage of "never reinvent the wheel" is inappropriate to software.

      Partially it's a junior programmer mentality too. They do not understand that third party components frequently break; they've never encountered hardware that has bugs, compilers that have bugs, libraries that have bugs...

    • Sorry, but no, it's not that simple. Lots of vulnerabilities come into a project because of dependencies that are poorly managed. Project A depends upon project B which in turn depends upon project C and C has the vuln. All the unit testing of A in the world will not turn up that vuln. That requires system testing and that's a lot more involved.

    • by hattig ( 47930 )

      My previous employer was of this mindset. Even with in-house dependencies. Nothing was ever updated, out of fear.

      A horrible environment to work in, of course. Every few months a component upgrade would be required, and because it was 100 releases out of date, the upgrade was a horrible, horrible experience.

      And they had generally reasonable test suites too. It was pure fear of downtime because of the monolithic architecture of the application.

  • by gstoddart ( 321705 ) on Tuesday June 16, 2015 @11:53AM (#49922365) Homepage

    I'm betting if you have a large enough pool of open source things, which depend on other open source things, then the bugs in the dependencies will trickle up to the projects which rely on them.

    Though, admittedly, Java has also made this more annoying -- a decade or so ago when I was actively working on a Java project, it always amazed me how a new version of Java could completely break everything and then you'd have to re-test and re-certify everything.

    It got to the point we put in very large bold characters in our release notes ... we work on this version of Java, if you get clever and introduce your own version of Java, we won't talk to you until you confirm the bug in the version we support.

    A surprising number of clients were willing to blaze trail with whatever version of Java came along, and then kept expecting we'd be supporting custom versions from vendors or features which didn't exist when our version was built.

    Eventually we learned to dread a new release of Java. Because invariably things went to hell and stopped working.

    • Re: (Score:3, Insightful)

      by Anonymous Coward
      The last major release that "broke" things for me was the 1.4.2 -> 5 transition in 2004. Since then (5 -> 6, 6 -> 7, and 7 -> 8) things have been relatively painless. If you were relying on an undocumented feature, or compiling against com.sun.* or sun.* classes, you did so at your own risk. If you stuck to the documented JDK, you were usually ok.
      • I agree that 4 -> 5 was difficult, 5 -> 6 and 6 -> 7 were easy, but 7 -> 8 is difficult again. Mostly due to app server containers like Tomcat and JBoss -- specifically, the JSP compiling part needs a lot of love for Java 8 in servlet containers.

    • I've only ever seen instances where new versions of Java broke things by removing deprecated components like JINI, but I generally tend to stick to OpenJDK for everything - as the "official" Java (the benchmark for certification) maybe it has less "clever" in it than the others.

      Clients may be keen to move onto newer versions of Java because of the immense litany of security defects that get listed by Oracle when they release a new version, and because of their apparent enthusiasm for end-of-lining support...

      • by Gr8Apes ( 679165 )
        The litany of security defects is largely edge cases in portions of the libraries most don't use, or browser-based (i.e., applets), which don't concern 99% of Java devs (I wish I could say 100%, but somewhere, some idiot is still writing applets). The core has been relatively stable since J5.
        • by KGIII ( 973947 )

          I have done some work in Java. I am proud to say that I am not in the 1%! I have never made a Java applet. I cannot think of any reason to do so. It *might* have been viable in the past, as there may have been little choice otherwise (Flash? What was that one from Microsoft, ActiveX?), but I cannot think of any reason to do so now, and I am having a hard time making a case for having done so in the past.

          • by hattig ( 47930 )

            It was viable in 1998! That was when I last wrote an applet (a tetris game to run on a 40MHz ARM-based Set Top Box from Acorn that never saw the light of day).

            No, no, wait, I did write a rotating 3D globe applet in 2006 for a laugh.

            Applets were a bad idea, are a bad idea, and should be dead. Luckily browser security policies are making maintaining in-house applets non-viable so they can move to better installation mechanisms now.

          • by Gr8Apes ( 679165 )
            I wrote a couple as a test and quickly abandoned those, as well as some desktop apps, painful as they were, way back somewhere between '99 and the early 2000s. The 1% (hopefully much, much less) is in reference to today's devs. No dev today should be writing an applet. In fact, I'd be perfectly happy if Oracle removed the applet code and browser integration "capability" (used very loosely) completely.
    • by Daniel Hoffmann ( 2902427 ) on Tuesday June 16, 2015 @12:24PM (#49922605)

      Yeah, this is a common problem on pretty much all platforms; what makes Java stand out is that too many Java things are actually specifications, not implementations. It kinda mixes all the headaches of conventional development (DLL hell, outdated libraries, testing against multiple hardware/OSes) with the headaches of developing for browsers (multiple implementations of the same specifications). One of the things that makes people like Spring is that, unlike J2EE, Spring is an implementation, not a specification, so it usually keeps working if you change your application server, for example. Well, some parts of Spring rely on Servlets, which is a specification, but Servlet implementations are OK (although Java 6 does not support Servlets 3.0, which is a pain in the ass if your client is on Java 6 and refuses to update).

    • by TheCarp ( 96830 )

      > It got to the point we put in very large bold characters in our release notes ... we work on this version of Java, if
      > you get clever and introduce your own version of Java, we won't talk to you until you confirm the bug in the
      > version we support.

      It gets really fun when open source folks do this. I actually had this conversation recently:

      "Have you tried the latest version? That module has been updated since the version you are using"
      "No, but I am looking at your code on github now, line X would...

    • It got to the point we put in very large bold characters in our release notes ... we work on this version of Java, if you get clever and introduce your own version of Java, we won't talk to you until you confirm the bug in the version we support.

      Which is how we ended up with the management nightmare of different hardware requiring different and incompatible versions of Java for the "Web Client" to manage it. So, one workstation to manage Cisco. One workstation to manage EMC. One for HP. One for the phone system and a different one for the voicemail... And hope to God no one clicks "Update" on the popup before reading it!

    • It got to the point we put in very large bold characters in our release notes ... we work on this version of Java, if you get clever and introduce your own version of Java, we won't talk to you until you confirm the bug in the version we support.

      It's really not just you. Every enterprise-class piece of Java software I've ever installed came with its own copy of a specific version of the JVM redistributable, and required that you install it.

      • I haven't directly touched Java in years ... but one of the things which struck me was that it just seemed too damned brittle.

        What should have been core APIs for published interfaces would suddenly change in the number of parameters between versions, or not be there at all, or return something new.

        They'd "fix" something by simply deprecating it/removing it.

        It felt very much like a young language which was constantly shifting under your feet, constantly calling for a do-over, and often breaking backwards compatibility...

    • > (1) Is this unique to Java?
      > (2) I'm betting if you have a large enough pool of open source things, which depend on other open source things . . .

      You answer your own question. It may be unique to Java because Java has an absolute embarrassment of open source riches. Some of them have been around for a long time. Bad management leads to big projects not getting upgraded. Not just libraries, but even the Java runtimes that they run on. Just look at how many developers on reddit complain about...
  • by QuietLagoon ( 813062 ) on Tuesday June 16, 2015 @11:56AM (#49922387)
    It wouldn't surprise me if similar audits found the same level of vulns in component libraries for other development environments.

    Java developers are no different than other developers.

  • by sunderland56 ( 621843 ) on Tuesday June 16, 2015 @12:03PM (#49922459)
    One basic problem seems to be that repositories are providing downloads of known vulnerable components.

    Once a bit of software has a known vulnerability, it should *immediately* be deleted from all repositories. Responsible developers will post a fix in a timely manner; hacks will wait weeks/months/years to update. Eventually people will move away from the badly written bits of software - because they aren't available. Problem solved.
    • Delete stuff from the Internet... Hmmm... Sounds like a wonderful idea. How?

      Actually it is a terrible idea, even if it could work, because looking at how the code progressed is how you learn. Not to mention that I can patch an old version to fix the vulnerability, but not have to move to the new and incompatible version.
    • by ERJ ( 600451 )
      You don't know how the software is being used. Maybe it is Apache's commons-net, which has a vulnerability in the FTP client while my software only uses the SMTP client. Maybe the next revision up has API changes that break compatibility.
      In an ideal world everything would be kept up to date, but time is a finite resource, and if there is not a compelling reason to update, it seems silly to waste time on it.
      • by Anonymous Coward

        You don't know how the software is being used. Maybe it is Apache's commons-net, which has a vulnerability in the FTP client while my software only uses the SMTP client. Maybe the next revision up has API changes that break compatibility.

        +1.

        The OP is advocating for the modern equivalent of book burning.

  • by ErichTheRed ( 39327 ) on Tuesday June 16, 2015 @12:03PM (#49922463)

    This basically defines some of the problems of "enterprisey" software:
    - It's composed of a million glued-together libraries.
    - It's written by chronically understaffed/overworked IT department employees.
    - Rigorous testing either (a) doesn't exist, (b) is so onerous that most developers try to avoid it, or (c) is outsourced/offshored to the lowest bidder, and therefore isn't completed without the staff basically doing the tests for the outsourcer.
    - Anything that breaks it is avoided at all costs because of all of the above.

    By extension, this is why some companies are stuck running IE 6 for key applications, or Office 97 because rewriting the scary mess of macros that runs a process isn't something anyone wants to do. I do systems integration work, and new versions of Java, web browsers, etc. are miserable. They introduce bugs small enough to be annoyances (rendering problems, etc.) and big enough to break the entire system.

    The key to fixing this is for the software architects to require that developers move up to at least a semi-modern release of their key libraries, test everything against them, and remove the old outdated ones once all the bugs are fixed. The problem is that this is never done.

    • by JustNiz ( 692889 ) on Tuesday June 16, 2015 @12:23PM (#49922603)

      >> The problem is that this is never done.

      The reason is that many Software Director positions are now filled with technically clueless people who are basically salesmen rather than engineers.
      They have no comprehension of the concept of technical debt, or of the need to spend time on activities that don't directly translate into new features.
      The net result is that you're always just piling more crap onto the top of a steaming turd pile, making it worse, instead of working to replace the shit.

      • by tyme ( 6621 )

        >> The problem is that this is never done.

        > The reason is that many Software Director positions are now filled with technically clueless people

        Now?

    • Actually, this is a problem with ALL software. Most programmers are not experts on security or how to write software that is secure. Libraries just exacerbate the problem, because even if the code is 100% unit tested, it doesn't mean it's safe, and a lot of these libraries are huge. There's simply no way to know how secure they are. That said, the companies I have worked for run scanners against apps to test them for vulnerabilities. Just because a library has a vulnerability, it doesn't mean that vulnerability...

  • Accept the fact (Score:4, Interesting)

    by cloud.pt ( 3412475 ) on Tuesday June 16, 2015 @12:07PM (#49922497)

    It's about time everyone stops whining. There are things in life where you're better safe than sorry, but then there are things in life you just can't change: not every single entity can keep maintaining what they create. Human beings are limited, and so are human organizations - they lack the money, the workforce, or simply the patience to put up with some "critical flaws" that are just too rooted in bad design to be solved without a restructuring.

    THAT IS THE REAL FLAW.

    There are good ways and bad ways to create reusable components. Black-boxing (containing) everything, for starters (sans the closed-source part), is something people tend to limit to the scope of testing and/or to services outside a fully-fledged system's component border. Technologies like SOA are just one of many ways to plug & play every new piece of technology that performs a very specific task in a different way from a previously flawed one. Think project Ara. It's not only fun to develop like this (although some have problems conceptualizing it), but it's also more robust in the long run. Using such paradigms is what we, as the "clients" of such "aging and flawed" components, can do to push better development of individual components.

    Now, each and every component developer has to find ways to keep their work atomic, so as to not conflict with the principles of the technologies they are developed to work for. This might all seem like a utopian view of what to expect of the coding community, but then again we are also still looking for the best ways to apply near-perfect political views designed hundreds of years ago, which are yet to achieve full potential. I keep my hopes up for both issues, but my expectations low.

    • What's the flaw? That an organization is not earning enough cash flow or pathos to fix critical flaws in their product?

      Or that people choose reusable components poorly?

      Or that humans are humans? I suppose that kind of is a flaw, unless you assume from the start that that's kind of true, but if you don't then isn't the flaw on the ignorant?

      Or that you are mixing some sort of political statement with asking people to stop whining? Because no matter your politics, someone is either going to whine or feel so...

      • Neither of your options, so I guess you missed the point and decided to look cool with a very subtle...

        Why don't you collect your thoughts and try again?

        The "REAL FLAW" is the conjuncture. The lack of determination flaw is intrinsic to society and nobody can change it. It applies to both humans and their collective forms. I subliminally mention a lot other flaws. One of them is most dev in the community has this general idea that everything should be future-proof, even stuff they don't code, yet stoically and lightheartedly include in their code as if it w

  • It is important to note the following:

    Sonatype has determined that over 6 percent of the download requests from the Central Repository in 2014 were for component versions that included known vulnerabilities.

    That means that when building a project, the devs are pulling an older version of a dependency rather than a newer, fixed version. You don't pull your own artifacts from Sonatype, just dependencies.

    Yes, this can mean there's a bug that might be exposed to end users, but frequently a dependency is just a dependency used by the developer's code. Sure, there could be a transitive vulnerability, but I don't think vulnerabilities will be that transparent, but that depends upon the nature...

  • It doesn't matter what language is used. Developers don't upgrade because a PHB doesn't like the risk-benefit ratio.
  • by Captain Damnit ( 105224 ) on Tuesday June 16, 2015 @12:28PM (#49922639)

    It should be noted that the company releasing this report, Sonatype, markets a product called Insight Application Health Check that scans your binaries for libraries with known vulnerabilities.

    I have never used their service, and can offer no comments on its utility or value. However, it is a bit unseemly that TFA doesn't mention that the source of their information about this very real problem also sells a service that solves it. This is a knock on IT World, not Sonatype.

  • If you want software to last more than one season, then rely on old-school "plain jane" HTML for most of your UI with a little JavaScript only where absolutely needed.

    Plain-jane HTML is not glamorous or fancy, but it has been supported and will be supported for a good while. If your org wants fancy, it has to pay the Fancy Tax.

    If you have to repaint the entire screen for an activity, so be it. If it's a relatively small-user-base app, the overall bandwidth overhead is not really bigger than downloading fat...

  • Blame Maven (Score:5, Insightful)

    by _xeno_ ( 155264 ) on Tuesday June 16, 2015 @01:02PM (#49922925) Homepage Journal

    This is a problem that Maven has created, mostly.

    What the summary doesn't mention is that "large repository of open source software" is a Maven repository. Maven allows you to specify dependencies for your Java project.

    The problem is that you have to specify a specific version of whatever you use. So let's say you use OpenFoo 1.1 and that at the time you write your code, the latest version of OpenFoo is 1.1.3.

    Now assume a horrible vulnerability is discovered in OpenFoo 1.1.3, so they release OpenFoo 1.1.4 to fix it. Well, your Maven POM says you require OpenFoo 1.1.3, so until you go in and manually change that, you will only ever use 1.1.3. There is - by design - no way to say "I want the latest 1.1 version." You can only describe a single, specific version.

    So it's no surprise that Sonatype will see a ton of old Maven projects continuing to download outdated Maven artifacts. There's no way to say "I want the latest version of a specific branch"; you can only specify a single version. Which means that a project that hasn't changed in years will still pull in the old versions of the libraries, even if it would work with the later versions.
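
    Concretely, sticking with the comment's hypothetical OpenFoo (coordinates invented for illustration), the pin in the POM looks like this, and nothing ever moves it to 1.1.4 unless a human edits the file:

      <dependency>
        <groupId>org.openfoo</groupId>    <!-- hypothetical coordinates -->
        <artifactId>openfoo</artifactId>
        <version>1.1.3</version>          <!-- stays 1.1.3 even after 1.1.4 fixes the vulnerability -->
      </dependency>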

    • Re: (Score:3, Informative)

      by Anonymous Coward

      There's no way to say "I want the latest version of a specific branch"; you can only specify a single version. Which means that a project that hasn't changed in years will still pull in the old versions of the libraries, even if it would work with the later versions.

      No, Maven does support version ranges: You can say stuff like this: <version>[1.0.0,2.0.0)</version>. Here's a pretty good thread [stackoverflow.com] on the subject from StackOverflow.

      To be fair, I don't think that most projects do this, but at least it's supported. Also, I'd guess that an analysis of a lot of the other artifact repositories like PyPI, Bower, or npm would produce similar results.
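
      In context, a range for the hypothetical OpenFoo above would look like the following; Maven then resolves the newest available version inside the range at build time:

        <dependency>
          <groupId>org.openfoo</groupId>
          <artifactId>openfoo</artifactId>
          <!-- any 1.1.x from 1.1.0 inclusive up to, but excluding, 1.2.0 -->
          <version>[1.1.0,1.2.0)</version>
        </dependency>

      The trade-off is the reproducibility praised upthread: two builds run at different times can resolve different versions, which is exactly what pinning was meant to prevent.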

      • And the comments on that answer say that the "LATEST/RELEASE features" are deprecated or no longer supported, although the links they give to back that statement up are broken.
    • That sounds like the most useful way of doing things. If I haven't built and tested my codebase against a specific library version, how can I assert that my codebase works properly with that specific library version?

      The counterargument holds that this should never happen as long as people use semantic versioning properly, but that's no more realistic than expecting people to release bug-free libraries that never need to be upgraded.

  • by nickweller ( 4108905 ) on Tuesday June 16, 2015 @05:08PM (#49925129)
    April 2013: "Sonatype's annual survey of 3,500 software developers and shows struggle in setting corporate policy on open source and enforcing it" ref [networkworld.com]

    April 2013: "Control and security of corporate open source projects proves difficult | New Sonatype survey finds 80 percent of most Java applications comes from open source" ref [javaworld.com]

    Nov 2014: "Software developers use a large number of open-source components, often oblivious to the security risks they introduce or the vulnerabilities that are later discovered in them." ref [pcworld.com]

    April 2015: "open-source also represents a vast, unpatched quagmire of cyber-risk that’s putting public safety at grave risk. That’s the assessment of Joshua Corman, CTO at Sonatype" ref [infosecuri...gazine.com]
    • Are you refuting any of these claims?

      #1: Corporate policy on open source is extremely difficult, mostly because the legal teams who have to approve it aren't familiar with the idea, terms, language, or really any part of it. "This opens us to risk," they say, and the initiative is killed.

      #2:

        When asked about how well their organizations control which open-source components are used in software development projects, 24 percent did say, "We're completely locked down: We can only use approved components." How...

    • by KGIII ( 973947 )

      Their business model pretty much relies on open source. Why, pray tell, do you think that this is FUD and what would be their motive for doing so? Why would they want to harm their bottom line?

  • It's better in that just because a component has a vuln doesn't mean that vuln is exploitable in all situations. Unfortunately, people are TERRIBLE at determining if a vulnerability is potentially exploitable or not.

    It's worse in that the data in the NVD is often wrong and has lots of missing versions. For example, CVE-2013-5960 says "The ... in the OWASP Enterprise Security API (ESAPI) for Java 2.x before 2.1.1" and it lists the affected versions only as 2.0.1. The description is wrong (the issue was f...

  • And this is why you use the latest and greatest and keep your interfaces updated when updating dependencies. It has nothing to do with FOSS.
