OpenSSL: the New Face of Technology Monoculture

chicksdaddy writes: "In a now-famous 2003 essay, 'Cyberinsecurity: The Cost of Monopoly,' Dr. Dan Geer argued, persuasively, that Microsoft's operating system monopoly constituted a grave risk to the security of the United States, and to international security as well. It was in the interest of the U.S. government and others to break Redmond's monopoly, or at least to lessen Microsoft's ability to 'lock in' customers and limit choice. The essay cost Geer his job at the security consulting firm AtStake, which then counted Microsoft as a major customer. These days Geer is the Chief Security Officer at In-Q-Tel, the CIA's venture capital arm, but he is no less vigilant about the dangers of software monocultures. In a post at the Lawfare blog, Geer is again warning about the dangers that come from over-reliance on common platforms and code. His concern this time isn't proprietary software managed by Redmond, however; it's common, oft-reused hardware and software packages like the OpenSSL software at the heart (pun intended) of Heartbleed. 'The critical infrastructure's monoculture question was once centered on Microsoft Windows,' he writes. 'No more. The critical infrastructure's monoculture problem, and hence its exposure to common mode risk, is now small devices and the chips which run them.'"
  • Is anyone surprised? (Score:5, Informative)

    by TWX ( 665546 ) on Wednesday April 23, 2014 @05:38PM (#46828421)
We've already established that corporations often use free software because of the cost, not because they're enthusiasts, and that those who are enthusiasts for a given project are often interested only in that project, not in the other projects that support it.

Besides, it's disingenuous to claim that no one knew there were potential problems; the OpenBSD people were not exactly quiet about their complaints about OpenSSL. Of course, rather than being considered on their merits, their complaints were ignored until it all blew wide open.
    • Re: (Score:1, Funny)

      by Anonymous Coward

We've already established that corporations often use free software because of the cost, not because they're enthusiasts, and that those who are enthusiasts for a given project are often interested only in that project, not in the other projects that support it.

Besides, it's disingenuous to claim that no one knew there were potential problems; the OpenBSD people were not exactly quiet about their complaints about OpenSSL. Of course, rather than being considered on their merits, their complaints were ignored until it all blew wide open.

      B-b-b-b-but the many eyes of open source makes all bugs shallow.

      • by raymorris ( 2726007 ) on Wednesday April 23, 2014 @07:06PM (#46829003) Journal

        I guess you're not a programmer, and therefore don't know what a shallow bug is. Conveniently, the rest of the sentence you alluded to explains the term:

        "Given enough eyeballs, all bugs are shallow ... the fix will be obvious to someone."

        If you have to dig deep into the code to figure out what's causing the problem and how to fix it, that's a deep bug. A bug that doesn't require digging is shallow. Heartbleed was fixed in minutes or hours after the symptom was noticed - a very shallow bug indeed. "The fix will be obvious to someone."
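For the curious, here is the shape of the bug and of the fix - a simplified, hypothetical sketch, not the literal OpenSSL source (the function and variable names are made up). The peer claims a payload length, the broken code trusted that claim when echoing the payload back, and the fix was essentially a single bounds check:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical, simplified sketch of the Heartbleed pattern. A heartbeat
     * message is: 1 byte type, 2 bytes claimed payload length, the payload,
     * then at least 16 bytes of padding (RFC 6520). */
    int handle_heartbeat(const uint8_t *rec, size_t rec_len,
                         uint8_t **resp, size_t *resp_len)
    {
        if (rec_len < 3)
            return 0;
        uint16_t payload = (uint16_t)((rec[1] << 8) | rec[2]);

        /* The fix: drop the message if the claimed payload doesn't fit in
         * what was actually received. Without this check, the memcpy below
         * reads past the record into adjacent heap memory and echoes that
         * memory back to the peer. */
        if ((size_t)1 + 2 + payload + 16 > rec_len)
            return 0;                         /* silently discard (RFC 6520) */

        *resp_len = 3 + (size_t)payload + 16;
        *resp = malloc(*resp_len);
        if (*resp == NULL)
            return 0;
        (*resp)[0] = 2;                       /* heartbeat_response type */
        (*resp)[1] = (uint8_t)(payload >> 8); /* echo the claimed length... */
        (*resp)[2] = (uint8_t)(payload & 0xff);
        memcpy(*resp + 3, rec + 3, payload);  /* ...and the payload, safely */
        memset(*resp + 3 + payload, 0, 16);   /* padding, zeroed for brevity */
        return 1;
    }

Once the symptom was known, the missing check was obvious - which is exactly what "shallow" means.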

The presence or absence of bugs is an orthogonal question. That's closely correlated with the code review and testing process - how many people have to examine and sign off on the code before it's committed, and whether there is a full suite of automated unit tests.

The proprietary code I write is only seen by me. Some GPL code I write also doesn't get proper peer review, but most of it is reviewed by at least three people, and often several others look at it and comment.

For Moodle, for example, I post code I'm happy with, along with unit tests which exercise the possible inputs and verify that each function does its job. Then anyone interested in the topic looks at my code and comments, typically 2-4 people. I post revisions, and after no one has any complaints it enters official peer review. At that stage, a designated programmer familiar with that section of the code examines it, suggests changes, and eventually signs off when we're both satisfied that it's correct. Then it goes to the tester, and after that, the integration team.

Moodle doesn't get very many new bugs because of this quality control process. That's independent of how easily bugs are fixed - how shallow they are - which depends on how many people are trying to fix the bug.
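The kind of unit test I mean is nothing exotic. Here's a minimal, hypothetical example (in C for illustration - Moodle itself is PHP): each assertion pins one class of input to the function's contract, and the whole suite runs before review, testing, and integration:

    #include <assert.h>

    /* Toy function under test: clamp a value into [lo, hi]. */
    static int clamp(int v, int lo, int hi)
    {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    int main(void)
    {
        assert(clamp(5, 0, 10) == 5);    /* in range: unchanged */
        assert(clamp(-3, 0, 10) == 0);   /* below range: clamped up */
        assert(clamp(42, 0, 10) == 10);  /* above range: clamped down */
        assert(clamp(0, 0, 10) == 0);    /* boundary values stay put */
        assert(clamp(10, 0, 10) == 10);
        return 0;
    }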

        • I have a couple problems with the implication that "short time to find/fix" is so acceptable.

          1. Some amount of damage was done (and no one really knows for sure) through this bug. A fix was identified rapidly after the bug was -discovered-, but that's a long time after the bug was -introduced-.

2. For some systems, particularly those like SCADA systems where we really have deep information assurance concerns, patching software is not easy! Not everything can use "grab the patched source, rebuild and reinstall".

          • I agree quality code is important. I'm glad software architecture is now recognized as an engineering discipline, so you can choose to have a qualified Professional Engineer lead or review a software project.

            All of which is largely a separate issue from the observation that with enough people looking at a problem, the solution will be shallow - obvious to someone.

      • by TheRaven64 ( 641858 ) on Thursday April 24, 2014 @03:43AM (#46830969) Journal
OpenSSL is quite shockingly bad code. We often use it as a test case for analysis tools, because if you can trace the execution flow in OpenSSL well enough to do something useful, then you can do pretty much anything. Everything is accessed via so many layers of indirection that it's almost impossible to statically work out what the code flow is. It also uses a crazy tri-state return pattern, where (I think - I've possibly misremembered the exact mapping) a positive value indicates success, zero indicates failure, and negative indicates unusual failure, so people often check == 0 for error and are then vulnerable. The core APIs provide the building blocks of common tasks, but no high-level abstractions of the things that people actually want to do, so anyone using the library directly is likely to have problems (e.g. it doesn't do certificate verification automatically).
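To make the tri-state hazard concrete, here's a hypothetical sketch - not a real OpenSSL API, and the mapping is as I described it above, which I may have misremembered:

    #include <stdio.h>

    /* Hypothetical function using the tri-state convention: positive means
     * success, zero means clean failure, negative means unusual failure. */
    static int verify_peer(int outcome)
    {
        if (outcome > 0)  return 1;   /* verified OK */
        if (outcome == 0) return 0;   /* verification failed cleanly */
        return -1;                    /* couldn't verify at all */
    }

    int main(void)
    {
        int ret = verify_peer(-1);

        /* BUG: "== 0" only catches the clean-failure case, so the negative
         * "unusual failure" falls through and is treated as success. */
        if (ret == 0)
            printf("rejected\n");
        else
            printf("accepted\n");     /* this branch runs for ret == -1 (!) */

        /* Safer: anything other than explicit success is a failure. */
        if (ret != 1)
            printf("rejected (correct check)\n");
        return 0;
    }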

        The API is widely cited in API security papers as an example of something that could have been intentionally designed to cause users to introduce vulnerabilities. The problem is that the core crypto routines are well written and audited and no one wants to rewrite them, because the odds of getting them wrong are very high. The real need is to rip them out and put them in a new library with a new API. Apple did this with CommonCrypto and the new wrapper framework whose name escapes me (it integrates nicely with libdispatch), but unfortunately they managed to add some of their own bugs...

        • by Opportunist ( 166417 ) on Thursday April 24, 2014 @05:05AM (#46831169)

OpenSSL is a great example of what I dubbed "Monkey Island cannibal security" in my talks (yes, believe it or not, you can actually entertain and inform managers that way - you'd be surprised how many played MI, and even if they didn't, it's at least something they can understand). The Monkey Island spiel is a perfect example of a whole class of security blunder: one point in the system gets improved over and over, because everyone thinks it's the only point that could fail, while the rest of the security system gets neglected even though the actual problem is obviously there.

For those who don't know MI (or who forgot): there is a moment in Monkey Island where the cannibals catch your character and lock him up in a hut. You can escape the hut via a loose panel in the wall. Every time the cannibals catch you again, the door of the hut gets more elaborate and secure, to the point where that bamboo hut ends up with a code-locked, reinforced steel door befitting a high-security vault. Which, of course, has no effect on your chances of escaping, since you never pass through that door (at least not on your way out).

The point is that the cannibals, much like a lot of security managers, only look at a single point in their security system and immediately assume that, since this is their way of entering the hut, it must also be the point where you escape. Likewise, the focus in auditing OpenSSL has always been on the crypto routines, and you may assume with good reason that those are among the most audited pieces of code in existence.

          Sadly, the "hut" around it is less well audited and tested. And that's where the problems reside.

      • by Bengie ( 1121981 )

        B-b-b-b-but the many eyes of open source makes all bugs shallow.

Everyone who attempted to read OpenSSL quickly lost their ability to see and gouged out their eyes from the pain. OpenSSL is what you call obfuscated code.

    • by Anonymous Coward

      But the rest of us do!

It's a silly argument. Put your eggs in one basket... then guard the basket. 2-3 full-time developers doesn't cut it when there are so many attackers and the motivation is much greater than bragging rights at DEF CON.

    • by Xylantiel ( 177496 ) on Wednesday April 23, 2014 @09:16PM (#46829699)
      I would say it wasn't just OpenBSD either -- it appears that everyone was very reluctant to update from 0.9 to newer versions. This tells me that people knew the development practices weren't up to snuff. It's just too bad that it took such a major exploit to kick everyone in the head and get them to put proper development practices in place for OpenSSL. Many eyes don't work if everyone is intentionally holding their nose and looking the other way.
      • > it appears that everyone is reluctant to updates anything, ever

        Fixed That For You.

You don't touch core production libraries for stable code if you don't have to. And new features, enhancements, or portability work often hurt the size and performance of otherwise stable code.

Well, I would say that is just evidence of the problem. If an update adversely impacts stability that badly, then updates are not being managed and tested properly, which is exactly the problem with OpenSSL. This also brings up another point - a lot of the stability problems are due to interaction with various other (broken or oddly-functioning) SSL implementations. The correct way to handle that is with rigorous and extensive test cases, not just closing your eyes and not updating.
> The correct way to handle that is with rigorous and extensive test cases, not just closing your eyes and not updating.

Test labs and test time are quite expensive. Replicating the exact combination of tools you use in production, _with all possible current and new releases of all components_, is a task that quickly grows in combinatorial fashion (five components with four versions each is already 4^5 = 1024 configurations). This is not an SSL-specific problem; it is a general software problem. A monoculture for development actually _aids_ this by following consistent APIs a

  • by Anonymous Coward

Please see this related article: http://www.networkworld.com/weblogs/security/003879.html

  • OSS vs Reality (Score:5, Insightful)

    by Ralph Wiggam ( 22354 ) on Wednesday April 23, 2014 @05:41PM (#46828447) Homepage

In theory (the way OSS evangelists tell it), as a software package gets more popular, it gets reviewed by more and more people of greater and greater competency. The number of people using OSS packages has exploded in the past 10 years, but the number of people writing and reviewing the code involved doesn't seem to have changed much.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      We're reactive, not proactive - why look for problems if the software is already working?

This is why we missed Heartbleed: there's no compelling reason to keep working once the product gets a green light, and there never will be. The problem has no solution that doesn't involve throwing money at something that will never have a payoff... so we won't ever do it. People don't do things unless there's an observable negative to *not* doing them.

      • Re:OSS vs Reality (Score:5, Insightful)

        by Ralph Wiggam ( 22354 ) on Wednesday April 23, 2014 @06:33PM (#46828801) Homepage

        That is the reality of the situation. In the fantasy land of OSS evangelists, thousands of highly skilled coders are constantly auditing big OSS projects.

        • "In the fantasy land of OSS evangelists, thousands of highly skilled coders are constantly auditing big OSS projects."

You do know what a strawman argument is, right?

But now, for a reality check: this bug, while serious, affected maybe a few thousand out of millions of users, and once discovered it was fully disclosed, audited, peer reviewed and patched *because* it was in an open source environment.

          Now, please, tell me you can say the same about other closed source products.

          • by Anonymous Coward

Apple's GOTO FAIL, whilst not patched universally, was dealt with extremely quickly on Apple's primary product lines.

I would actually equate the handling of the two problems, though one is closed source and the other open. (Heartbleed, after all, wasn't made public for several weeks.)

            "Now, please, tell me you can say the same about other closed source products."

            Yes, we can.

The reality is that the pool of people competent and knowledgeable enough to review such code from a security perspective is very small; increasing the user base does little or nothing to increase how many people are reviewing the code. This particular bug could have been caught by anyone with even a passing knowledge of security, but most developers wouldn't even know where to start in reviewing such code.
    • by MobyDisk ( 75490 )

In theory (the way OSS evangelists tell it), as a software package gets more popular, it gets reviewed by more and more people of greater and greater competency.

      Interesting.

In reality, as a software package gets more popular, confidence in it increases. While there is some validity to this, oftentimes the confidence increases disproportionately to the code paths actually being exercised. My employer has a mixed C/C++ library that has been in use in a dozen products, almost unchanged, for 10+ years. Within a week of porting it to a new C++ compiler I found a buffer overwrite. Nobody believed me. I had to show them the code, build it with multiple compilers, di

  • Apples and oranges (Score:5, Insightful)

    by Grishnakh ( 216268 ) on Wednesday April 23, 2014 @05:43PM (#46828469)

    With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. When something bad is discovered, people jump on it immediately and come up with a fix, which is deployed very very quickly (and free of charge, I might add). How fast was a fix available for Heartbleed? Further, people will go to greater lengths to make sure it doesn't happen again. Look at the recent efforts to rewrite OpenSSL, and the fork that was created from it.

    None of this happens with proprietary software. First off, the vendor always tries to deny the problem or cover it up. If and when they do fix it, it may or may not be really fixed. You don't know, because it's all closed-source. It might be a half-ass fix, or it might have a different backdoor inserted, as was recently revealed with Netgear. What if you think the fix is poor? Can you fork it and make your own that's better? No, because you can't fork closed-source software (and certainly not selected libraries inside a larger closed-source software package; they're monolithic). But the LibreSSL guys did just that in the Heartbleed case.

    Finally, monocultures aren't all that common in open-source software anyway; they only happen when everyone generally agrees on something and/or likes something well enough to not bother with forks or alternatives. Even the vaunted Linux kernel isn't a monoculture, as there's still lots of people using the *BSD kernels/OSes (though granted, there's far more installations of the Linux kernel than the *BSDs).

Monocultures are a natural outgrowth of the need for interop between orgs. Standards form because it's easy to confirm they will work, easy to find employees/volunteers who can use them, they solve a problem well, and the opportunity cost of evaluating alternatives will likely exceed any incremental improvement those alternatives offer. I agree FOSS is fantastic for turnaround of fixes and for being able to confirm the quality of the fix. Closed source can solve the problem, but you might never know.

      I think this calls for more mon

    • With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. When something bad is discovered, people jump on it immediately and come up with a fix

      The reason (a reason) monoculture is still bad with open source is that we don't know when this exploit was discovered. It may have been discovered long before, by malevolent entities, who didn't reveal it because they were exploiting it.

    • With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. When something bad is discovered, people jump on it immediately and come up with a fix, which is deployed very very quickly (and free of charge, I might add). How fast was a fix available for Heartbleed? Further, people will go to greater lengths to make sure it doesn't happen again. Look at the recent efforts to rewrite OpenSSL, and the fork that was created from it.

      "It" in "it doesn't happen again" being "a monoculture"? If you have a monoculture, a fork destroys it unless a new monoculture forms from the fork (i.e., if the forked-from project loses most of its market share).

      • Not really. A fork isn't a completely different product; the two forks share a codebase, which is why the word "fork" is used instead of "rewrite". How much of a monoculture there is depends on how divergent the forks are. Iceweasel and Firefox, for instance, barely diverge at all, whereas X.org and XFree86 are very different at this point (but still not completely different, the core X code is still mostly the same I'm sure).

    • With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. ... How fast was a fix available for Heartbleed?

Heartbleed showed that a monoculture, particularly one relying on poorly written and barely reviewed code, is a bad thing - OSS or not. That the source code was fixed so easily just highlights how the heartbeat feature was never properly reviewed or tested, and how the people using OpenSSL or incorporating it into their products never questioned it. The many-eyes argument fails when you realize how few qualified programmers actually looked at the code. Given how widespread OpenSSL is, getting that fix rolled

    • by plover ( 150551 ) on Thursday April 24, 2014 @12:49AM (#46830465) Homepage Journal

      I think the bigger problem is that everything about encryption software encourages a monoculture. Anyone who understands security will tell you "don't roll your own encryption code, you risk making a mistake." I would still rather have OpenSSL than Joe Schmoe's Encryption Library, simply because at this time I trust them a bit more. Just not as much as I did.

      Another problem is that the "jump on it and fix it" approach is fine for servers and workstations. It's not so fine for embedded devices that can't easily be updated. I'm thinking door locks, motor controllers, alarm panels, car keys, etc. Look at all the furor over the hotel card key system a few years back, when some guy published "how to make an Arduino open any hotel door in the world in 0.23 seconds". Fixing those required replacing the circuit boards - how many broke hotels could afford to fix them, or even bothered to?

The existence of a "reference implementation" of a security module means that any engineer would be seriously questioned for using anything else, and that leads to monoculture. And in that world, proprietary vs. open doesn't matter nearly as much as "embedded" vs. "network-updatable".

The problems with OpenSSL aren't actually in the crypto parts. libcrypto is pretty solid, although the APIs could do with a bit of work. The real problems are in the higher layers. In the case of Heartbleed, it was a higher-level protocol layered on top of SSL and implemented poorly, made worse by the hand-rolled allocator, which is also part of libssl (not libcrypto).
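A hypothetical sketch of why such a hand-rolled freelist allocator is a problem (the real OpenSSL allocator differs in its details): freed buffers are recycled from a private list, uncleared, so a use-after-free reads valid-looking stale data, and tools that hook malloc/free never see it:

    #include <stddef.h>
    #include <stdlib.h>

    /* Freed buffers go onto a private list and are handed out again, so the
     * memory always looks live to the OS and to malloc-hooking tools. */
    struct buf { struct buf *next; unsigned char data[1024]; };
    static struct buf *freelist;

    static unsigned char *buf_alloc(void)
    {
        struct buf *b = freelist;
        if (b) { freelist = b->next; return b->data; }   /* recycled, stale */
        b = malloc(sizeof *b);
        return b ? b->data : NULL;
    }

    static void buf_free(unsigned char *p)
    {
        /* Recover the containing struct; nothing is ever returned to the
         * system or unmapped, so analysis tools see it as still allocated. */
        struct buf *b = (struct buf *)(p - offsetof(struct buf, data));
        b->next = freelist;
        freelist = b;
    }

    int main(void)
    {
        unsigned char *a = buf_alloc();
        if (!a) return 1;
        a[0] = 42;                       /* pretend this is key material */
        buf_free(a);
        unsigned char *b = buf_alloc();  /* same buffer comes straight back */
        /* Reading through the freed pointer now returns valid-looking data,
         * and neither the OS nor Valgrind/ASan-style hooks will complain. */
        return b[0] == 42 ? 0 : 1;
    }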
        • by plover ( 150551 )

          The problem isn't specific to OpenSSL or libssl or libcrypto, it's the overall idea in info security that "This is the One True Solution, thou shalt not Roll Thine Own crypto, lest thou livest in a state of Sin."

It's important to keep in mind that this paranoia is completely justified. I've seen some really poor home-grown crypto implementations, written by well-intentioned but completely inept developers. And I know most older libraries never defended against side-channel attacks. OpenSSL is a product people

    • by Lennie ( 16154 )

Let's have another look at what happened:

Almost every vendor that included OpenSSL in their product jumped on this the first day.

Of the vendors, Apple and VMware were the slowest to respond to the Heartbleed bug. What does that tell you?

  • by Anonymous Coward on Wednesday April 23, 2014 @05:44PM (#46828473)

    I have been a bit surprised that all these companies using OpenSSL (Google, Yahoo, Facebook, etc) haven't ensured that this critical piece of technology is getting the support it needs to be done correctly.

What other critical technology are these same, dependent companies overlooking in their investment of dollars in open source software?

    Will be interesting to see what happens going forward.

  • It's a best practice... how can it be wrong?

Hush! Here, dump a few thousand bucks on getting an ITIL certificate and you'll know why best practice can NEVER be wrong! NEVER!

Is it just me, or are certain IT certificates turning more and more into something akin to the courses offered by a certain alien-worshiping cult? You pay through the nose for courses of dubious quality, and then you have to sing their praises in the hope of eventually getting back at least the money you stuffed in...

  • by Anonymous Coward

Maybe I'm missing something, but since when is 17-20% market share (the estimates I've heard for the number of affected sites) a "monoculture"? Sure, there were some biggies in there, but it seems to me diversity worked pretty well in this case.

    • by jrumney ( 197329 )
A large part of the low impact was that the "stable" distributions of some widely used Linux distros shipped older versions of OpenSSL, from before the bug was introduced.
Most web sites have no need for SSL/TLS, so 17% of all web sites could mean ALL "secure" sites were affected. OpenSSL might have 90% market share in the sense that 90% of SSL connections use OpenSSL, and that could still be only 17% of web sites (e.g., if roughly 19% of sites offer SSL and 90% of those use OpenSSL, that works out to about 17% of all sites).

Yup. Heartbleed only affected software using OpenSSL.
Apache *is* typically built against OpenSSL, so given its popularity, Heartbleed did affect a lot of websites.
Also, curl and wget use it, so a lot of client-side scripts and cron jobs could have been attacked by a rogue webserver.

BUT
it didn't affect GnuTLS (though that one has had its share of problems, too),
and GnuTLS is very widely used as well.
So libpurple (= Pidgin, Adium and many other multichat systems), Exim (= a huge chunk of email servers), and CUPS (= anything that prints) were unaffected.

  • Can't blame Microsoft for this or use the "many eyes" argument either.

  • by hessian ( 467078 ) on Wednesday April 23, 2014 @06:14PM (#46828659) Homepage Journal

    I am not anti-volunteer; I spend a lot of my time volunteering.

    But you need strong leadership.

    Otherwise, everyone does what they want to, which leaves huge holes in the project.

    Whether a piece of code is open source or closed source doesn't matter. The quality of the leadership of the team that produces it is vital in both cases.

    • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Wednesday April 23, 2014 @11:40PM (#46830283) Homepage

      Self-organization [wikipedia.org] is a perfectly reasonable way to run a project. It has several properties that are useful for geographically distributed open source projects, like how it avoids a single point of failure. You can't extrapolate global leadership maxims from the dysfunction of local groups you've been involved in. I'd argue that an open source program that requires "strong leadership" from a small group to survive is actually being led badly. That can easily breed these troublesome monocultures where everyone does the same wrong thing.

      I think the way Mark Shuttleworth organizes Canonical is like the traditional business role of a "strong leader". That's led to all sorts of pissed off volunteers in the self-organizing Debian community. Compare that against the leadership style of Linus Torvalds, who aggressively pushes responsibility downward toward layers of maintainers. The examples of Debian and Linux show volunteers can organize themselves if that's one of the goals of the project.

      • No, it isn't, at least not without really exceptional leadership.

Linus holds himself to exceptionally high standards of work, standards which he expects everyone else who commits to the kernel to adhere to as well. He's also a complete and total asshole and will think nothing of publicly chastising anyone who doesn't. Self-organisation works for the Linux kernel because, for one, only the very best of the best are actually allowed commit privileges, and for another, anyone who fucks up or gets slack will be caught.
  • Specious Argument (Score:4, Insightful)

    by Nethemas the Great ( 909900 ) on Wednesday April 23, 2014 @06:25PM (#46828727)

I'm not sure it's a valid argument. The probability of errors in a given system is proportional to the complexity of that system. Likewise, the cost to maintain and evolve a system is proportionally tied to its complexity. It is therefore a worthy goal to reduce system complexity whenever possible. If network communication infrastructure is taken to be the system, then it naturally follows that the fewer implementations of SSL/TLS there are, the less likely it is that security vulnerabilities will exist. Relatedly, the cost to identify and correct vulnerabilities will be proportionally smaller. Said simply, it's much easier to guard one door than it is to guard many.

Suggesting that a "monoculture" is bad relies upon the same faulty premises as "security through obscurity." The failure with respect to OpenSSL and Heartbleed wasn't the monoculture; it was the lack of altruistic eyes scrutinizing it. More implementations would have only required more eyes.

    • It was the lack of altruistic eyes scrutinizing it.

      That was a secondary effect. People who might want to analyze code want to do a good job, and there's a lot of code worth analyzing.

      To do that job there are tools that help with that analysis. OpenSSL's use of non-standard internal memory management routines makes it resistant to use of such analysis tools.

      Is it impossible for a code auditor to keep everything in his head? No, but it's tough and error-prone. Some people have found OpenSSL bugs before,

      • That's largely what the OpenBSD team is doing - ripping out all of that unneeded memory management crap, killing OS/2, VMS, and MacOS7 support code, etc. The payoff should be more people looking at it,

Paring down unnecessary special memory routines, and any silliness enabled by them, is likely to be productive.

Removing the Windows code, though, appreciably reduces the pool of interested parties, and hence the number of people who care to audit your OpenSSL fork.

    • by Anonymous Coward

Completely incorrect.

Monocultures have proven disastrous for the long-term survivability of anything.

There is a reason human immune systems have evolved the way they have over millions of years: everyone's immune system responds differently to threats. Some are more effective against one type of disease, and less effective against another.

If everyone had the same immune system (a monoculture) or exactly the same genetics, then a single virus capable of exploiting that fact could theoretically wipe out the entire population.

  • Too bad they hadn't forked OpenSSL a while back. Now there is a competing library.

Now we need to support that fork, and assess the feasibility of porting it to Linux as well as the other BSDs, of course.

    Do they have a new name for it yet?

    If SSL = "Secure Sockets Layer", how about: ActualSSL (it's actually secure), DaemonSSL, Pitchfork(ed)SSL, something...

    • It looks like they called it: LibreSSL

      http://www.libressl.org/

      That's what it looks like, anyway.

      Support them if you can!

    • by Anonymous Coward

      Too bad they hadn't forked OpenSSL a while back. Now there is a competing library.

Now we need to support that fork, and assess the feasibility of porting it to Linux as well as the other BSDs, of course.

      Do they have a new name for it yet?

      If SSL = "Secure Sockets Layer", how about: ActualSSL (it's actually secure), DaemonSSL, Pitchfork(ed)SSL, something...

      We need a GNU/SSL fork too. GPL forever!

      • by Above ( 100351 )

        We need a GNU/SSL fork too. GPL forever!

It exists, and is called GnuTLS [gnutls.org]. All the developers I've worked with who've looked at both it and OpenSSL say it is worse than OpenSSL, although I don't remember the particulars of why. Feel free to support it if you prefer a GNU alternative.

      • by gatkinso ( 15975 )

        PolarSSL.

    • by gatkinso ( 15975 )

      PolarSSL is GPLv2

  • There are pros and cons to a monoculture in code, but the pros vastly outweigh the cons.

It is no different from any other code-reuse system: from functions that are called from many places in a program, to common libraries, to open-source software running half the internet. Yes, if there is a bug in a widely used piece of code, it affects a lot of parts of the system - and the more places it is used, the worse the bug is - TEMPORARILY.

The upside is, because this code is used in so many parts, these bugs are found quickly - and a single fix repairs every affected system at once.

When the tsunami hit the east coast of Japan, it caused saddening destruction.

Our industry (MFP) was hit with a shortage of faxes, and this was purely down to a single low-cost chip from a factory in the tsunami-affected region. It was a single point of weakness that no manufacturer was aware of: the supplier companies were actually all just distributors for the one single producer/factory of the chip.

Luckily the chip was simple and manufacturing was moved to another factory in China, but it still took time.

OK - here's a niche-industry page listing about forty open source, commercial, and cloud solutions that are all secured by SSL, and their responses to Heartbleed:
http://www.filetransferconsult... [filetransf...ulting.com]

Of these... maybe a third had OpenSSL; most of the rest used a Java stack, and many of the remainder were on IIS or using MS crypto. Within my own company (about 1500 people and 20 web apps on a mix of platforms), Heartbleed affected exactly 3 sites.

    If you looked around other industries and saw >50% affected rates may
