OpenSSL: the New Face of Technology Monoculture 113
chicksdaddy writes: "In a now-famous 2003 essay, 'Cyberinsecurity: The Cost of Monopoly,' Dr. Dan Geer argued, persuasively, that Microsoft's operating system monopoly constituted a grave risk to the security of the United States and international security, as well. It was in the interest of the U.S. government and others to break Redmond's monopoly, or at least to lessen Microsoft's ability to 'lock in' customers and limit choice. The essay cost Geer his job at the security consulting firm AtStake, which then counted Microsoft as a major customer. These days Geer is the Chief Security Officer at In-Q-Tel, the CIA's venture capital arm. But he's no less vigilant of the dangers of software monocultures. In a post at the Lawfare blog, Geer is again warning about the dangers that come from an over-reliance on common platforms and code. His concern this time isn't proprietary software managed by Redmond, however, it's common, oft-reused hardware and software packages like the OpenSSL software at the heart (pun intended) of Heartbleed. 'The critical infrastructure's monoculture question was once centered on Microsoft Windows,' he writes. 'No more. The critical infrastructure's monoculture problem, and hence its exposure to common mode risk, is now small devices and the chips which run them.'"
Is anyone surprised? (Score:5, Informative)
Besides, it's disingenuous to claim that no one knew that there were potential problems, the OpenBSD people were not exactly quiet about their complaints about OpenSSL. Of course, rather than considering their complaints on their merits, they were ignored until it blew wide open.
Re: (Score:1, Funny)
We already established that often corporations will use free software because of the cost, not because they're enthusiasts, and often those that are enthusiasts for a given project are specifically interested in that project only, not in other projects that support that project.
Besides, it's disingenuous to claim that no one knew that there were potential problems, the OpenBSD people were not exactly quiet about their complaints about OpenSSL. Of course, rather than considering their complaints on their merits, they were ignored until it blew wide open.
B-b-b-b-but the many eyes of open source makes all bugs shallow.
Heartbleed was very shallow, fixed as soon as iden (Score:5, Interesting)
I guess you're not a programmer, and therefore don't know what a shallow bug is. Conveniently, the rest of the sentence you alluded to explains the term:
"Given enough eyeballs, all bugs are shallow ... the fix will be obvious to someone."
If you have to dig deep into the code to figure out what's causing the problem and how to fix it, that's a deep bug. A bug that doesn't require digging is shallow. Heartbleed was fixed in minutes or hours after the symptom was noticed - a very shallow bug indeed. "The fix will be obvious to someone."
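To make "shallow" concrete, here's a simplified C sketch of the Heartbleed bug class and its fix. The function name and signature are invented for illustration - this is not the actual OpenSSL source:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the Heartbleed pattern, NOT the real
 * OpenSSL code. The peer sends a payload plus a self-reported
 * payload length; the buggy version trusted that length and
 * copied past the end of the received record, leaking heap
 * memory into the reply. */
size_t build_heartbeat_reply(const unsigned char *record, size_t record_len,
                             size_t claimed_len,
                             unsigned char *reply, size_t reply_cap)
{
    /* The whole fix: refuse claimed lengths that exceed what was
     * actually received (or what the reply buffer can hold). */
    if (claimed_len > record_len || claimed_len > reply_cap)
        return 0; /* drop the malformed heartbeat silently */

    memcpy(reply, record, claimed_len);
    return claimed_len;
}
```

Once the symptom was reported, spotting the missing length check took minutes - which is exactly what "the fix will be obvious to someone" means.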
The presence or absence of bugs is an orthogonal question. That's closely correlated with the code review and testing process - how many people have to examine and sign off on the code before it's committed, and whether there is a full suite of automated unit tests.
The proprietary code I write is only seen by me. Some GPL code I write also doesn't get proper peer review, but most of it is reviewed by at least three people, and often several others look at it and comment. For Moodle, for example, I post code I'm happy with. I post it with unit tests which test the possible inputs and verify that each function does its job. Then anyone interested in the topic looks at my code and comments, typically 2-4 people. I post revisions, and after no-one has any complaints it enters official peer review. At that stage, a designated programmer familiar with that section of the code examines it, suggests changes, and eventually signs off on it when we're both satisfied that it's correct. Then it goes to the tester. After that, the integration team. Moodle doesn't get very many new bugs because of this quality control process. That's independent of how easily bugs are fixed once found - how shallow they are - which depends on how many people are trying to fix the bug.
Re: (Score:2)
I have a couple problems with the implication that "short time to find/fix" is so acceptable.
1. Some amount of damage was done (and no one really knows for sure) through this bug. A fix was identified rapidly after the bug was -discovered-, but that's a long time after the bug was -introduced-.
2. For some systems, particularly those like SCADA systems where we really have deep information assurance concerns, patching software is not easy! Not everything can use "grab the patched source, rebuild and reinstall".
true. Also unrelated to deep vs shallow (Score:2)
I agree quality code is important. I'm glad software architecture is now recognized as an engineering discipline, so you can choose to have a qualified Professional Engineer lead or review a software project.
All of which is largely a separate issue from the observation that with enough people looking at a problem, the solution will be shallow - obvious to someone.
Re:Is anyone surprised? (Score:4, Interesting)
The API is widely cited in API security papers as an example of something that could have been intentionally designed to cause users to introduce vulnerabilities. The problem is that the core crypto routines are well written and audited and no one wants to rewrite them, because the odds of getting them wrong are very high. The real need is to rip them out and put them in a new library with a new API. Apple did this with CommonCrypto and the new wrapper framework whose name escapes me (it integrates nicely with libdispatch), but unfortunately they managed to add some of their own bugs...
Re:Is anyone surprised? (Score:5, Insightful)
OpenSSL is a great example of what I dubbed "Monkey Island Cannibal security" in my talks (yes, believe it or not, you can actually entertain and inform managers that way; you'd be surprised how many played MI, and even if not, it's at least something they can understand). The whole Monkey Island spiel works as a perfect example of security blunders where one point gets improved over and over, because everyone thinks that's the only point that could fail, while the rest of the security system gets neglected even though the security problem is obviously there.
For those who don't know MI (or who forgot): there is a moment in Monkey Island where the cannibals catch your character and lock him up in a hut. You can escape that hut via a loose panel in the wall. Now, every time the cannibals catch you again, the door of the hut gets more elaborate and secure, to the point where that bamboo hut ends up with a code-locked, reinforced steel door befitting a high security vault. Which of course has no effect on your chances to escape, since you never pass through that door (at least not on your way out).
The point is that the cannibals, much like a lot of security managers, only look at a single point in their security system and immediately assume that, since this is their way of entering the hut, it must also be the point where you escape. Likewise, the focus of auditing OpenSSL has always been on the crypto routines, and you may assume with good reason that those are among the most audited pieces of code in existence.
Sadly, the "hut" around it is less well audited and tested. And that's where the problems reside.
Re: (Score:2)
B-b-b-b-but the many eyes of open source makes all bugs shallow.
Everyone who attempted to read OpenSSL quickly lost their ability to see and they gouged out their eyes from the pain. OpenSSL is what you call obscurified code.
The bad guys don't have time to master second tier (Score:2, Insightful)
But the rest of us do!
It's a silly argument. Put your eggs in one basket... then guard the basket. 2-3 FT developers doesn't cut it when there are so many attackers and the motivation is much greater than bragging rights at DEF CON.
Re:Is anyone surprised? (Score:5, Insightful)
Re: (Score:2)
> it appears that everyone is reluctant to updates anything, ever
Fixed That For You.
You don't touch core, production libraries for stable code unless you have to. And new features, enhancements, or portability work often hurt the size and performance of otherwise stable code.
Re: (Score:2)
Re: (Score:2)
> The correct way to handle that is with rigourous and extensive test cases, not just closing your eyes and not updating.
Test labs and test time are quite expensive. Replicating the exact combination of tools you use in production, _with all possible current and new releases of all components_, is a task that quickly grows in a combinatorial fashion. This is not an SSL-specific problem, this is a general software problem. A monoculture for development actually _aids_ this by following consistent APIs a
1st one ever (Score:1)
Please see this good related article: http://www.networkworld.com/weblogs/security/003879.html
OSS vs Reality (Score:5, Insightful)
In theory (the way OSS evangelists tell you) as a software package gets more popular, it gets reviewed by more and more people of greater and greater competency. The number of people using OSS packages has exploded in the past 10 years, but the number of people writing and reviewing the code involved doesn't seem to have changed much.
Re: (Score:2, Insightful)
We're reactive, not proactive - why look for problems if the software is already working?
This is why we missed Heartbleed, because there's no compelling reason to keep working once the product gets a green light. There never will be a compelling reason. The problem has no solution that doesn't involve throwing money at something that will never have a payoff...so we won't ever do it. People don't do things unless there's an observable negative to *not* doing them.
Re:OSS vs Reality (Score:5, Insightful)
That is the reality of the situation. In the fantasy land of OSS evangelists, thousands of highly skilled coders are constantly auditing big OSS projects.
Re: (Score:1)
"In the fantasy land of OSS evangelists, thousands of highly skilled coders are constantly auditing big OSS projects."
Do you know what a strawman argument is, right?
But now, for a reality check: this bug, while serious, affected maybe a few thousand out of millions of users, and once discovered it was fully disclosed, audited, peer reviewed and patched *because* it was in an open source environment.
Now, please, tell me you can say the same about other closed source products.
Re: (Score:1)
Apple's GOTO FAIL, whilst not patched universally, was dealt with extremely quickly on Apple's primary product lines.
I would actually equate the handling of both problems, but one is closed source and the other open (since Heartbleed wasn't made public for several weeks).
"Now, please, tell me you can say the same about other closed source products."
Yes, we can.
Re: (Score:2)
Re: (Score:2)
In theory (the way OSS evangelists tell you) as a software package gets more popular, it gets reviewed by more and more people of greater and greater competency.
Interesting.
In reality, as a software package gets more popular, confidence in it increases. While there is some validity in this, often the confidence increases disproportionately to the code paths being exercised. My employer had a mixed C/C++ library that had been in use in a dozen products, almost unchanged for 10+ years. Within a week of porting it to a new C++ compiler I found a buffer overwrite. Nobody believed me. I had to show them the code, build it with multiple compilers, di
Re: (Score:2)
That wasn't my point at all. Dan Geer wrote about the dangers of closed source monoculture a decade ago. You will find very little disagreement on this site.
He's now saying that closed source monoculture is bad. I'm saying that, in theory, it's not. If code was actually reviewed by "millions of eyes" as it got more popular, then we could be pretty confident that a package as widespread as OpenSSL would not contain an exploit as brutal as Heartbleed. But in the current situation, where code is more li
Re: (Score:2)
>He's now saying that closed source monoculture is bad.
Doh. Open source, obviously.
Re: (Score:2)
monoculture in general is bad.
what OpenSSL does should be well enough understood that there are numerous libs doing the same things, and numerous drop-in replacements for OpenSSL that do the same thing.
but I suppose there was a "nobody ever got fired for choosing OpenSSL" mentality in effect too. And it's still true: there haven't been any stories of anyone getting fired for using it.
Closed and open are equivalent ... (Score:4, Informative)
That said, proprietary code can be open too. Some proprietary libraries are available with a source license option. You may have to ask; their ads don't necessarily mention the source license option. It confuses some readers.
Re:Closed and open are equivalent ... (Score:5, Informative)
Yeah, no one tested it with the source before going against the binaries. Are you fucking high?
No, I merely read the account written by the folks who found heartbleed. It was automated testing of a live system. Closed or open source happens to be irrelevant for this particular discovery.
“We developed a product called Safeguard, which automatically tests things like encryption and authentication,” Chartier said. “We started testing the product on our own infrastructure, which uses Open SSL. And that’s how we found the bug.”
http://readwrite.com/2014/04/1... [readwrite.com]
Re: (Score:3)
Really, what are the chances that two independent companies with no interactions manage to find the same 2-year-old bug within 24 hours of each other?
Re: (Score:2)
Re: (Score:2)
That said, proprietary code can be open too.
No, it can't, by definition. If it's not available to everyone, then it's not "open" in this context. To quote the OSI [opensource.org], "Open source software is software that can be freely used, changed, and shared (in modified or unmodified form) by anyone". The essential missing part here is that sharing must be allowed, and the sort of commercial arrangements that get you source to proprietary code don't allow that.
Proprietary software that makes source available to customers has some of the properties of free softwa
Re: (Score:2)
That said, proprietary code can be open too.
No, it can't, by definition. If it's not available to everyone, then it's not "open" in this context.
It absolutely can be open. You can retain full ownership and control of your source code and still let your users have access to it.
Sorry, hit submit not continue ... (Score:2)
That said, proprietary code can be open too.
No, it can't, by definition. If it's not available to everyone, then it's not "open" in this context.
It absolutely can be open. You can retain full ownership and control of your source code and still let your users have access to it.
To quote the OSI [opensource.org], "Open source software is ... GNU project's four freedoms [gnu.org] ...
Good thing I didn't say proprietary software is FOSS, merely that it can be open. Sorry, but OSI and GNU don't get to redefine the word open.
And you don't get to move the goal post. This discussion is about inspecting source code for bugs. And in this sense proprietary can be as open as FOSS.
Re: (Score:2)
In my experience, the main difference between open and closed source is the NDAs I'm bound with. Or rather, the effects such an NDA can possibly have.
In a CSS audit, the NDA will invariably include "and do not hand over any kind of source, lest we kill your firstborn", or a variation thereof. If I find something, it depends on the company that ordered the audit whether or not that bug will even be admitted, let alone fixed, and whether that fix will be delivered to everyone or whether they leave it open del
Re: (Score:2)
Apples and oranges (Score:5, Insightful)
With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. When something bad is discovered, people jump on it immediately and come up with a fix, which is deployed very very quickly (and free of charge, I might add). How fast was a fix available for Heartbleed? Further, people will go to greater lengths to make sure it doesn't happen again. Look at the recent efforts to rewrite OpenSSL, and the fork that was created from it.
None of this happens with proprietary software. First off, the vendor always tries to deny the problem or cover it up. If and when they do fix it, it may or may not be really fixed. You don't know, because it's all closed-source. It might be a half-ass fix, or it might have a different backdoor inserted, as was recently revealed with Netgear. What if you think the fix is poor? Can you fork it and make your own that's better? No, because you can't fork closed-source software (and certainly not selected libraries inside a larger closed-source software package; they're monolithic). But the LibreSSL guys did just that in the Heartbleed case.
Finally, monocultures aren't all that common in open-source software anyway; they only happen when everyone generally agrees on something and/or likes something well enough to not bother with forks or alternatives. Even the vaunted Linux kernel isn't a monoculture, as there's still lots of people using the *BSD kernels/OSes (though granted, there's far more installations of the Linux kernel than the *BSDs).
Re: (Score:2)
Yes, I guess you could say that, but I'd add the qualifier "nearly", or maybe even "remotely".
Re: (Score:2)
Monocultures are a natural result of the need for interop between orgs. Standards form because it is easy to confirm they will work, easy to find employees/volunteers who can use them, they solve a problem well, and the opportunity cost of looking at alternatives will likely exceed any incremental improvement they offer, etc. I agree FOSS is fantastic for turnaround of fixes and for being able to confirm the quality of the fix. Closed source can solve the problem, but you might never know.
I think this calls for more mon
Re: (Score:2)
The other 90% doesn't run critical infrastructure services.
Re: (Score:2)
Because about 100% of the people don't care about the other 90%?
If there's like 10 people who use a software product, it's also just those 10 people who give a shit whether there's a bug in it.
Re: (Score:2)
With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. When something bad is discovered, people jump on it immediately and come up with a fix
The reason (a reason) monoculture is still bad with open source is that we don't know when this exploit was discovered. It may have been discovered long before, by malevolent entities, who didn't reveal it because they were exploiting it.
Re: (Score:2)
With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. When something bad is discovered, people jump on it immediately and come up with a fix, which is deployed very very quickly (and free of charge, I might add). How fast was a fix available for Heartbleed? Further, people will go to greater lengths to make sure it doesn't happen again. Look at the recent efforts to rewrite OpenSSL, and the fork that was created from it.
"It" in "it doesn't happen again" being "a monoculture"? If you have a monoculture, a fork destroys it unless a new monoculture forms from the fork (i.e., if the forked-from project loses most of its market share).
Re: (Score:2)
Not really. A fork isn't a completely different product; the two forks share a codebase, which is why the word "fork" is used instead of "rewrite". How much of a monoculture there is depends on how divergent the forks are. Iceweasel and Firefox, for instance, barely diverge at all, whereas X.org and XFree86 are very different at this point (but still not completely different, the core X code is still mostly the same I'm sure).
Re: (Score:2)
With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. ... How fast was a fix available for Heartbleed?
Heartbleed showed that a monoculture, particularly one relying on poorly written and barely reviewed code, is a bad thing, OSS or not. That the source code was fixed so easily just highlights to me how the heartbeat feature was never properly reviewed or tested, and how people using OpenSSL or incorporating it into their products never questioned it. The many eyes argument fails when you realize how few qualified programmers looked at the code. Given how widespread OpenSSL is, getting that fix rolled
Re:Apples and oranges (Score:4, Interesting)
I think the bigger problem is that everything about encryption software encourages a monoculture. Anyone who understands security will tell you "don't roll your own encryption code, you risk making a mistake." I would still rather have OpenSSL than Joe Schmoe's Encryption Library, simply because at this time I trust them a bit more. Just not as much as I did.
Another problem is that the "jump on it and fix it" approach is fine for servers and workstations. It's not so fine for embedded devices that can't easily be updated. I'm thinking door locks, motor controllers, alarm panels, car keys, etc. Look at all the furor over the hotel card key system a few years back, when some guy published "how to make an Arduino open any hotel door in the world in 0.23 seconds". Fixing those required replacing the circuit boards - how many broke hotels could afford to fix them, or even bothered to?
The existence of a "reference implementation" of a security module means that any engineer would be seriously questioned for using anything else, and that leads to monoculture. And in that world, proprietary or open doesn't matter nearly as much as "embedded" vs "network updatable".
Re: (Score:2)
Re: (Score:2)
The problem isn't specific to OpenSSL or libssl or libcrypto, it's the overall idea in info security that "This is the One True Solution, thou shalt not Roll Thine Own crypto, lest thou livest in a state of Sin."
It's important to keep in mind this paranoia is completely justified. I've seen some really poor home-grown crypto implementations, written by well intentioned but completely inept developers. And I know most older libraries never defended against side channel attacks. OpenSSL is a product people
Re: (Score:3)
Let's have another look at what happened:
Almost every vendor which included OpenSSL in their product jumped on this the first day.
Of the vendors, Apple and VMware were the slowest to respond to the Heartbleed bug; what does that tell you?
Companies using OpenSSL should help out (Score:3, Insightful)
I have been a bit surprised that all these companies using OpenSSL (Google, Yahoo, Facebook, etc) haven't ensured that this critical piece of technology is getting the support it needs to be done correctly.
What other critical technologies are these same dependent companies overlooking when investing dollars in open source software?
Will be interesting to see what happens going forward.
Re: (Score:2, Informative)
I have been a bit surprised that all these companies using OpenSSL (Google, Yahoo, Facebook, etc) haven't ensured that this critical piece of technology is getting the support it needs to be done correctly.
Google has made a great number of contributions to OpenSSL.
Re: (Score:2)
That's the problem with most FOSS projects: people will use them, but few will support them with either time or finances.
Re: (Score:2)
Once you realize that SSL is just a big expensive security theater that has never offered any security, [youtube.com] I wouldn't blame them for not giving a fuck about OpenSSL, or web security in general.
Re: (Score:2)
"but... but... but..." (Score:2)
It's a best practice... how can it be wrong?
Re: (Score:2)
Hush! Here, dump a few 1000 bucks on getting an ITIL certificate and you'll know why best practice can NEVER be wrong! NEVER!
Is it me, or do certain IT certificates turn more and more into something akin to courses offered by a certain alien-worshiping cult? You pay through the nose for courses of dubious quality, and then you need to sing their praise in the hope of eventually getting at least as much money out as you stuffed in...
Monoculture? At 17%? (Score:1)
Maybe I'm missing something but since when is 17-20% market share (the estimates I've heard of the number of affected sites) a "monoculture"? Sure there were some biggies in there, but seems to me diversity worked pretty well in this case.
Re: (Score:3)
17% of sites could be 100% of SSL sites (Score:2)
Most web sites have no need for SSL/TLS. Therefore, 17% of web sites could mean ALL "secure" sites were affected. OpenSSL might have 90% market share in the sense that 90% of SSL connections use OpenSSL and that could still be 17% of web sites.
GnuTLS, NSS, etc. (Score:2)
Yup. Heartbleed only affected software running OpenSSL.
Apache *is* running on OpenSSL, so given its popularity, Heartbleed did affect a lot of websites.
Also curl and wget use it, so a lot of client-side scripts and cron jobs could have been attacked by a rogue webserver.
BUT
it didn't affect GnuTLS (though that one has had its share of problems, too).
and it's very widely used too.
So purple (= Pidgin, Adium and many other multichat systems), exim (= a huge chunk of email servers), cups (= anything that prints)
Haha (Score:1)
Can't blame Microsoft for this or use the "many eyes" argument either.
Re: (Score:2)
Security by obscurity [wikipedia.org] means that the product is secure only when you don't have the source code. The idea is that parts of the security mechanism would be simple to break if only you could see how they are implemented. A simple example is a hardcoded backdoor password in the code. Very hard to just stumble on, trivial to find with source access. Ideally security mechanisms should work equally well whether or not you have their source code, which is security by design [wikipedia.org].
This is a completely different concept.
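The hardcoded-backdoor example, sketched in C (entirely made-up code, for illustration): trivial to find with the source, very hard to stumble on without it.

```c
#include <string.h>

/* Hypothetical security-by-obscurity check. The "security" here
 * evaporates the moment anyone can read (or disassemble) the code:
 * the backdoor is one grep for a string literal away. */
int login(const char *user, const char *pass)
{
    if (strcmp(user, "fieldsvc") == 0 && strcmp(pass, "letmein") == 0)
        return 1; /* hardcoded maintenance backdoor */

    return 0; /* (real credential verification elided) */
}
```

A secure-by-design check (say, comparing a salted hash) loses nothing when its source is published, which is exactly the distinction being drawn.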
Re: (Score:2)
To give a concrete example, take a look at the DNS root zone servers operated by Verisign. They run a 50:50 mix of Linux and FreeBSD and increasingly a mix of BIND and Unbound. They use a userspace network stack on some and the system network stack on others. If someone wants to take out
Re: (Score:2)
Security by obscurity is by definition a bad idea. But the conclusion "CSS == SbO" is false. CSS can rely on SbO, but there is no immediate causal link. What keeps you from writing software that is actually secure by design (which would constitute the opposite of SbO) but leave the source closed? Yes, it could be opened and published without endangering the security of your system, but you decide against it.
That's just as valid.
The fallacy maybe stems from the fact that SbO must be CSS (for the obvious reasons
Recognize limitations of volunteer efforts (Score:5, Insightful)
I am not anti-volunteer; I spend a lot of my time volunteering.
But you need strong leadership.
Otherwise, everyone does what they want to, which leaves huge holes in the project.
Whether a piece of code is open source or closed source doesn't matter. The quality of the leadership of the team that produces it is vital in both cases.
On the nose (Score:1)
He pointed to weak/absent leadership and tried to use it as an argument against the need for strong leadership.
Re:Recognize limitations of volunteer efforts (Score:4, Interesting)
Self-organization [wikipedia.org] is a perfectly reasonable way to run a project. It has several properties that are useful for geographically distributed open source projects, like how it avoids a single point of failure. You can't extrapolate global leadership maxims from the dysfunction of local groups you've been involved in. I'd argue that an open source program that requires "strong leadership" from a small group to survive is actually being led badly. That can easily breed these troublesome monocultures where everyone does the same wrong thing.
I think the way Mark Shuttleworth organizes Canonical is like the traditional business role of a "strong leader". That's led to all sorts of pissed off volunteers in the self-organizing Debian community. Compare that against the leadership style of Linus Torvalds, who aggressively pushes responsibility downward toward layers of maintainers. The examples of Debian and Linux show volunteers can organize themselves if that's one of the goals of the project.
Nope (Score:2)
Linus holds himself to exceptionally high standards of work, standards which he expects everyone else who commits to the kernel to adhere to as well. He's also a complete and total asshole and will think nothing of publicly chastising anyone who doesn't. Self-organisation works for the Linux kernel because, for one, only the very best of the best are actually allowed commit privileges, and for another, anyone who fucks up or gets slack will be caught
Specious Argument (Score:4, Insightful)
I'm not sure it's a valid argument. The probability of errors that may be found in a given system is proportional to the complexity of that system. Likewise the cost to maintain and evolve a system is proportionally tied to its complexity. It is therefore a worthy goal to reduce system complexity whenever possible. If network communication infrastructure is taken to be the system, then it naturally follows that the fewer implementations that exist for performing SSL/TLS communication, the less likely there will exist security vulnerabilities. Relatedly, the cost to identify and correct vulnerabilities will be proportionally smaller. Said simply, it's much easier to guard one door than it is to guard many.
Suggesting that a "monoculture" is bad relies upon the same faulty premises of "security through obscurity." The failure with respect to OpenSSL and Heartbleed wasn't the monoculture. It was the lack of altruistic eyes scrutinizing it. More implementations would have only required more eyes.
Re: (Score:2)
It was the lack of altruistic eyes scrutinizing it.
That was a secondary effect. People who might want to analyze code want to do a good job, and there's a lot of code worth analyzing.
To do that job there are tools that help with that analysis. OpenSSL's use of non-standard internal memory management routines makes it resistant to use of such analysis tools.
Is it impossible for a code auditor to keep everything in his head? No, but it's tough and error-prone. Some people have found OpenSSL bugs before,
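A toy version of the kind of private freelist allocator being described (loosely modeled on behavior attributed to OpenSSL's old freelists, not taken from its source). Because free() is never actually called, tools like Valgrind or ASan never see the buffer die, so use-after-free and stale-secret reuse go unreported:

```c
#include <stdlib.h>

/* Minimal one-bucket freelist: freed blocks are stashed and
 * recycled rather than returned to the system allocator. */
static void *freelist = NULL;

void *pool_alloc(size_t n)
{
    if (freelist) {
        void *p = freelist;
        freelist = *(void **)p; /* pop */
        return p;               /* may still contain old data */
    }
    if (n < sizeof(void *))
        n = sizeof(void *);     /* leave room for the freelist link */
    return malloc(n);
}

void pool_free(void *p)
{
    *(void **)p = freelist;     /* push; never actually call free() */
    freelist = p;
}
```

The recycled block comes back with its previous contents intact, and from the analysis tool's point of view it was never deallocated at all.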
Re: (Score:2)
That's largely what the OpenBSD team is doing - ripping out all of that unneeded memory management crap, killing OS/2, VMS, and MacOS7 support code, etc. The payoff should be more people looking at it,
Paring down unnecessary special memory routines and any silliness enabled by them is likely to be productive.
Removing Windows code appreciably reduces the pool of interested parties, and hence the number of people who care to audit your OpenSSL fork.
Re: (Score:1)
Completely incorrect.
Monocultures are proven disastrous for the long term survivability of anything.
There is a reason human immune systems have evolved the way they have over millions of years. That is one in which everyone's immune system behaves differently to issues. Some are more effective against one type of disease, and less effective against another.
If everyone had the same immune system (monoculture) or exactly the same genetics, then a single virus capable of exploiting that fact could theoretically
OpenBSD's Fork Is The Answer (Score:2)
Too bad they hadn't forked OpenSSL a while back. Now there is a competing library.
Now we need to support that fork, and assess the feasibility of porting to Linux as well as the other BSD's, of course.
Do they have a new name for it yet?
If SSL = "Secure Sockets Layer", how about: ActualSSL (it's actually secure), DaemonSSL, Pitchfork(ed)SSL, something...
DOH! Re:OpenBSD's Fork Is The Answer (Score:2)
It looks like they called it: LibreSSL
http://www.libressl.org/
That's what it looks like, anyway.
Support them if you can!
Re: (Score:1)
Too bad they hadn't forked OpenSSL a while back. Now there is a competing library.
Now we need to support that fork, and assess the feasibility of porting to Linux as well as the other BSD's, of course.
Do they have a new name for it yet?
If SSL = "Secure Sockets Layer", how about: ActualSSL (it's actually secure), DaemonSSL, Pitchfork(ed)SSL, something...
We need a GNU/SSL fork too. GPL forever!
Re: (Score:2)
We need a GNU/SSL fork too. GPL forever!
It exists, and is called GnuTLS [gnutls.org]. All the developers I've worked with who've looked at it and OpenSSL say it is worse than OpenSSL, although I don't remember the particulars of why. Feel free to support it if you prefer a GNU alternative.
Re: (Score:2)
PolarSSL.
Re: (Score:3)
PolarSSL is GPLv2
Re: (Score:2)
Only C (and its derivatives C++ and Obj-C) seems to suffer from these buffer overrun issues, because of its heavy dependence on pointers. Most other languages - Pascal/Delphi, Java, Python, Basic - have bounds checking and will not allow the programmer to read/write a variable x using another variable y where y has no link to x.
If Pascal, with runtime bounds checking enabled, had been used instead of C, you would've gotten an equivalent executable with less than 1% speed loss (due to error checking) compared to
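A minimal C sketch of the manual bounds check that Pascal (with range checking on) or Java performs automatically on every access (hypothetical helper, for illustration):

```c
#include <stddef.h>

/* In C, a[i] with i >= len compiles fine and silently reads
 * whatever happens to sit past the buffer. A checked accessor is
 * the hand-written equivalent of automatic bounds checking: */
int checked_get(const int *a, size_t len, size_t i, int *out)
{
    if (i >= len)
        return 0; /* Pascal would raise a range-check error here */
    *out = a[i];
    return 1;
}
```

Heartbleed is precisely the case where the unchecked form was used with an attacker-controlled index/length.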
Incorrect. Monocultures are not bad in software. (Score:2)
There are pros and cons to a monoculture in code, but the pros vastly outweigh the cons.
It is no different than any other code-reuse system, from functions that are called from many places in a system, to common libraries, to open-source software running 1/2 the internet. Yes, if there is a bug in a widely-used piece of code, it affects a lot of parts of the system - and the more places it is used the worse the bug is - TEMPORARILY.
The upside is, because this code is used in so many parts, these bug
Definitely felt the monoculture hit before! (Score:1)
When the tsunami hit the east coast of Japan, it caused so much saddening destruction.
Our industry (MFP) was hit with a shortage of faxes, and this was purely down to a single low cost chip from a factory in the tsunami-affected region. It was a single point of weakness that no manufacturer was aware of. The supplier companies were actually all just distributors for the one single producer/factory of the chip.
Luckily the chip was simple and manufacturing was moved to another factory in China, but it still took
What monoculture? (Score:2)
OK - here's a niche industry page listing about forty open source, commercial and cloud solutions that are all secured by SSL, and their responses to Heartbleed:
http://www.filetransferconsult... [filetransf...ulting.com]
Of these... maybe a third had OpenSSL... most of the rest used a Java stack, and many of the rest were on IIS or using MS crypto. Within my own company (about 1500 people and 20 web apps on a mix of platforms), Heartbleed affected exactly 3 sites.
If you looked around other industries and saw >50% affected rates may
Re: (Score:2)