Bug In the GnuTLS Library Leaves Many OSs and Apps At Risk
New submitter williamyf writes "According to this article at Ars Technica, '[A] bug in the GnuTLS library makes it trivial for attackers to bypass secure sockets layer (SSL) and Transport Layer Security (TLS) protections available on websites that depend on the open source package. Initial estimates included in Internet discussions such as this one indicate that more than 200 different operating systems or applications rely on GnuTLS to implement crucial SSL and TLS operations, but it wouldn't be surprising if the actual number is much higher. Web applications, e-mail programs, and other code that use the library are vulnerable to exploits that allow attackers monitoring connections to silently decode encrypted traffic passing between end users and servers.' The coding error may have been present since 2005."
Now we'll find out... (Score:3, Insightful)
...who has been surreptitiously using GPL'd code in their proprietary stacks...
Re: (Score:3)
GnuTLS is actually under the lesser GPL.
Re:Now we'll find out... (Score:4, Insightful)
...who has been surreptitiously using GPL'd code in their proprietary stacks...
Why would anyone bother when they could just use OpenSSL and not have to worry about it?
Incompatible license (Score:5, Informative)
Re: (Score:2)
NSS perhaps (Score:2)
The GPL has an exemption for linking with libraries that are part of the operating system.
OpenSSL is not part of the "system libraries" on all platforms. Programs would have to have some sort of shim layer to use either OpenSSL on platforms with it or something else on platforms without it. If you distribute a client-side application designed for POSIX-like systems, most people aren't going to be willing to switch to a Mac or install Linux or FreeBSD in VirtualBox just to run it. (Windows is still preinstalled on the vast majority of desktop PCs sold in industrialized English-speaking countries.)
Fortunately not OpenSSL (Score:3)
Thank god it is in GnuTLS, which is not used by any applications serious about security. Just checked; only printer drivers seem affected in my Debian installation.
Roll your own (Score:4, Funny)
This is why you should always roll your own SSL scripts in php like the guy at Magic the Gathering Online Exchange did.
xubuntu seems to be completely dependent on gnutls (Score:2)
Try to remove that library, and you find that most of the critical software depends on it.
Re: (Score:2)
Gtk links to it for some reason, so any XFCE- or GNOME-based distro would likely have trouble removing it.
Re: (Score:2, Funny)
It has gnu in the name. RMS is easily confused with the guy with the beard that people follow with religious vigor.
Re: (Score:2)
Re: (Score:2)
try removing that package from your system.
On mine, it warned about *a lot* of software that has it as an indirect dependency.
We all knew it was coming... (Score:5, Informative)
From February 16 2008: Howard Chu of OpenLDAP: GnuTLS Considered Harmful [openldap.org]
Looking across more of their APIs, I see that the code makes liberal use of strlen and strcat, when it needs to be using counted-length data blobs everywhere. In short, the code is fundamentally broken; most of its external and internal APIs are incapable of passing binary data without mangling it. The code is completely unsafe for handling binary data, and yet the nature of TLS processing is almost entirely dependent on secure handling of binary data.
Incredible that GnuTLS is used anywhere at all. It's just mind boggling.
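To make the complaint concrete, here is a small stand-alone C sketch of the failure mode Chu describes (illustrative only, not GnuTLS code; the blob bytes are made up): strlen/strcat silently stop at the first zero byte, while counted-length handling keeps the whole blob.

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* A DER-style blob with an embedded zero byte (bytes are made up). */
    const char blob[] = { 0x30, 0x06, 0x02, 0x01, 0x00, 0x02, 0x01, 0x2a };
    size_t blob_len = sizeof blob;

    /* String handling: strcat stops copying at the 0x00 byte. */
    char mangled[64] = "";
    strcat(mangled, blob);
    printf("strlen/strcat see %zu of %zu bytes\n", strlen(mangled), blob_len);

    /* Counted-length handling: the whole blob survives. */
    char copy[64];
    memcpy(copy, blob, blob_len);
    printf("memcpy keeps %zu bytes\n", blob_len);
    return 0;
}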
Re: (Score:2)
How come everyone and their brother hasn't been turning these in for security bounties?
Re: (Score:2)
I see that the code makes liberal use of strlen and strcat
A bad tradesman blames his tools.
Re: (Score:2)
A bad tradesman blames his tools.
A bad tradesman uses poor tools.
Re: (Score:2)
He only thinks they are bad tools :p
Re: (Score:2)
Re: (Score:2)
Re:We all knew it was coming... (Score:5, Informative)
Just downloaded the latest patched source code. Here's the summary:
find . -name '*.c' | xargs grep strlen | wc -l
522
find . -name '*.c' | xargs grep strcat | wc -l
44
Just as flawed as ever.
With enough eyes... NOT (Score:5, Interesting)
I have always been critical of the conventional wisdom that "with enough eyeballs, all bugs are shallow".
I contend that it is inaccurate. With enough QUALIFIED AND MOTIVATED eyes, all bugs are shallow, and sometimes some FOSS projects lack enough qualified eyes.
This bug, the KDE one, or even the Metafile bug in Windows (and, more importantly, in WINE), among many others, show that many eyes are not enough.
Again one needs MOTIVATED AND QUALIFIED eyes AAAAAND good QA and test cases.
Cheers
Re:With enough eyes... NOT (Score:4, Insightful)
Perhaps using a safety aware language like Ada would be helpful too. C is known to be brittle, yet people insist on writing all sorts of mission critical code in it. I really wonder why.
Writing safety-aware code _somewhere_ (Score:3)
Since all machine code is potentially brittle, the argument for using "safety aware languages" is itself brittle. For instance, Ada is safe because it doesn't allow deallocation unless you use ada.unchecked_deallocation(), or, alternatively, you build nothing on the heap, or you just hope that the Ada implementation has garbage collection, or..., or... etc.
_Someone_ has to do the work to protect against whatever brittleness is at issue.
For years I have used "struct Buffer { char *start; char *end; };" instead of just
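For reference, a minimal sketch of that counted-buffer idea (the struct layout matches the parent's; the helper name and capacity convention are illustrative, not from any particular library): the length is carried as a pointer pair, and the caller, not a NUL terminator, decides where the data ends.

#include <stdio.h>
#include <string.h>

/* Pointer-pair buffer: the length is (end - start), so embedded zero
   bytes travel with the data instead of terminating it. */
struct Buffer {
    char *start;
    char *end;
};

/* Append raw bytes; 'cap' marks the end of dst's underlying storage.
   Returns 0 on success, -1 if there is not enough room. */
static int buffer_append(struct Buffer *dst, const char *cap,
                         const void *src, size_t len)
{
    if ((size_t)(cap - dst->end) < len)
        return -1;
    memcpy(dst->end, src, len);
    dst->end += len;
    return 0;
}

int main(void)
{
    char storage[16];
    struct Buffer b = { storage, storage };

    buffer_append(&b, storage + sizeof storage, "abc\0def", 7);
    printf("buffer holds %zu bytes\n", (size_t)(b.end - b.start));
    return 0;
}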
Re: (Score:2)
Damnit, it sounds like you are saying that software development is hard. And required diligence. And time.
That is NOT what my pointy-haired boss wants to hear.
He wants to hear that we can whip out software using cheap graduates of questionable schools, while distracting these developers with inane meetings, stupid corporate requirements (have you filled out your quarterly performance objectives?), and also making them the first-line software helpdesk and general IT support.
And he wants it all yesterday.
Dil
Definition of "Enough" and "fase dichotomy" (Score:2)
ASIDE: Your point is mute [look up "moot" before attempting correction. 8-) ]. Enough is enough, and any less is not enough. That's the definition of enough.
Consider: "If you eat enough pudding you'll die"... the only test case is to keep eating pudding till you die. If you stop before you die you didn't eat enough. 8-)
Now the point that all eyeballs are not equal is fine and obvious. It only takes one metaphorical eyeball, connected to the correct brain, to find a bug. So one is enough if the rest of the c
Re: (Score:2)
Again one needs MOTIVATED AND QUALIFIED eyes AAAAAND good QA and test cases.
And maintainers who will accept patches which fix these problems, and not reject them for bullshit reasons like not appreciating minor details of the code style.
Severe, and yet not severe. (Score:4, Informative)
The bug requires a carefully-crafted certificate. That certificate will verify as valid and trusted when it should not be. The connection will still be secure, it will just be with an untrusted person.
So basically it allows a very dedicated attacker to forge a cert and mount a MitM attack.
We all know governments have done this for years. It is widely known that root CA certificates have been violated by spy agencies. A few searches on Google will show bunches of news stories where attackers (all types, government attackers, ID theft attackers, etc) have made fake certificates, abused the CA model, and engaged in similar MitM attacks to what this allows.
SSL/TLS communications are just as secure as they always were. If you have personally verified and trusted the certificates, the attack wouldn't work; it only works when your trust model allows a cert that you don't personally trust to be used for authentication, and even then it still gives you an encrypted connection, just to a wrongly trusted individual.
The flaw is the trust model and using a cert that you don't personally trust to be valid, which is a well-known issue.
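For anyone who actually wants to do the "personally verified" part, the server certificate's fingerprint can be fetched with the stock openssl tool and compared against a value obtained out of band (a sketch; example.com is a placeholder):

openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256

If the printed SHA-256 fingerprint matches the one you were given over a trusted channel, a forged certificate in the middle would be caught regardless of what the library's verification code thinks.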
Re: (Score:2, Insightful)
"The connection will still be secure, it will just be with an untrusted person."
What are you smoking? A connection with a MITM is not "secure". This is WORSE than sending data in plaintext.
Re: (Score:2)
A connection with a MITM is not "secure". This is WORSE than sending data in plaintext.
Plaintext protocols can also be MITMed. How is this any worse?
Re: (Score:2)
What are you smoking? A connection with a MITM is not "secure". This is WORSE than sending data in plaintext.
No, he is right. If the NSA is the man-in-the-middle, then you created a secure connection to the NSA, and the NSA will be friendly enough to create another secure connection to your original destination. It's not the secure connection you wanted, but it is secure. Nobody but the man-in-the-middle can listen in.
Re: (Score:3)
SSL/TLS communications are just as secure as they always were, which is to say broken in a widely used library, under an implementation/trust model that is very widely used.
Re: (Score:3)
SSL/TLS communications are just as secure as they always were.
No, they are not.
The CA model is about much more than the public CA "trust". There is nothing stopping an application designer from using private CAs for their application. This bug breaks trust in any CA, including private ones.
Let's think about (as a thought experiment) what is required for this to be an effective attack.
SSL spoofing is already a common attack. Not just France [zdnet.com] and the NSA [cnet.com] but also regular old password-sniffers [blogspot.com]. This vulnerability falls under the same class of attack as SSL spoofing; a trusted certificate is secretly replaced by an untrusted certificate.
There were some common examples right after unicode was allowed in domain names and people came up with similar-looking links for major companies with unicode
Code audits (Score:3, Interesting)
Re: (Score:3, Interesting)
Seems pretty clear that GnuTLS has too few eyes. Most everything uses OpenSSL instead, and that's where the eyes are concentrated.
So much for Linus's law. (Score:5, Interesting)
"given enough eyeballs, all bugs are shallow"
Apple had their goto bug in TLS for about 18 months before they spotted it.
GnuTLS, and therefore Linux, has had its goto bug in TLS since 2005 (9 years), and it was only spotted now as a result of the bow wave from Apple's disclosure.
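For readers who haven't looked at either bug, here is a hedged sketch of the general class being discussed (simplified, not the actual Apple or GnuTLS source): a C verification helper that reports errors as negative numbers, and a caller that only asks "is the result nonzero?", so a failure gets read as "certificate accepted".

#include <stdio.h>

/* Simplified sketch of the failure mode (not real library code):
   the helper uses negative values for errors, but the caller only
   tests for "nonzero", so an error is mistaken for success. */
static int check_signature(int broken_input)
{
    int result;

    if (broken_input) {
        result = -42;          /* internal error code */
        goto cleanup;
    }
    result = 1;                /* genuinely verified */
cleanup:
    return result;
}

int main(void)
{
    if (check_signature(1))    /* -42 is "true" in C */
        printf("certificate accepted\n");
    else
        printf("certificate rejected\n");
    return 0;
}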
Re: (Score:2)
Only GnuTLS is not a default part of Linux; it's an optional library used by some packages... Most packages seem to use OpenSSL instead; some offer a choice at compile time, but most distros build for OpenSSL by default.
I just "tried" to remove the library libgnutls26 (Score:2)
An awful lot of stuff links to it. Browsers, flash, everything that dials out uses it on xubuntu. Are you saying that they are linking to it but not using it? Or are they linking to it and then it's using a wrapper to openssl?
Removing openssl invokes far fewer dependencies (Score:2)
xubuntu appears to depend largely on the package that "everybody knows is shit".
This does not make me happy.
Testing is hard (Score:5, Interesting)
Testing is hard. The tools you have make it even harder.
How do you build a bad certificate? Fuck, using the openssl tools is hard enough. Does anyone who uses them really understand WTF is happening? I know I don't - I just follow the instructions.
How would you go about building a bogus cert? Beats me. I'm pretty sure you can't do it with the standard tools. And who the heck is going to write their own cert building tools?
And yet, this stuff is at the core of transport security.
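For what it's worth, a throwaway self-signed certificate is a one-liner with the stock openssl tool (roughly; option names can vary slightly between versions, and example.test is a placeholder):

openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365 -subj "/CN=example.test"
openssl x509 -in cert.pem -noout -text

Deliberately malformed or cleverly bogus certificates are another matter; as the parent says, the standard tools won't produce those, and people generally end up writing or scripting their own generators.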
Mac and iOS not at risk (Score:2)
Because if they were, the headline would have been zOMG mac and iOS not safe !!!!!!!11!!!!
So when a weakness like this is found (Score:2)
Is everyone racing to change their passwords?
Deliberately introduced? (Score:2)
Could it also be that the "coding error" was not an error at all, but a deliberately introduced bug? Government agencies have always wanted to read our — and each other's — communications. Sometimes even for legitimate reasons...
Re: (Score:2)
Doubt it, GnuTLS is not really used for anything important.
If you want a conspiracy, you can ask why OpenSSL still has insane defaults like allowing SSLv2.
Freedom is better than dependency. (Score:4, Insightful)
So when Apple's proprietary encryption software suffered a problem, Apple users could do nothing but wait for Apple to deliver a fix; nobody but Apple is allowed to fix Apple's proprietary software. And when that fix ostensibly arrived, Apple users had to hope it wasn't bundled with some malware too (as is often the case with proprietary software [gnu.org]).
This bug was caught during an audit [gnutls.org]—"The vulnerability was discovered during an audit of GnuTLS for Red Hat." Nobody but the proprietor can audit proprietary software. But with free software, users have the freedom to audit the code they run, patch that code, and run their patched code; users can choose to fix bugs themselves or get someone else to fix bugs for them. And users don't always have to trust the same people to do work on their behalf. Users can also choose to wait for a fix to be distributed, and then they can check that fix to make sure it doesn't contain malware. For all we know, some users spotted and fixed this bug in GnuTLS long ago. Since all complex software has bugs, bugs are unavoidable. We're better off depending on people we choose to trust. Software freedom is better for its own sake.
Re: (Score:2)
Re: (Score:2)
Apple's code was based on something "open source", but that does Apple's users no good because of what I already said: the code Apple distributes to its users is proprietary. Better to have the alleged "mess" to track down than to know there's no point in tracking down anything, because what you'll find is something you're not allowed to inspect, modify, or share. Here you're really highlighting the difference between free software and open source: open source advocates don't want to talk about how people ough
Re: (Score:3)
Apple may have known about the issue for a while and not talked about it until it could release whatever proprietary blob alleges to be a fix. Apple's users might have known Apple's software was buggy too, but not been able to do anything about fixing Apple's code, since that's the nature of proprietary software. Apple has sat on exploitable security issues before [telegraph.co.uk]; in that case, governments used that iTunes security hole to invade people's computers (as RMS points out [stallman.org]). So in that case, apparently multiple
NSS (Score:2)
So can we all get on the bandwagon with Fedora and start using NSS instead?
http://fedoraproject.org/wiki/... [fedoraproject.org]
Re: (Score:2)
Slashdot's response to a devastating bug in a GNU library?
Let's speculate about Microsoft's security and mention Snowden for no reason.
Re: (Score:2)
Apple was already mentioned so it's another Slashdot Hat Trick.
Re: (Score:2)
Is there no audit log on that code? It should be obvious whose code is responsible.
Even if the NSA put an exploit into the library in 2005, why didn't those millions of eyes I've heard so much about find the problem for 9 years?
Re:Waiting for Microsoft's "Goto Fail" (Score:5, Interesting)
I think it was MS who had a bug in the past where, if I got a certificate issued for "google.com\0.attacker.com", I could present that certificate for a request to "google.com" (due to DNS hijacking or a MitM attack) and it would pass validation, because the CN was handled as a C-style string and the null byte was treated as a terminator. Fixed long ago, but still. People have been messing up cert validation for as long as it's been around.
The scary thing is how many mobile apps just don't *do* cert validation. Either it's completely disabled, or they crippled it in some way (I've seen both not checking the trust chain and not checking that the cert is valid for the target site). The usual reasons are "oh, we just did that for testing" (but I'm looking at your release version...) or "yeah, one of the servers it connects to uses a self-signed cert" (fine, add explicit trust *for that cert* but don't just disable chain-of-trust checks!). Another common problem is leaving completely broken or outdated options enabled (export ciphers - 40-bit symmetric crypto, easily breakable with a home PC - or SSLv2, or other similarly stupid things). Even if your platform/framework/library has a perfectly bug-free TLS implementation, few people ever seem to actually use it correctly.
Re: (Score:3, Informative)
It was a bug in multiple implementations of TLS including OpenSSL, NSS, and Microsoft's thing because they didn't expect cert authorities to give out certs with null bytes in the CN field.
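A hedged sketch of that class of bug (illustrative only, not code from any of those implementations): the CN in the certificate is length-prefixed and may contain a zero byte, but a strcmp-based check stops at that byte, so "google.com\0.attacker.com" compares equal to "google.com".

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* CN as stored in the certificate: length-prefixed, may contain 0x00. */
    const char cn[] = "google.com\0.attacker.com";
    size_t cn_len = sizeof cn - 1;          /* 24 bytes, embedded NUL at index 10 */

    /* Buggy check: C-string comparison stops at the first zero byte. */
    if (strcmp(cn, "google.com") == 0)
        printf("buggy check: hostname matches\n");

    /* Correct check: compare the full counted length. */
    if (cn_len == strlen("google.com") &&
        memcmp(cn, "google.com", cn_len) == 0)
        printf("correct check: hostname matches\n");
    else
        printf("correct check: hostname does NOT match\n");
    return 0;
}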
Re: (Score:2)
Re: (Score:2, Informative)
No, the issue was with conditionals and braces. The same issue would have happened even if it were two return statements.
Re:Different Software - Same Problem (Score:4, Informative)
No, the issue was with conditionals and braces. The same issue would have happened even if it were two return statements.
And a return statement before the end of a function is essentially a goto. A language that takes the step to rule out gotos should also not allow early returns.
Re:Different Software - Same Problem (Score:4, Funny)
Re: (Score:3, Insightful)
Yeah, force people to write a big pile of nested bracket spaghetti and manually back their way out of every case. Make them introduce a bunch of otherwise useless flag variables and extra conditional statements to keep track of it all.
The best part of it all: When all that extra obfuscation causes bugs, it would be harder to pin the root cause on a simplistic generalization like "goto === bad".
Re:Different Software - Same Problem (Score:4, Informative)
Yeah, force people to write a big pile of nested bracket spaghetti...
1. "nested brackets" (blocks) are by definition not spaghetti. Spaghetti is exclusively the result of gotos and their control equivalents (like the early return).
2. Nested blocks are refactorable into smaller functions. That's the way to cut them down to size, not to use gotos.
I mean really! People still trying to argue against structured code in 2014! You'd think it was still the 1980s.
Function call overhead (Score:3)
Nested blocks are refactorable into smaller functions.
And the program eats the function/method/message call overhead, the overhead of passing all local variables as arguments, and the overhead of constructing and destroying an object through which to return multiple values from each function call.
Re: (Score:2)
Nested blocks are refactorable into smaller functions.
And the program eats the function/method/message call overhead, the overhead of passing all local variables as arguments, and the overhead of constructing and destroying an object through which to return multiple values from each function call.
I think you need to be introduced to a modern optimizing compiler. It will handle the first two for you just fine, as long as you are in the same compilation unit (or doing fancier global optimization). Since you just refactored this from a single function, you are presumably still in the same compilation unit. If you pack the data in something like a stack-allocated struct, even the last one will be reduced or completely avoided.
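A small sketch of what the parent is getting at (assuming the refactored helper stays in the same translation unit; the names here are illustrative): multiple values come back through a small struct, and a modern compiler is free to inline the call so neither the call nor the struct costs anything at runtime.

#include <stdio.h>

/* Small value struct used to return multiple results at once. */
struct minmax {
    int min;
    int max;
};

/* Refactored-out helper; static keeps it in this translation unit,
   so the compiler is free to inline it into the caller. */
static struct minmax find_minmax(const int *v, int n)
{
    struct minmax r = { v[0], v[0] };
    for (int i = 1; i < n; i++) {
        if (v[i] < r.min) r.min = v[i];
        if (v[i] > r.max) r.max = v[i];
    }
    return r;
}

int main(void)
{
    int data[] = { 7, -3, 12, 0 };
    struct minmax r = find_minmax(data, 4);
    printf("min=%d max=%d\n", r.min, r.max);
    return 0;
}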
Re: (Score:3, Insightful)
Wow, have you ever actually written production code? Just wow.
There's nothing cleaner than
if (input1 == null) {
return ERROR("input1 was NULL");
}
if (input2 == null) {
return ERROR("input2 was NULL");
}
if (input2 == null) {
return ERROR("input3 was NULL");
}
Substitute "throw new ERROR(..)" or "goto :error" depending on what kind of code your writing,
Re: (Score:2)
Re: (Score:2)
BTW, that code still returns from the middle, which is the pattern under discussion.
Seriously, it's quite common in production code to need to deal with bad input cleanly at the top of every function;
int Foo(int *low, int *high, char *name)
{
if (low == NULL)
{
return ERROR_PARM1;
}
if (high == NULL)
{
return ERROR_PARM2;
}
if (*low > *high)
Re: (Score:2)
Yes, this is the usual way of doing it. I usually see people using goto clauses when they have to clean up some resource in the error-handling part of the code, e.g. deallocating memory or closing files. I prefer to use a helper function for that and replicate the cleanup call each time. Sometimes the cleanup code isn't the same. Using goto labels isn't any better than calling a function in terms of programming complexity.
Re: (Score:2)
Sure, but the reason for goto :cleanup specifically is "ease of code review". You want to make it easy to demonstrate that every open has a matching close, every alloc has a matching free, and so on. When the code base ends up with 1000 allocs and 999 frees, the faster and easier you can spot the matching bookends, the better.
I've actually used an oddball pattern where Foo() is nothing but the error checking, allocs, and frees, and in the middle it calls _Foo(...) which can then return from the middle. But
Re: (Score:2)
Re: (Score:2)
Fail.
His code checked input 2 twice.
Re: (Score:2)
Can still work perfectly fine in C++ code if the types are subclasses.
Re: (Score:2)
1. "nested brackets" (blocks) are by definition not spaghetti.
I called it spaghetti because the resulting mass of brackets looks just like a big steaming dish of spaghetti, and the extraneous control statements are almost as annoying as gotos to more than a single "error" label.
Nested blocks are refactorable into smaller functions. That's the way to cut them down to size, not to use gotos.
Some are, some not so much. Many situations call for a long list of sequential checks, which can be cleanly and clearly coded as a bunch of if .... return statements. If you put each case in a function you still have the following problems:
- If you do it the obvious way, you still need a deeply
Re: (Score:3)
1. "nested brackets" (blocks) are by definition not spaghetti. Spaghetti is exclusively the result of gotos and their control equivalents (like the early return).
Bullshit. One of the projects at my last job had a single function in C++ that was over 50 printed pages. 5-deep nested loops, not even counting conditionals. On a 1280p resolution monitor, 8pt font, 4 space-tabbing and properly indented code, the start of the deepest nested blocks were 4/5s or more across the screen. A lot of the crap was due to avoiding gotos. That is spaghetti. By using a few judicious gotos, I was able to reduce the code by a third alone. Gotos are not evil. Like any language construc
Re: (Score:2)
C++ makes using goto very hard. The replacement pattern: do {
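One common shape of that replacement pattern is the do { ... break; } while (0) block, which gives goto-cleanup-style early exits without a goto (a sketch with made-up allocations, not the poster's code):

#include <stdio.h>
#include <stdlib.h>

int process(void)
{
    int rc = -1;
    char *a = NULL, *b = NULL;

    do {
        a = malloc(64);
        if (!a)
            break;              /* jumps to the shared cleanup below */
        b = malloc(64);
        if (!b)
            break;
        /* ... real work ... */
        rc = 0;
    } while (0);

    free(b);                    /* free(NULL) is a no-op */
    free(a);
    return rc;
}

int main(void)
{
    printf("process() returned %d\n", process());
    return 0;
}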
Re: (Score:2)
On a 1280p resolution monitor, 8pt font, 4 space-tabbing and properly indented code, the start of the deepest nested blocks were 4/5s or more across the screen.
Sorry to be pedantic, but why would you give only the number of vertical lines (1280)? Since 2276x1280 is such an unusual resolution (I can only assume 16:9 when using the ???p notation), it would be clearer to give the number of pixels in both directions. Another piece of info missing is the DPI, without which one can't relate "pt" to pixels. [at least we know it's a progressive scan monitor, thank god you don't have to code on an interlaced display]
The case _for_ goto (Score:2)
The Linux kernel is full of gotos. Assembly is bereft of blocks and that sort of structure. So "goto" isn't the source of all evil.
Consider this example of the Linux goto paradigm below. When taking locks and establishing component preconditions, you can write an optimal routine that does the stepwise creation and includes the unconditional cleanup, then skips the cleanup if all the parts succeed. The example below is trivial, but when it comes to preserving locking orders it solves a hard problem very sim
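A generic sketch of that stepwise-acquire, fall-through-cleanup shape (illustrative, using allocations and a file descriptor rather than locks; not the parent's example or actual kernel code):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct thing {
    char *buf;
    int fd;
};

/* Kernel-style construction: acquire resources step by step, and on
   failure fall into stacked labels that undo them in reverse order.
   On full success the cleanup block is skipped entirely. */
static int thing_create(struct thing **out)
{
    struct thing *t = malloc(sizeof *t);
    if (!t)
        goto fail;

    t->buf = malloc(4096);
    if (!t->buf)
        goto fail_free_t;

    t->fd = open("/dev/null", O_RDWR);
    if (t->fd < 0)
        goto fail_free_buf;

    *out = t;
    return 0;                   /* success path skips all cleanup */

fail_free_buf:
    free(t->buf);
fail_free_t:
    free(t);
fail:
    return -1;
}

int main(void)
{
    struct thing *t = NULL;
    if (thing_create(&t) == 0) {
        printf("created\n");
        close(t->fd);
        free(t->buf);
        free(t);
    }
    return 0;
}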
Re: (Score:2)
Re: (Score:2)
Hell, the issue would have happened if there were no gotos in use, and instead both statements were method calls - the unintended method call would still have happened.
Re: (Score:2)
Both these bugs are caused by people using 'goto' like morons. Using 'goto' should throw compile-time errors, to start forcing people off this relic of flow control.
The problem there is that it would break very old programs that just need to be recompiled, forcing a rewrite instead. A better way would be to have it disabled in the compiler by default, so you have to enable a flag to override it and are therefore aware that it is there.
Re: (Score:2)
Proper C-style code depends heavily on the "goto :cleanup" pattern. You either write everything exception-safe, all the time (not an option in C, obviously), or every function "allocates at the top and frees at the bottom".
Re: (Score:2)
    if (condition)
        statement;
        statement;
which was duplicating a line of code by mistake, and which would be a problem with almost any statement - statement is executed conditionally once and then executed unconditionally. Whether it is "goto fail;" (which is very
Re:First (Score:4, Insightful)
First, and yet another OSS-related security risk :(
At least they are rare enough that it is newsworthy. As compared to Windows, where new exploits hardly ever get any attention because they are so frilling common as to be passé.
"Error" is Plausable Deniability (Score:5, Interesting)
Hot on the heels of Apple's SSL/TLS implementation "flaw" across all stacks, and the Snowden revelations of NSA infiltration for weakening crypto?
You don't have to be wearing Tin Foil, just to become a little suspicious...
I propose "Snowden" become a active tense op (Score:3)
Snowden:
(v) Adding a bit of code, hardware, or operation you know you shouldn't, because an authority requires you to do so.
"Hey honey, I'll be late for dinner, I have to snowden the latest release of firefox."
(n) the sneaky bit of intrusive technology
"Hey what's this bit?" "Shhh, that's the snowden."
I know he was the whistleblower, but we should enshrine his deed and the knowledge that this is happening by using his name in memoriam.
Re: (Score:3)
First, and yet another OSS-related security risk :(
At least they are rare enough that it is newsworthy. As compared to Windows, where new exploits hardly ever get any attention because they are so frilling common as to be passé.
Well, Slashdot seems to report on every vulnerability popping up on my Apple watchlist (often more than once), but not on all popping up on the RedHat watchlist. Draw your own conclusions from what you just said.
Re: (Score:3)
First, and yet another OSS-related security risk :(
At least they are rare enough that it is newsworthy. As compared to Windows, where new exploits hardly ever get any attention because they are so frilling common as to be passé.
Well, Slashdot seems to report on every vulnerability popping up on my Apple watchlist (often more than once), but not on all popping up on the RedHat watchlist. Draw your own conclusions from what you just said.
Forgot to mention: Apple's TLS bug was also in open source code.
Re:AHAHAHAHAH (Score:5, Interesting)
"Open Source Software is more secure because the code can be reviewed."
That's why this bug has existed since 2005. gg, guys. Thumbs up.
What do you mean? The many eyes found said bug; that is why we are reading about it. If they had not, it would still be sitting there undiscovered. Ever wonder how many bugs go completely unnoticed in proprietary software because no one actually reads said code? Like, for example, a Windows bug affecting all 32-bit Windows OSes for 17 years: http://www.computerworld.com/s... [computerworld.com].
Re:AHAHAHAHAH (Score:5, Informative)
The bug was found due to observed behavior, not due to a code review.
Re:AHAHAHAHAH (Score:5, Insightful)
That may be, but once the behavior was observed, the observer didn't have to find the owner of the code to get it diagnosed. They may have, but the point is that anybody who found this behavior could've gone into the code and found out what caused the problem. Of course, if a black hat happened to be the one that found the bad behavior, they could've gone into the code to figure out how best to exploit it. So, the situation's not perfect, but still, it's probably a good thing that there were lots of eyes allowed to diagnose and fix the problem once it displayed itself.
Bug was NOT found due to being open source (Score:2, Insightful)
This bug wasn't found because it is open source. Those "many eyes" missed this bug for nearly a decade. Security testing tools uncovered incorrect validation behavior in the compiled library, just as they would with a closed source library. The only difference is that the public can see the incorrect code and correct it immediately; that is what you should be citing as an advantage of
Re: (Score:2)
"Open Source Software is more secure because the code can be reviewed."
That's why this bug has existed since 2005. gg, guys. Thumbs up.
What do you mean? The many eyes found said bug; that is why we are reading about it. If they had not, it would still be sitting there undiscovered. Ever wonder how many bugs go completely unnoticed in proprietary software because no one actually reads said code? Like, for example, a Windows bug affecting all 32-bit Windows OSes for 17 years: http://www.computerworld.com/s... [computerworld.com].
Um, no, code review didn't find this - at least not review by the people who are supposed to do it. The bad guys apparently found this bug and have been using it for quite some time. So obviously the black hats are more motivated to review the code than the white hats.
Re: (Score:2)
"Open Source Software is more secure because the code can be reviewed."
That's why this bug has existed since 2005. gg, guys. Thumbs up.
What do you mean? The many eyes found said bug; that is why we are reading about it. If they had not, it would still be sitting there undiscovered.
Unless of course it wasn't undiscovered, but actually used. Or even deliberately planted by the NSA.
Re: (Score:2)
It was only a couple of years ago someone found a significant bug in Unix that had been around since 1986, a 32-bit x 32-bit multiply routine that returned a 32-bit answer. It had been in Linux since the start in the early 90s and nobody had noticed it.
Re: (Score:2, Insightful)
How is this insightful? The only way this could be insightful is if the OP had said "This bug has existed since 2005; clearly we need greater adoption of open source software, to get more people interested in testing for bugs", because the alternative is closed software that has bugs no one can look at or fix.
I already have the security update for this bug on all my machines, but if I had closed source, who knows when, if ever, a patch would have come.
Re: (Score:2)
How is this insightful? The only way this could be insightful is if the OP had said "This bug has existed since 2005; clearly we need greater adoption of open source software, to get more people interested in testing for bugs", because the alternative is closed software that has bugs no one can look at or fix.
Well, that's not true. Apple had a rather bad and embarrassing security bug, and someone could look at it and fix it - they just had to be an Apple employee who was paid to do so.
Re: (Score:2)
"Open Source Software is more secure because the code can be reviewed."
That's why this bug has existed since 2005. gg, guys. Thumbs up.
Especially when it comes to areas like cryptography, it's the quality of the eyes - not the quantity - that matters.
Re: (Score:2)
Well, if your starting point is that "open source doesn't lead to bugs being identified and disclosed", then those very posters you are complaining about are partially right. Consider:
Open source: anyone can read the code, but (based on our premise) this doesn't lead to identification and disclosure of problems. It can allow a prospective attacker to identify problems and not disclose them.
Closed source: only internal staff can read the code, but (based on our premise) having many eyes looking doesn't
Re: (Score:2)
It's a fork of a 15yo fork of Red Hat.
Re: (Score:2)
Re: (Score:2)
To be fair, it IS off topic.
but not worth wasting a mod point on
Re: (Score:2)
To be fair, it IS off topic.
but not worth wasting a mod point on
If you can't find a remotely on-topic way to undo your moderation, you deserve to be modded into oblivion. And why do you think the Offtopic mod exists in the first place? To only mod down offtopic posts that don't admit to being un-mod posts?