OpenSSL Bug Allows Attackers To Read Memory In 64k Chunks
Bismillah (993337) writes "A potentially very serious bug in OpenSSL 1.0.1 and 1.0.2 beta has been discovered that can leak just about any information, from keys to content. Better yet, it appears to have been introduced in 2011, and known since March 2012."
Quoting the security advisory: "A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server." The attack may be repeated, and it appears trivial to acquire the host's private key. If you were running a vulnerable release, it is even suggested that you go as far as revoking all of your keys. Distributions using OpenSSL 0.9.8 (Debian Squeeze vintage) are not vulnerable. Debian Wheezy, Ubuntu 12.04.4, CentOS 6.5, Fedora 18, SuSE 12.2, OpenBSD 5.4, FreeBSD 8.4, and NetBSD 5.0.2 and all following releases are vulnerable. OpenSSL released 1.0.1g today addressing the vulnerability. Debian's fix is in incoming and should hit mirrors soon; Fedora is having some trouble applying their patches, but a workaround patch to the package .spec (disabling heartbeats) is available for immediate application.
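The advisory's "missing bounds check" can be illustrated with a toy model. This is a simplified sketch, not OpenSSL's actual code: a flat bytes object stands in for the process heap, and the handler echoes back however many bytes the attacker *claims* to have sent. The fix, as shipped in 1.0.1g, amounts to discarding any heartbeat whose claimed length exceeds the data actually received.

```python
# Toy model of the Heartbleed bug (NOT OpenSSL's real code):
# the "heap" is a flat buffer where the request payload happens
# to sit next to other process data.

def vulnerable_heartbeat(heap, payload_offset, claimed_len):
    # Bug: trusts the attacker-supplied length field and copies that
    # many bytes starting at the payload, leaking whatever follows
    # it in memory.
    return heap[payload_offset:payload_offset + claimed_len]

def fixed_heartbeat(heap, payload_offset, claimed_len, actual_len):
    # Fix: discard requests whose claimed length exceeds what was
    # actually received.
    if claimed_len > actual_len:
        return None  # silently drop the malformed heartbeat
    return heap[payload_offset:payload_offset + claimed_len]

# The attacker sends a 5-byte payload but claims 64 bytes:
heap = b"HELLO" + b"secret-key-material-lives-in-adjacent-memory"
leak = vulnerable_heartbeat(heap, 0, 64)   # reply spills the secret
safe = fixed_heartbeat(heap, 0, 64, 5)     # request rejected (None)
```

The names and the flat-buffer layout are invented for illustration; the real bug lived in OpenSSL's `tls1_process_heartbeat`, where `memcpy` used the length field from the wire.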
Gee, that's worse than no encryption isn't it? (Score:5, Informative)
"We have tested some of our own services from attacker's perspective. We attacked ourselves from outside, without leaving a trace. Without using any privileged information or credentials we were able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication."
Yikes. And it's been known for 2 years. That's some shit!
ASLR anyone? hype? (Score:2)
Re:ASLR anyone? hype? (Score:5, Insightful)
Re: (Score:2)
I've actually wondered about this too. Read overruns will crash a program just as badly as write overruns; Read AV in Windows [NT], Segmentation Fault in *nix (General Protection Fault in legacy Windows), etc. reading memory will tell you enough about the layout of memory to cherry-pick addresses pretty well, and probably to determine the ASLR mask, but you're still going to have the issue of what, within the heap, is allocated. You could probably do OK by starting from the stack (which is in a predictable enough location) and working from there, I guess?
Re: (Score:3)
Read overruns causing occasional crashes isn't much of an issue for the attacker if the server is auto-restarting.
Re:ASLR anyone? hype? (Score:4, Informative)
Read or write overruns only throw an exception if they go beyond the bounds of the application's total allocated memory and hit an unallocated page (a page fault).
If you simply read into memory that has been allocated by some other component, no exception is thrown.
Reading outside the application's pages entirely is unlikely: you'd have to be reading from the last (or nearly last) allocated block, one allocated when the application space had to be grown (not a realloc of a previously allocated page, which sits lower in the address space rather than near the end), and/or have an overrun large enough to run through all the other allocated blocks.
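That distinction (overruns only fault when they leave mapped memory entirely) can be modeled in miniature. This is a loose analogy, not real virtual memory: one bytearray plays the role of the process's mapped pages, logical "allocations" are just regions within it, and only a read past the end of the whole buffer triggers the simulated page fault.

```python
# Loose analogy: a single bytearray stands in for the process's
# mapped pages; "allocations" are just (offset, size) regions in it.
mapped = bytearray(b"AAAA" + b"BBBB" + b"CCCC")  # three 4-byte allocations

def read(offset, length):
    # Simulated "page fault": only reads past the end of mapped
    # memory fail; overruns into a *neighboring* allocation do not.
    if offset + length > len(mapped):
        raise MemoryError("page fault: read past mapped memory")
    return bytes(mapped[offset:offset + length])

ok = read(0, 8)    # over-reads allocation A into B: no fault, leaks B's data
# read(8, 8)       # would run past mapped memory and "fault"
```

This is why a Heartbleed-style over-read almost always succeeds quietly: the 64k read usually stays inside the heap pages the process already owns.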
Re: (Score:3)
I've actually wondered about this too. Read overruns will crash a program just as badly as write overruns; Read AV in Windows [NT], Segmentation Fault in *nix (General Protection Fault in legacy Windows), etc. reading memory will tell you enough about the layout of memory to cherry-pick addresses pretty well, and probably to determine the ASLR mask, but you're still going to have the issue of what, within the heap, is allocated. You could probably do OK by starting from the stack (which is in a predictable enough location) and working from there, I guess?
ASLR was invented as a mitigation of "return oriented programming" which was itself a way to get around DEP/NX. As such, ASLR targets executable memory, making the memory addresses of candidate executable code fragments hard to guess. ASLR does not randomize data segments - there's no need since the original intent was to make executable locations hard to guess. Non-executable locations was not the problem ASLR tried to solve.
And in the case it would not matter at all if the location was randomized, since t
Re: (Score:2, Informative)
Existed != Known. Whether anyone actually knew about it before now is unknown.
Re: (Score:2)
If only they had written OpenSSL in Java instead of C! I'm wondering how many friends I can get on Slashdot with that statement.
(from http://blog.existentialize.com... [existentialize.com],
Re:Gee, that's worse than no encryption isn't it? (Score:4, Insightful)
"known for 2 years"
No, no, this code has been part of the stable release of OpenSSL for 2 years. The bug has only been known to non-blackhats for, at most, a few weeks.
Whether anyone else (blackhats, the NSA, whoever) knew about the bug beforehand, we don't know.
Not necessarily known since 2012 (Score:5, Informative)
Who knows who knew what and when, but the 2012 claim is a misinterpretation of TFA: it seems to be saying the bug essentially started "hitting the shelves" in distros around then, whereas before that it was mostly only distributed in beta builds and head code.
Re:Not necessarily known since 2012 (Score:5, Insightful)
Context: "Bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14th of March 2012. "
After so many years of this shit, it has to be intentional, just so people will post corrections.
Re:Not necessarily known since 2012 (Score:5, Interesting)
After so many years of this shit, it has to be intentional, just so people will post corrections.
Of course it is intentional, and yet no naming and shaming appears to be going on... why is that? Only a small handful of people are responsible for bringing this to our Linux distros, and a few more responsible for keeping it there. Those people have lost the trust of the community and should never have any of their code submissions or bug-priority lists accepted again; otherwise there is just no consequence for nefariously subverting the security of us all.
Re: (Score:3)
I don't think so in this case. I normally would have waited on the firehose for a submission with a better writeup, but this was relatively urgent news so I upvoted it anyway.
(Yes someone did understand you weren't talking about the potential intentionality of the bug, don't despair there are people capable of comprehension out there and you may even meet one face to face someday :-)
Thanks Jerks (Score:5, Funny)
Now how are we supposed to collect people's private information without their knowledge? Think of the children and all of the terrorists captured with this exploit in the wild!
sincerely,
NSA
Re: (Score:3)
Disregard that; we suck cocks.
NSA
P.S.: All your accounts are belong to us.
No Problem Here (Score:5, Funny)
Never trusted openssl - only use GnuTLS.
http://www.theregister.co.uk/2... [theregister.co.uk]
Um, whoosh? (Score:2, Funny)
How the fuck did this get modded up? Idiot mods (and "DarwinSurvivor", apparently) can't read a link, I guess...
The only way this could have been stupider is if it was actually the same link, instead of merely being a link that I could tell, just from the URL, was about exactly the same issue.
Morons.
Re: (Score:2)
I take it this is a server concern (Score:2)
As I can't imagine the servers I connect to being interested in snooping on my client data, I presume this bug is only a real concern for systems running services, not acting as clients.
Re: (Score:3)
I *think* it might be feasible to exploit your web browser to steal cookies or saved credentials if you connect to a rogue https site. Credentials are always nice for spamming. If you convince people to keep you open in another tab, you might get lucky and snoop some credit card numbers or banking credentials too. A regular person should fear mainly automated attacks like this.
(Please do prove me wrong if I didn't get the attack potential here right.)
Re:I take it this is a server concern (Score:5, Interesting)
No, you got it quite right. A server could grab browsing history, JS memory contents, stored passwords, and authentication cookies from a browser. It's not just web browsers, though; a malicious server could also steal email (from other email accounts) out of a mail client, and so on. For the handful of services that use client certificates, a server could steal the *client's* secret key.
Browsers (or other clients) that use multiple processes have some degree of safety, as this exploit can't read across process boundaries. It only sees whatever happens to be in the attacked process's address space: even though every Chrome tab *can* obtain the cookies currently in use by every other Chrome tab, that doesn't mean they are always loaded into each tab's process (though I don't know whether they are in practice or not).
Still, this is a grade-A clusterfuck security-wise. The ability for an unauthenticated attacker (all you need is an open TLS connection; that could be the login screen) to read memory off the other side of the connection is the kind of exploit you can make movie-grade "hacker" scenes out of. For a simple example you might see somebody pulling, you could use this exploit to decrypt any connection you recorded, assuming the server hadn't rotated its private key since then. If you can be fast enough and are in an intercept (MitM) position rather than just monitoring passively, you could even grab the keys in real-time and have complete control, invisibly, over the connection. From there, you could even read memory from the client and (continue reading from) the server at the same time!
You could probably do it automatically using a Raspberry Pi hiding behind the flowerpot in a café. I'm not joking.
I've been in the security world for years and I don't think I've ever seen so bad a vuln. Yes, things like "goto fail" were mind-blowingly stupid, but they still only let you MitM connections if you were in the right place at the right time. This one is strictly better and enables a huge number of alternative attacks.
Re:I take it this is a server concern (Score:5, Informative)
I don't think Chrome uses OpenSSL, although they are thinking about switching to it. They use NSS, same as Firefox. I'm not sure any browsers use OpenSSL - it's mostly used on the server.
Re: (Score:2)
You really think the guy behind hotgritsnatalyportmanphotos.org is trustworthy?
Re: (Score:2)
Re: (Score:2)
Client data might be used for "full spectrum" efforts, e.g. propaganda, deception, mass messaging, pushing stories, spoofing, alias development or psychology.
I.e. the service you use is weakened.
The other aspect is how many groups knew of this crypto trick: the US and just a few friendly govs, their staff, their contractors, and any ex-staff or staff open to faith or cash needs.
Just another way in
http://www.businessweek.com/ar... [businessweek.com]
Let's keep this to ourselves (Score:2)
git blame of the bug please (Score:5, Interesting)
can someone link to the git blame of the bug please?
Re:git blame of the bug please (Score:5, Informative)
There's an analysis of the bug here [existentialize.com].
Re: (Score:3)
Mod parent up; it's very informative and worth reading.
Whether you get anything truly interesting out of the attack is a separate matter. Fortunately, the attacker can't control where the read starts (just its length), so you're more likely to get the session key (which the attacker has anyway) than the private key from this sort of poking around.
Beware memcpy()! If you don't know exactly where you're reading from and writing to, and that you've got big enough memory chunks at both ends, you've got a security hole waiting to happen.
Is this for real? (Score:5, Interesting)
Is there anyone on the planet using TLS heartbeats via TCP for anything except exploiting this bug? What is even the point of heartbeats without DTLS?
Bugs are bugs, yet the decision to enable a mostly useless feature by default for non-DTLS connections is, in my view, not so easily excusable.
Re: (Score:3, Informative)
It's disabled in the base system OpenSSL in FreeBSD, so it can't be that critical.
Incidentally, that also means that the summary is ... imprecise: FreeBSD, by default, isn't vulnerable to this. If you have OpenSSL from ports installed, it is - though that also means the fix is a simple package/port upgrade. (The fixed version is in ports already, and packages are, I believe, being built.)
This is good (Score:3)
Well, it's not good that almost every major auditable crypto library has been found to have trivial exploits (still waiting on issues in the Chrome and Mozilla SSL libraries).
It's good that eyes are looking, and people are finding these things. I imagine that without Snowden's revelations, nobody would have bothered to check. And these bugs would have been found much later or not at all, allowing espionage organizations to compromise many more private communications in the interim.
While the idea that the NSA or some other agency had a hand in these bugs is largely a conspiracy theory, the answer to whether they knew about these flaws and exploited them should be pretty obvious. After all, the NSA has probably done the very same code audits for the purpose of finding holes they can exploit.
And before somebody says a closed-source implementation wouldn't suffer these problems: quite frankly, if all of these libraries were closed-source, we wouldn't know if there was a vulnerability at all, or for that matter whether any that were found would be fixed. There need to be more eyes auditing the security code, not fewer.
Chrome's SSL uses a lot of the OS certificate mana (Score:3)
Chrome just uses the operating system for a lot of the certificate validation of HTTPS, so it can be vulnerable to security holes that apply to the operating system. Chrome wasn't vulnerable to "goto fail", but presumably it has been vulnerable to others in Windows and Mac OS.
Re:Chrome's SSL uses a lot of the OS certificate m (Score:4, Informative)
My understanding is that Chrome and Mozilla both use NSS. It's a bit outdated, so I could be wrong (given that Google forked webkit, I can imagine them forking NSS too).
Actually, with a quick Google search, it seems that Chrome on Android uses (used?) OpenSSL for certain functions. I'm curious to know if secure communication via Android devices can be compromised via those functions. At first glance, I'd say no, but I don't have enough domain knowledge to make this assertion.
NSS is thus far secure, but I really, really would like to see the results of multiple full and independent audits. If there's a problem in NSS, that would be about as big as it can get.
Like I said, it's a bit frightening that there are such large and somewhat obvious holes in these major crypto libraries found within three months of each other, but it's good to know that they're being found and fixed.
Re: (Score:2)
Did you miss all the RSA stories?
Whether they had a hand in this particular bug is conjecture. Whether they've had a hand in this sort of thing in general? They have.
Got the update... (Score:2)
Linux Mint, and I'd assume Ubuntu too, has already pushed the updates out. Happy happy. Joy joy.
Re: (Score:2)
We're all fucked (Score:5, Interesting)
Any data kept in RAM on an OpenSSL box has probably been compromised. It sounds like that includes private keys, root certs, passwords, etc.
This is why passwords etc. should be encrypted in RAM. It's funny: there's a Security Technical Implementation Guide (STIG) on that very item. It always sounded sort of ridiculous, but now I know why it was there.
Re: (Score:2)
There would need to be other exploits to cross program-program and user-user isolation.
(well and the data/mem mapped/read/accessed by the compromised program)
Re:We're all fucked (Score:5, Interesting)
Don't just encrypt them - move them out of process entirely. Have a security broker that knows your secrets, but doesn't talk to *anything* except local clients (on the assumption that if the attacker has arbitrary code execution, it's game over anyhow). Use inter-process communication to get secret operations performed when needed, but preferably don't *ever* hold sensitive data in memory (for example, instead of using your private key directly, you ask the broker process to sign a binary blob for you, and it does so using your key and returns just the signature). Use "secure buffers" in managed code, or "secure zero" functions otherwise, to eliminate any sensitive data from memory as quickly as possible.
Yes, this used to sound paranoid. Actually, it still does sound paranoid. But, there's now a great example of a scenario where this is a Good Idea.
Of course, you have to make sure that broker is Really Damn Secure. Keep its attack surface minimal, make sure the mechanism by which it identifies whose key to use is extremely robust, and if possible make it a trusted part of the OS that is as secure from tampering as possible (Microsoft already has something like this built into Windows). There's also a question of how far to take it. For example, you could have the broker handle the symmetric encryption and decryption of TLS data (the bulk data part, after handshaking is completed) but that could impact performance a lot. Keeping the symmetric key in memory isn't so bad, really; it's ephemeral. However, if an attacker has a vuln like this and wants to read the traffic of a target user, they could attack the server while the user is using it, extract the symmetric key, and use it to decrypt the captured TLS stream. Keeping the key in memory only while actually in use, and (securely) purging it between one response and the next request, might be a good middle ground, perhaps?
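The broker approach described above can be sketched in miniature. This is an illustrative toy, not a real design: the "broker" here is just an object holding an HMAC key, standing in for a separate hardened process reached over IPC, and all the names are made up. The point is the interface shape: clients can request signatures over blobs, but never the key itself.

```python
import hashlib
import hmac

class SigningBroker:
    """Holds the secret and exposes only a sign-this-blob operation.
    In a real design this would run in a separate, hardened process
    and clients would talk to it over IPC, so a memory-disclosure bug
    in the client process could never reach the key."""

    def __init__(self, secret):
        self.__secret = secret  # never handed out through the API

    def sign(self, blob):
        # Client gets a signature over its blob, nothing more.
        return hmac.new(self.__secret, blob, hashlib.sha256).digest()

    def verify(self, blob, signature):
        # Constant-time comparison, to avoid timing side channels.
        return hmac.compare_digest(self.sign(blob), signature)

broker = SigningBroker(b"long-term-key-material")
sig = broker.sign(b"tls-handshake-transcript")
# ...the key itself never enters the client's address space.
```

With this split, a Heartbleed-style read of the client process could leak session data but not the long-term key, which only ever lives in the broker.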
Re: (Score:2)
"Of course, you have to make sure that broker is Really Damn Secure. Keep its attack surface minimal, make sure the mechanism by which it identifies whose key to use is extremely robust, and if possible make it a trusted part of the OS that is as secure from tampering as possible"
So in other words run it as root, so when it gets compromised you're TRULY fucked. Yeah, genius idea. Far better is just to scatter parts of the password randomly around a block of memory, reassemble when required, then delete when done.
Re: (Score:2)
Any data? From a vulnerability that can read up to 64k from the process doing the TLS heartbeat, and without even an attacker-chosen offset?
Re: (Score:2)
Also makes the code more difficult to debug, more difficult to fix, and increases the chances of exploitable bugs existing in the first place...
How many times have security holes resulted from trying to overcomplicate the code?
All those links (Score:4, Interesting)
https://www.openssl.org/news/secadv_20140407.txt
Windows (Score:5, Funny)
Good thing I use Windows, so I'm safe.
Re:Windows (Score:5, Funny)
Unfortunately it is XP, so you are safe until 12:00.
Is SSH affected? (Score:2)
Does this affect SSH at all? It seems more likely this would affect TLS servers such as Apache and stunnel.
Re: (Score:2)
For sshd there was possibly some protection afforded by the privilege separation model. I'd store your old keys and wait to see something from someone who knows it cold.
Re: (Score:3)
Assuming it uses a version of openssl that supports the relevant TLS feature, SSH servers are absolutely vulnerable. Connect to one, carry out the attack while it waits for you to authenticate; now you can steal its secret key. This is also a way that a malicious SSH server could attack the client; possibly stealing things like the client private keys (SSH being one of relatively few places where asymmetric client authentication is common).
Re:Is SSH affected? (Score:4, Informative)
OpenSSH uses the libcrypto portion of OpenSSL for crypto primitives. It does not use TLS, and therefore SSH is not vulnerable to this attack.
Shut the fuck up when you don't know what you're talking about.
Re: (Score:3, Insightful)
Rather than get all aggro, I will state that I have tried to find a concrete answer to this question ("is OpenSSH vulnerable/impacted by this?"), and I still cannot. So before someone say "shut the fuck up when you don't know what you're talking about" to me, I'll provide the data (and references) I do have:
* OpenSSH links to the libcrypto.so shared library, which is absolutely OpenSSL on most systems: ldd /usr/sbin/sshd followed by strings /whatever/path/libcrypto.so.X (you'll find OpenSSL references in the output).
Re: (Score:3)
As I understand it, this is a bug in a function of OpenSSL that is used in TLS sessions which isn't used by OpenSSH. OpenSSH does not use TLS.
Your webserver and mail server would though.
Yet again C bites us in the ass (Score:5, Insightful)
Yet again, C's non-existent bounds checking and completely unprotected memory access lets an attacker compromise the system with data.
But hey, it's faster.
Despite car companies complaining loudly that if people just drove better there would be no accidents, laws were eventually changed to require seatbelts and airbags because humans are humans and accidents are inevitable.
Because C makes it trivially easy to stomp all over memory, we are guaranteed that even the best programmers using the best practices and tools will still churn out the occasional buffer overflow, information disclosure, stack smash, etc.
Only the smallest core of the OS should use unmanaged code with direct memory access. Everything else, including the vast majority of the kernel, all drivers, all libraries, all user programs should use managed memory. Singularity proved that was perfectly workable. I don't care if the language is C#, Rust, or whatever else. How many more times do we have to get burned before we make the move?
As long as all our personal information relies on really smart people who never make mistakes, we're doomed.
Re: (Score:2)
Re: (Score:3)
It's probably possible to create a compiler mode that compiles bounds-checking code into existing C programs. This would involve one compiler pass that generates C output with the inserted code, and a second pass to generate the binary. This could be done with a new backend in Clang. It would also allow the inserted code to be easily inspected, since the source output could be dumped to a file. A good thing about this is that such a feature could be turned on or off in the compiler. This would be on by default.
Re: (Score:2)
There are lots of bounds checking libraries that can be used when building applications. The hard part is writing unit tests to find all these possibilities each time a patch is submitted.
Re: (Score:2)
Only the smallest core of the OS should use unmanaged code with direct memory access. Everything else, including the vast majority of the kernel, all drivers, all libraries, all user programs should use managed memory.
My computer is too busy calculating an MD5 in a managed-memory VM that doesn't even have unsigned or sized integer types, and thus must perform basic barrel-roll operations in about 50 opcodes' worth of abstraction-container dereferencing, to allow me to respond to this post appropriately.
Re: (Score:3)
Blah blah blah.
Java 8 has a full SSL stack written in Java itself, so no buffer overflows there, and which uses AES-NI for hardware accelerated encryption if available. It also supports perfect forward secrecy and other modern features (no session tickets though).
If you look at the CVE history of JSSE what you will find is that occasional bugs like the Heartbleed attack (not checking length fields correctly) get reported as denial of service issues because they cause managed exceptions that might, if you wr
Re: (Score:3)
Yes, bounds checking is a hassle in C but throwing out the whole language isn't necessary. We could default to smart pointers and add a new type qualifier for people who want the old behavior. Dangerous code would look like:
It might be required to have a compiler switch to avoid breaking ABIs that expect simple pointers. The bounded pointer would have to be a struct with a slim pointer and a size in it that gets updated whenever the pointer mutates, and raises a signal on an out-of-bounds access.
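The fat-pointer semantics being proposed (a pointer bundled with its allocation size, checked on every dereference and on pointer arithmetic) can be sketched outside of C. A toy illustration with invented names, raising an exception where a real implementation would deliver a signal:

```python
class BoundedPtr:
    """A pointer carried together with the bounds of its allocation;
    every read and every pointer adjustment is checked against them."""

    def __init__(self, buf, offset=0):
        self.buf = buf
        self.offset = offset

    def __add__(self, n):
        # Pointer arithmetic re-checks bounds as the pointer mutates.
        if not 0 <= self.offset + n <= len(self.buf):
            raise IndexError("pointer moved out of bounds")
        return BoundedPtr(self.buf, self.offset + n)

    def read(self, length):
        # The check a raw C pointer lacks; stands in for the signal.
        if self.offset + length > len(self.buf):
            raise IndexError("out-of-bounds read")
        return bytes(self.buf[self.offset:self.offset + length])

p = BoundedPtr(bytearray(b"payload"))
ok = (p + 3).read(4)   # in bounds
# p.read(64)           # a Heartbleed-style over-read would raise here
```

The cost is exactly what the comment anticipates: the pointer doubles in size and every access pays a comparison, which is why such a scheme would need an opt-out and a compiler switch for ABI compatibility.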
Re: (Score:2)
Feel free to rewrite OpenSSL in a more secure language and still make it as generic and cross-platform as it is now, with no loss in performance.
It's sad no one agrees with this. (Score:2)
I don't understand why this is controversial. People consider it a bad idea to roll your own encryption code. Why isn't it a bad idea to roll your own bounds checking? Because it's easy and you won't screw it up? I'm sure people writing their own hash functions feel the same way.
Do people seriously prioritize speed over security? Of all of the things my computer might squander its gigahertz on, checking bounds on things that will never actually be out of bounds isn't something I can get upset about.
Re: (Score:2, Insightful)
Because "bounds checking" is no silver bullet - it's an artificial limitation that DOES slow things down, unlike seatbelts and airbags, which have infinitesimal impacts on vehicle performance.
That's not true. Safety features like seatbelts, airbags, roll cage, etc. add an appreciable amount of weight to the car. "some hypermilers take it to the extreme, removing important safety features like rearview mirrors or even the car's airbags." [howstuffworks.com]
Either that, or you're too stupid to program successfully in C.
Apparently so are the OpenSSL developers, and all the other people who have been bitten by bounds errors over the years. Too bad there's no operating system written by perfect beings.
Yes!!! (Score:5, Funny)
*air-punch*
I knew procrastinating Debian upgrades for most of a decade would pay off! I am VINDICATED!
It's really annoying (Score:2)
Re:It's really annoying (Score:4, Insightful)
There may seem to be more now because there is more auditing going on since the NSA revelations reminded people what had to be done, and also the slower trend of case law starting to punish mishandling of customer data. The halcyon days are over and the backlog is being cleared up.
Re: (Score:2)
Yah, like all that oh-so-secure code that used to float around back in the 70's and 80's? I remember when systems used to get hacked by dial-up modem on a regular basis. There were and have been security holes in things forever. It just used to be harder to exploit most of them remotely and there were fewer people trying to exploit them.
Re: (Score:2)
Is there no testing protocol for security issues?
A lot of open source code is just thrown out there with the hope of enough random people reviewing it: "with enough eyeballs, all bugs are shallow". The testing protocol you are thinking of is instead called professional code audits. That's what OpenBSD does, and it's what Microsoft puts a lot of money into. It's basically paying for the eyeballs, to ensure that those eyeballs actually exist and are possessed by competent people.
Re:It's really annoying (Score:4, Funny)
This bug is almost 10 years old
Well look who natively counts in binary.
Hello Joshua! Give my regards to Dr Falken.
All these bugs are mind blowing (Score:4, Insightful)
But, regardless of the root cause (intentional malice or just sloppiness) I'm glad eyes have been checking these code bases with more diligence over the past several months. In the end it means more security for us users, regardless of our platform of choice.
Thank you again, Edward Snowden, for the collective wake up call!
Now if we could just get our governing officials to fix some of these egregious laws...
RHEL / CentOS / Fedora updates now available (Score:5, Informative)
RHEL updates are available:
https://rhn.redhat.com/errata/RHSA-2014-0376.html [redhat.com]
CentOS updates are available:
http://lists.centos.org/pipermail/centos-announce/2014-April/020249.html [centos.org]
Fedora updates are available, hitting the mirrors, but you can get it earlier, instructions here:
https://lists.fedoraproject.org/pipermail/announce/2014-April/003205.html [fedoraproject.org]
https://lists.fedoraproject.org/pipermail/announce/2014-April/003206.html [fedoraproject.org]
Re: (Score:2)
As for Debian / Ubuntu:
The 1.0.1g package is for the testing and unstable versions (Jessie, sid), in Wheezy the bug is fixed in v1.0.1e-2+deb7u5 [debian.org].
To check your OpenSSL version... (Score:2)
To get your version, run:
openssl version
Yes, I looked it up... in hindsight, I guess it was kind of obvious and I should have been able to figure it out. Anyway, it's done. Hope it saves someone a few seconds....
Older Versions Safe (Score:5, Insightful)
Distributions using OpenSSL 0.9.8 are not vulnerable
This is why I haven't upgraded my Linux servers in 23 years.
Quick test shows Yahoo user passwords (Score:5, Informative)
Filippo Valsorda's online tool for checking web servers for the Heartbleed vulnerability [filippo.io] is quite an eye opener. As well as telling you whether the server is vulnerable, it displays a small snippet of the memory it retrieved (there are scripts on Github that will show you the whole 64KB I believe).
In the quick tests I did on login.yahoo.com (used for Yahoo's email and probably all other Yahoo services), I saw three different users' passwords and at least part of their usernames. And you can just sit there refreshing the page to see more! Madness!
News cycle (Score:3)
Does anyone else see anything odd about the search results for this story?
I Googled "heartbleed" around 15 minutes ago and looked through 13 pages of results. I was looking for some info a little on the hardcore side, and the Google results were kind of surprising. There were tons of big well-known sites at the very top of the list - Fox, CNN, BBC News, Reuters, Forbes, etc; then a whole lot of mainstream "tech news" sites (PC World, ZDNet and so on) and blogs (HuffPo for example), then finally some more tech-oriented or actual tech sites (YCombinator, Netcraft, StackOverflow) with a tiny sprinkling of blogs and relevant support forums (Cisco). US-CERT's listing was down on page 3 or so, and honestly there just were not that many "hardcore" sites to be seen.
Running the search again after clearing cookies, the layout has changed a lot. The big news sites hits have slid way down (Fox News is on p. 3 now, for instance) with tech news and blogs moving up. All in all, the harder tech sites are floating upward and the less so are moving down. It's like the lava lamp version of a security scare.
Wondered what other Slashdotters think; it just seems a bit... strange, somehow. Don't these things usually bubble around in the tech community for a bit before surfacing in the mainstream world? It's like every big news site on the planet picked it up simultaneously, followed by the mainstream tech news sites, and finally it began to filter down into the tech world. Could just be an artifact of Google's update cycle, but it definitely piqued my curiosity.
Re:Ironic (Score:5, Insightful)
Irony rears its head on the day that patches for a Linux vulnerability are announced, at the same time Microsoft ends its patching and update service for Windows XP.
How is a vulnerability in OpenSSL, which is a library that can be compiled for multiple platforms, a "Linux vulnerability"?
Re: (Score:3)
You're probably thinking of OpenSSH. OpenSSL is independent as far as I know.
Re:Ironic (Score:5, Funny)
Re: (Score:2)
Basically it means if you know any UNIX sysadmins, they'll be pretty cranky for the next week or so as they've been busy trying to put the poop back in the baby.
Oh yeah, and lots of your gadgets and favorite cloud services may be vulnerable, so anything stored on them may be in the hands of others.
Minimal jargon explanation (Score:5, Informative)
Basically, an attacker can connect to many secure Internet services - could be a banking website, or your email server, or a server hosting software updates, or possibly your corporate VPN - and learn everything that the server knows. This includes the private key (sort of like a super-complex and super-secret password) that is used to *make* the service secure. The attacker can then get all the data that the server sees, ranging from normal user passwords to all your emails and banking info.
This vulnerability is many, many kinds of bad. I'm simplifying a lot here. Basically, an awful lot of data is at risk right now, because of this bug.
This site has a pretty great explanation that most people likely to be found on /. will be able to follow, even if not normally security types: http://heartbleed.com/ [heartbleed.com]
Re:Things are starting to turn around (Score:5, Insightful)
That's not a fair generalization. Though there are plenty of "ideologically driven amateurs" — especially in the Linux (compared to BSD) world — they are mostly found among the noisy advocates, rather than actual developers.
Somewhere higher up, the bug is described as a "simple bounds check", which would be easy to implement. The truth is probably somewhere in between.
The NSA, I am sure, knows plenty of holes (if not back doors custom-made by the authors) in proprietary software too.
I am disappointed in the quality of open source software, especially pieces as famous and fundamental as OpenSSL, and I agree that open source's claimed advantage of there being "thousands of eyeballs" verifying its correctness is overblown.
But declaring it to be "losing" is a silly jump just as far in the opposite direction from the enthusiastic proclamations of the above-mentioned ideology-driven advocates.
Re:Things are starting to turn around (Score:5, Insightful)
Somewhere higher up, the bug is described as a "simple bounds check", which would be easy to implement. The truth is probably somewhere in between.
It's not the fix of the code that's messy. It's the fix of all the trust relationships built on that code. They are all broken. After the upgrade, keys need to be replaced, certificates re-issued, endpoints and clients reconfigured to trust the new keys, and in some cases customers and end-users may need to be involved. For anything of CDE-level security or higher, it's as big a cleanup job as the one that gave us openssl-blacklist, but the blacklist for this one would be neither complete nor easy to assemble.
I predict a lot more interest in turning on CRL pathways in the future.
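The cleanup the parent describes starts with fresh key material. A minimal sketch of that first step using standard openssl commands (file names and the CN below are placeholders; key size and CA submission workflow depend on your environment):

```shell
# Treat the old key as compromised: generate a new RSA key and a CSR
# for the CA to re-issue the certificate against.
openssl genrsa -out example.com.key.new 2048
openssl req -new -key example.com.key.new \
    -subj "/CN=example.com" -out example.com.csr

# Sanity-check the CSR before submitting it; the old certificate then
# needs to be revoked with the CA so it lands on the CRL.
openssl req -in example.com.csr -noout -subject
```

The key replacement itself is mechanical; the hard part is the coordination with the CA, clients, and end-users that follows.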
Re:Things are starting to turn around (Score:5, Insightful)
That's not a fair generalization. Though there are plenty of "ideologically driven amateurs" — especially in the Linux (compared to BSD) world — they are mostly found among the noisy advocates, rather than actual developers.
...
systemd devs seem bound and determined to prove you wrong there...
original eyeballs meant the FIX to a known bug (Score:4, Informative)
On a side note, regarding the claimed advantage of there being "thousands of eyeballs" verifying its correctness:
ESR's famous quote is "with enough eyeballs, all bugs are shallow - the fix will be obvious to someone."
The quote doesn't say anything about correctness. It says that when strange behavior is noticed, someone will see a clear fix. A shallow bug is one that's right there on the surface, where you can see the source of the problem, in contrast to one where you have to spend hours searching for the cause. It makes no claim about how quickly or easily a bug will be discovered, only that it can be fixed once it is discovered.
Re: (Score:3, Interesting)
Shill much?
Two anonymous cowards with IDs less than 1000 apart write anti-open-source posts at the same time?
Re: (Score:2)
Shill much?
Two anonymous cowards with IDs less than 1000 apart write anti-open-source posts at the same time?
LOL!
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Figures - I write a dozen thoughtful posts that get no love, but when I take a brain shart on a post, that gets the mod points.
Re: (Score:2)
The problem with a closed source effort is what we saw with Prism http://www.theguardian.com/wor... [theguardian.com]
The legal system and dev staff stay with the closed source product.
With open source code, when an issue is found days, months, or years later, it can be corrected, fully understood, and fed back into further worldwide crypto work.
Re: (Score:3)
The OpenSSL core team all appear to be professionals.
I have not checked, but most of the contributors probably are too.
The same is true of most big open source projects (like the Linux Kernel).
The differences are:
1) There is better disclosure of bugs in open source.
2) Some bugs can be discovered by third-party audit (as the GnuTLS bug was).
Re: (Score:2)
Re: (Score:2)
Exploits are programs developed to take advantage of flaws and vulnerabilities, so most software is not "stuffed" with them.
Non sequitur. Just because an exploit has not yet been developed for a given vulnerability does not mean that the vulnerability does not exist.
Re: (Score:2)
I read that as "potential exploits", but by all means take the point for this one--I stand corrected.
Re: (Score:2)
I can only hope this is a clever attempt at whooshing people. Otherwise...
Re: (Score:2)
nope.
you have to reboot after update for this one. but thanks for playing
Re: (Score:2)
sarcasm? you never have to reboot to restart services.
Re: (Score:2)
That's absolutely not the case. OpenSSL has a stable ABI within each major release, like 0.9.8 or 1.0.1. This is no different from any other library. They have a slightly odd version scheme compared to most other libraries but that's about it.
Re: (Score:3)
Running 12.04 LTS and updated openssl to 1.0.1-4ubuntu5.12
Which contains the patch.
http://changelogs.ubuntu.com/c... [ubuntu.com]
The system is still vulnerable. I'm seeing this reported on Ask Ubuntu as well, and Filippo's Heartbleed checker confirms the site is still vulnerable after the upgrade.
Make sure you restart the service. Any processes launched before installing the patch may still include the old version of the shared library.
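The parent's point can be demonstrated with a plain-shell simulation: a process that opened a file before it was replaced keeps seeing the old contents, just as a daemon that loaded libssl before the upgrade keeps the vulnerable code mapped. (The file name here is a stand-in; on a real Linux box, something like `lsof | grep libssl | grep deleted` helps find daemons still holding the replaced library.)

```shell
# "Daemon" opens the library before the upgrade.
printf 'vulnerable' > libdemo.txt
exec 3< libdemo.txt

# Package manager replaces the file on disk, as dpkg/rpm do: the old
# inode is unlinked, but the open descriptor still points at it.
printf 'patched' > libdemo.tmp
mv libdemo.tmp libdemo.txt

cat <&3          # the open handle still reads: vulnerable
exec 3<&-        # "service restart": close and reopen
cat libdemo.txt  # a fresh open now reads: patched
```

This is why installing 1.0.1g is not enough on its own; every TLS-speaking service has to be restarted (or the box rebooted) before the patch actually takes effect.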