OpenSSL Warns of Critical Security Vulnerability With Upcoming Patch (zdnet.com)
An anonymous reader quotes a report from ZDNet: Everyone depends on OpenSSL. You may not know it, but OpenSSL is what makes it possible to use secure Transport Layer Security (TLS) on Linux, Unix, Windows, and many other operating systems. It's also what is used to lock down pretty much every secure communications and networking application and device out there. So we should all be concerned that Mark Cox, a Red Hat Distinguished Software Engineer and the Apache Software Foundation (ASF)'s VP of Security, this week tweeted, "OpenSSL 3.0.7 update to fix Critical CVE out next Tuesday 1300-1700UTC." How bad is "Critical"? According to OpenSSL, an issue of critical severity affects common configurations and is also likely exploitable. It could be abused to disclose server memory contents and potentially reveal user details, and could be easily exploited remotely to compromise server private keys or execute code remotely. In other words, pretty much everything you don't want happening on your production systems.
The last time OpenSSL had a kick in its security teeth like this one was in 2016. That vulnerability could be used to crash and take over systems. Even years after it arrived, security company Check Point estimated it affected over 42% of organizations. This one could be worse. We can only hope it's not as bad as that all-time champion of OpenSSL's security holes, 2014's Heartbleed. [...] There is another little silver lining in this dark cloud. This new hole only affects OpenSSL versions 3.0.0 through 3.0.6. So, older operating systems and devices are likely to avoid these problems. For example, Red Hat Enterprise Linux (RHEL) 8.x and earlier and Ubuntu 20.04 won't be smacked by it. RHEL 9.x and Ubuntu 22.04, however, are a different story. They do use OpenSSL 3.x. [...] But, if you're using anything with OpenSSL 3.x in it -- anything -- get ready to patch on Tuesday. This is likely to be a bad security hole, and exploits will soon follow. You'll want to make your systems safe as soon as possible.
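As a quick illustration of the affected range described above, here is a small Python sketch that classifies an OpenSSL version string; the `is_affected()` helper is a hypothetical illustration for triage scripts, not official OpenSSL tooling:

```python
# Sketch: check whether an OpenSSL version string falls in the
# affected 3.0.0-3.0.6 range. The is_affected() helper is a
# hypothetical illustration, not part of any official tooling.
import re

def is_affected(version: str) -> bool:
    """Return True if `version` is an OpenSSL 3.0.0-3.0.6 release."""
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)", version)
    if not m:
        return False
    major, minor, patch = map(int, m.groups())
    return (major, minor) == (3, 0) and patch <= 6

print(is_affected("3.0.5"))   # → True  (vulnerable 3.0.x release)
print(is_affected("1.1.1q"))  # → False (1.1.1 series not affected)
```

You could feed this the output of `openssl version` on each host to get a rough inventory of what needs patching first.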
This is fundamentally incorrect (Score:5, Informative)
"Everyone depends on OpenSSL. "
Nope. Many, many projects and vendors have migrated to LibreSSL, which was forked from OpenSSL in 2014 because the latter project's codebase was deemed to be a shitshow.
Re: (Score:3)
If LibreSSL is smoother to set up, no surprise that a lot of computer users made the switch.
Re:This is fundamentally incorrect (Score:4, Insightful)
"Everyone depends on OpenSSL. "
Nope. Many, many projects and vendors have migrated to LibreSSL, which was forked from OpenSSL in 2014 because the latter project's codebase was deemed to be a shitshow.
LibreSSL is one of many.
A primary problem with OpenSSL, and with projects that seek to do the same things, is that they do too much, so there's a lot of code and complexity.
Writing good crypto code is hard. One way to make it easier is to reduce the scope and focus on fewer things. For example, performing the major functions of a CA is not really necessary in an SSL library. These things would be better kept separate.
I got to review a few crypto library options with respect to how they sourced random numbers (kind of important for cryptography) and found that two of them got exactly zero entropy because, according to the comments within, the hardware sources of entropy were not trustworthy, and they expected the OS to bail them out with deterministic software written without knowledge of what hardware it would run on. This kind of counterintuitive thought process goes into a lot of open source security software: good intentions, with a generous dose of foot-shooting derived from misplaced mistrust. OpenSSL doesn't seem to suffer that way, but of course it suffers in other ways, which is why it ended up as a Slashdot post today.
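For contrast, the sound pattern is simply to defer entropy gathering to the OS CSPRNG, which already mixes hardware and other sources itself. A minimal Python sketch of that approach (illustrative only, not taken from any of the libraries discussed):

```python
# Sketch: defer entropy gathering to the OS CSPRNG rather than
# second-guessing hardware RNGs in library code. On Linux this
# ultimately reads from getrandom()/urandom; other platforms use
# their OS equivalent.
import secrets

key = secrets.token_bytes(32)   # 256 bits of OS-provided randomness
print(len(key))                 # → 32
print(secrets.token_hex(16))    # 32 hex chars, different every run
```

The point is that the OS is the one component that actually knows what hardware it is running on, so it is the right place to decide how hardware and software entropy sources get combined.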
So we really are screwed, and the world would have been better off with something simpler than X.509, PKI, and all the malarkey that goes along with it. Good luck persuading the IETF to go back now.
Re: (Score:2)
So we really are screwed, and the world would have been better off with something simpler than X.509, PKI, and all the malarkey that goes along with it. Good luck persuading the IETF to go back now.
Since I am ignorant of this area, what would be a better alternative?
Re: (Score:2)
So we really are screwed, and the world would have been better off with something simpler than X.509, PKI, and all the malarkey that goes along with it. Good luck persuading the IETF to go back now.
Since I am ignorant of this area, what would be a better alternative?
I don't have a practical alternative. Maybe getting a time machine and going back to flood various standards bodies with engineers that have a focus on simplification and streamlining the security specs. I introduced simpler, more compact certs into USB, since X.509 certs didn't meet the size requirements or work for the usage models in USB. The forces of X.509 arrived to fight back. So I tried, but we are too entrenched to fix anything.
Re: (Score:2)
Maybe getting a time machine and going back to flood various standards bodies with engineers that have a focus on simplification and streamlining the security specs
There were people on the standards bodies who tried to do this, but we got shouted down by the great mass of professional meeting-goers who just wanted to get their corporation's pet ideas into the standard. In the crypto area, the IETF became ISO around about 2000-2010. At the moment it's actually easier to get a standard through ISO (via their fast-track process) than through the massive bureaucracy and big-corporate phalanx that is the IETF. I've seen standards groups meetings totally flooded with Google advocates to make sure that exactly what Google wants becomes the standard. Not even ISO is that bad, because you have delegates from each country, not delegates from whoever can pack the room most effectively.
Re: (Score:2)
Maybe getting a time machine and going back to flood various standards bodies with engineers that have a focus on simplification and streamlining the security specs
There were people on the standards bodies who tried to do this, but we got shouted down by the great mass of professional meeting-goers who just wanted to get their corporation's pet ideas into the standard. In the crypto area, the IETF became ISO around about 2000-2010. At the moment it's actually easier to get a standard through ISO (via their fast-track process) than through the massive bureaucracy and big-corporate phalanx that is the IETF. I've seen standards groups meetings totally flooded with Google advocates to make sure that exactly what Google wants becomes the standard. Not even ISO is that bad, because you have delegates from each country, not delegates from whoever can pack the room most effectively.
IEEE 802 worked a bit like that, but less bad because you had to maintain attendance to keep voting rights. A company wanting to flood the room would have to spend a lot of money flying people to meetings over a long period and most wouldn't keep that up for long.
ISO (I was a US representative for a while, despite being British) felt a bit handicapped. Most of the crypto people present were their respective governments' spooks, and they did not trust each other one bit and were constrained in bringing t
Re: (Score:1)
He means theoretically a better option could have existed if everyone hadn't been wasting their time on this yarn ball.
Re: (Score:3, Interesting)
And it was guaranteed that there would be another bad security bug in OpenSSL. And there will be another one in the future. They don't know what they are doing.
Re:This is fundamentally incorrect (Score:4, Interesting)
Nope. Many, many projects and vendors have migrated to LibreSSL, which was forked from OpenSSL in 2014 because the latter project's codebase was deemed to be a shitshow.
Bug fixes and features are flowing from OpenSSL to LibreSSL, not the other way around.
OpenSSL has undertaken a major multi-year redesign of its codebase for version 3, while LibreSSL picks away at nits and ports new features from OpenSSL.
The sun rises, ... (Score:3, Interesting)
... the sun sets, the Sun crashes, OpenSSL has another gaping security hole. It is the way of things.
Why is half the Internet still using this neverending vulnerability machine?
Hmm... (Score:1)
could be easily exploited remotely to compromise server private keys
So use hardware keys.
Not on Windows (Score:1)
Announcement timing (Score:1)
Shouldn't everyone have kept their mouth shut until at least Monday, to not give people the whole weekend to come up with and potentially exploit a zero day?
Re:Announcement timing (Score:5, Insightful)
Shouldn't everyone have kept their mouth shut until at least Monday, to not give people the whole weekend to come up with and potentially exploit a zero day?
Have you looked at the OpenSSL code? You don't need advance warning to tell you that there are zero-days within.
Re: (Score:1)
"There's a bug in it!" Doesn't really give a lot of detail to anyone writing zero days. Given the codebase, that is pretty much a given.
Setting up everything to be ready to exploit a bug doesn't really take all that much time, and writing an exploit probably won't either.
The real race will be between patchers and exploiters. I expect something to be in the wild the same day, though probably not widespread enough to detect. Popcorn time.
Grabs the popcorn (Score:5, Funny)
My server doesn't support HTTPS, only HTTP. Joke's on everyone else.
Re:Grabs the popcorn (Score:5, Funny)
I use HTTPS for my server. I'm always afraid I'll lose the private key though, so I push it to my public github account so I have a backup.
Re: (Score:2)
Hmm, it can be tricky, maybe I should just put it here for safe keeping:
-----BEGIN EC PRIVATE KEY-----
MIGkAgEBBDAHfniXdf7zPJUu48LItqKR13DLOYI6b4kFQ1KtfYAS+ny9biLANSkt
pmF680xXosGgBwYFK4EEACKhZANiAARKzZZzFcCT0TisR98BttPBXgKyvEWhj5Am
XathUFHJXN1tuagBlYDRJDVUKkRx3I/BYUPvt1MoGMbA7ECEzl2yEZ1QVVuQc+Rm
r8N/Bn9rb0rf+2kfe17PT2SVHRhmTyM=
-----END EC PRIVATE KEY-----
Be last to upgrade (Score:4, Interesting)
One strategy that has paid off for me in the past in avoiding major OpenSSL problems is to only upgrade to the latest major version of OpenSSL once they stop supporting the previous version.
It has been quite obvious that OpenSSL 3.0 is a massive reorganization of the codebase and would take quite some time to stabilize. It's hardly surprising that major problems are still being found.
My advice to all is to stick with 1.1.1 for production use until the last minute.
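If you want to know which series a given system is actually on, one quick sanity check (a sketch, assuming a CPython build linked against OpenSSL or LibreSSL) is to ask the interpreter which libssl it was built with:

```python
# Sketch: report which OpenSSL (or LibreSSL) this Python build
# links against, to see whether you're on the 1.1.1 or 3.x series.
import ssl

print(ssl.OPENSSL_VERSION)        # e.g. "OpenSSL 1.1.1q  5 Jul 2022"
major = ssl.OPENSSL_VERSION_INFO[0]
print("3.x series" if major >= 3 else "pre-3.0 series")
```

Note this only tells you about the library Python itself links against; other services on the box may be linked against a different copy.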
Re: (Score:1)
Heh, yeah, I second this sentiment. I was once again saved by not upgrading. I had avoided Heartbleed the same way: I even tested a fresh build of the first vulnerable release, and it was just too fast. I had no qualifications to review the actual code's cryptographic validity; I just observed it was 20% faster than the previous release and thought, "Nope, no way it's that much faster and still secure. Someone is gonna find a hole in this big enough to drive a truck through, but it won't be me."
Re: (Score:3)
Even Debian experimental didn't have v3 when I checked last night.