Emergency Software Patches Are on the Rise (nbcnews.com) 43
Emergency software patches, in which users are pushed to immediately update phones and computers because hackers have figured out some novel way to break in, are becoming more common. From a report: Researchers raised the alarm Monday about a big one: The Israeli spyware company NSO Group, which sells programs for governments to remotely take over people's smartphones and computers, had figured out a new way into practically any Apple device by sending a fake GIF through iMessage. The only way to guard against it is to install Apple's emergency software update. Such emergency vulnerabilities are called "zero days" -- a reference to the fact that they're such an urgent vulnerability in a program that software engineers have zero days to write a patch for it. Against a hacker with the right zero day, there is nothing consumers can do other than wait for software updates or ditch devices altogether.
Once considered highly valuable cyberweapons held mostly by elite government hackers, publicly disclosed zero-day exploits are on a sharp rise. Project Zero, a Google team devoted to identifying and cataloging zero days, has tallied 44 this year alone where hackers had likely discovered them before researchers did. That's already a sharp rise from last year, which saw 25. The number has increased every year since 2018. Katie Moussouris, founder and CEO of Luta Security, a company that connects cybersecurity researchers and companies with vulnerabilities, said that the rise in zero days is thanks to the ad hoc way that software is usually programmed, which often treats security as an afterthought. "It was absolutely inevitable," she said. "We've never addressed the root cause of all of these vulnerabilities, which is not building security in from the ground up." But almost paradoxically, the rise in zero days reflects an online world in which certain individuals are more vulnerable, but most are actually safer from hackers.
maybe add more QA time and stop deadlines? (Score:2)
maybe add more QA time and stop deadlines?
It seems like the easier it is to push an update, the less testing you get!
Re: (Score:3)
Or fundamentally change the way we create complex software because things aren't getting simpler. Just like hardware isn't getting simpler.
Re: (Score:1)
Re: (Score:3)
I suppose I'm not seeing how this could map to specifics. Broadly speaking there are classes of historical problems with systematic ways to detect/avoid, but a great deal of security vulnerabilities are design issues of random software X that are just uniquely problematic in a specific context.
Not every piece of code in the world can reasonably have some specific standard to save it. In fact, I would say that an open implementation tends to win over a 'standards body' for making useful, reliable, and secure software.
Re:maybe add more QA time and stop deadlines? (Score:4, Informative)
Security QA is pretty hard. You can have penetration testers come in, but ultimately they will never have as much time spent on it as the general world will spend.
What I generally see are a lot of analysis tools plus some manual auditing, which can be more effective than you might think, but if your product attracts enough attention, it's hard to compete with the malicious community examining every nuance of how your product works. A very popular product can be subjected to decades of man-hours within a few days of release, and there's no way you can budget enough to compete. You can impress upon all your developers the importance of thinking through the unsafe failure modes of deliberately bad input, and do better than an audit; if your team is pretty good you can have a solid outcome, but one slip-up is all it takes.
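As a rough illustration of that "deliberately bad input" point, here is a minimal sketch in C. parse_record() is a hypothetical stand-in for whatever routine handles untrusted bytes; a real project would reach for a coverage-guided fuzzer such as AFL++ or libFuzzer rather than plain random data.

/* Minimal sketch of throwing deliberately bad input at a parser.
 * parse_record() is a hypothetical stand-in for any routine that
 * consumes untrusted bytes. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Toy parser: expects [1-byte length][payload]. */
static int parse_record(const uint8_t *buf, size_t len)
{
    if (len < 1)
        return -1;                 /* too short to contain a header */
    size_t payload = buf[0];
    if (payload > len - 1)
        return -1;                 /* length field lies about the payload */
    char out[256];
    memcpy(out, buf + 1, payload); /* safe: payload <= 255 and checked */
    return (int)payload;
}

int main(void)
{
    srand((unsigned)time(NULL));
    uint8_t buf[512];

    /* Feed thousands of random buffers through the parser; any crash
     * or sanitizer report here is a bug worth fixing before attackers
     * spend their own man-hours finding it. */
    for (int i = 0; i < 10000; i++) {
        size_t len = (size_t)(rand() % (int)sizeof(buf));
        for (size_t j = 0; j < len; j++)
            buf[j] = (uint8_t)(rand() & 0xFF);
        parse_record(buf, len);
    }
    puts("no crashes in 10000 random inputs (which proves very little)");
    return 0;
}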
Deadlines are tricky, as it's also a very competitive landscape. Slow and steady does not, in fact, win the race in business. Look no further than Windows itself, which was a cesspool of security issues for years; the market chose to give Microsoft more money anyway, which ultimately allowed it to largely address the fundamental issues in time, rather than rewarding other, more hardened desktops that proceeded more carefully precisely to avoid the pitfalls Windows suffered from.
Re: (Score:2)
QA is not the answer. No matter what kind of QA, you'll never find all the problems just by "testing". In any large enough piece of software there are probably an order of magnitude more bugs than QA will *ever* discover, no matter how rigorous the process. Short of having the source code analyzed by other highly experienced developers, testing will simply not find even a fraction of the issues.
Instead, what you need are good developers, experienced in avoiding security issues, programming in a language with v
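For what it's worth, a lot of the bugs that a safer language would have caught look like the hypothetical C sketch below: a length field taken straight from untrusted input without a bounds check. The fix is one comparison; in a memory-safe language the broken variant would have raised an error instead of silently corrupting memory. Neither function is from any real product.

/* Sketch of the classic memory-handling bug: the "length" comes straight
 * from untrusted input; the broken variant trusts it, the fixed one checks. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_NAME 64

/* BROKEN: attacker-controlled length can exceed the destination buffer. */
void copy_name_broken(char *dst, const uint8_t *msg, size_t msg_len)
{
    size_t len = msg[0];               /* untrusted length byte */
    memcpy(dst, msg + 1, len);         /* overflow if len > MAX_NAME */
    (void)msg_len;
}

/* FIXED: reject anything that does not fit. */
int copy_name_checked(char *dst, const uint8_t *msg, size_t msg_len)
{
    if (msg_len < 1)
        return -1;
    size_t len = msg[0];
    if (len > MAX_NAME || len > msg_len - 1)
        return -1;                     /* bounds-checked before copying */
    memcpy(dst, msg + 1, len);
    return (int)len;
}

int main(void)
{
    /* Message: [length byte][payload], where the length lies. */
    uint8_t evil[3] = { 200, 'A', 'B' };   /* claims 200 bytes, has 2 */
    char name[MAX_NAME];

    /* copy_name_broken(name, evil, sizeof evil);  <- would overflow */
    if (copy_name_checked(name, evil, sizeof evil) < 0)
        puts("rejected: length field does not match the message");
    return 0;
}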
Re: (Score:3)
I think it is overstating the case that almost all of these are bad memory handling, and that's a dangerous overstatement. It leads to complacency like "If I just write my code in (Java, Go, Rust) I'm all good; it's only the C developers who have to worry".
For example, parsing a feature-rich file format as an interchange strategy. Even in a secure language, you run as a privileged user, the malicious interchange says 'include /etc/shadow', and your secure language will happily do that because that is, in fact, a feature.
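To make that concrete, here is a hedged C sketch of confining a hypothetical "include <path>" feature (the function name and the allowed directory are made up for illustration). Memory safety doesn't help with this class of bug; the defense is a policy check on the resolved path.

/* Sketch of confining a hypothetical "include <path>" interchange feature.
 * The fix is resolving the path and refusing anything outside the allowed tree. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Only allow includes that resolve inside this directory (example value). */
static const char *ALLOWED_ROOT = "/var/lib/myapp/includes";

FILE *open_include(const char *requested)
{
    char resolved[PATH_MAX];

    /* realpath() follows symlinks and collapses "..", so a request like
     * "../../etc/shadow" ends up as an absolute path we can inspect. */
    if (realpath(requested, resolved) == NULL)
        return NULL;

    size_t root_len = strlen(ALLOWED_ROOT);
    if (strncmp(resolved, ALLOWED_ROOT, root_len) != 0 ||
        (resolved[root_len] != '/' && resolved[root_len] != '\0'))
        return NULL;                    /* outside the allowed tree: refuse */

    return fopen(resolved, "r");
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <include-path>\n", argv[0]);
        return 1;
    }
    FILE *f = open_include(argv[1]);
    if (f == NULL) {
        fprintf(stderr, "refused or unreadable: %s\n", argv[1]);
        return 1;
    }
    puts("include accepted");
    fclose(f);
    return 0;
}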
Re: (Score:3)
Re: (Score:2)
Or you decode your GIF in a safe language... or perhaps don't run the GIF decoder with enough privileges to take over the system. GIF is not even that complex a format, but I guess performance has to trump safety... we wouldn't want those GIFs to take longer than 1ms to decode.
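A rough sketch of the "not enough privileges" half of that, in C: if the process starts as root, drop to an unprivileged account before touching attacker-supplied data. The decode_gif() stub below only checks the magic bytes and stands in for a real decoder; real messengers go much further, with sandboxed helper processes and seccomp-style profiles.

/* Sketch of dropping privileges before parsing untrusted image data. */
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void drop_privileges(const char *user)
{
    struct passwd *pw = getpwnam(user);
    if (pw == NULL) {
        fprintf(stderr, "unknown user %s\n", user);
        exit(1);
    }
    /* Order matters: drop the group first, then the user id.
     * A production version would also clear supplementary groups. */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
        perror("failed to drop privileges");
        exit(1);
    }
}

/* Trivial stand-in decoder: just checks the GIF magic bytes. */
static int decode_gif(const char *path)
{
    unsigned char hdr[6];
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;
    size_t n = fread(hdr, 1, sizeof hdr, f);
    fclose(f);
    if (n != 6 || memcmp(hdr, "GIF8", 4) != 0)
        return -1;                       /* not a GIF (or truncated) */
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <image>\n", argv[0]);
        return 1;
    }
    if (geteuid() == 0)
        drop_privileges("nobody");       /* never parse attacker input as root */
    return decode_gif(argv[1]) == 0 ? 0 : 1;
}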
Re: Or you decode your GIF in a safe language (Score:2)
Re: (Score:2)
It was not a GIF file. It was an Adobe Photoshop PSD file saved with a ".gif" extension.
Re: (Score:2)
Also companies, stop letting internal QA people go!! Yes, use external testers. Internal paid QA people are important too!
Re: (Score:3)
Maybe stop "faster is better", and "oh, our competitor put out a new update/version, we have to do it, too", and concentrate on the core, rather than bells and whistles that most people don't use... and *then* you have time for adequate testing.
0 day means something different (Score:4, Insightful)
It's not an estimate of severity, it's an estimate of how long until it's exploited in the wild. 0 Day just means it's already being exploited, it does not necessarily indicate a severe threat.
Re: (Score:1)
Zero day means it could already be exploited, not that it is already being exploited.
And on the subject of the title: security bugs are not up, patches are up.
Re: (Score:1)
Almost everybody gets it wrong. A 0-day does not necessarily have to be actively exploited to be a 0-day. Read the Wikipedia entry; it has some history on the term.
The short of it is that the current meaning of "0-day" is a vulnerability that has no patch available to fix it.
Re: (Score:2)
Zero day does not refer to the first point at which it was exploited, but the first point at which it was known.
A zero day vulnerability is brand new, previously unknown and affecting the current version of the software/hardware affected. It also includes enough information to reproduce the flaw, enabling bad actors to build malware around it.
It's a PDF vulnerability (Score:2, Informative)
Good and Bad (Score:2)
It's bad that software seems to have so many flaws, but it's good that they are being patched quickly, even if it might be disruptive to customers.
Re: (Score:2)
I find that Android is pretty dumb about updates, though. I'll leave the house, look at the phone after it's charged, and it's asking to update at a time when I have an expensive connection. Then when I get home, I forget all about updates, plug it in to charge, it gets a good free connection, and it does nothing.
Stupid. Why not just automatically update overnight when I'm charging the phone? A good candidate task for AI: learning when a good time to update is.
Re: (Score:2)
It's annoying when there are bugs in patches.. :(
I wonder.... (Score:1)
Re: (Score:2)
I doubt there is good quality data.
Open source is a popular target for research, obviously, because researchers can just farm GitHub for all sorts of codebases to analyze from all sorts of random developers. So some teenager with no security practices posts some homework code or some code to manage their game library, and that gets sucked right up into the target set of researchers, right alongside a professionally managed, significant codebase. In this way, there's a mountain of available source code that has ne
Question on Legality (Score:2)
I appreciate that this question is secondary to the presence or absence of vulnerabilities in the software platforms concerned, but why aren’t companies like NSO Group being prosecuted? If this was a lone hacker
Re: (Score:2)
Hacking your own computer isn't a crime.
Hacking someone else's computer (without their permission to do so) is a crime.
As long as the NSO Group is doing its penetration-testing exclusively on devices that they own (or with the explicit consent of the owners of the devices), no crime has been committed.
Re: (Score:2)
Commercial malware vendors avoid prosecution by only selling to people who are authorized to access the data, usually law enforcement and secret services. As such there is no unauthorized access, at least on paper.
Re: (Score:2)
From what little I’ve read about this sort of spyware being detected, the most prevalent groups finding it on phones have been folk like journalists - a group for whom a justifiable case for interception might be very hard to make. OK, I get that protections of the freedoms
Re: (Score:2)
Oh of course they know it gets abused, but in that case they can argue that they sold it in good faith and the contract said it should not be abused, so they did their part.
Sixteen Windows Zero Linux, hooda thunkit (Score:3)
Sixteen out of 45 Windows. Four Android. Zero Linux.
Yeah, I know, Android is Linux. If Android development was actually open we would have that fixed too. Doing our best.
A good share of Apple crapware in there too. It all aligns pretty well with what we've been telling you. Never believe Apple or (especially) Microsoft when they claim they've turned the page. Just more of the same old same old.
Re: (Score:2)
Android is linux-based but it is not linux.
It lacks important components.
Re: (Score:2)
You're full of it. Fire up a shell and take a look around. Compile a Linux program and run it. It's Linux.
Re: (Score:2)
Android is linux-based but it is not linux.
It lacks important components.
That is literally not how this works. If it's got a Linux kernel and a Unix-like userland, then it's Linux. And if you want to, you can use Linux Deploy (with root) to install a FULL Linux distribution userland on your Android system, or use Termux (with or without root) to install enough of the missing pieces to make Android behave like the Linux you know and love.
Frankly though, any Android system with a terminal app installed is immediately recognizable as Linux to anyone who is familiar with it. And some version
Re: (Score:2)
That's only the ones that Google has discovered, though. If you look through the CVE database there are plenty of issues for the Linux kernel that needed patching quickly (fortunately most were responsibly disclosed, so the fix came in the next release cycle), and even more in Linux distros.
Re: (Score:2)
We are talking about the zero days in this thread; please keep up.
For shame, as usual (Score:1)
This article has so much wrong with it that I'm surprised to see this on Slashdot. Or I would be, if this were 20 years ago.
How long before the patches are hacked (Score:2)
Re: (Score:2)
Even the NSA was not able to protect its own data; do we really think Microsoft or Apple are better at it?
In short, yes. The NSA is unable to attract the best and brightest because they do not pay as much as the private sector, and because you have to get a security clearance and pass a drug test. Putting those three factors together disqualifies almost everyone worth hiring, because almost everyone will want more money and/or fail to get clearance and/or wants to use illicit substances.
Has been standard in the open source community (Score:1)
But it is only now starting to happen in the psychopathic thug community, where they only ever give a shit whether you live or die if it affects their bottom line / cocaine stream.
You can never catch them all (Score:2)
A high-level language is developed in a lower-level language; there will be flaws in the millions of lines of code involved.
If someone thinks they can QA everything, good luck with that. Do you think you can simulate an OS failure that impacts your software? Even more fun: a hardware or driver flaw; they happen, and in unexpected ways. Will you emulate that the p
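One concrete (and very hedged) version of "simulate the failure" is fault injection in C: wrap allocation so a test can force it to fail, and make sure the callers survive. test_malloc() and its failure rate are made-up names for illustration; real projects often do this with a malloc interposer or their test framework's failure-injection hooks.

/* Sketch of injecting allocation failures so the "OS said no" paths
 * actually get exercised. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int fail_every_n = 0;   /* 0 = never fail; set >0 in tests */
static int alloc_count = 0;

static void *test_malloc(size_t size)
{
    alloc_count++;
    if (fail_every_n > 0 && (alloc_count % fail_every_n) == 0)
        return NULL;                    /* injected failure */
    return malloc(size);
}

/* The code under test must tolerate NULL instead of crashing. */
static char *duplicate_label(const char *label)
{
    char *copy = test_malloc(strlen(label) + 1);
    if (copy == NULL)
        return NULL;                    /* propagate failure, don't crash */
    strcpy(copy, label);
    return copy;
}

int main(void)
{
    fail_every_n = 3;                   /* every third allocation fails */
    for (int i = 0; i < 10; i++) {
        char *s = duplicate_label("zero-day");
        printf("attempt %d: %s\n", i, s ? s : "(allocation failed, handled)");
        free(s);
    }
    return 0;
}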