Vulnerabilities in Popular Open Source Projects Doubled in 2019 (zdnet.com)
A study that analyzed the top 54 open source projects found that security vulnerabilities in these tools doubled in 2019, going from 421 bugs reported in 2018 to 968 last year. From a report: According to RiskSense's "The Dark Reality of Open Source" report, released today, the company found 2,694 bugs reported in popular open source projects between 2015 and March 2020. The report didn't include projects like Linux, WordPress, Drupal, and other super-popular free tools, since these projects are closely monitored, their security bugs make the news, and most of these issues get patched fairly quickly as a result. Instead, RiskSense looked at other popular open source projects that aren't as well known but are broadly adopted by the tech and software community. This included tools like Jenkins, MongoDB, Elasticsearch, Chef, GitLab, Spark, Puppet, and others. RiskSense says that one of the main problems it found during the study was that a large number of the security bugs it analyzed had been reported to the National Vulnerability Database (NVD) many weeks after they had been publicly disclosed. The company said it took an average of around 54 days for bugs found in these 54 projects to be reported to the NVD, with PostgreSQL seeing reporting delays of up to eight months.
Nvd (Score:1)
Re:Nvd (Score:5, Insightful)
The headline is wrong, vulnerabilities didn't increase, our ability to detect vulnerabilities increased.
Re: (Score:1)
+1 Insightful. And 100% correct. I always said you were the most insightful person here.
Re: (Score:3, Interesting)
The headline is wrong, vulnerabilities didn't increase, our ability to detect vulnerabilities increased.
Could be. Or there could genuinely be more vulnerabilities, with complexity increasing, the attack surface getting correspondingly larger, and malware authors shifting to new targets.
Hard to tell which: Insufficient data.
No, the reports doubled (Score:5, Insightful)
Re: (Score:2)
This.
This visibility into vulnerabilities is evidence that the "many eyes can read the source code" tenet of open source holds true.
Closed source code is just as vulnerable, but its vulnerabilities aren't reported until they're already being exploited by malware.
Re: (Score:2)
Further, reports don't necessarily mean problems. "Security researchers" have lately gotten press for reported issues that amount to disagreeing with a design decision, or that are outright wrong.
For example, someone claimed Nintendo was checking passwords as they were typed before being submitted, as evidenced by the button lighting up when they typed the first 8 characters of their password. Failing to check that entering a *wrong* password also made the button activate when the password could *theoretically
I know what we can do... (Score:5, Funny)
Re: (Score:2)
Just add "no more buggy code" to the Code of Conduct. That should do it.
Or take the Microsoft approach: "All code is equal. Features should not be labelled as buggy just because they don't fit the programmer's idealized functionality."
Just like COVID-19 (Score:1)
If you don't test or look for it, you don't find anything. Finding bugs is what matters, just as testing is.
Number of Open Source projects more than doubled? (Score:2)
The Dark Reality of Closed Source (Score:5, Insightful)
Is that you have no idea how many bugs there are, and even if you did, no one could do anything about them except the vendor, who may be uninterested or long gone.
All software has bugs. That's not what's different about open source.
Re: (Score:2)
Quite. This report is meaningless without a comparison of similar closed source projects/products.
The other problem with this is that it just counts "security vulnerabilities" without any attempt to dig deeper. Not all security problems are the same: how easy are they to exploit, what is the potential impact, etc.? A remotely exploitable root shell bug needs to be fixed much more quickly than, say, a local exploit that just lets you see a list of file names.
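To make the point above concrete, here is a minimal, hypothetical triage sketch (not from the report, and the categories are my own simplification of CVSS-style scoring): ranking findings by attack vector and impact rather than treating every "vulnerability" as equal.

```rust
// Hypothetical triage sketch: not every "security vulnerability" carries
// the same risk, so rank by attack vector and impact before panicking.
#[derive(Debug, PartialEq)]
enum Vector {
    Network, // remotely reachable
    Local,   // requires an account on the box
}

/// Lower number = fix sooner. A crude stand-in for a real CVSS score.
fn priority(vector: Vector, gives_root: bool) -> u8 {
    match (vector, gives_root) {
        (Vector::Network, true) => 1,  // remote root shell: drop everything
        (Vector::Network, false) => 2, // remote, limited impact
        (Vector::Local, true) => 3,    // local privilege escalation
        (Vector::Local, false) => 4,   // e.g. local file-name disclosure
    }
}

fn main() {
    // A remote root shell outranks a local info leak by a wide margin.
    assert!(priority(Vector::Network, true) < priority(Vector::Local, false));
    println!("remote root priority: {}", priority(Vector::Network, true));
}
```

A study that bucketed its 2,694 bugs even this coarsely would say far more than a raw count does.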
Snake oil (Score:2)
How many of these reports "have no exploits" or require a chain of other preconditions (physical access, root-level privileges)?
Having read and dealt with some of these findings, I get the sense that some of them are more about $$$ and have nothing to do with actual vulnerability.
Re: (Score:2)
I have had arguments with one "security researcher" who opened an issue because a local root/administrator could update the firmware. The firmware was signed and the signing was correctly enforced, but the hardware platform didn't require any additional password to update firmware beyond having root/administrator on the box.
These opinions do more harm than good to the security community. After being inundated with issues like the above across the industry, my default reaction to a new security finding in the
Should detail languages of implementation (Score:2)
Open source projects are not all made the same way. One crucial difference between them is their respective languages of implementation, a detail that can be presumed to be highly relevant yet does not seem to have been studied or recognized in the report.
Some programming languages are safer than others by design, and in principle this should translate into application bugs being less easily exploitable than in applications written using less safe languages. This deserves to be studied and
Oxidize it! (Score:1)
Probably need to work on that 70% of CVEs that are due to memory and concurrency errors.
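As a minimal illustration of the memory-safety point (my own example, not from the report): in C, reading past the end of a buffer is undefined behavior and a classic CVE source, while Rust's slice API surfaces the out-of-bounds case as an explicit `Option`.

```rust
// Sketch: Rust turns a classic C buffer over-read (undefined behavior,
// a common CVE class) into an explicit, checkable Option.
fn read_byte(buf: &[u8], i: usize) -> Option<u8> {
    // .get() returns None instead of reading past the end of the slice
    buf.get(i).copied()
}

fn main() {
    let buf = [1u8, 2, 3];
    assert_eq!(read_byte(&buf, 1), Some(2)); // in bounds: the value
    assert_eq!(read_byte(&buf, 10), None);   // out of bounds: caught, not read
    println!("bounds checks held");
}
```

Indexing with `buf[10]` would still panic rather than silently read garbage, which is the design choice the "oxidize it" crowd is pointing at.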
From "a report" in "a study"... (Score:1)
...makes believers everywhere...
Who needs professionals?
The Point (Score:1)