Are Bug Bounties the Right Solution For Improving Security?
saccade.com writes: Coding Horror's Jeff Atwood questions whether the current practice of paying researchers bounties for the software vulnerabilities they find is really improving overall security. He notes how the Heartbleed bug serves as a counterexample to "Linus's Law" that "given enough eyeballs, all bugs are shallow." "...If you want to find bugs in your code, in your website, in your app, you do it the old-fashioned way: by paying for them. You buy the eyeballs.
While I applaud any effort to make things more secure, and I completely agree that security is a battle we should be fighting on multiple fronts, both commercial and non-commercial, I am uneasy about some aspects of paying for bugs becoming the new normal. What are we incentivizing, exactly?"
You buy eyeballs and loyalty. (Score:5, Insightful)
The NSA is buying security holes [techdirt.com] to use against us. This is part of what Snowden's leaks revealed.
Offering a bounty, even though it is not as much as the security hole could fetch on the grey market, creates a certain loyalty toward the vendor and makes it easier to go to them and ensure the hole gets patched. It also attracts more eyeballs to your software, since finding a problem means money. Google has gone even further by offering grants [google.no] for research into specific products, where you get paid for checking the security of the software, not just for finding security problems.
So I believe it is a good thing; it probably means more holes will be reported directly to the vendor, and not sold for exploit. It probably attracts eyeballs as well...
The problem is (Score:4, Funny)
It pays better to exploit the bugs...
Re: (Score:3)
Depends on whether you're able to actually monetize it. Exploiting a bug doesn't make money come out of your computer.
Re: (Score:2)
It pays better to exploit the bugs...
Even if a blackhat secretly monetizes a bug, a whitehat can still officially grab the bounty for the same bug.
Enough eyeballs and heartbleed ... (Score:5, Interesting)
... He notes how the Heartbleed bug serves as a counter example to "Linus's Law" that "Given enough eyeballs, all bugs are shallow."...
I think the big issue with the Heartbleed bug was that the OpenSSL code base was so egregiously poorly written and maintained that eyeballs started bleeding whenever they looked at it. imo, the OpenSSL code base never had enough eyeballs looking at it to make its bugs shallow. It was painful to look at, so eyeballs avoided looking at it.
I still think Linus's Law holds true, or is at least a very good guideline. I think exceptions like the OpenSSL code base are needed to hone the point that Linus's Law makes.
I also take issue with the headline, as I do not think there is any one right solution for improving security. The improvement of security is a multi-faceted endeavour and an ongoing process.
Re:Enough eyeballs and heartbleed ... (Score:5, Insightful)
I think the only thing the OpenSSL bug shows is how flimsy the underlying framework of the internet is. Most of the shit we all use, trust and take for granted was coded in someone's basement over the weekend a long time ago. All it takes is one clever guy to take a good look at the code to exploit it, and it's probably fair to say he'll be the only one ever to review the code...
Re: (Score:1)
I couldn't agree more. Just last month we learned that NTP is maintained by a single person who does it on his own time and dime (and who could stop any time to look for a paying job). I opened the changelog and there is a lot of ongoing activity, so it's not like the person is just answering the occasional email.
While I am very much into open source, and I don't blame it for this, I do believe that there is a tendency associated with it to take certain things for granted. That makes some of the biggest
Re: (Score:2)
... Just last month we learned that NTP...
NTP.org is its own problem. Even when there was more than a single person maintaining it, the development team looked less than favorably upon code-improvement suggestions from the community.
... I do believe that there is a tendency associated with [open source] to take certain things for granted....
You've hit on the main problem. It is not open source, per se, as you imply in your message. It is community involvement.
Where you have a community that is involved and stays involved, bugs are shallow. When you have a community, such as NTP.org, where suggestions are pushed away, bugs become very deep.
Re: (Score:2)
...Most of the shit we all use, trust and take for granted was coded in someone's basement over the weekend a long time ago. ...
... and the code written in nice air-conditioned offices in Redmond, Washington has shown itself to be so much more secure over the years ....
Re:Enough eyeballs and heartbleed ... (Score:4, Interesting)
I think the big issue with the Heartbleed bug was that the OpenSSL code base was so egregiously poorly written and maintained that eyeballs started bleeding whenever they looked at it. imo, the OpenSSL code base never had enough eyeballs looking at it to make its bugs shallow. It was painful to look at, so eyeballs avoided looking at it.
I agree. Heartbleed is not a counterexample; it is simply evidence that the original "Linus's Law" was incomplete. A better version of it would be
Given enough eyeball hours, all bugs are shallow
With the definition of "enough" being dependent on the complexity of the code in question.
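For context, the bug this subthread keeps circling back to boils down to trusting a client-supplied length field. A minimal sketch of the pattern, in illustrative Python rather than OpenSSL's actual C (all names and the "memory" layout are made up for the example):

```python
# Illustrative sketch of the Heartbleed-style flaw: the server echoes back
# as many bytes as the client *claims* it sent, not as many as it actually
# sent, leaking whatever sits next to the payload in memory.

def heartbeat_vulnerable(memory: bytes, payload_offset: int, claimed_len: int) -> bytes:
    # BUG: no check that claimed_len matches the real payload size.
    return memory[payload_offset:payload_offset + claimed_len]

def heartbeat_fixed(memory: bytes, payload_offset: int,
                    claimed_len: int, actual_len: int) -> bytes:
    # The fix: reject requests whose claimed length exceeds what arrived.
    if claimed_len > actual_len:
        raise ValueError("claimed payload length exceeds actual payload")
    return memory[payload_offset:payload_offset + claimed_len]

# "Process memory": a 5-byte payload followed by something secret.
memory = b"hello" + b"SECRET_KEY"
leaked = heartbeat_vulnerable(memory, 0, 15)  # client lies: claims 15 bytes
assert leaked == b"helloSECRET_KEY"           # the secret leaks alongside the payload
```

The flaw is trivial to describe once found (shallow in that sense), yet it sat in a widely deployed code base for years, which is the "eyeball hours" point the parent is making.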
Re: (Score:3)
A better version of Linus' Law would be the original one.
Re: (Score:2)
8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
Seems to work for Firefox. Every time there's an N.0.0 release, there's an N.0.1 release in less than a week - every... freaking... time. I wish they'd focus on getting things done correctly rather than quickly, instead of churning out new major version numbers.
Re: (Score:3)
I think the big issue with the Heartbleed bug was that the OpenSSL code base was so egregiously poorly written and maintained that eyeballs started bleeding whenever they looked at it. imo, the OpenSSL code base never had enough eyeballs looking at it to make its bugs shallow. It was painful to look at, so eyeballs avoided looking at it.
That's really just speculation.
So let's all ask ourselves this question: how many times do we personally browse open source code, looking for vulnerabilities or other bugs?
Let me guess the answer: I mostly run precompiled binaries, and might occasionally take a look at a particular small piece of code to solve a specific problem (which I came across by running the binary).
I suggest it is likely that most OSS projects are like OpenSSL: only the core developers ever look at the codebase.
My so
Re: (Score:3)
...So let's everyone ask ourselves this question: how many times do we personally browse open source code...
"Everyone" does not need to do it. You've set up a premise that fails on face value.
Re: (Score:2)
Ok, good counterargument, but I still suspect that the number is extremely low outside the main developer team.
The romantic vision of hackers around the world sitting comfortably next to a fireplace with a ThinkPad and browsing source code in the evening is just a fantasy...
Re: (Score:2)
A shallow bug is one that can be fixed, or at least understood and described, quickly, easily, or simply.
That doesn't mean the bugs will be found, it characterizes what happens after they are found.
I don't believe Linus' Law has anything to do with the number of bugs *found*, rather bugs *fixed*.
It is the open source community that says more bugs will be found because anyone can read the source - but then no one reads the source. And then people (mis)understand Linus's Law to somehow mean that all bugs
Re: (Score:2)
He notes that falsely, since Linus's Law has absolutely nothing to do with it. He seems to think Linus said: "If enough people work on a project, there won't be any bugs." Linus's Law refers to the ability to track down, understand, and fix a bug once it has been discovered.
And my first thought when I read the title was in line with yours. What a stupid question. Just as with secu
Re: (Score:2)
I'm a bit worried, though. What's the safeguard against a software engineer introducing a defect, getting someone else to report it, and splitting the bounty?
Or, even for old code, it may tempt someone to share proprietary code with someone outside the company, in order to find bugs and share the bounty.
Re: (Score:2)
I'll admit, whenever stuff like this comes up, I always think of this [dilbert.com]...
the 'Right Solution'? (Score:2)
Re: (Score:1)
-1 redundant
That's from the fucking article. You're not helping.
What exactly is the problem? (Score:2)
Re: (Score:2)
The problem is, Jeff is uncomfortable with the idea. That's the whole foundation of the linked article. But there is this point:
Not all reports of security issues will be real issues, and if you offer bounties some people will be looking for an easy payout.
Most of the article is useless junk:
Re: (Score:2)
It may be an effective component of your total bug strategy, but it should be last on the list. The primary effort should be oriented toward not releasing the bugs in the first place.
Let's say I create an adversarial system in my company. I pay developers a base salary plus an at-risk bonus for delivering software to QA by the deadline. If they deliver before the deadline, the at-risk bonus increases.
QA has base salary plus can earn that at-risk bonus by finding the bugs between when it's delivered to them for
Posting them on the internet... (Score:2)
... is the best solution. Nothing besides liability gets them to fix the bug. The fear of lawsuits, embarrassment, and so on is what gets them to take it seriously. Nothing else.
MS had bugs that they knew about for a decade and didn't patch.
You jump right to setting their nuts on fire.
start at the root, we must have securable hardware (Score:4, Insightful)
But recent developments have made clear that securable hardware is a sine qua non. All firmware must be in socketed memory, so that you can take it out and check it externally; you can't trust an untrusted system to check itself. All firmware must be protectable with a hardware read-only jumper or switch.
I know this is inconvenient and a revolution in how hardware is currently made, but if people started demanding it en masse, it would not cost very much. And I mean the firmware in disk drives and optical media players, and especially in routers.
There may be other requirements.
This is a sine qua non. Without it, we have nothing.
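The "take it out and externally check it" step described above amounts to dumping the firmware on a trusted machine and comparing it against a known-good digest, so the device is never asked to attest to itself. A minimal sketch, with a hypothetical image and digest (the blob and filenames here are invented for illustration):

```python
import hashlib

def firmware_matches(image: bytes, known_good_sha256: str) -> bool:
    # Hash the dumped image on a trusted machine and compare it against a
    # vendor-published digest; the possibly-compromised device plays no part.
    return hashlib.sha256(image).hexdigest() == known_good_sha256

# Hypothetical dumped image and its expected digest.
image = b"\x00firmware-blob\xff"
good = hashlib.sha256(image).hexdigest()

assert firmware_matches(image, good)
assert not firmware_matches(image + b"\x90", good)  # any tampering changes the hash
```

This only works if the firmware really can be removed and read independently, which is exactly the parent's argument for socketed memory.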
Re: (Score:2)
Of course it's possible. My company does it all the time. There are several effective methods, the simplest being to put a jumper on the write-enable pin of the device that holds the firmware and remove that jumper before you ship.
Re: (Score:3)
What I personally think is really scary is that a lot of devices in our PCs are ready to accept new firmware at any moment. There usually are no safeguards that I can enable to prevent malicious code being injected to core components like BIOS, CPU microcode, HDD, DVD...
Now, in general, hardware security is a tricky concept, because currently the hardware layer is simply fully trusted.
Re: (Score:2)
Back in the old days, you'd generally have to either replace the physical chip or at least move a jumper in order to write to firmware.
The current status quo is simply about lowering support costs: you don't have to send out a tech to update someone's firmware; you can just have them download a file and run it. Companies can also save money on QA, because mistakes can be fixed later. To make matters worse, the vast majority of consumers don't understand the problem.
I agree with you 100%, but this is money we're
Re: (Score:2)
Adding a write-protect jumper only costs a few cents.
And if you want to keep the convenience of downloadable upgrades, don't install the jumper.
Sure, only a few percent of us would buy a motherboard because its BIOS has the option of a write-protect jumper, but that's still a few percent more sales.
Plus it's a marketable difference - if you've got it and your competitors don't, then you can use scare tactics;
"Unlike our competitors, we care about your computer's health, that's why all our motherboards h
Re: (Score:2)
Adding a write-protect jumper only costs a few cents.
1 cent times 1 billion devices is $10 million in "lost profit".
Re: (Score:2)
And 1 percent fewer sales * 1 billion devices * $100 a board is $1 billion in "lost profit".
If the jumper costs 0.1% of the profit on the device, then it only needs to improve sales by 0.1%.
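The figures traded in this subthread check out arithmetically; a quick back-of-the-envelope calculation using the posters' own assumed numbers (not real industry data):

```python
# Back-of-the-envelope check of this subthread's figures. All inputs are
# the posters' assumptions: 1 billion devices, a 1-cent jumper, $100 boards.
devices = 1_000_000_000
jumper_cost = 0.01                     # $0.01 per write-protect jumper

total_jumper_cost = devices * jumper_cost
assert total_jumper_cost == 10_000_000          # the "$10 million" figure

board_price = 100.0
lost_sales = 0.01 * devices                     # 1% fewer sales
assert lost_sales * board_price == 1_000_000_000  # the "$1 billion" figure
```

So on these assumptions the cost of adding the jumper is two orders of magnitude smaller than the claimed value of a 1% swing in sales, which is the reply's point.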
back when you needed to burn EPROMs, code was better (Score:2)
Back when you needed to burn EPROMs, code was better at release. There were still updates, but the base code was solid. Even older DOS/Windows games were mostly finished at release, with some patches. Nowadays things are buggier, and software ships with features that will be added later through updates that are easy to install.
Someone will buy for the information (Score:2)
It's either someone who will fix the bug or someone who plans to exploit it.
it scales (Score:3)
What about better QA, and don't have the devs do testing (Score:2)
What about better QA? And no, don't have the devs do the testing.
Also, QA needs full access to what they are testing. They need to be able to do the things end users can do, as well as manually set up the system/data in different ways: not only to make it easier and faster to test certain modes, but also to set up unusual modes and settings.
QA needs to be able to think outside the box, and it should not be a side job tacked onto someone's other job.
The downside... (Score:4, Funny)