Security Researcher Makes His Point By Hacking Into Zuckerberg's Facebook Page
Eugriped3z writes "Whitehat Palestinian hacker Khalil Shreateh submitted a bug report to Facebook's Whitehat bug reporting page not once, but twice. After the report was ignored the first time and denied outright the second time (even though it included links to an example as proof), he hacked Mark Zuckerberg's personal timeline, leaving both an explanation and an apology. From the article: 'In less than a minute, Shreateh's Facebook account was suspended and he was contacted by a Facebook security engineer requesting all the details of the exploit. "Unfortunately your report to our Whitehat system did not have enough technical information for us to take action on it," the engineer wrote in an email. "We cannot respond to reports which do not contain enough detail to allow us to reproduce an issue." Facebook has a policy that it will pay a minimum $500 bounty for any security flaws that a hacker finds. However, the company has refused to pay Shreateh for discovering the vulnerability because his actions violated Facebook's Terms of Service.'"
Guilty of being Palestinian (Score:1, Interesting)
How much do you want to bet it's because they don't want to be seen giving money to someone in Palestine?
Not worth it (Score:5, Interesting)
Facebook has a policy that it will pay a minimum $500 bounty for any security flaws that a hacker finds.
That's absolutely not worth the money. He's better off taking the publicity he got from this and turning it into a high-paying job.
Re:Won't pay? (Score:4, Interesting)
No they aren't, because *finding* a security flaw is not the same thing as illegally *exploiting* a security flaw. If you need a proof of concept, you can hack your own account.
Re:Take it public (Score:5, Interesting)
Bzzt. If Facebook's logging weren't broken, that should be all they need. The existence of the post itself, having been posted to a wall where he should not have been allowed to post, should have been enough to determine trivially that the bug was real. Further, the post's database record should contain the posting IP address and the ID of the server that handled the request. From there, they should have been able to look at the server's request logs to determine precisely how the attack happened (assuming the researcher was using a structurally valid URL in the request, as opposed to exploiting a null character handling bug in the web server itself).
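To illustrate what "trivial to determine" means here, consider a minimal sketch of the correlation described above. The schema, table names, and request URL are all invented for illustration; this is just the shape of the lookup, not anything resembling Facebook's actual internals.

```python
import sqlite3

# Hypothetical minimal schema -- every table and column name here is
# illustrative, standing in for whatever a real service would store.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (
    post_id    INTEGER PRIMARY KEY,
    wall_owner TEXT,
    author     TEXT,
    source_ip  TEXT,   -- IP address the post came from
    server_id  TEXT,   -- which server handled the request
    posted_at  TEXT
);
CREATE TABLE request_log (
    server_id   TEXT,
    client_ip   TEXT,
    request_url TEXT,
    logged_at   TEXT
);
""")

# The suspicious post: an author writing on a wall they should not reach.
conn.execute("INSERT INTO posts VALUES (1, 'zuck', 'researcher', "
             "'203.0.113.7', 'web-42', '2013-08-18T10:00:00')")
# The matching entry in that server's request log (URL is made up).
conn.execute("INSERT INTO request_log VALUES ('web-42', '203.0.113.7', "
             "'/ajax/poststatus?profile_id=4', '2013-08-18T10:00:00')")

# Join the post's recorded IP and server ID against that server's request
# log to recover the exact request that produced the post.
rows = conn.execute("""
    SELECT p.post_id, r.request_url
    FROM posts p
    JOIN request_log r
      ON r.server_id = p.server_id
     AND r.client_ip = p.source_ip
    WHERE p.wall_owner = 'zuck' AND p.author = 'researcher'
""").fetchall()
print(rows)
```

That one join is the whole investigation: the post record tells you where to look, and the request log tells you how the attack was performed.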
But even if they looked at the logs and couldn't figure out what happened, IMO, it is still completely unacceptable to just close a bug like this. It's one of those bugs that, if real, is borderline catastrophic in scope. You do not close a bug like that as "cannot reproduce". You contact the originator and say, "Hey, can we get more information about this? We need to try to reproduce the problem."
It's sad that it takes somebody posting on the CEO's Facebook page to get the attention of Facebook's security staff. This means one of two things: they are grossly mismanaged or are woefully understaffed—probably the latter, IMO. Either way, it tells me that Facebook does not take security seriously enough. If bug screeners do not have time to properly follow up on bugs that are this severe, then they need to double or even triple the number of screeners.
Also, this brings into serious question the way that Facebook screens bugs in the first place. Where I work, a bug like this would have been tagged as a security bug the moment it came in. This causes additional people to review the bug, significantly reducing the likelihood of a serious mistake. Closing the bug without asking for more information strongly suggests that a single, hopelessly overworked individual made a mistake, and that the company as a whole failed to have proper processes in place to ensure additional review that would otherwise have caught that mistake quickly and followed up with the original reporter. Not good. Not good at all.
And as long as I'm criticizing Facebook's security practices, IMO, a service like this should have several publicly visible, official security testing accounts for precisely this purpose, with varying restrictions on their posts, photos, and other content, so that security researchers can properly hammer on the site's security. For example, there should be an official test account that looks an awful lot like Mark Zuckerberg's account. If a researcher is able to post on the wall of that account, there can be no doubt whatsoever that a bug exists. Likewise, there should be more complex accounts with various security settings, complete with a published list of their content and the expected access behavior (e.g. you should not be able to read the image entitled "nude_selfie_for_my_boyfriend.jpg").
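The key piece of that proposal is the published list of expected behaviors, which turns any successful violation into an automatically credible bug report. A sketch, with entirely made-up account names, actions, and settings:

```python
# Published security-test fixture: which actions should succeed against
# which official test accounts. All names here are invented examples.
TEST_ACCOUNTS = {
    "sec-test-ceo": {  # mimics a high-profile, locked-down profile
        "wall_post_by_stranger": "deny",
        "photo_view_by_stranger": "deny",
    },
    "sec-test-friends-only": {
        "wall_post_by_stranger": "deny",
        "wall_post_by_friend": "allow",
    },
}

def expected_outcome(account: str, action: str) -> str:
    """Return the documented expected result for an action against a
    test account. Anything not explicitly listed defaults to deny, so
    researchers always know what a violation looks like."""
    return TEST_ACCOUNTS[account].get(action, "deny")

# A researcher who performs an action the published table says should
# be denied -- and succeeds -- has proven the bug beyond argument.
print(expected_outcome("sec-test-ceo", "wall_post_by_stranger"))
```

The default-deny fallback matters: if the table doesn't explicitly allow an action, succeeding at it is evidence of a bug, with no judgment call required from a bug screener.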
In short, I suspect there's plenty of blame to go around for this error. What matters is not who gets blamed, but rather how Facebook fixes their processes to ensure that such mistakes do not get made in the future. And I would emphasize that this does not involve firing anyone. People make mistakes. That's why processes are supposed to be designed to mitigate those mistakes. A company like Facebook is big enough that they should know this. If they don't, then perhaps this object lesson will get their attention and cause them to change their ways. If not, it's time to run, not walk, to a competing service.
Either way, what the researcher did was IMO wholly appropriate. He initially performed the smallest attack that could potentially have proven that there was a flaw. When the first report was casually dismissed, he then escalated the attack by posting on Zuckerberg's own timeline, making the flaw impossible to ignore.
Re:Take it public (Score:5, Interesting)
Even if you're right, and 99% are bogus, there's no excuse for having a process where you choose "Not a bug" instead of "Need more information" with a request for steps to reproduce. That should be drilled into employees as the only valid response until they are relatively certain that the problem was user error. This culling was premature; you must assume that the bug *might* need investigation until it is clear that it does not. Anything less is negligence.
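The triage rule being argued for here is simple enough to write down. A minimal sketch (status names and inputs are invented for illustration):

```python
# The triage rule: "not a bug" is only a valid disposition once user
# error has been positively established. A thin report gets a request
# for more information, never an outright rejection.
NEEDS_INFO = "need-more-info"
NOT_A_BUG = "not-a-bug"
INVESTIGATE = "investigate"

def triage(has_repro_steps: bool, user_error_confirmed: bool) -> str:
    if user_error_confirmed:
        return NOT_A_BUG      # the only path to outright rejection
    if not has_repro_steps:
        return NEEDS_INFO     # ask the reporter, don't close the report
    return INVESTIGATE        # plausible report with steps: look at it

# Shreateh's report: no reproduction steps Facebook could follow, and
# no established user error -- the correct response was a request for
# details, not "not a bug."
print(triage(has_repro_steps=False, user_error_confirmed=False))
```

Even if 99% of incoming reports really are bogus, this rule costs almost nothing: the bogus ones die quietly when the reporter never responds, while the real ones survive the first pass.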
But the bigger problem is that there's no good way for Facebook to be certain that it wasn't user error unless the account is known (by Facebook) to have settings that should have prevented posting. That's what makes the CEO's page an obvious choice. IMO, there's also no excuse for a company the size of Facebook to not provide an account that is preconfigured to not allow posts so that if a researcher successfully posts on it, the subsequent security bug report has automatic credibility (and, hopefully, additional logging by Facebook's servers, immediate reaction from their security response team, etc.). Perhaps call the test account Zark Muckerberg.