Web 2.0 Under Siege
Robert writes "Security researchers have found what they say is an entirely new kind of web-based attack, and it only targets the Ajax applications so beloved of the 'Web 2.0' movement. Fortify Software, which said it discovered the new class of vulnerability and has named it 'JavaScript hijacking', said that almost all the major Ajax toolkits have been found vulnerable. 'JavaScript Hijacking allows an unauthorized attacker to read sensitive data from a vulnerable application using a technique similar to the one commonly used to create mashups.'"
Okay, I'll be the first to ask. (Score:5, Insightful)
"In an example attack, a victim who has already authenticated themselves to an Ajax application, and has the login cookie in their browser, is persuaded to visit the attacker's web site. This web site contains JavaScript code that makes calls to the Ajax app. Data received from the app is sent to the attacker."
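The mechanism behind that quote can be sketched in a few lines. This is an illustration, not code from the article: the names (HijackedArray, victim.example) are hypothetical, and the constructor-override trick only worked in 2007-era browsers, where a bare JSON array literal in an evaluated response invoked the redefinable global Array constructor. Modern engines no longer do that for literals, so this demo calls the constructor explicitly to show the capture step.

```javascript
// Attacker's hook: a replacement Array constructor that records
// every element it is handed before behaving like a normal array.
const captured = [];

function HijackedArray(...items) {
  captured.push(...items); // leak each value as the "JSON" is built
  return items;            // returning an object makes `new` yield it
}

// Stand-in for: <script src="https://victim.example/secret.json"></script>
// The browser attaches the victim's session cookie to that cross-site
// request, and the authenticated response body -- say ["alice", "s3cret"] --
// executes as JavaScript inside the attacker's page.
const response = new HijackedArray("alice", "s3cret");

// captured now holds ["alice", "s3cret"]; the real attack would then
// exfiltrate it, e.g. via an image request back to the attacker's server.
```

The key point is that the `<script>` tag is exempt from the same-origin policy, so the attacker never intercepts traffic; the victim's own browser fetches the data and hands it over.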
Re:Okay, I'll be the first to ask. (Score:3, Insightful)
Is that title sarcastic? (Score:4, Insightful)
This is a vulnerability that appears only when passing JavaScript between client and server. An attacker has to get a potential victim, who is logged in to a site that uses the JSON format to exchange data via AJAX, to visit a page they've set up. The attacker's page can then read the data the server returns, because the victim's own browser makes the request with their session cookie attached.
So it's a known method of attack, but because it's aimed at web sites using AJAX it has to be labelled 'Web 2.0'. Ugh.
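Known or not, the standard mitigation is simple. The sketch below is mine, not from the article; the `while(1);` prefix and the function names are illustrative conventions (Google used a similar guard at the time). The server prepends an unparseable prefix to every JSON response, so a cross-site `<script>` include hangs or throws instead of yielding data, while the legitimate same-origin client strips the prefix before parsing:

```javascript
// Guard prefix: valid JavaScript that never terminates, so a hijacked
// <script src="..."> include can never reach the data behind it.
const PREFIX = "while(1);";

// Server side: wrap the payload before sending it over the wire.
function wrapJson(payload) {
  return PREFIX + JSON.stringify(payload);
}

// Client side (same-origin XMLHttpRequest): strip the guard, then parse.
function parseGuardedJson(body) {
  if (!body.startsWith(PREFIX)) throw new Error("missing guard prefix");
  return JSON.parse(body.slice(PREFIX.length));
}

const wire = wrapJson({ user: "alice", balance: 42 });
console.log(parseGuardedJson(wire).user); // prints "alice"
```

An equally effective fix is requiring a per-session token on every AJAX request, which the attacker's page cannot read and therefore cannot supply.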
Re:They discovered this? (Score:1, Insightful)
sigh (Score:5, Insightful)
I still maintain that the collective blindness to these security issues comes from our absolute refusal to see HTTP requests as function calls. This is partly due to the silly ideology of the REST crowd.
Rephrase the situation as follows and see if this doesn't make you pee your pants: "Any site can instruct your browser to execute an arbitrary function on another site using your authentication credentials."
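To make the function-call framing concrete: any authenticated GET is, in effect, a remote procedure call whose arguments live in the URL. The endpoint below (bank.example, /transfer) is entirely hypothetical, a sketch of the framing rather than any real site's API:

```javascript
// An attacker's page only needs to embed this URL in an <img> or
// <script> tag; the browser "calls" it with the victim's cookies.
const url = new URL("https://bank.example/transfer?to=mallory&amount=1000");

// Read the URL the way the parent comment suggests: path = function
// name, query string = named arguments.
const call = {
  fn: url.pathname.slice(1),                    // "transfer"
  args: Object.fromEntries(url.searchParams),   // { to, amount }
};

console.log(`${call.fn}(${call.args.to}, ${call.args.amount})`);
```

Seen this way, "visiting a web page" means running someone else's argument list against every site you're logged in to, which is exactly why state-changing operations should never be reachable by plain GET.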
Re:Leaves web to trusted sites only (Score:2, Insightful)
All very well and good until one of those ten gets infected by something nasty. I seem to recall an article recently where a big site like CNN got hit by a worm and was actually serving up infected pages for 48 hours or so until it was discovered and cleaned out. The solution is not to rely on the servers being secure (although securing the servers can't be ignored either), but to ensure that even IF the servers are compromised, you aren't vulnerable.
As much as I hate to admit it, Vista is actually close in its security implementation: the user really should be required to approve certain actions, just not every action. A more robust security model, such as the one used in SELinux (with a saner administration interface), combined with some user confirmation such as Vista's, would lead to a much more robust OS. Add to that more secure apps running in at least partially sandboxed environments, and possible infection vectors would be reduced to almost nothing (not nothing, mind you, but much better than today).
Applications should be designed like banks, secured and fairly well defended. OSes should be designed like Area 51, armed and not afraid to shoot.
Ultimately, of course, it's up to the user to be the last line of defense, so even though we can do a lot to make the user's life easier, the final piece of the puzzle is user education.