Mozilla Experiments With Site Security Policy

An anonymous reader writes "Mozilla has opened comments for a new experimental browser security policy, dubbed Site Security Policy (SSP), designed to protect against XSS, CSRF, and the malware-laced IFRAME attacks which infected over 1.5 million pages Web earlier this year. Security experts and developers are excited because SSP extends control over Web 2.0 applications that allow users to upload or include potentially harmful HTML/JavaScript, such as iGoogle, eBay Auction Listings, Roxer Pages, Windows Live, MySpace/Facebook widgets, and so on. Banner ads from CDNs have had similar problems with JavaScript malware on social networks. The prototype Firefox SSP add-on aims to provide website owners with granular control over what the third-party content they include is allowed to do and where it's supposed to originate. No word yet on whether Internet Explorer or Opera will support the initiative."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    It's what I use for XSS protection. NoScript's security can get kind of annoying sometimes, but it has been useful on some pages I've come across which tried to send my user/pass data to an affiliate site.
    • by sec_login_test ( 656626 ) on Friday June 06, 2008 @03:25PM (#23686059)
      NoScript is designed so the user can protect themselves. SSP is designed so that the website owner can protect users from other users, malicious widget developers, or perhaps unscrupulous CDNs.
      • Re: (Score:3, Funny)

        by Anonymous Coward
        unscrupulous Canadians?
      • Re: (Score:2, Interesting)

        NoScript is designed so the user can protect themselves. SSP is designed so that the website owner can protect users from other users, malicious widget developers, or perhaps unscrupulous CDNs.

        SSP would be great considering the huge number of successful attacks against everyone from Google to Microsoft.

        The only problem with SSP is that if Microsoft doesn't implement it, it will be shot down, because IE has 60+% market share. Without IE it would be useless because you can't put…
        • It would be silly to rely on SSP as the only form of protection. As an additional measure you can use it even if not 100% of browsers implement it - you're just lowering risk/attack surface.

          News headlines like "IE does not implement important security protocol that Firefox does" will get chairs moving fast in Redmond.
          • But the question is whether Microsoft will adopt the standard or make its own.
            • But the question is whether Microsoft will adopt the standard or make its own.

              MSP [Microsoft Security Policy], OSS [Open Site Security] (OSS is totally on Microsoft's side), MSS [Microsoft Site Security]

              all sound like completely reasonable candidates. Or even

              SSP [Silverlight Security Policy] if they really want to make it their own :P

              Then they can apply for ISO standardization and release half-assed documentation... ring any bells?
            • This isn't a standard.

              As much as I dislike Microsoft, sometimes I really wonder about the unthinking Microsoft bashing on Slashdot. Here you've more or less assumed Firefox's non-standard feature will become a standard. But what justification is there for that? IE doesn't have a similar feature in this case, but there are several areas where IE does things differently than Firefox or has features that aren't in Firefox, so why are those never added as standards? Why does the Firefox implementation…

          • It was never meant to be a primary form of protection. As the documentation says, "it's another layer of protection." It adds another layer, but obviously it's just extra; you still need to make your app secure without it. But it can't hurt. There are a ton of little security layers which can be bypassed, like restricting the characters allowed in a form: obviously you can change the form, but that just makes an attack that much harder and doesn't really take anything away from a normal user.
      • Re: (Score:1, Insightful)

        by Anonymous Coward
        If a website owner is going to include links from a third party site, it absolutely SHOULD be his/her responsibility to validate each and every one of those sites, if necessary, in a granular fashion. NoScript is a band-aid for the failure of content owners to do this.

        Who is responsible for creating the seamless browsing experience? Users shouldn't have to block scripts at cnn.com and then waste time scratching their heads while looking at their NoScript menu to try to figure out whether or not the content…
      • NoScript is designed so the user can protect themselves. SSP is designed so that the website owner can protect users from other users, malicious widget developers, or perhaps unscrupulous CDNs.
        So you have to trust the site that's being used to attack you not to attack you? Isn't this a variation of asking the drunk if he's drunk?
    • by Anonymous Coward
      Q: Why not just include NoScript by default?

      A: NoScript's security can get kind of annoying sometimes
    • Because most people just want to visit a site and view the cool scripted content without having to figure out which of the twenty scripts on the page will make the game work or the video play.

      Even if you could explain to an average user what NoScript does, they'd probably just enable scripts for every page they visit without caring what the script does.
  • by RelaxedTension ( 914174 ) on Friday June 06, 2008 @03:27PM (#23686093)
    This makes me wonder if it would be good for helping in situations like ISPs using Phorm or its ilk.
    • Re: (Score:3, Informative)

      Not from what I understand of Phorm. You'd need to browse over encrypted connections so that the ISP-hosted proxy can't (literally) read your email. (Or your visits to /. or lolcatz or whatever.)

      Their demo page surprised me. I didn't think that stuff would work. If the add-on stops that from being exploitable, it is a good thing, even if it doesn't prevent MITM attacks.

    • It could for a little while, though. Since they're already doing DPI to do that, all they have to do is add a few more headers to allow their advertisers' sites to be loaded.

      Or even worse, it wouldn't surprise me if their DPI ad-injecting boxes just stripped all SSP headers, which of course would open their users up to more threats; but it isn't like they care much.
  • The first thing I thought: oh, IE catches up, Mozilla moves ahead. FF3 is awesomeness, and now better security development? Impressive. Yes, it's not client side, but it offers a way for site owners to add that extra bit of security... hmmm, Mr. Banker? Hello? Are you listening? I hope that something solid comes of this to offer better security for people in general and the Internet as a whole.

    No, it won't stop all identity theft attacks, but it is an improvement.

    Perhaps one day I'll get to use my pager/phone as second path authentication for the bank? I am hoping for it, but any improvement in the meantime is a good thing.
    • Re: (Score:3, Interesting)

      Perhaps one day I'll get to use my pager/phone as second path authentication for the bank? I am hoping for it, but any improvement in the meantime is a good thing.

      My father-in-law had a keyfob LCD thing to which his employer would send him passwords when he wanted to use his take-home work laptop. When he turned it on, it sent a signal to the home base, and the keyfob would show a number. He used this number to log in. This way, if the laptop got stolen, it was effectively worthless.

      Is this what you're…
      • by tsa ( 15680 )
        My bank uses this system to confirm payments. It works very well and is easy to use.
      • by klui ( 457783 ) on Friday June 06, 2008 @04:51PM (#23687295)
        What you've described is an RSA SecurID one-time password. It may appear that your father-in-law's fob communicated with the remote server, but the truth is each fob has a unique seed and a clock that creates one-time passwords based on the value of its clock and an optional salt (PIN). The remote server that validates the OTP knows the seed of the fob and is able to authenticate each password it receives. After each valid authentication the passcode is discarded and cannot be used again (within limits of the passcode length). The system allows for clock skew between the fob and the validating server. Current system functionality may have changed, as this description is quite old.
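
        For illustration, here is a minimal sketch of the same general idea using the open TOTP scheme (RFC 6238) in Python. RSA's actual SecurID algorithm is proprietary and differs in its details; the seed value below is made up.

        ```python
        # Minimal sketch of a time-based one-time password (RFC 6238 TOTP).
        # Both the fob and the server derive the same short code from a
        # shared seed and the current time step, so no signal ever has to
        # travel from the fob to a "home base".
        import hashlib
        import hmac
        import struct
        import time

        def totp(seed: bytes, interval: int = 30, digits: int = 6) -> str:
            counter = int(time.time()) // interval      # current time step
            msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
            mac = hmac.new(seed, msg, hashlib.sha1).digest()
            offset = mac[-1] & 0x0F                     # dynamic truncation (RFC 4226)
            code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        # The validating server runs the same computation (allowing for some
        # clock skew) and compares codes.
        print(totp(b"seed-provisioned-at-enrollment"))
        ```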
        • Interesting. Thanks for the explanation. Perhaps something similar could work for getting onto a bank website? Could such a keychain for authenticating logins come along with the usual detritus one gets when one signs up for a new bank account? And on such a huge scale, how much do those things cost?
          • by Richy_T ( 111409 )
            The keyrings are fairly cheap (considering), but the server software is somewhat pricey (though not in terms of the kind of money banks should be willing to spend on such things).
          • There is a system being trialled in the UK at the moment, by Barclays bank. It's not the same as this system, but it does increase security massively. You're given a little chip-and-PIN machine, and when you want to access your bank account, you put your card into the machine, enter your PIN, and it gives you an access code.
            Having seen it in action, it's a brilliant system. Unfortunately, I'm not a Barclays customer, so I haven't used it personally, but I'm waiting quite excitedly for my bank to implement a similar system.
            • by jrumney ( 197329 )
              It's well beyond a trial now. My girlfriend got hers from Barclays when they rolled it out properly late last year, and I got one recently from Nationwide. It's a bit annoying, as you need to carry the little calculator-sized machine around wherever you might want to use your internet or telephone banking. I think Barclays requires it for login; Nationwide only requires it for activities that result in money leaving your accounts, so you can at least still check your balance and transfer between accounts if…
              • by mikiN ( 75494 )
                Strange that this seems to be new to so many people here.

                My bank has been sending me OTPs for some 12 years now, at first by regular mail, currently as a text message to my phone, to be used every time I want to make a transaction.
                (New) usernames and passwords have to be collected in person at the bank by presenting them with the confirmation letter and proper ID.

                Friends of mine who do internet banking with other banks have been using a personal card reader/PIN pad and a challenge/response system for at least…
          • IIRC, the key (license) fees end up being over $100/user (per year?).
        • FWIW, NSA was using this technology in the '80s.
    • Bank of America has this now. I don't use it as I don't have a cell phone for them to text info to, but they have it as a second-path authentication option.

      I had read about the European key fob type things and wondered when something like that would show up here. I guess BofA decided to take the cheaper-for-them route since many now have cell phones, but personally, I've simply never been able to justify the additional cost of a cell phone over (formerly) a landline, and the hurdle got higher recently…
  • by Anonymous Coward

    The prototype Firefox SSP add-on aims to provide website owners with granular control over what the third-party content they include is allowed to do and where it's supposed to originate.

    But the real problem is still on the client end.

    For example, I block google-analytics.com, because I don't want to be a part of whatever Google stores by means of "ga.js".

    Perhaps I'm misunderstanding SSP, but the last thing I want my web browser to do is automatically "decide" that, for example, my trust in slashdot.org…

    • Re: (Score:3, Interesting)

      by profplump ( 309017 )
      I agree it would be useful to have better client-side protection, but I don't understand how this system could possibly make things worse.

      Currently the options for limiting the scope of JavaScript are:
      1. Turn JS off
      2. Prevent certain files from loading (e.g. via /etc/hosts or the like; see the sketch below)

      This does not interfere with either of those, and adds:
      3. Let site administrators explicitly list allowed scripts/domains and block all others.

      You can still turn off JS, and you can still prevent certain files from loading. If you…
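
      For what it's worth, option 2 is just a hosts-file entry; a minimal sketch, using the google-analytics.com example from elsewhere in this thread:

      ```
      # /etc/hosts entries that null-route a script host (option 2 above);
      # the browser then simply cannot fetch anything from that domain.
      0.0.0.0  google-analytics.com
      0.0.0.0  www.google-analytics.com
      ```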
    • First off, you are correct: you don't understand SSP. It does not force-enable JavaScript on the client's end. It allows site developers to force-disable JavaScript that they have not verified. It is sort of like NoScript for site developers that don't have 100% control over the source of their site. Consider the case where a site is built using a multi-author CMS. The group could agree to only use scripts that they wrote and ones served from [favorite_script_site]. SSP would prevent clueless Sally from adding…

  • If the ISP isn't virus-scanning uploads, the ISP is inviting attacks.

    If SSP is as transparent as SSL (and how could it not be, since it's only 4 letters away in the alphabet!) then it will work. If it takes user intervention, it will probably fail.

    There's no reason to believe Opera or IE would not adopt it, if it works.
  • pages Web (Score:3, Funny)

    by HTH NE1 ( 675604 ) on Friday June 06, 2008 @03:40PM (#23686291)

    which infected over 1.5 million pages Web earlier this year.
    That reminds me: I need to update my page Web.
  • FF3 'Killer App' ? (Score:3, Interesting)

    by apachetoolbox ( 456499 ) on Friday June 06, 2008 @03:41PM (#23686299) Homepage
    This sounds like a great idea! Maybe this will be the killer app that pushes FF past IE.
  • FINALLY (Score:5, Informative)

    by LeafOnTheWind ( 1066228 ) on Friday June 06, 2008 @03:42PM (#23686307)
    As someone who has worked in web security, let me say that many of us have been begging for stricter control over security protocols for years. With all the AJAX going around, more and more sites are proving vulnerable to browsers that are just too friendly with the same-origin policy. If you check out the OWASP Top 10 [owasp.org], you'll see that a whole bunch of these attacks could be prevented by better browser security.

    The best case would be a restructuring of JavaScript and the DOM as well, but I would be excited to see any increased security. After I used a reflected XSS attack to essentially gain control over a client's browser and all their cookies last year, I don't trust any web application.
    • Re:FINALLY (Score:5, Insightful)

      by hedwards ( 940851 ) on Friday June 06, 2008 @03:52PM (#23686435)
      This seems like something that would be very useful in this day and age. NoScript ends up being very annoying because a lot of sites will link in a large number of scripts from other servers. It's difficult a lot of the time because you really have to investigate to know what akamai.net does, for example, and then there's admdt.com and the like. The list is oftentimes 20 different sites, without a good way of restricting the permissions to just the current site.

      I block everything by default, and rarely allow things permanent permissions, but trying to figure out which one is causing the site to not work is a real pain in the ass. Really it's something which shouldn't be expected of a user. The JSON and XSS vulnerabilities really ought to be enough to convince at least financial institutions not to include that kind of crap on their sites.

      But of course, this assumes that the system works, is well designed and protects against the things that developers really shouldn't be doing automatically. But I'll be giving it a whirl just to see if it helps at all.
  • A modded-up comment says it helps website owners protect one user from another (malicious) user. But it is part of the browser. How does a browser help the server?
    • If a website's code is changed by a malicious, outside party, it usually has to reference an outside server at some point. SSP will block access to the outside server because it will recognize the current website has no need to have you interacting with it.

      At least that's how I'm understanding it.

    • Re:Very confusing. (Score:5, Informative)

      by mikeazo ( 1303365 ) on Friday June 06, 2008 @03:57PM (#23686491)
      Because the server specifies the site policy, and the browser uses the site policy to know what to block. If I am running my site and have my policy set up correctly, then when an XSS hole is found in my site and an attacker tries to exploit it, hopefully my policy is defined well enough that the browser knows not to run the attacker's JavaScript. So the browser doesn't help the server; the server helps the browser know what it should run and what it shouldn't.
    • The site owner includes additional HTTP headers in the response describing all of the 'good' JavaScript on the page, where it is coming from, what it is allowed to make requests to, etc...

      If a site owner implements these headers, injected JavaScript won't be able to do much, since it will be confined to the sandbox specified by the site owner.
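
      As a concrete sketch, here is a toy server opting in. Only the X-SSP-Script-Source header name is attested elsewhere in this thread (the prototype reportedly supports just that one); the value syntax and everything else below are invented for illustration.

      ```python
      # Toy sketch of a server sending a hypothetical SSP policy header.
      # Only the X-SSP-Script-Source header name comes from the prototype;
      # the "self https://cdn.example.com" value syntax is made up here.
      from http.server import BaseHTTPRequestHandler, HTTPServer

      class SSPHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              self.send_response(200)
              self.send_header("Content-Type", "text/html")
              # Illustrative policy: only execute scripts served from this
              # host or from one whitelisted CDN; a conforming browser would
              # refuse to run script from anywhere else.
              self.send_header("X-SSP-Script-Source", "self https://cdn.example.com")
              self.end_headers()
              self.wfile.write(b"<html><body>policy-protected page</body></html>")

      if __name__ == "__main__":
          HTTPServer(("localhost", 8000), SSPHandler).serve_forever()
      ```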
    • Re:Very confusing. (Score:5, Informative)

      by pavon ( 30274 ) on Friday June 06, 2008 @04:00PM (#23686541)
      Say you are a webmaster. You serve pages that you generate and trust them. However, some third parties would like to include content on your pages that is served from servers that you don't control. For example advertisements - these are almost always served from a different computer than the main webpage. Another example is embedded content from another site like a YouTube movie, or all these little panels that the social networking sites are starting to introduce. This gets worse when users themselves are allowed to put this sort of content on your site (say you run a forum or social networking site).

      Because you don't control these web servers, the content they are serving could be replaced with malicious content, or just content that goes beyond what you gave them permission to do. Or a user could intentionally post malicious content.

      This allows a webmaster to indicate what third-party content is allowed on any page (if any), and what that third-party content is allowed to do (text, images, animated images, JavaScript, plugins, etc.). The web browser then enforces the rules that the webmaster set.
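
      To make the browser's half of that bargain concrete, here is a toy sketch of the enforcement check; the policy format and names are hypothetical, not the add-on's actual behavior.

      ```python
      # Toy sketch of browser-side enforcement: each third-party resource is
      # checked against the webmaster's policy before it is fetched or run.
      # The policy structure and function names here are hypothetical.
      from urllib.parse import urlparse

      def allowed(policy: dict, resource_url: str, kind: str) -> bool:
          """Allow a resource only if its origin is whitelisted for its kind
          (script, image, plugin, ...); everything unlisted is blocked."""
          origin = urlparse(resource_url).netloc
          return origin in policy.get(kind, set())

      policy = {"script": {"www.example.com", "cdn.example.com"},
                "image": {"images.example.com"}}

      print(allowed(policy, "https://cdn.example.com/widget.js", "script"))  # True
      print(allowed(policy, "https://evil.example.net/x.js", "script"))      # False
      ```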
      • by dv8ed ( 697300 )
        This is great for security, but what does it break? There are a lot of useful things that could potentially get caught up in a policy like this (e.g., the del.icio.us bookmarklet). If this can knock down malware, great, but I'm a little wary of taking control of what third-party scripts users are allowed to run on their own machines.
  • It's everyone's job... But shouldn't the webmasters be responsible for their content? If an ISP can deep packet filter, and can be forced to by law, why can't a website be forced to filter content it displays, whether it comes from a CDN or other businesses? Pushing this off to the user is doomed to failure for the majority.
    • Re: (Score:3, Insightful)

      by rgviza ( 1303161 )
      If you care about your machine and what happens to it, it's your job. If you just want to flail around in anger when your box becomes a big paperweight, leave it up to someone else.

      It's the job of the authorities to lock up the crackers and other people that commit electronic crimes. It's your job to lock your front door.

      At the end of the day, the authorities and your ISP can't do anything until the threat is known. By then the damage is done.

      99.9% of stuff can be mitigated with NoScript, a $50 firewall, and…
      • While I completely agree with your assessment of the general user and the responsibility we should all take with everything we do, I do know from an IT perspective, both from the side of the help desk and of the web application developer, that it is a lot more effective to block it on the website end, though not cheaper, than at the user end, exactly because of the societal and personal reasons you mention.
  • by mrkitty ( 584915 ) on Friday June 06, 2008 @04:07PM (#23686659) Homepage
    This is something a lot of us in the industry have been writing about. Here's my rant from last October: Browser Security: I Want A Website Active Content Policy File Standard!
    http://www.cgisecurity.com/2007/11/08 [cgisecurity.com]

    Jeremiah Grossman's thoughts
    http://jeremiahgrossman.blogspot.com/2008/06/site-security-policy-open-for-comments.html [blogspot.com]
  • by fred fleenblat ( 463628 ) on Friday June 06, 2008 @04:33PM (#23687063) Homepage
    In all likelihood it will be years before the enabling technology is in place to prevent the most vicious malware of all, the dreaded rickroll.
  • by Giorgio Maone ( 913745 ) on Friday June 06, 2008 @05:17PM (#23687659) Homepage

    As I commented here [hackademix.net], SSL and SSP are orthogonal technologies whose correct and joint adoption should be required for any website performing sensitive transactions: the former ensuring integrity and, to a certain extent, identity; the latter defining and guarding application boundaries.

    Those websites should encourage their users to adopt an SSP complaint browser, and complaint browsers should educate users to prefer SSP complaint sites with visual cues, just like we're already doing with EV-SSL (and for better reasons in this case, maybe).

    On my side, I'm considering highlighting valid SSL + restrictive SSP websites as more reliable candidates for NoScript whitelisting.

    • In theory SSP could be useful, with one huge caveat: HTTP response splitting vulnerabilities completely negate it (see the sketch after this list). If a site's vulnerable to XSS, it's badly coded, and no SSP-style mechanism corrects that. I tried the browser extension yesterday before this hit Slashdot...
      • Currently only supports X-SSP-Script-Source.
      • Regex-based when it should hook into the Mozilla parser sink.
      • Converts application/xhtml+xml to (IIRC) text/html+ssp.
      • Filtered inline script from an XHTML page, removing the opening script element but leaving…
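
      To illustrate the response-splitting caveat above: if a server reflects unsanitized input into any response header, an attacker can inject CRLFs and append a policy header of their own choosing. The permissive header value below is invented for illustration.

      ```python
      # Why HTTP response splitting negates a header-delivered policy: a CRLF
      # sequence in attacker-controlled input lets the attacker append headers.
      user_input = "en\r\nX-SSP-Script-Source: *"  # attacker-supplied "language"

      raw_response = (
          "HTTP/1.1 200 OK\r\n"
          "Content-Language: " + user_input + "\r\n"  # naive interpolation: the bug
          "\r\n"
          "<html>...</html>"
      )
      print(raw_response)  # the response now carries an attacker-chosen policy header
      ```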
    • I want to browse the web, not complaints. :(
  • Advertisers may not like it. Currently they use scripts from multiple domains and dozens of CDNs.

    SSP will require them to cut down the number of domains that need whitelisting (otherwise the SSP whitelist would look like AdBlock's database ;) and won't let them add new domains without getting publishers to change their SSP.
