New Firefox Standard Aims to Combat Cross-Site Scripting
Al writes "The Mozilla Foundation is to adopt a new standard to help web sites prevent cross-site scripting (XSS) attacks. The standard, called Content Security Policy, will let a website specify which Internet domains are allowed to host the scripts that run on its pages. This breaks with web browsers' tradition of treating all scripts the same way, by requiring that websites put their scripts in separate files and explicitly state which domains are allowed to run them. The Mozilla Foundation selected this implementation because it allows sites to choose whether to adopt the restrictions. 'The severity of the XSS problem in the wild and the cost of implementing CSP as a mitigation are open to interpretation by individual sites,' Brandon Sterne, security program manager for Mozilla, wrote on the Mozilla Security Blog. 'If the cost versus benefit doesn't make sense for some site, they're free to keep doing business as usual.'"
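For the curious, a policy under Mozilla's early CSP draft is just an extra HTTP response header naming the hosts that may supply scripts. A rough sketch (the directive names follow the early draft and may differ from what finally ships; `cdn.example.com` is a made-up host):

```
X-Content-Security-Policy: allow 'self'; script-src 'self' cdn.example.com
```

A browser that understands the header would then refuse to run scripts fetched from any other origin, no matter how a reference to them ended up in the page.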
as an end user (Score:2, Insightful)
I really hope the default policy is "only allow scripts from the current domain" and "do not allow the site to override my choice".
Re: (Score:3, Informative)
It doesn't quite work that way, it's much more fine grained, i.e. as a site owner I can say something like:
allow /foo/bar.cgi?weird looking strings and block anything else
so if an attacker finds a cross-site scripting flaw in say "/login.php" the client won't accept it, protecting my client, and protecting the site owner as well (bad guys aren't harvesting credentials from users, etc.).
Re: (Score:2)
No, CSP doesn't work like that. You can't specify path patterns or something like that. If you have an XSS flaw on your site an attacker still can inject scripts. But the scripts won't get executed because CSP only allows external scripts from white-listed hosts.
Re: as an end user (Score:2)
That is the problem - a mandatory restriction that amounts to "we don't trust the html generated from our own domain".
Re: (Score:2, Funny)
As an end user I really hope that the sites I visit have a default policy of "we only serve up our own shit". ...
Fuck.
Re: (Score:2)
Content delivery networks like AOL and Google are hosting them for free, so it makes sense for websites to just include them and use the big guys' bandwidth.
Re: (Score:2)
As an end user I really hope that the sites I visit have a default policy of "we only serve up our own shit".
Then I guess you don't visit Wikipedia, eBay, or any other site that allows its subscribers to submit works to be displayed on the site.
Re: (Score:3, Interesting)
NoScript does this.
Which brings me to the observation that, at least as far as I can tell from the blurb, this entire thing sounds a bit redundant in light of the ready availability of NoScript. Why not just make it part of the default Firefox install instead?
Re:as an end user (Score:5, Insightful)
Because, as a user I might not know which of the 47 different domains that CNN pulls scripts from are *supposed* to be serving scripts and which are some guy trying to get my facebook account details (not that I have one or read the CNN site regularly; largely because of the number of bloody domains they pull scripts from), whereas the owners of the CNN site *will* know which domains they're supposed to be pulling scripts from and can state so to the browser.
Re: (Score:3, Insightful)
Sounds like a bug rather than a feature to me. Th
Re:as an end user (Score:5, Insightful)
NoScript solves a different problem, which is that you don't trust the site. What this aims to solve is the problem of knowing what the site itself considers trustworthy, so that you're not required to issue a blanket statement of distrust: If you trust the site, you can (supposedly) trust its own trust list.
Re: (Score:2)
Hear, hear, mod up... oops, used them all up; very informative you are though!!!
I would mod you up if they could give me more than just 5 damn points!
Then isn't this feature nonsense?? (Score:2)
The page from the primary domain refers to scripts on those other domains as a matter of trust. If CNN doesn't trust a domain's scripts, then they won't refer to them in the first place!
OTOH if the http connection is being attacked (say from an infected system on the LAN) and references to bad domains are being injected, then that could be a real problem but not one that is solved by this new feature. Only https would prevent this attack.
Re: (Score:2)
After the stunt the noscript author pulled with adblock's filterset, I will never use it again. It simply cannot be trusted. It is malware.
How does this change userland? (Score:5, Insightful)
I'm sorry, but NO site can be trusted 100% from a user's perspective... and giving site owners the tools to help prevent XSS from their side doesn't help with the fact that users still shouldn't trust absolutely.
The reason something like this scares me is that it lulls users into a higher level of trust... and doesn't protect them from hacked sites, or sites that choose not to implement this.
Of course, I'm slightly paranoid. And of course, this isn't transparent to Joe Sixpack, so he's going to trust|!trust based on whatever it is he's basing it on now. And for security-critical sites like banks, this is a good thing... but I try very hard to make sure my friends & family are a bit paranoid too, so they'll take precautions.
Re: (Score:2)
Dude. How are *you* going to know that it is ok to run scripts on Slashdot.org that originate from slashdotscripts.com and not scriptsforslashdot.com? Even if you are a lunatic and micromanage the trusted sources of these scripts, how would selectively running any of them do you any good? I would imagine almos
Re:How does this change userland? (Score:4, Insightful)
Dare I say it?
Site XXXX is attempting to run a script on site YYYY.
(C)ANCEL or (A) LLOW?
All snark aside, why would I allow either of those domains to run a script on slashdot.org? Since I trust slashdot to a certain extent, I would allow from scripts.slashdot.org. But allowing scripts from a completely different domain? No way.
The point is that my security policy is annoying to implement. For site mybank.com I need to enable scripting. But if things were perfect, I could enable only scripts from $SUBDOMAIN.mybank.com, so I don't get hosed by scripts from $HACKERSITE.bankmy.com. And if legitimate sites are hosting their scripts from an entirely different domain... well, that would have to change. Instead I have to take an all-or-none approach, since the sites where I need security the most are the ones where I need to enable scripting. That just sucks.
Re: (Score:2, Insightful)
Slashdot is currently pushing js from c.fsdn.com.
I think you have a pretty dim view of the ecosystem (or maybe you are viewing some really marginal sites, who knows). For the most part, a given page that you visit is not going to contain malicious code that sniffs for when you have an HTTPS cookie for your banking site and then mysteriously steals all your money. I say this confidently, as I am quite certain that the bad guys are much happier with the simpler task of installing malware keyloggers.
The only b
Re: (Score:2)
You're right that I'm not the most knowledgeable (to put it lightly) about the ecosystem. However, I think it's atrocious that I cannot easily and selectively block scripts from operating on sites I want to view. And not just for the sake of security... also for the sake of performance on older machines.
Re: (Score:3, Insightful)
And I know that fsdn.com is also a trusted site.
Funnily enough, I know I don't want fsdn.com's content because the side bar is annoying bloatware that cripples the utility of the site. I'm very glad to have NoScript on the case, blocking it for me. (Which makes me wonder how many other horror websites there are out there whose horrible bloat I've been saved from by virtue of my browsers blocking XSS.)
Properly constructed sites function without XSS (Score:2)
And I'm not running any of 'em. And I never have.
Re: (Score:2)
That would be like paypal.com inexplicably using paypalobjects.com [robtex.com]! Unpossible.
Re: (Score:2)
why would I allow either of those domains to run a script on slashdot.org?
You wouldn't. Slashdot would. This is about the site creator specifying a white list, and not about the visitor being prompted about it.
But if things were perfect, I could enable only for scripts from $SUBDOMAIN.mybank.com, so I don't get hosed by scripts from $HACKERSITE.bankmy.com.
Am I misunderstanding the description of this extension? Because to me this sounds exactly like what it does. You enable scripts from domains you specify. Thus, no javascript injection or form hacking will get a page to retrieve foreign scripts, short of the attacker being able to physically alter the document.
Re: (Score:2, Interesting)
That reminds me -- since recently I have to tell NoScript to allow scripts from fsdn.com in order to browse slashdot.org successfully. I *know* that FSDN is slashdot's parent company, but it doesn't seem right that I can't use slashdot's discussion interface without giving permission to all of FSDN.
Similarly, recently I have to allow gstatic.com and/or googleapis.com to use Google-enabled websites that worked fine before.
Like the parent post's point: it's getting harder for a user to selectively narrow per
Re: (Score:2)
I absolutely hate having to figure out which domain that I've blocked that will allow access to the content and which ones are dodgy. And further whether the site that's serving up the content is safe enough to allow.
Re: (Score:2)
At some point, you need to trust *someone* to tell you who else you can trust... and that'll always be a problem.
Re: (Score:2)
If you're slightly paranoid like I am, how would you know to trust the provided list of "trusted script serving sites"?
The web server is telling you a list of sites that it trusts to serve scripts to be run in its pages.
Re: (Score:2)
What irritates me is that all the browsers I've ever heard of run everything they can by default. The only distro coming even close to something sane is Gentoo with the "restrict-javascript" USE flag with firefox (that pulls in noscript, but still does not enable it by default).
Of course I can't know about everything, feel free to correct me.
Re: (Score:2)
Indeed, I wish noscript would allow me to whitelist domains and even specific scripts on a per-site basis. So, for example, I could whitelist maps.google.com's use of javascript from gstatic.com but not allow any other sites, like images.google.com, to pull in javascript from gstatic.com.
eBay and MySpace? (Score:3, Insightful)
CSP is effectively server-side NoScript. And it isn't exactly new either. This has been in development as a Firefox extension for at least a year. The article mentions it being first crafted back in 2005.
The issue I take with this article is that they suggest this feature could even possibly be integrated into eBay or MySpace. These two giants seem like the exact opposite type of market that would use this -- any site that allows users to post their own data is not going to possibly survive the wrath they w
Re: (Score:3, Informative)
Apparently, you have no idea what XSS means. Neither eBay nor MySpace allows the execution of user-provided scripts, for obvious reasons. Given the market share of Firefox, the big sites will implement CSP pretty soon.
Re: (Score:2)
I've been suggesting a fix like this for years, but my suggested implementation let users add further limitations. It's stupid not to let users tighten controls, even if they can't make controls any weaker than the site has configured. You'll never have perfect security, but at least this is a step in the right direction.
Re: (Score:2)
This mechanism isn't intended for users -- this is a tool for site authors, to cooperate with them in enforcing their policies. The site still has to make a best effort at implementing those policies themselves to protect all their visitors using browsers that don't support CSP (which includes every officially released versio
Cost vs. Benefit? (Score:4, Interesting)
'If the cost versus benefit doesn't make sense for some site, they're free to keep doing business as usual.'
The author gave the best reason for not implementing this.
The benefits of this, and other various security implementations, won't be seen until it's tested. The costs of testing? Way too high compared to the current cost of operation. This is a very hard proof-of-concept problem, and unless this is already built into development standards, I doubt any deployments would switch.
Which would you take, the option which delays production for a week, or the option to just hit "next"?
Re: (Score:2)
The TD Ameritrade settlement for instance was an absolute joke, they ended up losing a lot of personal information and then they ended up with a slap on the wrist. It's not going to be cost effective for organizations to secure their sites as long as they're free to pass on the cost to th
Article on this and related technologies (Score:3, Interesting)
Re: (Score:2)
I (as a site owner) can actually do something to protect my site and my users against flaws in my site that is relatively easy and non-intrusive (that's the key!).
Unless your users run something besides Firefox.
If MS did this we'd all be crying about how this isn't sanctioned by W3C, and it's "embraceandextend" (tag?).
Re: (Score:3, Informative)
If MS did this we'd all be crying about how this isn't sanctioned by W3C, and it's "embraceandextend" (tag?).
Extinguish.
It's Embrace, Extend, Extinguish. That last E makes all the difference in the world.
Headline: Google other ad publishers revenues drop (Score:2, Interesting)
Re: (Score:2)
Obviously if a site does not contain one of these headers it will default to allow from all. Also called not breaking the whole internet with your new browser feature.
The XSS FAQ (Score:3, Informative)
The Cross-site Scripting (XSS) FAQ http://www.cgisecurity.com/xss-faq.html [cgisecurity.com]
This is great for Firefox users... (Score:2, Interesting)
Re: (Score:3, Insightful)
Well, if Firefox users find it effective, then other companies will follow suit. It's just a standard Mozilla is adopting, and though it seems to have been defined in-house, that won't stop anyone else from using it.
Re: (Score:2)
IE has an XSS Filter... I don't use IE enough to have bothered to investigate it, though. Otherwise, Opera, Safari, and Chrome don't seem to be doing anything special about XSS, at least not advertising it, other than patching their own vulnerabilities against a few known methods.
A step in the right direction (Score:2)
The first trap you will fall into thinking about this is that it should be the end-all security policy, and will solve our problems. It won't. That's not the intent, and also impossible given our diverse browser ecosystem.
The ability to tell the browser, via out-of-band, non XSS-able information, that certain scripts should not be executed, however, is a very powerful defense in depth measure, and makes it one step harder for attackers to make an attack work.
Security is a war of attrition. Bring it on.
NOT a standard (Score:3, Informative)
Standard? (Score:3, Informative)
More than a "Firefox standard", it seems to me that this is an extension. I'm all for it, but let's call things by their name.
Yea. they are free. right. (Score:2)
just like they have forced the 'humongous, scary ssl warning error' instead of the previous acceptable and understandable error message. it forced a lot of small businesses who used the certificates they themselves signed to buy 3rd party certificates from vendors. again with this change, all small businesses will have to spend more on web development charges, because most end users will set their firefox to the prevent setting for this new feature. the 'free to do business as usual' bit is bullshit. rememb
Re: (Score:2)
Your last paragraph reminds me, hey Firefox is open source, let's just fork it!
Re: (Score:2)
Anyone doing business should have a legitimate SSL certificate for the site and not use a self-signed certificate. Anyone using a website should be wary of any business site using a self-signed certificate.
Self-signed certificates are okay for personal servers where you know you or a friend signed the cert, but if you're doing business it is a VERY BAD IDEA to use or trust self-signed certs. Firefox's behavior is correct in this regard.
RFC? (Score:4, Insightful)
Is this 'standard' endorsed by anyone else or written up as part of an RFC? Calling something a standard when you are the only ones doing it sounds like a certain company that was started by Bill and Paul.
I am not trying to troll here, since I am all for the solution; I am just ensuring that this is properly documented and shared by the right entities (think W3C).
Re: (Score:3, Informative)
I should have read the article first, since nowhere in the article do they mention the word 'standard'. When they do decide to make it happen, I hope they submit the proposal to the right organisations, so as to avoid making this a one-browser standard.
Re: (Score:2)
Bill and Paul made about $100 billion and their bugs have become the standard that most "standards" can't dislodge.
Anyone can proclaim a "standard", recall what "RFC" stands for? It's not "peer-reviewed and passed by governing bodies."
If Mozilla is saying this is how they're building it into the code base, W3C can ignore it, but it's W3C who won't be compatible with what is standard.
Re: (Score:2)
I was thinking the same thing. If this was Microsoft, Apple or even Google claiming a "new standard" based on a feature only they've adopted (and even created) they would quite rightly get chewed out. The only way something anyone does alone (especially if they're still the minority in terms of market share) could be considered a "standard" is if your attitude to language is exceptionally flexible.
Massive Overkill (Score:3, Informative)
This proposal looks like massive overkill to me. Implementing the restriction on inline script tags is equivalent to saying - our web developers are incompetent and naive and cannot be trusted to take basic security measures, so we feel making our web development practices more cumbersome and inefficient (if not impossible) is a healthy trade off.
A more effective program would be to develop and promote standardized html sanitization routines for popular web development languages, so that user entered html could easily be accepted under certain restrictions. Most web logs do this already.
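As a sketch of what such a routine does, Python's stdlib `html.escape` stands in here for a full sanitizer (a real one also has to deal with attributes, URLs, event handlers, and encoding tricks):

```python
import html

def sanitize(user_text: str) -> str:
    """Escape markup-significant characters so user input renders as inert text."""
    return html.escape(user_text, quote=True)

payload = '<script src="http://evil.example/x.js"></script>'
print(sanitize(payload))
# -> &lt;script src=&quot;http://evil.example/x.js&quot;&gt;&lt;/script&gt;
```

The escaped string displays as the literal text the user typed, but the browser never sees a live script tag.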
Alternatively, a less draconian solution would be to allow inline scripts to execute if the script tag includes a response-specific serialization value that is also present in the HTTP headers. 64-bit values would make forging an inline script essentially impossible, because there would only be a 1/2^64 probability of a subsequent accidental match.
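The per-response value proposed above could be sketched like this (the header and attribute names are hypothetical; the point is only that the token is fresh and unguessable for each response):

```python
import secrets

def fresh_script_token() -> str:
    # 8 random bytes = 64 bits, hex-encoded; injected markup would have to
    # guess the value, with probability 1/2**64 per attempt.
    return secrets.token_hex(8)

token = fresh_script_token()
# The server would emit the token twice for each response, e.g.:
#   header:  X-Script-Token: <token>
#   markup:  <script token="<token>"> ... </script>
# and the browser would run an inline script only when the two values match.
print(len(token))  # 16 hex characters
```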
Re: (Score:2)
our web developers are incompetent and naive and cannot be trusted to take basic security measures so we feel making our web development practices more cumbersome and inefficient (if not impossible) is a healthy trade off.
The real question is, can YOU trust your web developers? And is this really that more cumbersome and inefficient than every other measure? It's just another tag. In fact, it is *just* a tag. It is also in the source of the problem - the web browser. You could argue everything else is a workaround, and finally we are getting help from the people responsible for inventing the problem.
A more effective program would be to develop and promote standardized html sanitization routines for popular web development languages
Yes, except, this is not easy, it is already being done, and it isn't quite working.
If one tag could eliminate the risk of exte
Re: (Score:2)
It is not "just a tag" - it is a header that enables a mandatory restriction on inline scripts in addition to selective restrictions on other elements. And if you have incompetent web developers for a public facing site, you are likely to have much more serious problems than unfiltered user content.
One of the serious problems with this is many applications dynamically generate javascript on the fly. The only way to handle that under this specification would be to generate lots of little temporary files tha
Re: (Score:2, Informative)
Hackers can just fire up a different browser, so the number of hackers this will stop are exactly ZERO.
It's not about stopping hackers from running these scripts. It's to protect the users if a hacker has managed to insert a remote script via a form on the webpage. It protects users running Firefox if the site has implemented the tag.
PLEASE GOD NOOO! ;) (Score:2, Informative)
Please don't let this become the same horror that it is with plugins.
If you ever tried to add an applet or anything embedded into a site that came from some other domain (like an mp3 stream), you will know what I am talking about.
It just gets blocked, unless you have a signed certificate and other shit that you can only get for money. And then it is still a huge mess to set up.
In my eyes this stifled web technology quite a bit.
Additionally, what do you do, when you yourself have several domains and sub
XSS (Cross-Site Scripting) definition? (Score:2)
So is there an official definition of "Cross-Site Scripting" somewhere? Since that phrase started to be used in scary security stories a few years ago, I've been collecting the definitions that various stories provide, and I've been a bit disappointed. Mostly, they aren't even "definitions", in the usual dictionary sense of the term. I.e., I can't use most of the purported "definitions" to decide whether what I'm looking at is an instance of the phrase. And in general, no two stories or sites seem to us
Re:XSS (Cross-Site Scripting) definition? (Score:4, Informative)
My impression is that "Cross-Site Scripting" is an empty scare phrase that really just means "anything involving two different machines and a script -- whatever that may be".
Cross site scripting is exactly what it sounds like: running a script from one site in another site's security sandbox (i.e. scripting across sites). The script tag allows scripts to be loaded by a page from any site. These scripts then run in the same namespace and sandbox as any other scripts on that page. It's basically the web equivalent of an arbitrary code execution vulnerability. It isn't quite as bad as the native client-side version, because there is (in theory) no way of escaping from the sandbox that the browser constructs for each site.
If you don't properly sanitise user-provided data then it's quite easy[1]. Imagine, for example, that Slashdot allowed arbitrary HTML. If it did then I could put a script tag in this post referring to a script in my domain. Your browser would then load this script and run it as if it were provided by Slashdot. If you enter your password, I could harvest it. Even if you don't, my script could send HTTP requests to Slashdot with your login credentials and post spam. If you've entered personal information in your user profile, I could harvest this.
You probably don't have any private information on Slashdot, so it's not a particularly attractive target for identity theft, but the large number of page views means that it might be useful for spam. Imagine, for example, a cross-site scripting vulnerability being used so that everyone with excellent karma who went to the Sony story posted something written by Sony PR.
For sites like eBay, it's much more important. These sites have full names, postal addresses, and often credit card numbers. If I can run a script in their pages' sandbox then I can access all of this information as the user enters it.
This idea is for each domain to provide a whitelist of domains that are allowed to provide scripts (or other resources). If I persuade eBay's code to load a script from my domain then Firefox can check my domain name against the list published by eBay, see that it is not there, and refuse to run the script.
[1] This isn't the only way of persuading a site to load your scripts, but it is the simplest to explain.
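The whitelist check described above can be sketched as follows (the domains and the matching rule are illustrative; real CSP source matching also considers schemes, ports, and wildcards):

```python
from urllib.parse import urlparse

# Whitelist the site would publish in its policy header (illustrative hosts).
ALLOWED_SCRIPT_HOSTS = {"ebaystatic.com"}

def script_allowed(script_url: str, page_host: str) -> bool:
    """Return True if the page may execute a script fetched from script_url."""
    host = urlparse(script_url).hostname or ""
    if host == page_host:  # scripts from the page's own host are trusted
        return True
    return any(host == d or host.endswith("." + d) for d in ALLOWED_SCRIPT_HOSTS)

print(script_allowed("https://ebaystatic.com/lib.js", "ebay.com"))     # True
print(script_allowed("https://attacker.example/evil.js", "ebay.com"))  # False
```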
Re: (Score:2)
Well, yeah; I've done lots of web scripting, and I get all that. I've even written demos of the dangers, usually to try to impress on others (such as managers) why it's a potential threat to users. This hasn't usually been too successful, as shown by the fact that those people usually continue to run their browsers with scripting enabled.
My question wasn't about how you write web scripts. My question is why you'd add a modifier like "cross-site" to it. Defining it as a script on one machine (the server)
Re: (Score:2)
Cross site scripting is exactly what it sounds like; running a script from one site in another site's security sandbox (i.e. scripting across sites)
You somehow read this as:
Defining it as a script on one machine (the server) which runs on another machine (the client) adds no information, because that's how almost all web scripting works
Note the difference between your definition and mine. My definition (shared by everyone else) involves three computers:
If site 2 is not operated by a trusted party, then this is a cross-site scripting vulne
But I want and need X-site scripting! (Score:2)
I'm a bit out of my league knowledge-wise here, but in my company I have a company web application that would benefit very much from being able to do something in the window of another site. Why can't a browser (not the web app) be set to very specifically allow a particular web application to make use of another specified website. E.g. that would allow me to fill out a form with data from the web app or vice versa to get data into my MySQL database without having to fill out the data manually, which is err
Re: (Score:2)
It's extremely useful when you are able to trust the other host.
However, if you don't trust the other host, you shouldn't be including their script in the first place. Because that script may contain something like a javascript function to send the cookies of the first domain
Next step (Score:2)
Next step: educate PHP users so that they have a clue about security?
"Disable JavaScript" (Score:2)
Well, it's about time somebody does something. For years JavaScript has been an on/off affair, and it's been driving me nuts both as a web surfer and a developer.
They can do whatever they want for Joe Average to ensure advertisers won't complain, but please, can I have the ability to allow scripts to run only from the same domain as the originating page? Please? Just a simple checkbox will do, thank you.
Re: (Score:2)
Your suggestion is absurdly logical, hence it shall not pass.
Why do all the good ideas get by-passed? Is it some sort of "nerd pride", that we must never do things the easy way.
Re:Good idea (Score:5, Insightful)
First thoughts on that:
If I say that my site trusts domain1.com, but domain1.com isn't using this and ends up having all sorts of dodgy scripts they're passing along, would this block them, or would they count as coming from domain1.com?
Re: (Score:2)
When content is fed to you by/through domain1.com, do you see it as coming from domain1.com?
If the answer to that is YES, then you're FUCKED if domain1.com is serving up shit.
Re:Good idea (Score:5, Interesting)
If I say that my site trusts domain1.com, but domain1.com isn't using this and ends up having all sorts of dodgy scripts they're passing along, would this block them, or would they count as coming from domain1.com?
Domain1 wouldn't need to use this - this is a client-side security measure. If your site uses it and declares trusted third parties, it's enough.
Also, what is "passing along" supposed to mean? Scripts (or any other stuff) would either come from domain1 or not. If not, it wouldn't be trusted.
If domain1 proxies scripts from other sources, this means they come from domain1, as far as HTTP is concerned - and they would be trusted.
The problem I see however is domain1 declaring additional trusted domains when delivering its scripts, thereby allowing for "cascaded domain trust", which
would pretty much defeat the new system. This can easily be prevented by not accepting additional trusted domains from elements that are third-party though.
Re: (Score:2)
Perhaps it would be more effective to modify Firefox so that it will only execute scripts from other domains which are directly referenced by the original domain. That seems much safer to me.
Re: (Score:3, Insightful)
The other major problem with this solution is that it requires changes at the web site level.
In other words, you're only safe if the web site author opts into the security solution.
What are the chances that the hundreds of millions of web sites out there will all opt into this feature?
Re: (Score:2)
From the summary:
As long as this "standard" is the one called HyperText Markup Language, then this makes sense. HTML is intended to say what scripts run on a page. If that's broken, then the HTML should be fixed. Somehow I suspect it'
Re: (Score:2)
Because if it doesn't form a TLA (Three-Letter Acronym) it won't catch on.
Re: (Score:2)
If you will notice, he meant that "Content Security Policy" as a name is less effective than, say, "Cross-Site Scripting Prevention Policy." He was not talking about XSS as an acronym.
Re: (Score:2, Informative)
It extends well beyond scripts into other content areas. It can be used to limit the domains that are allowed to serve images, css, and so on (this is all for a given page).
You're doing it wrong (Score:2, Flamebait)
If you're having to modify individual files to set HTTP headers, you're doing it wrong. Also, polluting sites' namespaces (even worse than they already are with robots.txt/favicon.ico) is a bad idea.
But then, you already betrayed your cluelessness when you revealed that you put Flash on the Web.
Re: (Score:2)
...why?
Re: (Score:2)
But then, you already betrayed your cluelessness when you revealed that you put Flash on the Web.
Yeah! Damn their highly-adopted [google.com] prescient, open security model [adobe.com] and their 99% global penetration! [adobe.com] Get off my lawn!
Re: (Score:2)
And can you be sure you have all the legitimate user agents?
Certainly ie, mozilla, safari, chrome and opera will cover most of the desktop / laptop user agents, but what about all the obscure browsers used on phones and other mobile devices?
Re: (Score:2)
No, just block the common ones (googlebot and yahoo slurp are the majority of it). If you're actually trying to protect this content then you need to password protect/etc. it, robots.txt is not the way to prevent exposure. As well you could also use the meta tag in HTML documents:
<meta name="robots" content="noindex">
But I agree, robots.txt is far less painful and much quicker. Thing to remember as well when robots.txt was invented the web was a much simpler place and everyone online was pretty much skilled i
Re: (Score:3, Informative)
Ugly, lots of over head...
And requires me to figure out the useragent of either every browser out there (to allow) or every bot out there (to deny). At least, as far as I can tell.
No, only "bots" (spiders, nowadays) actually check robots.txt, per the RFC. User-initiated requests don't and shouldn't request or parse robots.txt; no browser I've ever seen does.
Re: (Score:2)
Oh, please don't do that. Don't assume that we have rights to that directory. I already really really wish I could set robots.txt for just my subdirectory, but no can do since some semi-moron thought it would be a good idea to make me mail my school department's webmaster to exclude part of my directory.
You can do everything that you do with robots.txt via robots meta tags [robotstxt.org] and streamline their inclusion with some server-side scripts if so desired.
Re: (Score:2)
Hey! It doesn't need to be plain ol' text.
As a theoretical, it could be hamstrung html that pisses off some users by not recognizing UTF-8 in order to prevent malicious posting.
Or something. I'm sure we could figure out a decent implementation.
Re: (Score:2)
Don't let other people serve content via your site.
Problem is that security flaws such as cross-site scripting (XSS) allow exactly this (inserting arbitrary HTML/JavaScript into the page, which is then rendered by the client browser).
Re: (Score:3, Informative)
Don't depend on user-generated content, since it's shit. If your site can't provide its own content, at least properly filter incoming user content down to plain ol' text.
I suggest you resign from Slashdot as soon as possible then ...
Re: (Score:3, Interesting)
Why is this modded troll?
99.99999% of attacks are the result of:
Malicious ads and clickthrough "offers" after a sale is processed
Vulnerabilities in PDF, Flash, etc.
Malicious content uploaded by users (javascript, sql injection, malformed jpegs, what have you)
Domain hijacking
General "LOL I GOT UR PASSWORD" shenanigans
Re: (Score:2)
Moronic users are victims.
Unless you're talking about the special type of moron who will somehow mash their keyboard and enter LOL'); DROP TABLE USERS;
Re: (Score:3, Interesting)
Sexconker is modded a troll - quite unfairly. Cross site scripting sucks. Simple as that. I go to a site, first thing I see is noscript's popup message that anywhere between 2 and 20 sites want to run scripts in my browser. I click the popup, to see WHO wants to run scripts. Sometimes, it's easy to see who wants to do what, and deciding to allow site a, but not site b is quite simple.
Often enough, it's just not that simple. I want to see some stupid flash presentation, and the only way to see it is to
Re: (Score:2)
Yeah, I love the (relatively) new one where you have to allow javascript from some skank ass ad server to see the flash file.
They cram an ad in before the flash file actually starts to load, and you have to watch it. Some sites let you just wait and then serve up the flash anyway (they assume the ad shithouse is down at the moment, or something went wrong after the ad played), but you've got to sit and wait and watch a black square for 30-60 seconds.
So to see the flash, I've got "Temporarily allow all this
Re: (Score:2)
It is just a client-side protection, like Microsoft's HttpOnly cookies [microsoft.com]. You can still send any requests to the server.
Re: (Score:2)
We still can. The mentioned MAFIAA flaws are based on a vulnerability in server-side code. <iframe [...]> is posted as the search criteria and incorporated into the site output to the user. It's all HTTP (submitting the "evil" param) and HTML (returning a usable <iframe>).
The Firefox implementation "protects" only from scripts included via <script src=..> from another domain. Pure HTML (like above) or in-page scripts aren't blocked. In most* cases, that's a few minutes of extra work, tops;
Re: (Score:2, Funny)
I guess the other browsers will just ignore this unless of course they jump on board and implement it too.
Exactly. Also, it will rain tomorrow. Unless it doesn't.