Cross-Site Scripting Hits Major Sites

An anonymous reader writes "Dark Reading and SC Magazine covered a story about hackers posting cross-site scripting (XSS) vulnerabilities en masse on dozens of high-profile websites including Dell, MSN, HP, Apple, MySpace, YouTube, Cingular, etc. The media coverage drew the hackers' attention to the publications' websites, where they got a first-hand taste. On the message-board wall of shame are PC World, Macworld, Fox News, The Independent, and ZDNet UK. "...not only did we get the "scoop" on the XSS site problems, but we also got the message loud and clear: Don't assume you're immune to XSS vulnerabilities. They're everywhere." The news comes shortly after Mitre (CVE) released statistics showing XSS has become the most popular exploit. Unfortunately, new XSS attacks are growing increasingly severe, and scanners are unable to find many of the issues on modern websites."
This discussion has been archived. No new comments can be posted.

  • by mrkitty ( 584915 ) on Monday September 25, 2006 @11:12AM (#16185615) Homepage
    • by Dwonis ( 52652 ) on Monday September 25, 2006 @11:18AM (#16185697)

      I particularly like this example [dlitz.net].

      Here's the spoiler [dlitz.net].

    • I'm sorry, I really don't grok Cross-Site Scripting issues...

      Are there really that many web "programmers" out there that don't check their user-supplied inputs? I mean, that shit is CS 101...

      • Re:I don't get XSS (Score:4, Interesting)

        by truthsearch ( 249536 ) on Monday September 25, 2006 @11:32AM (#16185923) Homepage Journal
        Actually, CS 101 is data types and algorithms. Earning my CS degree taught me little of input validation. Most programmers learn security in one of two ways: proactively reading up on it or having one of their applications hacked. Unfortunately I think many average programmers don't consider input validation as much of a priority until after a hole they provided is exploited. When I ask many web developers what they do to prevent SQL injection attacks, for example, only about half have even considered it. Scary.
        • Re: (Score:3, Interesting)

          by RevDobbs ( 313888 ) *
          Earning my CS degree taught me little of input validation.

          Maybe I lucked out with a particularly clueful teacher, but input validation was beaten into me while learning BASIC on an Apple II(e?) in high school: everyone fails the first round of the Craps game assignment when the teacher asks "what happens when I bet a negative amount?".

        • Re: (Score:3, Insightful)

          by CastrTroy ( 595695 )
          You've said it. I've found that school teaches very little about programming in the real world. Sure, you get some security courses, and they are great, but I think there should be a class on the common problems that show up in programming. Things that should really be taught are:
          1. Always use parameterized queries and never construct your own queries by concatenating strings. This will not only speed up the application, it will make SQL injection attacks a non-issue (a sketch follows below).
          2. Always verify the user's input. You ne
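
          A minimal sketch of point 1, assuming PHP with PDO; the table, column, and connection details below are placeholders for illustration, not anything from the comment above:

          <?php
          // Hypothetical connection details, for illustration only.
          $db = new PDO('mysql:host=localhost;dbname=example', 'dbuser', 'dbpass');

          // Dangerous: concatenating user input straight into the SQL string.
          // $rows = $db->query("SELECT id FROM users WHERE name = '" . $_GET['name'] . "'");

          // Safer: the value is bound as a parameter and is never parsed as SQL.
          $stmt = $db->prepare('SELECT id, name FROM users WHERE name = ?');
          $stmt->execute(array($_GET['name']));
          $row = $stmt->fetch(PDO::FETCH_ASSOC);
          ?>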
        • by queenb**ch ( 446380 ) on Monday September 25, 2006 @12:29PM (#16186733) Homepage Journal
          The biggest problem is farming everything in the world out to $8/hr guys in some foreign country. If you pay $8/hr, you're going to get an $8/hr guy. Keep in mind that Wal-Mart starts at $9/hr. Given these two statements, I fail to see why it's surprising that such simply fixed vulnerabilities continue to plague software.

          2 cents,

          QueenB
            • I've seen more than a few domestic $200+/hr developers make basic security mistakes. The problem has nothing to do with the location of the developer or the amount they're paid. Of course, if you find a truly great developer, most likely you'll need to pay him/her well. But that certainly doesn't mean paying someone well guarantees you more secure code.
            • No, but paying someone crappily will almost always ensure that you have crap for code.

              2 more cents,

              QueenB
          • by Xemu ( 50595 )
            If you pay $8/hr, you're going to get an $8/hr guy

            That's easy to fix, just pay $1600 per hour and you get a $1600/hr guy! My god, how good he must be.

            Hint: Your logic has a flaw.
        • by Tiger4 ( 840741 )
          Sir, are you suggesting that I cannot trust my users to provide valid input into a program THEY asked to run? That is the most preposterous thing I've ever heard! Next thing, you'll be saying they'll just try to run their own programs, on MY machines. Maybe even try to get system privs and run as SU. No way! That's crazy talk.
        • by DrSkwid ( 118965 )
          Anyone who says they are a "web developer" is going to be vulnerable to plenty of vectors.

      • Re: (Score:1, Insightful)

        by Anonymous Coward
        I've taught programming on and off for about 30 years at one of the best schools in the country, and I have never heard anyone even mention checking user input for something malicious. I have never seen it in a text book. This sort of thing isn't addressed by Knuth. It never comes up since in the recent past all development was done for trusted users. For example, you wouldn't care if a user on an isolated computer running Microsoft Word attempted an exploit. You don't need to check user input when tea
        • What about when the management at a company doesn't want their Word user to escalate their privileges to SYSTEM level? I'll admit, I've only had one teacher tell me to check inputs for malicious content, and that was in a security class. But I have had it beaten into my head to check for valid input any time you gather input, as to not break your program. Starting from there, it is easily extended while checking valid input to also check for valid input that will not break your system in some fashion.
      • Re:I don't get XSS (Score:4, Informative)

        by Yvanhoe ( 564877 ) on Monday September 25, 2006 @11:43AM (#16186067) Journal
        That's it. They allow users in forums to post links and URLs. A URL can have a lot of strange characters in it: & ? ! # etc... Apparently, the basis of XSS is to make a link that appears to be a valid URL but that will, in some clients, execute as JavaScript code, usually in order to steal the cookies (and therefore an open session) of the user viewing the post. There seems to be a sword-and-shield arms race growing between attackers and web developers. There are numerous ways of "hiding" code in a URL: hexadecimal notation, strange UTF-8 characters, and so on. Here again, an incomplete implementation of a standard is the cause of major headaches.
        • Re:I don't get XSS (Score:4, Interesting)

          by Jerf ( 17166 ) on Monday September 25, 2006 @12:49PM (#16186971) Journal
          Apparently, the basis of XSS is to make a link that appears to be a valid URL but that will, in some clients, execute as JavaScript code
          No, that's just one way to do it. XSS is any insertion of Javascript code into a site that shouldn't be there, and there are a surprising number of ways to do that. <a href="javascript:alert('hi!')">text</a> is just one of the easier ones.

          I say this because people need to be aware that links are not the only vector. My favorite one I've seen so far is <bgsound src='javascript:bad_code()'>. If you choose poorly and are trying to filter out bad tags (instead of what you should be doing, specifying only exactly what tags and attributes are allowed and forbidding anything else that looks like a tag), did you remember to block out the BGSOUND tag? If not, that auto-executes; it doesn't even need to be clicked. (IE may have closed that; I saw this in the IE 4 era.)
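
          To make the whitelist idea concrete, here is a minimal sketch in PHP: escape everything by default, then re-allow only an exact, fixed set of tags. The tag list is illustrative, and allowing attributes such as href safely needs a real HTML parser rather than string replacement, so treat this as a starting point, not a complete defense.

          <?php
          // Escape all markup, then re-enable a small fixed set of exact tags.
          // Anything carrying attributes (e.g. <b onclick=...>) stays escaped,
          // because it does not match the exact strings being re-allowed.
          function render_comment($text) {
              $safe = htmlspecialchars($text, ENT_QUOTES);
              foreach (array('b', 'i', 'em', 'strong', 'p') as $tag) {
                  $safe = str_replace('&lt;' . $tag . '&gt;', '<' . $tag . '>', $safe);
                  $safe = str_replace('&lt;/' . $tag . '&gt;', '</' . $tag . '>', $safe);
              }
              return $safe;
          }
          ?>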
      • There is a tension between what users want to do that's legitimate, and what users can do maliciously.

        For example, I'm developing a MySpace-like system, and I'm presently grappling with these issues.

        Ideally, I'd like to give users perfect creative freedom to do whatever they want on their profiles and online community pages. After all, they should be able to express themselves, no?

        So before these attacks became well-known, it was a perfectly reasonable stance to say that we should NOT filter user i
        • Before you laugh, even computing greats have made similar mistakes. RMS, of Emacs, GNU and GPL fame, used to rail against people using passwords on their accounts. He had no password on his account on the MIT AI ITS machine, which was accessible through the ARPANet. Theoretically, a lot of bad things could have happened to him, but they didn't because yesterday's ARPANet users had respect for him and people like him.

          In other words, it wasn't a mistake; he just had a better understanding of the threat than

          • Oh, I had the same understanding as him, at the time. I didn't use a password either.

            I think my basic point stands, that we have to be much more paranoid now than we did then, and that on a personal level I think it really stinks.

            D
        • Another important thing to note is that preventing XSS is not as simple as it seems. In fact, preventing it may be just plain impossible if we don't want to prevent people from doing things like showing videos and Flash, with the OBJECT tag. There are apparently huge security holes in allowing it, but if you don't, then you have a world without music or video. If anyone has tips on securing this, please reply to this and let us all know. I was thinking that it might be necessary to allow only certain URLs b

          • Good thinking, and I thank you for raising the idea. But it won't work in my case.

            As you know, at this very moment, YouTube is going bust in bandwidth bills hosting all that video, unless someone buys them out for US$1.5 billion first.

            I don't want to host video. I want to let people point to their videos wherever they might be, so your idea won't work.

            D
      • Re: (Score:3, Insightful)

        by TheAJofOZ ( 215260 )
        The problem isn't that they didn't validate the user input, so much as that validating user input is really, really hard. RSS aggregators are discovering the problems with validating that HTML is safe. See http://www.feedparser.org/docs/html-sanitization.html [feedparser.org]
        The trouble is that an approach like that limits what you can do too much: http://www.symphonious.net/2006/09/10/stripping-styles-as-part-of-sanitation/ [symphonious.net]
        Any site that wants to support formatted comments, like Slashdot, has to deal with this. The plus s
    • A paper on cross-site tracking is available here, [stanford.edu] along with two preventative extensions called SafeHistory and SafeCache.

      To help safeguard from scripting attacks, I also use NoScript extension [mozilla.org].

      The CookieSafe extension [mozilla.org] will block and help you manage cookies better than Firefox's built-in manager. ...other interesting privacy tools are...
      Stealther (prevents recording of history and blocks the Referer header)
      Tor anonymizer + Foxyproxy extension
      ImgLikeOpera can switch image prefs with ease
      Flashblock (stops flash a
  • Scripting? (Score:3, Funny)

    by Anonymous Coward on Monday September 25, 2006 @11:16AM (#16185667)
    <script language="javascript">document.write("It's very hard to check for XSS. I can understand why most people don't bother.")</script>
  • You know, I've been waiting for this feature on weather.com
  • by possible ( 123857 ) on Monday September 25, 2006 @11:22AM (#16185751)
    The reason most vuln scanners can't find XSS vulns on modern sites is because of the increased amount of JavaScript and Flash (with ActionScript) that's in use. But some scanners [rapid7.com] can grok this stuff to varying degrees of completeness.
    • Re: (Score:3, Insightful)

      by Tackhead ( 54550 )
      > The reason most vuln scanners can't find XSS vulns on modern sites is because of the increased amount of JavaScript and Flash (with ActionScript) that's in use.

      Which is why I'm so happy that the new Slashdot discussion system, currently in its demonstration phase, presumes/requires that JavaScript be active.

  • Move on... (Score:2, Informative)

    by 955301 ( 209856 )

    So would it be technically possible at this point to move away from the web application and back to the client-server app? Here's an example path:

    * Java Client
    * Servlet Interface for the client
    * Java webstart deployment
    * Java plugin on the clients

    For this path the questions would surround authenticating the client and the hassle of installing the java plugin.

    Rinse and repeat for the obligatory Microsoft solution.

    I've never been a fan of web applications and forms, given the simplicity of creating an SQL inject
    • Not just possible, but it works and has been shipping for a while by Canoo [canoo.com].
    • First off let me preface by agreeing with you that a slightly thicker client model makes sense over the web application model.

      That said, however, I don't think that will necessarily solve the problem. Most HTML form based code is already being intercepted by Java servlets and processed by Java, meaning the developers are taking the form fields and dumping them straight into SQL (for SQL injection issues). So moving to a different client isn't going to change that problem, the fields will just be captu

    • You can do that but it won't solve the problem. The problem is that input from their users isn't validated properly. They're just taking it and using it as raw code. It can happen in a web page, or an applet, or even other kinds of software. I would argue that a web page is actually considerably more secure because the attack is limited to things that that website can access (unless it exploits a browser bug). The solution is fewer idiot programmers running around on the web and more code review and such.
      • by 955301 ( 209856 )
        I disagree, but not directly. The medium you use to build an app affects how you can divide your time across development. Mucking around with HTML and browser nuances depletes the effort left for other, typically last-minute, areas such as adequate field validation. All else being equal, if you use a medium intended for applications and not just HTML, you're likely to have more time to sanity-check your work.

        And seeing as the idiot programmers are multiplying... ;)
    • Re: (Score:2, Insightful)

      by profplump ( 309017 )
      A) It's just as simple to prevent an SQL injection attack. Failing to clean your input is just a stupid mistake, regardless of the input method. Mistakes happen, but let's not pretend that input validation is complicated.

      B) The reason people like to build web interfaces is that the client, server, and transfer mechanism already exist. Writing a new one for each project is much, much more work.
      • by 955301 ( 209856 )
        Is it really? I mean, don't we all reuse some part of our previous projects? Even if it's just a basic framework such as HiveMind, or a favorite logging facility, or just plain experience. Those things apply in the client-server world too. And frankly, mocking up and implementing a Swing application is not difficult. Heck, it was built for all the things AJAX is touted as the solution to.
  • by djuuss ( 854954 ) on Monday September 25, 2006 @11:30AM (#16185909)
    .. XSS links YouTube
  • scanners (Score:3, Interesting)

    by rilian4 ( 591569 ) on Monday September 25, 2006 @11:34AM (#16185953) Journal
    ...and scanners are unable to find many of the issues on modern websites
    Obviously the hackers can find systems with this vulnerability...ergo there exists a means to scan for it...

    Draw your own conclusions from there...
  • by Billosaur ( 927319 ) * <wgrother AT optonline DOT net> on Monday September 25, 2006 @11:35AM (#16185965) Journal

    ...remains unaffec... FOJSF{09fiE*EU90av['vlwIOA934MAwadpskf[aepfkfa[-09 u9a

    • by _xeno_ ( 155264 ) on Monday September 25, 2006 @11:48AM (#16186163) Homepage Journal

      A while ago, someone posted a link to a webpage that, when clicked, caused their post to be moderated up. Their post was at +5 for quite a while until enough replies got moderated up pointing out that the link wasn't what it claimed to be.

      So, in a sense, Slashdot has already been hit by a cross-site scripting vulnerability. The fix for XSS vulnerabilities like that involves requiring a secret token to be sent to take user actions, to prevent people from creating forms off-site and submitting them as the user. I suppose checking the referrer may work too, but I wouldn't count on it.
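
      A minimal sketch of that secret-token scheme in PHP (the field and file names are made up for illustration; this is not how Slashdot itself implements it):

      <?php
      // Tie a random token to the session and require it on every state-changing POST.
      session_start();

      if (empty($_SESSION['form_token'])) {
          // uniqid()+mt_rand() stands in for a proper random source of the era.
          $_SESSION['form_token'] = md5(uniqid(mt_rand(), true));
      }

      if ($_SERVER['REQUEST_METHOD'] === 'POST') {
          if (!isset($_POST['token']) || $_POST['token'] !== $_SESSION['form_token']) {
              header('HTTP/1.0 403 Forbidden');
              exit('Invalid form token.');
          }
          // ...perform the moderation or posting action only after the check...
      }
      ?>
      <form method="post" action="moderate.php">
          <input type="hidden" name="token"
                 value="<?php echo htmlspecialchars($_SESSION['form_token']); ?>" />
          <!-- the rest of the form's fields go here -->
      </form>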

  • Web 2.0 anyone? (Score:2, Insightful)

    by griffon666 ( 1005489 )
    Web 1.0: Simple phishing scam
    Web 2.0: Cross-Site Scripting
    • by blowdart ( 31458 )
      It's not really a 2.0 problem (assuming you were being a little serious and not just going for +5 funny). Any web site that accepts user input should be checking, and that includes your company's brochure-ware site, slashdot's new (awful) reading interface, digg.com or bbc.co.uk.

      What I do find worrying is that when I talk about this (and I do now and again, because I am a presenting whore), some people who are implementing AJAX suddenly think that because they're getting XML from an environment they set

    • by koehn ( 575405 ) *
      Actually, not true for AJAX, if well-implemented.

      I've been working with JsOrb [jsorb.org] (which lets you call your Java interfaces from JavaScript) and one of the nice things about it is that, when used correctly, it makes XSS vulnerabilities go away. Since the data is encoded inside XML messages, the browser takes care of properly escaping all those goofy characters into entities like &amp; for you.
      • Re:Web 2.0 anyone? (Score:4, Insightful)

        by blowdart ( 31458 ) on Monday September 25, 2006 @12:05PM (#16186399) Homepage
        Bad assumption. If you're assuming everything is coming down correctly encoded, you're a fool; all it takes is a bit of JavaScript that submits to your back end without encoding, and *bang*
        • by koehn ( 575405 ) *
          That's my point. I don't need to assume that everything is coming down correctly. The browser will encode the data for me. If the XML that comes down from the appserver is invalid, then it's not a valid document and won't parse, so my code fails. If a user submits text that is JavaScript, the browser escapes the characters for me, and the user sees the text.

          I'd be a fool if I received HTML fragments from the appserver, and those fragments were partially user-generated content, and those were unescaped. Bu
  • by oohshiny ( 998054 ) on Monday September 25, 2006 @11:49AM (#16186185)
    Before web designers blame themselves for this, the existence of XSS is really a fundamental design flaw in the way JavaScript and browsers work. It should have been obvious as soon as JavaScript came out that these kinds of attacks would become a major issue over time, but the "ooh shiny" attitude of the computer industry meant that people adopted JavaScript without knowing what the implications were. In fact, the other big security hole and productivity drain of the industry, C/C++, got adopted in a similar way.

    Writing any substantial piece of software in C, C++, or JavaScript without creating safety or security issues is extremely expensive and beyond the ability or resources of most developers. For C and C++, there are alternatives you can choose today. For JavaScript, you just have to minimize its use or simply not worry about it and let the client fix it with tools like NoScript.
    • No, it actually is the web designers' fault (not that I'm any better than they are). The way that browsers work is based on the assumption that the website won't willingly screw itself. By not validating user input and just dumping that junk out as markup, the website is making a big mistake. These are issues with *server-side code*, not JavaScript or browsers.
      • Re: (Score:3, Insightful)

        by julesh ( 229690 )
        The point is, though, that browser developers could have made script filtering substantially easier. If you want to accept HTML-formatted input to a web application, you have little choice but to try to filter out any scripts a malicious user may have inserted. And doing so is hard, because there are so many different ways scripts can be inserted into HTML. Script tags, event handler attributes, any attribute that can take a URL (e.g. src, href in many different elements), style attributes, style tags, ..
      • The way that browsers work is based on the assumption that the website won't willingly screw itself.

        Yes, and that's a bad assumption.

        By not validating user input and just dumping that junk out as markup, the website is making a big mistake.

        Indeed. But the web browser is making an even bigger mistake by not validating input from the web site and screwing the user. Why is it doing that? Because that's what the web standards say it should do.

        A standard is bad if (1) real developers have a propensity of making
  • but it's probably pointless. Not enough developers care about their craft.

    There's a prominent "popular science" website out there (no, it's not this one [popularscience.com] that I'm thinking of) that has ENORMOUS XSS vulnerabilities in its image gallery. They pass captions and img src in URL encoded query string parameters. Yuck.

    I noticed this about a year ago and reported it to the development team, with a demonstration link that put in a (sorta not nice) image and caption. No response, and when I checked six months ago the vulnerability was still there. So much for being a nice guy.

    • The problem, from what I have seen, is in the attitude of the people running the websites. Their attitude tends to be that cross-site scripting doesn't directly impact their servers but only impacts the systems viewing the website. Since this doesn't have a direct impact on their servers, it's not a high-visibility threat to them. Their attitude towards their visitors is "We are secure, but it sucks to be you."
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      Same here; I used to work for a grocery company that allowed customers to upload photos to their image lab for processing. I discovered several XSS vulnerabilities in their interface and pointed them out; a year later, they're all still there. Not only does it leave all of their customers' private photos vulnerable, but someone could leverage the exploit to do all sorts of nasty things, especially since there are ActiveX controls you must install for the site to work...
    • by julesh ( 229690 )
      If you're thinking about the site whose domain name's MD5 hash is "c6af41da42ae8a500747c2c920106c98" (or "7574dff77b3e597ee2e337984c8a27ce" with no LF at the end of it), I reported it to them some time around 2003 or so. They just don't care.
      • Nice use of a digest to verify a piece of information known by an unknown party on a public forum without revealing the actual information!
  • Too Lazy? (Score:2, Informative)

    by Anonymous Coward
    It looks like the attacks can be prevented by simple user input validation. Are the above-mentioned high-profile website developers/architects being too lazy, or did nobody know about this type of vulnerability until recently? I can't see how Joe Average would notice this exploit, because he will not bother to read the query string (or even understand what it does) if it points to a major website.
  • by Anonymous Coward
    I've seen the interesting effects of this first-hand with a customer's server, which I was tasked to unhack. It took a while to spot the reason the server was hacked because, stupidly, I didn't think of XSS when I considered the range of hacks that had occurred. When I did finally start grepping the access_logs and saw the rather odd things being passed through an enquiry form script, things started to piece together. I've filched a copy of the script passed and it's quite impressive, though it's probably reasonb
  • Where is the law in these cases?

    I'm sure there are ways to know who the hacker is, so why don't they use the information to catch the criminals and put them on trial?
  • Which is why I always use SafeHTML [pixel-apes.com] whenever my applications ask for input.
  • by Joe U ( 443617 ) on Monday September 25, 2006 @01:09PM (#16187251) Homepage Journal
    I'm a web developer and I've said this dozens of times.

    VALIDATE ALL INPUT EVERYWHERE.

    Validate on the client. (For bandwidth reduction)
    Validate at the APP Tier (For security)
    Validate at the Data Tier (For security and integrity)

    If you accept input from a web page, scrub it, and that doesn't mean stripping brackets or quotes, it means putting in a list of valid characters and tossing or replacing absolutely everything else.
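
    A minimal sketch of that whitelist-of-valid-characters idea in PHP; the allowed character sets here are only examples, and each field would use whatever set actually makes sense for it:

    <?php
    // Keep only the characters on the whitelist; everything else is thrown away.
    function scrub($input, $allowed = 'A-Za-z0-9 _.,\'-') {
        return preg_replace('/[^' . $allowed . ']/', '', $input);
    }

    $username = scrub($_POST['username']);    // letters, digits, a little punctuation
    $zipcode  = scrub($_POST['zip'], '0-9');  // digits only
    ?>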

    Yes, you might wind up validating something that doesn't need to be validated or scrubbing something that doesn't need to be, the performance hit is worth it.

    Also, Stored Procedures are a great resource; if you design them properly you add an extra layer of security that can actually improve your application performance. (All my recent projects have Stored Procedure execute-only rights.)

    If your db code has select * from table in it, you're doing it wrong.

    Ok, enough ranting from me.
  • by trevdak ( 797540 ) on Monday September 25, 2006 @01:17PM (#16187347) Homepage
    As a content manager for the U of Rochester when I was a student there, I witnessed thousands of attempts at XSS every month. Thanks to one idiot who decided he wanted to put a Mambo website up on the student activities server, we had our main server breached and multiple websites defaced. Once you're breached, everyone wants to try to hack you again.

    One interesting thing I noticed is that the majority of XSS attempts will try to call a script in a file with a .gif or .jpg name. This way, if a curious person sees the attempt and tries to visit the linked script, all they get is a broken image. However, the file_get_contents PHP function, or other such functions, will read those as PHP. I've seen these scripts uploaded to government websites, university servers and many other places.

    The one that was put on the U of Rochester server attempted to delete all of the files on the server and put in code for what looked like a Perl proxy server (I dunno, it was kinda obfuscated, and I'm not too good at Perl yet). The XSS scripts are quite complex, too. Some of them create HTML/JavaScript console interfaces for people to interact with the server as if they had an SSH connection. And they're all over the place. I've got a website that's had less than 1000 hits, and I've seen three separate attempts to use XSS on it.
  • by Dom2 ( 838 ) on Monday September 25, 2006 @04:52PM (#16191095) Homepage

    How many "web" templating systems do you know that automatically escape HTML unless told otherwise? I know of one that can be made to do so: Mason [masonhq.com]. Even then, you have to enable it, as it's not turned on by default.

    What about PHP, ASP, JSP and so on? Will they ever grow up and automatically escape HTML by default? I doubt it very much.

    In the meantime, there's always mod_security [modsecurity.org] if you're willing to invest the time configuring it. But it's no guarantee...

    -Dom

  • I quite often see people using $PHP_SELF (or better, $_SERVER['PHP_SELF']) in their PHP applications (for example, for the form action on a self-posting form). What most of them don't realise is that it is user input, and it is very easy to inject arbitrary content into it.

    I think this is a major XSS vector, because it is little known (really now, wouldn't you expect a $_SERVER variable to be safe?)

    For example:

    <form action="<?php echo $_SERVER['PHP_SELF'];?>" method="get">
    <input type="
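
    One common remedy, as a hedged sketch rather than the only fix, is to escape the value before echoing it (or to hard-code the form action instead of using PHP_SELF at all):

    <form action="<?php echo htmlspecialchars($_SERVER['PHP_SELF'], ENT_QUOTES); ?>" method="get">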
