
Browser History Sniffing Is Back

An anonymous reader writes "Remember CSS history sniffing? The leak is plugged in all major browsers today, but there is some bad news: in a post to the Full Disclosure mailing list, security researchers have showcased a brand new tool to quickly extract your history by probing the cache, instead. The theory isn't new, but a convincing implementation is."

Comments Filter:
  • Easy work-around (Score:5, Informative)

    by richkh (534205) <rkh-slash AT nekomi DOT cx> on Sunday December 04, 2011 @07:38PM (#38261030)
    Fixed cache size of 0.
  • Unreliable (Score:3, Informative)

    by cpicon92 (1157705) on Sunday December 04, 2011 @07:41PM (#38261048)

    This tool seems unreliable (at least in Chrome). I've been on YouTube five times in the past 48 hours and it still showed up grey on the sniffer.

  • Javascript required? (Score:5, Informative)

    by betterunixthanunix (980855) on Sunday December 04, 2011 @07:47PM (#38261094)
    This appears to require Javascript. Thank you, noscript.
  • Reality: Only a small number of users use NoScript et al. This is a problem for those that don't, and even if you do, what about when the site you want something from requires JS?
  • How (Score:5, Informative)

    by farnsworth (558449) on Sunday December 04, 2011 @08:19PM (#38261314)
    This seems to work by loading well-known resources into an iframe and using a "time to load" heuristic to tell whether each one is cached or not -- and hence whether or not you have visited that site. I only skimmed the source code, but this is what it looks like. In any case, it's not like this code reveals your full history -- just whether or not your browser has visited one in a set of popular sites.

    Yay stateless web.
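    The timing heuristic described above can be sketched as follows. This is a minimal, simulated illustration: the 50 ms threshold, the site list, and the load times are all assumptions, and a real probe would time actual iframe/image loads with performance.now() rather than use hard-coded numbers.

    ```javascript
    // Sketch of the cache-timing heuristic: a cache hit returns almost
    // instantly, while a cold fetch has to cross the network. The
    // threshold below is an illustrative assumption, not a real tuning.
    const CACHE_THRESHOLD_MS = 50;

    function looksCached(loadTimeMs) {
      return loadTimeMs < CACHE_THRESHOLD_MS;
    }

    // In a browser the measurement would come from something like:
    //   const t0 = performance.now();
    //   img.onload = () => report(performance.now() - t0);
    //   img.src = "https://example.com/well-known/logo.png";
    // Here we just feed in simulated measurements.
    const probes = [
      { site: "siteA", loadTimeMs: 4 },   // near-instant: probably cached
      { site: "siteB", loadTimeMs: 230 }, // full round trip: probably not
    ];

    for (const p of probes) {
      console.log(
        `${p.site}: ${looksCached(p.loadTimeMs) ? "likely visited" : "likely not visited"}`
      );
    }
    ```

    In practice a real classifier would need repeated trials per resource, since network jitter can make a single cold load look fast or a cached load look slow.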
  • Re:Easy work-around (Score:5, Informative)

    by icebraining (1313345) on Sunday December 04, 2011 @08:53PM (#38261566) Homepage

    Cache and history are completely different features. 0 cache means you'll have to download the same CSS/JS/image files over and over again for each page on the same website, which is a waste of resources for both you and the server.

  • by Hentes (2461350) on Sunday December 04, 2011 @08:53PM (#38261574)

    This is already fixed in most browsers, you need to update/reconfigure yours.

  • by icebraining (1313345) on Sunday December 04, 2011 @09:00PM (#38261630) Homepage

    That's because the test consists of downloading a file and measuring whether it loads instantaneously (cached) or not. Of course, the second time you run it, the script itself will already have downloaded (and therefore cached) the same file, tricking itself.
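    The self-poisoning effect described above can be sketched with a simulated cache (a Set of URLs); the URL and names are purely illustrative:

    ```javascript
    // Simulated browser cache: probing checks it, but fetching the
    // resource in order to time it also *inserts* it as a side effect.
    const cache = new Set();

    function probe(url) {
      const wasCached = cache.has(url);
      cache.add(url); // the measurement itself populates the cache
      return wasCached;
    }

    const url = "https://example.com/logo.png";
    console.log(probe(url)); // false: first run, not yet cached
    console.log(probe(url)); // true: the first probe cached it, fooling the second run
    ```

    This is why re-running the sniffer reports everything as "visited": the tool has no way to time a load without performing it.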

  • Re:Easy work-around (Score:4, Informative)

    by icebraining (1313345) on Sunday December 04, 2011 @09:35PM (#38261838) Homepage

    There are plenty of cases where actual content - images, videos, etc. - benefits from caching. It's not all garbage.

  • by TubeSteak (669689) on Sunday December 04, 2011 @09:58PM (#38261986) Journal

    It's very easy to manage

    I've installed NoScript for non-technical people and it almost immediately caused them headaches.
    There are plenty of internal academic/work websites that rely on fetching scripts from third parties...
    Which is exactly what NoScript is designed to prevent.

    And then there are endless websites that want you to allow scripts from a CDN or Google APIs or social bookmarking.
    NoScript's generic default whitelist isn't inclusive enough to keep websites from breaking for most people.

  • by icebraining (1313345) on Sunday December 04, 2011 @10:15PM (#38262086) Homepage

    Oh, ok, but NoScript blocks those too, so betterunixthanunix's point still stands.

  • Re:Easy work-around (Score:5, Informative)

    by fuzzyfuzzyfungus (1223518) on Sunday December 04, 2011 @10:24PM (#38262120) Journal
    You'd also have to ensure that page elements don't load in any deterministic or controllable order, and that the number of requests the browser has going concurrently isn't predictable. If I can control the order in which your browser loads my page's elements, I can make useful inferences about the load time of a 3rd party resource, without any client javascript, by sandwiching its loading between the loading of two resources (at dynamically generated URLs, to ensure that you couldn't possibly have cached them) on servers I control. Not perfect, because various other factors could affect the time it takes your requests to hit my servers; but likely better than nothing.

    It would also be a bit tricky because inferential attacks wouldn't necessarily have to ask politely for the state of the resource they are interested in; they could instead attempt something else (say, a script operation that will fail in a particular way unless the 3rd party resource is available). Barring a solution much cleverer than I can think of, you'd be at considerable risk of having to break progressive loading of pages entirely (or at least into human-visible-and-annoying stuttery chunks) in order to prevent a page from interrogating its own state on the client and correlating it with timestamps on requests to servers controlled by the attacker...
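    The script-free "sandwich" inference above can be sketched from the attacker's server-side view. Everything here is a simulated assumption (the hostnames, timestamps, and threshold): the attacker's page forces the browser to load an uncacheable resource, then the third-party resource of interest, then a second uncacheable resource; the server only sees requests 1 and 3, and the gap between their arrival times brackets how long request 2 took.

    ```javascript
    // Server-side inference: a tiny gap between the "before" and "after"
    // requests means the sandwiched third-party resource came from cache.
    // The threshold is an illustrative assumption.
    const CACHE_THRESHOLD_MS = 60;

    function inferVisited(beforeArrivalMs, afterArrivalMs) {
      const gap = afterArrivalMs - beforeArrivalMs;
      return gap < CACHE_THRESHOLD_MS;
    }

    // Simulated request-log timestamps (milliseconds):
    console.log(inferVisited(1000, 1015)); // small gap: likely cached, i.e. visited
    console.log(inferVisited(1000, 1350)); // large gap: likely fetched fresh
    ```

    As the comment notes, this is noisy: network jitter between the client and the attacker's servers blurs the signal, so a real attack would need repeated measurements.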
  • Re:A possible fix (Score:4, Informative)

    by _0xd0ad (1974778) on Monday December 05, 2011 @01:03AM (#38262784) Journal

    AdBlock Plus lets you do that very easily.

    Block on third-party sites except
    Block on third-party sites except,, and (write similar rules to block the other 3 facebook domains)

  • Re:Easy work-around (Score:3, Informative)

    by Anonymous Coward on Monday December 05, 2011 @01:36AM (#38262992)

    If I correctly understand how this attack works, they could easily read your proxy cache the same way.

  • Re:Easy work-around (Score:4, Informative)

    by KXeron (2391788) <kxeron.digibase@ca> on Monday December 05, 2011 @04:55AM (#38263782) Homepage

    There is a large difference between "user" and "customer", the problem is you may think that you are a "customer" (or at least potential customer) of every site you visit, but this is incorrect.

    "Customer" implies that there is a business relationship in play, however if it is a forum or other free resource, you will never be a customer as there is nothing to purchase. Not every website on the internet is a business.

    It is often seen as abuse when a user downloads or needlessly accesses a resource (files) multiple times, and website administrators often have no qualms about blocking abuse: it means less load on their site's server, more free resources (bandwidth, connection slots on the webserver daemon) for other users, and, on top of that, potentially a lower bill.

    Speaking from experience, I've seen people purposely misconfigure download managers so they open 20-100+ connections to a file, feeling that the website somehow owes them that file; doing the same on a webpage with a browser is no different.

  • by Ensign Morph (1824130) on Monday December 05, 2011 @07:58AM (#38264258)

    Nope. NoScript protects against the relatively common attack vector of malicious 3rd-party scripts being injected into a site you already trust, via exploits or ad networks. In this scenario the site will try to load malware from [] or whatever, which it doesn't normally do and you haven't permitted, so it fails. It saved me once when a PHP forum I visited often got hacked.
