Internet Security Moving Toward 'White List' 316
ehud42 writes "According to Symantec, 'Internet security is headed toward a major reversal in philosophy, where a 'white list' which allows only benevolent programs to run on a computer will replace the current 'black list' system,' as described in an article on the CBC's site. The piece mentions some issues with fairness regarding whose programs are deemed 'safe,' including a comment that judges need to be impartial toward open source programs, which can change quite rapidly. Would this work? The effort to maintain black lists is becoming so daunting that white lists may be an effective solution."
Works for me! (Score:3, Insightful)
This is the stupidest idea (Score:3, Insightful)
Again? (Score:5, Insightful)
We know how this ended (certificates given left and right without proper verification).
Now they try again with new certificates, which are more expensive.
So that's about that part.
What about site filters? Whitelisting sites in security suites has got to be the dumbest idea I've heard in a long time. Last I checked there are billions of pages out there, some of which are safe and some not.
So now that we find it impossible to cover the entire set of malicious pages, what do we do? Yes, we try to cover the even greater set of legitimate pages.
This will either end with many small harmless sites filtered out, or sites having to pay ransom to all security suite vendors out there to get whitelisted or something of a similar nature.
Not happening.
Will only be useful for people who don't experiment (Score:3, Insightful)
For a private user with a mostly static set of applications, it should still work, but expect the occasional blocked program.
For developers and the rest of us who experiment, it will be a constant nuisance.
And why would I trust Symantec's opinion? (Score:5, Insightful)
This leads to the conclusion that all other "security" companies were either in bed with Sony, or that their "security" products are utterly useless. I'm not sure which is worse.
So why again should I give a rat's ass about the opinion of those guys when it comes to security?
Re:I can see it now (Score:4, Insightful)
I mean, if it's not Microsoft, it's not really "official", what makes you sure you should be running this application. You probably shouldn't. There's a nice Microsoft alternative which is "official". Wouldn't you like to download that instead? Yes/No
You forgot option 3:
[T]hanks, but I already did download an alternative to Microsoft.
Seriously, though, how can anyone possibly believe this could ever work? The computing world is driven by countless specialist applications, many of them written in-house by small businesses, or just by individuals to solve a specific problem they have. It's pretty obvious that no organisation could possibly whitelist all of this stuff effectively, without having some sort of automated system that every malicious developer in the world could abuse just as easily.
Whitelist keeper = make money (Score:5, Insightful)
After all, businesses would be willing to pay to get their products into said whitelist, while one hardly expects virus makers to pay for getting their creations into a blacklist.
Of course, I'm sure the Symantec guys are naturally not at all thinking of all those extra $$$.
Where are the Web Safety basics ? (Score:5, Insightful)
Q: Where has the Internet failed?
A: Its main proponents and enthusiasts ignored Driver's Ed for the info-superhighway. They didn't teach people how to use web browsers and email programs, didn't show how to read a URL and pay attention to the protocol and domain, nor instill the habit of mousing over links to see where they go beforehand. Teaching people about the padlock symbol should have also included how to deal with SSL certificate alerts.
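As a sketch of the "read the URL" habit, here is how the parsed hostname, not the link text, identifies the site you are actually visiting (the domain names below are made up for illustration):

```python
from urllib.parse import urlparse

# The link text may say "paypal.com", but only the parsed hostname
# tells you whom you are actually talking to.
deceptive = "http://www.paypal.com.account-verify.example/login"
honest = "https://www.paypal.com/login"

for url in (deceptive, honest):
    host = urlparse(url).hostname
    # The registrable domain is the *last* labels, not the first ones.
    # (Taking the last two is a crude heuristic; real public-suffix
    # rules are longer, e.g. for .co.uk domains.)
    domain = ".".join(host.split(".")[-2:])
    print(f"{host} -> actual domain: {domain}")
```

The first URL belongs to "account-verify.example", not to PayPal, which is exactly the kind of thing a trained eye catches before clicking.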
The result of this neglect is that people cannot recognize authenticity on the Internet, so the value of the Internet's "currency" is spoiling. Imagine if people weren't clued-in on how to authenticate a $20 bill: Over time only certain government and corporate entities would be trusted to handle currency to prevent spoiling by counterfeiters.
Our job as Internet cognoscenti is to keep correcting the people around us on the right way to use the Web and email. Granted, this is not a cure-all given the other major factor here (Windows malware), but it's several steps in the right direction. This stuff is not hard.
The alternative is an Internet-II re-worked around big corporations and government sites through a whitelist enforced by Trusted Computing remote attestation. Don't think they won't be opportunistic enough to scare the public into that corner.
No longer a computer (Score:4, Insightful)
Developers will be the first to notice: you can still write and compile a program, but you cannot test it. But the typical user will also be affected: what about the useful firefox extension you like? Bummer, not on the list. Want to use facebook? Sorry, the javascript in the new version is not approved.
The white list is pretty futile anyway, because you can program on several levels. Javascript is only one example: what if the browser is approved, but your javascript code does nasty things? Or what about a heap overflow in the browser? Suddenly you are running custom code, and how is the white list going to notice that?
Two questions... (Score:2, Insightful)
2: Didn't we have this discussion not too long ago except the "List" would've been administered by MSFT (&co), called TCPA (then Palladium then NGSCB then OMGWTFBBQ) and be a little bit more "hardware-assisted"? (For anti-microsoft-fanboy coverage, check out AgainstTCPA [againsttcpa.com], for msft coverage try Microsoft, Wikipedia [wikipedia.org] has some rather neutral insights)
You may be more right than some realize (Score:5, Insightful)
Yes, "trusted computing" had all that DRM stuff and crypto signatures and all components authenticating themselves and their drivers, but essentially that's what you need to have a bullet-proof whitelist.
- E.g., if you don't have a strong hash to be sure that it indeed is the program you think you're running, and it's an untampered executable, then you don't really know what you're running. (E.g., if you were to do it just by name, and you allow, say, "WoW.exe", then you'll also run a virus attachment called "WoW.exe" just as cheerfully.)
- E.g., if you don't make the system startup itself bullet-proof, people will use spoofed drivers and whatnot to compromise that security.
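A minimal sketch of the hashing point above, keying a hypothetical whitelist on file contents rather than file names (the "WoW.exe" scenario is the one from the comment):

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Hash the file's contents; any tampering changes the digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate the two files described above: the real client, and a
# virus that merely reuses the name "WoW.exe".
tmp = tempfile.mkdtemp()
exe = os.path.join(tmp, "WoW.exe")
with open(exe, "wb") as f:
    f.write(b"the genuine game client")

whitelist = {sha256_of(exe)}  # built from the known-good binary

with open(exe, "wb") as f:    # same name, different contents
    f.write(b"malicious payload")

print(sha256_of(exe) in whitelist)  # False: the name alone proves nothing
```

A name-based check would have happily run the second file; the content hash rejects it.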
So basically we're back to the same Palladium shit that we ranted and raved against as the great Satan. It's what MS wanted in Vista in the first place, but apparently realized grudgingly that no one else wanted. And _of_ _course_ Vista would be on the list. In fact, better than that, Vista was supposed to be the one enforcing it. (Which, if you think about it, is pretty much needed. If the OS doesn't do it, and doesn't double-check its own startup and components at that, no other link down the chain can be guaranteed to be uncompromised.)
So now it's snuck back under the same claim that you need it to protect you from the evil hackers. Right.
Well, the problems are the same any way anyone wants to slice it. E.g.,
- it essentially discourages running stuff you compiled yourself. (Just changing the options you compile a kernel with, for example, is enough to change the hash, if the hash is any good. So essentially the only safe thing a "trusted computing" system should conclude there is that the system itself has been tampered with and is no longer secure or trustable.)
- it places an undue burden on small-time developers and hobbyists. I know if I was distributing a small utility on sourceforge, I'd be annoyed if I had to re-certify it every time I refactor something or fix some obscure bug. Doubly so if it costs anything to get it certified, which would likely be the case if a commercial entity is doing it. Getting it virus scanned, run through some automated heuristics, hashed, and put on the list can take some time and infrastructure, and a paid employee's time costs money.
And, frankly, even if it was something as trivial as $10, why would I pay it for something that makes me no money? It'd be like ROI except without the R. And if you want it thoroughly dissected and certified that it 100% can't possibly be a virus, then it'll cost a heck of a lot more than that.
- it can be used to shaft you the other way around too. A program can authenticate the system it runs on, and some might even need to. (E.g., I sure hope an anti-virus utility pipes up loudly if it thinks it runs on a system where the OS itself has been compromised. E.g., I sure hope a banking applet pipes up loudly if it runs in a browser that's been compromised.) So there's nothing to keep someone from making a program that refuses to run in Wine or a flash applet that refuses to work in Mozilla.
And if you think no one other than MS would ever do that, think again. There was a recent story even on Slashdot about webmasters who explicitly don't want Mozilla users because they block their ads.
Etc.
Re:Works for me! (Score:5, Insightful)
Surf safe. Use Noscript.
What happened to good OS design? (Score:5, Insightful)
E.g., whatever happened to running something in a sandbox, ffs? You can go as far as running something untrusted (e.g., a plugin, ActiveX control, etc) in a virtual box, but even a chroot jail is a good start. It _is_ possible to isolate something to the point where it can't do any harm at all, and can't touch anything except itself. It's also possible to nice it to the point where it only runs when nothing else wants to, so it can't DOS your system that way.
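As a rough sketch of the idea (Linux-specific, and far weaker than a real chroot jail or virtual box), a process can at least be niced to the floor and put under hard resource limits before it runs anything untrusted:

```python
import os
import resource
import subprocess
import sys

def run_confined(argv, cpu_seconds=5, mem_bytes=512 * 1024 * 1024):
    """Run a command at lowest priority and under hard CPU/memory
    limits: a crude cousin of the sandboxes described above.  A real
    jail would additionally need chroot (root-only) or a full VM."""
    def confine():  # runs in the child just before exec
        os.nice(19)  # only scheduled when nothing else wants the CPU
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(argv, preexec_fn=confine,
                          capture_output=True, text=True)

# A well-behaved program still works inside the limits; a runaway
# loop would be killed once it burns through its CPU budget.
result = run_confined([sys.executable, "-c", "print('hello')"])
print(result.stdout.strip())  # hello
```

This only addresses the denial-of-service side; confining what files and network resources the child can touch takes the stronger mechanisms mentioned above.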
So why doesn't anyone do just that already? E.g., MS could have fixed their own ActiveX crap that way ages ago. Instead we got this baroque, but fundamentally broken, model where you get to decide (or have decided for you based on zones) whether something can't run at all, or can run with full rights as an executable. Except if a malicious one slipped through the cracks, it's still a full executable running on your machine.
Heck, even Java goes about it essentially the wrong way as a browser plugin. It tried to implement restrictions itself that belong in the OS or browser, and if the JVM itself is compromised (there _have_ been a couple of JVM vulnerabilities), the code can do anything. Kudos to Sun for trying, but it's essentially a workaround. It shouldn't have been the JVM doing that; it should have been the OS and browser.
Whitelisting is just an extra step in that wrong direction, essentially. Instead of making sure that a malicious thing in the browser can't touch anything else, we're one step further in the baroque, fragile and monumentally work-intensive direction of determining which of them should be allowed. Except again, if something slipped through the cracks, you'll still get screwed so hard you'll walk bow-legged for a week.
Am I the only one who finds that dumb?
Re:Where are the Web Safety basics ? (Score:3, Insightful)
Re:High time too (Score:3, Insightful)
Re:Is it me (Score:3, Insightful)
Re:Is it me (Score:1, Insightful)
So the user gets his little toy, the machine is (mostly) secure, and everybody's happy.
And provide the user some (not too easy/convenient) means of transitioning software into the "trusted" set.
Re:What happened to good OS design? (Score:1, Insightful)
Re:What happened to good OS design? (Score:4, Insightful)
The problem is that, like a computer with its Ethernet cable unplugged, an application completely isolated from everything else is useless. For example: at the very least you need to allow an embeddable object (like a Java applet, ActiveX, etc.) to draw itself on screen. To do that you need to enable it to do a large number of GUI-oriented calls. What happens if one of these calls is found to be exploitable by a malicious process? It would be like you did nothing at all for security.
Today's software has *so many* interdependencies that it's practically impossible to segregate everything into neat little boxes whose security can be managed individually. For example, a modern Windows application can (and often does) interact with a large number of subsystems that have been, and still are, found fallible, which fall into these broad categories:
The obvious "solution" is: blame Microsoft - it's bad design practice to enable so many possible interactions throughout the system. But this would mean that users won't be able to use such nifty things like "live" copy & paste throughout their applications (OLE), Explorer shell extensions (like WinZip), unified database drivers (ODBC, OLE, ADO), etc. -- and all of these things are selling points (AND, unsurprisingly, these are some of the more important things users miss when they try to use Linux). If you try to do it partially, for example disable OLE calls from ActiveX controls, business users will be angry because their embedded ActiveX applications will stop working.
And if you DO try to lock everything down, you'll get hordes of angry users complaining about needing to click "Allow" every time they move the mouse pointer :)
Re:Where are the Web Safety basics ? (Score:3, Insightful)
The car analogy (as is often the case) doesn't fit. PC culture has been driven by pros and enthusiasts alike who can informally make recommendations, and a large chunk of the population cultivates relationships with their "PC guy" type friends and relatives. The best anyone can do in this situation of proliferating fraud is to educate people on the most basic and effective measures, especially since the service-based model of security is failing. In a culture with a growing market of "Geek Squad" and "Nerdmobile" techs administering virus scanners and such, we find that criminals increasingly run amok.
Since the issue is web surfing (driving), your analogy could only be saved by asserting that what people need are paid chauffeurs to do their web surfing for them.
Re:Where are the Web Safety basics ? (Score:4, Insightful)
Once you lower the bar, there's no raising it back up again.
Re:What happened to good OS design? (Score:5, Insightful)
And how many of those good programs are at Sourceforge? What happens when a program at version 2.5.11 goes to version 2.5.12? Will Symantec and company suddenly rush to create the hashes needed to keep up with open-source development?
Implementing a policy like this can only benefit the large, established developers who'll be publishing software well-known to the whitelisters.
What about programs that run on, say, Java? Will every version of Azureus need to be whitelisted, or just the JVM software that talks directly to the operating system? What about programs that update themselves online? Will the new version still be whitelisted, or will the program stop working until McAfee updates its hash database?
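To make the version-churn point concrete, here is a toy sketch (the "binaries" are just byte strings standing in for real executables) of why a content-hash whitelist goes stale on every point release:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-ins for two releases of the same program; even a one-byte
# bugfix changes the hash completely.
v2_5_11 = b"azureus 2.5.11 ...rest of the binary..."
v2_5_12 = b"azureus 2.5.12 ...rest of the binary..."

whitelist = {digest(v2_5_11)}        # built against the release the vendor saw

print(digest(v2_5_11) in whitelist)  # True
print(digest(v2_5_12) in whitelist)  # False: re-certify after every update
```

Every auto-update, every nightly build, every recompile lands outside the list until someone re-certifies it, which is exactly the burden on fast-moving open-source projects described above.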
I suppose you could let users add unknown programs to their whitelist, but given that we know many users will click OK in response to any dialog box, that seems to undermine the entire system. If someone's gone to a bogus website to download that "NFL Game Tracker" that was advertised in recent spams, do you think they'll then refuse to add it to their whitelist if given the chance? I think they'll click the OK button and install the Storm trojan.
As other posters have said, there are other, better ways to solve these problems than whitelisting.
Re:What happened to good OS design? (Score:3, Insightful)
Am I the only one who finds that dumb?
Re:What happened to good OS design? (Score:3, Insightful)
First of all, your VM thing is a bit of a pipe dream. People are already upset about the cost of Windows. Do you think they are going to be happy about having to purchase multiple copies AND licenses for a VM? Tack on all the latest licensing issues and limited-install issues and you have a recipe for great fun. Never mind that it's only relatively recently that hardware has made this a feasible possibility for the desktop. Now take all those computers out there that aren't leet hot-off-the-shelf gaming machines... you know... the ones that most of the people affected by this kind of security issue actually use... and try to run VMs on them.
The people who figure out to go use something like VMPlayer and some of the free applications like the ubuntu/browser appliance thing are not the people who are hit hardest by this kind of security problem. Quite frankly I think blacklisting was a moronic idea from day 1. Marcus Ranum has a good paper on the dumbest ideas in security and "Enumerating Badness" and "Default Permit" are both in there. Whitelisting is actually the correct solution that was supposed to happen ages ago.
By the way, your solution doesn't really solve much unless those VMs are clean on every boot, with nothing ever written, and that makes things terribly difficult. Explain to grandma that she has to turn off the freeze, install program XYZ, and then turn the freeze back on. You are frequently lucky to explain the "install program XYZ" part. So your default-permit virtual machine gets infected and stays running as a VM zombie. Sure, it's easier to clean up, but rather than solving the problem of getting tagged in the first place, you just raise the bar of complexity an order of magnitude and expect joe sixpack user to understand how to operate the new monstrosity.
The best part will be when joe sixpack gets 3 VMs zombied without shutting them down... now his 1 zombie box is instead 3 zombie boxes! Hooray. Oh, and please ignore the fact that more modern malicious code can tell when it is in a VM environment and behave differently. And god forbid there be a vulnerability in the VM part.
Re:What happened to good OS design? (Score:2, Insightful)
With that out of the way, I'm not saying a white list is bad, but as with any security methodology, it does impose some down sides.
There is really no way to enforce globally what is a white listed program, as different organizations have different needs. So you are still prone to the jackass not researching what mindfark.vbs is and allowing it to unconditionally run.
Now, media is a frequently used Trojan horse for delivering viruses, in addition to executables. With the billions of websites, digital cameras, etc., etc. out there, are we going to be able to use this approach efficiently?
The only way for this to work accurately is to keep a list of names and signatures of programs allowed to run. What happens when there is an update? Now we need to keep track of multiple versions, and the more versions we store, the easier it is to slide things by: since signatures are usually not 1-to-1, we are increasing the chance of collisions. There is a multitude of scripts that can modify files to create collisions with legitimate files. It's only a matter of time for whatever algorithm is used; only a matter of finding the right number of no-ops, the incrementing of a worthless variable, or the modification of metadata and other non-viewable information in the media that causes a signature collision. Remember, the attacker has all the time in the world to sit in a test environment trying to match a signature without ever raising an alert to those they wish to attack.
Re:What happened to good OS design? (Score:3, Insightful)
ISTM that a different security model would remove the need for many of these programs, so it's moot to ask "what about app foo".
I know just enough about computer security to know that I know almost nothing, so please enlighten me. It seems there is a massive industry based on very failed concepts of security that have been kept around and worked around for too long. Many times on slashdot we say that we're not responsible for someone's failed business model. Likewise in this case. If your web-based virus scanner can't work and may even be completely unneeded, that's kinda too bad, isn't it?
Re:What happened to good OS design? (Score:4, Insightful)