How to Save Mac OS X From Malware
eXchange writes "Well-known hacker Dino Dai Zovi has written an article at ZDNet discussing last week's discovery of a critical threat to Mac OS X, and a subsequent announcement of a Trojan horse exploiting this discovery. He suggests that Snow Leopard, or Mac OS X 10.6, should integrate more robust means of preventing malware attacks. His suggestions include mandatory code-signing for kernel extensions (so only certified kernel extensions can run), sandbox policies for Safari, Mail, and third-party applications (so these applications cannot do anything to the system), and some lower-level changes, such as hardware-enforced Non-eXecutable memory and address space layout randomization."
signed kernel modules would be good for apple too (Score:4, Informative)
Signed kernel modules would not just stop malware; they would also stop some of the hacked (and custom-written) kernel modules being used to get OS X running on non-Apple machines (or to improve the experience of using OS X on those machines).
Re:Sandbox? (Score:3, Informative)
Because running as the user is basically just as good for the attacker. The user doesn't care what a piece of malware has infected or destroyed, only that it has done so.
Address space layout randomization (Score:5, Informative)
Apple already does address space layout randomization in Leopard (Mac OS X 10.5)
See "Library Randomization" on
http://www.apple.com/macosx/features/300.html#security [apple.com]
Notice that the new security features list also includes code signing and sandboxing. The technology is there; it's just not set up throughout the system.
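For the curious, you can observe library randomization from user space. The sketch below (Python, assuming a Unix-like system with ASLR enabled; the addresses themselves are illustrative) asks two child processes for the address of the same libc function. With randomization on, the two values usually differ:

```python
# Rough ASLR demo: compare the address of libc's printf across two
# freshly started processes. Assumes a Unix-like OS with ctypes support.
import ctypes
import subprocess
import sys

snippet = (
    "import ctypes;"
    "libc = ctypes.CDLL(None);"
    "print(ctypes.cast(libc.printf, ctypes.c_void_p).value)"
)

# Each child process gets its own randomized layout, so the same symbol
# typically lands at a different address in each run.
addrs = [
    int(subprocess.check_output([sys.executable, "-c", snippet]))
    for _ in range(2)
]
print([hex(a) for a in addrs])
```

If ASLR is disabled (or the library isn't position-independent), the two addresses come back identical, which is exactly the predictability exploit authors rely on.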
Re:mandatory code-signing? (Score:4, Informative)
hardware-enforced Non-eXecutable memory?
Unless you could turn it off, it just sounds like DRM.
This isn't DRM. This is what prevents a stack overflow or buffer overrun from executing code. There is absolutely nothing evil, or even potentially evil, about it. Marking your data segments NX means that they can't be executed, even if something bad happens.
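A toy illustration of the same idea at the page level (Unix-only mmap flags; this only allocates the page and never tries to jump into it):

```python
# Allocate an anonymous page that is readable and writable but NOT
# executable -- the "W^X" discipline that hardware NX enforces.
import mmap

page = mmap.mmap(
    -1,                                      # anonymous, not file-backed
    mmap.PAGESIZE,
    prot=mmap.PROT_READ | mmap.PROT_WRITE,   # no PROT_EXEC
)
page.write(b"\x90" * 16)   # bytes that happen to be x86 NOP instructions
page.seek(0)
data = page.read(16)
# Under hardware NX, transferring control to this page would fault:
# the CPU treats it as data, not code, no matter what bytes it holds.
print(data == b"\x90" * 16)
```

The point is that the bytes are perfectly readable and writable; only their execution is forbidden, which is why NX breaks classic stack-smashing payloads without restricting what you can store.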
mandatory code-signing?
Again, this isn't evil. I think it would be great if ALL code always had to be signed. It would pretty much kill polymorphic viruses, and put a real dent in the spread of rootkits etc.
The key to 'good' vs 'evil' with mandatory code-signing is who holds the keys. If I hold the keys to MY computer, then there is NOTHING WRONG with mandatory code-signing, because if there is something I want to run that hasn't been signed by [OS-vendor] I can sign it myself to run on my computer, my network, my enterprise...
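To make the "I hold the keys" point concrete, here's a toy loader sketch. Real code signing uses public-key certificates (e.g. codesign on OS X); the HMAC and the key name here are purely illustrative stand-ins to keep the example short:

```python
# Toy "mandatory code-signing" loader: code runs only if it carries a
# valid signature under a key the machine's OWNER controls.
import hashlib
import hmac

MY_KEY = b"key-generated-and-kept-by-the-machine-owner"  # hypothetical

def sign(binary: bytes) -> str:
    return hmac.new(MY_KEY, binary, hashlib.sha256).hexdigest()

def loader_allows(binary: bytes, signature: str) -> bool:
    # The kernel/loader would perform this check before mapping the code.
    return hmac.compare_digest(sign(binary), signature)

tool = b"\x7fELF...a tool I wrote myself, never seen by the OS vendor"
sig = sign(tool)                      # I sign it myself, so it runs here
print(loader_allows(tool, sig))       # accepted
print(loader_allows(tool + b"!", sig))  # tampered binary -> rejected
```

Because the owner holds MY_KEY, unsigned third-party code isn't blocked forever; it just has to be signed locally before the loader will accept it, which is exactly the 'good' variant of the policy.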
Microsoft does not require ... (Score:2, Informative)
Microsoft does not require that the code be signed by them. They simply require that the code be signed, by any certificate issued by a signing authority.
All the code we develop for Windows is signed by us, and installs perfectly fine on Vista, and Microsoft has never seen a single line of our code.
"local" != "physical" (Score:4, Informative)
You can run it via SSH as long as someone is logged into the console.
If you can ssh in, you already have local access.
"Local" is the counterpart of "remote". A "remote exploit" is one that you can perform without already having local execution access on the machine.
What you are talking about is "physical access".
Re:Summary For The Lazy (Score:4, Informative)
For the most part, that distinction is clear. A few programs blur the lines, though, and we should probably be asking whether that is a useful thing to do or just a security mess from lousy design.
With properly coded applications, both should be data if stored locally anyway. When accessed via a browser, we should establish a convention. I see no reason for Word or HTML files to do anything outside of the sandbox of the program opening them.
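That kind of per-document confinement is roughly what OS X's existing sandbox facility can express. A minimal sketch in its Scheme-like profile language (application and document paths here are made up; a real profile would need more allowances to be usable):

```scheme
(version 1)
(deny default)
; the viewer may read its own bundle and the one document it opened
(allow file-read* (subpath "/Applications/Viewer.app"))
(allow file-read* (literal "/Users/me/Documents/report.doc"))
; writes confined to a scratch directory; everything else stays denied
(allow file-write* (subpath "/private/tmp/viewer-scratch"))
```

A profile like this could be applied with the system's sandbox launcher (e.g. `sandbox-exec -f viewer.sb ...`), so even a malicious document could at worst scribble in its own scratch directory.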
That's easy, data... until you change the file to be executable and assign it a proper extension.
That's not a simplistic solution, nor even a solution in most cases. It is just a way for the manufacturer to transfer blame for security failures. Most don't even seem to be intended to increase overall security. That doesn't mean you can't make good security changes or simplify things in ways that make things easier for users. Seriously, what we have now is not working.
This is, in my opinion, a misstatement of the problem. The problem is not that users run programs they shouldn't; it is that users want to run programs they don't trust, but without significant risk. They can do that today using VMs, but surely OS manufacturers should be able to come up with a more convenient method of letting people run potentially dangerous software in a safe way.

The main problem now is that users have to take a gamble: I want to play this game, if it is a game, so I'll guess it isn't malware and give it a try. The OS should be telling them it is malware, or, if it is unknown, should be telling them what it is trying to do before it does it. You'd think this incredibly common use case would be a priority by now, but for the most part only Windows has a big trojan problem, and they also have a monopoly, so why should they care?