Gosling Claims Huge Security Hole in .NET 687
renai42 writes "Java creator James Gosling this week called Microsoft's decision to support C and C++ in the common language runtime in .NET one of the 'biggest and most offensive mistakes that they could have made.' Gosling further commented that by including the two languages in Microsoft's software development platform, the company 'has left open a security hole large enough to drive many, many large trucks through.'" Note that this isn't a particular vulnerability, just a system of typing that makes it easy to introduce vulnerabilities, which, last time I checked, all C programmers deal with.
Advertisement? (Score:5, Insightful)
To me, it sounded like a big advertisement for Java.
It's the developer's decision to use unsafe code in the
A hunting rifle can be used to kill people. Does that mean the trigger should only work after inserting a valid and current hunting license?
So you mean to tell me (Score:5, Insightful)
C'mon now. There is no vulnerability. Don't post this sort of crap. It's strictly knee-jerk material meant to bend a few people out of shape and start flames.
J2EE is great (for its target area)
Both are secure, stable and reasonably fast if you are a GOOD programmer. ANYONE who writes ANY C or C++ code that will be used in industry needs to ENSURE that they just take a few extra precautions and are aware of secure coding techniques in both languages. It's really quite simple.
To sum it up: nothing to see here folks.
Don't disagree with Microsoft... (Score:5, Insightful)
What a surprise! (Score:5, Insightful)
In Java, everything is an object! Oh...except for the basic types, you need to use object wrappers for those.
Java is a type-safe language at the VM level... (Score:5, Insightful)
To support C/C++ semantics (ad-hoc pointers) you'd have to throw all that out the window and I assume that's what he's talking about.
Pat Niemeyer,
Author of Learning Java, O'Reilly & Associates and the BeanShell Java Scripting language.
What a well researched article! (Score:5, Insightful)
Hey, what about the keyword unsafe in C#? Sheesh.
Re:Advertisement? (Score:2, Insightful)
Re:Advertisement? (Score:5, Insightful)
C and C++ allow for buffer overflows. They allow for improper or intentional coding to cause software to try to violate memory space of other functions or programs. They allow for memory allocation without necessarily providing any cleanup later. In the hands of bad, sloppy, lazy, or malicious programmers these traits have always proven to be a problem time and again on many different platforms. This doesn't mean that these languages are the wrong tool; I'd argue that part of Linux's success is because the kernel and most of the GNU-implemented services are written in these languages, which are flexible. Too much flexibility for the wrong purpose leads to problems though, just as too much rigidity leads to problems when things need to be flexible.
Re:Advertisement? (Score:2, Insightful)
DISCLAIMER: COMPLETELY OFF-TOPIC
I don't know what the law is, but if a hunting rifle can only be legally used for hunting, this is actually a pretty good idea. The card mechanism could also be used to enforce hunting seasons.
I realize this offends some people's sense of rights, but I'm not particularly inclined to defend somebody's "right" to use a firearm outside its intended purpose.
Someone should tell (Score:2, Insightful)
Why oh why (Score:4, Insightful)
It's the same with C. We should know by now "you cannot use C to handle untrusted data (i.e., data from untrusted machines on the net)". All such data needs to be handled in a sandboxed system, a system with safe memory access. This means something like Java or similar things.
A lot of people will make posts that say things like "C doesn't cause the problems, it's incompetent or lazy programmers who cause the problems." Whatever. No excuse. That's like saying "we shouldn't need seat belts or airbags; all we need is to make sure that drivers don't make mistakes." Drivers and programmers do make mistakes and that's why we need safety mechanisms in both cases. C provides none. Programming in C is like driving around in a car from the fifties, with no seat belts, no airbags, no head rests, no ABS.
So any decision to extend the use of C is just foolish. What is the purpose of doing this? If people must use horrible legacy code then just use it, but why drag that into new frameworks like .NET?
It does not compute, for me at least.
Re:James Gosling is an expert in this area (Score:1, Insightful)
Re:Java is a type-safe language at the VM level... (Score:2, Insightful)
Of course the article contains a few brief quotes from Gosling and fails to clarify whether he knew (and spoke of) the differences between safe managed code, unsafe managed code, and native code...
However, it is clear that *you* do not know.
Re:Advertisement? (Score:5, Insightful)
The article is heavy on sensationalism and short on content so it is difficult to tell what is actually being debated here, but I think that Gosling is claiming that support of C type handling in itself creates a chink in the armor of the CLR, regardless of any particular project's use of that feature.
Re:Advertisement? (Score:5, Insightful)
He just doesn't get it, or he's spreading FUD (Score:2, Insightful)
Re:Phew! (Score:3, Insightful)
Re:Rediculous (Score:5, Insightful)
To me this looks like a similar problem to allowing running native code via ActiveX. Yeah, we have permissions, signing and whatever - how much does it take for a trusted but buggy ActiveX applet to be exploited?
Huge mistake, IMHO. And do not compare this to JNI - I am no Java expert, but AFAIK you simply cannot call JNI functions from something like a web applet by design, whereas here it is at the discretion of the app developer.
Re:Advertisement? (Score:3, Insightful)
Obviously.
but if a hunting rifle can only be legally used for hunting
A hunting license licenses the owner to take a certain type of game (deer season, etc) on certain land (assigned state land, private land, etc) during certain times (hunting seasons, obviously) with certain tools (shotgun only, bow, etc). It only grants this, in the case of firearms, to people who already legally own them. A "hunting rifle" is simply a subset of rifle suitable for a certain task (which varies for the types of game). In every case the "hunting rifle" set overlaps other sets such as the "target shooting" set or "clay pigeon" set.
this is actually a pretty good idea. The card mechanism could also be used to enforce hunting seasons.
No, it's a terrible idea. Even setting aside basic rights, and assuming for a moment that a "hunting rifle" with no other legitimate purpose exists, you're proposing that it be completely inoperable for 11 months (or whatever) of the year? And you see nothing wrong with forbidding use of a dangerous tool except for the brief times we let people loose in the woods with them? Not allowing people to become comfortable, or even passingly familiar, with it until they're hopped up on adrenaline in the forest? You see nothing inherently dangerous with that at all?
Re:Phew! (Score:3, Insightful)
Fair enough, but at least 90% of the stuff written in C and C++ doesn't need to be.
Re:Advertisement? (Score:5, Insightful)
Nobody is going to use C or C++ to write a completely new program under .NET. There are occasions where I might use C for something I wanted to make cross-platform, but no way would I ever go near C++.
Most people who are going to use the new .NET support are people who have legacy C programs and want to gradually transition them to the .NET base in stages. That makes a good deal of sense.
The other constituency is folk who are writing stuff that is almost but not quite at driver level.
Re:JNI (Score:3, Insightful)
Java lets you write to the user's filesystem. Does that make it insecure? You could run a program to wipe out your hard drive!
But Java allows for a "sandbox". So does
Re:Phew! (Score:4, Insightful)
I feel confident in predicting that most of userland will eventually run in the context of some virtual machine or other. Of course, that doesn't exactly make me a prophet, since that's the plan for Longhorn, but I think it will become the norm on other platforms as well.
It would be nice if, in the long run, operating systems became irrelevant when it comes to choosing applications. You go with whatever has the best track record for speed, security, or whatever, and then just choose whatever applications you like. Since the virtual machine runs everywhere, so will your software.
Re:Phew! (Score:5, Insightful)
I mean seriously, this is like claiming ASSEMBLY is a worthless insecure language because you can hang the system while in supervisor mode, due to ineptness? Sheesh.
Re:Phew! (Score:3, Insightful)
Re:Advertisement? (Score:3, Insightful)
A hunting rifle is fine for some purposes, but decorating your house with them is unwise. Java, effectively, has support for making absolutely certain that the rifle cannot be fired, and therefore you can feel okay about having it on your mantle.
Of course, he's theoretically wrong; the C standard actually does not exclude the possibility of preventing programs from doing bad things, by, for example, giving a bus error if you dereference a pointer to freed memory. You could have garbage collection in C if you really wanted, because there is a limited amount you can do to pointers and still be necessarily able to use them again. It's just that C implementations almost never do anything like this, because it would be slower and more resource-hungry than Java (because Java has limitations which then permit optimizations by the system). On the other hand, it might be worthwhile having such an environment, so that you could run untrusted code by developers who expect to be trusted.
you got it backwards (Score:4, Insightful)
You are kidding, right? Do you seriously believe Java is the first or only language to guarantee runtime safety? Safe languages are the rule, not the exception.
To support C/C++ semantics (ad-hoc pointers) you'd have to throw all that out the window and I assume that's what he's talking about.
C# distinguishes safe and unsafe code. C#'s safe code is as safe as "pure" Java code. You can think of C#'s unsafe code (or its equivalent in C/C++) as code linked in through the JNI interface, except that C#'s unsafe code has far better error checking and integration with the C# language than anything invoked through JNI.
Altogether, C#'s "unsafe" construct results in safer and more portable code than the Java equivalent, native code linked in through JNI.
Pat Niemeyer, Author of Learning Java, O'Reilly & Associates and the BeanShell Java Scripting language.
Well, then I suggest you learn some languages other than Java before making such ridiculous statements.
Re:Java is a type-safe language at the VM level... (Score:2, Insightful)
Yes, actually, it does. Have you checked it recently? The only overhead that natively compiled Java code would have over comparable C++ is that it always does array bounds checking. Other than that you just have to ask yourself, what kind of optimization can a static compiler (C/C++) do that a dynamic, profiling runtime compiler (Java) can't do?
Java suffers a penalty of about 500% on average because of pointer chasing. There is no compiler or optimization that exists today that can optimize that, but Java as a language requires it, while C/C++ does not. (Think about how an array of complex numbers can be implemented in either language, for example.)
Static versus dynamic compilation has little to do with this, since both Java and C/C++ can be compiled statically or dynamically (although Java tends to benefit more from dynamic compilation).
Re:Java is a type-safe language at the VM level... (Score:3, Insightful)
Re:Java is a type-safe language at the VM level... (Score:2, Insightful)
I run projects developing business database-based software. Stuff like java and .net suits me fine. Do I need to access hardware beyond what those languages give me? Nope. Ever likely to? Probably not, but there are ways to interface.
How critical is performance? It's as important as it needs to be. If a daily process has a 12 hour window to run in and takes 1 hour instead of 30 minutes, do I care? It's fast enough.
Re:Phew! (Score:5, Insightful)
For most applications assembly is a worthless insecure language, and you should stick to a higher level language if you don't want to introduce problems (for anything larger, but probably including "hello world").
Re:Java is a type-safe language at the VM level... (Score:3, Insightful)
Still, some of us C/C++ coders get pretty tired of the assumption that all technology benefits accrue to Java (or will Real Soon Now - the 4GHz and 5GHz Pentiums should really help) while none accrue to C/C++.
Oh, and can I mention that heap? Java's insistence on the garbage collection model prevents the deterministic destruction of objects, upon which some programming idioms rely ("resource acquisition is initialization" is one). I don't mind that a heap exists for programmers who wish to use it, but It Would Be Nice If Java also permitted automatic allocation (and, more importantly, deallocation) in the C++ model. The RTE would have to deal with stale references to automatic objects, but Java could take care of that by raising an exception when auto destruction would create a stale reference.
Re:Phew! (Score:5, Insightful)
That's just restating the question.
If managed languages make a certain class of exploits impossible or very unlikely while C doesn't, then C is insecure relative to those languages.
A good C programmer might be able to cut the exploit rate down to some very small value, but they're going to work pretty hard to get to that point while people in managed languages get it for free. And good C programmers still fuck up sometimes.
Of course, there's other ways to screw up. No language is immune from security problems. Using a "managed" language is nothing more than risk management, but it's pretty effective.
Re:Phew! (Score:5, Insightful)
Re:Advertisement? (Score:5, Insightful)
All use of unsafe features in
As far as I know, there is no example of unmanaged code that can violate the managed code type system, and
Also, this ignores that C/C++ support is much more complicated in
Frankly, this seems like a bit of sour grapes to me.
Re:Phew! (Score:4, Insightful)
Are they really that high grade then?
Re:Phew! (Score:2, Insightful)
Bullshit. The rate of security holes will be lower, yes. But type safety never guarantees correctness. As such, it never guarantees security.
Re:Phew! (Score:3, Insightful)
It all comes down to bad OS design in general. Take the IE exploits, for example. Why the heck can you get so much system access through an exploit in a web browser?!? Let's be honest here, the security model employed in most of today's OS's is mind boggling in its ineptness.
Linux is not immune either. Many distributions out there still have absolutely retarded setups like having server daemons running as the root user. You run each server daemon under its own user account and give the user no permissions on anything that it doesn't need for that particular daemon, and you can at least save the rest of the system if the daemon is hacked.
I love linux, but I'm sick of having to apply SELinux patches, Pax/Grsecurity patches, ACL patches, and set up complicated user jails just to feel like my system is safe.
Re:Phew! (Score:3, Insightful)
C programmer deals with it (Score:5, Insightful)
Yes, CowboyNeal, but do they want to deal with it, and should they deal with it?
For every programmer who reads security bulletins and keeps tabs on the latest string-copying buffer overflow issues and fundamental security principles, there are a hundred who don't know or care.
C is a high-level language that:
Programmers want to be productive -- most want to make colourful stuff happen on the screen, not fiddle around with buffer guard code. So the more security can be built into the language and its running environment, the better.
Many languages, such as Python or Ruby, provide security against what I mention in my first bullet, through a virtual machine. They're not impenetrable, and are of course, as dynamic languages, subject to a different class of security holes (eg., string evaluation of code), but they're a step up from the C level.
Other languages, like Java, provide capability-based security models, allowing for sandbox environments with fine-grained control over what a program may or may not do. Java's security system is ambitious, but since most Java apps run on the server these days, it's not frequently used, and except for browser applets, Java code tends to run unchecked.
In a way, Java tries to do what the OS should be doing. Programs run on behalf of their human users, and their destructive power is scary. Why should any given program running on my PC have full access to my documents or personal data? We're entering an age where we have more and smaller programs, and the difference between "my PC" and "the net" is increasingly blurred. Operating systems need to evolve to distinguish between the different capabilities they can grant to programs, or processes -- we need to think about our programs as servants that are doing work for us by proxy.
The same way you wouldn't let a personal servant manage your credit cards, you don't want to let your program do it -- unless, of course, it was a servant (or, following this metaphor, program) hired to deal with credit cards, which introduces the idea of trust. The personal accountant trusts the bank clerk, who trusts the people handling the vault, who trust the people who built the vault, and so on.
In short, any modern computer system needs to support the notions of delegated powers, and trust.
Programmers will certainly never stop having to consider vulnerabilities in code. But painstakingly working around pitfalls inherent in one's language, be it C or indeed .NET -- we need to evolve past that. The users, upon whom we exert so much power, certainly deserve it.
Re:Phew! (Score:2, Insightful)
Re:Phew! (Score:5, Insightful)
Different != Better (Score:4, Insightful)
- segfault in C
- throw an unhandled exception in python
- churn up 2GB swap in java (or something similar)
In any case, think DoS. The solution is to program competently, regardless of language.
Re:Phew! (Score:3, Insightful)
It would help if C automatically initialized RAM (and I've known at least one C implementation that did, but it isn't part of the language spec), just like it helps not to be able to use 0 as a pointer address. But the only thing that would make C safe would be to put it in a box, and not let it look outside. Even then you'd get buffer overflows within the code, but if it was restricted from using pointer references outside the bounds of the program, that would help.
Saying that "C is not inherently insecure" is like saying a sharp knife without a hilt isn't dangerous to the user. With reasonable care one can usually ensure that nothing adverse happens, but it always takes a lot of care and attention, and the time that you slip can hurt you badly.
Re:Phew! (Score:5, Insightful)
These languages are inherently insecure because they allow for mistakes that other languages do not allow for. Combine this with the fact that it doesn't take a fool to make mistakes and you have the perfect proof that C is inherently insecure. Refute either of those arguments and you might have a point.
The problem with C is that it takes the inhuman capability to not make mistakes to end up with secure software. That's why all of the long lived C software projects have to be constantly patched to correct mistakes. Buffer overflows are no accidents, they are the predictable result of using C. Use C -> expect to deal with buffer overflows, memory leaks, etc. Every good C programmer knows this.
The difference between good C software and bad C software is that in the latter case the programmers are surprised, whereas in the former case the programmers actually do some testing and prevention (because they expect things to go wrong). Mozilla Firefox comes with a talkback component because the developers know that the software will crash sometimes and want to be able to do a post mortem. The software crashes because the implementation language of some components is C. IMHO Mozilla Firefox is good software because it crashes less than Internet Explorer and offers nice features.
Of course we have learned to work around C's limitations. Using custom memory allocation routines, code verifiers and checkers, and extensive code reviews, we can actually build pretty stable C software. The only problem is that C programmers are extremely reluctant to let go of their bad habits. Some actually think they are gods when they sit down behind their editors and do such stupid things as using string manipulation functions everyone recommends avoiding like the plague, trying to outsmart garbage collecting memory allocators by not using them, etc. If you built in a compiler switch which enforced the use of the improvements listed above, most of the popular C software simply wouldn't compile. But then it wouldn't be C anymore, because the whole point of C is to allow the programmer to do all the bad things listed above, even accidentally.
IMHO programmer reeducation is an inherently bad solution to inherent security problems of the C language. You can't teach people not to make mistakes. You need to actively prevent them from making mistakes. You can make mistakes in other languages but they are a subset of the mistakes you can make in C. Therefore you should use those languages rather than C unless you have a good reason not to. Any other reason than performance is not good enough.
Re:Phew! (Score:3, Insightful)
Buffer overflows are not the only kind of bug that plagues development. Quite a few "plain old logic errors" or "insecure designs" are sources of problems.
I mean I just reinstalled pam last night [for the second day in a row... diff versions] with maybe 20 patches applied to it. I doubt all 20 [or any at all] were due to buffer overflows.
A proper programmer would do proper bounds checking on their own [e.g. I need to store N bytes, do I have N bytes available]. People who don't shouldn't be writing software. Period.
And yes, "shit happens" but you can just as easily screw up the logic or other aspects in a typesafe language and end up with lowered security.
SSL rollback anyone?
Tom
Re:James Gosling is an expert in this area (Score:3, Insightful)
Rule for selecting programming language: (Score:3, Insightful)
Rule:
If you don't need to spend 5-10% of your development time speed/size optimizing your program to make it usable, you are not using a language/abstractions that are high-level enough for your task.
Explanation:
If I use a high-level language (say Haskell/OCaml/Clean/Common Lisp) and use all its abstraction powers, the program code will usually be 10-50% of the size of the same program written in C/C++ (and even Java). Now that X% (50-90%) of slack will take X% of development time and contain X% of the bugs. It will make the program much harder to change, too. You can see that the 5-10% spent on optimization (you can even write the fast parts in C if you like) will pay off.
If you don't believe it, compare the code of GNU arch [srparish.net] to darcs [abridgegame.org]. Both are similar version control systems.
Re:Phew! (Score:1, Insightful)
Ah...another one that has never been a novice.
Re:Phew! (Score:3, Insightful)
Re:Advertisement? (Score:3, Insightful)
"Nested type" and "inner class" are, as far as I know, equivalent terms and the latter has been a common term used in both languages. However, this is the first time I know of someone who differentiates the two terms in practical use.
What makes the implicit "this.Outer" the essential feature of an "inner class" in your terminology? I'm also curious as to why you consider it such an important feature.
Once more this seems to be a matter of designer taste, but unlike anonymous inner classes, I do not see how this is particularly useful.
I have seen 2 broad categories where the reference to the outer class is useful:
- Inner classes for logic encapsulation in a complex class: passing the reference explicitly is no pain here, though.
- Anonymous inner classes: most useful because the class declaration itself is overkill, doing the constructor plumbing even more so.
I always saw the non-static inner class as a nice element of syntactic sugar that quickly gets out of control, because it solves a non-problem if you ignore anonymous inner classes. In the process they complicate the language with special semantics and syntax for little need. Of course, once you bring this in to support anonymous inner classes it pays for itself. But the problem is that anonymous inner classes have almost always been a crufty solution themselves... 99% of the time I used them effectively they were functors, and what I wanted was a method, not a full class.
Ignoring anonymous inner classes for a sec:
In Java, I do not have to pass an instance of my object to the inner class, so I get 'for free' the ability to write this (horrible) syntax:
int x= this.ExtremelyLongNameBecauseWeKnowClassesGetLong
If the class has a non-trivial size (which for non-anonymous classes can easily happen) or the reader of the code is not familiar with the special semantics of Java's non-static inner classes, it's not immediately obvious where this reference came from.
As a programmer I have to make an explicit decision NOT to keep an extra reference in my object. If I forget to put the static modifier, I get this dangling implicit this.Outer 'for free' too, when normally I just don't need it, and it pollutes the contents of the object.
In C#, you have to explicitly decide you'll need a reference to your outer type, and then use it:
public MyFooInnerClass(MyBarOuterClass bar)
{ myBar = bar; }
int x = this.myBar.privateMember.foo();
The latter code seems to me more straightforward and readable, not depending on special semantics to know where myBar came from.
Sure, I have to pass the 'outer' object to the constructor myself instead of letting the compiler do it, but in exchange I don't have to worry about a different instantiation or reference syntax. Anyone who has never seen this done before will not be too surprised this works, and will have little trouble understanding what is going on.
To keep things clean in my Java code, I always made inner classes static until I actually NEEDED the outer reference. That is no different than explicitly passing the reference to the outer object in the constructor when you find you need to, only the latter is always an explicit decision by the programmer.
That seems to me a good thing, but once more it's a matter of taste.
Isn't Java or parts of it written in C/C++? (Score:2, Insightful)