Security Review Summary of NIST SHA-3 Round 1
FormOfActionBanana writes "The security firm Fortify Software has undertaken an automated code review of the reference implementations of the NIST SHA-3 round 1 contestants (previously Slashdotted). After a follow-up audit, the team is now reporting summary results. According to the blog entry, 'This just emphasizes what we already knew about C: even the most careful, security-conscious developer messes up memory management.' Of particular interest, Professor Ron Rivest's (the "R" in RSA) MD6 team has already corrected a buffer overflow pointed out by the Fortify review. Bruce Schneier's Skein, also previously Slashdotted, came through defect-free."
ANSI C (Score:5, Insightful)
That is what they get for mandating the code be in ANSI C. How about allowing reference implementations in SPARK, Ada, or something else using design-by-contract? After all, isn't something as critical as an international standard for a hash function the type of software d-b-c was meant for?
Re: (Score:3, Interesting)
Why do they even have a reference implementation for a hash function in a programming language? Wouldn't just defining the function mathematically be less error prone and just as effective?
The reason is in the summary... (Score:5, Insightful)
Re: (Score:1)
Re: (Score:1, Informative)
1. Because we, rank-and-file developers, have to use it afterward (and some of us write in C or C-derived languages, like, oh, I don't know, pretty much all applications on your desktop?)
2. Because it is impossible to compare the performance of cryptographic algorithms if they are not written in the same language (preferably one directly translatable to machine code)
Re: (Score:1, Insightful)
Ummmm... because it wouldn't be a reference implementation if it wasn't actually implemented?
Re:ANSI C (Score:5, Insightful)
Presumably one of the things they want to evaluate is performance.
Re: (Score:2)
Re:ANSI C (Score:5, Insightful)
Because you can't compile a mathematical definition.
If we imagine that the hash function came only as a mathematical definition, how would you test that your new implementation in LangOfTheWeek is correct?
Well, you have 2 options. One, you can prove that your program behaves, in every important way, the same as the definition. This is long, tedious work, and most programmers don't even have the necessary skills for it. Two, you can make a reference implementation in some other language, and compare the outputs.
Now, given, say, 100 programmers each working on their own functions, ideally we should end up with 1 resulting behaviour. That would mean that everybody implemented the algorithm 100% correctly. However, the actual number will be between 1 and 100, depending on the skills of the programmers and the care they've taken in implementing the functions.
Now, what's the result here? (no pun intended) It's likely to be chaos.
That's why it's very convenient to have a single reference source.
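To make option two concrete: the comparison is just a known-answer test over the digests. A minimal C sketch, where hash_ref() and hash_mine() are hypothetical stand-ins for the reference and the new implementation (a 32-byte digest is assumed):
#include <stdio.h>
#include <string.h>
/* Hypothetical interfaces: message in, 32-byte digest out. */
void hash_ref(const unsigned char *msg, unsigned long len, unsigned char out[32]);
void hash_mine(const unsigned char *msg, unsigned long len, unsigned char out[32]);
int main(void)
{
    const unsigned char msg[] = "abc";
    unsigned char a[32], b[32];
    hash_ref(msg, 3, a);
    hash_mine(msg, 3, b);
    /* Identical digests over many varied inputs is the practical
       substitute for a correctness proof. */
    if (memcmp(a, b, 32) != 0) {
        puts("mismatch: implementations disagree");
        return 1;
    }
    puts("ok");
    return 0;
}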
Re: (Score:2)
Difference between "program" and "mathematics"? (Score:2)
Because you can't compile a mathematical definition.
If you've read the works of E.W. Dijkstra (start with Cruelty [utexas.edu]), you'll understand that a programming language isn't much more than a formal system for expressing mathematical definitions. Perhaps Haskell or another purely functional language fits your intuitive understanding of a "mathematical definition" better than a procedural language like C, C++, P*, or Java.
Re: (Score:1)
Re: (Score:2)
Nooo... Itsa mee... MAARIOOO!!
Reference implementation (Score:4, Informative)
In a word, no. A reference implementation is supposed to be a working version of the code, not just a mathematical description. With a working version, it's possible to do things like test its real world performance or cut and paste directly into a program that needs to use the function. That's obviously only possible if you have a version that works on real-world processors.
Consider Skein as an example. One of the things that Bruce Schneier described as a major goal of its design is that it uses functions that are highly optimized in real-world processors. That means that it's possible to make a version that's both very fast and straightforward to program, an important criterion for low-powered embedded applications. You won't discover that kind of detail until you implement it.
Re: (Score:1)
Re: (Score:3, Informative)
Mathematically, anything is feasible; however, if you impose a real-world constraint, such as requiring an implementation, that greatly narrows the field.
Furthermore, one of the judging factors is the speed and portability of the algorithm on a wide variety of commonly used platforms - it doesn't make sense to come up with a super-cool hash function that only works well on, say, an x86.
The short of it is that people make mistakes from time to time, and it is true that perfection is an important factor.
Re: (Score:3, Insightful)
An implementation in a programming language is a way to define a function mathematically.
That is what a programming language is - a way to precisely define an algorithm.
But C is a low-level language, and therefore maybe a bad choice for function definitions. Instead, in my experience, implementing an algorithm in a high-level functional language like Haskell will often result in a beautiful and readable mathematical function definition.
Re: (Score:1, Troll)
Re: (Score:2)
What if they need one of those pesky algorithms for that?
Look up bootstrapping. Most modern compilers can compile themselves using a stage or two of simpler compilers, built with either an existing compiler, an assembler, or maybe some hand-written machine code. All the important parts of the compiler (parser, optimizer, etc.) can be written in the same language that the compiler can compile.
Trusting trust (Score:2)
Look up bootstrapping. Most modern compilers can compile themselves using a stage or two of simpler compilers
Look up trusting trust [bell-labs.com]. Defects in the compiler, whether intentional or unintentional, can propagate themselves to the compiled work, even if the compiled work is the compiler itself.
Re: (Score:2)
Look up trusting trust [bell-labs.com]. Defects in the compiler, whether intentional or unintentional, can propagate themselves to the compiled work, even if the compiled work is the compiler itself.
This always gets me though: why doesn't this apply to C?
Languages with diverse vs. single-source compilers (Score:2)
This always gets me though: why doesn't [the trusting trust attack] apply to C?
Bruce Schneier pointed out [schneier.com] that one can bootstrap a compiler using a different implementation of the language as a (probabilistic) measure against defects introduced by trusting trust. Build it on systems with different compilers, bit-compare the binaries generated on each system, and if they match, you can be reasonably sure that there is no such defect. But unlike C, which has implementations from GNU, Borland, Watcom, M$, Green Hills, and numerous other vendors, a lot of the managed languages lack multiple independent implementations.
Re: (Score:2)
Unless you manually prove all the basic results of set theory for yourself (as well as philosophically agreeing that ZF is a good choice of set theory) and then build mathematics from it and derive formal languages from that, you probably shouldn't trust any code on any computer. Even then you can't trust the hardware, or even the wetware in your head.
I was just explaining how to bootstrap a compiler, not the finer points of epistemology.
Re: (Score:3, Insightful)
C is a bad choice for mathematical function definitions, but it's a fantastic choice for integrating into virtually any stage of a software project. It can be used in an OS kernel, a standard portable crypto library (e.g. OpenSSL), embedded firmware, what have you. All of this with NO more library dependencies than the bare minimum memory management, and most crypto/hash functions don't need those because their state fits in a fixed-size structure. So you can have the mythical 100% standalone C code that fits anywhere.
cryptol (Score:4, Interesting)
But that just raises the question of how to define a hash function mathematically. The lambda calculus? Gödel numbers? Things like cryptographic hash functions don't tend to be nice algebraic thingies like f(x)=x*x+7, especially since they're usually iterative and deliberately messy - the pretty functions are likely to be less secure.
On the other hand, there are things like cryptol [galois.com] in which you may be able to specify hash functions more mathematically. For example, here is a cryptol implementation of skein [galois.com].
Re:ANSI C (Score:5, Insightful)
What did they get? You realize this is just an ad for Fortify, right? Out of 42 projects, they found 5 with memory management issues using their tool. Maybe instead of switching to SPARK, the 5 teams that fucked up could ask the 37 that didn't for some tips on how to write correct C.
PS: (Score:1, Funny)
The alternative was supposed to be throwing money at Fortify by the way. If your conclusion is to switch to SPARK then Fortify needs to work on their PR, *cough*, I mean blogging.
dandelion orchard (Score:3, Insightful)
I know nothing of the sort. How about asking some developers who have a history of getting both the security and the memory management correct which intellectual challenge they lose the most sleep over?
The OpenBSD team has a history of strength in both areas. I suspect most of these developers would laugh out loud at the intellectual challenge of the memory management required here.
Re: (Score:3, Insightful)
What I get out of this:
"We're going to give you more shit to think about by making you use C. if you can't deal with all the stupid shit C throws at you, you suck."
Which is a shit argument. Just use a better language that gives people less to worry about, and develop from there. Having to debug the shit out of a program for obscure memory management issues shouldn't be a test of your competence. You should be able to focus on the task at hand, nothing else.
Re:ANSI C (Score:5, Insightful)
That is what they get for mandating the code be in ANSI C.
Because most of the systems out there use C for the performance-sensitive bits (and when asm optimization is done, people generally use a C implementation as a reference, since C and asm are similar in many ways).
When they start doing Linux and Windows and other popular systems primarily in Ada, you can start going WTF over people posting ANSI C code. Until Java, Ruby, and Python are no longer dependent on C/C++ implementations for their functionality, we'll just have to suffer with C.
Re: (Score:2)
Couldn't it be done with Haskell? Haskell generates code equal to C, especially for math problems (or so they say).
Re: (Score:3, Interesting)
I hate to have to repeat it for the thousandth time, but Java's so-called virtualization comes crashing right down if you have even a single threading bug. Let me explain how it works.
Java gets compiled to machine code at runtime. Unlike machine code made from C code, the machine code really does have some nice protections from address and type confusion, with a generally acceptable performance penalty.
However it does NOT have ANY protections from threaded race conditions, so if you make any mistake in this area, it all comes crashing down.
Re: (Score:2)
Re: (Score:2)
"Just Works"? How much Java have you actually used? For the space of SEVERAL MONTHS, the official "production-quality" Sun JVM had 64-bit JIT bugs that made it crash very often on very popular projects like Eclipse. They and their users had to wait for months for the JVM to be fixed upstream, and for those fixes to trickle down into their managed environments, which often takes another few months of testing. Don't talk to me about "Just Works", Java is software just like any other, and far from the highest
Re: (Score:2)
memory leaks are not a security issue. You're thinking of buffer overflows.
Why Java and not Haskell, Erlang, or Lisp, which also have these features and then some?
I would take DbC as an important language feature over some "ultra secure" memory idea that seems to have little real-world value.
Re: (Score:2)
memory leaks are not a security issue.
Yes, they are. What happens to the program when it leaks all of the available memory space?
Re: (Score:2)
Yes, they are. What happens to the program when it leaks all of the available memory space?
Do you count denial of service as a "security issue"? If the answer is yes, then I will concede your point.
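To make the failure mode concrete, here's a minimal C sketch of a leak becoming a denial of service in a long-running process (get_request is a hypothetical stand-in):
#include <stdlib.h>
int get_request(char *buf, unsigned size); /* hypothetical */
void serve_forever(void)
{
    for (;;) {
        char *buf = malloc(4096);
        if (buf == NULL)
            abort(); /* heap exhausted: the service is now down */
        get_request(buf, 4096);
        /* buf is never freed, so every request leaks 4 KB until
           malloc fails and the process dies - a denial of service. */
    }
}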
Re: (Score:2)
Because you want vendors to just use the reference code, without screwing it up by implementing their own C version from the Ada reference code.
Re: (Score:3, Insightful)
That is what they get for mandating the code be in ANSI C. How about allowing reference implementations in SPARK, Ada, or something else using design-by-contract? After all, isn't something as critical as an international standard for a hash function the type of software d-b-c was meant for?
C is the lowest common denominator. Once you have a library in C that's compiled to native code, you can link just about any other language to that library: Perl, Python, Ruby, Java, Lisp, etc., all have foreign function interfaces to C.
C can also compile to just about any processor out there: x86, SPARC, POWER, ARM, etc. Over the decades it's become very portable and optimized.
C is also known by a large subset of the programmers out there, so if people want to re-write the algorithm in another language, the C reference gives them a starting point most of them can read.
Re:ANSI C (Score:4, Insightful)
After all, isn't something as critical as an international standard for a hash function the type of software d-b-c was meant for?
No.
The key desirable characteristics of a good secure hash function are (in this order): security first, then performance.
If the NIST AES contest is any example, what we'll find is that nearly all of the functions will be secure. That's not surprising since all of the creators are serious cryptographers well-versed in the known methods of attacking hash functions, so unless a new attack comes out during the contest, all of the functions will be resistant to all known attack methods.
With security out of the way, the most important criterion that will be used to choose the winner is performance. This means that every team putting forth a candidate will want to make it as fast as possible, but it still has to run on a few different kinds of machines. The closer you get to the machine, the faster you can make your code -- and this kind of code consists of lots of tight loops where every cycle counts. But assembler isn't an option because it needs to run on different machines.
That leaves C as the best choice. It's the portable assembler.
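To illustrate the "every cycle counts" point: hash inner loops lean on primitives like rotation, which C has no operator for, but the standard idiom (a generic sketch, not taken from any candidate's code) compiles down to a single instruction on most targets:
#include <stdint.h>
static inline uint32_t rotl32(uint32_t x, unsigned r)
{
    /* r must satisfy 0 < r < 32; gcc and most other compilers
       recognize this idiom and emit one rotate instruction. */
    return (x << r) | (x >> (32 - r));
}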
Re: (Score:3, Interesting)
With security out of the way, the most important criterion that will be used to choose the winner is performance.
Probably no one will ever see this self-reply, but I just noticed that the Skein site makes a good argument that the paring down of the candidate list should (will?) happen in the other order.
The idea is that since it's a huge amount of work to cryptanalyze a hash function, and since it's easy to measure performance (in time and space), the thing to do is to first toss out all of the slow and/or memory-hungry candidates. Obviously, if all of the fast and tight candidates were found to have security flaws, the eliminated ones could always be revisited.
Re: (Score:1)
Oh, interesting. VALGRIND [valgrind.org] looks like dynamic analysis. The results in the article are from static analysis. Each is really useful, but they tend to find different sorts of problems.
If you'd like to download the five problematic submissions and run VALGRIND against them, please email me the results. I'd be interested to see what is actually detected.
Re: (Score:2)
If you'd like to download the five problematic submissions and run VALGRIND against them, please email me the results. I'd be interested to see what is actually detected.
*seconds the request*
$MY_SLASHDOT_USERNAME at gmail
Disclaimer (Score:2, Interesting)
I should add that I work for Fortify and that I initiated the SHA-3 review in my spare time as a private project. The Slashdot article on December 21 caught my interest.
Re: (Score:2)
If the other 37 projects didn't have any significant flaws in the first round of this contest, then that doesn't say to me "well, obviously no-one can do memory management properly" - it says that people make mistakes.
Re: (Score:3, Interesting)
Yeah, both very good points.
People do make mistakes. Even geniuses, when they're trying really hard to be careful. Personally, I see recognizing that as a validation for code review (including the automated code review that I do).
I want the winning entry for this competition to be flawless to the extent that's feasible. Right now, my job includes finding SHA-1 used for cryptographic key generation, and telling people to replace that with something better. I don't want to be pulling out SHA-3 in a couple of years.
Re: (Score:2)
So which of the vulnerabilities did you hold back on so that you can exploit it in a couple of years? ;)
Re: (Score:2)
I think that was the article I submitted, expressly with the intent that it would catch the attention of people like yourself who could contribute to the auditing. This improves the quality of the submissions, perhaps identifies flaws in algorithms, and in general leads to a better contest between better competing implementations.
Personally, I'm gloating a little because the functions I considered to have such cool names (e.g. Blue Midnight Wish) all came through clean and are also listed on the SHA3 Zoo as unbroken.
Re: (Score:1)
Yes, it was yours. Thank you very much for the inspiration. I'm glad to help in what little way I can.
I'm really in awe of the people (whoever they are) who are actually evaluating the algorithms themselves.
Re: (Score:1)
...the people (whoever they are) who are actually evaluating the algorithms themselves.
I figured out who they [tugraz.at] are.
Who's this Bruce Shneieier guy? (Score:5, Funny)
"... because implementation is where people screw up." ... came through defect-free."
"Bruce Schneier's Skein,
So by deductive logic, Bruce is a robot. Also previously slashdotted.
Re: (Score:2)
No, all it proves is that he's not a person. This leaves all advanced alien lifeforms, artificial intelligences and supernatural phenomena as possibilities.
Re: (Score:2)
Bruce is a robot
And that's not all... [geekz.co.uk]
Hardly "memory management" (Score:4, Insightful)
This doesn't follow from TFA. The blog points out two instances of buffer overflows. For the first one, you could argue they messed up "memory management", because they used the wrong bounds for their array in several places... but they don't sound very "careful" or "security conscious", since checking to make sure you understand the bounds of the array you're using is pretty basic.
But that's not what bothered me. The second example is a typo where TFA says someone entered a "3" instead of a "2". In what dimension is mis-typing something "messing up memory management"? That just doesn't follow.
Re: (Score:2)
In what dimension is mis-typing something "messing up memory management"? That just doesn't follow.
I haven't evaluated the code in question, because math scares me, but if someone makes a fencepost error (or just a typo - fenceposting is a common cause of off-by-one errors), it's entirely possible to be mucking with memory that is a byte or a page off from the memory you think you're working on. So that's an example of how mis-typing something could cause an error in memory management (if in one function you have it right, and in another you have it wrong... you can't even get it right by getting lucky).
Re: (Score:1)
drinkypoo, the math is as easy as 3 >= 3. See here:
bcount = ss.sourceDataLength;
ss.sourceDataLength = bcount + databitlen;
if (ss.sourceDataLength < (bcount | databitlen))
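/* The check above is a wraparound test: without overflow,
   a + b = (a | b) + (a & b), which is always >= (a | b). */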
Re: (Score:1)
One reply deep in comment 26951319 [slashdot.org] I demonstrate that typing the "3" instead of "2" improperly accesses memory space that may or may not be allocated. This type of out-of-bounds access is mismanaging memory.
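For illustration, the class of bug looks like this - a minimal hypothetical C sketch, not the actual submission code:
#include <string.h>
#define STATE_WORDS 2 /* the intended bound */
struct ctx {
    unsigned long state[STATE_WORDS];
};
void init(struct ctx *c)
{
    /* One mistyped character - a 3 instead of a 2 - and this
       zeroes one word past the end of c->state: a buffer overflow. */
    memset(c->state, 0, 3 * sizeof(unsigned long));
}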
In defense of C (Score:5, Insightful)
Buffer overflows are not hard to avoid; they are just something that must be tested for. If you don't test, you are going to make a mistake, but they are easy to find with a careful test plan or an automated tool. Apparently those authors who had buffer overflows in their code didn't really check for them.
C is just a tool, like any other, and it has tradeoffs. The fact that you are going to have to check for buffer overflows is just something you have to add to the final estimate of how long your project will take. But C gives you other advantages that make up for it. Best tool for the job, etc.
Re:In defense of C (Score:4, Insightful)
Buffer overflows are not hard to avoid; they are just something that must be tested for.
No, they're a huge pain in the ass 99% of the time, and what's worse is that pointers work even when they absolutely shouldn't. I recently worked with some code that, instead of making a copy of the data, just kept a void pointer to it. Since that code was passed just a temporary variable that was gone right after being passed in, it should error out on any sane system, but the problem is it worked - when it was called later, using the pointer to a temp that's already gone, it would just read the value out of unallocated memory anyway, and since the temp hadn't been overwritten it worked! The only reason it came up was that when called twice it'd show the last data twice, since both calls would be pointing to the same unallocated memory location. It's so WTF that you can't believe it. Personally I prefer to work with some sort of system (Java, C#, C++/Qt, anything) that'll give you some heads-up to tell you you're out of bounds or otherwise just pointing at nothing.
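For the curious, here's a minimal self-contained sketch of that bug (names hypothetical; char * instead of void * so it can be printed). It compiles cleanly and usually "works", which is exactly the problem:
#include <stdio.h>
#include <string.h>
static const char *saved; /* keeps a pointer instead of a copy */
void remember(const char *data)
{
    saved = data;
}
void caller(void)
{
    char tmp[16]; /* a temporary that dies when caller returns */
    strcpy(tmp, "hello");
    remember(tmp);
}
int main(void)
{
    caller();
    /* Undefined behavior: saved points into caller's dead stack
       frame, but the bytes are often still there, so this will
       frequently print "hello" as if nothing were wrong. */
    printf("%s\n", saved);
    return 0;
}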
Re: (Score:3, Informative)
Which is why tools like Valgrind or NuMega BoundsChecker exist; they provide much more granular information about how memory is being used and abused. The problem you just described would get flagged instantly as a use of previously freed data, along with a few source-code locations relevant to where it was allocated/freed.
Re: (Score:2)
I don't think the performance estimate will change much. You only have to check the input, and the complexity is within the rounds of the underlying block cipher/sponge function or whatever is used to get the bits distributed as they should be.
I blame the bugs on tight time schedules and inexperienced programmers/insufficient review. Basically, cryptographers are mathematicians at heart. There is a rather large likelihood that C implementations are not their common playing field.
C isn't the problem, it is really... (Score:4, Insightful)
I suspect the problem is related to the poor coding practices used in academia. I see college professors who write code that barely compiles in GCC without a bunch of warnings about anachronistic syntax. Some of the C code uses constructs that are unrecognizable to someone who learned the language within the past 10 years, and is completely type-unsafe.
I can't tell much from the code on the link, but I do see #define used for constants, which is no longer appropriate (yet is EXTREMELY common to see). C99 had the const keyword in it, probably even before that.
Re: (Score:2, Insightful)
I suspect the problem is related to the poor coding practices used in academia. I see college professors who write code that barely compiles in GCC without a bunch of warnings about anachronistic syntax.
You know you'll learn a lot in a class when, after being told that at the very least his C++ code is using deprecated includes, the professor tells you to just use '-Wno-deprecated'. I've basically come to the conclusion that I am just paying the school for a piece of paper, and I will learn little outside my personal study.
Re: (Score:1)
That depends, what is the class for? If it's a class teaching how to use C++, then you have a point.
If it's just about any other CS class, however, probably the language you are using doesn't matter so much, but rather what you are using the language to do.
I'm guessing that in this instance, the fact that the professor is using some wacky set of C constructs is not nearly as important as what is actually being taught, e.g. an algorithm. That is, ignore the deprecated stuff because that's not what is important.
Re: (Score:2)
Oh those were the days. At least my prof explained the difference.
With a #define, the preprocessor picks up the constant, while with const it's left to the compiler. Of course, he said they were equally good, just a matter of style.
We never did learn why we had to put
using namespace std;
near the top of the program, other than because we were supposed to use the standard namespace. I never bothered to ask or find out, because by that time I had realized I'm an atrocious programmer and wouldn't be doing that for a living.
Re: (Score:3, Informative)
C got const much earlier; it was there in 1989. And at least in the past, a static const int FOO was less useful than #define FOO: it wasn't "constant enough" to define the size of an array. But yes, you see macros too often.
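Concretely, here is what "not constant enough" means at file scope (a sketch; compile it as C89 and then as C++ to see the difference):
#define N 4
static const int M = 4;
int a[N]; /* fine in C89: N is a literal after preprocessing */
int b[M]; /* error in C89 (and in C99 at file scope): in C, a
             const int is not an integer constant expression;
             in C++ this line is legal */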
void *? That's cutting edge (Score:2)
I've had to work on an app where the main developer didn't know / didn't care about void *, and used char * everywhere instead. In fact he used char * even when the type was unique, and cast at every call site and at the beginning of the called function.
When I called him on it, he said that I was doing philosophy and that he had real work to do.
Re: (Score:2)
I can't tell much from the code on the link, but I do see #define used for constants, which is no longer appropriate (yet is EXTREMELY common to see). C99 had the const keyword in it, probably even before that.
I don't know where to start here.
const has been in C since C89.
const int SIZE=4;
int data[SIZE];
is fine in C++ and does the same as using a #define for SIZE.
It doesn't work in C prior to C99.
In C99 it defines a VLA - i.e. it's identical to using int SIZE=4; and not using const for the array parameter.
Ti
Re: (Score:2)
In C99 it defines a VLA
Really? Is that required by the standard, or is that up to the compiler? I would expect the compiler to treat that example as a fixed-length array.
(I'll have to go try this in GCC when I get home...)
Re: (Score:2)
$ gcc -std=c99 -o test test.c
test.c: In function `main':
test.c:7: error: variable-sized object may not be initialized
test.c:7: warning: excess elements in array initializer
test.c:7: warning: (near initialization for `data')
$ g++ -o test test.c
$
Re: (Score:2)
Thanks, wow... so, doesn't that mean they broke functionality that worked in previous revisions of C?
Re: (Score:2)
Thanks, wow... so, doesn't that mean they broke functionality that worked in previous revisions of C?
?
No functionality broken. You can't use a const int to define an array size in C prior to C99 at all. (That's where this entire subthread started - someone was commenting about the use of #define rather than const int, which is a reasonable criticism of C++ code but not of C code.)
It is an(other) area where C and C++ will forever behave differently (I assume).
$ cat compiler.c
#include <stdio.h>
struct c { char c[2]; };
int t
Re: (Score:2)
Gaaah. the <stdio.h> is obviously missing. I even previewed that post :-(
Re: (Score:2)
Heh, the "somebody" was me :) And you assume correctly about C++. Since I code in C++ not C, I took it for granted that const int worked in C since it works in C++. ugh.
Thanks for the detailed replies.
uhh, lint... (Score:4, Informative)
$ cat bo.c
int a[3];
void f()
{
a[3] = 1;
}
$ lint bo.c
bo.c:4: warning: array subscript cannot be > 2: 3
Lint is so basic, I can't imagine not using it....
Re: (Score:1, Insightful)
Lint isn't magic.
Re: (Score:2)
No, lint is basic, so basic that there is no good reason not to always use it. (bo.c) is equivalent to the only flaw described in the published summary. Dynamic analyzers (required to catch your example) are just stuck in the shadow of the halting problem.
Where is lint? (Score:2)
Lint is so basic, I can't imagine not using it....
Lint is not found in Ubuntu. Did you mean Splint? And can you recommend anything analogous for C++ programs, for which Splint has no front-end?
Re: (Score:2)
Re: (Score:2)
It's splint these days (at least on Linux). And using splint on any nontrivial large code base will bury you in tons of mostly irrelevant warnings. If you dare to attempt cleaning up the mess, you'll find that you have to annotate your code. And then you have the problem that splint will only spit out correct warnings if your annotations are correct, so you have just doubled the potential sources of error (now it's annotations+code instead of just code).
Tools like splint just don't understand the flow of a program.
Re: (Score:2)
Most lints I've seen have atrocious false-positive rates.
I tried to write a minimal program that would give no warnings; I couldn't do it. If my main didn't have a return statement, it complained about that. If it had one that was not reachable, it complained about unreachable code. If it had one that COULD be reached, it complained that return could be called from main.
Other SHA-3 news: conference starts this week! (Score:2)
Previously Slashdotted (Score:1)
DJB threw his hash in the ring, too (Score:3, Interesting)
MD6 by Rivest and Skein by Schneier et al. seem to be getting a lot of attention, but another celebrity cryptographer, Dan J. Bernstein, also has a hash in this race, called "CubeHash."
DJB continued his tradition of offering cash rewards for people to find security problems with his code, giving out (so far) monthly prizes of 100 Euros to the most interesting cryptanalysis of CubeHash.
So far, the primary criticism of CubeHash is that it's slow, running some 10 to 20 times slower than many of the others in the competition. Dan brushes off this criticism by stating on his site [cr.yp.to]: "for most applications of hash functions, speed simply doesn't matter."
To be honest, when compared to efforts like MD6 and Skein, with their mathematical proofs of security, VHDL and other in-hardware reference implementations, and their amazing optimizations in both speed and efficiency (Skein can process half a GByte of data per second on modern hardware, and consumes only 100 bytes of state) -- entries like CubeHash seem to have that longshot underdog appeal, like a New Zealand soccer World Cup team.
Re: (Score:2)
So far, the primary criticism of CubeHash is that it's slow, running some 10 to 20 times slower than many of the others in the competition. Dan brushes off this criticism by stating on his site: "for most applications of hash functions, speed simply doesn't matter."
That dude must have missed out on the small thing called "P2P", since most of them rely on making tons of hash checks per block, per file, and whatever. And yes, they do need the properties of a good hash, not just a checksum, so it can't be poisoned with bad data. Any kind of backend processing lots of signed messages? Usually you hash the message to check it against the digest, then check the signature of the digest. What about mobile devices with low battery and CPU capacity? I'm not buying it.
Re: (Score:1, Funny)
If you step into my heap one more time with your fucking malloc, I'm going to dereference your null pointer, bitch!
-Christian Bale
Re: (Score:1, Troll)
Re: (Score:2)
What's a "null pointer bitch"?
Re: (Score:1)
One of those female pointer dogs, but one that doesn't actually exist.
Re: (Score:2)
I just read that as unmangled code.
I'm really behind on the latest programming paradigms.
Re: (Score:3, Funny)
Re: (Score:3, Interesting)
Read something about http://research.microsoft.com/en-us/groups/os/singularity/ [microsoft.com] :)
probably wouldn't be bad for a lot of applications (Score:2)
At the very least, using a C-like language with safety, like Cyclone [thelanguage.org], would be a reasonable performance/safety tradeoff for a lot of users compared to the current tradeoffs (which leave quite a bit to be desired [cormander.com]). I'm guessing the main stumbling block would be reimplementation overhead (Linux already exists in C, has a lot of code, and is a fairly quickly moving target) and lack of interest on the part of kernel hackers (who have little interest in using non-C languages), rather than performance of the resulting code.
Re:this is why... (Score:4, Insightful)
If you're still writing unmanaged code, you get what you deserve. It's 2009, not 1989.
Try running managed code in the 4 MB RAM of a widely deployed handheld computer. Now try making that managed code time-competitive and space-competitive with an equivalent program in C++ compiled to a native binary.
Re: (Score:2)
My phone has a Java runtime. It works, and it's in fact a very sensible choice for the application (where security and binary portability matters more than performance). Even today, many embedded devices are powerful enough to run bytecode-interpreted languages, and this will only become more true in the future.
Re: (Score:2)
it's in fact a very sensible choice for the application (where security and binary portability matters more than performance)
Except that binary portability doesn't matter, and while security is an absolute requirement, performance must be as high as possible.
Many applications hash huge volumes of data. SHA-256 can hash around 60 MBps on a ~2 GHz core, and that's too slow for many applications. WAY too slow. I have an application where I'd like to be able to hash over 20 MBps on an XScale processor. The rest of the system can easily sustain this data rate, but the hash is the bottleneck. The hash should not be the bottleneck.
Re: (Score:2)
Wait, why is this managed code not compiled to native binaries and optimized?
Because the device uses different digital signing keys for managed and unmanaged code, and end users don't have the unmanaged one.
Re: (Score:2)
unmanaged code is dead.
Yes, because it warms my heart to think of reference crypto code that would crash if it wasn't running in a sandbox.