High-level Languages and Speed
nitsudima writes to tell us that InformIT's David Chisnall takes a look at the 'myth' that high-level languages are inherently slower than low-level ones, and why it might not be entirely accurate. From the article: "When C was created, it was very fast because it was almost trivial to turn C code into equivalent machine code. But this was only a short-term benefit; in the 30 years since C was created, processors have changed a lot. The task of mapping C code to a modern microprocessor has gradually become increasingly difficult. Since a lot of legacy C code is still around, however, a huge amount of research effort (and money) has been applied to the problem, so we still can get good performance from the language."
Old debate (Score:5, Informative)
Well, we ran our own tests. We took a sizable chunk of supposedly well-written time-critical code that the gang had produced in what was later to become Microsoft C [2] and rewrote the same modules in Logitech Modula-2. The upshot was that the M2 code was measurably faster, smaller, and on examination better optimized. Apparently the C compiler was handicapped by essentially having to figure out what the programmer meant with a long string of low-level expressions.
Extrapolations to today are left to the reader.
[1] I used to comment that C is not a high-level language, which would induce elevated blood pressure in C programmers. After working them up, I'd bet beer money on it -- and then trot out K&R, which contains the exact quote, "C is not a high-level language."
[2] MS originally relabeled another company's C compiler under license (I forget their name; they were an early object lesson.)
Re:Old debate (Score:5, Insightful)
If anything, C is a so-called mid level language. If it wasn't, you'd be using an assembler instead of a compiler.
Re:Old debate (Score:5, Insightful)
If you're going to go with the jargon as it's most often used nowadays (which is a perfectly reasonable thing to do), then C would certainly be about as low as you can get without manipulating individual registers - i.e., without being assembly language.
Re:Old debate (Score:4, Insightful)
New debate (Score:4, Interesting)
The speed of a programming language isn't just a matter of how high- or low-level it is; you also have to count the time to develop and implement the program. The article basically makes that point: it's "better to let someone else" optimize the low-level code while you write in the high-level language. You could write a super-fast machine-coded program, but it would take you much longer to write than the same program in a simpler, higher-level language.
The new debate is over datatypes and the available methods to manipulate them. Older hardware gave us the old debate: primitive datatypes and a general set of instructions to manipulate the data. Newer hardware can give us more than just primitives. For example, a Unicode string datatype could be seen by the hardware as a complete object instead of an array of bytes. With hardware instructions to manipulate Unicode strings, that would practically do away with any low-level implementation of them. The same could be done for UTF-8 strings. We could implement hardware support for XML documents and other common protocols. How these datatypes are actually implemented in hardware is the center of the debate.
Eventually, there will be so many datatypes that there will be separate low-level languages specifically designed for a given domain of datatypes. The article makes the point that there is an increase in complexity for newer compilers trying to understand what was intended by a set of low-level instructions. Today's CPUs have a static set of low-level instructions. The future holds hardware-implemented datatypes and a dynamic set of low-level instructions to go with them. Newer processors will need to be able to handle that dynamic set of machine-language instructions.
Does the new debate conflict with Turing's goal of simply making a processing unit extensible without the need to add extra hardware? For now, we have virtualization.
Re:Old debate (Score:3, Informative)
Actually, I think Forth is a little lower. The RPN nature of the language makes for a considerably closer mapping from language use to stack use for one thing, and for another, Forth atoms tend to be more primitive and more prefab than what a particular expression in C might produce.
C remains my favorite for anything that requires speed. It has always seemed to me that
Re:Old debate (Score:3, Interesting)
The statement "C is not a high level language" is not logically equivalent to the statement "C is a low level language", so the OP is still entitled to his beer money :-)
Re:Old debate (Score:3, Informative)
Actually, the quote from my copy of K&R, on my desk beside me, is:
C is not a "very high level" language...
Emphasis mine.
Re:Old debate (Score:3, Informative)
Re:Old debate (Score:3, Insightful)
Re:Old debate (Score:3, Informative)
Uh, K&R is slightly older than Java or C#... there was no such thing as memory management or virtual machines (as we know them today) back then.
Actually, there were virtual machines [wikipedia.org] back then, just not on micros or minis.
And as far as this high-level/low-level thing goes, I'd call C a "mid-level" language.
Re:Old debate (Score:3, Informative)
Didn't Infocom implement their "database query system" (which eventually became their famous text-adventure game engine) using a virtual machine they called the Z-machine? As far as I know that system predated Java and C# by a few decades.
http://en.wikipedia.org/wiki/Z-machine [wikipedia.org]
-dZ.
Re:Old debate (Score:5, Informative)
Re:Old debate (Score:5, Informative)
"Lisp is very old language, second only to Fortran in the family tree of high level languages." A Little history [bath.ac.uk]
Whereas C (rather like Fortran) wanted to stay "close to the metal", Lisp wanted to transcend the metal to get closer to the math [stanford.edu]. Hence, innate elegance.
Still waiting for the Visual.Lisp.Net, though
Re:Old debate (Score:4, Funny)
Some things never change.
Re:Old debate (Score:5, Informative)
Of course, these benchmarks measure only speed, are just for fun, and are "flawed [debian.org]", but they are still interesting to play with. If you haven't seen the site before, enjoy fiddling with things to try and get your favourite language on top
Re:Old debate (Score:5, Interesting)
If I had mod points I'd certainly mod you informative. Those benchmarks might be synthetic and flawed but as a general illustration of how the various languages differ, that link is fantastic.
Of course I'll just use it for my own ends by convincing my managers that we're using the right languages - "Yes boss you'll see that we use C++ for the stuff that needs to be fast with low memory overhead, Java for the server side stuff, stay the fuck away from Ruby and if you say 'Web 2.0' at me one more time I'll be forced to wham you with a mallet!" ;-)
Re:Old debate (Score:5, Interesting)
What's wrong with Ruby, as a replacement for a very ugly language called Perl?
Ruby is an elegant language, fully Object Oriented, and does just as well as Python and Perl...
Ruby On Rails OTOH is a different story and don't want to get into a flame war over it, but Ruby itself is pretty good for a lot of things you'd otherwise write in Perl but don't like the ugliness of Perl...
I've found some people don't get the distinction between Ruby and Ruby on Rails.
Re:Old debate (Score:4, Interesting)
Nothing wrong with the language that a proper implementation couldn't cure, basically.
Re:Old debate (Score:3, Insightful)
First-class reentrant continuations and dynamic typing (another major efficiency hog) probably constrain you to, in the best case, the same box as compiled Scheme - about the same as Java.
Re:Old debate (Score:4, Interesting)
* It's not sufficiently prettier than Perl
* It's not Perl
Perl may look ugly but it is to most programming languages as English is to most other languages. Perl is a brawling, sprawling mess of borrowed, Huffman-optimized idioms that is extremely ugly from the POV of a syntax engineer and extremely expressive from the POV of a fluent speaker.
Ruby is more like Esperanto - elegant, clean, and spoken by practically no-one because it isn't very expressive.
Re:Old debate (Score:4, Interesting)
Re:Old debate (Score:3, Insightful)
Re:Old debate (Score:3, Informative)
Uh, Java and C# are strongly typed and structured languages.
Re:Old debate (Score:3, Informative)
Re:Old debate (Score:3, Informative)
Re:Old debate (Score:3, Informative)
Yep, C is very weakly typed (some would say it's untyped, as is ASM); only the compiler does some sanity checking, and even then it doesn't work too hard at it.
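For illustration, a minimal sketch of that weakness (a hypothetical snippet, not from the article; everything here compiles with at most a warning):

#include <stdio.h>

int main(void)
{
    double d = 3.14159;
    unsigned char *bytes = (unsigned char *)&d;  /* reinterpret d's storage as raw bytes */
    long addr = (long)&d;                        /* pointers and integers convert freely with a cast */

    printf("first byte of d: 0x%02x\n", bytes[0]);
    printf("address of d as a number: %ld\n", addr);
    return 0;
}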
Re:Old debate (Score:3, Interesting)
How about the fact that you can't use an arbitrary integer as an array index in Ada and have to use natural numbers (defined as zero or a positive integer), because array indexes can't be negative (in most languages anyway; some -- like Python -- are exceptions to this quite common rule), and you therefore shouldn't be allowed to use a number that might ever be negative as an index. C# merely gives you a warning if your index is explicitly negative (e.g., myArray[-1]) and doesn't do anything otherwise, before throwing an Ind
Re:Old debate (Score:3, Interesting)
And although you call Cobol legacy, it really isn't. Many financial institutions still run applications written in Cobol since it is too costly and risky to migrate the old code to a new language. Cobol was meant for the financial industry, and it's probably there to stay. Colleges and universities are even starting to teach it again since it is
Re:Along those lines... (Score:3, Informative)
One interesting feature the compiler/IDE system I was using at the time (TopSpeed's) had was this concept that all their language compilers (M2, C, C++, etc) all compiled into an intermediate binary form, and their final compiler did very heavy optimizations on that "byte code".
That's no different to most compilers. GCC for instance parses the "frontend" language (C, C++, etc) into an intermediate language and performs most optimisations on that intermediate language before translating it to assembler i
Re:Along those lines... (Score:3, Informative)
Re:Along those lines... (Score:5, Interesting)
I sped up some C code by unrolling a loop with Duff's Device [catb.org]. Duff's Device, for those who haven't encountered it, makes an ingenious use of the often-maligned C behavior that case statements, in the absence of a break or return statement, fall-through.
Duff's Device takes advantage of the fall-through by jumping into the middle of an unrolled loop of repeated instructions. If eight instructions are unrolled, Duff's Device iterates the loop roughly count/8 times, but enters the loop by jumping to the (count mod 8)'th unrolled instruction from the end of the loop. (This sounds complicated, but isn't; just look at the code and it becomes clear.)
...
...
The whole point of Duff's Device is speed and locality of code. Speed: because the loop is unrolled, more instructions are executed for each jump back to the top (and jumps are, relatively, expensive, because they mean any preloaded instructions must be tossed out and re-read). Locality: (hopefully) all the instructions can be cached, so the processor doesn't have to re-read them from memory.
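For readers who haven't seen it, here is the canonical form of the device -- a generic reconstruction of Tom Duff's published version, not the poster's exact code. 'to' points at a memory-mapped output register, so it is deliberately not incremented, and count is assumed to be greater than zero:

void send(short *to, short *from, int count)
{
    int n = (count + 7) / 8;        /* number of passes through the unrolled loop */
    switch (count % 8) {            /* jump into the middle of the loop */
    case 0: do { *to = *from++;
    case 7:      *to = *from++;
    case 6:      *to = *from++;
    case 5:      *to = *from++;
    case 4:      *to = *from++;
    case 3:      *to = *from++;
    case 2:      *to = *from++;
    case 1:      *to = *from++;
            } while (--n > 0);
    }
}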
But what gcc does with Duff's Device on ARM targets is just bizarre. gcc uses a jump table (good) to directly change the Program Counter (good, so far). But instead of jumping into the loop (which would be good), gcc uses the jump table to jump to a redundant assignment followed by an unconditional jump.
Yes, gcc very smartly makes a jump table (which directly changes the Program Counter, just like a jump would) to jump to a jump. This is simply a waste of code and time:
Why a jump table just to set up an unconditional jump? Why the redundant mov, which could have been done once, prior to the jump table jump? Who knows, that's what gcc does.
In this particular case, the object is to copy halfwords to a memory address, which address is really mapped to an output device. ARM processors, of course, are optimized for word addresses, so the "best" way to do this would be to load multiple words (LDM), shift the upper
C and Smalltalk is what happened. (Score:5, Informative)
A final blow to Modula-2 was simply that Borland didn't create a Modula-2 compiler. For many years, when you said Pascal you really meant Turbo or Borland Pascal. Borland was the Pascal company; they added objects to Pascal and eventually created Delphi.
I am sure Topspeed has closed up shop. There just isn't much room for compiler makers anymore. You have the free software at the bottom end and the Microsoft Monster at the top. Only a few niche players are left. Ada seems to be a place where a good compiler company can still make a few dollars.
Re:C and Smalltalk is what happened. (Score:4, Interesting)
Less space for alternative vendors (Score:3, Insightful)
Re:C and Smalltalk is what happened. (Score:3, Informative)
Small nitpick: they did indeed create a Modula-2 compiler - I think it was even called Turbo Modula-2 - at the end of the 80s for CP/M. I purchased it back then for my C-128 (those were the days *looks at current laptop* - not). However, CP/M had by then already begun its slide into obsolescence, and Borland's German division needed almost 6 months to deliver the damn thing. When I finally got it, it was more or less unusable, as the IDE froze or something like that when you trie
Re:C and Smalltalk is what happened. (Score:3, Interesting)
This seems generally to be true, but some small outfits are apparently still making money selling compilers. In the early 90s I used the Power C compiler for DOS. It was a nice compiler and cheap ($20). Recently I was amazed to see that the company, Mix Software [mixsoftware.com], is still in business, with the same low prices. How they do this I have no idea.
Re:Old debate (Score:3, Informative)
Bah (Score:5, Insightful)
Re:Bah (Score:5, Insightful)
You have two choices when using SIMD instructions in C:
The claim that C cannot inline functions from another source file is also wrong. This is a limitation in gcc, but other compilers can do it; IIRC the Intel compiler can. It is certainly not "impossible".
When you pass a C file to a compiler, it generates an object file. It has absolutely no way of knowing where functions declared in the header are defined. You can hack around this; pass multiple source files to the compiler at once and have it treat them as a single one, for example, but this falls down completely when the function is declared in a library (e.g. libc) and you don't have access to the source.
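Coming back to the SIMD point above: the choices usually come down to compiler intrinsics or inline assembly. A minimal sketch of the intrinsics route, assuming an SSE2-capable x86 target and a compiler that ships <emmintrin.h> (the function name add4 is made up):

#include <emmintrin.h>   /* SSE2 intrinsics */

/* Add two int arrays four elements at a time; n is assumed to be a multiple of 4. */
void add4(const int *a, const int *b, int *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        _mm_storeu_si128((__m128i *)(out + i), _mm_add_epi32(va, vb));
    }
}

The cost, as noted elsewhere in the thread, is that this is no longer portable C; a PowerPC build would need an AltiVec version behind an #ifdef.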
Re:Bah (Score:3, Insightful)
Which usually isn't a big problem anyway, since the code sections where that's an advantage are usually quite small and infrequent; if you really need the performance, you can make the small sacrifice of inserting conditional-compilation statements with different code for the platforms you are interested in.
It's certainly not an ideal solution but it's a very attractive one, and it has the advan
Re:Bah (Score:3, Insightful)
You see this all the time in SW engineering. If there is a well-defined high-level API specifying what something is trying to do, rather than how it should be efficiently (at the time) done, it will eventually be far more efficient to use the API, because it will get dedicated instructions in the chipset or even be completely implemented in a dedicated HW device where
Re:Bah (Score:3, Informative)
It didn't say much at all otherwise, but it did have a nice collection of adverts.
Optimisation:
You don't have to hack around, some compilers do it for you. The new MS compiler does a 'whole program optimisation' where it will link things together from separate object modules. Still cannot handle libraries, but then, that's
Re:Bah (Score:5, Insightful)
High Level (Score:5, Insightful)
Now I hear most people referring to C and C++ as "low level" languages, compared to Java and PHP and visual basic and so on. Funny how that works out.
I like Assembler. There's something about interacting intimately with your target hardware. It's a shame that it's no longer feasible with today's variety of hardware.
Assembler (Score:4, Insightful)
Re:High Level (Score:3, Interesting)
A minor observation about the feasibility of working with the target hardware: the two most popular instruction set architectures for commodity hardware, PowerPC and IA-32, have both been stable since the mid 90s. The programming guide for PowerPC processors is still pretty much the same document as it was in 1996, around the same time the P
Re:High Level (Score:5, Insightful)
Rather, usually what's done is that most of the code is written in C, and only those parts that REALLY REALLY have to be optimized, like interrupt handlers for example, are done in assembly. People use assembly for routines that, for example, have to take exactly a certain number of instruction cycles to complete.
But it should be avoided as much as possible. It's just not worth losing the portability.
More and more these days, microprocessors are embedding higher level concepts, and even entire operating systems, just to make software development easier.
The myth of assembly performance (Score:4, Insightful)
Coding large apps in assembly is usually way beyond the point of diminishing returns in terms of performance.
Don't be so sure (Score:4, Insightful)
Amazingly far back (try the 80s) a professor friend of mine had a marvelous example of compiler-generated code where the compiler had done such an amazing job of optimising register use that you had to go through more than 20 pages of assembler output with colored markers to trace from where a register was loaded to where it was used.
No way I would ever have the huevos to code that way in assembler. On a RISC machine or (Heaven help us) the Itanic it gets lots worse.
Article is theory not practice - no measurements (Score:3, Interesting)
I don't agree with the basic premise of the article at all - but I've also written equivalent programs in C and more modern languages and compared the performance.
Re:Article is theory not practice - no measurement (Score:4, Informative)
No, what they say is "the proof of the pudding is in the eating." (Just pointing it out because most people get it wrong.)
Um no. (Score:3, Informative)
From google:
Results 1 - 10 of about 326,000 for "the proof is in the pudding". (0.47 seconds)
Results 1 - 10 of about 118,000 for "the proof of the pudding is in the eating" [definition]. (0.30 seconds)
They're not right, of course, but then, sadly, you're not either, since what people say has changed. It's changed to something nonsensical, which people quote without understanding, which is annoying, like "I could care less!":
Results 1 - 10 of about 2,180,
Re:Article is theory not practice - no measurement (Score:4, Funny)
Inaccurate summary (Score:5, Insightful)
This is not true. What they mean, I think, is "the task of mapping C code to efficient machine code has gradually become increasingly difficult".
It's very simple (Score:5, Interesting)
If you don't believe me, I suggest you look at some of the assembly code output of gcc. I'm no assembly guru, but I don't think I would have done as well writing assembly by hand.
Re:It's very simple (Score:5, Informative)
I don't believe this as much as the people who I see repeating that sentence all the time...
Not many years ago (with gcc), I got an 80% speed improvement just by rewriting a medium-sized function in assembly. Granted, it was a function which was itself half C code, half inline assembly, which might hinder gcc a bit. But it's also important to note that if the function had been written in pure C code, the compiler wouldn't have generated better code anyway, since it wouldn't use MMX opcodes... Last I checked, MMX code is only generated from pure C in modern compilers when it's quite obvious that it can be used, such as in short loops doing simple arithmetic operations.
An expert assembly programmer in a CPU which he knows well can still do much better than a compiler.
Re:It's very simple (Score:3, Interesting)
The more interesting question is whether a person with only passing familiarity with assembly can do better than the compiler, and the answer to that is usually no these days.
An expert assembly programmer in a CPU... (Score:5, Insightful)
It used to be the case that I could always increase the speed of some random C/Fortran/Pascal code by rewriting it in asm; part of that speedup came from realizing better ways to map the current problem to the actual cpu hardware available.
However, I also discovered that much of the time it was possible to take the experience gained from the asm code, and use that to rewrite the original C code in such a way as to help the compiler generate near-optimal code. I.e. if I can get within 10-25% of 'speed_of_light' using portable C, I'll do so nearly every time.
There are some important situations where asm still wins, and that is when you have cpu hardware/opcodes available that the compiler cannot easily take advantage of. I.e. back in the days of the PentiumMMX 300 MHz cpu it became possible to do full MPEG2/DVD decoding in sw, but only by writing an awful lot of hand-optimized MMX code. Zoran SoftDVD was the first on the market, I was asked to help with some optimizations, but Mike Schmid (spelling?) had really done 99+% of the job.
Another important application for fast code is in crypto: if you want to transparently encrypt anything stored on your hard drive and/or going over a network wire, then you want the encryption/decryption process to be fast enough that you really don't notice any slowdown. This was one of the reasons for specifying a 200 MHz PentiumPro as the target machine for the Advanced Encryption Standard: if you could handle 100 Mbit Ethernet full duplex (i.e. 10 MB/s in both directions) on a 1996 model cpu, then you could easily do the same on any modern system.
When we (I and 3 other guys) rewrote one of the AES contenders (DFC, not the winner!) in pure asm, we managed to speed it up by a factor of 3, which moved it from being one of the 3-4 slowest to one of the fastest algorithms among the 15 alternatives.
Today, with fp SIMD instructions and a reasonably orthogonal/complete instruction set (i.e. SSE3 on x86), it is relatively easy to write code in such a way that an autovectorizer can do a good job, but for more complicated code things quickly become much harder.
Terje
Re:It's very simple (Score:4, Interesting)
In the past, most compilers were dreadful at optimizations. Now, they are just horrible. I guess that is an improvement, but I still believe there is a lot of good research to come here.
I do agree that the playing field has become pretty even. For example, with the right VM and the right code you can get pretty good performance out of Java. Problem is, "the right VM" depends greatly on the task the program is doing... certainly not a one-VM-fits-all, out-of-the-box solution (ok... perhaps you could always use the same VM, but app-specific tuning is often necessary for really high performance).
At any rate, people just need to learn to use the best tool for the job. Most apps don't actually need to be bleedingly fast, so developing them in something that makes the development go faster is probably more important than developing them in something to eke out that tiny performance gain nobody will probably notice anyway.
C is the 3vil (Score:4, Funny)
ahah now we know why my java program is so slow. damn C slowing it down.
Great article! (Score:5, Funny)
It goes both ways (Score:5, Interesting)
For instance, 20 years ago there was nothing strange about having an actual quicksort machine instruction (VAXen had it). One expectation at the time was still that a lot of code would be generated directly by humans, so instructions and instruction-set designs catering to that use case were developed. But by around then, most code was machine-generated by a compiler, and since the compiler had little high-level semantics to work with, the high-level instructions - and most low-level ones too - went unused; this was one impetus for the development of RISC machines, by the way.
So, as long as a lot of coding is done in C and C++ (and especially in the embedded space, where you have most rapid CPU development, almost all coding is), designs will never stray far away from the requirements of that language. Better compilers have allowed designers to stray further, but stray too far and you get penalized in the market.
Re:It goes both ways (Score:5, Informative)
While the VAX had some complex instructions (such as double-linked queue handling), it did not have a quicksort instruction.
Here [hp.com] is the instruction set manual.
Re:It goes both ways (Score:3, Informative)
Re:It goes both ways (Score:3, Interesting)
One guy I knew realized that he was never going to get his rig stable enough to run through the whole test, so he set up a
High-level languages have an advantage (Score:5, Insightful)
The more abstract a language is, the better a compiler can understand what you are doing. If you write out twenty instructions to do something in a low-level language, it's a lot of work to figure out that what matters isn't that the instructions get executed, but the end result. If you write out one instruction in a high-level language that does the same thing, the compiler can decide how best to get that result without trying to figure out if it's okay to throw away the code you've written. Optimisation is easier and safer.
Furthermore, the bottleneck is often in the programmer's brain rather than the code. If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations. High-level languages help with programmer productivity. I know that it's considered a mark of programmer ability to write the most efficient code possible, but it's a mark of software engineer ability to get the programming done faster while still meeting performance constraints.
Re:High-level languages have an advantage (Score:5, Insightful)
Especially since you can combine. Even in high-performance applications there's typically only a tiny fraction of the code that actually needs to be efficient; it's perfectly common to have 99% of the time spent in 5% of the code.
Which means that in basically all cases you're going to be better off writing everything in a high-level language and then optimizing only those routines that need it later.
That way you make fewer mistakes and get higher-quality code quicker for the 95% of the code where efficiency is unimportant, and you can spend even more time on optimizing those few spots where it matters.
Re:High-level languages have an advantage (Score:3, Informative)
This sounds perfectly reasonable in theory. In practice, however, it's not. Users want speedy development AND speedy execution. I developed a Java image management program for crime scene photos, and the Sheriff Patrol's commander told me flat out: we'll never use this. It's too slow.
I rewrote the program using C++ and Qt, and gained a massive
Re:High-level languages have an advantage (Score:3, Insightful)
Except it doesn't. Nobody has written a compiler that smart, and I don't care what anyone says: I don't think anyone ever will.
Learning how to invent and develop algorithms is important. Learning how to translate those algorithms into various languages is important. And knowing how the compiler will translate those algorithms into machine instructions- and how the CPU itself will process those machine instructions, will
Typical Java Handwaving (Score:5, Insightful)
I've been programming professionally for over 20 years, and for all of those 20 years the argument has been that computers are now fast enough to allow high-level languages and we don't need those dirty nasty assemblers and low-level languages.
What was true 20 years ago is still true today, well written code in a low level language tailored to how the computer actually works will always be faster than a higher level environment.
The problem with computer science today is that the professors are "preaching" a hypothetical computer with no limitations. Suggesting that "real" limitations of computers are somehow unimportant.
If computer science isn't about computers, what is it about? I hate that students coming out of universities, when asked about registers and how they would write a multiply routine if they only had shifts and adds, ask "why do I need to know this?"
Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and
Typical "/." Handwaving (Score:5, Insightful)
The "appeal to an expert" fallacy?
"What was true 20 years ago is still true today, well written code in a low level language tailored to how the computer actually works will always be faster than a higher level environment."
It also means that portability becomes ever harder, as well as adaptability to new hardware.
"If computer science isn't about computers, what is it about? I haate that students coming out of universities, when asked about registers and how would they write a multiply routine if they only had shifts and adds, ask "why do I need to know this?""
It's about algorithms. Computers just happen to be the most convenient means for trying them out.
"The problem with computer science today is that the professors are "preaching" a hypothetical computer with no limitations. Suggesting that "real" limitations of computers are somehow unimportant."
With the trend towards VMs and virtualization, that "hypothetical" computer comes ever closer.
"Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and
Now who's handwaving?
Re:Typical "/." Handwaving (Score:3, Insightful)
I'd say you are. His first statement wasn't a logical fallacy; he was just pointing out that this argument has been going on for a long time.
You made a good point about portability, but I think that was your only point. And it's easily shot down by the fact that it's just as easy to port a standard C/C++ API to a new environment as it is to port Java/.NET to a new environment.
He made an excellent point about many new graduates not knowing how the CPU actually works and you replied w
Re:Typical "/." Handwaving (Score:3, Insightful)
I'd argue that in the real world (or at least business world) we need the solution to be developed in the shortest amount of time, with the most amount of security. While a VM based language is not guaranteed to provide quicker time / security, in most cases it probably will.
Re:Typical "/." Handwaving (Score:3, Insightful)
I've never come across that fallacy in philosophy class, however, if you mean the "Improper Appeal to Authority" fallacy then it isn't. If the above poster was a movie star or a well known public figure and their comments about the article are being referenced to prove a point (assuming said movie star or public figure isn't an expert programmer), then that would be an improper appeal to authority. In any case, the insight and experience of long time programmer is valuabl
SW Industry - Down The Drain (Score:3, Informative)
Yay. With continued displays of attitudes like that, I'm going to leave the industry.
It is getting increasingly difficult to hire S/W engineers that understand that there is an operating system and also hardware beneath the software they write. I need people NOW that can grok device drivers, understand and use Unix facilities, fiddle with DBs, write decent code in C, C++, Java, and shell, and can also whip tog
Re:Typical Java Handwaving (Score:5, Insightful)
"Computer science is no more about computers than astronomy is about telescopes" -- Edsger Dijkstra quotes (Dutch computer Scientist. Turing Award in 1972. 1930-2002)
Sorry, you're arguing against Dijkstra: you lose. :)
Quoted often, but still wrong (Score:3, Insightful)
I see this quote everywhere, and just because it's by some semi-famous academic, nobody questions it; everyone takes it for granted. The quote is utter rubbish.
With astronomy you have stars, which aren't man-made and thus only scarcely understood, and the tools we use to look at them, telescopes, which are man-made. We understand them.
Computers and Comp
Re:Quoted often, but still wrong (Score:3, Informative)
Not. Even. Wrong.
If astronomy were called "telescope science" you'd also forget that it was about ways of looking at the skies. Computers are more flexible than that - they are used to model and study all kinds of nat
Re:Typical Java Handwaving (Score:5, Insightful)
I've designed compilers before, and I wouldn't class constructing a C/C++ compiler as "trivial" :)
One could also make the opposite argument. Many computer courses teach languages such as C++, C# and Java, which all have connections to low-level code. C# has its pointers and gotos, Java has its primitives, C++ has all of the above. There aren't many courses that focus more heavily on highly abstracted languages, such as Lisp.
And I think this is more important, really. Sure, there are many benefits to knowing the low-level details of the system you're programming on, but it's not essential to know, whilst it is essential to understand how to approach a programming problem. I'm not saying that an understanding of low-level computational operations isn't important, merely that it is more important to know the abstract generalities.
Or, to put it another way, knowing how a computer works is not the same as knowing how to program effectively. At best, it's a subset of a wider field. At worst, it's something that is largely irrelevant to a growing number of programmers. I went to a university that dealt quite extensively with low-level hardware and networking, and a significant proportion of the marks of my first year came from coding assembly and C for 68000-series processors. Despite this, I can't think of many benefits such knowledge has when, say, designing a web application in Ruby on Rails. Perhaps you can suggest some?
I disagree. I think software sucks because software engineers don't understand programming
Re:Typical Java Handwaving (Score:5, Insightful)
Which machine, chum?
"I've been programming professionally for over 20 years..."
OK, bump chests. I've been at it for 35+. And? Experience doth not beget competence. There are uses for low-level languages and those that require them will use them. Try writing a 300+ module banking application in assembler. By the time you do, it will be outdated. Not because the language will change, but because the banking requirements will. Using assembler to write an application of that magnitude is like trying to write an Encyclopedia article with paper and pencil. Possible, but 'tarded.
"Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and
More like, 'software sucks today for the same reason it always has -- fossilized thinkers can't change to make things easier for those who necessarily follow them.' Ego, no more.
Re:Student Perspective (Score:4, Informative)
My software engineering program has been very Java intensive. My software engineering class, object oriented class, and software testing class were all java based. We dabbled in C# a bit as well.
However, I also had an assembly class, a programming languages class where we learned Perl and Scheme (this language sucks), and about five algorithms classes in C++. I also had an embedded systems class in both C and assembly (learned assembly MCU code, then did C).
I feel like this is all pretty well rounded; I've learned a bunch of languages and am not really specialized in one. I'd say I am best at Java right now, but I can also write C++ code just fine.
I've never been told a computer has any kind of crazy limitless performance. In embedded systems, I learned about performance. Making a little PIC microcontroller calculate arctan was fun (it took literally 30 seconds without a smart solution). I also learned that there is a trade-off between several things, such as performance, development time, readability, and portability.
We are taught to see languages as tools: you look at your problem and pull the tool out of the toolbox that you think fits the problem best. You have to weigh what's important for the project and choose based on that.
The final thing I'd like to point out is that one huge issue with software today is that it is bug-ridden. How easy something is to test makes a big difference in my opinion. Assembly and C will pretty much always be harder to test than languages like Java and C#.
I don't think the universities are the problem, at least not in my experience.
Re:Typical Java Handwaving (Score:3, Insightful)
I think his point was not that abstractions are bad, but that not knowing what's happening behind the scenes isn't good.
Even to optimize
Re:Typical Java Handwaving (Score:3, Insightful)
INCORRECT. Shifts and adds are sometimes faster for certain constants: a power of two, maybe a power of two plus one. But for any arbitrary constant, this is false on most processors. Multipliers are much faster than a stream of many shifts and adds. Furthermore, the compiler should hold the knowl
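A small C sketch of that point (the function names are made up; what actually gets emitted depends on the constant and the target):

unsigned times8(unsigned x)  { return x << 3; }          /* power of two: a single shift */
unsigned times9(unsigned x)  { return (x << 3) + x; }    /* power of two plus one: shift and add */
unsigned times57(unsigned x) { return x * 57; }          /* arbitrary constant: let the compiler
                                                            pick shifts, LEAs, or a multiply */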
Single Page Version of the Article (Score:3, Informative)
Some comments on the article (Score:5, Insightful)
OK, this is nitpicking, but there are some exceptions - I remember that TASM would automatically convert long conditional jumps to the opposite conditional jump plus an unconditional long jump, since there was no long conditional jump instruction.
This paragraph is complete crap. If you're using a Dictionary API in a so called "low-level language", it's as possible for the API to do the same optimization as it is for the runtime he talks about; and you're still letting "someone else do the optimization".
That's surely true. But the opposite is also true - when you use an immense amount of overly complex semantics, it can be translated into a pile of inefficient code. Sure, this can improve in the future, but right now it's a problem with very high-level constructs.
Not exactly true I think [greenend.org.uk]. Yes, the approach on that page is not standard C, but on section 4 he also talks about some high level performance improvements which are still being experimented on, so...
What I didn't see in TFA... (Score:4, Insightful)
I didn't see anything mentioning that many high-level languages are written in C. And I don't consider languages like FORTRAN to be high-level. FORTRAN is a language that was designed specifically for numeric computation and scientific computing. For that purpose, it is easy for the compiler to optimize the machine code better than a C compiler could ever manage. The FORTRAN compiler was probably written in C, but FORTRAN has language constructs that are more well-suited to numeric computation.
Most truly high-level languages, like LISP (which was mentioned directly in TFA), are interpreted, and the interpreters are almost always written in C. It is impossible for an interpreted language written in C (or even a compiled one that is converted to C) to go faster than C. It is always possible for a C programmer to write inefficient code, but that same programmer is likely to write inefficient code in a high-level language as well.
I'm not saying high-level languages aren't great. They are great for many things, but the argument that C is harder to optimize because the processors have gotten more complex is ludicrous. It's the machine code that's harder to optimize (if you've tried to write assembly code since MMX came out, you know what I mean), and that affects ALL languages.
LANGUAGES are not interpreted (Score:3, Informative)
Programming languages are not "interpreted". A language IMPLEMENTATION may be based on an interpreter. Every major implementation of Common Lisp today has a compiler, and most of them don't even have an interpreter any more - everything, including command-line/evaluator input, is compiled on-the-fly before being executed.
Flawed Argument (Score:4, Interesting)
I'm a big fan of high-level languages, and I believe that eventually it will be the very distance from assembly that high-level languages provide that makes them faster, by allowing compilers/interpreters to do more optimization. However, it is just silly to pretend that C is not still far closer to the way a modern processor works than high-level languages are.
If nothing else just look at how C uses pointers and arrays and compare this to the more flexible way references and arrays work in higher level languages.
Imaginary history (Score:5, Interesting)
C was not a reaction to LISP. I can't even imagine why anyone would say this. LISP's if/then/else was an influence on ALGOL and later languages.
C might have been a reaction to Pascal, which in turn was a reaction to ALGOL.
LISP was not "the archetypal high-level language." The very names CAR and CDR mean "contents of address register" and "contents of decrement register," direct references to hardware registers on the IBM 704. When the names of fundamental language constructs are those of specific registers in a specific processor, that is not a "high-level language" at all. Later efforts to build machines with machine architectures optimized for implementation of LISP further show that LISP was not considered "a high-level language."
C was not specifically patterned on the PDP-11. Rather, both of them were based on common practice and an understanding of what was in the air at the time. C was a direct successor to, and reasonably similar to, BCPL, which ran on the Honeywell 635 and 645, the IBM 360, the TX-2, the CDC 6400, the Univac 1108, the PDP-9, the KDF 9, and the Atlas 2.
C makes an interesting comparison with Pascal; you can see that C is, in many ways, a computer language rather than a mathematical language. For example, the inclusion of specific constructs for increment and decrement (as opposed to just writing A
Re:Imaginary history (Score:4, Informative)
You forgot "CONS" which comes from the IBM cons cells (a 36bit machine word on the 704), which is the block holding both a CAR and a CDR.
The thing is, the names only existed because no one found any better name for them, or any more interresting name (Common Lisp now offers the "first" and "rest" aliases to CAR and CDR... yet quite a lot of people still prefer using CAR and CDR).
LISP has always been a high level language, because it was started from mathematics (untyped lambda calculus) and only then adapted to computers.
And the fact that Lisp Machines (trying to get away from the von Neumann model) were built doesn't mean that Lisp is a low-level language, only that AI labs needed power that the mappings from Lisp onto von Neumann machines could not give them at that time.
Lisp is a high-level language, because Lisp abstracts the machine away (no manual memory management, not giving a fuck about registers or machine words [may I remind you that Lisp was one of the first languages with unbounded integers and automatic promotion from machine integers to unbounded ones?]).
"The Truth about C++ Revealed" (Score:5, Funny)
From:
Subject: The truth about 'C++' revealed
Date: Tuesday, December 31, 2002 5:20 AM
On the 1st of January, 1998, Bjarne Stroustrup gave an interview to the IEEE's 'Computer' magazine.
Naturally, the editors thought he would be giving a retrospective view of seven years of object-oriented design, using the language he created.
By the end of the interview, the interviewer got more than he had bargained for and, subsequently, the editor decided to suppress its contents, 'for the good of the industry' but, as with many of these things, there was a leak.
Here is a complete transcript of what was said, unedited and unrehearsed, so it isn't as neat as planned interviews.
You will find it interesting...
__________________________________________________ ________________
Interviewer: Well, it's been a few years since you changed the world of software design, how does it feel, looking back?
Stroustrup: Actually, I was thinking about those days, just before you arrived. Do you remember? Everyone was writing 'C' and, the trouble was, they were pretty damn good at it. Universities got pretty good at teaching it, too. They were turning out competent - I stress the word 'competent' - graduates at a phenomenal rate. That's what caused the problem.
Interviewer: problem?
Stroustrup: Yes, problem. Remember when everyone wrote Cobol?
Interviewer: Of course, I did too
Stroustrup: Well, in the beginning, these guys were like demi-gods. Their salaries were high, and they were treated like royalty.
Interviewer: Those were the days, eh?
Stroustrup: Right. So what happened? IBM got sick of it, and invested millions in training programmers, till they were a dime a dozen.
Interviewer: That's why I got out. Salaries dropped within a year, to the point where being a journalist actually paid better.
Stroustrup: Exactly. Well, the same happened with 'C' programmers.
Interviewer: I see, but what's the point?
Stroustrup: Well, one day, when I was sitting in my office, I thought of this little scheme, which would redress the balance a little. I thought, 'I wonder what would happen if there were a language so complicated, so difficult to learn, that nobody would ever be able to swamp the market with programmers?' Actually, I got some of the ideas from X10, you know, X windows. That was such a bitch of a graphics system that it only just ran on those Sun 3/60 things. They had all the ingredients for what I wanted. A really ridiculously complex syntax, obscure functions, and pseudo-OO structure. Even now, nobody writes raw X-windows code. Motif is the only way to go if you want to retain your sanity.
[NJW Comment: That explains everything. Most of my thesis work was in raw X-windows. :)]
Interviewer: You're kidding...?
Stroustrup: Not a bit of it. In fact, there was another problem. Unix was written in 'C', which meant that any 'C' programmer could very easily become a systems programmer. Remember what a mainframe systems programmer used to earn?
Interviewer: You bet I do, that's what I used to do.
Stroustrup: OK, so this new language had to divorce itself from Unix, by hiding all the system calls that bound the two together so nicely. This would enable guys who only knew about DOS to earn a decent living too.
Interviewer: I don't believe you said that...
Stroustrup: Well, it's been long enough, now, and I believe most people have figured out for themselves that C++ is a waste of time but, I must say, it's taken them a lot longer than I thought it would.
Interviewer: So how exactly did you do it?
Stroustrup: It was only supposed to be a joke, I never thought people would take the book seriously.
the author... (Score:3, Insightful)
Lisp and operating systems (Score:3, Insightful)
Huh? I would argue that commercially successful (as in boxes sold to Fortune 500 companies and used in production) operating systems have been written in three languages:
* Assembly
* C
* Lisp [andromeda.com]
Are there any commercially successful OSs written in C++ yet?
(revealing my ignorance and posting flamebait, all in one)
More Myth here (Score:3, Informative)
Take a look yourself on http://shootout.alioth.debian.org/ [debian.org]
C's faster than Java. It will probably always generally be so, unless you're trying to run C code on a hardware Java box.
This article says Java, for example, CAN be faster. But it doesn't say "C is almost always faster than Java or Fortran, usually faster than Ada, and C can be mangled (in the form of Digital Mars D, for instance) to be faster than C usually is. Often, Java is a pig compared to C, BUT THERE ARE TIMES WHEN IT ISN'T. Really. There are times, few and far between, when it's actually, get this, FASTER. It's fun to look for those few times. And if you write programs which do that, that'd be cool. And as processors get wackier and wackier, there will be more and more times where this is true. Meanwhile, if your developers write good code, Java's easier to develop in and debug." Which would be more completely correct.
Excuse me now. I have to go back to my perl programming.
The advantage of Fortran is purely coincidental. (Score:3, Insightful)
When Fortran was made, nobody thought that CPUs 30 years in the future would have vector processing instructions. In fact, as Wikipedia says [wikipedia.org], vector semantics in Fortran arrived only in Fortran 90.
The only advantage of current Fortran over C is that the vector processing unit of modern CPUs is better utilised, thanks to Fortran semantics. But, in order to be fair and square, the same semantics could be applied to C, and then C would be just as fast as Fortran.
The fact that C does not have vector semantics reflects the domain C is used in: most apps written in C do not need vector processing. In cases where such processing is needed, Fortran can easily interoperate with C: just write your time-critical vector processing modules in Fortran.
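One concrete piece of "the same semantics" that did make it into C is C99's restrict, which gives the compiler the no-aliasing guarantee that Fortran arguments carry by default and that vectorizers rely on. A minimal sketch (axpy is just an illustrative name):

void axpy(int n, double alpha, const double * restrict x, double * restrict y)
{
    /* With the no-alias promise, the compiler is free to vectorize this loop. */
    for (int i = 0; i < n; i++)
        y[i] += alpha * x[i];
}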
As for higher-level-than-C languages being faster than C, it is purely a myth. Code that operates on hardware primitives (e.g. ints or doubles) has exactly the same speed in C, Java and other languages... but higher-level languages have semantics that can hurt performance as much as they can help it. All the checks VMs do have an additional overhead that C does not have; the little VM routines run here and there all add up to slower performance, as does the fact that some languages are overengineered or open the way for sloppy programming (like, for example, not using static members but creating new ones each time there is a call).
Forth (Score:3, Interesting)
Some of the real optimization issues (Score:5, Interesting)
The article is a bit simplistic.
With medium-level languages like C, some of the language constructs are lower-level than the machine hardware. Thus, a decent compiler has to figure out what the user's code is doing and generate the appropriate instructions. The classic example is
char tab1[100], tab2[100];
int i = 100;
char *p1 = tab1; char *p2 = tab2;
while (i--) *p2++ = *p1++;
Two decades ago, C programmers who knew that idiom thought they were cool. In the PDP-11 era, with the non-optimizing compilers that came with UNIX, that was actually useful. The "*p2++ = *p1++;" explicitly told the compiler to generate auto-increment instructions, and considerably shortened the loop over a similar loop written with subscripts. By the late 1980s and 1990s, it didn't matter. Both GCC and the Microsoft compilers were smart enough to hoist subscript arithmetic out of loops, and writing that loop with subscripts generated the same code as with pointers. Today, if you write that loop, most compilers for x86 machines will generate a single MOV instruction for the copy. The compiler has to actually figure out what the programmer intended and rewrite the code. This is non-trivial. In some ways, C makes it more difficult, because it's harder for the compiler to figure out the intent of a C program than of a FORTRAN or Pascal program. In C, there are more ways that code can do something weird, and the compiler must make sure that the weird cases aren't happening before optimizing.
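For comparison, the subscript form of the same copy (reusing tab1 and tab2 from the snippet above), which modern compilers recognize just as readily and typically reduce to a block move or a memcpy:

for (int i = 0; i < 100; i++)
    tab2[i] = tab1[i];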
The next big obstacle to optimization is the "dumb linker" assumption. UNIX has a tradition of dumb linkers, dating back to the PDP-11 linker, which was written in assembler with very few comments. The linker sees the entire program but, with most object formats, can't do much to it other than throw out unreachable code. This, combined with the usual approach to separate compilation, inhibits many useful optimizations. When code calls a function in another compilation unit, the caller has to assume near-unlimited side effects from the call. This blocks many optimizations. In numerical work, it's a serious problem when the compiler can't tell, say, that "cos(x)" has no side effects. In C, it can't; in FORTRAN, it can, which is why some heavy numerical work is still done in FORTRAN. The C compiler usually doesn't know that "cos" is a pure function; that is, that x == y implies cos(x) == cos(y). This is enough of a performance issue that GCC has some cheats to get around it; look up "mathinline.h". But that doesn't help when you call some one-line function in another compilation unit from inside an inner loop.
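A sketch of the workaround GCC offers when you control the declaration (my_cos and sum_scaled are made-up names; __attribute__((const)) is GCC's way of declaring that the result depends only on the arguments, so the call can be hoisted out of the loop):

#include <math.h>

/* Declared 'const': the result depends only on the argument, never on global state
   (ignoring errno, which is why this is only a sketch). */
static double my_cos(double x) __attribute__((const));
static double my_cos(double x) { return cos(x); }

double sum_scaled(const double *a, int n, double angle)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i] * my_cos(angle);   /* the compiler may now compute my_cos(angle) once */
    return s;
}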
C++ has "inline" to help with this problem. The real win with "inline" is not eliminating the call overhead; it's the ability for the optimizers to see what's going on. But really, what should be happening is that the compiler should check each compilation unit and output not machine code, but something like a parse tree. The heavy optimization should be done at link time, when more of the program is visible. There have been some experimental systems that did this, but it remains rare. "Just in time" systems like Java have been more popular. (Java's just-in-time approach is amusing. It was put in because the goal was to support applets in browsers. (Remember applets?) Now that Java is mostly a server-side language, the JIT feature isn't really all that valuable, and all of Java's "packaging" machinery takes up more time than a hard compile would.)
The next step up is to feed performance data from execution back into the compilation process. Some of Intel's embedded system compilers do this. It's most useful for machines where out of line control flow has high costs, and the CPU doesn't have good branch prediction hardware. For modern x86 machines, it's not a big win. For the Itanium, it's essential. (The Itanium needs a near-omniscient compiler to perform well, because you have to decide at compile time which instructions should be executed
Initially (Score:3, Insightful)
Depends on the job... (Score:3, Interesting)
We're where we are today because, for many years, C was the one you could get for free. The others cost hundreds of dollars.
I remember the first time I encountered a computer that shipped from the vendor with GCC instead of a proprietary compiler - it was like see