Java Urban Performance Legends

An anonymous reader writes "Programmers agonize over whether to allocate on the stack or on the heap. Some people think garbage collection will never be as efficient as direct memory management, and others feel it is easier to clean up a mess in one big batch than to pick up individual pieces of dust throughout the day. This article pokes some holes in the oft-repeated performance myth of slow allocation in JVMs."
  • Nonsense (Score:5, Funny)

    by hedge_death_shootout ( 681628 ) <stalin@linTEAuxmail.org minus caffeine> on Sunday October 09, 2005 @09:29AM (#13750205)
    These java urban performance legends are rubbish - java is highly performant in a rural or urban setting.
  • Robomaid (Score:5, Interesting)

    by Doc Ruby ( 173196 ) on Sunday October 09, 2005 @09:30AM (#13750212) Homepage Journal
    How much time have I spent with Electric Fence and valgrind finding memory leaks in my C programs? In Java, the auto garbage collection is as good as Perl's, without that tricky "unreadable code" problem ;). And I can always tune garbage collection performance by forcing a garbage collect when I know my app's got the time, like outside of a loop or before creating more objects in storage.
    • If you spend a lot of time using tools like valgrind to hunt for memory leaks, then you are doing something very wrong. It's really not hard to just free the memory you allocate. The fact that people can write entire operating systems in C without these problems indicates that the problems are with people, not with C.
      • Re:Robomaid (Score:5, Insightful)

        by Doc Ruby ( 173196 ) on Sunday October 09, 2005 @09:51AM (#13750281) Homepage Journal
        Of course, that's why I program everything in machine language. Memory allocation in hex opcodes worked for me in 1981, it still works in 2005. Programming languages don't leak memory, programmers leak memory.

        In seriousness, look at the bugzillas and changelogs of any programming project. Memory leaks abound. So the reality is that memory de/allocation programming eats quite a lot of productivity, and impedes many otherwise ready releases. Of course we can de/allocate memory properly in C. And we can compute with scientific calculators. When we've got something better, we use it.
        • Re:Robomaid (Score:3, Insightful)

          by AArmadillo ( 660847 )
          The company I work for does development mostly in C/C++, and we haven't had a memory leak bug filed in 3.7 years, according to our database. If memory leaks are your biggest problem, you probably aren't making effective use of smart pointers and structured allocation/deallocation.

          Also, let us not forget that Java has memory leaks too [ibm.com], and in my opinion they are much less obvious than C++ memory leaks.

          • Re:Robomaid (Score:3, Interesting)

            by Doc Ruby ( 173196 )
            I didn't say memory leaks are my biggest problem. The insufficiency of voice recognition and dataflow programming tools is my biggest problem :). But I still get memory leaks. And it still costs me more to hire programmers who don't generate memory leaks than to use their less-qualified peers and a JVM. If we programmed dataflow graphs, we'd spot redundant objects more easily. And voice recognition would let us drop them, without paying programmers to act as that middleware. A man can dream...
            • Re:Robomaid (Score:3, Insightful)

              by Q2Serpent ( 216415 )
              If you are hiring programmers who can't understand simple memory allocation and deallocation, you are more likely to run into a maintenance nightmare.

              Good programmers know how to write structured code that's easy to maintain, and a side effect of this is that memory allocation becomes second nature (like most other "problems": buffer overruns, memory leaks, etc.). When you write good code, these problems rarely exist. The code structure sees to it.

              Just because Java tries to save you from one issue doesn't mean...
              • Re:Robomaid (Score:4, Insightful)

                by Doc Ruby ( 173196 ) on Sunday October 09, 2005 @11:24AM (#13750635) Homepage Journal
                Please forward me the resumes of these programmers, which include "Skills" like "I never write memory leaks". And whose "Certifications" include "Good Programmer", not those other resumes which say "Bad Programmer".

                The fact is that Java programs include more intelligence about programming from the compiler and the JVM. So programmers are more productive than with C. Which means that the same programmer budget can produce more product, or a lower budget can produce the same product. With distributed teams, outsourcing, OSS integration projects, those considerations are crucial to project success. Sure, quality projects also require good design and management. But Java eliminates some quality problems right out of the box. That can't be dismissed merely by saying "hire good programmers, not bad". Because it's a continuum, with a sliding scale of costs that isn't always proportional to productivity.
                • Re:Robomaid (Score:4, Funny)

                  by halleluja ( 715870 ) on Sunday October 09, 2005 @04:18PM (#13752044)
                  The fact is that Java programs include more intelligence about programming from the compiler and the JVM.
                  It is particularly sad that the Visual PnP generation of today cannot appreciate the merits of GOTOs and self-modifying code.

                  I have never had any memory leaks, just by including the following code snippet:

                  /* Distributed under GPL */
                  #include <stdlib.h>

                  static void* repo; /* one big arena, grabbed up front */

                  int main(int argc, char** argv)
                  {
                      repo = malloc(100000000);
                      /* do stuff: just bump and cast a pointer into repo whenever free memory is needed */
                      free(repo);
                      return 0;
                  }
                • Re:Robomaid (Score:3, Insightful)

                  by Q2Serpent ( 216415 )
                  Productivity may be slightly hindered by the language (how easily functionality can be encapsulated into easy-to-use packages), but I argue that it is dominated much more by the packages themselves.

                  Java certainly helps here, but good libraries can be written in many languages. In fact, most libraries are written in C so that they can be used from many languages. A library written in C plus a small wrapper for the language-du-jour is a library usable from many languages. If you have good developers...
          • Re:Robomaid (Score:5, Insightful)

            by Mr. Shiny And New ( 525071 ) on Sunday October 09, 2005 @10:35AM (#13750424) Homepage Journal
            You're right, Java does have memory leaks, in a sense, but these same problems can arise in any language. You could have a cache implemented in C/C++ that is never properly emptied, so over time it fills up. Or you could have a request log where, due to a bug, requests are not always removed from the log after they are processed. Sure, you might have included code to free the objects in the cache or log when you are done with them, but the programming error is that you forgot to remove the item. This is not a language issue, this is just a programming mistake.

            At least in Java you only need to worry about one problem: removing items from caches, logs, queues, etc when you are done with them; in C++ you have to also worry about de-allocating that object, which requires strict planning since the object must not be referred to anywhere else when you de-allocate it. In general you can't prove that the object isn't still referred-to by someone who had it before it was put into the cache/log/whatever; instead you need to establish rules and write documents to make sure everyone knows what the lifecycle of an object should be.

            As for your company not having a memory bug logged in 3.7 years, that's great, but without knowing what kind of applications you write, or how they're used, it's hard to say if that's significant. Some programs, like, say, grep, or mv or ls or any one-time-use utilities may be rife with memory leaks, but who cares? When the program terminates the memory is freed, and the user will likely not notice the leak. But even if your apps are long-running network services or whatever, I bet the time spent by developers preventing memory leaks is significant, even if that time is completely spread out throughout the development cycle instead of in large, easy-to-spot chunks running Valgrind or some other profiler.
    • Re:Robomaid (Score:5, Informative)

      by radtea ( 464814 ) on Sunday October 09, 2005 @04:10PM (#13752003)
      And I can always tune garbage collection performance by forcing a garbage collect when I know my app's got the time, like outside of a loop or before creating more objects in storage.

      No, you can't. At most you can give the JVM a hint that it would be nice to collect garbage now. The JVM is free to ignore that hint, and many JVMs will do just that.
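
      (For the record, the hint in question is just a library call. A minimal sketch of what "forcing" a collect actually looks like -- System.gc() and HotSpot's -XX:+DisableExplicitGC flag are real; the class name is invented:)

          // System.gc() is advisory only: the JVM may do a full collection,
          // a partial one, or nothing at all. Run with -XX:+DisableExplicitGC
          // and HotSpot ignores the call entirely.
          public class GcHint {
              public static void main(String[] args) {
                  byte[] garbage = new byte[1 << 20];
                  garbage = null;   // make the array unreachable
                  System.gc();      // a request, not a command
                  System.out.println("hinted; collection is not guaranteed");
              }
          }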
  • by GillBates0 ( 664202 ) on Sunday October 09, 2005 @09:30AM (#13750214) Homepage Journal
    JVM memory allocation isn't "SLOW". It's just pleasantly unhurried.
  • by onion2k ( 203094 ) on Sunday October 09, 2005 @09:31AM (#13750218) Homepage
    JVMs are surprisingly good at figuring out things that we used to assume only the developer could know.

    Yes they are. Now. Ten years ago, when Java applets were being embedded in webpages (to show rippling water below a logo :) ), they weren't. The performance of the language has greatly improved while the perception of the language has remained roughly the same (at least among the general coding community).

    Just goes to show that even if you have a great technical product, you'll still need the marketdroids. Unfortunately.
  • BULLONEY!! (Score:3, Insightful)

    by Anonymous Coward on Sunday October 09, 2005 @09:31AM (#13750219)
    These articles keep popping up flatly stating that Java's slowness is a myth. But no matter how many times you say it is a myth and how hard you try to create a new perception, the FACT is that people's real-world experience, no matter how anecdotal, consistently demonstrates that Java is MASSIVELY slower than similar apps in C or C++.

    Java is slower. Don't even get me started on C#.
    • Re:BULLONEY!! (Score:5, Interesting)

      by pla ( 258480 ) on Sunday October 09, 2005 @09:58AM (#13750308) Journal
      Don't even get me started on C#.

      I should hope not! Any form of benchmarking of Microsoft's .Net violates the EULA. And we wouldn't want that!

      So when Microsoft declares their interpreted inverse-polyglotic language as "faster" than compiled pure C, just accept it. Best for everyone that way.


      the FACT is that people's real-world experience, no matter how anecdotal, consistently demonstrates that Java is MASSIVELY slower than similar apps in C or C++.

      Well, the linked article contained a number of what I will graciously call "assumptions" (rather than "outright lies") about allocation patterns in C/C++ that simply don't hold true in most cases.

      For example, the parent mentions the old "stack or heap" question... Which no serious C coder would ask. Use the heap only when something won't fit on the stack. Why? The stack comes for "free" in C. If you need to store something too large, you need the heap. But then, you can allocate it once, and don't even consider freeing it until you finish with it (generational garbage collection? Survival time? Gimme a break - It survives until I tell it to go away!). As for recursion... You can blow the stack that way, but a good programmer will either flatten the recursion, or cap its depth.


      And the article dares to justify its "assumptions" by comparing Java against a language interpreter such as Perl. Not exactly a fair comparison - yes, Perl counts as "real-world", but in an interpreter, you can't know anything ahead of time about your memory requirements. At best, you can cap them and dump the interpreted program if it gets too greedy. Now, some might point out that Java gets interpreted as well - and I'll readily admit that, for doing so, it does a damn fine job with its garbage collection. But if you want to compare Java to Perl, then do so. Don't try to sneak in a comparison to C with a layer of indirection.


      One last point - The article mentions that you no longer need to use object pooling. SURE you no longer need it - Java does it implicitly at startup. You can avoid all but a single malloc/free pair in C as well, if you just steal half the system's memory in main(). Sometimes that even counts as a good choice - I've used it myself, with the important caveat that I've done so when appropriate. Not always. I don't have malloc() as the second and free() as the second-to-last statements in every program I write. And that most definitely shows in the minimal memory footprints attainable between the two languages... Try writing a Java program that eats less than 32k.
      • Re:BULLONEY!! (Score:4, Informative)

        by LarsWestergren ( 9033 ) on Sunday October 09, 2005 @10:21AM (#13750382) Homepage Journal
        The stack comes for "free" in C. If you need to store something too large, you need the heap. But then, you can allocate it once, and don't even consider freeing it until you finish with it

        Or NOT consider freeing it when you forget about it... as the case may be.

        Try writing a Java program that eats less than 32k.

        No problem at all. [sun.com]
        • Re:BULLONEY!! (Score:5, Insightful)

          by pla ( 258480 ) on Sunday October 09, 2005 @10:35AM (#13750426) Journal
          Or NOT consider freeing it when you forget about it...

          Programming takes some level of skill.

          If you "forget" to increment a pointer, it won't have advanced the next time you use it. If you forget to open a file, assuming your program doesn't crash, anything you write to it goes to the bit bucket.

          If you forget a wife's birthday, you'll have a pissed-off wife.



          "Try writing a Java program that eats less than 32k."
          No problem at all.

          Uh-huh... Now do the same thing without needing a special, stripped-down, nearly featureless JRE. I don't need a special build of GCC to satisfy the condition, why did we shift from apples to oranges to make it possible in the Java world?



          And most of these points end up exactly there - apples and oranges. Java can do better than a few worst-case scenarios in C. Java can fit in a sane amount of memory with special builds. Java can outperform C by way of Perl. All nice claims, but self-delusion doesn't make for better programmers.
          • Re:BULLONEY!! (Score:5, Insightful)

            by gaj ( 1933 ) on Sunday October 09, 2005 @10:42AM (#13750452) Homepage Journal
            It's not really apples to oranges to use a smaller profile JRE for creating smaller profile programs. The JRE includes libraries and features such as threading and such. Let's see you build a C program that uses pthreads and one or two major libraries and still have your memory use come in under 32K. Remember, apples to apples, right?
          • Re:BULLONEY!! (Score:5, Insightful)

            by earthbound kid ( 859282 ) on Sunday October 09, 2005 @10:44AM (#13750466) Homepage

            Programming takes some level of skill.

            If you "forget" to increment a pointer, it won't have advanced the next time you use it. If you forget to open a file, assuming your program doesn't crash, anything you write to it goes to the bit bucket.

            If you forget a wife's birthday, you'll have a pissed-off wife.

            I'm not about to forget my girlfriend's birthday, because my computer warns me about it three days ahead of time. Similarly, why should we risk forgetting to deallocate stuff from the heap, when we can have the computer do it itself? Yes, programming takes some skill. But we shouldn't start programming using only our thumbs while dangling over a pit of lava, just to up the skill factor needlessly. If we're going to do something more difficult, we need to get a benefit for it. The benefit of not using a garbage collecting language is that your code will probably be slightly faster than the equivalent code in a GC language. But, is that slight performance increase worth the pain in the ass of doing it the manly way? In most cases, not really. But hey, judge for yourself: find the best language for your particular job and forget the rest.
    • Really? (Score:5, Informative)

      by Mark_MF-WN ( 678030 ) on Sunday October 09, 2005 @10:25AM (#13750394)
      Really? Can you provide a few REAL-world examples? Can you name one? Personally, I'm running Azureus and NetBeans right now, and they're not perceptibly different from C++ desktop applications like KDevelop or OpenOffice.
    • I don't code in Java, so my only experience with it is through applications, most notably Freenet, Azureus, and Eclipse. All are brilliant apps that absolutely excel at what they do. There is, however, not a single person who can claim with a straight face that these programs do not hog resources like a... well, like a Java app.

      Simply put, there may be fast, efficient, tiny Java programs out there, but I don't see them. I don't really care about how garbage is collected. All I care about is that a simple editor go...

      • Simply put, there may be fast, efficient, tiny Java programs out there, but I don't see them.

        If Java was as bad as you say it would NOT be so popular in embedded devices. There is a huge disconnect between the archaic concept of Java as a slow pig and the real world popularity of Java on small systems like printers and cell phones.

        • If Java was as bad as you say it would NOT be so popular in embedded devices. There is a huge disconnect between the archaic concept of Java as a slow pig and the real world popularity of Java on small systems like printers and cell phones.

          Doesn't that use J2ME, though? And, I would assume, J2ME gives you fewer facilities than J2SE and J2EE, so are we comparing like with like?

          Not saying you're wrong, I just want to know if the comparison is valid.
  • As I see it. (Score:4, Interesting)

    by Z00L00K ( 682162 ) on Sunday October 09, 2005 @09:33AM (#13750223) Homepage Journal
    The memory management routines normally run when the JVM thinks it's best, but as a programmer it is usually possible to predict the best time to actually take care of the housekeeping. Even if the cleanup takes the same time in both cases, Java has a tendency to issue it in the middle of everything. So if I as a programmer do the garbage collection at the end of a displayed page, and Java does it uncontrollably in the middle of the page, the latter case is more annoying to the user.
  • by expro ( 597113 ) on Sunday October 09, 2005 @09:33AM (#13750225)
    How many JVMs can you afford to run on your system for different apps, and how can you make sure they are all the right size, the garbage collectors are in an appropriate mode that can keep up with the generated garbage, etc.? I can run lots of native apps, which in many cases have no need for a significant heap like the one Java brings into the picture, in far less space than a single JVM. A JVM runs much heavier on the system; when I run NetBeans, it is continuously on the verge of eating my 1.2 GB PowerBook alive. In fact, I have to frequently restart NetBeans to get memory back. Java has a long way to go in real practical terms, even if there are theoretical solutions to some of the problems. I am porting my server software away from Java for the same reasons. This is JDK 1.4; I am about to try 1.5, but I don't think there is that much difference.
    • by LarsWestergren ( 9033 ) on Sunday October 09, 2005 @09:50AM (#13750278) Homepage Journal
      How many JVMs can you afford to run on your system for different apps, and how can you make sure they are all the right size, the garbage collectors are in an appropriate mode that can keep up with the generated garbage, etc.?

      Try "java -X". You will see that you can decide what the "right" size is. See this link, where a guy forces the JVM to run a web server on 16 MB of memory and still gets decent performance [rifers.org].
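
      (A program can also check from the inside what heap it actually got; a small sketch using the standard Runtime API -- maxMemory(), totalMemory(), and freeMemory() are real calls, the class name is invented:)

          // Prints the heap limits the JVM settled on, e.g. after
          //   java -Xms16m -Xmx16m HeapReport
          public class HeapReport {
              public static void main(String[] args) {
                  Runtime rt = Runtime.getRuntime();
                  System.out.println("max heap:  " + rt.maxMemory() / 1024 + " KB");
                  System.out.println("committed: " + rt.totalMemory() / 1024 + " KB");
                  System.out.println("free:      " + rt.freeMemory() / 1024 + " KB");
              }
          }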
      • by Bastian ( 66383 ) on Sunday October 09, 2005 @11:30AM (#13750659)
        Manually determining how much memory to give an application was one of the things we Mac users were more than happy to leave behind with the transition to OS X. Hell, we were more than happy to kill that "feature", dancing on its coffin and all that. Are you seriously suggesting the world go back?

        Besides, I thought one of the biggest selling points of Java was that it was all cool and dynamic and took care of things for you. It seems fair to expect such a language/platform to be smart enough to figure out that it's not polite to walk to the buffet, heap three plates with a ridiculous pile of food, and then go back to the table and graze on a few pieces of celery, leaving the rest of the food untouched.
    • Sun is introducing shared memory in the new VM, which should alleviate this somewhat.
      The idea is that the majority of the "fatness" comes from the libraries. If you only have to load the core libraries once, and each VM can use them, then additional VMs won't add so much overhead.
      If this works, I'm wondering: will we get side effects from the over-abuse of static member variables?
  • by PepeGSay ( 847429 ) on Sunday October 09, 2005 @09:37AM (#13750237)
    This article is actually debunking some people's reasons why Java has poor performance. It does little to debunk my actual real-world experience that it *is* slow. I'm glad to see that performance has increased a lot, but I remember some all-Java (well, 90% or something) applications, like the original JBuilder, that made me want to claw my eyeballs out when using them. Those apps and other early apps are where Java's performance issues really took hold in many people's psyche.
    • by NeoBeans ( 591740 ) * on Sunday October 09, 2005 @10:27AM (#13750397) Homepage Journal
      I remember some all-Java (well, 90% or something) applications, like the original JBuilder, that made me want to claw my eyeballs out when using them.

      Valid point... for 1999. But Java has been through a lot of changes since "the original JBuilder" came out. There have been three major changes to the class libraries (1.2 [sun.com], 1.4 [sun.com], and 5.0 [sun.com]).

      Honestly, I understand why people are down on Java -- it's because there have always been two strengths to the Java "platform":

      1. It provides a simple programming model (single inheritance, indirect memory management, ...)
      2. A rich class library for specific tasks that are easy to learn (but difficult to master, obviously).

      ...and early implementations weren't able to keep up with native code in any way, shape or form.

      That said, I think you hit the nail on the head -- people are still thinking in terms of 1999, not 2005, when they think of Java -- at least on Slashdot. But the typical Java developer is busy writing enterprise apps, not kernel code, device drivers, or anything else that requires C/C++.

      • Odd, then, that NetBeans and Eclipse are still slower than a sedated elephant on my Athlon 64 3200+ with 512 MB RAM. My computer must be living in a time warp.

        Or maybe the truth is that Java is "effectively sluggish, though not technically slow" because its runtime optimization techniques require such huge amounts of RAM.

        Although I'm well aware of the limitations of the Great Computer Language Shootout [debian.org] as a tool for measuring a language's overall utility, it does tell the naked truth about performance.

        • Odd, then, that NetBeans and Eclipse are still slower than a sedated elephant on my Athlon 64 3200+ with 512 MB RAM. My computer must be living in a time warp.

          It must be. So far I'm running Eclipse and/or Oracle JDeveloper on all of the following platforms with no problems:

          Dell P4 2Ghz 512MB RAM, Windows 2000
          Toshiba P4 2Ghz laptop 512MB RAM, Gentoo Linux (2.6 kernel)
          Athlon XP 1900, 1GB RAM, WinXP/Gentoo Linux dual boot

          I will give you a caveat. JDeveloper was pretty crusty until version 10g came out. Runs pretty...

  • by bhurt ( 1081 ) on Sunday October 09, 2005 @09:40AM (#13750249) Homepage
    The people who keep telling me allocation in Java is slow (much slower than 10 instructions) are generally experienced Java programmers. I use OCaml, so I'm quite aware of how fast a generational garbage collector can be (btw, on x86 in OCaml, allocations are only 5 simple instructions). But from all the first-hand reports I've heard, Java allocation is still slow. It may be faster than C++, but it's still slow.
    • by ucblockhead ( 63650 ) on Sunday October 09, 2005 @12:52PM (#13751046) Homepage Journal
      In C++, it requires only one instruction to allocate on the stack. Less than that per object, actually, if you are allocating multiple objects in one scope.

      You can't just compare Java allocation to:

      Foo *foo = new Foo();

      You also have to compare it to:

      Foo foo;

      Experienced C++ programmers will prefer the latter both because it is faster and because you don't have to worry about freeing it. One of the problems with a lot of the "See, Java's faster than C++!" comparisons is that they tend to translate directly from Java to C++, ignoring the things you can do in C++ but not Java (like stack allocation) that have a huge performance impact.

  • First! (Score:5, Funny)

    by Anonymous Coward on Sunday October 09, 2005 @09:54AM (#13750290)
    First post here from my Java workstation. Take that!
  • by MarkEst1973 ( 769601 ) on Sunday October 09, 2005 @10:00AM (#13750313)
    The laws of optimization are:

    1. Make it work.
    2. Make it work well.
    3. Make it work fast.

    And #3 is the most interesting... how fast is fast? In an absolute sense, sure, C/C++ will always be faster. But does the end user notice this in a webapp? NO!

    I have a P3 450MHz box running Tomcat/MySQL. It serves my local applications fast enough. The server can render the page much more quickly than my browser (on a P4 1.5GHz box) can render it. As a webapp, Java on an old machine is plenty fast.

    Java as a desktop GUI is an altogether different story, but I'm not using Java on the desktop. That point is moot to me.

    "Fast enough" to not be noticeable. That is the secret of #3. In a webapp, this is easily achieved in java.

  • by vadim_t ( 324782 ) on Sunday October 09, 2005 @10:01AM (#13750319) Homepage
    The copying collector sounds really fast indeed, but I can immediately see two problems:

    The first one is the need for a huge amount of memory. It would seem that the optimal way of dealing with this is restricting the amount of memory available to the application; otherwise any app can grow to the maximum size allowed by the VM, whether it needs it or not. But this sounds rather crappy to me: now every developer needs to figure out the right limit for the application.

    The second is that performance is going to suck when garbage collection is performed. The slowdown could be a lot larger than a single execution of malloc/free, especially once virtual memory is taken into account. The unused half of the memory will often be quickly moved to swap by the OS, especially when the process grows to sizes on the order of hundreds of MB. Then GC will force bringing all that back to RAM, while possibly swapping the previously used half to disk. Exactly the same situation as what's described with heap allocation, but a whole lot worse.

    It sounds to me that even if malloc is slower, it's a lot less inconvenient in applications like games, where something that is consistently slow can be planned for, but a sudden run of the GC cannot.

    But this is not my area of experience, so it's just what came to mind. Can anybody confirm or refute these suspicions?
  • This is Not News (Score:5, Interesting)

    by putko ( 753330 ) on Sunday October 09, 2005 @10:07AM (#13750337) Homepage Journal
    Here is a paper (PostScript) from 1987 on the topic of GC being faster than manual allocation [princeton.edu].

    The author went on to make a very fast GC that set speed records.

    If you are looking for factual arguments, with performance measurements and so on, just look at his work over the last few decades -- you'll see he did a lot of work in these very practical areas.

    Seeing how productive guys like him can be makes me wish that some people would just stay alive, and keep working, for a few hundred years more, instead of our typical mortal lifespans.
    • "When you see how productive guys like him can be, it makes me wish that some people would just stay alive, and keep working, for a few hundred years more, instead of our typical mortal lifespans"

      ... but that would be circumventing God's garbage collector! Just as in Java, when God has no more use for you, he'll "collect" you on his own timetable.

      Of course, you could always suggest someone is collected with a shotgun...

      Laugh, funny etc.
  • by iambarry ( 134796 ) on Sunday October 09, 2005 @10:29AM (#13750402) Homepage
    The article's main point is that Java's memory allocation is faster than malloc, and its garbage collection is better than cumulative frees.

    However, that's not the problem. All memory in a Java program has to be allocated dynamically. Other languages offer static memory alternatives, and static memory use will be more efficient in many cases (see the sketch below).

    The "my language is faster than yours" argument is inherently stupid. There is no "best" language. You need to use the right tool for the job.

    --Barry
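
    (For illustration: the closest Java gets to the static allocation Barry mentions is preallocating a buffer once and reusing it, instead of allocating per call. A hedged sketch; all names are invented:)

        // A scratch buffer allocated once and reused across calls -- the
        // manual Java stand-in for static storage.
        public class ScratchBuffer {
            private final double[] scratch = new double[1024]; // allocated once

            double sum(int n) {
                n = Math.min(n, scratch.length); // stay inside the preallocated buffer
                for (int i = 0; i < n; i++) scratch[i] = i * 0.5; // fill the reused buffer
                double total = 0;
                for (int i = 0; i < n; i++) total += scratch[i];
                return total;
            }
        }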

  • by defile ( 1059 ) on Sunday October 09, 2005 @10:39AM (#13750442) Homepage Journal

    I think automatic garbage collection is great, but _my_ problems with Java's memory allocator have little to do with performance. Rather, it sucks so hard at dealing with out-of-memory conditions.

    Why would you ever run out of memory? If you're on a microcomputer, you tend to have an arbitrary limit anyway. The JDK default is a 64MB max heap. This is to prevent runaways, but setting the heap size to some arbitrarily high number has really awful performance characteristics -- god help you if you set the number higher than the amount of RAM you have available, or if other applications want to share the machine. There's a whole other rant about how stupid automatic garbage collection is in a timeshared environment (like a server), but it's not the point I want to make.

    The point I want to make: since Java is supposed to run everywhere, you really can find environments where you only have 1 or 2MB of heap to play with. Having constrained resources is nothing new to programmers, they've been dealing with it forever. But Java's allocator combined with constrained resources leaves you with very little wiggle room.

    If you've ever developed a Java application that manages an in-core cache, you might have experienced how fucked you are when you get around to the logic of cache expiration. Ideally you can keep cache forever, as long as (a) you know it's not stale, and (b) you don't need the memory for something better. (a) is not a problem in Java, but setting up a system to facilitate (b) is actually really hard. In C/C++ you can wrap malloc() so that if it fails, you can start dumping cache until it succeeds. The solution is elegant, dare I say beautiful.

    In Java this is totally impossible. When you run out of memory the garbage collector is tripped, and if it can't free anything up it generates an OOM exception. You can't realistically catch these exceptions -- they can bubble up from anywhere at any time. The only place you can catch them is at the top of your program loop and the only good it does there is let you say "OUT OF MEMORY" in a user friendly way before exiting the application -- assuming you have the resources left to even say that of course.

    So how does this affect your cache model? Well, you end up having to guess at how much cache to maintain. Guess too high and your application breaks when run in a low-memory environment. Guess too low and you're not caching enough, and the user experience suffers when it has no reason to. The above scenario is just one example.

    Essentially, the problem is one of prioritized memory. Java gives you no usable way of saying memory allocated for purpose X is fine as long as purpose Y is not short. Designers and apologists can make excuses for why Java doesn't support this and maybe provide a good reason, but those reasons hardly qualify Java as a general purpose programming language.

    Goto also.

    • In C/C++ you can wrap malloc() so that if it fails, you can start dumping cache until it succeeds. The solution is elegant, dare I say beautiful.
      In Java this is totally impossible.

      In Java there are things called weak references which you can use for this exact purpose. A weak reference is an object holding a pointer to another object, and the pointed-to object is still considered available for garbage collection (as long as it isn't strongly referenced anywhere else). This means you can do this:

      Object toBeCached = new Object();
      WeakReference<Object> ref = new WeakReference<Object>(toBeCached); // java.lang.ref
      // ref.get() returns the object, or null once it has been collected
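
      (For a cache specifically, java.lang.ref.SoftReference is usually the better fit: the collector clears soft references only under memory pressure, which approximates the "dump cache when short" behavior the grandparent wants. A minimal sketch; loadEntry() is a hypothetical reload function:)

          import java.lang.ref.SoftReference;

          // Softly held entries are reclaimed only when memory runs low, so
          // the cache shrinks itself instead of raising OutOfMemoryError.
          SoftReference<byte[]> cached = new SoftReference<byte[]>(loadEntry());
          byte[] data = cached.get();  // null once the collector reclaimed it
          if (data == null) {
              data = loadEntry();      // hypothetical: recompute or reload the entry
              cached = new SoftReference<byte[]>(data);
          }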

  • by dzafez ( 897002 ) on Sunday October 09, 2005 @10:53AM (#13750509) Homepage
    I haven't done anything with Java in the last 5 years.
    *everybody.scream("w00t!");*

    I can understand that the discussion about memory allocation is legitimate.
    *everybody.agree();*

    Now, saying this is no longer the case, and hence Java is fast now, would be false.
    *everybody.status = irritated;*

    Writing Java code does make you handle a lot of errors...
    *everybody.ask("is this not good?");* ...BY HAND!! IT WILL NOT COMPILE OTHERWISE.
    *some.ask("is this not good?");*

    Maybe there is a loss of speed for handling all those errors as well.
    *FirstHalfOfCoders.grab(stone);*

    C coders don't check for every possible error.
    *SecondHalfOfCoders.grab(stone);*

    Maybe, sometimes, it is OK for a programmer if that code could produce errors; on the other side, you buy speed with insecurity.
    *FirstHalfOfCoders.throw(stone);*
    *SecondHalfOfCoders.throw(stone);*
    *me.troll();*

    AND BY THE WAY, I LOVE THE "GOTO" STATEMENT!
    *me.run(away);*
  • by JavaNPerl ( 70318 ) * on Sunday October 09, 2005 @10:57AM (#13750529)
    I have been doing Java programming for several years now and have ported many C/C++ applications to Java, mostly server-side apps, and I'd say roughly 85% of the time the Java apps outperformed the originals, sometimes by an order of magnitude. Now, these were more redesigns than straight ports, and the performance gains were not because Java was any better from a performance standpoint, but because design is more of a factor in speed than the language used, especially for larger applications. Usually when I find big performance hurdles that are hard to overcome, I find I would have the same issues in most languages, so a better design is usually the solution. If you are writing small-to-medium apps or mostly GUI apps, then I might have reservations about Java, but for larger apps Java is a good choice.
  • by jolshefsky ( 560014 ) on Sunday October 09, 2005 @11:15AM (#13750592) Homepage

    I've never believed that Java's garbage collection is the root cause of its slowness. I do believe that Java's GC is the cause of its random (and, more notably, its inconveniently timed) stutters.

    I think the more general Java slowness comes from the obfuscation of efficiency. In C, for instance, ugly code correlated with inefficient code. This is no longer the case for object-oriented programming in general, and the correlation is possibly weakest in Java.

    The example in the article provides a starting point for what I'm saying. It's based on the algorithm for a point in some Cartesian graphics system:

    public class Point {
        private int x, y;
        public Point(int x, int y) {
            this.x = x; this.y = y;
        }
        public Point(Point p) { this(p.x, p.y); }
        public int getX() { return x; }
        public int getY() { return y; }
    }

    public class Component {
        private Point location;
        public Point getLocation() { return new Point(location); }

        public double getDistanceFrom(Component other) {
            Point otherLocation = other.getLocation();
            int deltaX = otherLocation.getX() - location.getX();
            int deltaY = otherLocation.getY() - location.getY();
            return Math.sqrt(deltaX*deltaX + deltaY*deltaY);
        }
    }

    The Component.getDistanceFrom() algorithm is considered "good OO style." By breaking the pretty encapsulation -- "bad OO style" -- you can get a performance gain, though:

    public double getDistanceFrom(Component other) {
        int deltaX = other.location.getX() - location.getX();
        int deltaY = other.location.getY() - location.getY();
        return Math.sqrt(deltaX*deltaX + deltaY*deltaY);
    }

    Both code blocks require the math (two subtracts, two multiplies, an add, and a square root) but the original block (unoptimized) also requires the allocation of the Point object and the two memory copies to store the (x,y) location.

    This is a trivial example, but my point is that in a complex Java project, readable and elegant code bears no correlation to fast and efficient code. I believe this is why Java is slow.

    • You'll find that your two examples will compile to the same bytecode in any modern Java compiler. Trust me: after literally two months of trying every line-by-line optimization I could think of to squeeze every tenth of a percent out of the main loop, I came to the conclusion that the guys at both Sun and IBM (the Jikes compiler) are just freakish code gods and that I should never, ever attempt any optimization below the level of algorithm selection. How freakishly good was this compiler? I had an algor...
    • Except, you're wrong. You're assuming that the compiler is doing *no* optimization. So, how would a compiler treat the first "inefficient" code example? Let's take a look:

      public double getDistanceFrom(Component other) {
          Point otherLocation = other.getLocation();
          int deltaX = otherLocation.getX() - location.getX();
          int deltaY = otherLocation.getY() - location.getY();
          return Math.sqrt(deltaX*deltaX + deltaY*deltaY);
      }

      Look at the implementation of getLocation(). It's a simple non-recursive procedure. In virtually all compilers, a simple procedure like this is going to be inlined, producing:

      public double getDistanceFrom(Component other) {
          Point otherLocation = new Point(other.location);
          int deltaX = otherLocation.getX() - location.getX();
          int deltaY = otherLocation.getY() - location.getY();
          return Math.sqrt(deltaX*deltaX + deltaY*deltaY);
      }

      (Note that to the compiler, all variables are effectively public, so this would not be an access violation.)

      Next, the compiler can (easily) prove (by inspection) that Point(Point) is a copying constructor. As a result, it can replace the use of new Point(other.location) with other.location so long as it proves it will not modify otherLocation, and of course, it can prove this quite easily by inspection of getDistanceFrom(). This results in:

      public double getDistanceFrom(Component other) {
          Point otherLocation = other.location;
          int deltaX = otherLocation.getX() - location.getX();
          int deltaY = otherLocation.getY() - location.getY();
          return Math.sqrt(deltaX*deltaX + deltaY*deltaY);
      }

      Also, note that getX() and getY() are also simple non-recursive leaf procedures, so the compiler would have inlined them at the same time, so you would actually get code equivalent to this:

      public double getDistanceFrom(Component other) {
          Point otherLocation = other.location;
          int deltaX = otherLocation.x - location.x;
          int deltaY = otherLocation.y - location.y;
          return Math.sqrt(deltaX*deltaX + deltaY*deltaY);
      }

      Which is now faster than your hand-written "optimization." Note, in fact, that you "over-optimized" by writing out other.location twice. If a compiler were actually dumb enough to implement it that way, literally, it would have involved an extra (and needless) pointer dereference to get the value of other.location twice, when it already had it sitting in a register. (Fortunately, most any compiler nowadays would have caught your "optimization" and fixed it.)

      In fact, the compiler wouldn't stop here. It might even rearrange the order in which it fetches x and y to avoid cache pollution and misses. Do you consider that each time you fetch a variable? Most programmers don't, and shouldn't. The moral of the story is that most programmers fail to realize how smart compilers have become. Compiler writers design optimizations with "good coding practices" such as this in mind, and the programmer will be rewarded if he or she uses them.
  • Programs (Score:5, Insightful)

    by MoogMan ( 442253 ) on Sunday October 09, 2005 @11:27AM (#13750645)
    Unfortunately, programs such as Azureus, which run piggishly slow on a 1.2GHz laptop, do nothing to dispel these 'performance myths'.
    • Re:Programs (Score:3, Informative)

      by Matt Perry ( 793115 )
      It sounds like there is something wrong with your laptop. I have a 1.3 GHz desktop machine at home. I run Azureus and it's as fast as any other application.
  • What a bunch of FUD (Score:3, Interesting)

    by SlayerDave ( 555409 ) <elddm1@g m a i l .com> on Sunday October 09, 2005 @11:29AM (#13750657) Homepage
    From TFA:

    The common code path for new Object() in HotSpot 1.4.2 and later is approximately 10 machine instructions (data provided by Sun; see Resources), whereas the best performing malloc implementations in C require on average between 60 and 100 instructions per call (Detlefs, et. al.; see Resources).

    Wow, that's really shocking. Until you actually look at the Detlefs paper [cs.ubc.ca] and realize that it was published in 1994, 11 years ago!! Who knows, maybe things have improved a bit in 11 years. The author certainly thinks Java is getting better; maybe it's possible that C/C++ compilers have improved as well.

    • by Sangui5 ( 12317 ) on Sunday October 09, 2005 @02:32PM (#13751468)
      Additionally, even if it takes more instructions per call, glibc malloc() will return freshly used "hot" memory to you, while the JVM will return the coldest memory to you--the author of the article admits that this is a problem, but it is worse than they say.

      First, you definitely have an L1 cache miss. On a P4, that is a 28-cycle penalty. You also likely have an L2 cache miss. Fast DDR2 memory has an access latency of about 76 ns -- call it 220 processor clock cycles. If you miss the D-TLB, add another 57 cycles. Total cost of your "fast" allocator: 305 cycles of memory access latency. 305 is somewhat higher than 100.

      Even worse, if you are on a system with less memory, you may miss main memory entirely, causing a page fault. That'll cost you several milliseconds waiting for the disk. That really, really hurts, since 8 milliseconds of disk latency is twenty-four million cycles at 3 GHz. 24 million, 24,000,000, 2.4 * 10^7 -- no matter how you write it, that's a lot of cycles to allocate memory.

      All of a sudden, 100 instructions to hit hot memory seems cheap.
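
      (Instruction counts aside, anyone can get a rough feel for allocation cost on their own JVM. A crude microbenchmark sketch, with the usual caveats about JIT warm-up and dead-code elimination; the class name is invented and the numbers are only a ballpark:)

          // Times a burst of small allocations; results vary wildly with
          // JVM, heap size, and collector.
          public class AllocBench {
              public static void main(String[] args) {
                  Object sink = null;
                  for (int i = 0; i < 1000000; i++) sink = new Object(); // JIT warm-up
                  final int N = 10000000;
                  long start = System.nanoTime();
                  for (int i = 0; i < N; i++) sink = new Object();
                  long elapsed = System.nanoTime() - start;
                  // printing sink keeps the allocations from being optimized away
                  System.out.println((double) elapsed / N + " ns/alloc " + (sink != null));
              }
          }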
  • by Henry V .009 ( 518000 ) on Sunday October 09, 2005 @12:01PM (#13750789) Journal
    Garbage collection is nice. It does its job and it does that job well.

    But garbage collection only helps you manage one sort of resource. In C++, the RAII techniques that help you manage memory are good for every resource, from file handles to database connections. Resource management in Java is not so nice; often it is quite a hassle (see the sketch below).

    Also, if you really want the benefit of garbage collection in C++, simply compile your program using something like the Boehm Garbage Collector.
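
    (A sketch of the try/finally ceremony the parent calls a hassle -- one block per resource, nested as resources stack up; the file name is invented:)

        import java.io.FileInputStream;
        import java.io.IOException;

        // Deterministic cleanup in Java: each resource needs its own
        // try/finally, where C++ RAII releases in the destructor.
        public class ReadFirstByte {
            public static void main(String[] args) throws IOException {
                FileInputStream in = new FileInputStream("data.bin"); // hypothetical file
                try {
                    System.out.println("first byte: " + in.read());
                } finally {
                    in.close(); // runs even if read() throws
                }
            }
        }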
  • Custom mallocs (Score:4, Insightful)

    by Mybrid ( 410232 ) on Sunday October 09, 2005 @12:14PM (#13750860)
    It is interesting that the benchmark was done against the standard malloc, when I've found that most high-performance systems software implements its own malloc and other systems code based on workload characteristics.

    Which is the point.

    Systems programmers write systems code. There is no one-size-fits-all. There is no silver bullet. Comparing out-of-the-box C/C++ to out-of-the-box Java is a non-starter in my opinion, because I've never used out-of-the-box C/C++ for large-scale performance applications. What Google did in writing custom systems software is something that cannot happen with Java, and it is the accepted practice for C/C++.

    Java programmers write applications in a "one-size-fits-all" performance environment. Comparing Java to C/C++ is like comparing apples to oranges.

    Serious C/C++ systems programmers write their own malloc and systems software.

  • by radtea ( 464814 ) on Sunday October 09, 2005 @04:08PM (#13751986)
    Summary: "If JVMs were smart, garbage collection would be fast."

    Reality: "JVMs are mostly very stupid, and you can never be sure what JVM your users are going to use, so in the real world of deployed applications garbage collection performance--and Java performance generally--is a nightmare."

    I am so tired of GC advocates talking smugly about theoretical scenarios. Who cares? When I can run a Java app on an arbitrary JVM and not have it come to a grinding halt every once in a while as the garbage collector runs -- or, worse yet, bring the machine to a grinding halt because the garbage collector never runs -- only then will GC be useful.

    The weasel-words in the article are worthy of a PHB: "the garbage collection approach tends to deal with memory management in large batches" Translation: "I wish GC dealt with memory management in large chunks, but it doesn't, so I can't in all honesty say it does, but I can imagine a theoretical scenario where it does, so I'll talk about that theoretical scenario that I wish was real instead of what is actually real."

    This is not to say that there aren't one or two decent JVMs out there with decent GC performance. But having managed a large team that deployed a very powerful Java data analysis and visualization application, having done work in the code myself, having had to deal with users' real-world performance issues, and having seen the incredible hoops my team had to go through to get decent performance, I can honestly say that up until last year, at least, Java was Not There with regard to GC and performance.

    The most telling proof: my team did such a good job and our app was so fast that many users didn't believe it was written in Java. It was users making that judgement, not developers. Users whose only exposure to Java was as users, and whose empirical observation of the language was that it resulted in extremely slow apps. They didn't observe that it was theoretically possible to write slow apps. They observed that it was really, really easy to write slow apps, in the same way it's really easy to write apps that fall over in C++, despite the fact that theoretically you can write C++ apps that never leak or crash due to developers screwing up memory.

    Every language has its strengths. Java is a good, robust language that is safe to put into the hands of junior developers to do things that would take a senior developer in C++. But its poor performance isn't a myth, nor is its tendency to hog system resources due to poor GC. Those are empirical facts, and this article introduces no actual data to demonstrate otherwise.

  • by pauldy ( 100083 ) on Sunday October 09, 2005 @04:48PM (#13752266) Homepage
    This is about as much of a myth as the myth of the earth traveling around the sun. There is a lot more to allocating memory than this article lets on. Pseudocode will always be slower than machine code, no matter how you slice it. You can do various tricks to make it seem faster by instruction count at certain things, but in the end those same tricks can be applied to compiled code like C/C++. Most people can accept that as fact and move on, using Java where it seems appropriate. Make all the excuses and extraordinary claims you want; in the end, Java is just plain old fat and slow, and anyone in any programming course who has used Java can tell you that.

    Oh, and it is hard to test out his little theory about malloc/free in C++, because you can't do the same thing in Java. You can try, but it will get around to the free part when it gets good and ready, and may not even attempt the malloc until you use the memory.
