Understanding Memory Usage On Linux
Percy_Blakeney writes "Have you ever wondered why a simple text editor on Linux can use dozens of megabytes of memory? A recent blog posting explains how the output of the ps tool is misleading and how you can get a better idea of how much memory a process really uses."
Not only shared libraries (Score:5, Informative)
It could also have mentioned mappings on
On a related note, X also looks big because it's holding pixmaps belonging to various applications (Firefox comes to mind).
Re:Not only shared libraries (Score:2, Interesting)
Re:Not only shared libraries (Score:5, Informative)
In a nutshell: I can use mmap() to map a region of memory, and I can have a pointer to this memory.
The problem? The memory doesn't exist. What I have is a pointer, and a guarantee that enough backing store exists to satisfy it.
If I read through that pointer, I will see zeros. It *is*
The mmap() call can map a file (backing store) and allow data to be shared. Memory does not need to be used until the data is read (or written). And this time, the backing store doesn't even need swap (because the file is the backing store).
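The point about mappings costing nothing until touched is easy to see for yourself. Here's a minimal Python sketch (Linux-only, since it reads /proc/self/status; the 64 MiB size is arbitrary):

```python
# Sketch: an anonymous private mapping reserves address space, but
# resident memory (RSS) only grows once pages are actually touched.
import mmap

def vm_kb(field):
    """Read a VmSize:/VmRSS: value (in kB) from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(field):
                return int(line.split()[1])
    raise KeyError(field)

SIZE = 64 * 1024 * 1024  # 64 MiB

size_before = vm_kb("VmSize:")
rss_before = vm_kb("VmRSS:")

buf = mmap.mmap(-1, SIZE)        # anonymous mapping, like mmap(MAP_ANONYMOUS)

size_mapped = vm_kb("VmSize:")   # address space grew by ~64 MiB...
rss_mapped = vm_kb("VmRSS:")     # ...but almost none of it is resident yet

for off in range(0, SIZE, 4096): # now touch every page
    buf[off] = 1

rss_touched = vm_kb("VmRSS:")    # the resident set grew by ~64 MiB

print(size_mapped - size_before, rss_mapped - rss_before,
      rss_touched - rss_before)
```

After the mmap() call, VmSize jumps by the full 64 MiB while VmRSS barely moves; only the touch loop makes the kernel supply real pages.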
All of which means non-changeable memory may be altered, and changeable memory may be non-existent or shared. Try to teach that to your DG tools.
A page of code that is shared may become a page of code that is private. A page of data that is unwritten doesn't have to exist, even if it is read! A page of data that is written may STILL be shared.
"ps" and the other tools could walk through typical process maps, counting up pages, and figuring out what each was for, but that may be a bit too intensive. The pages aren't "cross referenced" for that purpose. Besides, the page could be COWd, and then swapped. Should THAT count against the memory of the application? Maybe, maybe not.
So "ps" by default gives you an idea of the "big picture" for each process.
Ratboy
A 'simple' editor ? (Score:4, Funny)
Re:A 'simple' editor ? (Score:5, Funny)
Re:A 'simple' editor ? (Score:3, Funny)
Re:A 'simple' editor ? (Score:2)
The only thing running (Score:5, Insightful)
A typical C/C++ based app uses just as much memory, it's just shared between processes. And for that matter, startup time of the first thing using kde/gnome isn't all that great either. Isn't it about time some effort was put into making Java or Mono part of the system, so it can be shared like C apps do?
Re:The only thing running (Score:2, Informative)
That's the point. Nobody cares about how much actual memory a C/C++ app touches.
Making Java "part of the system" won't help much either, because the libraries aren't the same. You could argue that at the bottom of the pyramid it's still libc being used, but we still have, and need, all the wrappers on top of the library to make it compatible with Java code.
So until people find it normal to run more than one o
Re:The only thing running (Score:4, Insightful)
Part of the problem with Java is that each VM has traditionally had its own copy of the Java class library. When you consider how huge the standard library is these days for Java, it's no wonder that even a small Java program consumes so much memory. And running several programs, each duplicating data from the others, is wasteful.
Apple has had for years a JVM that shares classes between numerous virtual machine instances. It thus reduces unnecessary memory consumption.
Re:The only thing running (Score:2)
So it's kind of like a slow version of native compiled code from gcj?
Re:The only thing running (Score:5, Interesting)
Sun JVM running a simple "Hello World" program that sleeps 1000ms between messages in an endless loop:
mapped: 260888K writeable/private: 199604K shared: 54652K
It wouldn't be hard to create a launcher that would run them all on the same virtual machine. Such a launcher would be a candidate for the system integration you suggest. After all, if you needed to run Windows apps on your Linux box, you wouldn't run multiple instances of VMware, would you?
Re:The only thing running (Score:5, Interesting)
The problem is that the bulk of Java's libraries aren't shared. At least, that's how it looks to pmap.
There's clearly something pmap is missing. I just tried exactly what you said, and on my system, pmap reports:
So then I ran 200 copies. pmap reports the same stats for each of them, but that's clearly absurd. 170MB writable/private multiplied by 200 instances is 34GB of RAM. But my laptop has 2GB of RAM, only 136KB of swap is being used, and 800MB of my RAM is being used for cache. So those 200 java instances, plus everything else I have running at the moment (which is quite a bit), are consuming about 1.2GB of RAM. That's quite a bit, but nothing like what pmap would lead me to believe.
Thinking about it, I think I can see what's going on. pmap is showing the mapped size of each process. Each JVM individually mmaps a huge amount of address space, because it maps in all of the Java libraries. However, mmapped pages don't consume any physical memory unless they're actually used. This trivial program only uses a tiny portion of the libraries, so only a very small part of the mapped pages is actually read. Each JVM also mmaps a big block of anonymous memory for use as its heap but, again, the mapped address space doesn't consume RAM/swap if it's never touched, and this trivial app doesn't use much heap.
My conclusion: pmap *also* overestimates memory usage, because some portion of the mapped address space isn't actually in use. RSS, on the other hand, only measures memory that is actually in use, but doesn't distinguish between memory that is shared and memory that is not. VSZ is the most pessimistic measure, since it includes all mapped memory, shared and unshared.
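The VSZ-versus-RSS distinction is easy to check against ps itself. A Python sketch (Linux, assumes a procps-style ps is on the PATH; the 128 MiB figure is arbitrary):

```python
# Sketch: compare what ps reports as VSZ (all mapped address space)
# and RSS (resident pages) for this very process, before and after
# mapping a large anonymous region that is never touched.
import mmap
import os
import subprocess

def ps_vsz_rss(pid):
    """Ask ps for the VSZ and RSS (both in kB) of the given pid."""
    out = subprocess.check_output(
        ["ps", "-o", "vsz=,rss=", "-p", str(pid)], text=True)
    vsz, rss = (int(x) for x in out.split())
    return vsz, rss

pid = os.getpid()
vsz1, rss1 = ps_vsz_rss(pid)

# Keep a reference to buf so the mapping stays in place.
buf = mmap.mmap(-1, 128 * 1024 * 1024)  # map 128 MiB, touch none of it

vsz2, rss2 = ps_vsz_rss(pid)
print(f"VSZ grew by {vsz2 - vsz1} kB, RSS grew by {rss2 - rss1} kB")
```

VSZ jumps by the full 128 MiB while RSS stays roughly flat: exactly the "pessimistic" behavior described above.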
Looking again at my 200 Java processes confirms this. Each has an RSS of 6.7MB, which is too much to be "correct" (in the sense that 200*6.7MB is more RAM than my whole system is using), but not hugely too much, which tells me that a lot of that 6.7MB RSS is unshared.
Looking at the pmap output in more detail, I see that most of the memory is mapped in three big anon blocks -- probably the heaps used by the generational garbage collector. The libraries are smaller and they're (duh) read-only which I'm pretty sure means the libs *are* shared across multiple instances of the JVM, because I believe that multiple processes that mmap the same file in read-only mode only get a single shared copy.
That means the bulk of the actual memory usage is writable, not libraries, and it's all unused heap space. Assuming the generational GC does the obvious thing and unmaps the whole "dead" generation block, the bulk of the heap space will usually be unused... and the JVM actually will "give back" heap that it no longer needs, at least in part. RSS should show that. Hmm... how to construct a test case to verify it...
Bottom line, I think: Java apps do use a little more actual memory than C/C++ apps, and trivial Java apps do use a lot more actual memory than trivial C/C++ apps, but it's not nearly as bad as pmap shows because the GC will always have a lot of extra memory mapped that has never been touched (assuming it does unmap and remap the dead generation, and it would be stupid not to).
Enough of my rambling, semi-informed speculation. Anyone who knows more about how this stuff works, please weigh in and correct me.
200 instances and 170 megs (Score:2, Informative)
Memory usage of Java actually scales very nicely with a silly number of threads. A couple of months ago I created a small server which opened lots of listener sockets, each in its own thread.
Wit
Re:The only thing running (Score:2)
Re:The only thing running (Score:4, Informative)
Java has concurrent GCs now that do not freeze the entire VM while being run. And I've never seen the GC go "out of whack" and hang permanently (though I've seen many apps do this due to poor thread/resource management).
Re:The only thing running (Score:2)
Re:The only thing running (Score:2)
Were these graphical applications? Because if they are, you might want to try running them from the command line (with the concurrent garbage collector enabled) and see if the thing throws a NullPointerException just prior to shutdown. FreeCol did just this: it started throwing NPEs with i
Re:The only thing running (Score:3, Insightful)
The latest 1.4.2 still freezes the entire VM. I support a Java application for a living -- keep your evangelism to newbies...
Maybe 1.5 will bring some wonderful improvements in this area, but so far it apparently has not -- see another response to your posting.
Re:The only thing running (Score:2)
But the "you need not worry about memory management" was one of the top items in Java evangelism, and now the 'net is full of advice on how to manage memory in Java to avoid GC-related problems. Oops.
Perhaps that was one of the top items pushed by evangelists, but it should have been obvious to anyone who thought about it that while GC makes memory management *easier* (and it does!), it doesn't mean you can ignore memory issues entirely.
Re:The only thing running (Score:2)
Um, the latest Java out is 1.6 beta, and the latest stable is 1.5.
Ignorance about Java (Score:3, Interesting)
The JVM serving this page [jasmine.org.uk] currently has an uptime of 32 days. But in the past it's had uptimes of over 200 days. Neither it, nor any of the other Tomcat servers I run, has ever gone out of whack. Java (Tomcat, Weblogic and others) powers the web servers of many of the
Re:The only thing running (Score:2)
Re:The only thing running (Score:2)
Huge effort required, with some hard design decisions looming ahead. Jar files store java bytecode, which is (usually) jit-compiled at runtime to native code by the VM. My guess is that sharing the bytecode alone isn't enough and probably happens already (A disk page read by several different processes is loaded into 1 physical memory page which is then mmap'ped by the OS to the processes
Re:The only thing running (Score:2)
Re:The only thing running (Score:2)
This has actually been done some years ago, see the kissme project [sourceforge.net].
Re: (Score:2)
Re:The only thing running (Score:2)
I mean, it's nice that people explain that all that memory processes use is shared between processes, but the Linux desktop stack (kernel + libc + Xorg + Qt + libkde + app) is far from being perfect. Just take a look at the upcoming gnome 2.6.14: too many performance improvements in the shared libraries, right? [gnome.org]
Loading unneccessary libraries (Score:3, Informative)
Yes, it "fixes" the "problem", but so would using rpath to DSOs not writable by users or ensuring that LD_LIBRARY_PATH doesn't point to user writable directories. Without the load time bloat.
Regards,
--
*Art
Re:The only thing running (Score:2)
But Python, for example, is in the same situation. Yet it doesn't use so much memory, because it does the right thing and wraps the system libs for such things, instead of reimplementing everything.
A typical C/C++ based app uses just as much memory, it's just shared between processes.
But processes from many languages use these same libraries.
Isn't it about time some effort was put into making Java or Mono par
Re:The only thing running (Score:2)
A virtual machine is a virtual machine. There's nothing whatever to stop you running multiple Java threads running different applications in the same virtual machine - this is, after all, exactly what a
man page update (Score:5, Insightful)
Re:man page update (Score:4, Informative)
Because what ps reports is the truth, from a certain point of view.
Emacs (Score:2, Funny)
Re:Emacs (Score:2)
BBedit might use less memory -- haven't been following BBedit recently...
Linux file & memory management shines (Score:2, Informative)
Re:Linux file & memory management shines (Score:5, Informative)
Apparently, some people don't know that modern NT-based Windows versions also behave in exactly the same manner.
Re:Linux file & memory management shines (Score:2)
Re:Linux file & memory management shines (Score:2)
This is somewhat similar to the fact that you can't share a relocated module between processes in the "native" world. If you customize the loading of an assembly too much, the actual p
Re:Linux file & memory management shines (Score:2)
The sharing is there, for sure. Java shares all the classes within one VM instance, and old DLLs are also shared. The problem with Windows is just that they have even less discipline (and means) in organizing their files, libraries and executables (in short: resources) than the Unix world. So it's
Re:Linux file & memory management shines (Score:2)
Re:Linux file & memory management shines (Score:2)
Re: (Score:2)
Re:Linux file & memory management shines (Score:5, Informative)
Re:Linux file & memory management shines (Score:3, Informative)
The following two articles deal with executable and library loading in Windows, respectively:
http://msdn.microsoft.com/msdnmag/issues/02/02/PE/ [microsoft.com]
http://msdn.microsoft.com/msdnmag/issues/02/03/Loader/ [microsoft.com]
"Tuning Apache" is also excellent (Score:4, Informative)
Agreed - excellent article (Score:3, Interesting)
I had been experiencing issues reaching max clients on a busy Apache server serving around 6 Mbit/sec of images at peak times, and had been forced to increase the maximum child process setting to a very large number to cope with the peak daily periods.
Having just made the changes recommended in that article, i.e. changing the keep-alive timeout to around 2 seconds rather than the default of 15 -- we've gone from an average of
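For reference, the change described amounts to something like this in httpd.conf (the 2-second value is the figure from the article, not a universal constant; tune it to your traffic):

```apache
KeepAlive On
# Default is 15; long timeouts pin an idle child (and its memory)
# to each client between requests.
KeepAliveTimeout 2
```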
Forgot to mention startup times... (Score:3, Insightful)
A practical measure and perspective. (Score:3, Insightful)
The whole discussion should be grounded in the reality of alternatives. A typical M$ system will grind its way into swap space on startup, before the user loads anything! The very latest and greatest Linux distros run well on Pentium IIs and the like, which XP refuses to install on.
Re:A practical measure and perspective. (Score:2)
Solaris will go to swap very early on, likely swapping inactive pages, even if there is plenty of physical memory to be had.
FreeBSD doesn't seem to go to swap until it damn near runs out of physical memory to use.
Linux is somewhere in between the two. It doesn't go to swap quite as early as Solaris, and also not as late as FreeBSD.
Which way is best? Not sure
Re:A practical measure and perspective. (Score:4, Informative)
Re:A practical measure and perspective. (Score:3, Informative)
Re:A practical measure and perspective. (Score:2)
I find it quite ridiculous that logging into Windows XP -- which has very few programs installed on it, basically AV + some games -- causes my pagefile to start grinding, while I can run many instances of Konqueror, KDevelop, OpenOffice.org and watch a DVD and listen to music all at the same ti
Re:A practical measure and perspective. (Score:3, Insightful)
I can only imagine those swap sessions on a 233MHz machine. Linux does handle memory way better.
Re:A practical measure and perspective. (Score:2)
Windows just handles memory differently.
The VMM will aggressively swap out "unused" memory pages in order to increase the size of the disk cache.
So if you're running a game or something, closing the game will cause all kinds of inactive pages long swapped out to come back into active memory.
The downside to the Linux approach is that your disk performance is not as good as it could be while you're playing the game. This is because a lot of RAM is tied up with memory pages which are only going to be use
Linux is no memory hogging Operating System (Score:3, Interesting)
Re:Linux is no memory hogging Operating System (Score:2)
*Trelane removes his tongue from his cheek
Overdue (Score:5, Interesting)
A nice article; I've been looking for more information on this. So often you read items in program FAQs or such giving a disclaimer about how ps memory usage is misleading, but they offer no better way. Okay, so ps memory usage information is practically useless; now what am I supposed to use?
I was hoping for a bit more, though; like, say, a small program that lets you see both the aggregate virtual memory total as well as the memory used specifically by the program. Add a few options for how to handle the only-one-app-using-a-library situation. Doesn't seem like it'd be that hard, and very useful.
Sure (Score:3, Funny)
Of course. EMACS - eight megabytes and constantly swapping.
EMACS - eight megabytes and constantly swapping (Score:2)
But has emacs also had feature creep since then, with corresponding memory growth?
Worrying too much about efficiency (memory or processor) can be bad. Failing to keep it at least in the back of your mind while designing and coding leads to some of the awful per
Re:EMACS - eight megabytes and constantly swapping (Score:2)
The 8MB mark is still pretty accurate... this is for the console version with a couple of text files open:
Also applies to shared memory segments (Score:3, Insightful)
Memory Management (Score:4, Informative)
This issue with PS hides a huge Java issue (Score:3, Interesting)
It's also why systems running a Java framework with multiple programs executing in the same Java process do so much better than ones where everything is in its own process. This is Java's sweet spot, where these JVM architecture disadvantages have the least impact.
This is my understanding of how Java's libraries work. Someone let me know if I'm missing something here.
On OS X Java libraries are shared (Score:2)
top (Score:5, Informative)
I don't like how he phrases it: that what ps reports is "wrong." It's not wrong, or even "wrong." It reports exactly what Linux tells it (through the proc filesystem). It just might not be what you expect it to be, which means you don't understand the tools and the system. When ps reports that a process's virtual memory usage is x KB, that is correct: in the address space of the process, x KB have been allocated. Shared or not, they're still in the address space.
mod parent Overrated (Score:3, Informative)
Based on your over-simplified claim (which I'll call "wrong") the 43 java threads on my Tomcat box are using 3.0GB of RAM total, minus 426MB shared, which is impossible on a box with 256MB of RAM and 512MB swap.
More generally, the problem with ps (and top) is that they fail to highlight the most important piece of information: the amount of unshared memory each process is
It's also why Linux is so good at multi user (Score:4, Interesting)
Nostalgia is ... (Score:4, Interesting)
Re:It's also why Linux is so good at multi user (Score:3, Interesting)
Aren't they still resource hogs? (Score:3, Insightful)
Correct me if I'm wrong, but... doesn't the fact that KEdit uses a lot of libraries that consume resources and impact system performance -- whether shared or not -- still mean that it is a hog? I mean, if a seemingly simple application is consuming "dozens of megabytes of memory", saying "oh, it's OK, because most of it is being shared and already committed" does not really excuse it. What if those libraries are not currently being used by any other process?
In order for the shared memory to lessen the impact on the system, the user must be running some other processes that share the same libraries. This to me is a *BIG*, and unwarranted, assumption by the developer, as evidenced by his example of someone running the Gnome environment but running a single KDE application.
-dZ.
Right - and wrong (Score:2)
However, this still does not excuse a text editor for having a 25MB footprint!
When that editor runs it WILL require 25MB of memory to run (well, the working set may be a little smaller, but if we don't want any paging we need
Re:Right - and wrong (Score:2)
Unfortunately, for most GTK Linux apps at the moment (some of which I cannot live without), the "snappy" part isn't true because (so I'm told) GTK doesn't make proper use of graphics card acceleration. I'd gladly use application
Re:Right - and wrong (Score:2)
But if everyone thought the way you did, and say, you had a limited pool of generic compute servers that hundreds of you had to share, and everyone wanted to load their pet tools and eat up all the RAM rather than using the IT standard-build suite, I bet you'd have a completely different attitude.
Re:My own favorite is 'top'. (Score:5, Insightful)
But hey, 10 processes are using 10%...
Re:My own favorite is 'top'. (Score:5, Insightful)
However, asking the process how much memory it has allocated will show all memory, including stuff that is marked copy-on-write. That is, I could have 100 processes each showing 1.4MB of memory because they all map the same library, but in fact it's the same copy they are all using, so only 1.4MB is in use instead of 140MB (plus PCBs et al.).
More tips (Score:5, Informative)
A couple other tips:
* Each thread in a process shows up as consuming the same amount of memory (either this only happens under LinuxThreads, or I don't have any threaded applications running on my system).
* Device mappings show up as consumed memory (which generates plenty of XFree86/xorg complaints). If you want to find out how much memory xorg/X11 is actually using (bytes in cached pixmaps on behalf of each process, and sans device mappings), try this [69.142.116.122] program (it contains a tiny program that lists how much memory X is holding in pixmaps on behalf of other programs, and a perl script that lists how much memory X is using sans device mappings).
* The article mentions the fact that shared libraries show up in every application's memory usage. So, for example, glibc alone adds 1.5MB to the memory usage of every process. But Win folks may not realize how significant this is. Most Windows applications ship with their own copies of almost all shared libraries used, which means that there is a huge amount of wasted memory under Windows that *actually affects you*. Under Linux, instead of shipping shared libraries with applications, folks have built tools to automatically download the latest shared libraries and use those across multiple applications. Result: only one copy of the library need be in memory at a time. This means that it's actually reasonable to run a box with 128MB of memory and three remote users using the thing. You simply can't pull that off under Windows and expect usability.
* This may not sound significant, but Linux's VM is (anecdotal evidence, of course) really solid. When I run out of memory under Windows, performance rapidly degrades -- bring an application to the foreground, and the system just starts churning. Under Linux, you can push a ways into VM and things generally keep functioning pretty well (this is one of the causes of people talking about "applications loading faster under WINE than Windows" when they're trying to prove that WINE is 'faster' than Windows -- good disk I/O and VM code).
Re:More tips (Score:3, Interesting)
Re:More tips (Score:3, Informative)
Re:More tips (Score:3, Informative)
Under LinuxThreads, each thread had its own PID. Under NPTL (Native POSIX Thread Library) all threads from the same process share the same PID, but each thread has a unique TID (which you can get with the Linux specific call gettid()). Calling getconf GNU_LIBPTHREAD_VERSION from a prompt should tell you what library and version
Re:More tips (Score:3, Interesting)
The problem is, there's just no good way to handle low memory conditions.
Re:My own favorite is 'top'. (Score:4, Insightful)
Take a shared library. For whatever reason, process 1 uses only the first half of the library. Thanks to demand loading, only that half is loaded in memory, and that's what counts as RSS for that process -- say 10 MB.
Now a process 2 is launched and it uses the other half of the library. Now the whole library is loaded in memory, and even though the first process is not using and has not requested the second half, its RSS will grow because somebody else uses other parts of the library.
I don't think it's something you can or want to "solve": That's a consequence of the design ideas behind shared libraries. Deal with it.
Re:My own favorite is 'top'. (Score:4, Interesting)
This is a (moderately ugly) gtk+ tool which uses a loadable kernel module to work out which pages are used by more than one process. If a page is used by N processes, each process is credited with PAGE_SIZE/N bytes.
I believe it "solves" the problem you describe above. The biggest problem is that it provides a little too much information, so perhaps I should simplify it a bit.
(Known problems with the current 0.8 version: some of the tests fail intermittently, and some systems with pre-linked ELF binaries can cause errors. Both should be fixed in the next release.)
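For what it's worth, modern kernels expose exactly this PAGE_SIZE/N accounting as the "Pss" (proportional set size) field in /proc/PID/smaps, so no kernel module is needed. A Python sketch that sums it for the current process (assumes a kernel new enough to have Pss lines in smaps):

```python
# Sketch: sum Rss and Pss over all mappings of a process.
# Pss charges each resident page as PAGE_SIZE / (number of sharers),
# the same crediting scheme as the tool described above.

def rss_and_pss_kb(pid="self"):
    """Return (total Rss, total Pss) in kB from /proc/<pid>/smaps."""
    rss = pss = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Rss:"):
                rss += int(line.split()[1])
            elif line.startswith("Pss:"):
                pss += int(line.split()[1])
    return rss, pss

rss, pss = rss_and_pss_kb()
print(f"Rss: {rss} kB, Pss: {pss} kB")
```

Pss is always at most Rss, and the gap between them is exactly the memory this process shares with others.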
Re:My own favorite is 'top'. (Score:3, Interesting)
The "feature" that I find annoying about top, though it's really rather necessary for a CLI program, is that only the most CPU-intensive programs at a given instant get to the top. This isn't a problem with truly CPU-intensive programs that are constantly running. But all too often there's a program that's spiking to 30% or more CPU intermit
Re:My own favorite is 'top'. (Score:3, Informative)
This has nothing to do with CLI vs GUI programs, and everything to do with what you're choosing to sort by. You can change the sort order in top.
If you sort by PID or process name or something else less volatile than CPU percent
Re:My own favorite is 'top'. (Score:4, Informative)
Jeremy
Nice (If Math Heavy) Load Average Article (Score:2)
Re:Extra, extra, read all about it (Score:5, Funny)
Re:Extra, extra, read all about it (Score:5, Insightful)
Slashdot isn't only about breaking tech news; it's about keeping geeks generally informed. Many Linux geeks (including myself) probably learned something from the article that they didn't know. It's a well-written, informative article, and I'm glad Slashdot posted it because otherwise I probably would have never seen it. Not every Slashdotter already knows everything there is to know about Linux like you apparently do, and I imagine this isn't quite "common knowledge," so it's helpful for some of us.
What have I done wrong in my settings to deserve such trivial items?
No one forced you to click on "Read More." Sorry that you wasted a couple seconds reading the summary and realizing you already knew all about ps, but you didn't need to waste even more of your time trolling.
Re:Extra, extra, read all about it (Score:2)
Re:Extra, extra, read all about it (Score:3, Informative)
Re:Before you start bitch about Firefox memory lea (Score:5, Interesting)
About 8 months back I attempted to embed Gecko within an existing graphical user interface toolkit. Having heard so much from the open source community about how easy it was to do, I thought it would go rather quickly. Of course, it did not. The lack of up-to-date documentation (if such documentation exists at all) and solid examples was one of the big problems.
But the overall architecture struck me as the worst part of Mozilla. Like it or not, it's overly complicated and convoluted in many areas. I admit that it's not easy to build well-designed software, but they so completely missed the boat it's unbelievable. It does, however, make it obvious why many people complain about Firefox and SeaMonkey running so slowly, in addition to suffering from huge memory consumption.
As for the embedding of Gecko, I said to hell with it. I took a page from Apple, and used KHTML instead. The loss in portability by not going with Gecko was well worth the far quicker development time, the lower memory consumption, the increased responsiveness, and the higher degree of stability of KHTML.
Re:Before you start bitch about Firefox memory lea (Score:5, Informative)
I'm kinda curious who you heard that from. Embedding Mozilla when you've got an already existing binding (such as for Gtk) is trivial, but writing the binding from scratch is no easy task. Gecko is a beast and the need to integrate its own drawing layer with yours makes it hard to integrate as an embedded browser. In its defense, it was never designed or intended for such a purpose. KHTML is only easier if you're using Qt (and you *did* obey the license, right?), otherwise you need to provide mappings from all the Qt primitives used by KHTML to your own. Easier than embedding Gecko, but still not trivial.
Re:Before you start bitch about Firefox memory lea (Score:2)
About 8 months back I attempted to embed Gecko within an existing graphical user interface toolkit. Having heard so much from the open source community about how easy it was to do, I thought it would go rather quickly. Of course, it did not.
I'm not sure why you struggled, but I had no trouble embedding Gecko in a wxWidgets application - it took me a couple of hours at most.
Re:Before you start bitch about Firefox memory lea (Score:2)
What exactly is the portability problem with KHTML? I would be interested in a KHTML browser for windows and it's been said that with Qt4 and the work of various developers you'll be able to get an almost
GtkMozEmbed is useless for what I wanted. (Score:2)
Second of all, it doesn't serve as a good example. Look at the code yourself. It's horribly convoluted. While it does indeed provide a fairly usable interface to the programmer who is using it, what's under the
Re:Before you start bitch about Firefox memory lea (Score:5, Insightful)
Re:Before you start bitch about Firefox memory lea (Score:4, Interesting)
Part of the problem is that it's far too easy for bugs to creep into Mozilla. The code is a small step above horrible, and the architecture isn't much better. A lack of up-to-date documentation leads to programmers not knowing which XPCOM interfaces are deprecated, and which aren't.
You can look at browsers like Konqueror and Opera, which offer a feature set very comparable to Firefox's, yet do not suffer from the same drawbacks. Not only that, but Konqueror and Opera are often described as feeling far more responsive, while being extremely stable. It's things like that which really impress the average Jill and Joe. Excessive memory usage will just perplex them, and likely send them back to Internet Explorer.
Re:Before you start bitch about Firefox memory lea (Score:2)
Now, I can run either Firefox or Konqueror on this machine, in both cases without needing swap. Both will slowly grind to a halt after a couple of hours of intensive use due to running out of memory. No idea if it is
Re:Before you start bitch about Firefox memory lea (Score:2)
Not only that, but Konqueror and Opera are often described as feeling far more responsive, while being extremely stable.
Actually, I would really prefer to use Konqueror (I use KDE and I really like the integration), but I use Firefox because Konq is so much slower. I just tested it again and Konq is *much* slower on my system. I picked a site that I hadn't visited for months (to make sure Firefox didn't have a cache advantage), typed the URL into each browser, clicked on Konq and hit enter, then clicke