
Comprehensive Guide to the Windows Paging File

busfahrer writes "Adrian's Rojak Pot has a nice article about the internals of the Windows paging file. It explains what a paging file is and lists the differences between a swapfile and a paging file. But first and foremost, a large part of the article deals with the various methods of optimizing the Windows paging file, thus yielding a notable performance gain for people who are not overly blessed with RAM."
This discussion has been archived. No new comments can be posted.


  • by LordNimon ( 85072 ) on Friday March 25, 2005 @04:25PM (#12048787)
    I can't read the article now because it's slashdotted, but is there a difference between the swap file and the paging file in Linux? Does Linux even have a paging file?
  • by The Snowman ( 116231 ) * on Friday March 25, 2005 @04:29PM (#12048823)

One of the biggest performance wins is keeping the paging file from becoming fragmented, and I'm not talking about three or four fragments.

    The best way to avoid fragmenting the swap file is a method I learned a long time ago, and the author mentions in his article but doesn't talk about much: keeping it on a separate partition. Sure, NTFS doesn't have a swap partition type like Linux does, but I keep a 2 gig partition with a fixed-size swap file on my WinXP box. Set the registry key to ignore "out of space" warnings for that drive, remove read privileges from everybody to that drive, and you basically have an invisible, un-fragmentable swap file that is invincible to user stupidity (I share my computer with my wife, so that last point is important. She does not have Administrator privileges on my box).
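    For the registry-minded, the same fixed-size file can be configured without clicking through the System control panel. A minimal sketch in Python, assuming the standard winreg module and Administrator rights; the "PagingFiles" value under Memory Management is the real one Windows reads at boot, but the P: drive letter and 2048 MB figures here are placeholders for your own dedicated partition:

        # Point Windows at a fixed-size page file on a dedicated partition.
        import winreg

        KEY_PATH = (r"SYSTEM\CurrentControlSet\Control"
                    r"\Session Manager\Memory Management")

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
            # Format: "path initial_MB maximum_MB". Equal initial and
            # maximum sizes give a fixed, non-growing (and therefore
            # non-fragmenting) page file.
            winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ,
                              [r"P:\pagefile.sys 2048 2048"])

        # Windows only picks up the new setting after a reboot.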

  • by DNS-and-BIND ( 461968 ) on Friday March 25, 2005 @04:34PM (#12048879) Homepage
Solaris can use a swap partition or a swap file on disk. You can even add more swap space while Solaris is running, using mkfile to create the file and swap -a to add it. Had to do that a few times... running Solaris does not mean that your developers know how to create scalable software...
  • Before the defrag (Score:3, Interesting)

    by Tumbleweed ( 3706 ) * on Friday March 25, 2005 @04:41PM (#12048941)
    Make sure it's a fixed-size page file, not system-managed.

    By using 'Diskeeper,' you can also do some additional optimizations besides just defragging it; it's a nice app, though its warnings are overly dire, and it insists on keeping something resident in memory all the time, which is irritating.
  • by supmylO ( 773375 ) <bjarosz&gmail,com> on Friday March 25, 2005 @04:53PM (#12049056)
    I have over a gig of RAM, but my laptop's HD is rather slow. How do I make it so that my computer will not use one of these files? I doubt there are many times when I'm using all of my RAM, so I'd like to keep HD use to a minimum.
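    Before turning the page file off entirely, it's worth verifying that you really never get close to exhausting RAM. A minimal sketch (Python with ctypes, Windows only) that polls the real GlobalMemoryStatusEx API:

        # Report current memory load so you can judge whether the page
        # file is ever actually needed.
        import ctypes

        class MEMORYSTATUSEX(ctypes.Structure):
            _fields_ = [("dwLength", ctypes.c_ulong),
                        ("dwMemoryLoad", ctypes.c_ulong),
                        ("ullTotalPhys", ctypes.c_ulonglong),
                        ("ullAvailPhys", ctypes.c_ulonglong),
                        ("ullTotalPageFile", ctypes.c_ulonglong),
                        ("ullAvailPageFile", ctypes.c_ulonglong),
                        ("ullTotalVirtual", ctypes.c_ulonglong),
                        ("ullAvailVirtual", ctypes.c_ulonglong),
                        ("ullAvailExtendedVirtual", ctypes.c_ulonglong)]

        stat = MEMORYSTATUSEX()
        stat.dwLength = ctypes.sizeof(stat)
        ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))

        print("memory load: %d%%" % stat.dwMemoryLoad)
        print("physical: %d MB free of %d MB" %
              (stat.ullAvailPhys // 2**20, stat.ullTotalPhys // 2**20))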
  • Virtual Memory (Score:2, Interesting)

    by MCZapf ( 218870 ) on Friday March 25, 2005 @04:55PM (#12049081)
    The article is a little misleading. Virtual memory is not just space on a hard disk that you use when you run out of physical memory. Virtual memory is the practice of giving each process its own virtual address space that is independent of the physical address space. Doing this allows the OS to send some pages of memory to disk if it needs to, but the OS is still using the mechanism of virtual memory even if there is no hard drive at all. Each process's memory space is independent of the others'.
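    A toy illustration of that point (not any real OS's code): two processes can map the same virtual page to different physical frames, and touching an unmapped page raises a fault that the OS may service from disk, or not, if there is no disk at all:

        PAGE_SIZE = 4096  # 4 KB pages, as on x86

        def translate(page_table, vaddr):
            vpn, offset = divmod(vaddr, PAGE_SIZE)
            frame = page_table.get(vpn)   # None models a cleared present bit
            if frame is None:
                raise LookupError("page fault at %#x" % vaddr)
            return frame * PAGE_SIZE + offset

        proc_a = {0: 7, 1: 3}   # virtual page number -> physical frame
        proc_b = {0: 12}        # same virtual page 0, different frame

        print(hex(translate(proc_a, 0x0042)))   # 0x7042
        print(hex(translate(proc_b, 0x0042)))   # 0xc042
        try:
            translate(proc_a, 2 * PAGE_SIZE)    # page 2 is unmapped...
        except LookupError as fault:
            print(fault)                        # ...so this "page faults"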
  • by RatBastard ( 949 ) on Friday March 25, 2005 @04:59PM (#12049127) Homepage
    Turn off virtual memory and see how many MS apps suddenly stop working. And we're not talking about memory pigs, either: some screen savers won't run without virtual memory, no matter how much RAM you have. It's really stupid.
  • by gid ( 5195 ) on Friday March 25, 2005 @05:00PM (#12049133) Homepage
    I've run into similar problems. I have a gig of RAM on my machine; if I have my Java IDE loaded up and run a game like UT2k4 for a bit, my machine will thrash like mad when I exit. But then I tried disabling the page file altogether on someone's suggestion, and you know what? Everything works fine: no memory problems, no thrashing.
  • by greendoggg ( 667256 ) on Friday March 25, 2005 @05:16PM (#12049276)
    I don't usually set out to bash Windows, but the Windows virtual memory subsystem is as dumb as a brick.

    At work I am blessed with 1GB of RAM, so I don't ever need to use any virtual memory. What I find really interesting is that Windows is noticeably more responsive when I turn off virtual memory entirely. Even though I'm never running out of memory, Windows was always swapping out things I would need soon when I had virtual memory enabled. And I'm talking about times when 2/3 of my memory was still unused (at least by applications; the disk cache could have been using the rest). Just turning off virtual memory altogether made things much more responsive for me.
  • by couch_warrior ( 718752 ) on Friday March 25, 2005 @05:22PM (#12049330)
    It is always amusing to me to read how M$ "invented" technologies that were in common use by other operating systems while Bill Gates was still wearing diapers. Both paging and swapping were used on IBM mainframes dating back to the mid-1960s. What windoze (and linux, for that matter) could REALLY use is the ability to deterministically dedicate portions of system resources to particular processes.
    Back when a 1 MIP, 1MB machine cost $1M, operators became highly skilled at managing workloads. Today everyone just throws oodles of RAM and disk at servers and lets the chips fall where they may, so to speak. It wouldn't be a bad idea to actually put a little thought into matching workloads with machine resources, and pro-actively matching them up by deliberate choice (for example: our company is running a prime-time ad at 9pm that features our URL; maybe that's a good time to postpone the normal file backup that happens then). Chaos theory is not always a good load balancer. But what am I saying, that's as outrageous as asking kids to put their money in the bank instead of buying video games...
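    For what it's worth, NT does expose a crude per-process version of this. A minimal sketch (Python with ctypes, Windows only) using the real SetProcessWorkingSetSize API; the 32/64 MB bounds are arbitrary examples, and the caller needs PROCESS_SET_QUOTA access to the target process:

        # Reserve a guaranteed working-set range for the current process.
        import ctypes

        kernel32 = ctypes.windll.kernel32
        handle = kernel32.GetCurrentProcess()  # pseudo-handle, no close needed

        MB = 1024 * 1024
        ok = kernel32.SetProcessWorkingSetSize(handle,
                                               ctypes.c_size_t(32 * MB),
                                               ctypes.c_size_t(64 * MB))
        if not ok:
            raise ctypes.WinError()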
  • by Anonymous Coward on Friday March 25, 2005 @05:37PM (#12049480)
    Thank you! Finally someone said it. There's no "print" option that shows all pages at once, and there's no indication of how many pages are to come. This is by far the MOST ANNOYING article I've tried to read in a long time. Actually I gave up after about 5 pages (and seeing nothing technically interesting to speak of); I'm surprised you had the patience to determine that there are 44 pages. And pleased that you're warning the rest of us.
  • by gnuman99 ( 746007 ) on Friday March 25, 2005 @05:47PM (#12049622)
    I'm not sure if Linux has a paging file, but I fail to see why the linux swapping can't be done preemptively as well. If a program loses focus, swap the memory used by it, but retain the active copy in RAM.

    I don't think you have any idea how the Linux VM works. There is no "focus" because there are no windows at that level; there are processes, and how they run depends on the scheduler.

    Windows memory management sucks. There is no question about it. If I don't have a gigabyte page file, I run out of memory (according to Windows) when I play GTA 3 for a while. The process uses only about 300M, but Windows pops up crap about running out of memory, while the memory manager says Windows is using 500M for caching. WTF?

    This is not a troll, but Linux's memory management is vastly superior. Even when I run programs that use up 80% of RAM and force the cache to shrink, swap is still not used extensively.
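    For the curious, the 2.6 kernel even makes its eagerness to swap a tunable. A minimal sketch (Linux only; writing the value requires root, and 10 is just an example favoring processes over page cache):

        SWAPPINESS = "/proc/sys/vm/swappiness"

        with open(SWAPPINESS) as f:           # readable by anyone
            print("current:", f.read().strip())

        with open(SWAPPINESS, "w") as f:      # root only
            f.write("10")                     # 0..100; lower = swap less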

  • by Peaker ( 72084 ) <gnupeaker@nOSPAM.yahoo.com> on Friday March 25, 2005 @07:29PM (#12050587) Homepage
    Virtual memory is indeed the page tables that provide virtual-to-physical address mappings, which can differ per process for isolation purposes.

    However, those same tables are used to mark pages as not present, indicating that they are stored on some external storage or do not exist at all. This is also known as a "virtual memory" feature, so it is not inaccurate to say that one purpose of virtual memory is to virtualize more memory than is physically available.

    Also, the 80286 already added the protection features required to prevent applications from crashing the system and each other, but those features are very complicated and, although still supported, mostly unused by modern operating systems.

    P.S. Some of those features could actually be of benefit (for example, using segmentation so the stack is mapped only through data selectors and never through a code selector could prevent most buffer-overflow exploits).
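    A toy decoder for a 32-bit x86 page-table entry shows the dual role described above; the bit layout is the standard x86 one, but the sample entry values are made up:

        PRESENT  = 1 << 0   # bit 0: page is in physical memory
        WRITABLE = 1 << 1   # bit 1: read/write
        USER     = 1 << 2   # bit 2: accessible from ring 3

        def decode_pte(pte):
            if not pte & PRESENT:
                # The CPU faults; the OS may use the remaining 31 bits
                # to remember where on disk the page lives.
                return "not present -> page fault (maybe fetch from pagefile)"
            frame = pte >> 12   # bits 12-31: physical frame number
            return "frame %#x, %s, %s" % (frame,
                   "rw" if pte & WRITABLE else "ro",
                   "user" if pte & USER else "kernel")

        print(decode_pte(0x0001F007))   # frame 0x1f, rw, user
        print(decode_pte(0x12340000))   # not present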
  • by dananderson ( 1880 ) on Friday March 25, 2005 @07:53PM (#12050747) Homepage
    System V UNIX had no virtual memory until the late 1980s. Sure, there were specialized versions, such as BSD with VM added on, but it wasn't in stock UNIX. The idea was around long before that, though: the IBM System/360 (Model 67) and Multics had it in the 1960s.
  • by jilles ( 20976 ) on Friday March 25, 2005 @08:16PM (#12050886) Homepage
    If you do have the memory, disabling the page file is the best thing you can do performance-wise. In theory this is a bad idea; in practice it works very well.

    I have 1GB of memory. That is more than adequate for browsing, games, development, Photoshop, etc. Two years ago, after discovering that Windows was still swapping like crazy with only 300MB in use (and yes, I had all the popular registry hacks applied that are supposed to prevent that), I disabled the page file. The immediate result is that applications become much more responsive; suddenly multitasking becomes easy and fast.

    Normally, when you work in one application for a while, all the other applications get swapped to disk. It doesn't matter that you have 700MB of unused, readily available RAM; when you try to alt-tab to them, Windows spends a couple of seconds moving stuff back into memory. This is very annoying and totally unnecessary. The problem only gets worse if you run memory-intensive programs, because then Windows swaps aggressively.

    Disabling the page file fixes this problem. The only disadvantage is that when you need more than 1GB, it isn't there; but that is typically the point where things would get very slow anyway due to swapping. In any case, it doesn't happen very often and is easily resolved by closing some applications. All games are optimized to run well in 256-512 MB, and most don't use more than that even if it is available. Office applications and other desktop software rarely use more than 100MB. Photoshop can push the limits, but unless you are doing extreme high-resolution work with it, you will not run into problems with 1GB. It's actually quite hard to run out of memory: most things that are infamous for memory usage (MS Flight Simulator, Doom 3, Photoshop, VMware, etc.) all run without problems and without swapping.

    If you think about it, swapping is a really lousy solution unless you expect to run out of memory. Disk is many times slower than RAM. The reason you open programs is that you want them ready for action; swapping them to disk is therefore undesirable. The only time it is desirable is when the total amount of memory used by all of your running programs is larger than the amount of memory you have. So if you have 256 MB, a swap file is a nice poor man's solution to not having enough memory. If you have 1GB, you shouldn't have that problem (and if you do, buy another GB).
  • by mcrbids ( 148650 ) on Saturday March 26, 2005 @03:08AM (#12052772) Journal
    I think the problem is the developer culture that built up around Windows, coupled with its changing ideas of how to separate code and data.

    Man, I hear you! I've written software using the %HOMEDIR% variable, as you suggest, but the software I wrote is multi-user capable, meaning that multiple people might use it and each user has their own set of data.

    This works well in the Win98 world, but on XP, if several people share a computer and somebody logs out and back in as another user, all their data is "gone," since %HOMEDIR% has changed.

    We can't require separate user accounts, since many of our customers are still using 98/ME.

    So now what?

    I haven't had the chance to investigate thoroughly, but I've heard that even when things are "locked down" on an XP box, you can still create directories off the root directory!

    If we kept the files there, in a subdir, it just might make everything work without the above problems. Anybody else care to comment?
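    A hypothetical sketch of that fallback, for comparison: ALLUSERSPROFILE is set on NT-family Windows (2000/XP) but not on 98/ME, and "MyAppData" is a made-up directory name:

        import os

        def shared_data_dir(app="MyAppData"):
            # NT family: a machine-wide profile directory exists.
            base = os.environ.get("ALLUSERSPROFILE")
            if base is None:
                # Win98/ME: no per-user profiles; fall back to the root.
                base = "C:\\"
            path = os.path.join(base, app)
            if not os.path.isdir(path):
                os.makedirs(path)
            return path

        print(shared_data_dir())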
  • by pe1chl ( 90186 ) on Saturday March 26, 2005 @05:34AM (#12053117)
    If you think about it, swapping is a really lousy solution unless you expect to run out of memory. Disk is many times slower than RAM. The reason you open programs is that you want them ready for action; swapping them to disk is therefore undesirable.

    This is your situation, your opinion, and the basis of your success.
    Others may have different situations, and swapping may be better for them. On my Linux box I typically have 160-200 processes running, and I don't need them all to be ready for action. Only a subset needs to be ready, and I want those in RAM, together with the cached disk data that they access.
    Other programs, which are waiting for something unlikely to happen or are sleeping for long stretches (e.g., checking for OS updates once a day), I don't need to have in RAM.

    It has been shown many times that having some swap space is better for performance on typical systems. Maybe not on yours.

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...