
Comprehensive Guide to the Windows Paging File

Posted by Zonk
from the overly-blessed-with-RAM dept.
busfahrer writes "Adrian's Rojak Pot has a nice article about the internals of the Windows paging file. It explains what a paging file is and lists the differences between a swapfile and a paging file. But first and foremost, a large part of the article deals with the various methods of optimizing the Windows paging file, thus yielding a notable performance gain for people who are not overly blessed with RAM."
  • Defrag first, man. (Score:4, Informative)

    by inertia@yahoo.com (156602) * on Friday March 25, 2005 @04:24PM (#12048765) Homepage Journal
    One of the biggest performance helps is to keep the paging file from being fragmented, and I'm not talking about three or four fragments.

    I've come across workstations where the paging file is in thousands of fragments. This happens when someone messes with the settings. For instance, they might increase the size of the paging file thinking it'll help to have more. Normally, it's not a bad idea to increase it but if the drive is heavily fragmented, Windows dutifully uses the fragments for the new space.

    The only way to fix it is to completely delete (deactivate) the page file, then do a defrag, then re-create the page file (several reboots involved).

    That's probably the best way to tune the page file. There, I saved you from having to take the time to read the article.
    • by UCFFool (832674) on Friday March 25, 2005 @04:29PM (#12048822)
      OMG, you mean you didn't read the article and possibly learn something?

      Honestly, I wonder why people take the high and mighty road. The number one problem with computers is people simply don't understand them. The number one way to solve this problem is to educate the user about these little facets of the OS.

      The article covers some very basic differences in plain English, and unlike the 'just do this and leave me alone' attitude, puts the user one step closer to a positive computer experience... a tough thing in today's virus/trojan/adware /. world.
    • by The Snowman (116231) * on Friday March 25, 2005 @04:29PM (#12048823) Homepage

      One of the biggest performance helps is to keep the paging file from being fragmented, and I'm not talking about three or four fragments.

      The best way to avoid fragmenting the swap file is a method I learned a long time ago, and the author mentions in his article but doesn't talk about much: keeping it on a separate partition. Sure, NTFS doesn't have a swap partition type like Linux does, but I keep a 2 gig partition with a fixed-size swap file on my WinXP box. Set the registry key to ignore "out of space" warnings for that drive, remove read privileges from everybody to that drive, and you basically have an invisible, un-fragmentable swap file that is invincible to user stupidity (I share my computer with my wife, so that last point is important. She does not have Administrator privileges on my box).

    • Before the defrag (Score:3, Interesting)

      by Tumbleweed (3706) *
      Make sure it's a fixed-size page file, not system-managed.

      By using 'Diskeeper,' you can also do some additional optimizations besides just defragging it; it's a nice app, though its warnings are overly dire, and it insists on having something stay in memory all the time, which is irritating.
    • by bonch (38532)
      Or use any of the various disk defragmenters out there that support defragmentation of system files like the page file, such as Diskeeper [google.com].

      Kicks the crap out of Windows' built-in defragger, and is much faster too.
    • > The only way to fix it is to completely delete (deactivate) the page file, then do a defrag, then re-create the page file (several reboots involved).

      Hardly, unless you're so short on space that you can't fit a copy of the largest fragment. Do a normal defrag, run pagedefrag (from sysinternals), and reboot. Once. Or get a defragger that will do the pagefile -- have it do the registry files and MFT while you're at it. O&O defrag's pretty good, but just about anything is better than the piece of
    • by jd142 (129673)
      Article is toast, but if you want to defrag your pagefile.sys, go to sysinternals.com and get their PageDefrag program. It has improved boot times on older computers running W2K by as much as 30 seconds. It also defrags some of the other system files.

      Sysinternals does good work.
  • by demonbug (309515) on Friday March 25, 2005 @04:24PM (#12048767) Journal
    Would Mr. File please pick up one of the white courtesy phones? Your OS needs to talk to you.
  • by LordNimon (85072) on Friday March 25, 2005 @04:25PM (#12048787)
    I can't read the article now because it's slashdotted, but is there a difference between the swap file and the paging file in Linux? Does Linux even have a paging file?
    • by DNS-and-BIND (461968) on Friday March 25, 2005 @04:34PM (#12048879) Homepage
      Solaris can use a swap partition or a swap file on disk. You can even add more swap space while Solaris is running (create a file with mkfile, then add it with swap -a). Had to do that a few times... running Solaris does not mean that your developers know how to create scalable software...
      • Same with Linux. A swap partition used to be preferred over a swap file, because the filesystem code slowed down swap operations when swap lived in a file, but about 18+ months ago I remember seeing a patch go into the kernel that made this no longer true.
    • by tijnbraun (226978) on Friday March 25, 2005 @04:36PM (#12048897)
      As far as I know it doesn't...

      Here [iu.edu] is some more info about paging and swapping under unix

      AFAIK a page is a group of memory addresses that are being changed/addressed at the same time.

      But I could be mistaken
    • They're the same for the paging file. Linux usually uses a partition for swap memory, though. The rationale is that it won't get fragmented and it doesn't need to deal with the VFS subsystem, so it's a little faster.
    • by TheFlyingGoat (161967) on Friday March 25, 2005 @04:44PM (#12048972) Homepage Journal
      I'm not sure if Linux has a paging file, but I fail to see why the linux swapping can't be done preemptively as well. If a program loses focus, swap the memory used by it, but retain the active copy in RAM. If another program needs the RAM, release it. If the program comes back into focus, clear the swap copy. Seems like it would give the best of both worlds.

      Article Text:
      We have all been using the terms swapfile and paging file interchangeably. Even Microsoft invariably refers to the paging file as the swapfile and vice versa. However, the swapfile and paging file are two different entities. Although both are used to create virtual memory, there are subtle differences between the two.

      The main difference lies in their names. Swapfiles operate by swapping entire processes from system memory into the swapfile. This immediately frees up memory for other applications to use.

      In contrast, paging files function by moving "pages" of a program from system memory into the paging file. These pages are 4KB in size. The entire program does not get swapped wholesale into the paging file.

      While swapping occurs when there is heavy demand on the system memory, paging can occur preemptively. This means that the operating system can page out parts of a program when it is minimized or left idle for some time. The memory used by the paged-out portions are not immediately released for use by other applications. Instead, they are kept on standby.

      If the paged-out application is reactivated, it can instantly access the paged-out parts (which are still stored in system memory). But if another application requests the memory space, then the system memory held by the paged-out data is released for its use. As you can see, this is really quite different from the way a swapfile works.
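The 4KB page size the quoted article mentions is easy to check on a Unix-like system; a small sketch using getconf (assuming a typical x86-style MMU):

```shell
# Print the MMU page size in bytes; on common x86 hardware this is 4096,
# matching the 4KB figure in the article text above.
getconf PAGESIZE
```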
      • I'm not sure if Linux has a paging file, but I fail to see why the linux swapping can't be done preemptively as well. If a program loses focus, swap the memory used by it, but retain the active copy in RAM. If another program needs the RAM, release it. If the program comes back into focus, clear the swap copy. Seems like it would give the best of both worlds.

        If you're talking about losing focus in the window manager sense, it's not possible under linux without some very specific hacks:

        If the window manag
      • I'm not sure if Linux has a paging file, but I fail to see why the linux swapping can't be done preemptively as well. If a program loses focus, swap the memory used by it, but retain the active copy in RAM.

        I don't think you have any idea how the Linux VM works. There is no "focus" because there are no windows. There are processes, and how they are run depends on the scheduler.

        Memory management of Windows sucks. There is no question about it. If I don't have a gigabyte page file, I run out of memory (accord

      • by bheading (467684) on Friday March 25, 2005 @08:35PM (#12050974)
        I am not sure about the article's validity when it claims that swapfiles are distinct from paging files. There are other claims, such as the implication that Microsoft invented the term "virtual memory", that are also rather misleading.

        It's not a matter of swapping "pre-emptively". The whole business is invisible to user space and the locking is essentially done by the kernel. The CPU hardware (ie the MMU) basically watches every single memory access when the CPU is in the appropriate mode (ie protected mode on an x86). It has a list of mappings of address spaces in its own small internal page table.

        What happens is that every time you access a region of virtual memory (for example anywhere in user space), the MMU looks up the address in its own list of pages. If the associated region is not mapped by the MMU, a page fault occurs, which is essentially an interrupt that gets trapped by the operating system. The operating system checks to see if it has a mapping for that address.

        If it does, it checks to see if that mapping is either resident, or a block which has been paged out to disk (ie to the paging file). If it has been paged out to disk, another block of memory gets paged out and the requested region is paged back in.

        The MMU's cache of pages gets cleared on most CPUs whenever there is a process context switch (but not a thread context switch). This is one of the big reasons why threads are less expensive than processes: the MMU's cache of pages has to be repopulated.

        The algorithm used to select which region of memory gets paged out is where the real trickery is. Obviously if you are low on memory and switch repeatedly between two busy processes, you will be constantly thrashing those pages on and off the disk.
      • Yes, Linux can make use of a paging file. You can create a large file and use mkswap to initialize it as swap space.

        You can then enable this file with swapon. You can make an fstab entry to enable it as swap space at boot.

        Please see this URL:
        http://enterprise.linux.com/article.pl?sid=05/03/02/2250257&tid=129&tid=42/ [linux.com]
    • Linux will page and swap to swap files. Usually people use a partition for swap, but occasionally it is useful to add swap space via a file - such as when one is running low on swap.

      Linux allows multiple swap files.

      In the old days on Unix-like systems, it was necessary to have more swap than RAM - as each allocated page of RAM was also allocated a page of swap. That is no longer the case.

      At my current job, on lab machines I usually have more RAM than disk space. If I allocate any swap space, I will all
    • by Alioth (221270) <no@spam> on Friday March 25, 2005 @05:01PM (#12049149) Journal
      Well, since you've had four incorrect and/or unhelpful responses so far, let me shed some light:

      Linux only has a paging file (it's still called swap space though). This can either be a hard drive partition, or a regular file.

      To make it as a regular file:

      dd if=/dev/zero of=some_file bs=1M count=however_big_you_need_it_in_megs

      Then:
      mkswap some_file

      Then:
      swapon some_file

      You don't need to reboot, and you can add/remove these files at will using swapon/swapoff and the normal filesystem tools.

      The 'swappiness' of Linux can also be tuned: since kernel 2.6.0 there has been a proc file, /proc/sys/vm/swappiness. This can be set to a value from 0 (try to never swap) to 100 (aggressively write out pages to disk). By default, it is set to 60. To change the swappiness to, say, 40:

      echo 40 >/proc/sys/vm/swappiness

      Most 2.6-based distros have some GUI tool that can tweak parameters like this (Fedora certainly does).
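As a side note to the parent's instructions: reading the current swappiness back requires no privileges, so it is an easy sanity check (writing it, as shown above, needs root):

```shell
# Print the kernel's current vm.swappiness value; 60 is the 2.6 default.
cat /proc/sys/vm/swappiness
```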
  • well.. (Score:4, Funny)

    by Anonymous Coward on Friday March 25, 2005 @04:26PM (#12048793)
    They might want to adjust the paging file on their web server a bit...
  • by cmburns69 (169686) on Friday March 25, 2005 @04:27PM (#12048796) Homepage Journal
    Given the speed of the slashdotting, I'd say he needs to add more ram to his box, and stop relying on the paging file for speed...
  • by Anonymous Coward on Friday March 25, 2005 @04:30PM (#12048830)
    People are going to argue this a lot, because geeks like to fiddle with things. Clearly, the thing isn't optimized until I have changed the settings !!oneoneone!!!

    But the best way to optimize the paging file in Win2000 (and later) is to leave it alone. Windows will manage the paging system just fine on its own.
    • Not really true. On application servers, it's best to set a static pagefile. Heck, on SQL boxes it's sometimes best to not even have a pagefile... Letting Windows manage your pagefile decreases performance, as it leaves something else for the CPU to do and something else for the disks to do. Put a page file on another disk (or another partition if you don't have another disk). Make the pagefile 1.5x the amount of RAM you have (up to 4 GB anyway). I've found this works well and gives the best performance for
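The parent's 1.5x-capped-at-4GB rule amounts to simple arithmetic; here is a quick shell sketch (the RAM figure is an assumed example, not a recommendation):

```shell
# Hypothetical sizing per the comment's rule: 1.5x RAM, capped at 4 GB.
ram_mb=2048                           # assumed installed RAM, in MB
pagefile_mb=$(( ram_mb * 3 / 2 ))     # 1.5x RAM
if [ "$pagefile_mb" -gt 4096 ]; then  # cap at 4096 MB (4 GB)
  pagefile_mb=4096
fi
echo "$pagefile_mb"                   # prints 3072 for 2048 MB of RAM
```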
    • But the best way to optimize the paging file in Win2000 (and later) is to leave it alone.

      Sorry, but that's not good advice. There are real issues with fragmentation on NTFS file systems. You can create an empty NTFS partition, copy a few files to it, and you can be sure that if the files are larger than 4KB, those files will be fragmented. And if they are of substantial size, the files can be split into dozens of pieces. Moreover, Windows provides no native ability to defragment metadata on any partiti

  • FreeRAM (Score:5, Informative)

    by yuriismaster (776296) <tubaswimmerNO@SPAMgmail.com> on Friday March 25, 2005 @04:30PM (#12048832) Homepage
    http://www.bysoft.com/freeram.asp

    A free application that you can use to 'pre-page' out pages right before loading up your application. What it does is hog as much RAM as it can, forcing the OS to page out any unnecessary applications.

    I've seen the standard Explorer + lsass + csrss + all the svchosts memory footprint go from 80-ish megs to 20. Running this before your game will allow quick load times and quicker performance.
  • by Tenebrious1 (530949) on Friday March 25, 2005 @04:32PM (#12048856) Homepage
    ... for people who are not overly blessed with RAM.

    You mean those whose PHBs said the "minimum" requirements were good enough.

  • And it's called a paging file because if they called it swap, they couldn't get the patent!

    Or maybe they will be fighting, trying to convince the world that they came up with the idea of swap space!
  • by Entropius (188861) on Friday March 25, 2005 @04:36PM (#12048893)
    It seems like Windows' treatment of virtual memory is inefficient. Why use a file for swap -- incurring NTFS overhead and fragmentation -- when you could use a swap partition (like Linux) without exposing the user to any additional complexity (which is the usual tradeoff for efficiency gains)?

    I had to upgrade my laptop from the stock 512 to 1.25GB just because WinXP thrashed the paging file so much. Granted, I multitask like a demon, but it shouldn't take 30 seconds to swap Firefox back in.
    • I think this is a fine idea for a server with competent sysadmins, but not so good for an end user...

      Say a rule of thumb is your page file is 2x the size of your physical memory. My mom takes her computer to Best Buy, and she goes from 128MB of memory to 512MB... typical situation. Now her Windows pagefile will grow accordingly. Or if I manage it for her, I would manually change it. But what if I had to tell her, "well, now we have to remove everything on your computer to resize your partitions"?
        But what if I had to tell her, "well, now we have to remove everything on your computer to resize your partitions"?

        I would chide you for relying on Microsoft's bundled utilities for managing partitions. They're unnecessarily inflexible.

        PartitionMagic has been capable of "on-the-fly" partition resizing for ten years now.
        But what if I had to tell her, "well, now we have to remove everything on your computer to resize your partitions"?

        There are several (both commercial and free) solutions for re-sizing NTFS partitions with no data loss. I'm partial to Partition Expert myself and can only hope that Acronis will release it as a stand-alone Linux app some day.

      • Except, why would you do that? If she could manage with 128MB, then there's no point in adding more swap when you get more RAM.

        This is something that happened to me on Linux. I found that on a computer with 1 GB RAM, anything more than 256-512MB of swap is almost certainly excessive. Why? Because with so much memory, you should never really be using much swap anyway. The only time you end up using that much is when some program goes mad and decides to allocate all the available memory. And then you'll find that having a
      • I don't understand why you would need to increase the size of the page file when you had just increased the RAM.

        The size of the page file needs to be the amount of virtual memory you need available minus the amount of physical RAM. So when you increase the RAM, it's appropriate to decrease the size of the page file, unless you are simultaneously planning on using more total virtual memory.

        In the Ancient Times, there were systems where the total virtual memory size was equal to the size of the page file.
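The sizing logic in the comment above is just subtraction; a sketch with assumed numbers:

```shell
# Page file size = total virtual memory you expect to need minus physical
# RAM; both figures below are assumptions for illustration only.
needed_vm_mb=3072    # assumed peak virtual-memory demand, in MB
ram_mb=2048          # assumed installed physical RAM, in MB
pagefile_mb=$(( needed_vm_mb - ram_mb ))
echo "$pagefile_mb"  # prints 1024
```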
    • As an addendum:

      My desktop (dual 7200 rpm hard drives) is slower than my laptop (single 4200 rpm drive) in every way other than the drive. The slight delay as things are swapped never really bothers me there, but the swap latency on a slow laptop drive is nasty.

    • You can fix this situation by setting up a separate partition (say.. 2GB?) for your swapfile and setting the swapfile to a fixed size within the partition.

      Granted, it would be really nice if MS would do that automatically, but it's still a cheaper solution than adding 750mb of laptop memory. :)

      I'm not sure, but running defrag on the partition weekly might help as well. I'd really like to see MS do automatic drive defrag in the background when the computer isn't being used, or is being used minimally. Th
    • it shouldn't take 30 seconds to swap Firefox back in.

      Or Outlook, or any other application for that matter. Yes, my Windows users have that same problem too. They expect me to have a solution as well. I can't see any solution other than switching them over to Linux or OS X. Some I have already moved to OS X and they are happy. Some don't understand why they need to be changed, for some reason they think I can fix it! Are there any ways to stop Windows from running like a lame dog? Of course, this is probab

  • thus yielding a notable performance gain for people who are not overly blessed with RAM.

    She has money, she has a title, she has huge... tracts of RAM!

  • WTF (Score:5, Funny)

    by OverlordQ (264228) on Friday March 25, 2005 @04:39PM (#12048920) Journal
    It's 44 pages!! I mean, heck, the average /. reader doesn't read one-page news stories, but a 44-page article!? Not in this lifetime.
  • by LMCBoy (185365) * on Friday March 25, 2005 @04:40PM (#12048927) Homepage Journal
    c'mon, who gives a fsck?
  • by Anonymous Coward
    For God's sake, put a Printer Friendly link on your site so I can print the whole article. Having to read lots of text on my 14" EGA monitor makes my head asplode!
  • by bigtallmofo (695287) on Friday March 25, 2005 @04:42PM (#12048952)
    The site is so slashdotted that it took forever to load the first page which basically has nothing but useless history of swap files on it. Even the mirrordot doesn't appear to be working. Anyway, one sentence on that first page made me no longer interested in reading the article:

    Whenever the operating system has enough memory, it doesn't usually use virtual memory.

    This flies in the face of real world experience. You can have 4 gigs of RAM and nothing but Windows 2000 running and your OS will still use the swap file frequently. Don't believe me? Run Performance Monitor and monitor Memory, Pages/Sec and just click on a few things and you'll see that I'm correct.

    • by RatBastard (949) on Friday March 25, 2005 @04:59PM (#12049127) Homepage
      Turn off virtual memory and see how many MS apps suddenly stop working at all. And we're not talking memory pigs, either. Some screen savers don't run without virtual memory running, no matter how much RAM you have. It's really stupid.
      • In Windows, shared memory is done the same way that memory mapped files are. If you just want to share memory without mapping a file, you have to map a section of the page file. If there is no page file, this doesn't work. From CreateFileMapping [microsoft.com]:
        If hFile is INVALID_HANDLE_VALUE, the calling process must also specify a mapping object size in the dwMaximumSizeHigh and dwMaximumSizeLow parameters. In this case, CreateFileMapping creates a file mapping object of the specified size backed by the operating-system paging file rather than by a named file in the file system.
        Here [microsoft.com]'s an example of creating shared memory between two processes using this method.

        I don't know why screen savers would stop working, but I bet the developers never planned for the creation of shared memory failing.
  • IIRC the use of the term "swap" is incorrect in the Linux world too. Linux pages out to swap space; it doesn't swap out whole apps.

    In fact, is there any major OS that still swaps?
  • by Peaker (72084) <gnupeaker@@@yahoo...com> on Friday March 25, 2005 @04:45PM (#12048988) Homepage
    The solution they came up with was to use some space on the hard disk as extra RAM. Although the hard disk is much slower than RAM, it is also much cheaper and users always have a lot more hard disk space than RAM. So, Windows was designed to create this pseudo-RAM or in Microsoft's terms - Virtual Memory, to make up for the shortfall in RAM when running memory-intensive programs.

    Amazing how they manage to turn everything around as though Microsoft invented the world of computing...

    Virtual memory, the term and the implementation have been around long before Microsoft came into existence.
    • by pchan- (118053) on Friday March 25, 2005 @06:09PM (#12049892) Journal
      In fact, what he is describing is NOT virtual memory, but demand paging. At this point, I stopped reading the article.

      Virtual memory is the mapping of physical memory pages to a "virtual" memory address using hardware translation tables. This is done so that every application lives in its own private memory space, and cannot interfere with other applications' memory (or the OS's). Basically, this technique of memory isolation keeps user apps from crashing the system or other applications. Virtual memory support was added to x86 with the release of the 80386 processor and 32-bit flat memory addressing. Of course, virtual memory had been available for years before this on such OSes as DEC's VMS (the Virtual Memory System), IBM's MVS, UNIX, and a bunch of other systems I'm too young to know about.

      The misnaming of demand paging was actually started by Apple (continuing their tradition of calling the box a "CPU") for their swap file management (long before MacOS's support for VM in OSX).
      • Virtual memory is indeed the page tables that allow for virtual memory -> physical memory mappings that can differ per process for isolation purposes.

        However, those same virtual tables are used to mark pages as not-present to indicate they are stored on some external storage or do not exist at all. This is also known as a "Virtual memory" feature, so it is not inaccurate to say one of the purposes of virtual memory is to virtualize more memory than is physically available.

        Also, the 80286 already added the prot
  • by Anonymous Coward
    To read the start of the article, you'd think there were no non-Microsoft operating systems when DOS came out and that Microsoft invented the concept of virtual memory.

    I couldn't read any more after that.

    -Disgusted AC in PA

  • by TeddyR (4176) on Friday March 25, 2005 @04:47PM (#12048995) Homepage Journal
    Can't get to the listed site since it's totally /.ed, but here are some other interesting ones re: XP, 98, ME and page files:

    Virtual Memory in Windows XP [aumha.org] http://aumha.org/win5/a/xpvm.htm [aumha.org]

    Windows 98 & WinME Memory Management [aumha.org] http://aumha.org/win4/a/memmgmt.htm [aumha.org]

    and there is
    How can I optimize the Windows 2000/XP/2003 virtual memory (Pagefile)? [petri.co.il] http://www.petri.co.il/pagefile_optimization.htm [petri.co.il]

  • by DrZiplok (1213) on Friday March 25, 2005 @04:50PM (#12049025)
    There's a good reason why people don't write about VM systems often; the combination of writing skill and the ability to understand modern VM is extremely rare.

    Take most, if not all, of what the article discusses with a large grain of salt. Everything, from his history (did Microsoft invent demand paging? Hardly) to his terminology is flawed.

    Just reading the first 40 comments or so here reveals that VM remains one of those "black magic" areas, where reason is overcome by superstition and people will assert the most ridiculous, irrational and unsupportable things. Regrettably, the contents of this article do nothing to improve the situation.

    = Mike
    • i know next to nothing about VM systems, and even *i* can tell we have problems, by this sentence:

      Even the fastest hard disk is currently over 70X slower than the dual-channel PC2700 DDR memory common in many computers. Let's not even start comparing the hard disk with faster RAM solutions like PC3200 DDR memory or...

      riiight. excuse me while i laugh and read something else.

  • I have over a gig of RAM but my laptop's HD is rather slow. How do I make it so that my computer will not use one of these files? I doubt there are many times when I am using all of my RAM, so I'd want to keep the HD use as little as possible.
  • Virtual Memory (Score:2, Interesting)

    by MCZapf (218870)
    The article is a little misleading. Virtual memory is not just space on a hard disk that you use when you run out of physical memory. Virtual memory is the practice of giving each process its own virtual address space that is independent of the physical address space. Doing this allows the OS to send some pages of memory to disk if it needs to, but the OS is still using the mechanism of virtual memory even if there is no hard drive at all. Each process's memory space is independent of the others'.
  • by Henk Poley (308046) on Friday March 25, 2005 @05:04PM (#12049176) Homepage
    It explains what a paging file is and lists the differences between a swapfile and a paging file.

    There is no difference. He says that swapfiles swap whole processes. I beg your pardon? Swapping whole processes hasn't been the case since 'multiprogramming' on third-generation computers (around 1965-1980).

    btw, a good program to defrag your Windows page file is PageDefrag [sysinternals.com]

    Together with Dirms & Buzzsaw [dirms.com], you can keep your disk defragmented for free. Buzzsaw in particular is nice since it will defragment recently accessed files in the background.
  • by Anonymous Coward on Friday March 25, 2005 @05:10PM (#12049223)
    Well, they tried, but I think they really missed some very commonly-held misconceptions.

    I've just finished a project involving reconstructing process images from physical memory, including pages from swap, if available, so I've got a pretty good understanding of this stuff at a very low level.

    Misconception 1: Swap-file usage = performance degradation

    Yes, it is slower (usually by 3 orders of magnitude, not 1) to access a page (frequently 4K) from disk instead of memory. HOWEVER, efficiency dictates that all available RAM be utilized as soon as possible.

    For example, in addition to running processes, we also use volatile memory storage to cache file data. Clearly, we want to cache as much as possible. Page replacement policies then become important to determine how much swap space to use. But usually it is much greater than 0, because we've got process image pages that are less frequently used than a lot of file cache pages. So we've gotta remember that data, but we don't wanna waste fast RAM on it.

    In other words, isn't it great that we can swap out pages from an unused process to make room for frequently accessed file data? Regardless of how much memory we have, that's a Good Thing.

    Misconception 2: Virtual memory = disk space

    Virtual memory is a mechanism for translating program-visible addresses to arbitrary storage locations transparently. This doesn't necessarily mean that disk space is used for swapping memory; it means, for example, that 5 processes can simultaneously be accessing address 0x5000, but the actual storage location is different for each. If the system (usually the page address translation facility on the CPU) determines that this address isn't at some location in volatile storage, it will bring in that memory from swap space and possibly page out some other data. This is what the article is generally talking about.

    I've seen some other questions about pages. There are a couple of reasons for treating memory in page-size chunks, where a page is usually in the 1-8K range (4K for x86). First, the page address translation hardware needs to keep data on translations. It must do this in chunks larger than one byte, since keeping translation data for every byte would require way too much storage. Disk I/O and other I/O is frequently done on a page-by-page basis for that reason as well as for the sake of performance.

    Well, I rambled enough. Just wanted to clear a couple things up.
  • by SailFly (560133) on Friday March 25, 2005 @05:11PM (#12049231) Homepage
    I found this link to the article: Swapfile_Optimization [adriansrojakpot.com]
  • by snorklewacker (836663) on Friday March 25, 2005 @05:17PM (#12049286)
    Putting http://text.burstnet.com/* into adblock makes for a slightly less annoying experience, but there's also the fact that it's 44 freaking pages long. Probably getting paid by the site hit or something.

    Do yourself a favor, give this content-lite article a miss. Make a small partition with ntfsresize, put a fixed pagefile on that partition alone. Works on every single version of windows and it's zero maintenance. Tah-dah.
  • by couch_warrior (718752) on Friday March 25, 2005 @05:22PM (#12049330)
    It is always amusing to me to read how M$ "invented" technologies that were in common use by other operating systems while Bill Gates was still wearing diapers. Both paging and swapping were used on IBM mainframes dating back to the mid-1960s. What windoze (and linux for that matter) could REALLY use is the ability to deterministically dedicate portions of system resources to particular processes.
    Back when a 1 MIP, 1MB machine cost $1M, operators became highly skilled at managing workloads. Today everyone just throws oodles of RAM and disk at servers and lets the chips fall where they may, so to speak. It wouldn't be a bad idea to actually put a little thought into matching workloads with machine resources, and pro-actively matching them up by deliberate choice (for example: our company is running a prime-time ad at 9pm that features our URL; maybe that's a good time to postpone the file backup that normally runs then). Chaos theory is not always a good load balancer. But what am I saying? That's as outrageous as asking kids to put their money in the bank instead of buying video games...
  • by GAATTC (870216) on Friday March 25, 2005 @05:58PM (#12049766)
    Who needs a page file when you have a 10.00 GHz AMD Athlon, a 2000 MB DIMM, and a 30000 GB IDE Hard Disk http://www.amazon.com/exec/obidos/tg/detail/-/B00022ACYK [amazon.com] I quote from a review: "I was a little skeptical at first on how this thing would perform. However, when I installed and run the Seti@home program it started instantly finding alien signals, which where corrected, cleaned up and translated. I am now listening and talking to the supreme galactic defense minister about Earths surrender. Apparently this computer is not only tapping into the sun for power but also into the mysterious dark energy and tearing the universe apart. Just comes to show how bugs always show up in technology when you least expect it."
  • by bartash (93498) on Friday March 25, 2005 @06:58PM (#12050381)
    Larry Osterman writes in his blog that the author fundamentally doesn't understand what he's writing about [msdn.com]. Mr. Osterman has worked at Microsoft for 20 years. How old is the author of the article?
  • by abrax5 (545966) on Friday March 25, 2005 @07:44PM (#12050682)
    You could use SwapFS from: http://www.acc.umu.se/~bosse/

    Then you can put your windows pagefile into linux swap partitions. :-)

  • by jilles (20976) on Friday March 25, 2005 @08:16PM (#12050886) Homepage
    If you do have the memory, disabling the page file is the best thing you can do performance-wise. In theory this is a bad idea; in practice it works very well.

    I have 1GB of memory. This is more than adequate for browsing, games, development, Photoshop, etc. Two years ago, after discovering that Windows was still swapping like crazy with only 300MB in use (and yes, I had applied all the popular registry hacks that are supposed to prevent that), I disabled the page file. The immediate result is that applications become much more responsive. Suddenly multitasking becomes easy and fast.

    Normally when you work with one application for a while, all other applications get swapped to disk. It doesn't matter that you have 700MB of unused, readily available RAM. So when you try to alt-tab to them, Windows spends a couple of seconds moving stuff back into memory. This is very annoying and totally unnecessary. The problem only gets worse if you run some memory-intensive programs, because Windows then swaps even more aggressively.

    Disabling the page file fixes this problem. The only disadvantage is that when you need more than 1GB, it isn't there. That is typically the point where things would get very slow anyway due to swapping; in any case, it doesn't happen very often and is easily resolved by closing some applications. All games are optimized to run well with 256-512 MB; most don't use more than that even if it is available. Office applications and other desktop software rarely use more than 100MB. Photoshop can push the limits, but unless you are doing extreme high-resolution photography work with it, you will not run into any problems with 1GB. It's actually quite hard to run out of memory. Most things that are infamous for memory usage (MS Flight Simulator, Doom 3, Photoshop, VMware, etc.) all run without problems and without swapping.

    If you think about it, swapping is a really lousy solution unless you expect to run out of memory. Disk is many times slower than RAM. The reason you open programs is that you want them ready for action; swapping them to disk is therefore undesirable. The only reason it would be desirable is if the total memory used by all of your running programs exceeds the memory you have. So if you have 256 MB, a swap file is a nice poor man's solution to not having enough memory. If you have 1GB, you shouldn't have that problem (and if you do, buy another GB).
    • by pe1chl (90186)
      If you think about it, swapping is a really lousy solution unless you expect to run out of memory. Disk is many times slower than ram. The reason that you open programs is that you want them ready for action. Swapping them to disk is therefore undesirable.

      This is your situation, your opinion, and the base of your success.
      Others may have different situations, and swapping may be better for them. On my Linux box I typically have 160-200 processes running, and I don't need them all to be ready for action. Th
