Any benefit or detriment from removing a pagefile on an 8 GB RAM machine?
Solution 1:
TL;DR version: Let Windows handle your memory/pagefile settings. The people at MS have spent a lot more hours thinking about these issues than most of us sysadmins.
Many people seem to assume that Windows pushes data into the pagefile on demand, e.g.: something wants a lot of memory, there is not enough free RAM to fill the need, so Windows begins madly writing data from RAM to disk at the last minute so that it can free up RAM for the new demands.
This is incorrect. There's more going on under the hood. Generally speaking, Windows maintains a backing store, meaning that it wants to see everything that's in memory also on the disk somewhere. Now, when something comes along and demands a lot of memory, Windows can clear RAM very quickly, because that data is already on disk, ready to be paged back into RAM if it is called for. So it can be said that much of what's in the pagefile is also in RAM; the data was preemptively written to the pagefile to speed up new memory allocation demands.
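To see the two sizes in play here - physical RAM versus the total commit limit the backing store provides - here is a minimal C sketch (my illustration, not anything from the references) using the Win32 GlobalMemoryStatusEx call:

```c
/* Minimal sketch, my illustration: GlobalMemoryStatusEx reports both the
   physical RAM figures and the commit limit (RAM + pagefile), i.e. how
   far the backing store extends beyond RAM. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Physical RAM : %llu MB total, %llu MB available\n",
           ms.ullTotalPhys / (1024 * 1024), ms.ullAvailPhys / (1024 * 1024));
    printf("Commit limit : %llu MB total, %llu MB available (RAM + pagefile)\n",
           ms.ullTotalPageFile / (1024 * 1024), ms.ullAvailPageFile / (1024 * 1024));
    printf("Memory load  : %lu%%\n", ms.dwMemoryLoad);
    return 0;
}
```

On a machine with no pagefile, the commit limit collapses to roughly the size of RAM, which is exactly what makes the scenarios below painful.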
Describing the specific mechanisms involved would take many pages (see chapter 7 of Windows Internals, and note that a new edition will soon be available), but there are a few nice things to note. First, much of what's in RAM is intrinsically already on the disk - program code fetched from an executable file or a DLL, for example. So this doesn't need to be written to the pagefile; Windows can simply keep track of where the bits were originally fetched from. Second, Windows keeps track of which data in RAM is used most frequently, and so clears from RAM the data that has gone longest without being accessed.
Removing the pagefile entirely can cause more disk thrashing. Imagine a simple scenario where some app launches and demands 80% of existing RAM. This would force current executable code out of RAM - possibly even OS code. Now every time those other apps - or the OS itself (!!) - need access to that data, the OS must page it in from backing store on disk, leading to much thrashing, because without a pagefile to serve as backing store for transient data, the only things that can be paged out are executables and DLLs, which had inherent backing stores to start with.
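If you want to reproduce that scenario, here is a small hypothetical C demo - my sketch, not from any of the references - that commits and touches roughly 80% of physical RAM, so you can watch other processes start taking hard faults. Run it in a VM, not on a machine you care about:

```c
/* Hypothetical demo, my sketch: commit and touch ~80% of physical RAM so
   the memory manager must evict other processes' pages. With no pagefile,
   only file-backed pages (EXE/DLL code) can be evicted. Build as 64-bit. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    SIZE_T want = (SIZE_T)(ms.ullTotalPhys * 8 / 10);  /* 80% of RAM */
    char *block = VirtualAlloc(NULL, want, MEM_COMMIT | MEM_RESERVE,
                               PAGE_READWRITE);
    if (!block) {
        fprintf(stderr, "VirtualAlloc of %zu MB failed\n", want / (1024 * 1024));
        return 1;
    }

    /* Committing alone only charges commit; writing every page forces the
       memory manager to hand over physical frames, evicting other pages. */
    memset(block, 0xA5, want);
    printf("Holding %zu MB for 30 s - watch Pages Input/sec meanwhile.\n",
           want / (1024 * 1024));
    Sleep(30000);

    VirtualFree(block, 0, MEM_RELEASE);
    return 0;
}
```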
There are of course many resource/utilization scenarios. It is not impossible that you have one of the scenarios under which there would be no adverse effects from removing the pagefile, but these are in the minority. In most cases, removing or reducing the pagefile will lead to reduced performance under peak-resource-utilization scenarios.
Some references:
- Windows Internals book(s) (4th edition and 5th edition)
- Pushing the Limits of Windows: Physical Memory
- Pushing the Limits of Windows: Virtual Memory
- Inside the Windows Vista Kernel: Part 1
- Inside the Windows Vista Kernel: Part 2
- Inside the Windows Vista Kernel: Part 3
- Understanding Virtual Memory
- RAM, Virtual Memory, Pagefile and all that stuff (here's a longer version)
- The Out-of-Memory Syndrome, or: Why Do I Still Need a Pagefile?
dmo noted a recent Eric Lippert post which helps in understanding virtual memory (though it is less directly related to the question). I'm putting it here because I suspect some people won't scroll down to the other answers - but if you find it valuable, you owe dmo a vote, so use the link to get there!
Solution 2:
Eric Lippert recently wrote a blog entry describing how Windows manages memory. In short, the Windows memory model can be thought of as a disk store where RAM acts as a performance-enhancing cache.
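To make that mental model concrete, here is a toy C sketch - purely my illustration, with made-up sizes and a deliberately trivial FIFO eviction policy - in which every page permanently lives in a backing store and a handful of RAM frames act as the cache; a miss stands in for a hard page fault:

```c
/* Toy model, my illustration: pages "live" in a backing store, and a few
   RAM frames in front of it act as a cache. A miss here is the analogue
   of a hard page fault (a read from disk). */
#include <stdio.h>
#include <string.h>

#define NFRAMES 8               /* physical RAM frames acting as the cache */

static int frame_page[NFRAMES]; /* page held by each frame, -1 = empty */
static int next_evict = 0;      /* trivial FIFO replacement, for the sketch */

/* Returns 1 on a miss ("hard fault"), 0 on a hit (served from RAM). */
static int access_page(int page)
{
    for (int i = 0; i < NFRAMES; i++)
        if (frame_page[i] == page)
            return 0;
    frame_page[next_evict] = page;          /* "read from disk" into RAM */
    next_evict = (next_evict + 1) % NFRAMES;
    return 1;
}

int main(void)
{
    memset(frame_page, -1, sizeof frame_page);
    int faults = 0;
    for (int i = 0; i < 1000; i++) {
        /* 95% of accesses go to a hot set of 6 pages; 5% wander further. */
        int page = (i % 100 < 95) ? (i % 6) : (10 + i % 20);
        faults += access_page(page);
    }
    printf("1000 accesses, %d hard faults\n", faults);
    return 0;
}
```

Because the working set is smaller than the cache, nearly all accesses are hits - the same reason a healthy Windows box shows an almost-flat Pages Input/sec graph.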
Solution 3:
As I can see from the other answers, I am the only one who disabled the page file and never regretted it. Great :-)
Both at home and at work I have Vista 64-bit with 8 GB of RAM, and the page file is disabled on both. At work it's nothing unusual for me to have a few instances of Visual Studio 2008, Virtual PC with Windows XP, two instances of SQL Server and Internet Explorer 8 with a lot of tabs running together. I rarely reach 80% of memory.
I'm also using hybrid sleep every day (hibernation with sleep) without any problems.
I started experimenting with this when I had Windows XP with 2 GB of RAM, and I really saw the difference. The classic example: icons in Control Panel stopped showing up one by one and instead appeared all at once. Also, Firefox/Thunderbird startup speed increased dramatically. Everything started working immediately after I clicked on it. Unfortunately, 2 GB was too small for my application usage (Visual Studio 2008, Virtual PC and SQL Server), so I enabled the page file again.
But right now, with 8 GB, I never want to go back and enable the page file.
For those talking about extreme cases, take this one from my Windows XP days.
When you try to load a large pivot table from an SQL query, Excel 2000 increases its memory usage pretty fast.
With the page file disabled: you wait a little, then Excel blows up, and the system reclaims all its memory afterwards.
With the page file enabled: you wait some time, and by the time you notice something is wrong you can do almost nothing with your system. Your HDD is working like hell, and even if you somehow manage to run Task Manager (after a few minutes of waiting) and kill excel.exe, you must wait a minute or so until the system loads everything back in from the page file.
As I saw later, Excel 2003 handled the same pivot table without any problems with the page file disabled - so it was not a "dataset too large" problem.
So in my opinion, a disabled page file can sometimes even protect you from poorly written applications.
In short: if you are aware of your memory usage, you can safely disable it.
Edit: I just want to add that I installed Windows Vista SP2 without any problems.
Solution 4:
You may want to do some measurement to understand how your own system is using memory before making pagefile adjustments - or, if you still want to make adjustments, measure before and after them.
Perfmon is the tool for this, not Task Manager. A key counter is Memory - Pages Input/sec. This specifically graphs hard page faults, the ones where a read from disk is needed before a process can continue. Soft page faults (which are the majority of items graphed in the default Page Faults/sec counter; I recommend ignoring that counter!) aren't really an issue; they simply show items being read from RAM normally.
Perfmon graph http://g.imagehost.org/0383/perfmon-paging.png
Above is an example of a system with no worries, memory-wise. Very occasionally there is a spike of hard faults - these cannot be avoided, since hard disks are always larger than RAM. But the graph is largely flat at zero, so the OS is paging in from backing store very rarely.
If you are seeing a Memory - Pages Input/sec graph which is much spikier than this one, the right response is either to lower memory utilization (run fewer programs) or to add RAM. Changing your pagefile settings would not change the fact that more memory is being demanded from the system than it actually has.
A handy additional counter to monitor is PhysicalDisk - Avg. Disk Queue Length (all instances). This will show how much your changes impact disk usage itself. A well-behaved system will show this counter averaging 4 or less per spindle.
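If you would rather sample those two counters from code than from the Perfmon GUI, here is a minimal C sketch using the PDH API. The counter paths are the standard English names; polling the _Total disk instance (rather than each spindle) is my simplification here:

```c
/* Sketch of sampling both counters discussed above via PDH instead of
   the Perfmon GUI. Link with pdh.lib. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER pagesIn, diskQueue;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    /* PdhAddEnglishCounterW avoids problems with localized counter names. */
    PdhAddEnglishCounterW(query, L"\\Memory\\Pages Input/sec", 0, &pagesIn);
    PdhAddEnglishCounterW(query, L"\\PhysicalDisk(_Total)\\Avg. Disk Queue Length",
                          0, &diskQueue);

    PdhCollectQueryData(query);   /* rate counters need a first sample to diff against */
    for (int i = 0; i < 10; i++) {
        Sleep(1000);
        PdhCollectQueryData(query);

        PDH_FMT_COUNTERVALUE in, q;
        PdhGetFormattedCounterValue(pagesIn,   PDH_FMT_DOUBLE, NULL, &in);
        PdhGetFormattedCounterValue(diskQueue, PDH_FMT_DOUBLE, NULL, &q);
        printf("Pages Input/sec: %8.1f   Avg. Disk Queue Length: %6.2f\n",
               in.doubleValue, q.doubleValue);
    }
    PdhCloseQuery(query);
    return 0;
}
```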
Solution 5:
I've run my 8 GB Vista x64 box without a page file for years, without any problems.
Problems did arise when I really used my memory!
Three weeks ago, I began editing really large image files (~2 GB) in Photoshop. One editing session ate up all my memory. The problem: I was not able to save my work, since Photoshop needs more memory to save the file!
And since it was Photoshop itself that was eating up all the memory, I could not even free memory by closing other programs (well, I did, but it was too little to be of help).
All I could do was scrap my work, enable the page file and redo all of it - I lost a lot of work due to this and cannot recommend disabling your page file.
Yes, it will work great most of the time. But the moment it breaks it might be painful.