Used Memory not freed up after a SQL BULK Insert / BCP Export

It is normal behaviour for SQL Server to allocate as much memory as it can into its buffer pool; databases work best with a large buffer cache. If you want to change that behaviour, you can set the 'max server memory' setting. Some good background reading on that is here.
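For illustration, something along these lines (just a sketch, and it assumes you have permission to view the server configuration) shows where the setting currently sits:

    -- Check the current memory limits; values are in MB
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name IN ('max server memory (MB)', 'min server memory (MB)');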


If you really want the OS to take the memory back from SQL Server, take a big 20 GB file and copy it over the network. SQL Server will release memory as the OS needs it. While that is happening, though, I would watch a variety of performance counters and see how the performance of your BULK INSERT changes if you run it again during the copy or immediately afterwards.
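If PerfMon isn't handy, a rough equivalent from T-SQL (just a sketch; the Buffer Manager / Memory Manager counter names are the standard ones, and the instance-name prefix varies, hence the LIKE patterns):

    -- Watch how much memory SQL Server holds versus wants, and how long pages live in the buffer pool
    SELECT RTRIM([object_name]) AS [object], RTRIM(counter_name) AS counter, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE ([object_name] LIKE '%Buffer Manager%' AND counter_name = 'Page life expectancy')
       OR ([object_name] LIKE '%Memory Manager%' AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)'));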

If you want to do this manually, set a lower limit with SQL Server's 'max server memory' setting. The option is dynamic, so SQL Server will trim the buffer pool down to the new limit on its own; restarting the service just forces the release immediately. Now SQL Server won't use 28 GB even if it needs it, but that is artificially limiting SQL Server.
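A sketch of doing that with sp_configure (the 4096 MB cap is only an example value; pick a limit that leaves enough headroom for the OS and anything else on the box):

    -- 'max server memory' is an advanced option, so expose those first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Cap the buffer pool at 4 GB (example value)
    EXEC sp_configure 'max server memory (MB)', 4096;
    RECONFIGURE;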

What you seem to expect is more flexible behavior, where you can have free memory part of the time. For what purpose? Is this like shrinking a database file to free up disk space that you can't use for other purposes because the database file is going to grow again?

It's funny: if you type "why doesn't SQL Server" into a Google search, the most common auto-completion is "release memory."


This is unfortunately by design; see the post below. It does, however, give some instruction on how to control it.

https://serverfault.com/questions/251832/sql-server-bulk-insert-physical-memory-issue

Memory Allocation

Memory isn't freed up because SQL Server allocates memory much like a .NET application does. Since memory allocation is expensive, it holds on to what it has allocated unless the OS asks for it back. But fear not: if the OS wants the memory, it will get it, just as with a .NET application.
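If you want to see how much memory the process is actually holding, and whether the OS has signalled memory pressure, something like this (a sketch; it needs VIEW SERVER STATE permission) will show it:

    -- Working set of the SQL Server process and OS low-memory flags
    SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
           memory_utilization_percentage,
           process_physical_memory_low,
           process_virtual_memory_low
    FROM sys.dm_os_process_memory;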