How does SQL Server handle the data for a query where there is not enough room in the buffer cache?
Pages are read into memory as required; if no free memory is available, the oldest unmodified page is replaced with the incoming page.
This means that if you execute a query that requires more data than can fit in memory, many pages will live a very short life in the buffer cache, resulting in a lot of I/O.
You can see this effect by looking at the "Page Life Expectancy" counter in Windows Performance Monitor. Look at https://sqlperformance.com/2014/10/sql-performance/knee-jerk-page-life-expectancy for some great details about that counter.
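If you'd rather check that counter from T-SQL than from Performance Monitor, a sketch like the following reads it straight from the `sys.dm_os_performance_counters` DMV (on NUMA machines you'll also see per-node values under the Buffer Node object):

```sql
-- Page Life Expectancy, in seconds, from the Buffer Manager object.
SELECT [object_name],
       counter_name,
       cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';
```

As the linked article explains, judge this number against your own baseline rather than against a fixed threshold.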
In the comments, you asked specifically what happens when the result of a query is larger than the available buffer space. Take the simplest example, `select * from some_very_big_table;`, and assume the table is 32GB while `max server memory (MB)` is configured at 24GB. All 32GB of table data will be read into the buffer pool a page at a time; each page is latched, formatted into network packets, and sent across the wire. This happens page-by-page; you could have 300 such queries running at the same time, and assuming there was no blocking happening, the data for each query would be read into buffer space, a page at a time, and put onto the wire as fast as the client can request and consume it. Once all the data from a page has been sent onto the wire, the page is unlatched and will very quickly be replaced by some other page from disk.
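You can watch this churn happen. A rough sketch (the table name is a placeholder, and the `container_id = hobt_id` join covers in-row and row-overflow data, not LOB allocation units) shows how many buffer pages a given table currently occupies; run it repeatedly during the big scan and you'll see the number plateau well below the table's full size:

```sql
-- How much of one table is currently cached in the buffer pool.
SELECT o.name              AS object_name,
       COUNT(*)            AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions       AS p  ON au.container_id       = p.hobt_id
JOIN sys.objects          AS o  ON p.object_id           = o.object_id
WHERE bd.database_id = DB_ID()
  AND o.name = 'some_very_big_table'   -- placeholder table name
GROUP BY o.name;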
In the case of a more complex query, say one aggregating results from several tables, pages are pulled into memory exactly as above, as they are required by the query processor. If the query processor needs temporary workspace to calculate results, it knows that up front, when it compiles a plan for the query, and requests that workspace (memory) from SQLOS. SQLOS will at some point (assuming it doesn't time out) grant that memory to the query processor, at which point query processing resumes. If the query processor underestimates how much memory to ask SQLOS for, it may need to perform a "spill to disk" operation, where data is temporarily written to tempdb in an intermediate form. The spilled pages are unlatched once they have been written to tempdb, making room for other pages to be read into memory. Eventually the query processor returns to the data stored in tempdb, paging it back in, with latching, into buffer pages that are marked free.
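If you want to catch those spills as they happen, one way is an Extended Events session; a minimal sketch (session name is arbitrary, the `sort_warning` and `hash_warning` events are in the `sqlserver` package):

```sql
-- Record sort and hash spills to tempdb in a ring buffer target.
CREATE EVENT SESSION spill_watch ON SERVER
ADD EVENT sqlserver.sort_warning,
ADD EVENT sqlserver.hash_warning
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION spill_watch ON SERVER STATE = START;
```

On recent versions you'll also see spill warnings directly in the actual execution plan of the offending query.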
I'm undoubtedly missing a load of very technical details in the above summary, but I think that captures the essence of how SQL Server can process more data than can fit in memory.
I can't speak to what exactly your query would do in this scenario, but SQL Server has several options, depending on how much memory is needed.
- Data can "spill" to TempDB; this uses your disk
- Old pages can be pushed out of your buffer cache
- SQL Server can load some pages to buffer cache, use them, then rotate new pages in
The best way to find out what would happen is to create the scenario in a dev environment and find out.
My question is how does SQL Server handle a query that needs to pull more volume of data into the buffer cache than there is space available
To answer this specific part, let me tell you how this is managed. Pages are 8KB in size. When you run a query that requests a large data set and requires numerous pages to be brought into memory, SQL Server will not bring them all in at one go. It will locate the specific pages it needs, bring them into memory one 8KB page at a time, read the data out, and return your results, continuing like this for the rest of the query. Now suppose it faces a situation where memory is low; in that case, old pages will be flushed from the cache, as @Max pointed out. As you guessed correctly, this low memory can slow things down, since some time would be spent removing old pages. This is where the checkpoint and lazy writer processes come into the picture. The lazy writer is there to make sure some free memory is always available to bring new pages into the cache; when a low number of free buffers is encountered, it is triggered and creates free space so new pages can be brought in.
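You can see how busy the lazy writer is from the Buffer Manager counters; a quick sketch (sustained non-zero "Free list stalls/sec" means requests actually had to wait for a free page, which is the slowdown described above):

```sql
-- Lazy writer and checkpoint activity from the Buffer Manager object.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name IN ('Lazy writes/sec',
                       'Free list stalls/sec',
                       'Checkpoint pages/sec');
```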
EDIT
I get that, but the part that baffles me a bit is what happens if you are joining/filtering data and those results exceed the size of the cache.
The memory for joining and filtering is estimated before the query even runs. Suppose there really is a memory crunch and the memory needed to run the operation is not available; SQL Server will still grant the "required memory", which is
Required memory: Minimum memory needed to run sort and hash join. It is called required because a query would not start without this memory available. SQL Server uses this memory to create internal data structures to handle sort and hash join.
So at the very least the query will start running, but during execution it's quite likely the intermediate result will be spilled to tempdb, making it slow. I strongly suggest you read Understanding Query Memory Grant.
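The required/granted split described above is visible per running query in the `sys.dm_exec_query_memory_grants` DMV; a sketch:

```sql
-- Memory grant breakdown for currently executing queries.
SELECT session_id,
       required_memory_kb,   -- minimum needed to start sort/hash operators
       requested_memory_kb,  -- what the query processor asked SQLOS for
       granted_memory_kb,    -- what was actually granted
       max_used_memory_kb    -- peak memory used so far
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;
```

A query whose `max_used_memory_kb` pushes up against a grant that is much smaller than what it really needed is a good spill suspect.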