Max Memory settings on Multi-Instance SQL Server 2008 R2 Cluster
You should absolutely make the most of the hardware when you are in the optimal config, and adjust when you are in maintenance mode. And yes, you will have an issue while both (or all four?) instances are active on the same node. Since a failover induces a service start on the now-active node, you can adjust the max memory of each instance in that event using a startup procedure. I blogged about this here, but for a different reason (failing over to a node with a different amount of memory):
- https://sqlblog.org/2009/09/18/managing-multi-instance-cluster-failovers-with-different-hardware
Basically, you just need to check whether both instances are on the same node (this will require a linked server to be set up in both directions; a sketch of that setup follows the procedure), and adjust accordingly. A very quick and completely untested example, based on my blog post and assuming there is currently only one instance on each node (the question is a bit ambiguous about whether you have 2 instances in total or 4):
CREATE PROCEDURE dbo.OptimizeInstanceMemory
AS
BEGIN
  SET NOCOUNT ON;

  DECLARE
    @thisNode      NVARCHAR(255) = CONVERT(NVARCHAR(255),
                      SERVERPROPERTY(N'ComputerNamePhysicalNetBIOS')),
    @otherNode     NVARCHAR(255),
    @optimalMemory INT = 12288, -- 12 GB
    @sql           NVARCHAR(MAX);

  -- ask the other instance which physical node it is currently running on
  SET @sql = N'SELECT @OtherNode = CONVERT(NVARCHAR(255),
    SERVERPROPERTY(N''ComputerNamePhysicalNetBIOS''));';

  EXEC [SERVER\INSTANCE].master..sp_executesql @sql,
    N'@OtherNode NVARCHAR(255) OUTPUT', @otherNode OUTPUT;

  IF @thisNode = @otherNode
  BEGIN -- we're on the same node, let's make everyone happy
    SET @optimalMemory = 6144;
  END

  -- max server memory is an advanced option; this assumes
  -- show advanced options is already enabled on both instances
  SET @sql = N'EXEC sp_configure N''max server memory'', @om;
    RECONFIGURE WITH OVERRIDE;';

  EXEC master..sp_executesql @sql, N'@om INT', @optimalMemory;
  EXEC [SERVER\INSTANCE].master..sp_executesql @sql, N'@om INT', @optimalMemory;
END
GO

-- startup procedures must live in master
EXEC [master].dbo.sp_procoption
  N'dbo.OptimizeInstanceMemory', 'startup', 'true';
Of course, create it again on the other instance, swapping the linked server name used.
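For completeness, here is a rough (and equally untested) sketch of the linked server setup the procedure assumes, using the same placeholder name [SERVER\INSTANCE] for the other instance; run the mirror image on the other instance, pointing back here:

EXEC master.dbo.sp_addlinkedserver
  @server     = N'SERVER\INSTANCE',
  @srvproduct = N'SQL Server';

-- the startup procedure calls sp_executesql remotely, so RPC must be allowed
EXEC master.dbo.sp_serveroption
  @server   = N'SERVER\INSTANCE',
  @optname  = N'rpc out',
  @optvalue = N'true';

-- self-mapped logins for simplicity; adjust to your security requirements
EXEC master.dbo.sp_addlinkedsrvlogin
  @rmtsrvname = N'SERVER\INSTANCE',
  @useself    = N'true';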
This gets a little more complex if you have to adjust depending on whether you are sharing the current node with 1, 2 or 3 other instances.
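As a very rough, untested sketch of that direction (assuming one linked server is defined for each of the other instances, and a 48 GB node budget as an arbitrary example), you could count how many instances currently share your node and divide the budget accordingly:

DECLARE @thisNode NVARCHAR(255) = CONVERT(NVARCHAR(255),
          SERVERPROPERTY(N'ComputerNamePhysicalNetBIOS')),
        @node     NVARCHAR(255),
        @sharing  INT = 1, -- count ourselves
        @srv      SYSNAME,
        @exec     NVARCHAR(600),
        @sql      NVARCHAR(MAX) = N'SELECT @node = CONVERT(NVARCHAR(255),
          SERVERPROPERTY(N''ComputerNamePhysicalNetBIOS''));';

-- ask every linked server (one per other instance) where it is running
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
  SELECT name FROM sys.servers WHERE is_linked = 1;
OPEN c;
FETCH NEXT FROM c INTO @srv;
WHILE @@FETCH_STATUS = 0
BEGIN
  SET @exec = QUOTENAME(@srv) + N'.master.dbo.sp_executesql';
  EXEC @exec @sql, N'@node NVARCHAR(255) OUTPUT', @node OUTPUT;
  IF @node = @thisNode SET @sharing += 1;
  FETCH NEXT FROM c INTO @srv;
END
CLOSE c;
DEALLOCATE c;

-- split the assumed 48 GB node budget evenly among co-located instances
DECLARE @om INT = 49152 / @sharing;
EXEC sp_configure N'max server memory', @om;
RECONFIGURE WITH OVERRIDE;

As in the procedure above, you would also want to push the recalculated value to the instances sharing the node, since only the instance that just restarted runs its startup procedure.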
Note that this will cause other side effects, such as clearing the plan cache (at least on whichever instance didn't just restart or fail over; on the one that did, the plan cache would be empty anyway), but these are arguably better than leaving both instances to assume they still have 12 GB of memory to play with - there will be a lot of thrashing if they're both heavily used.
You may also want to consider other options, such as instance-wide max degree of parallelism, NUMA/CPU affinity, etc., depending on how sensitive the system is to the amount of resources available.
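For example, the same startup procedure approach could lower the instance-wide max degree of parallelism when the node is shared (the value 4 here is purely an assumption):

EXEC sp_configure N'show advanced options', 1; -- both options below are advanced
RECONFIGURE;
EXEC sp_configure N'max degree of parallelism', 4;
RECONFIGURE WITH OVERRIDE;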
Community Wiki answer collecting partial answers originally left as comments on the question
...(using page file if necessary).
Jon Seigel: Yes, it will, but you don't want to do that. The best advice I've heard (also from Brent) is to set the maximums for the best-case scenario, and set the minimums for the worst-case scenario.
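To illustrate that advice with assumed numbers (12 GB max for when an instance has its node to itself, 4 GB min for when every instance lands on the same node; both are advanced options, so show advanced options must be enabled):

-- best case: this instance has the node to itself
EXEC sp_configure N'max server memory', 12288;
RECONFIGURE;
-- worst case: all instances end up sharing this node
EXEC sp_configure N'min server memory', 4096;
RECONFIGURE;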
Kin: These links will help you with scripting:
- SQL Server Resource Re-Balancing in Failover Cluster
- SQL Server 2008 and R2 Cluster Best Practices
- Use PowerShell Script to Balance Memory
Mark: See Dynamic Memory Management (TechNet)