Why do SQL Server Scalar-valued functions get slower?
In most cases, it's best to avoid scalar-valued functions that reference tables because (as others have said) they are basically black boxes that have to be run once for every row and cannot be optimized by the query optimizer. As a result, their cost tends to grow linearly with the row count even when the underlying tables are indexed.
You may want to consider an inline table-valued function instead, since it is expanded inline into the calling query and can be optimized along with it. You get the encapsulation you want, but with the performance of pasting the expression straight into the SELECT statement.
As a side effect of being inlined, they can't contain any procedural code (no DECLARE @variable; SET @variable = ...; RETURN). They can, however, return multiple rows and columns.
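For contrast, a scalar version of the latitude lookup presumably looks something like the sketch below. The name, return type, and body here are made up for illustration and are not copied from the question; the point is only to show the shape of function the optimizer treats as a black box and calls once per output row.
create function usf_GIS_GET_LAT_scalar(
@City varchar (30),
@State char (2)
)
returns float -- return type assumed; the real column type isn't shown here
as
begin
-- Hypothetical scalar UDF: one lookup per call, invoked once for every row.
declare @lat float;
select top 1 @lat = lat
from GIS_Location with (nolock)
where [State] = @State
and [City] = @City;
return @lat;
end;
GO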
You could re-write your functions something like this:
create function usf_GIS_GET_LAT(
@City varchar (30),
@State char (2)
)
returns table
as return (
select top 1 lat
from GIS_Location with (nolock)
where [State] = @State
and [City] = @City
);
GO
create function usf_GIS_GET_LON (
@City varchar (30),
@State char (2)
)
returns table
as return (
select top 1 LON
from GIS_Location with (nolock)
where [State] = @State
and [City] = @City
);
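If you prefer a single lookup, an inline TVF can also return both columns at once. The sketch below is just an illustration (the name usf_GIS_GET_LAT_LON is made up here); the examples that follow stick with the two separate functions from the question.
create function usf_GIS_GET_LAT_LON (
@City varchar (30),
@State char (2)
)
returns table
as return (
-- One lookup returns both coordinates for the city/state pair.
select top 1 lat, LON
from GIS_Location with (nolock)
where [State] = @State
and [City] = @City
);
GO
You would then consume it with a single cross apply instead of two.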
The syntax to use them is also a little different:
select
Lat.Lat,
Lon.Lon
from
Address_Location with (nolock)
cross apply dbo.usf_GIS_GET_LAT(City,[State]) AS Lat
cross apply dbo.usf_GIS_GET_LON(City,[State]) AS Lon
WHERE
ID IN (SELECT TOP 100 ID FROM Address_Location WITH(NOLOCK) ORDER BY ID DESC)
You call the functions twice (two separate SELECT hits against the database) for every row in the result set.
To make your query faster, join straight to GIS_Location and skip the functions altogether:
SELECT
g.Lat,
g.Lon
FROM
Address_Location l WITH(NOLOCK)
INNER JOIN GIS_Location g WITH(NOLOCK) ON l.State = g.State AND l.City = g.City
WHERE
l.ID IN (SELECT TOP 100 ID FROM Address_Location WITH(NOLOCK) ORDER BY ID DESC)
I'm not sure why the NOLOCK or the crazy WHERE clause; I just copied them from the question...
They do not.
There is no bug in scalar functions that causes their performance to degrade exponentially with the number of rows they are executed against. Try your tests again and have a look at SQL Profiler, paying attention to the CPU, Reads, and Duration columns. Increase your test size to include tests that take longer than one second, two seconds, five seconds.
CREATE FUNCTION dbo.slow
(
@ignore int
)
RETURNS INT
AS
BEGIN
DECLARE @slow INT
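-- Deliberately expensive work: count a cross join of six copies of sysobjects,
-- filtered on the id passed in, so each call does a noticeable amount of work.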
SET @slow = (select count(*) from sysobjects a
cross join sysobjects b
cross join sysobjects c
cross join sysobjects d
cross join sysobjects e
cross join sysobjects f
where a.id = @ignore)
RETURN @slow
END
go
SET STATISTICS TIME ON
select top 1 dbo.slow(id)
from sysobjects
go
select top 5 dbo.slow(id)
from sysobjects
go
select top 10 dbo.slow(id)
from sysobjects
go
select top 20 dbo.slow(id)
from sysobjects
go
select top 40 dbo.slow(id)
from sysobjects
SET STATISTICS TIME OFF
Output
SQL Server Execution Times:
CPU time = 203 ms, elapsed time = 202 ms.
SQL Server Execution Times:
CPU time = 889 ms, elapsed time = 939 ms.
SQL Server Execution Times:
CPU time = 1748 ms, elapsed time = 1855 ms.
SQL Server Execution Times:
CPU time = 3541 ms, elapsed time = 3696 ms.
SQL Server Execution Times:
CPU time = 7207 ms, elapsed time = 7392 ms.
Keep in mind that if you run a scalar function against the rows in a result set, it is executed once per row with no global optimisation. That is exactly why the timings above grow roughly in proportion to the row count (about 180-200 ms per row for 1, 5, 10, 20, and 40 rows) rather than exponentially.
You can wrap your functionality in an inline TVF instead, which will be much faster:
http://sqlblog.com/blogs/alexander_kuznetsov/archive/2008/05/23/reuse-your-code-with-cross-apply.aspx
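For example, an inline TVF version of the dbo.slow demo above might look something like the sketch below (not taken from the linked article; dbo.slow_tvf is a name made up here). Expressed this way, the optimizer can expand the function body into the calling statement instead of invoking a black box per row; whether that helps depends on how it rewrites the expanded query.
CREATE FUNCTION dbo.slow_tvf
(
@ignore int
)
RETURNS TABLE
AS RETURN
(
-- Same cross-join count as dbo.slow, but written as a single query
-- the optimizer can inline into the outer statement.
select count(*) as slow_count
from sysobjects a
cross join sysobjects b
cross join sysobjects c
cross join sysobjects d
cross join sysobjects e
cross join sysobjects f
where a.id = @ignore
)
GO
select top 40 s.slow_count
from sysobjects o
cross apply dbo.slow_tvf(o.id) s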