What does 'COLLATE SQL_Latin1_General_CP1_CI_AS' do?
Please be aware that the accepted answer is a bit incomplete. Yes, at the most basic level Collation handles sorting. BUT, the comparison rules defined by the chosen Collation are used in many places outside of user queries against user data.
If "What does `COLLATE SQL_Latin1_General_CP1_CI_AS` do?" means "What does the `COLLATE` clause of `CREATE DATABASE` do?", then:
The `COLLATE {collation_name}` clause of the `CREATE DATABASE` statement specifies the default Collation of the Database, and not the Server; Database-level and Server-level default Collations control different things.
Server (i.e. Instance)-level controls:
- Database-level Collation for system Databases: `master`, `model`, `msdb`, and `tempdb`.
- Due to controlling the DB-level Collation of `tempdb`, it is then the default Collation for string columns in temporary tables (global and local), but not table variables.
- Due to controlling the DB-level Collation of `master`, it is then the Collation used for Server-level data, such as Database names (i.e. the `name` column in `sys.databases`), Login names, etc.
- Handling of parameter / variable names
- Handling of cursor names
- Handling of `GOTO` labels
- Default Collation used for newly created Databases when the `COLLATE` clause is missing
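To see what the Server-level controls above are working from, the documented `SERVERPROPERTY` and `DATABASEPROPERTYEX` built-in functions can be queried; a quick sketch (the values returned depend on how the instance was installed):

```sql
-- Returns the instance-level (i.e. Server-level) default collation,
-- e.g. SQL_Latin1_General_CP1_CI_AS on many US-English installs:
SELECT SERVERPROPERTY('Collation') AS ServerCollation;

-- tempdb inherits the instance collation, which is what string columns
-- in temporary tables then default to:
SELECT DATABASEPROPERTYEX('tempdb', 'Collation') AS TempdbCollation;
```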
Database-level controls:
- Default Collation used for newly created string columns (`CHAR`, `VARCHAR`, `NCHAR`, `NVARCHAR`, `TEXT`, and `NTEXT` -- but don't use `TEXT` or `NTEXT`) when the `COLLATE` clause is missing from the column definition. This goes for both `CREATE TABLE` and `ALTER TABLE ... ADD` statements.
- Default Collation used for string literals (i.e. `'some text'`) and string variables (i.e. `@StringVariable`). This Collation is only ever used when comparing strings and variables to other strings and variables. When comparing strings / variables to columns, the Collation of the column will be used.
- The Collation used for Database-level meta-data, such as object names (i.e. `sys.objects`), column names (i.e. `sys.columns`), index names (i.e. `sys.indexes`), etc.
- The Collation used for Database-level objects: tables, columns, indexes, etc.
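As a sketch of the column-default behavior just described (the table and column names here are made up for illustration):

```sql
-- Check the current database's default collation:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS DatabaseCollation;

-- DefaultCol picks up the database default; GreekCol overrides it per-column:
CREATE TABLE dbo.CollationDemo
(
    DefaultCol VARCHAR(50),                          -- no COLLATE clause: database default
    GreekCol   VARCHAR(50) COLLATE Greek_100_CI_AS   -- explicit column-level collation
);

-- Verify what each column actually got:
SELECT name, collation_name
FROM   sys.columns
WHERE  [object_id] = OBJECT_ID(N'dbo.CollationDemo');
```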
Also:
- ASCII is an encoding which is 8-bit (for common usage; technically "ASCII" is 7-bit with character values 0 - 127, and "ASCII Extended" is 8-bit with character values 0 - 255). This group is the same across cultures.
- The Code Page is the "extended" part of Extended ASCII, and controls which characters are used for values 128 - 255. This group varies between each culture.
- `Latin1` does not mean "ASCII" since standard ASCII only covers values 0 - 127, and all code pages (that can be represented in SQL Server, and even `NVARCHAR`) map those same 128 values to the same characters.
If "What does `COLLATE SQL_Latin1_General_CP1_CI_AS` do?" means "What does this particular collation do?", then:
Because the name starts with `SQL_`, this is a SQL Server collation, not a Windows collation. These are definitely obsolete, even if not officially deprecated, and exist mainly for pre-SQL Server 2000 compatibility. Although, quite unfortunately, `SQL_Latin1_General_CP1_CI_AS` is very common due to it being the default when installing on an OS using US English as its language. These collations should be avoided if at all possible.

Windows collations (those with names not starting with `SQL_`) are newer, more functional, have consistent sorting between `VARCHAR` and `NVARCHAR` for the same values, and are being updated with additional / corrected sort weights and uppercase/lowercase mappings. These collations also don't have the potential performance problem that the SQL Server collations have: Impact on Indexes When Mixing VARCHAR and NVARCHAR Types.

`Latin1_General` is the culture / locale.

- For `NCHAR`, `NVARCHAR`, and `NTEXT` data this determines the linguistic rules used for sorting and comparison.
- For `CHAR`, `VARCHAR`, and `TEXT` data (columns, literals, and variables) this determines the:
  - linguistic rules used for sorting and comparison.
  - code page used to encode the characters. For example, `Latin1_General` collations use code page 1252, `Hebrew` collations use code page 1255, and so on.
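The code-page mapping per culture can be confirmed with the documented `COLLATIONPROPERTY` built-in function, for example:

```sql
-- Each collation exposes its 8-bit code page via COLLATIONPROPERTY:
SELECT COLLATIONPROPERTY('Latin1_General_100_CI_AS', 'CodePage') AS LatinCodePage,  -- 1252
       COLLATIONPROPERTY('Hebrew_CI_AS',             'CodePage') AS HebrewCodePage; -- 1255
```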
`CP{code_page}` or `{version}`

- For SQL Server collations: `CP{code_page}` is the 8-bit code page that determines what characters map to values 128 - 255. While there are four code pages for Double-Byte Character Sets (DBCS) that can use 2-byte combinations to create more than 256 characters, these are not available for the SQL Server collations.
- For Windows collations: `{version}`, while not present in all collation names, refers to the SQL Server version in which the collation was introduced (for the most part). Windows collations with no version number in the name are version `80` (meaning SQL Server 2000, as that is version 8.0). Not all versions of SQL Server come with new collations, so there are gaps in the version numbers. Some are `90` (for SQL Server 2005, which is version 9.0), most are `100` (for SQL Server 2008, version 10.0), and a small set has `140` (for SQL Server 2017, version 14.0).

  I said "for the most part" because the collations ending in `_SC` were introduced in SQL Server 2012 (version 11.0), but the underlying data wasn't new; they merely added support for supplementary characters for the built-in functions. So, those endings exist for version `90` and `100` collations, but only starting in SQL Server 2012.
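The full list of available collations (and their version markers) can be browsed with the documented `sys.fn_helpcollations()` function; for example, to see the version-100 `Latin1_General` family:

```sql
-- '_' is a LIKE wildcard, so escape the underscores to match them literally:
SELECT name, description
FROM   sys.fn_helpcollations()
WHERE  name LIKE 'Latin1!_General!_100!_%' ESCAPE '!'
ORDER BY name;
```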
Next you have the sensitivities, which can be any combination of the following, but always specified in this order:

- `CS` = case-sensitive or `CI` = case-insensitive
- `AS` = accent-sensitive or `AI` = accent-insensitive
- `KS` = Kana type-sensitive or missing = Kana type-insensitive
- `WS` = width-sensitive or missing = width-insensitive
- `VSS` = variation selector sensitive (only available in the version 140 collations) or missing = variation selector insensitive
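A minimal illustration of the case- and accent-sensitivity flags, using the `COLLATE` clause to override the comparison collation per expression (the collation names used are standard Windows collations):

```sql
-- CI: 'abc' and 'ABC' compare as equal.
SELECT CASE WHEN 'abc' = 'ABC' COLLATE Latin1_General_100_CI_AS
            THEN 'equal' ELSE 'not equal' END AS CaseInsensitive,   -- equal
-- CS: the same comparison now fails.
       CASE WHEN 'abc' = 'ABC' COLLATE Latin1_General_100_CS_AS
            THEN 'equal' ELSE 'not equal' END AS CaseSensitive,     -- not equal
-- AI: the accent is ignored.
       CASE WHEN 'ü' = 'u' COLLATE Latin1_General_100_CI_AI
            THEN 'equal' ELSE 'not equal' END AS AccentInsensitive; -- equal
```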
Optional last piece:

- `_SC` at the end means "Supplementary Character support". The "support" only affects how the built-in functions interpret surrogate pairs (which are how supplementary characters are encoded in UTF-16). Without `_SC` at the end (or `_140_` in the middle), built-in functions don't see a single supplementary character, but instead see the two meaningless code points that make up the surrogate pair. This ending can be added to any non-binary, version 90 or 100 collation.
- `_BIN` or `_BIN2` at the end means "binary" sorting and comparison. Data is still stored the same, but there are no linguistic rules. This ending is never combined with any of the 5 sensitivities or `_SC`. `_BIN` is the older style, and `_BIN2` is the newer, more accurate style. If using SQL Server 2005 or newer, use `_BIN2`. For details on the differences between `_BIN` and `_BIN2`, please see: Differences Between the Various Binary Collations (Cultures, Versions, and BIN vs BIN2).
- `_UTF8` is a new option as of SQL Server 2019. It's an 8-bit encoding that allows Unicode data to be stored in the `VARCHAR` and `CHAR` datatypes (but not the deprecated `TEXT` datatype). This option can only be used on collations that support supplementary characters (i.e. version 90 or 100 collations with `_SC` in their name, and version 140 collations). There is also a single binary `_UTF8` collation (`_BIN2`, not `_BIN`).

  PLEASE NOTE: UTF-8 was designed / created for compatibility with environments / code that are set up for 8-bit encodings yet want to support Unicode. Even though there are a few scenarios where UTF-8 can provide up to 50% space savings as compared to `NVARCHAR`, that is a side-effect and comes at the cost of a slight hit to performance in many / most operations. If you need this for compatibility, then the cost is acceptable. If you want this for space-savings, you had better test, and TEST AGAIN. Testing includes all functionality, and more than just a few rows of data. Be warned that UTF-8 collations work best when ALL columns, and the database itself, are using `VARCHAR` data (columns, variables, string literals) with a `_UTF8` collation. This is the natural state for anyone using this for compatibility, but not for those hoping to use it for space-savings. Be careful when mixing `VARCHAR` data using a `_UTF8` collation with either `VARCHAR` data using non-`_UTF8` collations or `NVARCHAR` data, as you might experience odd behavior / data loss. For more details on the new UTF-8 collations, please see: Native UTF-8 Support in SQL Server 2019: Savior or False Prophet?
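The surrogate-pair behavior of `_SC` described above can be demonstrated with the built-in `LEN` function and a supplementary character (U+1F4A9 here, which UTF-16 encodes as two surrogate code units):

```sql
-- Without _SC, LEN() counts the two UTF-16 surrogate code units:
SELECT LEN(N'💩' COLLATE Latin1_General_100_CI_AS)    AS WithoutSC, -- 2
-- With _SC, LEN() recognizes them as one supplementary character:
       LEN(N'💩' COLLATE Latin1_General_100_CI_AS_SC) AS WithSC;    -- 1
```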
It sets how the database server sorts (compares pieces of text). In this case:

`SQL_Latin1_General_CP1_CI_AS` breaks up into interesting parts:

- `Latin1` makes the server treat strings using charset Latin-1, basically ASCII
- `CP1` stands for Code Page 1252
- `CI` means case-insensitive comparisons, so 'ABC' would equal 'abc'
- `AS` means accent-sensitive, so 'ü' does not equal 'u'
P.S. For more detailed information be sure to read @solomon-rutzky's answer.