SQL Server VARCHAR Column Width
The question might be better stated as:
"What is the advantage of over-specifying the maximum length of a variable-length column?"
In general, there is little advantage, and several disadvantages as the various linked answers point out. Aside from any other concerns, consider that SQL Server is not open-source: there are many 'magic numbers' and heuristics applied based on the information provided to the system. Without source code access, we could never be entirely sure what the impact of this practice might be.
In some cases, where the average length of a column is significantly higher than the 50% of maximum length assumed by SQL Server when calculating sort/hash memory grants, you may see a performance improvement by over-specifying the maximum length. This is a dubious workaround, and should probably be applied only via an explicit CAST or CONVERT (with comments!) rather than by changing the base column definition. Sometimes, it will be preferable to rewrite the query to sort keys instead of whole rows anyway.
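As a sketch of that workaround (table and column names here are illustrative, not from the original question): if most values in a varchar(100) column are close to the full 100 characters, the 50% assumption undersizes the sort memory grant; widening the type only inside the query raises the estimate without touching the base table.

```sql
SELECT
    SomeKey,
    -- Cast to a wider varchar so the optimizer's 50%-of-maximum
    -- assumption (here 200 bytes instead of 50) better matches the
    -- real average length, increasing the sort memory grant.
    -- Column is declared varchar(100) in the base table.
    CAST(Notes AS varchar(400)) AS Notes
FROM dbo.Example
ORDER BY SomeKey;
```

The comment in the query is the important part: without it, the next person to read the code will "simplify" the cast away and reintroduce the spill.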
Where the declared maximum row size might exceed the 8,060-byte in-row limit (even if no row actually does), deleting rows can cause unexpected page-splitting if a trigger is present. Updates may also lead to fragmentation via the same mechanism.
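To make that concrete, here is a minimal sketch (illustrative table and column names) of a table whose declared maximum row size exceeds the in-row limit even though typical stored values never approach it:

```sql
-- Declared maximum row size is roughly 2 x 5,000 bytes plus overhead,
-- well past the 8,060-byte in-row limit, even if every stored value
-- is only a few characters long.
CREATE TABLE dbo.WideDeclared
(
    ID   integer NOT NULL PRIMARY KEY,
    ColA varchar(5000) NULL,
    ColB varchar(5000) NULL
);
```

The table creates successfully (with a warning about the potential row size), so the problem is easy to overlook until the delete/update behaviour described above shows up.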
SQL Server does a pretty good job in most cases where it is provided with good, accurate information from metadata. Compromising this principle for 'convenience' seems unwise to me. A sensible approach is to choose a maximum length value that is reasonable according to the actual data to be stored, and any foreseeable changes.
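One practical way to choose that value is to measure the data you already have before deciding on a declared length. A hedged sketch, again with illustrative names:

```sql
-- Survey existing data to inform a sensible declared length,
-- leaving headroom for foreseeable growth.
SELECT
    MAX(LEN(Notes)) AS max_len,
    AVG(LEN(Notes)) AS avg_len
FROM dbo.Example;
```

If the measured maximum is, say, 180 characters, varchar(255) with room to grow is a far better choice than a reflexive varchar(8000).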