How do I interpret precision and scale of a number in a database?

The precision of a number is the total number of digits.

The scale of a number is the number of digits after the decimal point.

When you set precision and scale on a field definition, they are generally treated as maximum values.

For example, a decimal field defined with precision=5 and scale=2 would allow the following values:

  • 123.45 (p=5,s=2)
  • 12.34 (p=4,s=2)
  • 123 (p=3,s=0)
  • 123.4 (p=4,s=1)
  • 0 (p=0,s=0)

The following values are not allowed or would cause data loss:

  • 12.345 (p=5,s=3) => could be rounded to 12.35 (p=4,s=2)
  • 1234.56 (p=6,s=2) => out of range (the integer part needs 4 digits, but only 3 are available)
  • 123.456 (p=6,s=3) => could be rounded to 123.46 (p=5,s=2)
  • 123450 (p=6,s=0) => out of range

Note that the allowed range is generally determined by the precision and scale together: |value| < 10^(p-s).
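
To see this concretely, here is a minimal sketch in SQL Server-style syntax (the table and column names are made up for illustration, and exact rounding versus error behaviour varies between databases and settings):

    CREATE TABLE PrecisionDemo (amount decimal(5,2));

    -- Fits: at most 5 digits in total, at most 2 of them after the decimal point
    INSERT INTO PrecisionDemo (amount) VALUES (123.45);  -- stored as 123.45
    INSERT INTO PrecisionDemo (amount) VALUES (12.34);   -- stored as 12.34
    INSERT INTO PrecisionDemo (amount) VALUES (0);       -- stored as 0.00

    -- Too many decimal places: rounded to scale 2 (data loss)
    INSERT INTO PrecisionDemo (amount) VALUES (123.456); -- stored as 123.46

    -- Too many digits before the decimal point: arithmetic overflow error
    INSERT INTO PrecisionDemo (amount) VALUES (1234.56); -- fails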


Numeric precision refers to the maximum number of digits that are present in the number.

e.g. 1234567.89 has a precision of 9.

Numeric scale refers to the maximum number of decimal places.

e.g. 123456.789 has a scale of 3.

Thus the maximum allowed value for decimal(5,2) is 999.99.
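
As a quick sanity check, assuming SQL Server-style casts (any value needing more than 3 digits before the decimal point overflows):

    SELECT CAST(999.99 AS decimal(5,2));  -- fine: 5 digits, 2 after the decimal point
    SELECT CAST(1000 AS decimal(5,2));    -- fails with an arithmetic overflow error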


The "Precision, Scale, and Length" topic in the SQL Server 2000 documentation reads:

Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number. For example, the number 123.45 has a precision of 5 and a scale of 2.
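
If you want to check the precision and scale that a column was actually declared with, the standard INFORMATION_SCHEMA views expose them (the table and column names below refer to the hypothetical PrecisionDemo example above):

    SELECT NUMERIC_PRECISION, NUMERIC_SCALE
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'PrecisionDemo'
      AND COLUMN_NAME = 'amount';
    -- returns 5 and 2 for a decimal(5,2) column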