Why is BigDecimal.equals specified to compare both value and scale individually?
Because in some situations, an indication of precision (i.e. the margin of error) may be important.
For example, if you're storing measurements made by two physical sensors, one of which is 10x more precise than the other, it may be important to represent that fact.
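For instance (a minimal jshell-style sketch; the sensor readings are hypothetical), the scale of each BigDecimal records how many decimal places were reported, and equals preserves that distinction even though the two values are numerically equal:

var coarse = new BigDecimal("23.5");   // reading from the less precise sensor
var fine = new BigDecimal("23.50");    // reading from the more precise sensor
coarse.scale()
==> 1
fine.scale()
==> 2
coarse.compareTo(fine)
==> 0
coarse.equals(fine)
==> false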
The general rule for equals is that two equal values should be substitutable for one another. That is, if performing a computation using one value gives some result, substituting an equals value into the same computation should give a result that equals the first result. This applies to objects that are values, such as String, Integer, BigDecimal, etc.
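As a quick illustration of that rule (my own jshell-style example, using Integer): the same computation applied to two equals values produces equals results:

var x = Integer.valueOf(7);
var y = Integer.valueOf(7);
x.equals(y)
==> true
String.valueOf(x * 6).equals(String.valueOf(y * 6))
==> true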
Now consider BigDecimal values 2.0 and 2.00. We know they are numerically equal, and that compareTo on them returns 0. But equals returns false. Why?
Here's an example where they are not substitutable:
var a = new BigDecimal("2.0");
var b = new BigDecimal("2.00");
var three = new BigDecimal(3);
a.divide(three, RoundingMode.HALF_UP)
==> 0.7
b.divide(three, RoundingMode.HALF_UP)
==> 0.67
The results are clearly unequal: the two-argument divide keeps the scale of the dividend, so a's quotient has one decimal place while b's has two. The value of a is therefore not substitutable for b, so a.equals(b) should be false.
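A practical follow-up (my own sketch, not part of the original argument): if you only care about numeric equality, compare with compareTo or normalize with stripTrailingZeros first. Continuing the jshell session above (reusing a and b, and the java.util classes jshell imports by default), note that hash-based collections honor equals while sorted collections use compareTo:

a.compareTo(b) == 0
==> true
a.stripTrailingZeros().equals(b.stripTrailingZeros())
==> true
new HashSet<>(List.of(a, b)).size()
==> 2
new TreeSet<>(List.of(a, b)).size()
==> 1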