Decimal or double?
The biggest difference is that Double is a 64-bit floating-point type and Decimal is not. In addition:
Scientific notation (e) for Doubles is not supported.
Also, currency fields in Salesforce are Decimals. The scale (the number of digits to the right of the decimal point) can be set for a Decimal but not for a Double, which matters for currency and for calculations such as division. You'll also find that Doubles have far fewer methods available to them than Decimals. Unless you know you're dealing with very large numbers where Double's range is what matters, Decimal will generally be the better choice.
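As an illustration, here is a small sketch you could run in Execute Anonymous showing the scale-related Decimal methods (scale, setScale, and the divide overload that takes a scale and rounding mode); the values and variable names are just examples:

Decimal price = 19.99;                        // scale is 2, taken from the literal
System.debug(price.scale());                  // 2

// setScale() returns a new Decimal with the requested number of decimal places.
System.debug(price.setScale(1));              // 20.0

// divide() lets you fix the scale (and rounding mode) of the result,
// which plain Double division cannot do.
Decimal third = price.divide(3, 2, System.RoundingMode.HALF_UP);
System.debug(third);                          // 6.66

Nothing comparable exists on Double, which is part of why Decimal is the natural fit for money.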
Here is a good discussion on the issue from people way more experienced in the ins and outs than me:
From Peter Knoll:
According to the primitive data type docs, currency fields are automatically assigned the type Decimal. Double is 64-bit. I'm not sure what Decimal is behind the scenes; it might be BigDecimal-like, or something else.
From Rich Unger (regarding what type a Decimal actually is):
For example, in Java you should never use floating-point representations to handle monetary calculations, but should instead use BigDecimal, and, as you stated, you need to be careful with comparisons. The same goes for Apex with respect to Double.
SF Decimal is a BigDecimal.
Run the following in Exec Anon and you'll see what I mean.
Double zeroPointZeroOne = 0.01;
Double sum = 0.0;
for (Integer i = 0; i < 10; i++) {
    sum += zeroPointZeroOne;
}
// 0.09999999999999999
System.debug('Double sum=' + sum);

Decimal zeroPointZeroOneD = 0.01;
Decimal sumD = 0.0;
for (Integer i = 0; i < 10; i++) {
    sumD += zeroPointZeroOneD;
}
// 0.1
System.debug('Decimal sumD=' + sumD);
Comparing floating-point values
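That caution about comparisons applies to Apex Doubles as well. A minimal sketch of the usual workaround, comparing within a small tolerance rather than for exact equality (the tolerance value here is just an illustrative choice):

Double increment = 0.01;
Double sum = 0.0;
for (Integer i = 0; i < 10; i++) {
    sum += increment;
}
Double expected = 0.1;

// Direct equality fails because of the accumulated binary representation error.
System.debug(sum == expected);                      // false: sum is 0.09999999999999999

// Comparing within a small tolerance sidesteps the problem.
Double tolerance = 0.000001;
System.debug(Math.abs(sum - expected) < tolerance); // true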
It's important to note that in the earlier Decimal example, declaring Decimal sumD = 0.0; set the scale to 1, meaning 0.099 would be rounded up to 0.1 and 0.01 would be rounded down to 0.0. The default rounding would round 0.05 down to 0.0.
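You can see that rounding behaviour directly with setScale. A short sketch (the values are examples; the first three results assume the default half-even rounding described above, and the last line shows passing a rounding mode explicitly):

Decimal a = 0.099;
System.debug(a.setScale(1));                              // 0.1 (rounds up)

Decimal b = 0.01;
System.debug(b.setScale(1));                              // 0.0 (rounds down)

Decimal c = 0.05;
System.debug(c.setScale(1));                              // 0.0 with the default rounding
System.debug(c.setScale(1, System.RoundingMode.HALF_UP)); // 0.1 when a mode is passed explicitly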
It depends on what the number represents. If the number represents currency then Decimal is the way to go. In a Decimal every digit that you see is stored exactly as you see it, and the number of decimal places and rounding modes are tightly defined. This avoids indeterminate results when large numbers are involved in intermediate or final values. Financial algorithms typically require that sort of control.
The strength of float/double is in representing numbers that vary hugely in size: roughly 10 to the power of +/-38 for float and 10 to the power of +/-308 for double. There is also direct hardware support for calculations on those types, making the calculations many orders of magnitude faster, but that only matters if you are running complex algorithms that perform many thousands of operations. The weakness is that the number of bits used to represent the value is limited, and rounding errors can creep in surprisingly easily.
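A small sketch of that trade-off (Math.pow is used here only because Apex Double literals don't accept scientific notation, and the long value is an arbitrary example):

// Double covers an enormous range of magnitudes...
Double big = Math.pow(10, 300);
System.debug(big);

// ...while Decimal keeps every digit you give it exactly.
Decimal exact = Decimal.valueOf('12345678901234567890.123456789');
System.debug(exact);
System.debug(exact.precision());   // 29 significant digits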
Unless you are doing complex scientific calculations Decimal is usually the best way to go.