Integer division: How do you produce a double?
What's wrong with casting primitives?
If you don't want to cast for some reason, you could do:
double d = num * 1.0 / denom;
But if you change the type of one of the variables, you have to remember to sneak the double back in whenever your formula changes, because if that variable stops being part of the calculation, the result is messed up again. I make a habit of casting within the calculation instead, and adding a comment next to it:
double d = 5 / (double) 20; // cast to double to force floating-point division
Note that casting the result won't do it:
double d = (double) (5 / 20); // produces 0.0: the integer division runs first, then 0 is widened
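To see the difference in one place, here's a minimal, self-contained sketch (the class and variable names are mine) comparing the broken variant with the two working ones:
public class IntDivision {
    public static void main(String[] args) {
        int num = 5;
        int denom = 20;

        // Integer division happens first, then widening: the fraction is already gone.
        double wrong = (double) (num / denom);

        // Cast one operand before dividing: forces floating-point division.
        double castInside = num / (double) denom;

        // Multiply by 1.0 to promote the expression without an explicit cast.
        double multiplied = num * 1.0 / denom;

        System.out.println(wrong);      // 0.0
        System.out.println(castInside); // 0.25
        System.out.println(multiplied); // 0.25
    }
}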
I don't like casting primitives; who knows what may happen?
Why do you have an irrational fear of casting primitives? Nothing bad will happen when you cast an int to a double. If you're just not sure of how it works, look it up in the Java Language Specification: casting an int to a double is a widening primitive conversion.
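In fact, because it's a widening conversion, the compiler even does it implicitly on assignment; the explicit cast in a division is only there to change which division (integer or floating-point) gets performed. A small sketch of both:
int i = 5;
double d = i;        // implicit widening: no cast required, compiles fine
double q = i / 20.0; // one double operand is enough; q is 0.25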
You can get rid of the extra pair of parentheses by casting the denominator instead of the numerator:
double d = num / (double) denom;
Alternatively, declare the numerator as a double in the first place:
double num = 5;
That avoids a cast. But you'll find that the cast conversions are well-defined; you don't have to guess, just check the JLS. int to double is a widening conversion. From §5.1.2:
Widening primitive conversions do not lose information about the overall magnitude of a numeric value.
[...]
Conversion of an int or a long value to float, or of a long value to double, may result in loss of precision; that is, the result may lose some of the least significant bits of the value. In this case, the resulting floating-point value will be a correctly rounded version of the integer value, using IEEE 754 round-to-nearest mode (§4.2.4).
5 can be expressed exactly as a double, so nothing is lost in this conversion.
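For illustration, here's a small sketch of what that JLS paragraph means (the class name and values are mine): every int converts exactly to double, but a large int can lose its low bits when converted to float:
public class WideningDemo {
    public static void main(String[] args) {
        int big = Integer.MAX_VALUE;  // 2147483647, i.e. 2^31 - 1
        double asDouble = big;        // int -> double is always exact
        float asFloat = big;          // int -> float rounds to the nearest float

        System.out.println(asDouble); // 2.147483647E9 (exact)
        System.out.println(asFloat);  // 2.14748365E9  (rounded up to 2^31)
    }
}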