Where do 1.0f and 1.0 make a difference?
One is a double, the other is a float:
double x = 0.0; // denotes a double
float y = 0.0f; // denotes a float
It depends on the system, but e.g. on Windows you'll find that float has 32 bits of precision whereas double has 64 bits. This can make a tremendous difference when it comes to precise or numerically unstable calculations.
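To see that precision gap directly, here is a minimal sketch (assuming IEEE-754 single and double precision, which is what practically all current platforms use):

#include <stdio.h>

int main(void)
{
    float  f = 1.0f / 3.0f; /* float math: about 7 significant decimal digits */
    double d = 1.0 / 3.0;   /* double math: about 15-16 significant decimal digits */
    printf("%.17f\n%.17f\n", f, d);
    return 0;
}

On an IEEE-754 machine this prints something like 0.33333334326744080 for the float and 0.33333333333333331 for the double.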
Can we not write float y = 0.0?
From your comment, I see where the confusion stems from. It's not the data type of the variable being assigned to, but the data type of the literal constant (0.0, 1.0f, 1.0, etc.) itself, that matters here. When you write
float f = 1.0;
1.0 is a literal of type double while f is a float, so the compiler performs an implicit narrowing conversion to float. The same holds true for double d = 1.0f, where it's an implicit widening conversion from float to double.
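Here is a short sketch showing both directions, and that the narrowing conversion can actually change the value (again assuming IEEE-754 floats):

#include <stdio.h>

int main(void)
{
    float  f = 1.0;  /* double literal narrowed to float; 1.0 survives unchanged */
    double d = 1.0f; /* float literal widened to double; nothing is lost */
    float  g = 0.1;  /* 0.1 is a double literal; narrowing it to float loses precision */
    (void)f; (void)d; /* silence unused-variable warnings */
    printf("%.17f\n%.17f\n", g, 0.1);
    return 0;
}

The two printed lines differ (roughly 0.10000000149011612 versus 0.10000000000000001), which is precisely the effect of the narrowing conversion.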
Implicit conversion rules are also the reason the expression 16777217 * 1.0f (in ouah's answer) becomes a float: 1.0f is a float, and in an expression with both float and int the standard dictates that the result is a float, so the int operand is converted to float. But the resulting value isn't representable as a float, and thus you see a different value.
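That rounding can be observed in isolation; a minimal sketch, assuming the usual round-to-nearest-even mode:

#include <stdio.h>

int main(void)
{
    float f = 16777217;  /* 2^24 + 1 is not representable in binary32 */
    printf("%.1f\n", f); /* typically prints 16777216.0 */
    return 0;
}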
Instead, when 1.0f is changed to 1.0 it becomes a double, and thus the expression 16777217 * 1.0 becomes a double (again because the standard dictates that in an expression mixing double with an integer type, the result is a double), which is large enough to hold the value 16777217 exactly.
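If you want the compiler itself to report the type of each expression, C11's _Generic offers a quick check; a sketch assuming a C11-capable compiler:

#include <stdio.h>

int main(void)
{
    /* _Generic selects the string matching the type of the controlling expression */
    printf("%s\n", _Generic(16777217 * 1.0f, float: "float", double: "double", default: "other"));
    printf("%s\n", _Generic(16777217 * 1.0,  float: "float", double: "double", default: "other"));
    return 0;
}

This prints float on the first line and double on the second, matching the conversion rules described above.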
As others said, one literal is of type float and the other is of type double.
Here is an example where it makes a difference:
#include <stdio.h>
int main(void)
{
    int a = 16777217 * 1.0f; /* float product rounds to 16777216.0f before the conversion to int */
    int b = 16777217 * 1.0;  /* double product holds 16777217 exactly */
    printf("%d %d\n", a, b);
    return 0;
}
prints on my machine:
16777216 16777217
The expression 16777217 * 1.0f is of type float, and 16777217 cannot be represented exactly in a float (in IEEE-754), while it can be represented exactly in a double.
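One way to visualize the gap is with nextafterf; a sketch assuming IEEE-754 binary32/binary64 and C99's <math.h> (you may need to link with -lm):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* binary32 has a 24-bit significand, so above 2^24 consecutive
       representable floats are 2 apart and 16777217 falls in the gap */
    printf("%.1f\n", nextafterf(16777216.0f, INFINITY)); /* prints 16777218.0 */
    /* binary64 has a 53-bit significand, so 16777217 is exact as a double */
    printf("%.1f\n", 16777217.0); /* prints 16777217.0 */
    return 0;
}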