Using %f to print an integer variable
From the latest C11 draft, §7.16.1.1/2 (the `va_arg` macro):
> ...if type is not compatible with the type of the actual next argument
> (as promoted according to the default argument promotions), the behavior
> is undefined, except for the following cases:
> — one type is a signed integer type, the other type is the corresponding
> unsigned integer type, and the value is representable in both types;
> — one type is pointer to void and the other is a pointer to a character type.
The most important thing to remember is that, as chris points out, the behavior is undefined. If this were in a real program, the only sensible thing to do would be to fix the code.
On the other hand, looking at the behavior of code whose behavior is not defined by the language standard can be instructive (as long as you're careful not to generalize the behavior too much).
`printf`'s `"%f"` format expects an argument of type `double`, and prints it in decimal form with no exponent. Very small values will be printed as `0.000000`.
When you do this:

```c
int x = 10;
printf("%f", x);
```
we can explain the visible behavior given a few assumptions about the platform you're on:
- `int` is 4 bytes
- `double` is 8 bytes
- `int` and `double` arguments are passed to `printf` using the same mechanism, probably on the stack
So the call will (plausibly) push the `int` value `10` onto the stack as a 4-byte quantity, and `printf` will grab 8 bytes of data off the stack and treat it as the representation of a `double`. 4 bytes will be the representation of `10` (in hex, `0x0000000a`); the other 4 bytes will be garbage, quite likely zero. The garbage could be either the high-order or low-order 4 bytes of the 8-byte quantity. (Or anything else; remember that the behavior is undefined.)
Here's a demo program I just threw together. Rather than abusing `printf`, it copies the representation of an `int` object into a `double` object using `memcpy()`.
```c
#include <stdio.h>
#include <string.h>

static void print_hex(const char *name, const void *addr, size_t size) {
    const unsigned char *buf = addr;
    printf("%s = ", name);
    for (size_t i = 0; i < size; i++) {
        printf("%02x", buf[i]);
    }
    putchar('\n');
}

int main(void) {
    int i = 10;
    double x = 0.0;
    print_hex("i (set to 10)", &i, sizeof i);
    print_hex("x (set to 0.0)", &x, sizeof x);
    memcpy(&x, &i, sizeof (int));
    print_hex("x (copied from i)", &x, sizeof x);
    printf("x (%%f format) = %f\n", x);
    printf("x (%%g format) = %g\n", x);
    return 0;
}
```
The output on my x86 system is:

```
i (set to 10) = 0a000000
x (set to 0.0) = 0000000000000000
x (copied from i) = 0a00000000000000
x (%f format) = 0.000000
x (%g format) = 4.94066e-323
```
As you can see, the value of the `double` is very small (you can consult a reference on the IEEE floating-point format for the details), close enough to zero that `"%f"` prints it as `0.000000`.
Let me emphasize once again that the behavior is undefined, which means specifically that the language standard "imposes no requirements" on the program's behavior. Variations in byte order, in floating-point representation, and in argument-passing conventions can dramatically change the results. Even compiler optimization can affect it; compilers are permitted to assume that a program's behavior is well defined, and to perform transformations based on that assumption.
So please feel free to ignore everything I've written here (other than the first and last paragraphs).