Float and Int Both 4 Bytes? How Come?
Well, here's a quick explanation:
An int and a float usually take up one word in memory. Today, with the shift to 64-bit systems, this may mean that a word is 64 bits, or 8 bytes, allowing the representation of a huge span of numbers. Or it could still be a 32-bit system, meaning each word in memory takes up 4 bytes. Memory is typically accessed on a word-by-word basis.
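For example, on a typical desktop platform you can check the sizes directly (a small C sketch; sizes are implementation-defined, so the values in the comments are only the common case):

    #include <stdio.h>

    int main(void)
    {
        /* On most common 32- and 64-bit platforms today, int and float
           are both 4 bytes, while long and pointers may differ in size. */
        printf("sizeof(int)   = %zu\n", sizeof(int));    /* typically 4 */
        printf("sizeof(float) = %zu\n", sizeof(float));  /* typically 4 */
        printf("sizeof(long)  = %zu\n", sizeof(long));   /* 4 or 8, platform-dependent */
        return 0;
    }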
The difference between int and float is not the physical space they occupy in memory, but the way the ALU (Arithmetic Logic Unit) treats the number. An int represents its corresponding number directly in binary (well, almost: negative values use two's complement notation). A float, on the other hand, is encoded (typically in the IEEE 754 standard format) to represent a number in exponential form (i.e. 2.99*10^6 is in exponential form).
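To see that the difference is one of interpretation rather than storage, you can copy the raw bytes of a float into a same-sized integer and pull the fields apart (a sketch that assumes a 32-bit IEEE 754 float, which is the usual case but not guaranteed by the C standard):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        float f = 2.99e6f;   /* 2.99 * 10^6, stored in IEEE 754 single precision */
        uint32_t bits;

        /* Copy the same four bytes into an integer; no conversion takes place. */
        memcpy(&bits, &f, sizeof bits);

        printf("as float: %f\n", f);                        /* 2990000.000000 */
        printf("same bits as integer: %u\n", (unsigned)bits);
        printf("sign/exponent/mantissa: %u %u %u\n",
               (unsigned)(bits >> 31),                      /* 1 sign bit      */
               (unsigned)((bits >> 23) & 0xFF),             /* 8 exponent bits */
               (unsigned)(bits & 0x7FFFFF));                /* 23 mantissa bits */
        return 0;
    }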
Your misunderstanding, I think, lies in the misconception that a floating-point number can represent more information. While floats can represent numbers of greater magnitude, they cannot represent them with as much accuracy, because some of the bits must encode the exponent, and the exponent itself can be quite a large number. So the number of significant digits you get out of a floating-point number is smaller (which means less information is represented), whereas ints represent every integer in their range exactly, but the maximum magnitude they can reach is much smaller.
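Here is a small illustration of that loss of significant digits (assuming the usual 32-bit int and IEEE 754 single-precision float):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        int   i = 123456789;      /* every digit of a 9-digit int is exact */
        float f = 123456789.0f;   /* too many significant digits for a float */

        printf("int:   %d\n", i);       /* 123456789 */
        printf("float: %.1f\n", f);     /* 123456792.0 -- the last digits are lost */
        printf("FLT_DIG = %d\n", FLT_DIG);  /* only about 6 reliable decimal digits */
        return 0;
    }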
I just find it highly surprising that something which represents (virtually) the entire real line is the same size as something which represents only the integers.
Perhaps this will become less surprising once you realize that there are lots of integers that a 32-bit int can represent exactly, but a 32-bit float can't.
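For example, 2^24 + 1 = 16777217 fits easily in a 32-bit int, but a single-precision float, with its 24-bit significand, has no exact representation for it (a sketch assuming IEEE 754):

    #include <stdio.h>

    int main(void)
    {
        int   i = 16777217;          /* 2^24 + 1, exact as an int */
        float f = 16777217.0f;       /* rounds to the nearest representable float */

        printf("int:   %d\n", i);    /* 16777217 */
        printf("float: %.1f\n", f);  /* 16777216.0 -- the +1 is lost */
        printf("equal after round trip? %s\n",
               ((int)f == i) ? "yes" : "no");   /* no */
        return 0;
    }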
A float can represent fewer distinct numbers than an int, but they're spread over a wider range.
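Both types have at most 2^32 bit patterns to work with; the float simply spreads its values (and spends some patterns on infinities and NaNs) across a far larger range, as a quick look at the limits shows (assuming the usual 32-bit int and float):

    #include <stdio.h>
    #include <limits.h>
    #include <float.h>

    int main(void)
    {
        /* Same storage size, very different reachable range. */
        printf("INT_MAX = %d (about 2.1e9)\n", INT_MAX);
        printf("FLT_MAX = %e (about 3.4e38)\n", FLT_MAX);
        return 0;
    }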
It is also worth noting that the spacing between consecutive floats becomes wider as one moves away from zero, whereas it remains constant for consecutive ints.
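You can observe that widening gap directly with nextafterf from <math.h> (on an IEEE 754 system the gap just above 1.0 is about 1.2e-7, while just above 1e9 it is 64):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        float small = 1.0f;
        float large = 1.0e9f;

        /* Distance to the next representable float above each value. */
        printf("gap after 1.0: %g\n", nextafterf(small, 2.0f)   - small);  /* ~1.19e-07 */
        printf("gap after 1e9: %g\n", nextafterf(large, 2.0e9f) - large);  /* 64 */

        /* Consecutive ints, by contrast, are always exactly 1 apart. */
        return 0;
    }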