Purpose of defining POSITIVE_INFINITY, NEGATIVE_INFINITY, NaN constants only for floating-point data types, but not for integral data types
The integer types in Java use either unsigned binary (for `char`) or two's complement signed representation. There is no representation for "infinity" in either of these kinds of representations. For example, with `int` there are 2^32 possible values, and all of them represent finite numbers.
(`Integer.MIN_VALUE` is -2^31, `Integer.MAX_VALUE` is 2^31 - 1, and if you count them all ... including zero ... that makes 2^32 different values.)
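To make this concrete, here is a minimal Java sketch (the class name is just for illustration) showing that the extreme `int` values are ordinary finite numbers, and that arithmetic past them simply wraps around:

```java
public class IntRange {
    public static void main(String[] args) {
        // All 2^32 int bit patterns denote finite numbers; none is reserved for "infinity".
        System.out.println(Integer.MIN_VALUE);      // -2147483648  (-2^31)
        System.out.println(Integer.MAX_VALUE);      //  2147483647  (2^31 - 1)

        // Two's complement arithmetic simply wraps around at the ends of the range.
        System.out.println(Integer.MAX_VALUE + 1);  // -2147483648  (silent wrap-around)
    }
}
```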
By contrast, floating-point numbers use IEEE binary floating-point representations, and these do have a standard way to represent both infinity and not-a-number values.
Therefore it makes sense to define `POSITIVE_INFINITY` and `NEGATIVE_INFINITY` constants for the floating-point types, and it is impossible to define them for the integer types.
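For example (an illustrative sketch; the class name is mine), ordinary floating-point operations produce exactly these constants, with no exception involved:

```java
public class FloatSpecials {
    public static void main(String[] args) {
        // Floating-point division by zero does not trap; it yields the IEEE special values.
        double posInf = 1.0 / 0.0;
        double nan    = 0.0 / 0.0;

        System.out.println(posInf == Double.POSITIVE_INFINITY);   // true
        System.out.println(-posInf == Double.NEGATIVE_INFINITY);  // true
        System.out.println(Double.isNaN(nan));                    // true (NaN != NaN, hence isNaN)
    }
}
```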
If you want to know why it is like this:
The integer representations were designed / selected (a long time ago!) to maximize speed. Any special cases (like values reserved to represent infinity, etc.) would make the integer arithmetic hardware more complicated and slower. If the hardware designer's goal is to do an integer addition in one clock cycle, then making addition more complicated means that the clock speed must be slower. That affects the speed of the entire processor.
The flip-side is that:
- Overflow happens without any explicit notification (which may or may not be desirable)
- Division by zero has to be dealt with via a hardware exception, and that results in a major performance penalty ... if it actually happens. (Both behaviors are sketched below.)
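Here is a small Java sketch (class name mine) demonstrating both points:

```java
public class IntPitfalls {
    public static void main(String[] args) {
        // Overflow happens without any explicit notification: the result silently wraps.
        int big = Integer.MAX_VALUE;
        System.out.println(big + 1);   // -2147483648, no error of any kind

        // Division by zero, by contrast, is trapped and surfaces as an exception.
        int zero = 0;
        System.out.println(1 / zero);  // throws java.lang.ArithmeticException: / by zero
    }
}
```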
The standards committee that designed the IEEE floating-point representations was also taking account of the requirements of scientific and engineering domains, where there was a need to be able to represent infinities. Floating-point operations are already slower and more complicated because of the need to do scaling, etc. Therefore they are most likely already multi-cycle instructions, and there is probably some "slack" for dealing with the special cases.
Also, there is the advantage that INF and NaN values allow the operations that create them to proceed without a hardware exception, but without "sweeping the bad operations under the carpet" the way integer overflow does.
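A brief sketch of that propagation behavior (class name mine):

```java
public class NanPropagation {
    public static void main(String[] args) {
        // The bad operation proceeds without any hardware exception...
        double nan = 0.0 / 0.0;

        // ...but NaN contaminates everything computed from it,
        // so the bad result cannot be mistaken for a good one.
        double result = (nan + 1.0) * 2.0;
        System.out.println(result);            // NaN
        System.out.println(result == result);  // false: NaN compares unequal even to itself
    }
}
```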
Note that two's complement was used in a working computer in 1949 (EDSAC). The IEEE 754 standard emerged in 1985.
For what it is worth, some programming languages are aware of integer overflow; for example Ada. But they don't do this with representations of infinity, etc. Instead, they throw an exception (or equivalent) when an operation overflows. Even so, this adds a performance penalty, since overflow detection typically entails an extra instruction after each integer arithmetic instruction to test an "overflow" status bit. (That's the way modern instruction sets work ...)
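Java itself offers opt-in checked arithmetic in this Ada-like style via the `Math.*Exact` methods (since Java 8); for example:

```java
public class CheckedArithmetic {
    public static void main(String[] args) {
        System.out.println(Math.addExact(1, 2));  // 3

        // The overflow check costs a little extra work, and overflow is reported
        // as an exception rather than as a silently wrapped value.
        System.out.println(Math.addExact(Integer.MAX_VALUE, 1));
        // -> java.lang.ArithmeticException: integer overflow
    }
}
```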
It's part of the IEEE 754 floating-point standard, as mentioned in this spec:
> The floating-point types are `float` and `double`, which are conceptually associated with the single-precision 32-bit and double-precision 64-bit format IEEE 754 values and operations as specified in IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985 (IEEE, New York).
>
> The IEEE 754 standard includes not only positive and negative numbers that consist of a sign and magnitude, but also positive and negative zeros, positive and negative infinities, and special Not-a-Number values (hereafter abbreviated NaN).
These special values are computed based on their bit representations according to the standard. For example, the `Double` positive infinity is computed based on the `0x7ff0000000000000` bit representation.
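You can check this round trip yourself (illustrative class name mine):

```java
public class InfinityBits {
    public static void main(String[] args) {
        // Reconstruct +Infinity from the bit pattern the standard prescribes.
        double posInf = Double.longBitsToDouble(0x7ff0000000000000L);
        System.out.println(posInf == Double.POSITIVE_INFINITY);  // true

        // And recover the bit pattern from the constant.
        long bits = Double.doubleToLongBits(Double.POSITIVE_INFINITY);
        System.out.println(Long.toHexString(bits));              // 7ff0000000000000
    }
}
```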
In contrast, integer types have no bit representation for infinite values. They only have representations for finite numbers. The `Integer` class defines the minimum and maximum finite values as -2^31 and 2^31 - 1.
As others have pointed out, it's in the IEEE specification, etc. Floats and doubles support NaN and Infinity, which integers do not.
In terms of the reasoning behind it: division by zero is mathematically undefined, and with integers you know unambiguously when you are trying to divide by zero.
Floating-point numbers are not exact. `0.003f - 0.001f - 0.002f` is mathematically zero, but by the IEEE specification and our ability to represent numbers in computers, it's -2.3283064E-10. Only a finite set of decimal numbers can be represented exactly in binary, and there isn't any representation that would always give us a correct, exact zero.
If `tinyFloat = 0.003f - 0.001f - 0.002f`, then `tinyFloat == -2.3283064E-10`. That's mathematically zero and practically zero, but `1f / tinyFloat == -4.2949673E9`.
```scala
// This still works too:
scala> Integer.MAX_VALUE / (tinyFloat * tinyFloat * tinyFloat)
res58: Float = -1.7014118E38

// But eventually you overflow
scala> Integer.MAX_VALUE / (tinyFloat * tinyFloat * tinyFloat * tinyFloat)
res59: Float = Infinity
```
(If you're not familiar with it, Scala is a JVM language, so the value types above are the same as Java's.)
That last `tinyFloat^4` still isn't exactly zero, so it doesn't make sense for the computer to throw an `ArithmeticException`. This problem doesn't exist with integers. There's essentially no other way to overflow with division: `Integer.MAX_VALUE / 1` is still `Integer.MAX_VALUE` (the one edge case is `Integer.MIN_VALUE / -1`, which silently wraps back to `Integer.MIN_VALUE`). You either divided by exactly zero, which is mathematically invalid but exactly representable in binary, or you didn't, and you got a valid result.
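A tiny check of that edge case (class name mine):

```java
public class DivisionEdgeCase {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE / 1);   // 2147483647, as expected

        // The lone overflowing integer division: the true result, 2^31,
        // doesn't fit in an int, so it wraps back to Integer.MIN_VALUE.
        System.out.println(Integer.MIN_VALUE / -1);  // -2147483648
    }
}
```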