Divide by Zero: Infinite, NaN, or Zero Division Error?
Apart from the fact that 1 / 0 == inf is mathematically highly questionable, the simple reason it doesn’t behave that way in most programming languages is that 1 / 0 almost universally performs an integer division (exceptions exist).
The result is an integer, and there is simply no way of encoding “infinity” in an integer. There is for floating-point numbers, which is why a floating-point division by zero will actually yield an infinite value in most languages.
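To make that concrete, here is a small Java sketch (any language with IEEE 754 doubles behaves similarly; the class name is purely illustrative): the integer expression throws, while the same division on doubles quietly yields infinity.

    public class DivideByZero {
        public static void main(String[] args) {
            // Floating-point division: IEEE 754 reserves a bit pattern for infinity,
            // so 1.0 / 0.0 evaluates to Double.POSITIVE_INFINITY without complaint.
            double f = 1.0 / 0.0;
            System.out.println(f);                      // Infinity
            System.out.println(Double.isInfinite(f));   // true

            // Integer division: no int bit pattern means "infinity",
            // so the JVM throws instead of producing a value.
            try {
                int i = 1 / 0;
                System.out.println(i);                  // never reached
            } catch (ArithmeticException e) {
                System.out.println(e.getMessage());     // "/ by zero"
            }
        }
    }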
The same is true for NaN: while the IEEE floating-point standard defines a bit pattern that represents a NaN value, integers have no such reserved pattern, so NaN simply cannot be represented as an integer.
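A short sketch of that point, again in Java (class name illustrative): NaN is just a reserved double bit pattern, whereas every one of the 2^32 int bit patterns already denotes an ordinary number, leaving nothing to spare for “not a number”.

    public class NanBits {
        public static void main(String[] args) {
            // 0.0 / 0.0 has no meaningful result, so IEEE 754 yields NaN.
            double nan = 0.0 / 0.0;
            System.out.println(Double.isNaN(nan));   // true

            // The reserved bit pattern: exponent all ones, non-zero mantissa
            // (typically 7ff8000000000000 for this expression).
            System.out.println(Long.toHexString(Double.doubleToRawLongBits(nan)));

            // NaN compares unequal even to itself; Double.isNaN is the reliable test.
            System.out.println(nan == nan);          // false

            // int has no equivalent: every value from MIN_VALUE to MAX_VALUE
            // is an ordinary integer, so there is nothing left to mean NaN.
            System.out.println(Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE);
        }
    }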
Whilst the limit of 1 / n tends towards infinity as n approaches zero from the positive direction, the reason that 1 / 0 <> Inf is that 1 / 0 itself is undefined (by mathematical definition!).
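A numeric sketch of that distinction (Java, illustrative names): the quotient grows without bound as the denominator shrinks towards zero, but the Infinity printed for the final division is an IEEE 754 convention, not the value of the limit at zero.

    public class LimitSketch {
        public static void main(String[] args) {
            // 1/n grows without bound as n approaches 0 from the positive side ...
            for (double n = 1e-1; n >= 1e-300; n /= 1e50) {
                System.out.printf("1 / %g = %g%n", n, 1.0 / n);
            }
            // ... but at n == 0 the quotient is mathematically undefined;
            // IEEE 754 simply chooses to report Infinity here.
            System.out.println(1.0 / 0.0);   // Infinity
        }
    }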
Is that not the most mathematically correct response?
No, because in mathematics division by zero is simply undefined, and infinity is commonly not treated as a value (or at least not as a single value).
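IEEE 754's signed zeros illustrate the “not a single value” point nicely: dividing by +0.0 and by -0.0 gives opposite infinities, mirroring the fact that the two one-sided limits of 1 / n disagree, so no single value can sensibly be assigned to 1 / 0. (Java again, purely illustrative.)

    public class SignedZero {
        public static void main(String[] args) {
            // The sign of zero is preserved, so the two "directions" give opposite results.
            System.out.println(1.0 / 0.0);    // Infinity   (limit from the right)
            System.out.println(1.0 / -0.0);   // -Infinity  (limit from the left)

            // The zeros compare equal, yet the quotients differ -- another hint
            // that no single "infinity" value fits 1 / 0.
            System.out.println(0.0 == -0.0);                 // true
            System.out.println(1.0 / 0.0 == 1.0 / -0.0);     // false
        }
    }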
The reason that not all languages/libraries return NaN is that (a) zero division is often the result of a programmer error, since it shouldn't occur at all in mathematically rigorous algorithms, and (b) processors may handle it by raising an exception state, so converting that state into a NaN would require extra handling, making division even more expensive than it already is (compared to, say, addition).
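In practice, point (a) means code usually validates the denominator up front rather than inspecting the result for NaN afterwards. A minimal sketch (the helper name safeDivide is made up for illustration):

    public class SafeDivide {
        // Treat a zero denominator as a caller error immediately,
        // instead of letting NaN or Infinity propagate through later arithmetic.
        static double safeDivide(double numerator, double denominator) {
            if (denominator == 0.0) {
                throw new IllegalArgumentException("denominator must be non-zero");
            }
            return numerator / denominator;
        }

        public static void main(String[] args) {
            System.out.println(safeDivide(6.0, 3.0));   // 2.0
            System.out.println(safeDivide(1.0, 0.0));   // throws IllegalArgumentException
        }
    }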