Why do floating-point numbers have signed zeros?

From Wikipedia

Signed zero is zero with an associated sign. In ordinary arithmetic, −0 = +0 = 0. In computing, however, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero). (source)

This occurs in the sign and magnitude and ones' complement signed number representations for integers, and in most floating point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0.

According to the IEEE 754 standard, negative zero and positive zero should compare as equal with the usual (numerical) comparison operators, like the == operators of C and Java. (source).
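Both zeros compare equal with ==, even though their underlying bit patterns differ in exactly the sign bit. A quick Java check (a minimal sketch; Double.doubleToRawLongBits exposes the raw IEEE 754 encoding):

System.out.println(0.0 == -0.0);                                         // true
System.out.println(Long.toHexString(Double.doubleToRawLongBits(0.0)));   // 0
System.out.println(Long.toHexString(Double.doubleToRawLongBits(-0.0)));  // 8000000000000000 (only the sign bit set)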

When a floating-point operation produces a negative result that is too close to zero to be represented (by the computer), the result is -0.0. For example, -5.0 / Float.POSITIVE_INFINITY evaluates to -0.0.
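For instance, in Java (a minimal sketch; the second line shows a genuine underflow, since the exact product is about -1e-600, far below the smallest representable double):

System.out.println(-5.0 / Float.POSITIVE_INFINITY);  // -0.0
System.out.println(-1e-300 * 1e-300);                // -0.0 (underflow)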

This distinction between -0.0 and +0.0 gives the end user more information than merely displaying a final result of 0. Naturally, such a concept is only useful in systems with a finite numerical representation, such as computers; in mathematics, one can represent any number exactly, regardless of how close it is to zero.

−0 and +0 are the result of operations that underflow, just as −∞ and +∞ are the result of operations that overflow. For mathematically indeterminate operations, the result is NaN (e.g., 0/0).
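All three cases are easy to reproduce in Java (a minimal sketch; the exact operand values don't matter, only that they push past the representable range):

System.out.println(1e308 * 10);       // Infinity   (overflow)
System.out.println(-1e308 * 10);      // -Infinity  (overflow)
System.out.println(1e-308 / 1e100);   // 0.0        (underflow)
System.out.println(-1e-308 / 1e100);  // -0.0       (underflow)
System.out.println(0.0 / 0.0);        // NaN        (indeterminate)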

What's the difference between -0.0 and 0.0?

In reality, both represent 0. Furthermore, (-0.0 == 0.0) returns true. Nevertheless:

  1. 1/-0.0 produces -Infinity while 1/0.0 produces Infinity.

  2. 3 * (+0) = +0 and +0 / -3 = -0. The usual sign rules apply when multiplying or dividing a signed zero; the snippet below verifies both points.
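Both points check out in Java. Note also that while == treats the two zeros as equal, Double.compare orders -0.0 before 0.0:

System.out.println(1 / 0.0);                    // Infinity
System.out.println(1 / -0.0);                   // -Infinity
System.out.println(3 * 0.0);                    // 0.0
System.out.println(0.0 / -3);                   // -0.0
System.out.println(-0.0 == 0.0);                // true
System.out.println(Double.compare(-0.0, 0.0));  // -1 (negative: -0.0 sorts first)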

Mandatory reading: "What Every Computer Scientist Should Know About Floating-Point Arithmetic".


See the section on "Signed Zero" in What Every Computer Scientist Should Know About Floating-Point Arithmetic

Zeros in Java float and double do not just represent true zero. They are also used as the result for any calculation whose exact result has too small a magnitude to be represented. There is a big difference, in many contexts, between underflow of a negative number and underflow of a positive number. For example, if x is a very small magnitude positive number, 1/x should be positive infinity and 1/(-x) should be negative infinity. Signed zero preserves the sign of underflow results.
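A small Java illustration (assuming x = 1e-300, so that x * x is about 1e-600 and underflows to a signed zero):

double x = 1e-300;
System.out.println(x * x);         // 0.0
System.out.println(-x * x);        // -0.0 (sign of the underflow is preserved)
System.out.println(1 / (x * x));   // Infinity
System.out.println(1 / (-x * x));  // -Infinity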


-0 is (generally) treated as 0*. It can result when a negative floating-point number is so close to zero that it can be considered 0 (to be clear, I'm referring to arithmetic underflow; the results of the following computations are interpreted as being exactly ±0, not just really small numbers). e.g.

System.out.println(-1 / Float.POSITIVE_INFINITY);
-0.0

If we consider the same case with a positive number, we will receive our good old 0:

System.out.println(1 / Float.POSITIVE_INFINITY);
0.0

* Here's a case where using -0.0 produces a different result from using 0.0:

System.out.println(1 / 0.0);
System.out.println(1 / -0.0);
Infinity
-Infinity

This makes sense if we consider the function 1 / x. As x approaches 0 from the positive side, 1 / x goes to positive infinity; as x approaches 0 from the negative side, it goes to negative infinity. The graph of the function makes this clear:

[graph of y = 1/x] (source)

In math terms:

  lim (x→0⁺) 1/x = +∞
  lim (x→0⁻) 1/x = −∞

This illustrates one significant difference between 0 and -0 in the computational sense.


Here are some relevant resources, some of which have been brought up already. I've included them for the sake of completeness:

  • Wikipedia article on signed zero
  • "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (See Signed Zero section)
  • (PDF) "Much Ado About Nothing's Sign Bit" - an interesting paper by W. Kahan.

The canonical reference for the usefulness of signed zeros in floating-point is Kahan's paper "Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit" (and some of his talks on the subject).

The short version is that in reasonably common engineering applications, the sign information that is preserved by having signed zero is necessary to get correct solutions from numerical methods. The sign of zero has little meaning for most real operations, but when complex-valued functions are considered, or conformal mappings are used, the sign of zero may suddenly become quite critical.
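For a concrete taste of this in Java, Math.atan2 sits right on such a branch cut: the sign of a zero first argument decides which side of the negative real axis you land on. Likewise, Math.sqrt preserves the sign of zero, as IEEE 754 requires:

System.out.println(Math.atan2(0.0, -1.0));   // 3.141592653589793  (+pi)
System.out.println(Math.atan2(-0.0, -1.0));  // -3.141592653589793 (-pi)
System.out.println(Math.sqrt(-0.0));         // -0.0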

It's also worth noting that the original (1985) IEEE-754 committee considered, and rejected, supporting a projective mode for floating-point operations, under which there would only be a single unsigned infinity (+/-0 would be semantically identical in such a mode, so even if there were still two encodings, there would only be a single zero as well).