Definition of $MachineEpsilon
I suppose this is due to rounding: as long as the value added to 1 is smaller than half of $MachineEpsilon, the result is 0; otherwise it rounds up to $MachineEpsilon.
Given x*i + 1 - 1 with x = 0.0000000000000001: for i < 0.5*10^16*$MachineEpsilon (about 1.1102) I get zero, while for i > 0.5*10^16*$MachineEpsilon I get $MachineEpsilon.
And regarding your second question: on my laptop
$MachineEpsilon = 2.22045×10^-16
It might be a naive way to show it, but here's an illustration (with x = 0.0000000000000001 as in the question):
DiscretePlot[{(x*i) + 1 - 1}, {i, 1.1101, 1.11035, 0.000001}]
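The same crossover can be checked outside Mathematica, since Python uses the same IEEE binary64 arithmetic; this is just a sketch, with the threshold i ≈ eps/(2·10^-16) ≈ 1.1102 computed from the values above:

```python
import sys

eps = sys.float_info.epsilon  # 2**-52, the same value as $MachineEpsilon
x = 1e-16

# x*i + 1 - 1 flips from 0 to eps where x*i crosses eps/2,
# i.e. at i = eps / (2 * x) ≈ 1.1102
print((x * 1.1101) + 1 - 1)         # 0.0  (below the crossover)
print((x * 1.1103) + 1 - 1 == eps)  # True (above the crossover)
```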
On a binary machine with $n$-bit floating point numbers, machine $\epsilon$ is the smallest number such that $1 + \epsilon$ can be represented in $n$ bits. This is $$1.000\cdots01$$ where the length of $1$s and $0$s is $n$; in other words $$1 + \epsilon = 2^0+2^{-(n-1)}\,.$$ For the standard IEEE binary64 floating-point format, $n = 53$ and $$\epsilon = 2^{-52}\,.$$
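As a quick cross-check of these constants (a Python sketch; `sys.float_info` exposes the IEEE binary64 parameters):

```python
import sys

# IEEE binary64 parameters, matching n = 53 and eps = 2^-(n-1) = 2^-52
print(sys.float_info.mant_dig)           # 53
print(sys.float_info.epsilon == 2**-52)  # True
```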
So that's the story for binary machines. When you start to investigate machine $\epsilon$ on such a machine using decimal numbers, you have to consider rounding error and how arithmetic operations are carried out. Consider 1 * 10^-16 and 2 * 10^-16 in binary:
BaseForm[0.0000000000000001, 2]
BaseForm[0.0000000000000002, 2]
(*
1.1100110100101011001 * 2^-54
1.1100110100101011001 * 2^-53
*)
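The fact that the two numbers share the same binary significand and differ only by one in the exponent can also be checked with Python's math.frexp (this works because 1·10^-16 and 2·10^-16 sit at the same relative position in adjacent binades, so they round to the same significand):

```python
import math

# frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= m < 1
m1, e1 = math.frexp(1e-16)  # corresponds to 1.110011010... * 2^-54 above
m2, e2 = math.frexp(2e-16)  # corresponds to 1.110011010... * 2^-53

print(m1 == m2)  # True: same binary significand
print(e2 - e1)   # 1: exponents differ by exactly one
```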
Now an arithmetic operation is carried out by the CPU with extra guard bits, so the result is equal to the exact result of the operation rounded to $n$ bits. So $$\eqalign{ 1. &+\ 2. * 10^{-16} = \cr &1.00000000000000000000000000000000000000000000000000001110011\cdots \cr }$$ rounded to 53 bits on a binary64 machine, which yields, $$1.0000000000000000000000000000000000000000000000000001$$ If we compare them side by side,
1.00000000000000000000000000000000000000000000000000001110011...
1.0000000000000000000000000000000000000000000000000001
we see that the 54th & 55th bits cause the 53rd bit to be rounded up to 1, just as the OP suggested. Hence
x = 0.0000000000000002
% + 1
% - 1
has a nonzero final result, namely 2.^-52.
On the other hand, in $1. + 1. * 10^{-16}$, there is another 0 between the first and second 1s, so that the bits from the 54th onward are 01.... Thus the 53rd bit is rounded to 0 and the result is exactly 1.; subtracting 1 from it then yields zero, as in the OP's first example.
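Both outcomes can be reproduced in Python, which uses the same binary64 arithmetic; a minimal sketch:

```python
import sys

eps = sys.float_info.epsilon  # 2**-52

# 2e-16 is larger than eps/2, so the 53rd bit rounds up:
print((0.0000000000000002 + 1) - 1 == eps)  # True

# 1e-16 is smaller than eps/2, so 1 + 1e-16 rounds back to exactly 1.0:
print((0.0000000000000001 + 1) - 1)  # 0.0
```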
What happens when adding 1 to a number close to MachineEpsilon?
Some of the confusion in the question is caused by the way Mathematica displays machine-precision numbers. For example, Mathematica's "nice" default settings make it appear that there's no difference between 1.0 and 1.0 + $MachineEpsilon.
1.0
1.0 + $MachineEpsilon
The displayed results are the same (both show 1.). However, we can use RealDigits to show that 1.0 + $MachineEpsilon is not the same as 1.0 at machine precision (assuming MachinePrecision is $\frac{53 \log (2)}{\log (10)}$ ≈ 15.9546).
RealDigits[1.0, 2] (* binary digits *)
RealDigits[1.0 + $MachineEpsilon, 2]
The results are:
{{1, 0, <<50 zeros>>, 0}, 1} (* least-significant digit is 0 *)
{{1, 0, <<50 zeros>>, 1}, 1} (* least-significant digit is 1 *)
The least-significant binary digit shows that 1.0 and 1.0 + $MachineEpsilon are in fact different, and demonstrates that MachineEpsilon is "the minimum positive machine-precision number which can be added to 1.0 to give a result distinguishable from 1.0."
The point is that we can't rely on Mathematica's "nice" display of machine-precision numbers. We need to look as closely as possible at the binary result.
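The underlying bit patterns can also be inspected directly outside Mathematica; here is a Python sketch using struct to dump the raw 64-bit encodings:

```python
import struct

def bits(x):
    # Raw IEEE binary64 encoding of x, as 16 hex digits
    return struct.pack('>d', x).hex()

print(bits(1.0))           # 3ff0000000000000
print(bits(1.0 + 2**-52))  # 3ff0000000000001  (only the last bit differs)
```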
Case #1, where x = 0.0000000000000001
Consider the result of adding 1 to 0.0000000000000001.
x = 0.0000000000000001;
RealDigits[x + 1, 2]
The result is {{1, 0, <<50 zeros>>, 0}, 1}, and notice that the least-significant digit is 0.
Case #2, where x = 0.0000000000000002
x = 0.0000000000000002;
RealDigits[x + 1, 2]
This time, the result is {{1, 0, <<50 zeros>>, 1}, 1}. Compare to case #1, where the least-significant digit was 0. This shows how decimal-to-binary conversion, applied to a number twice as large as in case #1, produces a different result, one that is MachineEpsilon larger.
The different least-significant binary digits are the reason why the case #2 result isn't zero, and it is in fact the same as MachineEpsilon.
The case where x = 0.0000000000000003 gives the same result as x = 0.0000000000000002.
x = 0.0000000000000003;
RealDigits[x + 1, 2]
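A quick Python cross-check of the same rounding (3·10^-16 is closer to one ulp above 1 than to two, so it also rounds to 1 + eps):

```python
import sys

eps = sys.float_info.epsilon  # 2**-52

print(1 + 0.0000000000000003 == 1 + 0.0000000000000002)  # True
print((1 + 0.0000000000000003) - 1 == eps)               # True
```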
Another way to look at these results is to use extended-precision numbers instead of machine-precision values. For example, using case #2:
x = 0.0000000000000002;
SetPrecision[x + 1, 22]
The result is 1.000000000000000222045. Notice the digits 222045 from the value of MachineEpsilon.
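Python's decimal module can show the same thing: Decimal(float) converts the binary64 value exactly, so the familiar 2220446... digits of 2^-52 appear in the result.

```python
from decimal import Decimal

# Decimal(float) converts the binary64 value exactly, without rounding
exact = Decimal(0.0000000000000002 + 1)
print(str(exact)[:23])  # 1.000000000000000222044
```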
To summarize, Mathematica displays machine-precision numbers in a "nice" way that doesn't show the full precision of the result of a calculation. We need to use RealDigits or extended precision to see the results of adding 1 to numbers close to MachineEpsilon.
Is MachineEpsilon the same on every computer running Mathematica?
The value of MachineEpsilon is determined by the internal representation of machine-precision floating-point numbers. The value of MachinePrecision is $\frac{53 \log (2)}{\log (10)}$≈15.9546, when a computer uses IEEE Standard 754-1985 double-precision, 64-bit floating-point numbers. The value 53 comes from the number of bits used to store the mantissa (52) plus an assumed, normalized 1 bit to the left of the binary mantissa.
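The constants in that statement are easy to verify numerically; a Python sketch under the same binary64 assumption:

```python
import math
import sys

# MachinePrecision = 53 * log10(2) ≈ 15.9546
print(round(53 * math.log10(2), 4))  # 15.9546

# 53 = 52 stored mantissa bits + 1 implicit leading bit
print(sys.float_info.mant_dig - 1)   # 52 stored bits
```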