Python vs Julia autocorrelation
Just to expand a bit on the answer (adding this as an answer, as it is too long for a comment). In Julia you have the following:
julia> t = collect(range(0, stop=10, length=10))
10-element Array{Float64,1}:
0.0
1.1111111111111112
2.2222222222222223
3.3333333333333335
4.444444444444445
5.555555555555555
6.666666666666667
7.777777777777778
8.88888888888889
10.0
julia> t .- [10*i / 9 for i in 0:9]
10-element Array{Float64,1}:
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
while in Python:
>>> t = np.linspace(0,10,10)
>>> t - [10*i/9 for i in range(10)]
array([0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 8.8817842e-16,
0.0000000e+00, 0.0000000e+00])
and you see that the 8th number in Python is an inaccurate approximation of 70/9, while in Julia in this case you get the sequence of closest Float64 approximations of 10*i/9.
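You can reproduce this discrepancy in plain Python, without NumPy. A minimal sketch (assuming, as NumPy's linspace does here, that the entry at index 7 is computed as 7*step with step = 10/9, while the comprehension rounds the exact ratio 70/9 once):

# linspace-style: round 10/9 to Float64 first, then multiply by the index
# comprehension-style: a single correctly rounded division
print(7 * (10 / 9) == 70 / 9)  # False
print(7 * (10 / 9) - 70 / 9)   # 8.881784197001252e-16, i.e. exactly 2**-50

The two computations land on adjacent Float64 values, which is exactly the 8.8817842e-16 residual seen above.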
So it would seem that because the original sequences differ, the rest follows from what @Jakob Nissen commented.
However, things are not that simple, as the exp functions in Julia and Python differ a bit in what they produce. See Python:
>>> from math import exp
>>> from mpmath import mp
>>> mp.dps = 1000
>>> float(mp.exp((20/3)**2) - exp((20/3)**2))
-1957.096392544307
while in Julia:
julia> setprecision(1000)
1000
julia> Float64(exp(big((20/3)^2)) - exp((20/3)^2))
2138.903607455693
julia> Float64(exp(big((20/3)^2)) - nextfloat(exp((20/3)^2)))
-1957.096392544307
(you can check that (20/3)^2 is the same Float64 both in Julia and Python).
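If you want to perform that check, comparing the raw IEEE 754 bit patterns is the most direct way. A minimal sketch (the integer printed on the Python side should match what reinterpret(Int, (20/3)^2) returns in Julia):

import struct

# raw 64-bit pattern of (20/3)**2; bit-for-bit equality across the two
# languages means the printed integers are identical
print(struct.unpack('>q', struct.pack('>d', (20/3)**2))[0])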
So in this case with exp, Python is slightly more accurate than Julia. Therefore even fixing t (which is easy by using a comprehension in Python instead of linspace) will not make the ACFs equal.
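For reference, the comprehension-based construction of t would look like this (a sketch; per the Julia output above, these correctly rounded divisions coincide bit for bit with Julia's range):

import numpy as np

# build each entry by a single correctly rounded division instead of
# index*step, matching Julia's range(0, stop=10, length=10)
t = np.array([10 * i / 9 for i in range(10)])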
All in all, the conclusion is what @Jakob Nissen commented: for such large values the results will be strongly influenced by numerical inaccuracies.
This is because your test_data is different:
Python:
array([ 0.84147098, -0.29102733, 0.96323736, 0.75441021, -0.37291918,
0.85600145, 0.89676529, -0.34006519, -0.75811102, -0.99910501])
Julia:
[0.8414709848078965, -0.2910273263243299, 0.963237364649543, 0.7544102058854344,
-0.3729191776326039, 0.8560014512776061, 0.9841238290665676, 0.1665709194875013,
-0.7581110212957692, -0.9991050130774393]
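To locate the disagreeing entries, you can compare the two printouts programmatically; a sketch using the values above:

import numpy as np

py = np.array([0.84147098, -0.29102733, 0.96323736, 0.75441021, -0.37291918,
               0.85600145, 0.89676529, -0.34006519, -0.75811102, -0.99910501])
jl = np.array([0.8414709848078965, -0.2910273263243299, 0.963237364649543,
               0.7544102058854344, -0.3729191776326039, 0.8560014512776061,
               0.9841238290665676, 0.1665709194875013, -0.7581110212957692,
               -0.9991050130774393])
# indices where the arrays differ by more than printout rounding
print(np.nonzero(~np.isclose(py, jl))[0])  # [6 7] -- the 7th and 8th entries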
This happens because you are taking sin of enormous numbers. For example, with the last number in t being 10, exp(10^2) is ~2.7*10^43. At that magnitude, the spacing between adjacent Float64 values is about 5*10^27, vastly more than sin's period of 2*pi. So if even the least significant bit is different between Python and Julia, the sin value will be way off.
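You can see this sensitivity directly; a minimal sketch (math.nextafter requires Python 3.9+):

import math

x = math.exp(10.0 ** 2)                 # ~2.7e43
print(math.nextafter(x, math.inf) - x)  # ~5e27: gap to the next Float64
# that gap spans ~8e26 full periods of sin, so the sines of two
# neighbouring floats are essentially unrelated
print(math.sin(x), math.sin(math.nextafter(x, math.inf)))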
In fact, we can inspect the underlying binary values of the initial array t. For example, they differ in the third-to-last value:
Julia:
julia> reinterpret(Int, range(0, stop=10, length=10)[end-2])
4620443017702830535
Python:
>>> import struct
>>> s = struct.pack('>d', np.linspace(0,10,10)[-3])
>>> struct.unpack('>q', s)[0]
4620443017702830536
We can indeed see that they disagree by exactly one unit in the last place. And if we use Julia to take the sin of the value obtained by Python:
julia> sin(exp(reinterpret(Float64, 4620443017702830536)^2))
-0.3400651855865199
We get the same value Python does.
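To round things off, the same bit-level comparison can be run over the whole array from Python alone; a sketch consolidating the steps above:

import struct
import numpy as np

# ulp distance between the linspace and comprehension constructions;
# prints 0 for every entry except the 8th, which is off by one ulp
for a, b in zip(np.linspace(0, 10, 10), [10 * i / 9 for i in range(10)]):
    bits_a = struct.unpack('>q', struct.pack('>d', a))[0]
    bits_b = struct.unpack('>q', struct.pack('>d', b))[0]
    print(bits_a - bits_b)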