Analytical approximation of indefinite integral on a given interval to a given precision
One can construct a Chebyshev series approximation to the integrand over an interval, such as the -5 <= x <= 5 mentioned in the comments, and integrate it to get a series expansion for the integral. It is well known that Chebyshev series representations have numerical advantages over power series. I saw a comment about an NPU-supported method, but I don't know what that is, this being a Mathematica site. Most numerical systems have access to an FFT in one way or another, I think, and, of course, to trigonometric functions. That is all that is really needed.
Some auxiliary functions used here (code below): chebSeries[f, {a, b}, n, precision] computes the Chebyshev expansion of order n for f[x] over a <= x <= b. iCheb[c, {a, b}, k] integrates the Chebyshev series c, plus k. f = chebFunc[c, {a, b}] computes the function represented by the Chebyshev coefficients c = {c0, c1, ...} over the interval {a, b}.
The first step is to compute the coefficients of the integrand as a function of x for a given value of c. They are saved (memoized) for the sake of speed, but that is not essential. An antiderivative of the integrand is computed with iCheb[coeffs[c], {-5, 5}]. Depending on c, one needs a series of order 70 to 90+ to get a theoretical error of less than machine precision, so computing one of order 2^7 = 128 is sufficient for all c. (See Boyd, Chebyshev and Fourier Spectral Methods, Dover, New York, 2001, ch. 2, for a discussion of the convergence theory of Chebyshev series.)
ClearAll[coeffs];
coeffs[c_?NumericQ] := coeffs[c] =
Module[{
cc, (* Chebyshev coefficients *)
pg = 16, (* Precision goal: *)
sum, (* sum of tail = error bound *)
max, (* max coefficient to measure rel. error *)
len}, (* how many terms of the tail to drop *)
cc = chebSeries[Function[x, Exp[-x^2] Erf[x + c]], {-5, 5}, 128];
max = Max@Abs@cc;
sum = 0;
(* trim tail of series of unneeded coefficients (smaller than desired precision) *)
len = LengthWhile[Reverse[cc], (sum += Abs[#1]) < 10^-pg max &];
Drop[cc, -len]
];
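As a quick, hedged check (the value c = 4 is just an illustration of mine), one can see how many coefficients survive the trimming for a particular c:
coeffs[4] // Length
(* number of coefficients retained after trimming the tail below the 10^-16 relative goal *)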
Next we can define the user's sought-after function in terms of the antiderivative:
func[a_, b_, c_] :=
With[{antiderivative = chebFunc[
N@iCheb[coeffs[SetPrecision[c, Infinity]], {-5, 5}, 0],
{-5, 5}]},
antiderivative[b] - antiderivative[a]
];
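For example, a hedged usage check (the reference value is the NIntegrate result quoted for a = -4, b = 4, c = 4 further below):
func[-4, 4, 4]
(* should agree with NIntegrate[Exp[-x^2] Erf[x + 4], {x, -4, 4}], about 1.77234, to near machine precision *)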
The following computes the relative and absolute error of func on a hundred random inputs, using a high-precision NIntegrate[] to compute the "true" value.
cmp[a_, b_, c_] := {(#1 - #2)/#2, #1 - #2} &[
func[a, b, c],
NIntegrate[Exp[-x^2] Erf[x + SetPrecision[c, Infinity]], {x, a, b},
WorkingPrecision -> 40]
]
ListLinePlot[
Transpose@RealExponent[cmp @@@ RandomReal[{-5, 5}, {100, 3}]],
PlotRange -> {-20, 0}, PlotLabel -> "Error",
PlotLegends -> {"Rel.", "Abs."}]
The yellow line shows the absolute error is limited to a few ulps. The theoretical bound on the error does not take into account rounding error (in both the coefficients and the evaluation of the series). The lines that drop off the bottom are the result of an error of zero.
Auxiliary functions
(* Chebyshev extreme points *)
chebExtrema::usage =
"chebExtrema[n,precision] returns the Chebyshev extreme points of order n";
chebExtrema[n_, prec_: MachinePrecision] :=
N[Cos[Range[0, n]/n Pi], prec];
(* Chebyshev series approximation to f *)
Clear[chebSeries];
chebSeries::usage =
"chebSeries[f,{a,b},n,precision] computes the Chebyshev expansion \
of order n for f[x] over a <= x <= b.";
chebSeries[f_, {a_, b_}, n_, prec_: MachinePrecision] :=
Module[{x, y, cc},
x = Rescale[chebExtrema[n, prec], {-1, 1}, {a, b}];
y = f /@ x; (* function values at Chebyshev points *)
cc = Sqrt[2/n] FourierDCT[y, 1]; (* get coeffs from values *)
cc[[{1, -1}]] /= 2; (* adjust first & last coeffs *)
cc
];
(* Integrate a Chebyshev series -- cf. Clenshaw-Norton, Comp.J., 1963, p89, eq. (12) *)
Clear[iCheb];
iCheb::usage = "iCheb[c, {a, b}, k] integrates the Chebyshev series c, plus k";
iCheb[c0_, {a_, b_}, k_: 0] := Module[{c, i, i0},
c[1] = 2 First[c0];
c[n_] /; 1 < n <= Length[c0] := c0[[n]];
c[_] := 0;
i = 1/2 (b - a) Table[(c[n - 1] - c[n + 1])/(2 (n - 1)), {n, 2, Length[c0] + 1}];
i0 = i[[2 ;; All ;; 2]];
Prepend[i, k - Sum[(-1)^n*i0[[n]], {n, Length[i0]}]]]
(* chebFunc[c,...] computes the function represented by a Chebyshev series *)
chebFunc::usage =
"f = chebFunc[c,{a,b}], c = {c0,c1,...} Chebyshev coefficients, over the interval {a,b}
y = chebFunc[c,{a,b}][x] evaluates the function";
chebFunc[c_, dom_][x_] := chebFunc[c, dom, x];
chebFunc[c_?VectorQ, {a_, b_}, x_] :=
Cos[Range[0, Length[c] - 1] ArcCos[(2 x - (a + b))/(b - a)]].c;
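As a quick, hedged sanity check of these helpers (not part of the original code): differencing the integrated Chebyshev series of Cos at the endpoints should reproduce its definite integral over the interval, whatever the constant of integration:
(* F[b] - F[a] from the integrated series vs. the exact integral of Cos over {-5, 5} *)
With[{F = chebFunc[iCheb[chebSeries[Cos, {-5, 5}, 64], {-5, 5}], {-5, 5}]},
 F[5.] - F[-5.] - (Sin[5.] - Sin[-5.])
 ]
(* should be around machine epsilon *)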
Update: Comparison of Chebyshev and power series
Perhaps it would be worth illustrating the difference between power series and Chebyshev series approximations for those who are not familiar with it. (One should become familiar with it, for Chebyshev expansions are to functions what decimal expansions are to numbers.)
Key differences:
- Symbolic series expansion of the function Erf[x + c] grows extremely fast and takes much longer to evaluate than the DCT-I used to compute the Chebyshev coefficients. Attempting a degree 40 expansion hung the computer and I had to kill the kernel.
- Aside from not being able to compute the series expansion to arbitrary order, it is probably impossible to get convergence for a fixed precision due to rounding error. At machine precision, you cannot even get 2 digits throughout the interval {c, -5, 5}, for a = -4, b = 4, up to order 25. OTOH, the Chebyshev series has an exponential order of convergence and can nearly achieve machine precision with machine-precision coefficients.
- It is fairly easy to figure out when you have enough terms of a Chebyshev series $\sum a_j T_j$, because the error is bounded by the tail $\sum |a_j|$ and the $a_n \rightarrow 0$ roughly geometrically on average (see the short illustration below).
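Here is a hedged illustration of that tail bound, using chebSeries from above (the cutoff after 80 terms and the value c = 4 are arbitrary choices for the example):
(* the sum of the dropped tail coefficients bounds the truncation error, since Abs[ChebyshevT[j, t]] <= 1 on [-1, 1] *)
With[{cc = chebSeries[Function[x, Exp[-x^2] Erf[x + 4]], {-5, 5}, 128]},
 Total@Abs@cc[[81 ;;]]
 ]
(* bound on the error of keeping only the first 80 terms, for c = 4 *)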
If you don't have fast trigonometric functions, then instead of the code in chebFunc above, you can use Clenshaw's algorithm (see chebeval) to evaluate the series.
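A minimal sketch of such an evaluator (the name clenshawEval is mine, not from the linked code):
(* evaluate the Chebyshev series c = {c0, c1, ...} at x in {a, b} via Clenshaw's recurrence; no trig calls needed *)
clenshawEval[c_?VectorQ, {a_, b_}, x_] := Module[{t, b1 = 0, b2 = 0},
  t = (2 x - (a + b))/(b - a); (* rescale x to [-1, 1] *)
  Do[{b1, b2} = {2 t b1 - b2 + c[[k]], b1}, {k, Length[c], 2, -1}];
  t b1 - b2 + First[c]
  ];
It should agree with chebFunc[c, {a, b}, x] up to rounding.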
Here's another implementation of Mariusz's power series idea. I speed up the integration with a "power rule" int[{n}] for
$$ \int \exp\left(-x^2\right) x^n \; dx\,.$$
Of course, it turned out that Series was the bigger bottleneck.
ClearAll[sol, int];
int[{0}] = Integrate[Exp[-x^2], x, Assumptions -> x ∈ Reals];
int[{n_}] = Integrate[Exp[-t^2] t^n, {t, 0, x},
Assumptions -> n > 0 && n ∈ Integers && t ∈ Reals];
$seriesCoefficientPart = 3;
sol[n_] := sol[n] = Total@MapIndexed[
First@Differences[#1 int[#2 - 1] /. {{x -> a}, {x -> b}}] &,
Series[Erf[x + c], {x, 0, n}][[$seriesCoefficientPart]]
];
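For instance, a hedged check of the power rule (Mathematica may present an equivalent but differently arranged form, possibly via Gamma functions):
int[{2}]
(* antiderivative of Exp[-t^2] t^2 from 0 to x; for x > 0 this is equivalent to Sqrt[Pi] Erf[x]/4 - x Exp[-x^2]/2 *)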
(* Times *)
First@*AbsoluteTiming@*sol /@ {2, 10, 20, 22, 23, 24, 25}
(* {0.013267, 0.071065, 2.01908, 7.6296, 23.4752, 33.1553, 48.3542} *)
(* Sizes *)
LeafCount /@ sol /@ {2, 10, 20, 22, 23, 24, 25}
(* {163, 693, 2413, 2765, 1100459, 1779935, 2879267} *)
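The order-limited func[a, b, c, n] used in the timing and comparison plots below is not spelled out above; a hypothetical sketch, assuming it simply truncates the memoized coefficient list to order n before integrating, might look like this:
(* hypothetical order-limited variant of func; the truncation via Take is my assumption, not the original definition *)
(* the NumericQ guards make it wait for numeric arguments, e.g. inside Plot *)
func[a_?NumericQ, b_?NumericQ, c_?NumericQ, n_Integer] :=
  With[{antiderivative = chebFunc[
      N@iCheb[Take[coeffs[SetPrecision[c, Infinity]], UpTo[n + 1]], {-5, 5}, 0],
      {-5, 5}]},
   antiderivative[b] - antiderivative[a]
   ];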
Chebyshev speed for c = 4, per evaluation of c:
First@AbsoluteTiming@func[a, b, 4, #] & /@ {2, 10, 20, 25}
(* {0.002047, 0.000184, 0.000248, 0.000276} *)
To illustrate the issue with power series, the graphics below show the error in approximating Erf[x + c] by its Taylor series (times Exp[-x^2]) for c = -4, -2, 0, 2, 4 and various orders. It does pretty well for Abs[x] < 1 as the order increases, but it gets worse for Abs[x] > 4.
GraphicsRow[
Table[
Plot[
Evaluate@Table[
Exp[-x^2] (Erf[x + c] - Normal@Series[Erf[x + c], {x, 0, n}]) //
RealExponent,
{c, -4, 4, 2}],
{x, -5, 5},
PlotRange -> {-18, 0}, Frame -> True, Axes -> False,
PlotLabel -> Row[{"Order ", n}], AspectRatio -> 1,
FrameLabel -> {"x", "Log error"}], {n, {2, 10, 20, 25}}],
PlotLabel -> "Error in approximating integrand by power series"]
The two plots below compare the absolute error of approximating by power series and by Chebyshev series. The convergence of the Chebyshev series is remarkable by comparison.
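The reference exact[a, b, c] used in these plots is not defined above; a hedged stand-in mirroring the high-precision NIntegrate from cmp would be:
(* assumed stand-in for the reference value; the original definition of exact is not shown. Slow, but fine as a reference. *)
exact[a_?NumericQ, b_?NumericQ, c_?NumericQ] :=
  NIntegrate[Exp[-x^2] Erf[x + SetPrecision[c, Infinity]], {x, a, b},
   WorkingPrecision -> 40];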
Plot[Evaluate@Table[
sol[n] - exact[a, b, c] /. {a -> -4, b -> 4} // RealExponent,
{n, {2, 10, 20, 25}}],
{c, -5, 5},
PlotRange -> {-18, 0}, Frame -> True, Axes -> False,
GridLines -> {None, -Range[2, 16, 2]},
PlotLabel -> "Log error for power series of order n",
FrameLabel -> {"c", "Log error"},
PlotLegends -> {2, 10, 20, 25}]
Plot[Evaluate@Table[
func[a, b, c, n] - exact[a, b, c] /. {a -> -4, b -> 4} // RealExponent,
{n, {2, 10, 20, 30, 40, 50, 60}}],
{c, -5, 5},
PlotRange -> {-18, 0}, Frame -> True, Axes -> False,
GridLines -> {None, -Range[2, 16, 2]},
PlotLabel -> "Log error for Chebyshev series of order n",
FrameLabel -> {"c", "Log error"},
PlotLegends -> {2, 10, 20, 30, 40, 50, 60}]
Determining the order of the approximation:
trim[cc_, eps_] := Module[{sum, max, len},
max = Max@Abs@cc;
sum = 0;
len = LengthWhile[Reverse[cc], (sum += Abs[#1]) < eps max &];
Drop[cc, -len]
]
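Outside the Manipulate below, the same order estimate can be read off directly; a hedged example for c = 4 and a 10^-16 accuracy goal:
Length@trim[
  iCheb[chebSeries[Exp[-#^2] Erf[# + 4] &, {-5, 5}, 128, 32], {-5, 5}],
  10^-16]
(* number of retained terms, i.e. the order reported by the Manipulate, for c = 4 and 16-digit accuracy *)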
Manipulate[
With[{cc = iCheb[chebSeries[Exp[-#^2] Erf[# + c] &, {-5, 5}, 128, 32], {-5, 5}]},
With[{order = Length@trim[cc, 10^-accuracy]},
ListPlot[
RealExponent@cc,
PlotLabel -> Column[{
Row[{"Chebyshev coefficients a[n] for ",
HoldForm["c"] -> Chop[N[c, {2, 1.5}], 0.05], ", "}],
Row[{"For accuracy ", SetPrecision[10^-accuracy, 2], " use order ", order}]
}, Alignment -> Center],
Frame -> True, FrameLabel -> {"n", "exponent of a[n]"},
GridLines -> {{order}, {-accuracy}}, PlotRange -> {-31, 1}]
]],
{{c, 4}, -5, 5, 1/10}, {{accuracy, 16.}, 2, 28}]
Addendum: Failed ideas - Maybe someone can make them work...
The Chebyshev series of Exp[-x^2] can be computed over {-x0, x0} exactly in terms of modified Bessel functions BesselI[]. Therefore, so can the series for Erf[x]. I was seduced into trying to come up with a way to compute the OP's function in this way, but the +c in Erf[x + c] was too ornery.
One thing that would be needed is a way to write ChebyshevT[n, x + c] as a Chebyshev series in ChebyshevT[n, x]. The coefficients would be polynomials in c (with integer coefficients), which themselves could be represented as Chebyshev expansions. This can be done, in fact, but it's a bit cumbersome and slow. Further, the Chebyshev coefficients for n = 64 get bigger than 2^100, and I worried about numerical stability. For the moment, I have given up without testing it. The way above seems superior in simplicity, as well as (probably) in speed and numerics.
Maybe so:
Expand the function Erf[x + c] with Series, e.g. for order 2.
n = 2;
func = Normal@Series[Erf[x + c], {x, 0, n}]
$-\frac{2 c e^{-c^2} x^2}{\sqrt{\pi }}+\frac{2 e^{-c^2} x}{\sqrt{\pi }}+\text{erf}(c)$
sol = Simplify@Integrate[Exp[-x^2]*func, {x, a, b}]
$\frac{-c e^{-c^2} \left(2 e^{-a^2} a-\sqrt{\pi } \text{erf}(a)-2 b e^{-b^2}+\sqrt{\pi } \text{erf}(b)\right)+2 e^{-c^2} \left(e^{-a^2}-e^{-b^2}\right)+\pi \text{erf}(c) (\text{erf}(b)-\text{erf}(a))}{2 \sqrt{\pi }}$
Check numerics:
a = -4;
b = 4;
c = 4;
sol2 = NIntegrate[Exp[-x^2]*Erf[x + c], {x, a, b}]
(*1.77234*)
sol // N
(*1.77245*)
From the Documentation Center, Precision[x] is -Log[10, dx/x].
-Log[10, Abs[(sol - sol2)/sol2]]
(*4.20019*)
I have 4 significant digits.
From the OP's question:
Can Mathematica predict for the given function and interval how many terms it needs in the Series for the accuracy ϵ on that interval?
Yes, I use NonlinearModelFit. Generate data for orders n from 1 to 20:
data = Table[{n,
func = Normal@Series[Erf[x + c], {x, 0, n}];
sol = Simplify@Integrate[Exp[-x^2]*func, {x, a, b}];
a = -4;(*Interval (a,b)*)
b = 4;
c = 4;
sol2 = NIntegrate[Exp[-x^2]*Erf[x + c], {x, a, b}];
sol1 = (sol /. c -> 4 /. a -> -4 /. b -> 4) // N;
-Log[10, Abs[(sol1 - sol2)/sol2]]}, {n, 1, 20}]
(*{{1, 4.19844}, {2, 4.20019}, {3, 4.20019}, {4, 4.21306}, {5,
4.21306}, {6, 4.27069}, {7, 4.27069}, {8, 4.46324}, {9,
4.46324}, {10, 5.23787}, {11, 5.23787}, {12, 4.86359}, {13,
4.86359}, {14, 5.124}, {15, 5.124}, {16, 5.10868}, {17,
5.10868}, {18, 5.37338}, {19, 5.37338}, {20, 5.20063}}*)
nlm = NonlinearModelFit[data, a1 + a2*Exp[a3*n + a4], {a1, a2, a3, a4}, n]
Normal@nlm
Model:
$1.15801 e^{0.00463001 n+2.53736}-10.621$
Show[Plot[Normal@nlm, {n, 0, 20}], ListPlot[data],
AxesOrigin -> {0, 0}, PlotRange -> {Automatic, {0, 5.5}},
AxesLabel -> {n, "significant digits"}]
For n = 20 I have:
nlm[20]
(*5.44438*)
5 significant digits.
For n = 200 I have:
nlm[200]
(*26.3475*)
26 significant digits.
You may change the model in NonlinearModelFit. Maybe my model is not the best.
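For instance, a hedged alternative using a simple straight-line model (whether it extrapolates any better than the exponential model is not checked here):
(* alternative model sketch: linear fit of significant digits vs. order n, using the data computed above *)
lm = LinearModelFit[data, {1, n}, n];
Normal[lm]
lm[200] (* extrapolated significant digits at order 200 under this linear model *)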