Can I obtain a more precise integration of a highly oscillatory integral?
The following gives a result that agrees in all digits for the j = 100 example. NDSolve seems to be fine if you add the InterpolationOrder -> All option. The "InterpolationPointsSubdivision" preprocessor of NIntegrate seems like the right move, but a simple, straightforward implementation of an integration rule turned out to be much faster and gave a much better result (at the cost of losing the protection of all the checking that NIntegrate would perform).
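For reference, a sketch of what the preprocessor route might look like; it is not what is implemented or timed below, it assumes theCentralTrace (built from the NDSolve solution as in the code below) together with the question's tStart, tEnd, rnorm, j, and wp, and the MaxRecursion setting is a guess:

(* hedged sketch: let NIntegrate split the interval at the interpolation
   nodes of the NDSolve solution instead of summing a rule by hand *)
NIntegrate[Exp[I t]^-j theCentralTrace[t], {t, tStart, tEnd},
  Method -> "InterpolationPointsSubdivision",
  WorkingPrecision -> wp, MaxRecursion -> 20]/(2 Pi rnorm^j)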
getIntegral[j_, wp_, sSize_] :=
Module[{numIndex, difResults, myazsol, theCentralTrace, n1,
finalValue, intResults, nodes, weights, errweights},
PrintTemporary["NDSolve"]; (* for impatient people like me :) *)
myazsol =
First[NDSolve[{wDeriv, w[tStart] == wstart}, w, {t, tStart, tEnd},
MaxSteps -> 20000000, MaxStepSize -> sSize,
WorkingPrecision -> wp, InterpolationOrder -> All]]; (* N.B. *)
theCentralTrace[t_] = Evaluate[Flatten[w[t] /. myazsol]];
PrintTemporary["Integrate"]; (* for impatient people like me :) *)
(* Pick a favorite integration rule of sufficiently high order *)
{nodes, weights, errweights} = NIntegrate`GaussRuleData[7, wp];
n1 = AbsoluteTiming[
Total[ (* totals the integrals over subintervals *)
Block[{t = Rescale[nodes, {0, 1}, #], (* scale nodes to subinterval # = (x_{i-1}, x_{i}) *)
$MinPrecision = wp, $MaxExtraPrecision = wp}, (* keeps working precision at wp during the dot product *)
weights.(Exp[I t]^-j theCentralTrace[t])*(#[[2]] - #[[1]]) (* sum w_j f(t_j) * (x_{i} - x_{i-1}) = integral over (x_{i-1}, x_{i}) *)
] & /@ Partition[Flatten@theCentralTrace["Grid"], 2, 1] (* maps (/@) integration code over the intervals created by Partition[] *)
(* theCentralTrace["Grid"] yields the steps {{x_0}, {x_1},...} *)
]]; (* Partition transforms {x_0, x_1, x_2,...} to {{x_0, x_1}, {x_1, x_2},...} *)
finalValue = 1/(2 \[Pi] rnorm^j) (n1[[2]]);
intResults = actualValue;
difResults = Abs[intResults - finalValue];
{j,
{wp, sSize},
Column[{
N[AccountingForm[finalValue], 20],
N[AccountingForm[intResults], 20]}, Alignment -> Left
],
N[ScientificForm[difResults], 10],
n1[[1]]}];
PrintTemporary@Dynamic@{Clock[Infinity]}; (* for impatient people like me :) *)
getIntegral[100, 70, 1/5000]
If you want to measure the error estimate from an appropriate integration rule, make changes like these:
{nodes, weights, errweights} = NIntegrate`GaussKronrodRuleData[5, 70];
n1 = AbsoluteTiming[
Total[
Block[{t = Rescale[nodes, {0, 1}, #],
$MinPrecision = 70, $MaxExtraPrecision = wp},
{weights, errweights}.(Exp[I t]^-j theCentralTrace[t])*(#[[2]] - #[[1]])
] & /@ Partition[Flatten@theCentralTrace["Grid"], 2, 1],
Method -> "CompensatedSummation"
]];
finalValue = 1/(2 \[Pi] rnorm^j) (n1[[2, 1]]);
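With the embedded Gauss–Kronrod rule, each subinterval contributes a pair {value, error-weight sum}, so n1[[2]] carries both the integral and a raw error estimate. A sketch of extracting the estimate, scaled the same way as the value (assuming rnorm and j as above):

(* sketch: n1[[2, 1]] is the integral, n1[[2, 2]] the summed error estimate *)
errorEstimate = Abs[n1[[2, 2]]]/(2 Pi rnorm^j)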
Update:
Combining this with @Roman's W in place of theCentralTrace, we get a speedy solution (the choice of 3*j nodes is a heuristic; you might want somewhat more):
{nodes, weights, errweights} = NIntegrate`GaussRuleData[3*j, wp];
n1 = AbsoluteTiming[
Block[{t = Rescale[nodes, {0, 1}, {0, 2 Pi}],
$MinPrecision = wp, $MaxExtraPrecision = wp},
weights.(Exp[I t]^-j (W /@ (rnorm Exp[I t])))*(2 Pi)
]];
And then:
getIntegral[100, 60, 1/3000]
A much easier way of solving the equation for $w(z)$ on the correct branch, instead of integrating the differential equation, is
W[z_?NumericQ] := First@MinimalBy[w /. NSolve[
    (-1+z)z^2+w*z(-4+3z)-w^2*z^3(1+9z)+w^3(-2+4z(2+z-z^2))+w^4(6+z^2(-8+z(7+8z)))==0,
    w], Abs, 1] (* of the four roots, keep the one of smallest magnitude *)
Then you can get the $n=20$ solution simply with
With[{n = 20, r = 1/2},
1/(2π r^n) NIntegrate[E^(-I n t) W[r E^(I t)], {t, 0, 2π}]]
-1.43468 - 4.27557*10^-11 I
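The residual imaginary part of order 10^-11 is just quadrature noise (the exact coefficient is real); a sketch of suppressing it:

(* same call, with Chop dropping the spurious imaginary part *)
With[{n = 20, r = 1/2},
 Chop[1/(2π r^n) NIntegrate[E^(-I n t) W[r E^(I t)], {t, 0, 2π}]]]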
The $n=100$ integral is still difficult, and @MichaelE2's solution can probably help.
Are you insisting on doing this integral numerically, or are you just interested in the results? If it's the latter, then you can easily get exact results for all values of $n$ with a series expansion of the Root object R[z] below, which formally describes the solution of the polynomial equation; the integral $\frac{1}{2\pi r^n}\int_0^{2\pi}e^{-int}\,w(re^{it})\,dt$ is just Cauchy's formula for the $n$-th Taylor coefficient of $w(z)$ at $z=0$. It's just a matter of picking the right branch of the root (here, the first branch):
eq[w_, z_] = (-1+z)z^2+w*z(-4+3z)-w^2*z^3(1+9z)+w^3(-2+4z(2+z-z^2))+w^4(6+z^2(-8+z(7+8z)));
R[z_] = Root[eq[#, z] &, 1];
CoefficientList[Series[R[z], {z, 0, 100}], z]
{0, -1/4, 9/128, 85/4096, 131/32768, ...}
Just for $n=20$:
SeriesCoefficient[R[z], {z, 0, 20}]
$$-\frac{56833546863764806539901668529}{39614081257132168796771975168}$$
% // N
-1.43468
Just for $n=100$:
SeriesCoefficient[R[z], {z, 0, 100}]
$$-\frac{21333059674656860988913490423529488435269036857164193142278059654155966204007043838509512097534286860782272619663189664874742312521033748460127156003492355000697}{25573364124188608359478044506465618376692515984711443667838213813251045284411519960025547596296126227741302219746563054759509816764729633229129121792}$$
% // N
-8.34191*10^11
Even for $n=1000$ this works with a bit of patience (not showing the exact result because it's huge):
SeriesCoefficient[R[z], {z, 0, 1000}] // N
-6.7183*10^153
Update: Version 12
With the new Version 12 function AsymptoticSolve we can get the series expansion directly, without going through a Root object and without branch warnings. Assuming that you know you want the real-valued branch that goes through zero,
eq[w_, z_] = (-1+z)z^2+w*z(-4+3z)-w^2*z^3(1+9z)+w^3(-2+4z(2+z-z^2))+w^4(6+z^2(-8+z(7+8z)));
AsymptoticSolve[eq[w, z] == 0, {w, 0}, {z, 0, 10}, Reals]
{{w -> -z/4 + 9 z^2/128 + 85 z^3/4096 + 131 z^4/32768 - 444991 z^5/4194304 + 1642905 z^6/134217728 - 19786821 z^7/1073741824 - 662463103 z^8/17179869184 - 542911401095 z^9/4398046511104 - 4656077458645 z^10/140737488355328}}
The list of series coefficients is
CoefficientList[w /.
First@AsymptoticSolve[eq[w, z] == 0, {w, 0}, {z, 0, 10}, Reals], z]
{0, -1/4, 9/128, 85/4096, 131/32768, -444991/4194304, 1642905/134217728, -19786821/1073741824, -662463103/17179869184, -542911401095/4398046511104, -4656077458645/140737488355328}
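These should be the same numbers as from the Root-based expansion above; a quick consistency check (it should return True, given the matching values shown):

(* compare the first eleven coefficients obtained by the two routes *)
CoefficientList[w /. First@AsymptoticSolve[eq[w, z] == 0, {w, 0}, {z, 0, 10}, Reals], z] ==
 CoefficientList[Series[R[z], {z, 0, 10}], z]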
This is about ten times faster than the previous method using Root & Series.
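A rough way to check that speedup on your own machine (a sketch; expansion order 100 as above, timings will vary):

(* time both routes once at expansion order 100 *)
AbsoluteTiming[Series[R[z], {z, 0, 100}];]
AbsoluteTiming[AsymptoticSolve[eq[w, z] == 0, {w, 0}, {z, 0, 100}, Reals];]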