Find maximum value of interpolation function - obviously wrong result
As described, e.g., in the tutorial Numerical Nonlinear Global Optimization, several optimization methods are available. For your problem "SimulatedAnnealing" seems to work:
NMaximize[{Theta[t], 0 <= t <= tmax}, t, Method -> "SimulatedAnnealing"]
{0.687071, {t -> 48.2449}}
"DifferentialEvolution" will work if the population is of sufficient size:
NMaximize[{Theta[t], 0 <= t <= tmax}, t, Method -> {"DifferentialEvolution", "SearchPoints" -> 30}]
"RandomSearch" will also work if the number of points used to start local searches is large enough, e.g. greater than 13 for your example:
NMaximize[{Theta[t], 0 <= t <= tmax}, t, Method -> {"RandomSearch", "SearchPoints" -> 14}]
Two approaches: finding maxima (1) of an InterpolatingFunction and (2) via NDSolve.
InterpolatingFunction
To find the extrema of an InterpolatingFunction, one should start with the data contained in the function. Let's start with the function itself:
ifn = theta /. sol
Then ifn["Grid"] and ifn["ValuesOnGrid"] contain the abscissas (t) and ordinates (theta[t]) computed by NDSolve. We should start our search for a maximum near the maximum value stored in ifn. The position of that maximum can be found with Ordering, so the best starting point is obtained with
{{t0}} = ifn["Grid"] ~Part~ Ordering[ifn["ValuesOnGrid"], -1]
(* {{48.2466}} *)
Then one can use FindMaximum:
FindMaximum[ifn[t], {t, t0}]
(* FindMaximum::lstol complaint... *)
(* {0.687071, {t -> 48.24493512854019`}} *)
If the complaint is worrisome, one can check the result by plotting it, increasing the working precision, or trying, say, NMaximize. These return the same solution with no warnings:
FindMaximum[ifn[t], {t, t0}, WorkingPrecision -> $MachinePrecision]
NMaximize[{ifn[t], 0 < t < 100}, t, Method -> {"RandomSearch", "InitialPoints" -> {{t0}}}]
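The visual check mentioned above can be sketched as follows; this assumes ifn and t0 are defined as earlier in this answer:

```mathematica
(* Zoom in around the candidate maximum; the curve should peak at the marked point *)
Plot[ifn[t], {t, t0 - 1, t0 + 1},
 Epilog -> {Red, PointSize[Medium], Point[{t0, ifn[t0]}]},
 AxesLabel -> {"t", "theta[t]"}]
```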
NDSolve
An easier approach, if the InterpolatingFunction was created with NDSolve, is to use WhenEvent. One can detect a local maximum with the event theta'[t] < 0, which fires where the derivative becomes negative, and Sow the corresponding values of t and theta[t]. Then one selects the absolute maximum among them with Ordering.
{sol, {localmax}} =
Reap[NDSolveValue[{eqn1[t] == 0, eqn2[t] == 0, r[0] == .9,
r'[0] == 0, theta[0] == 0.001, theta'[0] == 0,
WhenEvent[theta'[t] < 0, Sow[{theta[t], {t1 -> t}}]]},
{r, theta}, {t, 0, tmax}]
];
# ~Part~ First@Ordering[#[[All, 1]], -1] &@ localmax
(* {0.687071, {t1 -> 48.2449}} *)
Index for the last line of code:
localmax : a list of local maxima in the form {{y.yyy, {t1 -> x.xxx}}, ...}
#[[All, 1]] : {y.yyy, ...} -- the list of local maximum values
First@Ordering[#[[All, 1]], -1] : position of the greatest local maximum = global maximum
# ~Part~ ... : extracts the part of localmax that is the global maximum
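As a minimal illustration of the Part/Ordering idiom, here it is applied to made-up data (the numbers below are hypothetical, not from the posted problem):

```mathematica
localmaxExample = {{0.2, {t1 -> 10.}}, {0.7, {t1 -> 48.2}}, {0.5, {t1 -> 80.}}};
localmaxExample ~Part~ First@Ordering[localmaxExample[[All, 1]], -1]
(* {0.7, {t1 -> 48.2}} *)
```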
Re global maximization -- updated
In general, global optimization is difficult, especially when there are many local extrema. Finding all of them can be uncertain and time-consuming, so one cannot be certain that the extremum found is global. Sometimes personal insight into the objective function has to be applied.
In this case there seems to be an issue with precision in the InterpolatingFunction. Starting from the critical points found above, if we examine the values of ifn, we see that they are constant in a neighborhood of each critical point. This probably has something to do with the difficulty of the optimization problem, and almost certainly with the FindMaximum::lstol message.
tfindroot = {t -> 48.24493512854019`}; (* FindRoot *)
tndsolve = {t -> 48.24493514514225`}; (* NDSolve *)
Table[ifn[t (1 + 2^20 $MachineEpsilon dt)] - ifn[t] /. tfindroot, {dt, -2, 2}]
Table[ifn[t (1 + 2^20 $MachineEpsilon dt)] - ifn[t] /. tndsolve, {dt, -2, 2}]
(*
{-3.33067*10^-16, -2.22045*10^-16, 0., 0., 0.}
{-1.11022*10^-16, 0., 0., 0., -1.11022*10^-16}
*)
Table[ifn[t (1 + 2^21 $MachineEpsilon dt)] - ifn[t] /. tfindroot, {dt, -2, 2}]
Table[ifn[t (1 + 2^21 $MachineEpsilon dt)] - ifn[t] /. tndsolve, {dt, -2, 2}]
(*
{-9.99201*10^-16, -3.33067*10^-16, 0., 0., -2.22045*10^-16}
{-5.55112*10^-16, -1.11022*10^-16, 0., -1.11022*10^-16, -5.55112*10^-16}
*)
Interestingly, using arbitrary precision helps NMaximize on the posted problem, but it does not work for all values of the parameter OmegaS. Sample code:
sol = First@NDSolve[
Rationalize@
{eqn1[t] == 0, eqn2[t] == 0, r[0] == 0.9,
r'[0] == 0, theta[0] == 0.001, theta'[0] == 0},
{r, theta}, {t, 0, tmax},
WorkingPrecision -> $MachinePrecision];
ifn = theta /. sol;
NMaximize[{ifn[t], 0 <= t <= tmax}, t, WorkingPrecision -> $MachinePrecision]
(* {0.6870714027852798, {t -> 48.24493489787723}} *)
The NDSolve method is probably the best way to proceed. Theoretically, if there are local maxima that are close in value to the global maximum, then one of them might be returned instead of the global one. But they would have to be very close, within the error of the NDSolve computation. If one wanted further evidence, one could use the "RandomSearch" method of NMaximize with the local maxima (localmax from the NDSolve approach above) as initial points:
NMaximize[ifn[t], t,
Method -> {"RandomSearch", "InitialPoints" -> ({t1} /. localmax[[All, -1]])},
WorkingPrecision -> $MachinePrecision]
(* {0.6870714027850805, {t1 -> 48.24493514549923}} *)