How can I find solutions for this equation?
Introduction
The problem is to solve an underdetermined system of equations. We will deal with OP's case, in which the number of variables is one greater than the number of equations. Thus the solution set generically consists of one-dimensional sets (curves). The problem may be considered in two parts: (1) how to find the solution curve given a point on it, and (2) how to find all the curves.
With respect to (1), there are three ways to find a solution curve with NDSolve: two ODE methods and one DAE method. One ODE method and the DAE one use projection to construct an accurate solution. The plain ODE method seems to be the slowest and least accurate (see the remark below); on the other hand, it is easier to set up and less likely to fail. The DAE method appears to be the fastest, but currently it is available only at machine precision.
With respect to (2), the problem is somewhat like global optimization: one has to search the domain for possible solutions. We'll present two approaches. One is a simple Cartesian or tensor product of samplings of each coordinate dimension of the domain. The other is to construct an integrand for NIntegrate that has singularities along the solution curves.
In addition, there are two other functions that may be of general interest. One uses a Newton-like method to find a point-solution to an underdetermined system. The other plots the projection of points from n-dimensional space onto 3-dimensional spaces; this and other visualization utilities may be found at the end of this answer.
Remark about the OP's particular problem. The OP's function is built out of linear interpolations. Consequently its derivative has discontinuities. It is not surprising, then, that the ODE method, which integrates the derivative, appears to be worse in this case.
Finding a starting point for a solution curve
We need to find a starting point. One can take the approach of J.M. and minimize the distance to $(2,2,2)$. We'll show other ways further below.
vars = {s1, e1, s2, e2};
{min, vars0} =
NMinimize[{SquaredEuclideanDistance[cxyz[s1, e1, s2, e2], {2, 2, 2}],
385 <= vars <= 745}, vars,
Method -> {"DifferentialEvolution", "RandomSeed" -> 3, "ScalingFactor" -> 0.9}]
(* {2.18926*10^-10, {s1 -> 386.46, e1 -> 422.611, s2 -> 562.054, e2 -> 572.507}} *)
When we integrate the curve along which $\nabla\text{cxyz} \cdot dX = 0$, theoretically $\text{cxyz}$ gets no closer to $(2,2,2)$ than it is at the starting point. The minimum from NMinimize tells us that the error in the function value has a norm of about 10^-5. We can get a better starting point by polishing the solution with FindRoot. We use the null space of the derivative to complete the system, on the assumption that the point p0 is close to a root.
polishRoot[eqs_List, {vars_, p0_}] := With[{f = eqs /. {Equal -> Subtract}},
With[{grad = D[f, {vars}] /. Thread[vars -> p0]},
Module[{res},
Check[
res = FindRoot[Flatten[{f, NullSpace[grad].(vars - p0)}], Transpose@{vars, p0}],
$dbprint[{"f", NullSpace[grad].(vars - p0)}]]; (*$*)
res]
]];
vars1 = polishRoot[cxyz[s1, e1, s2, e2] - 2, {vars, vars /. vars0}]
cxyz[s1, e1, s2, e2] - 2 /. {vars1, vars0}
(*
{s1 -> 386.46, e1 -> 422.611, s2 -> 562.055, e2 -> 572.507}
{{-6.21725*10^-15, -8.88178*10^-15, 0.},
{-0.0000114882, 9.18774*10^-6, 1.59147*10^-6}}
*)
Finding a solution curve from a single root
The basic approach is to use NDSolve to construct the solution curve.
For an equation $F(X) = Y$, where $X$ has one more component than $Y$, the inverse image $F^{-1}(Y_0)$ will generically consist of a collection of curves. Each curve will satisfy the differential equation $dY = \nabla F \cdot dX = {\bf 0}$, where $\nabla F$ denotes the derivative of $F$, a $3 \times 4$ matrix in the OP's case, and $dX$ is the vector of the differentials of the variables.
There are two ways to proceed: with an ODE, with or without the "Projection" method, or with a DAE. Usually the DAE approach is fast and has good accuracy at machine precision; in Mathematica it is implemented only at machine precision. For higher precision, one can try an ODE approach. In all cases, since the system is underdetermined, we will need another equation; see each method for possibilities and restrictions.
(1) For the ODE methods, one choice for the extra equation is to set the speed of one of the variables to 1; this will work as long as $dx \ne 0$ along the integral curves, where $x$ is the chosen variable. (Another choice would be unit vector speed, but the unit-speed equation is quadratic; in the present case, Mathematica will not solve the system with this approach. Luckily, setting the speed to one works in the OP's case. Clearly it will not work in all cases.)
The null space of the derivative matrix at a nonsingular point will be spanned by a vector tangent to the solution curve. We set the speed of the variable corresponding to the largest component of this vector to 1 as the fourth equation to complete our ODE system. We also create events to stop integration when the solution curve leaves the domain. The default integration method used by NDSolve does a pretty good job, with a relative error of 10^-6 to nearly 10^-5.
(2) The "Projection"
method gets the error down to around 10^-14
, with a few sporadic spikes up to 10^-6
. The "Projection"
method with `the submethod settings below is more accurate and faster than the default.
(3) In the DAE approach, a solution curve is constructed directly from the equation $F(X) = Y$. A differential equation is used to drive the integration along the curve, and typically one integrates a parametrization by arc length. The derivatives of the variables can be initialized with a unit vector tangent to the solution curve. NullSpace seems to return unit vectors on machine-real input, but a one-time call to Normalize is inexpensive. This approach also has the advantage of working if one of the variables reverses direction (i.e. if a component of $dX$ is zero).
varsOt = Through[vars[t]];
grad0 = D[cxyz[s1, e1, s2, e2], {vars}] /. v : (Alternatives @@ vars) :> v[t];
ode0 = {grad0.D[varsOt, t] == {0, 0, 0},
D[ (* select variable for 4th DE *)
vars[[Position[#, Max@#][[1, 1]] &@
Abs@First@NullSpace[grad0 /. (v_)[t] :> v /. vars1]]][t],
t] == 1,
Through[vars[0]] == (vars /. vars1) (* ICs *)
};
ClearAll[domainEvt];
domainEvt[dom_][v_] :=
With[{start = dom[[1]],
stop = dom[[-1]]}, {WhenEvent[v[t] == stop, "StopIntegration"],
WhenEvent[v[t] == start, "StopIntegration"]}
];
sys0 = {ode0, domainEvt[Image`ColorOperationsDump`$wavelengths] /@ vars}; (*$*)
sysDAE0 = {cxyz[s1, e1, s2, e2] == {2, 2, 2} /. Thread[vars -> varsOt],
#.# &@D[varsOt, t] == 1,
varsOt == (vars /. vars1) /. t -> 0,
D[varsOt, t] == Normalize@First@NullSpace[grad0 /. (v_)[t] :> v /. vars1] /. t -> 0,
domainEvt[Image`ColorOperationsDump`$wavelengths] /@ vars}; (*$*)
The three methods:
{sol} = NDSolve[sys0, vars, {t, -200, 200}]; // AbsoluteTiming
(* {1.15585, Null} *)
{solp} = NDSolve[sys0, vars, {t, -200, 200},
Method -> {"Projection", "Invariants" -> cxyz @@ varsOt,
Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4,
"StiffnessTest" -> False}}]; // AbsoluteTiming
(* {0.886951, Null} *)
{solDAE} = NDSolve[sysDAE0, vars, {t, -500, 500}]; // AbsoluteTiming
(* {0.339732, Null} *)
Error plots. The errors of the three coordinates relative to {2, 2, 2} for each method. The ODE method has significantly worse accuracy on the OP's problem. (The error is shown at two different scales, one that shows the typical error and one that shows the rest, usually a few large spikes.)
Block[{foo},
 foo[s_, l_, r_] := With[{p = solerrorplot[s]},
   {Show[p, PlotLabel -> l, PlotRange -> r],
    Show[p, PlotRange -> 1*^-6 {-1, 1}]}];
 GraphicsGrid[
  Transpose[foo @@@ {{sol, "ODE", Automatic},
     {solp, "Projection", 2.5*^-14 {-1, 1}},
     {solDAE, "DAE", 2.5*^-14 {-1, 1}}}],
  ImageSize -> 600]]
Finding all solution curves (probably)
One way to proceed is to take sample points from throughout the domain. Then see if Newton's method quickly finds a root starting at each sample point. After identifying the roots, integrate the solution curve passing through the root and remove any other roots found along the path. This process is iterated until no roots are left. One may hope to find all solutions in this manner for a well-behaved function. This is somewhat like the use of ContourPlot in Getting an InterpolatingFunction from a ContourPlot, but in higher dimensions.
For the initial sampling, we will show two methods: one subdivides each dimension of the domain to yield a grid, and the other uses NIntegrate.
One might possibly use MeshRegion or FEM utilities to mesh the domain and obtain sample points, in cases where the domain is two- or three-dimensional. One might even be able to use a MeshRefinementFunction to adapt the density of the meshing, if one could detect when multiple solution curves might be present in a cell. (Maybe I'm just dreaming, though.)
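To make the meshing idea concrete, here is a minimal sketch on a hypothetical two-variable analogue (the region and cell size are placeholders, not tuned to any particular problem):
reg2D = Rectangle[{385, 385}, {745, 745}];
mesh2D = DiscretizeRegion[reg2D, MaxCellMeasure -> 100];
sampling2D = MeshCoordinates[mesh2D]; (* candidate seeds for getRoots *)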
Functions
General code notes.
At least one improvement is needed to turn the code into good user-level functions: it would be desirable to add checks on intermediate results, emit appropriate error messages, and abort the computation, instead of letting bad input propagate to subsequent functions.
Searching depends on a few parameters, which are coded as global variables; it would be better to make them options, as in the sketch below.
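A hedged sketch of the option pattern (cullPoints and its "Threshold" option are hypothetical, shown only to illustrate how a global like $threshold could become an option):
ClearAll[cullPoints];
Options[cullPoints] = {"Threshold" -> 0.1};
cullPoints[vals_List, OptionsPattern[]] :=
  With[{cut = {1 - #, #}.MinMax[vals] &@OptionValue["Threshold"]},
   Pick[vals, UnitStep[vals - cut], 0]];
cullPoints[{0.01, 0.5, 0.9, 0.05}]
(* {0.01, 0.05} -- keeps the values in the lowest 10% of the range *)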
While I was developing the solution, I inserted a number of $dbprint commands, which can be set to Print inside a Block; I've also obsessively timed component pieces; and there's a PrintTemporary line to monitor the culling of duplicate roots. I left these things in, but they are unnecessary.
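The $dbprint mechanism in miniature: the symbol is simply left undefined, so a call like $dbprint[expr] is inert until a Block gives it a value.
ClearAll[$dbprint]; (*$*)
$dbprint["quiet"]; (* inert; the semicolon suppresses the unevaluated output $*)
Block[{$dbprint = Print}, $dbprint["loud"]] (* prints "loud" *)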
Generally the input style is in vector form: {{x, y, z}, {x0, y0, z0}} for variables {x, y, z} and numeric starting point {x0, y0, z0}, instead of {{x, x0}, {y, y0}, {z, z0}}.
Here is a modification of findPoint, which finds a single solution to an underdetermined system via a pseudo-Newton's method, from Plotting implicitly-defined space curves or How to find ANY solution to an underdetermined system of equations?. It stops when the search leaves the domain indicated by bounds. It does not have the damping and other safety checks of FindRoot.
(* bounds can have the form {x1, x2} or {{x1, y1,...}, {x2, y2,...}} *)
findPoint[eqs_List, {vars_, p0_}, bounds_: {-Infinity, Infinity}] :=
Module[{f0 = {1.}},
With[{f = eqs /. {Equal -> Subtract}},
With[{pseudonewton = With[{df0 = D[f, {vars}]},
# - PseudoInverse[df0 /. Thread[vars -> #]].(f0 = f /. Thread[vars -> #]) &]},
Thread[vars -> NestWhile[pseudonewton, p0,
And @@ Thread[bounds[[1]] < # < bounds[[2]]] && Norm[f0] > 1.*^-12 &, 1, 10]]]]];
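A quick usage sketch of findPoint on a toy underdetermined system, one equation in two unknowns (the circle and starting point are illustrative only):
findPoint[{x^2 + y^2 - 1}, {{x, y}, {0.5, 0.9}}, {-2, 2}]
(* rules {x -> ..., y -> ...} for a nearby point on the unit circle *)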
In getRoots, since the number of function evaluations is a key factor in speed, we cull the sampling points in stages. We first pick the points at which the function is closest to zero (in the lower 10% of the range of values, the proportion being controlled by $threshold). Then we call findPoint and keep the points that are solutions.
(* function parameters that should be options *)
$threshold = 0.1; (* how close scaled f[sample] is to zero $*)
$accGoal = 8; (* criterion for a root: how close f[root] is to zero $*)
getRoots[f_, sampling_, {vars_, reg_}] := Module[{bounds, roots, vals},
If[RegionQ[reg],
bounds = Transpose@RegionBounds@reg,
bounds = reg];
Length[vals = Norm[f /. Thread[vars -> #]] & /@ N@sampling] //
AbsoluteTiming // $dbprint; (*$*)
getRoots`$pts = (* save intermediate result for inspection -- can delete $*)
roots = Pick[sampling, UnitStep[vals - {1 - $threshold, $threshold}.MinMax[vals]],
0];
Length[roots = findPoint[f, {vars, #}, bounds] & /@ roots
] // AbsoluteTiming // $dbprint; (*$*)
Length[roots = Pick[roots, UnitStep[Norm[f] - 10^-$accGoal /. roots], 0] (*$*)
] // AbsoluteTiming // $dbprint; (*$*)
roots
]
Here we construct the differential-equation systems for the "Projection" and DAE methods. They take an initial point vars0, given as a list of rules, as an argument to set up the IVP for the solution curve. Integration stops when the solution exits the domain.
ClearAll[gradient, ode, sysProj, sysDAE, sys, domainEvt];
gradient[f_, vars_] := D[f, {vars}] /. v : (Alternatives @@ vars) :> v[t];
domainEvt[v_, start_, stop_] :=
{WhenEvent[v == stop, "StopIntegration"],
WhenEvent[v == start, "StopIntegration"]};
sysProj[f_, {vars_, vars0_}, t_, dom_] :=
With[{varsOt = Through[vars[t]], grad = gradient[f, vars]},
{Thread[grad.D[varsOt, t] == 0],
D[vars[[
Position[#, Max@#][[1, 1]] &@
Abs@First@NullSpace[grad /. (v_)[t] :> v /. vars0]]][t], t] == 1,
varsOt == (vars /. vars0) /. t -> 0,
domainEvt @@@ Thread[{varsOt, dom[[1]], dom[[2]]}]}
];
sysDAE[f_, {vars_, vars0_}, t_, dom_] :=
With[{varsOt = Through[vars[t]], grad0 = D[f, {vars}] /. vars0},
{Thread[f == 0] /. Thread[vars -> varsOt],
#.# &@D[varsOt, t] == 1,
varsOt == (vars /. vars0) /. t -> 0,
D[varsOt, t] == First@NullSpace[grad0] /. t -> 0,
domainEvt @@@ Thread[{varsOt, dom[[1]], dom[[2]]}]}
];
The function integrateRoots[] takes the list of roots, finds the curve through the first root, and removes the roots through which the curve appears to pass. This is iterated until no roots are left in the list. One has to be careful that the steps along the curve are not too big, or the code might step right past a root. This is accomplished by calculating t as a function of arc length, as illustrated in the sketch just below.
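Here is the arc-length reparametrization in isolation, on a stand-in curve (a parabola) rather than a solution of the system; it is the same NDSolveValue pattern used inside integrateRoots below:
(* reparametrize {u, u^2/2} by arc length: t'[s] == 1/|curve'(t)| *)
tOfS = NDSolveValue[
   {t'[s] == 1/Norm[D[{t[s], t[s]^2/2}, t[s]]], t[0] == 0,
    WhenEvent[t[s] == 2, "StopIntegration"]},
   t, {s, 0, Infinity}];
(* points equally spaced by arc length along the curve *)
{#, #^2/2} & /@ tOfS /@ Range[0., tOfS["Domain"][[1, 2]], 0.2]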
(* function parameters that should be options *)
$stepSize = 0.2; (* Arc-length step along a solution; should be < $rootSeparation *)
$rootSeparation = 0.5; (* The threshold distance for deleting a root $*)
$maxTime = 500; (* Note: {t, -Infinity, Infinity} does not work in NDSolve $*)
ClearAll[integrateRoots];
integrateRoots["rootOptions"] = {Method -> Automatic};
integrateRoots["rootMethodOptions", sysProj][f_, vars_, t_] := {"Projection",
"Invariants" -> (f /. Thread[vars -> Through[vars[t]]]),
Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4, "StiffnessTest" -> False}};
integrateRoots["rootMethodOptions", _][f_, vars_, t_] := {Automatic}; (* sysDAE *)
integrateRoots["arclengthOptions"] = {
Method -> "StiffnessSwitching", PrecisionGoal -> 5, AccuracyGoal -> 4};
Options[integrateRoots] = {
Method -> sysDAE, "rootOptions" -> Automatic, "arclengthOptions" -> Automatic};
integrateRoots[f_, {vars_, roots_}, bounds_, opts : OptionsPattern[]] :=
Module[{sol, invarclength, sys, method, t, vars0, nf, t1, t2},
With[ (* set up options *)
{rootopts = Replace[OptionValue["rootOptions"],
Automatic -> integrateRoots["rootOptions"]],
arclenopts = Replace[OptionValue["arclengthOptions"],
Automatic -> integrateRoots["arclengthOptions"]]},
sys = OptionValue[Method];
method = Replace[
Method /. rootopts,
Automatic -> integrateRoots["rootMethodOptions", sys][f, vars, t]];
Reap[ (* integrate a root and cull the duplicates *)
NestWhile[
(PrintTemporary[Length[#]];
vars0 = First[#];
{sol} = NDSolve[ (* integrate a root *)
sys[f, {vars, vars0}, t, bounds],
vars, {t, -$maxTime, $maxTime},
Method -> method, rootopts];
{{t1, t2}} = First[vars]["Domain"] /. sol;
invarclength = NDSolveValue[ (* for arc-length stepping *)
{t'[s] == 1/Norm[D[Through[vars[t[s]] /. sol], t[s]]], t[0] == t1,
WhenEvent[t[s] == t2, "StopIntegration"]},
t, {s, 0, Infinity},
arclenopts];
Sow[sol, "sols"]; (* save solution *)
nf = Nearest[ (* Pick roots that are not close to sol *)
Transpose[Through[vars[invarclength[s]]] /. sol /.
s -> Range @@ Flatten[{invarclength["Domain"], $stepSize}]] -> Automatic (*$*)
];
Pick[Rest[#], Length /@ nf[vars /. Rest[#], {1, $rootSeparation}], 0] (*$*)
) &,
roots,
Length[#] > 0 &],
"sols"][[2, 1]]
]]
Cartesian/tensor product grid sampling
Tuples will efficiently generate n-D samples from 1-D coordinate samples.
sampling0 = Image`ColorOperationsDump`$wavelengths[[;; ;; 9]]; (*$*)
sampling = Tuples[sampling0, 4];
bounds = Image`ColorOperationsDump`$wavelengths[[{1, -1}]]; (*$*)
vars = {s1, e1, s2, e2};
Block[{$dbprint = Print}, (*$*)
roots = getRoots[cxyz[s1, e1, s2, e2] - 2, sampling, {vars, bounds}]
]; // AbsoluteTiming
(*
{2.67702, 6561} -- output of $dbprint: number of candidate roots $
{1.69157, 1118}
{0.449644, 88}
{4.81956, Null} -- total timing
*)
The function integrateRoots[] reduces the roots to 4 solutions. These are equivalent under the permutations generated by the transpositions {s1, s2} and {e1, e2}, which can be seen to be obvious symmetries of cxyz (spot-checked numerically below).
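A quick numeric spot-check of those symmetries at a random point of the domain (assuming the OP's cxyz is defined):
With[{p = RandomReal[{385, 745}, 4]},
 {cxyz @@ p - cxyz @@ p[[{3, 2, 1, 4}]], (* swap s1 <-> s2 *)
  cxyz @@ p - cxyz @@ p[[{1, 4, 3, 2}]]}] (* swap e1 <-> e2 *)
(* both differences should be {0., 0., 0.} up to rounding *)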
Block[{$dbprint = Print}, (*$*)
Length[
sols = integrateRoots[cxyz[s1, e1, s2, e2] - 2, {vars, roots}, bounds]
] // AbsoluteTiming
]
(* {1.52757, 4} *)
Visualizing the solutions. They are curves in 4-space; the projections parallel to each coordinate axis are shown.
showsolutions[sols, vars,
 PlotRange -> ({#, #, #} &@Image`ColorOperationsDump`$wavelengths[[{1, -1}]])] (*$*)
Error plots. The relative error (with respect to the desired values {2, 2, 2}) of the three coordinates of each of the four solutions (using tensor-product sampling and the DAE method).
With[{p = solerrorplot /@ sols},
 GraphicsGrid[{Show[#, PlotRange -> 2.5*^-14 {-1, 1}] & /@ p,
   Show[#, PlotRange -> 1*^-6 {-1, 1}] & /@ p}, ImageSize -> 600]]
NIntegrate region sampling
One might try to use NIntegrate to produce the sampling. If we set the integrand to
$$\frac{1}{(F(X)-Y_0)\cdot(F(X)-Y_0)}\,,$$
where $F(X)=Y_0$ is the equation to be solved, then the solution curves will be singularities. NIntegrate will probably detect this and recursively subdivide near the singularities. This will be especially true if we use the "LocalAdaptive" strategy, since "GlobalAdaptive" might subdivide the domain only near the worst singularity at first and take more recursive subdivisions to reach the others. Thus we should expect to get good seeds for finding roots of the equation. The drawback, or at least something to beware of, is that this subdivision can generate a lot of sample points, a common problem in high-dimensional settings in any case. For the OP's problem, we set MaxRecursion, PrecisionGoal, and AccuracyGoal to low values so that only the worst subregions will be subdivided, and not subdivided much. This doesn't matter much in the OP's case, because the extra time from having a lot of points is small; it is considerably less than the time it takes to integrate the solution curves.
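A one-dimensional illustration of the idea on a toy integrand (not the OP's function): with "LocalAdaptive", the sample points pile up near the singularity at x = 0.3.
pts1D = Reap[
    Quiet@NIntegrate[1/(x - 0.3)^2, {x, 0, 1},
      Method -> "LocalAdaptive", MaxRecursion -> 3,
      PrecisionGoal -> 1, AccuracyGoal -> 0,
      EvaluationMonitor :> Sow[x]]][[2, 1]];
Histogram[pts1D] (* most samples cluster near x = 0.3 *)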
While NIntegrate can sample in a region, the only method that seems to be available is Automatic. (It seems to be using "GlobalAdaptive" with "MultidimensionalRule", at least on a box region.) To use the "LocalAdaptive" strategy with regions, we can use the old-school Boole together with RegionMember.
(* solution curves are singularities *)
samplingNI[f_, vars_, reg_] :=
With[{bounds = Transpose@RegionBounds@reg},
Reap[
Quiet[
With[{regfn = RegionMember[reg]},
NIntegrate[1/(#.#) &[f]*Boole[regfn[vars]],
Sequence @@ Thread[{vars, bounds[[1]], bounds[[2]]}] // Evaluate,
MaxRecursion -> 1, EvaluationMonitor :> Sow[vars],
Method -> "LocalAdaptive",
PrecisionGoal -> 1, AccuracyGoal -> 0]
],
{NIntegrate::slwcon, NIntegrate::ncvb}]
][[2, 1]]
];
In the call to integrateRoots, we sort the seeds so that the ones near the middle come first. The ones on the boundaries of the box region of the OP's example sometimes caused problems, especially with the non-DAE methods.
{vmin, vmax} = Image`ColorOperationsDump`$wavelengths[[{1, -1}]]; (*$*)
reg = ImplicitRegion @@ {Thread[vmin <= vars <= vmax], vars};
(pts0 = samplingNI[cxyz[s1, e1, s2, e2] - 2, vars, reg]) //
Length // AbsoluteTiming
(* {0.220993, 5529} *)
Block[{$threshold = 0.02, $dbprint = Print},
roots = getRoots[cxyz[s1, e1, s2, e2] - 2, pts0, {vars, reg}];
] // AbsoluteTiming
(*
{2.31638, 5529} -- output of $dbprint: number of candidate roots $
{1.72644, 736}
{0.333221, 92}
{4.3772, Null} -- total timing
*)
Block[{$dbprint = Print}, (*$*)
Length[
solNI = integrateRoots[
cxyz[s1, e1, s2, e2] - 2,
{vars, SortBy[Norm[vars - RegionCentroid[reg] /. #] &]@roots}, {vmin, vmax}]
] // AbsoluteTiming
]
(* {1.53555, 4} *)
Visualizing the solutions. The projections parallel to each coordinate axis. (Same as above.)
showsolutions[solNI, vars,
 PlotRange -> ({#, #, #} &@Image`ColorOperationsDump`$wavelengths[[{1, -1}]])] (*$*)
Error plots. The relative error (with respect to the desired values {2, 2, 2}) of the three coordinates of each of the four solutions (using NIntegrate sampling and the DAE method).
With[{p = solerrorplot /@ solNI},
 GraphicsGrid[{Show[#, PlotRange -> 2.5*^-14 {-1, 1}] & /@ p,
   Show[#, PlotRange -> 1*^-6 {-1, 1}] & /@ p}, ImageSize -> 600]]
Sampling process. The projections parallel to each coordinate axis of each step in the process, from the initial sampling to the roots.
showpoints[pts0, PlotLabel -> "Seeds from NIntegrate"]
showpoints[getRoots`$pts, PlotLabel -> "After picking seeds close to roots"] (*$*)
showpoints[vars /. roots, PlotLabel -> "After findPoint"]
Utilities
Functions for generating the visualizations above. The function listPlotND may be of interest in itself: it projects n-dimensional data onto 3D and calls ListPointPlot3D on the points.
ClearAll[solerrorplot]
solerrorplot[sol_] :=
With[{plot =
Plot[(cxyz[s1, e1, s2, e2] - 2)/2 /.
{v : (Alternatives @@ vars) :> v[t]} /. sol // Evaluate,
Evaluate@Flatten@{t, s1["Domain"] /. sol},
PlotStyle -> Opacity[0.5], PlotRange -> All
]},
Show[plot, plot] (* poor man's compositing *)
]
ClearAll[showsolutions]
showsolutions[sols_, vars_, plotops___] :=
With[{curves = Table[vars /. {v : (Alternatives @@ vars) :> v[t]} /. #,
Evaluate@{t,
Rescale[Range[0., 200.]/200, {0, 1}, First[First[vars]["Domain"] /. #]]}
] & /@ sols},
GraphicsRow[
Table[
Graphics3D[{Thick,
Riffle[ColorData["Rainbow", #/Length@curves] & /@ Range@Length@curves,
Line /@ curves[[All, All, parts]]]},
Axes -> True,
AxesLabel -> (Style[#, Darker[Red, 0.6]] & /@ vars[[parts]]),
PlotLabel -> Row[{"Projection || ", vars[[First@Complement[Range[4], parts]]]},
BaseStyle -> Darker[Red, 0.6]],
plotops
], {parts, Subsets[Range@4, {3}]}]
]
]
ClearAll[showpoints];
showpoints[pts_, opts___] := GraphicsRow[
Table[listPlotND[pts, "Projection" -> parts,
AxesLabel -> (Style[#, Darker[Red, 0.6]] & /@ vars[[parts]]),
PlotLabel -> Row[{"Projection || ", vars[[First@Complement[Range[4], parts]]]},
BaseStyle -> Darker[Red, 0.6]],
PlotRange -> ({#, #, #} &@ Image`ColorOperationsDump`$wavelengths[[{1, -1}]]), (*$*)
BoxRatios -> {1, 1, 1}],
{parts, Subsets[Range@4, {3}]}],
ImageSize -> 600, opts]
(*
listPlotND[data, "Projection" -> {n1, n2, n3}] projects data onto coordinates n1, n2, n3
listPlotND[data, "Projection" -> f] projects data onto f /@ data
*)
ClearAll[listPlotND];
Options[listPlotND] = Join[
{"Projection" -> Automatic, PlotStyle -> Automatic}, Options[ListPointPlot3D]];
listPlotND[data_, o : OptionsPattern[]] :=
Module[{projection, points, res},
Switch[OptionValue["Projection"],
Automatic
, projection = #[[All, {1, 2, 3}]] &,
_?ListQ
, projection = #[[All, OptionValue["Projection"]]] &,
_
, projection = OptionValue["Projection"]
; If[! MatchQ[N@Cases[data, p : {__?NumericQ} :> projection[p], Infinity, 1],
{{_Real, _Real, _Real}}],
projection = $Failed &; res = $Failed]
];
If[MatchQ[Dimensions[data], {_Integer, _Integer}],
points = {projection[data]},
If[MatchQ[Dimensions[data], {_Integer, _Integer, _Integer}],
points = projection /@ data,
res = $Failed (*$*)
]];
If[res =!= $Failed, (*$*)
res = ListPointPlot3D[points,
   FilterRules[{o}, Options[ListPointPlot3D]]]
];
res /; res =!= $Failed];
You have an underdetermined system (more unknowns than equations). One possibility would be to use NMinimize[] on the sum of squares of differences (actually, NArgMin[] suffices, but you'll see why I didn't use it).
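For comparison, the NArgMin[] call would look like this; it returns only the minimizing point, dropping the minimum value that we will want below to judge the quality of the fit:
NArgMin[{SquaredEuclideanDistance[cxyz[s1, e1, s2, e2], {2, 2, 2}],
  385 <= {s1, e1, s2, e2} <= 745}, {s1, e1, s2, e2}]
(* a point {s1, e1, s2, e2} only; no objective value *)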
Most of the methods available to NMinimize[] take a "RandomSeed" option, which can be tweaked to produce different results if the objective function has multiple minima. Let's try this on the OP's system, using differential evolution as the optimization method:
Table[NMinimize[{SquaredEuclideanDistance[cxyz[s1, e1, s2, e2], {2, 2, 2}],
385 <= {s1, e1, s2, e2} <= 745}, {s1, e1, s2, e2},
Method -> {"DifferentialEvolution", "RandomSeed" -> k,
"ScalingFactor" -> 0.9}],
{k, 5}]
(*
{{1.53211*10^-12, {s1 -> 479.195, e1 -> 743.045, s2 -> 635.343, e2 -> 503.826}},
 {2.2567*10^-10, {s1 -> 385.967, e1 -> 572.506, s2 -> 562.054, e2 -> 422.602}},
 {2.18925*10^-10, {s1 -> 386.46, e1 -> 422.611, s2 -> 562.054, e2 -> 572.507}},
 {2.24271*10^-10, {s1 -> 562.053, e1 -> 422.587, s2 -> 385.185, e2 -> 572.506}},
 {6.67845*10^-11, {s1 -> 563.374, e1 -> 448.807, s2 -> 443.187, e2 -> 573.769}}}
*)
Note that five rather different minima were obtained, all resulting in a relatively tiny value of the objective function. Which of these is the one you want/need is now entirely your decision. Using a different "RandomSeed" setting or even a different optimization method will in all likelihood give results different from what you see here.
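For instance, a sketch with a different method (simulated annealing; the seed is arbitrary, and the result will likely be yet another point near the solution set):
NMinimize[{SquaredEuclideanDistance[cxyz[s1, e1, s2, e2], {2, 2, 2}],
   385 <= {s1, e1, s2, e2} <= 745}, {s1, e1, s2, e2},
  Method -> {"SimulatedAnnealing", "RandomSeed" -> 1}]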