Solving a linear system with a badly conditioned matrix

In the example (finally) provided, the matrix is rank deficient. In general not much can be done unless one knows that the right-hand side vector lies in the range space of the matrix. We'll proceed with that assumption in mind.

mat = {{-0.04000323497710545, 0.019998382511436725, 
    0.019998382511436725, 1.0548487421137532*^-14, 
    1.0548487421137532*^-14, 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 
    0.}, {0.01999537041672573, -0.019998382511436725, 0., 0., 0., 
    0.03999375292816244, 2.714439127491079*^-6, 3.123535918772422*^-6,
     6.247071837545091*^-6, 4.09096791281486*^-7, 
    9.629649721936179*^-33, 0., 0., 0., 0., 0.}, {0.01999537041672573,
     0., -0.019998382511436725, 0., 0., 0.03999375292816244, 
    6.51319418358286*^-6, 3.123535918772422*^-6, 0., 
    2.857413572734654*^-6, 9.629649721936179*^-33, 0., 0., 0., 0., 
    0.}, {6.247071826996701*^-6, 0., 0., -1.0548487421137532*^-14, 0.,
     3.4666738998970245*^-33, 0.017377838870198482, 
    0.01999687646408122, 0.03999375292816246, 0.002619037593882741, 
    6.247071837545974*^-6, 0., 0., 0., 0., 
    0.}, {6.247071826996701*^-6, 0., 0., 0., -1.0548487421137532*^-14,
     3.4666738998970245*^-33, 0.041697468145927896, 
    0.01999687646408122, 0., 0.01829316124631577, 
    6.247071837545974*^-6, 0., 0., 0., 0., 0.}, {0., 0., 0., 0., 
    0., -0.07998750585632489, 0., 0., 0., 0., 0., 
    6.247071837544959*^-6, 6.247071837544959*^-6, 
    1.9683065157343793*^-32, 1.0839021819949165*^-32, 0.}, {0., 0., 
    0., 0., 0., 0., -0.059084534649437456, 0., 0., 0., 0., 
    0.0035343599700000386, 0.017377838870198486, 
    2.7144391274860995*^-6, 5.520712365254425*^-7, 0.}, {0., 0., 0., 
    0., 0., 0., 0., -0.03999999999999999, 0., 0., 0., 
    0.019996876464081232, 0.01999687646408122, 3.1235359187790123*^-6,
     3.1235359187717977*^-6, 0.}, {0., 0., 0., 0., 0., 0., 0., 
    0., -0.04, 0., 0., 0., 0.03999375292816246, 
    6.2470718375337105*^-6, 0., 0.}, {0., 0., 0., 0., 0., 0., 0., 0., 
    0., -0.020915465350562525, 0., 0.05645626942224363, 
    0.0026190375938827423, 4.0909679128073285*^-7, 
    8.818536519796756*^-6, 0.}, {0., 0., 0., 0., 0., 0., 0., 0., 0., 
    0., -0.000012494143675091948, 1.698670210949542*^-31, 
    5.827478825726898*^-30, 0.03999375292816246, 0.03999375292816246, 
    0.}, {0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 
    0., -0.07999375292816245, 0., 0., 0., 6.247071837548034*^-6}, {0.,
     0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., -0.07999375292816244,
     0., 0., 6.247071837533809*^-6}, {0., 0., 0., 0., 0., 0., 0., 0., 
    0., 0., 0., 0., 0., -0.04000624707183754, 0., 
    0.03999375292816247}, {0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
     0., 0., 0., -0.04000624707183755, 0.03999375292816246}, {1, 1, 1,
     1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}};
vec = UnitVector[Length[mat], Length[mat]];
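Before going further, we can test the range-space assumption numerically. One quick (tolerance-dependent) check, not in the original post, is to compare the numerical rank of mat with that of mat augmented by vec as an extra column; if the two agree, vec lies in the range space up to the default tolerance.

MatrixRank[mat]
MatrixRank[Join[mat, Transpose[{vec}], 2]]  (* rank of the augmented matrix [mat | vec] *)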

A plausible first step in ill-conditioned cases is to work with the singular value decomposition.

{uu, ww, vv} = SingularValueDecomposition[mat];

We'll see how bad this matrix might be.

Diagonal[ww]

(* Out[325]= {4.00034774077, 0.103393916064, 0.0985619677111, \
0.0951305544638, 0.0796520702375, 0.0669929060285, 0.0548596279493, \
0.0503885069044, 0.0413276293297, 0.0400062470692, 0.0325805230661, \
0.0205858677042, 0.0199983819062, 3.61435763556*10^-6, 
 2.55039684488*10^-6, 0.} *)

One singular value is 0 (so the matrix is rank deficient) and two others are small. But we might be able to simplify the linear algebra problem using this decomposition. Recall that, up to numeric fuzz, we have uu.ww.Transpose[vv] == mat, and moreover the left and right singular vector matrices are unitary (so inverse = transpose). We can therefore transform the problem: find x for which ww.Transpose[vv].x == Transpose[uu].vec. First let y = Transpose[vv].x and solve for y; then set x = vv.y.

y = LinearSolve[ww, Transpose[uu].vec];
x = vv.y

(* Out[329]= {8.44718017756*10^-10, 8.44622899399*10^-10, 
 8.44625619445*10^-10, 0.499999998767, 0.499999998702, \
-8.32682709879*10^-14, 8.59341503419*10^-15, 4.33994857987*10^-15, 
 4.6636787291*10^-15, 3.05599464256*10^-15, -3.16119352917*10^-12, 
 1.77271423994*10^-14, 1.04729755568*10^-14, 5.42562549792*10^-15, 
 5.36238940621*10^-15, 1.98712574173*10^-15} *)

We'll check this.

mat.x - vec

(* Out[332]= {1.3331704095*10^-15, -3.97230076355*10^-15, \
-4.02668630013*10^-15, 4.13664406954*10^-16, 4.84035098563*10^-16, 
 6.66059748146*10^-15, -2.63068456198*10^-16, 3.90350028599*10^-16, 
 2.32340341948*10^-16, 9.64369402065*10^-16, 
 4.709496088*10^-16, -1.4180482355*10^-15, -8.37760205392*10^-16, \
-1.37586298237*10^-16, -1.35056459528*10^-16, 4.4408920985*10^-16} *)

The residual is numeric fuzz, so we have a sound result.

One can use Chop on the result if one believes (seemingly appropriately, in this case) that the solution should be sparse.

xC = Chop[x, 10^(-8)]

(* Out[333]= {0, 0, 0, 0.499999998767, 0.499999998702, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} *)
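As an extra sanity check (not part of the original answer), one can confirm that the chopped vector still gives a tiny residual:

Norm[mat.xC - vec]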

Turning my comment into an answer...

If you have a matrix $A$ for which the solve works well, and another matrix $B$ that is "close" to $A$ but ill conditioned, use $A$ to precondition the problem. So solve...

$$(A^{-1} B) x = A^{-1} b$$

You only need to compute $A^{-1}$ and $A^{-1} b$ once.
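Here is a minimal Wolfram Language sketch of those mechanics. The matrices A, B and the vector b below are made-up stand-ins, not the matrix from the question; the point is only to show where $A^{-1}$ and $A^{-1} b$ enter and that each is formed a single time.

n = 16;
A = N[IdentityMatrix[n]];                    (* stand-in for a well-conditioned matrix *)
B = A + 10^-3 RandomReal[{-1, 1}, {n, n}];   (* a nearby matrix, the one we actually need to solve *)
b = RandomReal[{-1, 1}, n];
Ainv = Inverse[A];                           (* computed once *)
Ainvb = Ainv.b;                              (* computed once *)
x = LinearSolve[Ainv.B, Ainvb];              (* solve the preconditioned system *)
Norm[B.x - b]                                (* residual of the original system *)

For this toy B the preconditioning is of course unnecessary; the example only shows the bookkeeping.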


As Daniel says, your A matrix is rank deficient:

Dimensions[A]
{16, 16}

MatrixRank[A]
15

One way to proceed is to use the PseudoInverse:

s = PseudoInverse[A].b

This returns an answer (with no warnings or errors) though it will not be exact. You can test how good/bad the answer is directly:

Norm[A.s - b]
9.10073*10^-15

If this value is small enough, then you can probably trust your subsequent calculations. If it grows, then you know to take some other action.
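For instance (the threshold below is an arbitrary choice, not something from the original answer), one could make that decision explicit:

If[Norm[A.s - b] <= 10^-10 Norm[b],
 s,        (* accept the pseudoinverse solution *)
 $Failed   (* otherwise fall back to, e.g., the SVD approach above *)
]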