How to get the Gradient and Hessian | SymPy
You could either use the very Pythonic way suggested by Stelios, or use some features recently added to SymPy:
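For context, lamb and the symbols are not defined in this snippet. Here is a minimal setup that reproduces the output below; the least-squares form of lamb is an assumption inferred from the printed derivatives, not taken from the original question:

from sympy import symbols, Function

eta, xi, sigma = symbols('eta xi sigma')
z = list(symbols('z_1:4'))                          # z_1, z_2, z_3
h = [Function('h_%d' % i)(xi, eta) for i in (1, 2, 3)]

# Hypothetical least-squares objective, chosen so that its
# derivatives match the output printed below.
lamb = sum((z_i - h_i)**2 for z_i, h_i in zip(z, h)) / (2 * sigma**2)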
In [14]: from sympy.tensor.array import derive_by_array
In [15]: derive_by_array(lamb, (eta, xi))
Out[15]:
[-(z_1 - h_1(xi, eta))*Derivative(h_1(xi, eta), eta)/sigma**2
 - (z_2 - h_2(xi, eta))*Derivative(h_2(xi, eta), eta)/sigma**2
 - (z_3 - h_3(xi, eta))*Derivative(h_3(xi, eta), eta)/sigma**2,
 -(z_1 - h_1(xi, eta))*Derivative(h_1(xi, eta), xi)/sigma**2
 - (z_2 - h_2(xi, eta))*Derivative(h_2(xi, eta), xi)/sigma**2
 - (z_3 - h_3(xi, eta))*Derivative(h_3(xi, eta), xi)/sigma**2]
Unfortunately, the pretty printer is still missing for N-dim arrays; you can visualize the result by converting it to a list (or, alternatively, by using .tomatrix() on rank-2 arrays):
In [16]: list(derive_by_array(lamb, (eta, xi)))
Out[16] (the pretty-printed output wraps badly in this rendering; reconstructed in math notation, writing $h_i$ for $h_i(\xi, \eta)$):

$$
\left[\; -\sum_{i=1}^{3}\frac{(z_i - h_i)}{\sigma^2}\,\frac{\partial h_i}{\partial \eta}\;,\quad -\sum_{i=1}^{3}\frac{(z_i - h_i)}{\sigma^2}\,\frac{\partial h_i}{\partial \xi}\;\right]
$$
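Wrapping the rank-1 result in a Matrix is another way to get nice printing (a sketch under the assumed setup above; Matrix accepts the list of components):

from sympy import Matrix

grad = Matrix(list(derive_by_array(lamb, (eta, xi))))  # 2x1 column Matrix
grad  # pretty-prints directly in IPython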
For the Hessian, simply apply derive_by_array twice:
In [18]: list(derive_by_array(derive_by_array(lamb, (eta, xi)), (eta, xi)))
Out[18] (reconstructed; the four printed entries, arranged as the 2×2 Hessian, again writing $h_i$ for $h_i(\xi, \eta)$):

$$
\begin{bmatrix}
\sum\limits_{i=1}^{3}\dfrac{1}{\sigma^2}\!\left[\left(\dfrac{\partial h_i}{\partial \eta}\right)^{\!2} - (z_i - h_i)\,\dfrac{\partial^{2} h_i}{\partial \eta^{2}}\right]
&
\sum\limits_{i=1}^{3}\dfrac{1}{\sigma^2}\!\left[\dfrac{\partial h_i}{\partial \eta}\,\dfrac{\partial h_i}{\partial \xi} - (z_i - h_i)\,\dfrac{\partial^{2} h_i}{\partial \xi\,\partial \eta}\right]
\\[1.5ex]
\sum\limits_{i=1}^{3}\dfrac{1}{\sigma^2}\!\left[\dfrac{\partial h_i}{\partial \eta}\,\dfrac{\partial h_i}{\partial \xi} - (z_i - h_i)\,\dfrac{\partial^{2} h_i}{\partial \xi\,\partial \eta}\right]
&
\sum\limits_{i=1}^{3}\dfrac{1}{\sigma^2}\!\left[\left(\dfrac{\partial h_i}{\partial \xi}\right)^{\!2} - (z_i - h_i)\,\dfrac{\partial^{2} h_i}{\partial \xi^{2}}\right]
\end{bmatrix}
$$
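Since the double application of derive_by_array yields a rank-2 array, .tomatrix() applies here and returns an ordinary 2x2 Matrix (same assumed setup as above):

H = derive_by_array(derive_by_array(lamb, (eta, xi)), (eta, xi)).tomatrix()
H  # 2x2 symmetric Matrix with the entries shown above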
There is an answer here which uses the hessian function and a one-liner jacobian function.
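For reference, hessian and Matrix.jacobian are standard SymPy functions; a minimal sketch applying them to the same assumed lamb:

from sympy import Matrix, hessian

grad = Matrix([lamb]).jacobian([eta, xi])  # 1x2 row vector of first derivatives
H = hessian(lamb, (eta, xi))               # 2x2 Hessian matrix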
You can simply compute the gradient vector "manually" (assuming that the variables are ordered as (z1, z2, z3, eta)):
[lamb.diff(x) for x in z+[eta]]
Similarly, for the Hessian matrix:
[[lamb.diff(x).diff(y) for x in z+[eta]] for y in z+[eta]]
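If you prefer SymPy matrices to nested Python lists, the same comprehensions can be wrapped in Matrix (a sketch; z here is the assumed list of symbols z_1, z_2, z_3 from the setup above):

from sympy import Matrix

variables = z + [eta]
grad = Matrix([lamb.diff(x) for x in variables])  # gradient as a column vector
hess = Matrix([[lamb.diff(x).diff(y) for x in variables]
               for y in variables])               # Hessian as a square matrix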