Solving the linear system Ax = b for x

In the general case, use scipy.linalg.solve:

>>> import numpy as np
>>> from scipy.linalg import solve
>>> 
>>> A = np.random.random((3, 3))
>>> b = np.random.random(3)
>>> 
>>> x = solve(A, b)
>>> x
array([ 0.98323512,  0.0205734 ,  0.06424613])
>>> 
>>> np.dot(A, x) - b   # residual; zero up to floating-point rounding
array([ 0.,  0.,  0.])

If your problem is banded (as the systems arising from cubic splines often are), there's scipy.linalg.solve_banded: http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_banded.html
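As a sketch of how solve_banded is called: it takes the matrix in diagonal-ordered form, where row u + i - j of the band array holds element (i, j). The system below (constant diagonals 4 and 1) is a made-up example, not from the question:

```python
import numpy as np
from scipy.linalg import solve_banded

# Hypothetical tridiagonal system: main diagonal 4, off-diagonals 1
n = 5
ab = np.zeros((3, n))       # 3 rows: one super-, one main, one subdiagonal
ab[0, 1:] = 1.0             # superdiagonal (first entry unused)
ab[1, :] = 4.0              # main diagonal
ab[2, :-1] = 1.0            # subdiagonal (last entry unused)
b = np.ones(n)

# (1, 1) = number of nonzero lower and upper diagonals
x = solve_banded((1, 1), ab, b)
```

For a tridiagonal system this costs O(n) instead of the O(n^3) of a dense solve.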

To comment on some of the comments to the question: better not to use inv for solving linear systems. numpy.linalg.lstsq is a bit different: it computes a least-squares solution, so it's more useful for fitting overdetermined systems.
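To illustrate the lstsq use case, here's a sketch of fitting a line y ≈ m*x + c to a few made-up points (the data is purely illustrative):

```python
import numpy as np

# Hypothetical noisy data roughly on the line y = x
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.1, 0.9, 2.1, 2.9])

# Design matrix with columns [x, 1]; lstsq returns (solution, residuals, rank, s)
M = np.column_stack([xs, np.ones_like(xs)])
(m, c), *_ = np.linalg.lstsq(M, ys, rcond=None)
```

Here M is 4x2, so the system is overdetermined and solve would refuse it; lstsq finds the slope and intercept minimizing the squared residual.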

As this is homework, you're really better off at least reading up on ways of solving tridiagonal linear systems.
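For reference, the standard direct method for tridiagonal systems is the Thomas algorithm (forward elimination, then back substitution). A minimal sketch, assuming no pivoting is needed (e.g. a diagonally dominant matrix):

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system with the Thomas algorithm.

    sub: subdiagonal (length n-1), diag: main diagonal (length n),
    sup: superdiagonal (length n-1), rhs: right-hand side (length n).
    Sketch only -- assumes the system needs no pivoting.
    """
    n = len(diag)
    cp = np.zeros(n - 1)          # modified superdiagonal
    dp = np.zeros(n)              # modified right-hand side
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    # Forward sweep: eliminate the subdiagonal
    for i in range(1, n):
        denom = diag[i] - sub[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = sup[i] / denom
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / denom
    # Back substitution
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

This runs in O(n) time and O(n) memory, which is exactly what solve_banded exploits internally for the tridiagonal case.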


NumPy is the main package for scientific computing in Python. If you are a Windows user, download it here: http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy; otherwise, follow these instructions: http://www.scipy.org/install.html.

import numpy
A = [[1, 0, 0], [1, 4, 1], [0, 0, 1]]
b = [0, 24, 0]
# lstsq returns a tuple (solution, residuals, rank, singular values)
x = numpy.linalg.lstsq(A, b, rcond=None)[0]

In addition to the code of Zhenya, you might also find it intuitive to use the np.dot function:

import numpy as np
A = [[1, 0, 0],
     [1, 1, 1],
     [6, 7, 0]]
b = [0, 24, 0]
# Now simply solve for x
x = np.dot(np.linalg.inv(A), b)
# np.linalg.inv(A) is the inverse of A; np.dot is the matrix product
print(x)

[  0.   0.  24.]
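As noted above, explicitly inverting A is best avoided; np.linalg.solve gives the same answer on this system while being faster and more numerically stable:

```python
import numpy as np

A = np.array([[1, 0, 0],
              [1, 1, 1],
              [6, 7, 0]], dtype=float)
b = np.array([0, 24, 0], dtype=float)

# Solves Ax = b directly via an LU factorization, without forming inv(A)
x = np.linalg.solve(A, b)
```

Under the hood solve uses an LU factorization with partial pivoting, which avoids the extra rounding error of computing and then multiplying by the inverse.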