Symmetrization of scipy sparse matrices
Ok, it doubles the number of assignment statements, but in the big picture how much of a penalty is that?
lil is the most efficient format for indexed assignment, but I've explored alternatives in other posts. If I recall correctly, direct assignment to the data and rows attributes of a lil is faster, though that's mainly of value when filling whole rows at once. A dok is also relatively fast, though I found that assignment to a regular dictionary, followed by an update of the dok, was faster. (A dok is a dictionary subclass.)
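A rough sketch of that direct-attribute assignment (my own illustration, not timed; the shape and values are made up, and the per-row column lists have to be kept sorted):

import numpy as np
from scipy import sparse

M = sparse.lil_matrix((4, 4), dtype=int)
# each row of a lil is a Python list of column indices plus a parallel list of values
M.rows[1] = [0, 2]      # column indices for row 1, in sorted order
M.data[1] = [5, 7]      # matching values
print(M.toarray())
# [[0 0 0 0]
#  [5 0 7 0]
#  [0 0 0 0]
#  [0 0 0 0]]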
But if you go the coo route - building lists of data, rows and cols values - creating both i,j and j,i terms at once is not costly. It's even better if you can define a bunch of values at once, as opposed to iterating over all i,j.
So efficiently creating a symmetric matrix is just a subset of the efficient matrix definition problem.
I'm not aware of any symmetrization functions in the sparse package. I wonder if any of the linear algebra functions have symmetric provisions. I suspect the most efficient handlers just assume the matrix is upper or lower triangular, without storing the symmetric values explicitly.
It's possible that you could create an upper triangular matrix, and then copy the values to the lower half. In the dense case the simplest way is to just sum the matrix and its transpose (and possibly subtract the diagonal). But sparse matrix summation is somewhat inefficient, so that might not be the best. I haven't done any tests, though.
============
The sum with the transpose at least doesn't give me any efficiency warnings:
In [383]: M=sparse.lil_matrix((10,10),dtype=int)
In [384]:
In [384]: for i in range(10):
...: for j in range(i,10):
...: v=np.random.randint(0,10)
...: if v>5:
...: M[i,j]=v
...:
In [385]: M
Out[385]:
<10x10 sparse matrix of type '<class 'numpy.int32'>'
with 22 stored elements in LInked List format>
In [386]: M.A
Out[386]:
array([[0, 7, 7, 0, 9, 0, 7, 0, 0, 9],
[0, 0, 7, 8, 0, 8, 0, 0, 9, 0],
[0, 0, 0, 7, 0, 0, 9, 0, 8, 0],
[0, 0, 0, 0, 0, 0, 6, 0, 6, 6],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 8, 9, 0, 8],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 8, 8],
[0, 0, 0, 0, 0, 0, 0, 0, 6, 8],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
Sum with the transpose (minus the duplicated diagonal):
In [389]: M+M.T-sparse.diags(M.diagonal(),dtype=int)
Out[389]:
<10x10 sparse matrix of type '<class 'numpy.int32'>'
with 43 stored elements in Compressed Sparse Row format>
In [390]: _.A
Out[390]:
array([[0, 7, 7, 0, 9, 0, 7, 0, 0, 9],
[7, 0, 7, 8, 0, 8, 0, 0, 9, 0],
[7, 7, 0, 7, 0, 0, 9, 0, 8, 0],
[0, 8, 7, 0, 0, 0, 6, 0, 6, 6],
[9, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 8, 0, 0, 0, 0, 8, 9, 0, 8],
[7, 0, 9, 6, 0, 8, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 9, 0, 0, 8, 8],
[0, 9, 8, 6, 0, 0, 0, 8, 6, 8],
[9, 0, 0, 6, 0, 8, 0, 8, 8, 0]], dtype=int32)
double assignment approach:
In [391]: M=sparse.lil_matrix((10,10),dtype=int)
In [392]: for i in range(10):
...: for j in range(i,10):
...: v=np.random.randint(0,10)
...: if v>5:
...: M[i,j]=v
...: M[j,i]=v
I haven't done any timings.
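If someone wants to measure the penalty from the doubled assignments, a skeleton along these lines (my own untested sketch, reusing the random fill from above) would give a comparison:

import timeit
import numpy as np
from scipy import sparse

def fill(double):
    # same random fill as above; optionally mirror each value across the diagonal
    M = sparse.lil_matrix((10, 10), dtype=int)
    for i in range(10):
        for j in range(i, 10):
            v = np.random.randint(0, 10)
            if v > 5:
                M[i, j] = v
                if double and i != j:
                    M[j, i] = v
    return M

print(timeit.timeit(lambda: fill(False), number=1000))
print(timeit.timeit(lambda: fill(True), number=1000))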
A coo approach:
In [398]: data,rows,cols=[],[],[]
In [399]: for i in range(10):
...: for j in range(i,10):
...: v=np.random.randint(0,10)
...: if v>5:
...: if i==j:
...: # prevent diagonal duplication
...: data.append(v)
...: rows.append(i)
...: cols.append(j)
...: else:
...: data.extend((v,v))
...: rows.extend((i,j))
...: cols.extend((j,i))
...:
In [400]: sparse.coo_matrix((data,(rows,cols)),shape=(10,10)).A
Out[400]:
array([[0, 8, 0, 6, 8, 9, 9, 0, 0, 0],
[8, 7, 0, 0, 0, 6, 0, 8, 0, 0],
[0, 0, 0, 0, 0, 0, 9, 9, 7, 9],
[6, 0, 0, 0, 7, 0, 0, 0, 0, 6],
[8, 0, 0, 7, 0, 0, 8, 0, 0, 0],
[9, 6, 0, 0, 0, 0, 6, 0, 0, 0],
[9, 0, 9, 0, 8, 6, 8, 0, 0, 0],
[0, 8, 9, 0, 0, 0, 0, 6, 0, 6],
[0, 0, 7, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 9, 6, 0, 0, 0, 6, 0, 9]])
===============
It might be a bit faster to make the upper triangular coo matrix first, and extend it to the lower half with list (or array) concatenation:
In [401]: data,rows,cols=[],[],[]
In [402]: for i in range(10):
...: for j in range(i,10):
...: v=np.random.randint(0,10)
...: if v>5:
...: data.append(v)
...: rows.append(i)
...: cols.append(j)
In [408]: sparse.coo_matrix((data,(rows,cols)),shape=(10,10)).A
Out[408]:
array([[8, 0, 0, 9, 8, 7, 0, 7, 9, 0],
[0, 7, 6, 0, 0, 7, 0, 0, 9, 0],
[0, 0, 9, 8, 0, 9, 6, 0, 0, 6],
...]])
In [409]: data1=data+data
In [410]: rows1=rows+cols
In [411]: cols1=cols+rows
In [412]: sparse.coo_matrix((data1,(rows1,cols1)),shape=(10,10)).A
This duplicates the diagonal, which I need to address one way or another (duplicate coo indices are summed). But it gives the idea of how coo-style inputs can be collected into larger blocks.
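One way to handle that (a rough sketch of mine, using made-up arrays) is to mirror only the strictly off-diagonal entries when concatenating:

import numpy as np
from scipy import sparse

# hypothetical upper-triangle entries, including one diagonal value at (2,2)
rows = np.array([0, 0, 1, 2])
cols = np.array([1, 3, 2, 2])
data = np.array([7, 9, 8, 6])

off = rows != cols                     # mirror only the strictly off-diagonal entries
data1 = np.concatenate([data, data[off]])
rows1 = np.concatenate([rows, cols[off]])
cols1 = np.concatenate([cols, rows[off]])
S = sparse.coo_matrix((data1, (rows1, cols1)), shape=(4, 4))
print(S.toarray())
# [[0 7 0 9]
#  [7 0 8 0]
#  [0 8 6 0]
#  [9 0 0 0]]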
Yep, there is definitely a simpler and more efficient way. hpaulj's answer should work if you are creating a matrix, but if you already have one, you can do:
rows, cols = sparse_matrix.nonzero()
sparse_matrix[cols, rows] = sparse_matrix[rows, cols]
This should work for all types of scipy's sparse matrices except coo_matrix.
Edit: noted coo_matrix.
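A quick self-contained check of that snippet (my own illustration; the random upper-triangular matrix is made up, and lil avoids the efficiency warnings you'd get with csr):

import numpy as np
from scipy import sparse

# an existing matrix with only the upper triangle stored
A = sparse.lil_matrix(np.triu(np.random.randint(0, 10, (5, 5))))

rows, cols = A.nonzero()
A[cols, rows] = A[rows, cols]                  # mirror every stored entry across the diagonal
print((A.toarray() == A.toarray().T).all())    # True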