Fast tensor rotation with NumPy
Thanks to hard work by M. Wiebe, the next version of NumPy (which will probably be 1.6) is going to make this even easier:
>>> Trot = np.einsum('ai,bj,ck,dl,abcd->ijkl', g, g, g, g, T)
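For a self-contained try, something along these lines should work (here g is assumed to be an arbitrary 3x3 rotation matrix and T a rank-4 tensor, matching the question's setup):
>>> import numpy as np
>>> g = np.linalg.qr(np.random.rand(3, 3))[0]  # some orthogonal matrix, for illustration
>>> T = np.random.rand(3, 3, 3, 3)
>>> Trot = np.einsum('ai,bj,ck,dl,abcd->ijkl', g, g, g, g, T)
>>> Trot.shape
(3, 3, 3, 3)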
Philipp's approach is at the moment 3x faster, though perhaps there is some room for improvement. The speed difference is probably mostly due to tensordot being able to unroll the whole operation as a single matrix product that can be passed on to BLAS, avoiding much of the overhead associated with small arrays. This is not possible for general Einstein summation, as not all operations that can be expressed in this form resolve to a single matrix product.
To use tensordot, compute the outer product of the g tensors:
import numpy as np

def rotT(T, g):
    # Build the rank-8 tensor gggg[a,i,b,j,c,k,d,l] = g[a,i]*g[b,j]*g[c,k]*g[d,l]
    gg = np.outer(g, g)
    gggg = np.outer(gg, gg).reshape(4 * g.shape)
    # Contract the a, b, c, d axes (positions 0, 2, 4, 6) against all four axes of T
    axes = ((0, 2, 4, 6), (0, 1, 2, 3))
    return np.tensordot(gggg, T, axes)
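As a quick sanity check, this should agree with the einsum formula above to floating-point precision (a sketch with random inputs):
>>> g = np.linalg.qr(np.random.rand(3, 3))[0]
>>> T = np.random.rand(3, 3, 3, 3)
>>> np.allclose(rotT(T, g), np.einsum('ai,bj,ck,dl,abcd->ijkl', g, g, g, g, T))
True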
On my system, this is around seven times faster than Sven's solution. If the g tensor doesn't change often, you can also cache the gggg tensor. If you do this and turn on some micro-optimizations (inlining the tensordot code, no checks, no generic shapes), you can still make it two times faster:
def rotT(T, gggg):
    # Bring the i, j, k, l axes of the cached gggg to the front, then do the
    # whole contraction as a single 81x81 matrix-vector product
    return np.dot(gggg.transpose((1, 3, 5, 7, 0, 2, 4, 6)).reshape((81, 81)),
                  T.reshape(81, 1)).reshape((3, 3, 3, 3))
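Note the changed signature: the caller now precomputes gggg once and reuses it for every rotation while g stays fixed, e.g.:
>>> gg = np.outer(g, g)
>>> gggg = np.outer(gg, gg).reshape(4 * g.shape)  # cache as long as g doesn't change
>>> Trot = rotT(T, gggg)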
Results of timeit on my home laptop (500 iterations):
Your original code: 19.471129179
Sven's code: 0.718412876129
My first code: 0.118047952652
My second code: 0.0690279006958
The numbers on my work machine are:
Your original code: 9.77922987938
Sven's code: 0.137110948563
My first code: 0.0569641590118
My second code: 0.0308079719543
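The exact numbers will of course vary with machine and NumPy version; a minimal harness along these lines (the rotT import path is hypothetical) reproduces the 500-iteration measurement:

import timeit

setup = """
import numpy as np
from __main__ import rotT  # whichever variant is being timed
g = np.linalg.qr(np.random.rand(3, 3))[0]
T = np.random.rand(3, 3, 3, 3)
"""
print(timeit.timeit('rotT(T, g)', setup=setup, number=500))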
Here is how to do it with a single Python loop:
def rotT(T, g):
    Tprime = T
    for i in range(4):
        # Build an index that inserts g's two axes at positions i and i + 4,
        # with np.newaxis (None) everywhere else
        slices = [None] * 4
        slices[i] = slice(None)
        slices *= 2
        # Broadcasting multiplies one factor of g along one axis of T
        Tprime = g[tuple(slices)].T * Tprime
    # Sum out the four original indices
    return Tprime.sum(-1).sum(-1).sum(-1).sum(-1)
Admittedly, this is a bit hard to grasp at first glance: each pass inserts g's two axes via np.newaxis so that broadcasting multiplies one factor of g along one axis of T, and the trailing sums then contract the four original indices. But it's quite a bit faster :)
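To see what each pass builds, it may help to look at the shapes of the broadcast factors; for i = 0 and a 3x3 g:
>>> slices = [None] * 4
>>> slices[0] = slice(None)
>>> slices *= 2
>>> g[tuple(slices)].shape
(3, 1, 1, 1, 3, 1, 1, 1)
>>> g[tuple(slices)].T.shape
(1, 1, 1, 3, 1, 1, 1, 3)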