Speeding up sums involving 16x16 matrices and 16x16x16x16 antisymmetric tensor
You can use TensorContract instead of Sum:
r = Activate @ TensorContract[
      Inactive[TensorProduct][VL, VL, VL, VL, θ4],
      {{2, 9}, {4, 10}, {6, 11}, {8, 12}}
    ]; // AbsoluteTiming
{0.419766, Null}
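For reference, here is a minimal sketch of the contraction being computed, checked against an explicit Sum on small random test data (the actual VL and θ4 come from the question; n = 4 and the Symmetrize-built antisymmetric tensor are stand-ins to keep the explicit sum cheap):

n = 4;
VL = RandomReal[1, {n, n}];
θ4 = Normal @ Symmetrize[RandomReal[1, {n, n, n, n}], Antisymmetric[{1, 2, 3, 4}]];

(* explicit index form of the contraction above *)
rSum = Table[
   Sum[VL[[a, i]] VL[[b, j]] VL[[c, k]] VL[[d, l]] θ4[[i, j, k, l]],
    {i, n}, {j, n}, {k, n}, {l, n}],
   {a, n}, {b, n}, {c, n}, {d, n}];

rTC = Activate @ TensorContract[
    Inactive[TensorProduct][VL, VL, VL, VL, θ4],
    {{2, 9}, {4, 10}, {6, 11}, {8, 12}}];

Max @ Abs[rSum - rTC]  (* ≈ 0 *)

The point of the Inactive wrapper is that TensorContract never has to materialize the full rank-12 outer product; it performs the contraction directly.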
Using Activate with Inactive is a tip from @jose.
Using Dot[] appears to be about a factor of 3 faster than Carl Woll's solution (although I doubt that timings this short are very precise):
vlT = Transpose[VL];
r1 = Flatten[
    VL.Flatten[VL.θ4.vlT, {{2}, {1}, {4}, {3}}].vlT,
    {{2}, {1}, {4}, {3}}];
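The Flatten calls with level specification {{2}, {1}, {4}, {3}} are pure level permutations, i.e. transposes of the rank-4 intermediate; this is what lets Dot reach the inner indices. A quick sanity check (the permutation {2, 1, 4, 3} is its own inverse, so the two readings of the spec coincide):

t = RandomReal[1, {2, 3, 4, 5}];
Flatten[t, {{2}, {1}, {4}, {3}}] == Transpose[t, {2, 1, 4, 3}]
(* True *)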
On my computer the Dot[] solution takes 0.000207 seconds versus 0.0006595 seconds for Carl's, and both results agree. (I converted VL to a SparseArray[] as well.)
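A sketch of how the comparison can be reproduced on random test data (the real VL and θ4 come from the question; here, as in my timings, VL is converted to a SparseArray, and θ4 is an assumed generic antisymmetric array built with Symmetrize):

n = 16;
VL = SparseArray @ RandomReal[1, {n, n}];
θ4 = Normal @ Symmetrize[RandomReal[1, {n, n, n, n}], Antisymmetric[{1, 2, 3, 4}]];
vlT = Transpose[VL];

First @ RepeatedTiming[
  rTC = Activate @ TensorContract[
     Inactive[TensorProduct][VL, VL, VL, VL, θ4],
     {{2, 9}, {4, 10}, {6, 11}, {8, 12}}];]

First @ RepeatedTiming[
  rDot = Flatten[
     VL.Flatten[VL.θ4.vlT, {{2}, {1}, {4}, {3}}].vlT,
     {{2}, {1}, {4}, {3}}];]

Max @ Abs[Normal[rTC] - Normal[rDot]]  (* ≈ 0: both methods agree *)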