Optimizing code using Intel SSE intrinsics for vectorization
The error you're seeing is because you have too many underscores in the function names, e.g. __mm_mul_ps should be _mm_mul_ps (just one underscore up front). Since the compiler hasn't seen a declaration for the misspelled name, C assumes it's a function returning int, which is what triggers the error.
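For example, with the correct spelling and the SSE header included, the compiler sees a proper declaration (a trivial sketch, not your actual code):

#include <xmmintrin.h>  /* SSE: declares __m128 and _mm_mul_ps */

__m128 scale(__m128 a, __m128 b)
{
    return _mm_mul_ps(a, b);  /* note: one leading underscore */
}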
Beyond this, though, there are further problems: you seem to be mixing calls to the double and single float variants of the same instruction. For example, you have __m128d a_i, b_i, c_i; but you call __mm_load_ps(&A[n*i+k]);, which returns a __m128, not a __m128d. You wanted to call _mm_load_pd instead, and likewise for the other instructions if you want them to work on pairs of doubles.
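For reference, here's a minimal sketch of the double-precision version of that kind of operation (the function name madd2 and the assumption that the pointers are 16-byte aligned are mine, not from your code):

#include <emmintrin.h>  /* SSE2: declares __m128d and the _pd intrinsics */

/* c[0..1] += a[0..1] * b[0..1], processing two doubles at a time */
void madd2(double *c, const double *a, const double *b)
{
    __m128d va = _mm_load_pd(a);  /* aligned load of two doubles */
    __m128d vb = _mm_load_pd(b);
    __m128d vc = _mm_load_pd(c);
    vc = _mm_add_pd(vc, _mm_mul_pd(va, vb));
    _mm_store_pd(c, vc);          /* aligned store of the result */
}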
If you're seeing unexplained segmentation faults in SSE code, I'd be inclined to guess that you've got memory alignment problems - pointers passed to SSE intrinsics (mostly¹) need to be 16-byte aligned. You can check this with a simple assert in your code, or check it in a debugger (you expect the last hexadecimal digit of the pointer to be 0 if it's aligned properly).
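For example, a minimal check (the helper name assert_aligned16 is just for illustration):

#include <assert.h>
#include <stdint.h>

/* Fails if p is not 16-byte aligned: the low four bits of the address must be zero. */
static void assert_aligned16(const void *p)
{
    assert(((uintptr_t)p & 0xF) == 0);
}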
If it isn't aligned right, you need to make sure it is. For things not allocated with new/malloc() you can do this with a compiler extension (e.g. with gcc):
float a[16] __attribute__ ((aligned (16)));
This works provided your version of gcc has a maximum alignment large enough to support it, plus a few other caveats about stack alignment. For dynamically allocated storage you'll want to use a platform-specific extension, e.g. posix_memalign, to allocate suitably aligned storage:
float *a = NULL;
/* posix_memalign takes a void **, hence the cast; it returns 0 on success */
posix_memalign((void **)&a, __alignof__(__m128), sizeof(float) * 16);
(I think there might be nicer, portable ways of doing this with C++11 but I'm not 100% sure on that yet).
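One option that is portable, if plain C11 is acceptable, is aligned_alloc from <stdlib.h> (note that the requested size must be a multiple of the alignment):

#include <stdlib.h>

float *a = aligned_alloc(16, sizeof(float) * 16);  /* C11; 64 bytes is a multiple of 16 */
/* ... use a ... */
free(a);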
¹ There are some instructions which allow you to do unaligned loads and stores, but they're terribly slow compared to aligned loads and worth avoiding if at all possible.
You need to make sure that your loads and stores are always accessing 16-byte aligned addresses. Alternatively, if you can't guarantee this for some reason, then use _mm_loadu_ps/_mm_storeu_ps instead of _mm_load_ps/_mm_store_ps - this will be less efficient but will not crash on misaligned addresses.
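For example, a minimal sketch of the unaligned variants (the function name copy4 is illustrative):

#include <xmmintrin.h>  /* SSE: _mm_loadu_ps / _mm_storeu_ps */

/* Copies four floats regardless of the alignment of src or dst. */
void copy4(float *dst, const float *src)
{
    __m128 v = _mm_loadu_ps(src);  /* unaligned load: safe at any address */
    _mm_storeu_ps(dst, v);         /* unaligned store */
}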