Vectorizing with unaligned buffers: using VMASKMOVPS: generating a mask from a misalignment count? Or not using that insn at all
You could turn an integer bitmask like `(1 << (vector1.size() & 3)) - 1` into a vector mask on the fly (see: is there an inverse instruction to the movemask instruction in intel avx2?). Or:

Load a mask for VMASKMOVPS from a sliding window into a table. AVX2, or AVX1 with a few extra instructions or a larger table.

The mask can also be used for ANDPS in registers, in a reduction that needs to count each element exactly once. As Stephen Canon points out in comments on the OP, pipelining loads can allow overlapping unaligned stores to work even for a rewrite-in-place function like the example I picked, so VMASKMOVPS is NOT the best choice here.
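As a concrete example of the ANDPS use: a minimal, untested C-intrinsics sketch of a sum reduction whose final partial vector is masked so each element is counted exactly once. `sum_with_masked_tail` and its arguments are just illustrative names, and the tail mask is assumed to come from one of the mask-generation methods below.

#include <immintrin.h>
#include <stddef.h>

/* tail_mask: all-ones in the lanes of the final partial vector that are still
   inside the array, zero in the lanes past the end. */
float sum_with_masked_tail(const float *p, size_t n, __m256 tail_mask)
{
    __m256 acc = _mm256_setzero_ps();
    size_t i;
    for (i = 0; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(p + i));

    if (i < n) {
        /* masked load of the last partial vector (vmaskmovps)... */
        __m256 tail = _mm256_maskload_ps(p + i, _mm256_castps_si256(tail_mask));
        /* ...or, if an unmasked load is known to be safe, vandps instead:      */
        /* __m256 tail = _mm256_and_ps(_mm256_loadu_ps(p + i), tail_mask);      */
        acc = _mm256_add_ps(acc, tail);
    }

    /* simple horizontal sum (not the fastest way) */
    __m128 lo = _mm_add_ps(_mm256_castps256_ps128(acc), _mm256_extractf128_ps(acc, 1));
    lo = _mm_add_ps(lo, _mm_movehl_ps(lo, lo));
    lo = _mm_add_ss(lo, _mm_shuffle_ps(lo, lo, 1));
    return _mm_cvtss_f32(lo);
}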
This should be good on Intel CPUs, esp. Haswell and later for AVX2.
Agner Fog's method for generating a pshufb mask provided a very efficient idea: do an unaligned load that takes a window of data from a table. Instead of a giant table of masks, use the index to do a byte-shift on data in memory.
Masks are shown in LSB-first byte order (as they're stored in memory), not the usual {X7,X6,...,X1,X0} notation for elements in a vector. As written, they line up with the aligned 32B window containing the start/end of the input array in memory.
- start misalign count = 0: mask = all-ones (Aligned case)
- start misalign count = 1: mask = `{0,-1,-1,-1,-1,-1,-1,-1}` (skip one in the first 32B)
- ...
- start misalign count = 7: mask = `{0, 0, 0, 0, 0, 0, 0,-1}` (skip all but one in the first 32B)

- end misalign count = 0: no trailing elements. mask = all-ones (Aligned case). This is the odd case, not similar to count=1: a couple of extra instructions for this special case are worth avoiding an extra loop iteration and a cleanup with an all-zero mask.
- end misalign count = 1: one trailing element. mask = `{-1, 0, 0, 0, 0, 0, 0, 0}`
- ...
- end misalign count = 7: seven trailing elements. mask = `{-1,-1,-1,-1,-1,-1,-1, 0}`
Untested code, assume there are bugs
section .data
align 32 ; preferably no cache-line boundaries inside the table
; byte elements, to be loaded with pmovsx. all-ones sign-extends
DB 0, 0, 0, 0, 0, 0, 0, 0
masktable_intro: ; index with 0..-7
DB -1, -1, -1, -1, -1, -1, -1, -1
masktable_outro: ; index with -8(aligned), or -1..-7
DB 0, 0, 0, 0, 0, 0, 0, 0
; the very first and last 0 bytes are not needed, since we avoid an all-zero mask.
section .text
global floatmul ; (float *rdi)
floatmul:
mov eax, edi
and eax, 0x1c ; 0x1c = 7 << 2 = 0b11100
lea rdx, [rdi + 4096 - 32] ; one full vector less than the end address (calculated *before* masking for alignment).
;; replace 4096 with rsi*4 if rsi has the count (in floats, not bytes)
and rdi, ~0x1c ; Leave the low 2 bits alone, so this still works on misaligned floats.
shr eax, 2 ; misalignment-count, in the range [0..7]
neg rax
vpmovsxbd ymm0, [masktable_intro + rax] ; Won't link on OS X: Need a separate LEA for RIP-relative
vmaskmovps ymm1, ymm0, [rdi]
vaddps ymm1, ymm1, ymm1 ; *= 2.0
vmaskmovps [rdi], ymm0, ymm1
;;; also prepare the cleanup mask while the table is still hot in L1 cache
; if the loop count known to be a multiple of the vector width,
; the alignment of the end will be the same as the alignment of the start
; so we could just invert the mask
; vpxor xmm1, xmm1, xmm1 ; doesn't need an execution unit
; vpcmpeqd ymm0, ymm1, ymm0
; In the more general case: just re-generate the mask from the one-past-the-end addr
mov eax, edx
xor ecx, ecx ; prep for setcc
and eax, 0x1c ; sets ZF when aligned
setz cl ; rcx=1 in the aligned special-case, else 0
shr eax, 2
lea eax, [rax + rcx*8] ; 1..7, or 8 in the aligned case
neg rax
vpmovsxbd ymm0, [masktable_outro + rax]
add rdi, 32 ; point at the first whole aligned vector after the masked intro
.loop:
vmovups ymm1, [rdi] ; Or vmovaps if you want to fault if the address isn't 4B-aligned
vaddps ymm1, ymm1, ymm1 ; *= 2.0
vmovups [rdi], ymm1
add rdi, 32
cmp rdi, rdx ; do { ... } while( (p+=8) < (start+1024-8) ): stop before the final (possibly partial) vector, which the outro mask handles
jb .loop ; 5 fused-domain uops, yuck.
; use the outro mask that we generated before the loop for insn scheduling / cache locality reasons.
vmaskmovps ymm1, ymm0, [rdi]
vaddps ymm1, ymm1, ymm1 ; *= 2.0
vmaskmovps [rdi], ymm0, ymm1
ret
; vpcmpeqd ymm1, ymm1, ymm1 ; worse way to invert the mask: dep-chain breaker but still needs an execution unit to make all-ones instead of all-zeros.
; vpxor ymm0, ymm0, ymm1
This does require a load from a table, which can miss in L1 cache, and 15B of table data. (Or 24B if the loop count is also variable, and we have to generate the end mask separately).
Either way, after the 4 instructions to generate the misalignment count and the aligned start address, getting the mask only takes a single vpmovsxbd instruction. (The ymm, mem form can't micro-fuse, so it's 2 uops.) This requires AVX2.
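For reference, the same table lookup as an untested C-intrinsics sketch (`intro_mask` is just an illustrative name); the result feeds `_mm256_maskload_ps` / `_mm256_maskstore_ps`, the VMASKMOVPS intrinsics:

#include <immintrin.h>

/* Same 24B table as in the asm above: 8 zeros, 8 x -1, 8 zeros. */
static const signed char masktable[24] = {
     0, 0, 0, 0, 0, 0, 0, 0,
    -1,-1,-1,-1,-1,-1,-1,-1,
     0, 0, 0, 0, 0, 0, 0, 0
};

/* misalign_count = 0..7: leading elements of the aligned 32B window to skip.
   vpmovsxbd ymm, m64: 8-byte window load + sign-extension of each byte to a dword. */
static inline __m256i intro_mask(unsigned misalign_count)
{
    const signed char *window = masktable + 8 - misalign_count;  /* masktable_intro - count */
    return _mm256_cvtepi8_epi32(_mm_loadl_epi64((const __m128i *)window));
}

The outro mask works the same way, indexing from `masktable + 16` with `-count` (or `-8` for the aligned special case).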
Without AVX2:

- vmovdqu from a 60B table of DWORDs (`DD`) instead of bytes (`DB`). This would actually save an insn relative to AVX2: `address & 0x1c` is the index, without needing a right-shift by two. The whole table still fits in a cache line, but without room for other constants the algo might use.

Or:

- 2x vpmovsxbd into two 128b registers (`[masktable_intro + rax]` and `[masktable_intro + rax + 4]`)
- vinsertf128 (see the C-intrinsics sketch after this list)

Or: (more insns, and more shuffle-port pressure, but less load-port pressure, almost certainly worse)

- vpmovsxbw into a 128b register
- vpunpcklwd / vpunpckhwd into two xmm regs (src1=src2 for both)
- vinsertf128
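An untested C-intrinsics sketch of the middle option (2x vpmovsxbd + vinsertf128), taking the 8-byte table window as a parameter. `mask_from_window_avx1` is an illustrative name, and whether the compiler folds the 4-byte loads into the vpmovsxbd instructions isn't guaranteed:

#include <immintrin.h>
#include <string.h>

/* win points at the 8-byte window in the mask table (e.g. masktable_intro - count).
   Only SSE4.1 / AVX1 instructions are needed: vpmovsxbd xmm + vinsertf128. */
static inline __m256 mask_from_window_avx1(const signed char *win)
{
    int lo4, hi4;                       /* the two 4-byte halves of the window */
    memcpy(&lo4, win,     4);
    memcpy(&hi4, win + 4, 4);
    __m128i lo = _mm_cvtepi8_epi32(_mm_cvtsi32_si128(lo4));   /* elements 0..3 */
    __m128i hi = _mm_cvtepi8_epi32(_mm_cvtsi32_si128(hi4));   /* elements 4..7 */
    __m256  m  = _mm256_castps128_ps256(_mm_castsi128_ps(lo));
    return _mm256_insertf128_ps(m, _mm_castsi128_ps(hi), 1);  /* vinsertf128 */
}

Cast the result with `_mm256_castps_si256` if the maskload/maskstore intrinsics want an integer-typed mask.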
Overhead:
Integer ops: 5 uops at the start to get an index and align the start pointer. 7 uops to get the index for the end mask. Total of 12 GP-register uops beyond simply using unaligned, if the loop element count is a multiple of the vector width.
AVX2: Two 2-fused-domain-uop vector insns to go from [0..7] index in a GP reg to a mask in a YMM reg. (One for the start mask, one for the end mask). Uses a 24B table, accessed in an 8B window with byte granularity.
AVX: Six 1-fused-domain-uop vector insns (three at the start, three at the end). With RIP-relative addressing for the table, four of those instructions will be `[base+index]` and won't micro-fuse, so an extra two integer insns might be better.
The work of the loop body (load / add / store) is replicated 3 times: masked intro, main loop, masked outro.
TODO: write another answer generating the mask on the fly, maybe as bytes in a 64b reg, then unpacking it to 256b. Maybe with a bit-shift, or BMI2's BZHI(-1, count)?
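(A quick untested sketch of a different on-the-fly flavour, broadcasting the count and comparing it against a constant index vector rather than the byte-shift / BZHI idea; needs AVX2 for the 256b integer compare, and `keep_first_n` / `skip_first_n` are illustrative names:)

#include <immintrin.h>

/* -1 in elements 0..n-1, 0 elsewhere: the "outro"-style mask. */
static inline __m256i keep_first_n(int n)      /* n = 0..8 */
{
    const __m256i idx = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    return _mm256_cmpgt_epi32(_mm256_set1_epi32(n), idx);     /* n > idx   ->  -1 */
}

/* -1 in elements n..7, 0 elsewhere: the "intro"-style mask. */
static inline __m256i skip_first_n(int n)      /* n = 0..8 */
{
    const __m256i idx = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    return _mm256_cmpgt_epi32(idx, _mm256_set1_epi32(n - 1)); /* idx >= n  ->  -1 */
}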
AVX-only: Unaligned accesses at the start/end, pipelining loads to avoid problems when rewriting in place.

Thanks to @StephenCanon for pointing out that this is better than VMASKMOVPS for anything that VMASKMOVPS could do to help with looping over unaligned buffers.
This is maybe a bit much to expect a compiler to do as a loop transformation, esp. since the obvious way can make Valgrind unhappy (see below).
section .text
global floatmul ; (float *rdi)
floatmul:
lea rdx, [rdi + 4096 - 32] ; one full vector less than the end address (calculated *before* masking for alignment).
;; replace 4096 with rsi*4 if rsi has the count (in floats, not bytes)
vmovups ymm0, [rdi] ; first vector
vaddps ymm0, ymm0, ymm0 ; *= 2.0
; don't store yet
lea rax, [rdi+32]
and rax, ~0x1c ; 0x1c = 7 << 2 = 0b11100 ; clear those bits.
vmovups ymm1, [rax] ; first aligned vector, for use by first loop iteration
vmovups [rdi], ymm0 ; store the first unaligned vector
vmovups ymm0, [rdx] ; load the *last* unaligned vector
.loop:
;; on entry: [rax] is already loaded into ymm1
vaddps ymm1, ymm1, ymm1 ; *= 2.0
vmovups [rax], ymm1 ; vmovaps would fault if p%4 != 0
add rax, 32
vmovups ymm1, [rax]
cmp rax, rdx ; while( (p+=8) < (endp-8) );
jb .loop
; discard ymm1. It includes data from beyond the end of the array (aligned case: same as ymm0)
vaddps ymm0, ymm0, ymm0 ; the last 32B, which we loaded before the loop
vmovups [rdx], ymm0
ret
; End alignment:
; a[] = XXXX XXXX ABCD E___ _ = garbage past the end
; ^rdx
; ^rax ^rax ^rax ^rax(loop exit)
; ymm0 = BCDE
; ymm1 loops over ..., XXXX, ABCD, E___
; The last load off the end of the array includes garbage
; because we pipeline the load for the next iteration
Doing a load from the end of the array at the start of the loop seems a little weird, but hopefully it doesn't confuse the hardware prefetchers, or slow down getting the beginning of the array streaming from memory.
Overhead:
2 extra integer uops total (to set up the aligned-start). We're already using the end pointer for the normal loop structure, so that's free.
2 extra copies of the loop body (load/calc/store). (First and last iteration peeled).
Compilers probably won't be happy about emitting code like this, when auto-vectorizing. Valgrind will report accesses outside of array bounds, and does so by single-stepping and decoding instructions to see what they're accessing. So merely staying within the same page (and cache line) as the last element in the array isn't sufficient. Also note that if the input pointer isn't 4B-aligned, we can potentially read into another page and segfault.
To keep Valgrind happy, we could stop the loop two vector widths early, to do the special-case load of the unaligned last vector-width of the array. That would require duplicating the loop body an extra time (insignificant in this example, but it's trivial on purpose.) Or maybe avoid pipelining by having the intro code jump into the middle of the loop. (That may be sub-optimal for the uop-cache, though: (parts of) the loop body may end up in the uop cache twice.)
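For example, an untested C-intrinsics sketch of a Valgrind-clean variant (`floatmul_valgrind_clean` is an illustrative name). Instead of literally peeling one extra copy of the loop body, it bounds-checks the pipelined load, which amounts to the same thing: no load or store ever touches memory outside the array. It assumes at least 16 floats and a pointer that's at least 4B-aligned.

#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

void floatmul_valgrind_clean(float *p, size_t n)   /* n >= 16 */
{
    float *end = p + n;
    __m256 first = _mm256_loadu_ps(p);        /* load both unaligned end vectors ...   */
    __m256 last  = _mm256_loadu_ps(end - 8);  /* ... before any store can clobber them */

    /* first 32B-aligned address strictly after p (== p+8 if p was already aligned) */
    float *a = (float *)(((uintptr_t)p + 32) & ~(uintptr_t)31);

    __m256 cur = _mm256_loadu_ps(a);          /* pipelined load for the first iteration */
    _mm256_storeu_ps(p, _mm256_add_ps(first, first));     /* peeled first vector */

    while (a + 16 <= end) {                   /* next pipelined load stays in bounds */
        __m256 next = _mm256_loadu_ps(a + 8); /* load before the overlapping store */
        _mm256_storeu_ps(a, _mm256_add_ps(cur, cur));
        cur = next;
        a += 8;
    }
    _mm256_storeu_ps(a, _mm256_add_ps(cur, cur));         /* last aligned vector */
    _mm256_storeu_ps(end - 8, _mm256_add_ps(last, last)); /* peeled last vector */
}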
TODO: write a version that jumps into the loop mid-way.