glPixelStorei(GL_UNPACK_ALIGNMENT, 1) Disadvantages?
How can data not be 1-byte aligned?
This question strongly suggests a lack of understanding of what row alignment means in pixel transfer operations.
Image data that you pass to OpenGL is expected to be grouped into rows. Each row contains width pixels, with each pixel's size defined by the format and type parameters. A format of GL_RGB with a type of GL_UNSIGNED_BYTE, for example, gives a pixel that is 24 bits (3 bytes) in size. Within a row, pixels are expected to be tightly packed, so a row of 16 such pixels takes up 48 bytes.
Each row is also expected to begin on a specific byte alignment, as defined by GL_PACK_ALIGNMENT/GL_UNPACK_ALIGNMENT. This means the byte offset you add to a row pointer to reach the next row is align(pixel_size * width, GL_*_ALIGNMENT). If the pixel size is 3 bytes, the width is 2, and the alignment is 1, the row stride is 6 bytes. If the alignment is 4, the row stride is 8 bytes.
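To make that arithmetic concrete, here is a minimal C sketch of the stride computation; `row_stride` is an illustrative helper name I'm introducing, not part of the OpenGL API:

```c
#include <stddef.h>

/* Illustrative helper: byte stride between rows of client-side image data,
 * i.e. align(pixel_size * width, alignment). OpenGL alignments are always
 * 1, 2, 4, or 8, so rounding up with a power-of-two mask is valid. */
static size_t row_stride(size_t pixel_size, size_t width, size_t alignment)
{
    size_t unpadded = pixel_size * width;
    return (unpadded + alignment - 1) & ~(alignment - 1);
}

/* row_stride(3, 2, 1) == 6 and row_stride(3, 2, 4) == 8,
 * matching the numbers above. */
```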
See the problem?
Image data, which may come from some image file format as loaded with some image loader, has a row alignment. Sometimes this is 1-byte aligned, and sometimes it isn't. DDS images have an alignment specified as part of the format. In many cases, images have 4-byte row alignments; pixel sizes less than 32 bits will therefore have padding at the end of rows with certain widths. If the alignment you give OpenGL doesn't match that, you get a malformed texture.
You set the alignment to match the image data's actual row alignment. Unless you know or can otherwise ensure that your row alignment is always 1 (and that's unlikely unless you've written your own image format or DDS writer), you need to set the row alignment to exactly what your image format uses.
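For example, here is a minimal sketch of an upload of tightly packed RGB data, assuming the appropriate OpenGL headers are included, a context is current, and `pixels`, `width`, and `height` come from your image loader:

```c
/* The source rows here are assumed to have no padding, i.e. 1-byte alignment. */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* match the source data's row alignment */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);   /* restore the default of 4 if other code expects it */
```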
Will it impact performance on modern GPUs?
No, because the pixel store settings are only relevant for the transfer of data to or from the GPU, namely the row alignment of your client-side data. Once the data is in GPU memory, it's aligned in whatever way the GPU and driver desire.