Difference between format and internalformat

The internal format describes how the texture is stored on the GPU. The format describes the layout of your pixel data in client memory (together with the type parameter).

Note that the internal format specifies both the number of channels (1 to 4) and the data type, while for the pixel data in client memory the two are given by separate parameters (format and type).

The GL will convert your pixel data to the internal format. If you want efficient texture uploads, you should use matching formats so that no conversion is needed. Be aware, however, that most GPUs store texture data in BGRA order. This is still represented by the internal format GL_RGBA: the internal format only describes the number of channels and the data type; the internal layout is entirely GPU-specific. That also means it is often recommended, for maximum performance, to use GL_BGRA as the format of your pixel data in client memory.
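
A minimal sketch of such a matching upload (desktop GL; tex and pixels are placeholder names, with pixels assumed to point at a 32 x 32 BGRA image in client memory):

```c
/* Matching upload: the client data is already in BGRA byte order,
 * so the driver can often copy it without swizzling. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D,
             0,                /* mipmap level */
             GL_RGBA8,         /* internal format: 4 channels, 8-bit normalized */
             32, 32,           /* width, height */
             0,                /* border, must be 0 */
             GL_BGRA,          /* client data is in B,G,R,A byte order */
             GL_UNSIGNED_BYTE, /* one byte per channel */
             pixels);
```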

Let's assume that data is an array of 32 x 32 pixel values with four bytes per pixel (unsigned char data, 0-255) for red, green, blue and alpha. What's the difference between the first GL_RGBA and the second one?
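
The call itself is not quoted here, but from the description it is presumably something along these lines (data being the array described above):

```c
/* Presumed call from the question: both the internal format (3rd argument)
 * and the client format (7th argument) are GL_RGBA. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);
```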

The first, internalFormat, tells the GL to store the texture as a 4-channel (RGBA) normalized-integer texture at the preferred precision (8 bits per channel). The second, format, tells the GL that you are providing 4 channels per pixel, in R, G, B, A order.

You could, for example, supply the data as 3-channel RGB data, and the GL would automatically extend it to RGBA (setting A to 1) if the internal format is left at RGBA. You could also supply only the red channel.

The other way around: if you use GL_RED as internalFormat, the GL will ignore the G, B and A channels of your input data.
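
A sketch of both directions (rgb_data and rgba_data are hypothetical client-memory buffers):

```c
/* 3-channel client data into a 4-channel internal format:
 * the GL extends each pixel to RGBA, setting A to 1. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0,
             GL_RGB, GL_UNSIGNED_BYTE, rgb_data);

/* 4-channel client data into a single-channel internal format:
 * the G, B and A values of the input are ignored. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 32, 32, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba_data);
```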

Also note that the data types will be converted. If you provide RGB pixel data with a 32-bit float per channel, you would use GL_FLOAT as the type. However, if you still use the GL_RGBA internal format, the GL will convert the values to normalized integers with 8 bits per channel, so the extra precision is lost. If you want the GL to keep the floating-point precision, you also have to use a floating-point internal format like GL_RGBA32F.
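
Sketched side by side (float_pixels is a hypothetical buffer of 32 x 32 RGB pixels with one 32-bit float per channel; GL_RGBA32F requires GL 3.0+ or ARB_texture_float):

```c
/* Precision lost: GL_RGBA stores normalized 8-bit integers,
 * so the float input is quantized to 8 bits per channel. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0,
             GL_RGB, GL_FLOAT, float_pixels);

/* Precision kept: GL_RGBA32F stores one 32-bit float per channel. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 32, 32, 0,
             GL_RGB, GL_FLOAT, float_pixels);
```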

Why is GL_RGBA_INTEGER invalid in this context?

The _INTEGER formats are for unnormalized integer textures. There is no automatic conversion for integer textures in the GL: you have to use an integer internal format, AND you have to specify your pixel data with some _INTEGER format, otherwise it will result in an error.
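
A sketch of a valid pairing next to the invalid one from the question (GL_RGBA8UI requires GL 3.0+; int_data is a hypothetical buffer):

```c
/* Valid: integer internal format paired with an _INTEGER client format. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 32, 32, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, int_data);

/* Invalid: normalized internal format with an _INTEGER client format;
 * this raises GL_INVALID_OPERATION. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, int_data);
```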


The format (7th argument), together with the type argument, describes the data you pass in as the last argument. So the format/type combination defines the memory layout of that data.

internalFormat (2nd argument) defines the format that OpenGL should use to store the data internally.
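
As a concrete illustration of what a format/type pair promises about client memory, here is a sketch for GL_RGBA with GL_UNSIGNED_BYTE (the array name is made up):

```c
/* format = GL_RGBA, type = GL_UNSIGNED_BYTE: the GL reads the last
 * argument as tightly packed 4-byte pixels in R,G,B,A byte order.
 * With default unpack state, the first pixel is the lower-left corner. */
unsigned char data[32 * 32 * 4];
data[0] = 255; /* R of the first pixel */
data[1] = 0;   /* G of the first pixel */
data[2] = 0;   /* B of the first pixel */
data[3] = 255; /* A of the first pixel */
```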

Often, the two will be very similar. And in fact, it is beneficial to make the two formats directly compatible; otherwise there will be a conversion while loading the data, which can hurt performance. Full OpenGL allows combinations that require conversions, while OpenGL ES limits the supported combinations so that conversions are not needed in most cases.

The reason GL_RGBA_INTEGER is not legal in this case is that there are rules about which conversions between format and internalFormat are supported. Here, GL_RGBA for the internalFormat specifies a normalized format, while GL_RGBA_INTEGER for format specifies that the input consists of values that should be used as integers. There is no conversion defined between the two.

While GL_RGBA for internalFormat is still supported for backwards compatibility, sized formats are generally used for internalFormat in modern versions of OpenGL. For example, if you want to store the data as an 8-bit-per-component RGBA image, the value for internalFormat is GL_RGBA8.
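
Side by side (a minimal sketch, reusing the data buffer from above):

```c
/* Unsized (legacy): the GL chooses the precision. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);

/* Sized (preferred in modern GL): explicitly 8 bits per component. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 32, 32, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);
```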

Frankly, I think there would be cleaner ways of defining these APIs, but this is just the way it works. Partly it evolved this way to maintain backwards compatibility with OpenGL versions where features were much more limited. Newer versions of OpenGL add the glTexStorage*() entry points, which make some of this nicer because they separate the allocation of the internal storage from the specification of the data.
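
A minimal sketch of that two-step approach (glTexStorage2D requires GL 4.2 or ARB_texture_storage; data as before):

```c
/* Allocate immutable storage: only levels, sized internal format
 * and dimensions; no client data is involved yet. */
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 32, 32);

/* Upload the data separately; format/type describe client memory,
 * exactly as with glTexImage2D. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 32, 32,
                GL_RGBA, GL_UNSIGNED_BYTE, data);
```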

Tags:

opengl