What is the result of NULL + int?
Let's take a trip back through the sordid history of OpenGL. Once upon a time, there was OpenGL 1.0. You used glBegin and glEnd to do drawing, and that was all. If you wanted fast drawing, you stuck things in a display list.
Then, somebody had the bright idea to be able to just take arrays of objects to render with. And thus was born OpenGL 1.1, which brought us such functions as glVertexPointer. You might notice that this function ends in the word "Pointer". That's because it takes pointers to actual memory, which will be accessed when one of the glDraw* suite of functions is called.
Fast-forward a few more years. Now, graphics cards have the ability to perform vertex T&L on their own (up until this point, fixed-function T&L was done by the CPU). The most efficient way to do that would be to put vertex data in GPU memory, but display lists are not ideal for that. Those are too hidden, and there's no way to know whether you'll get good performance with them. Enter buffer objects.
However, because the ARB had an absolute policy of making everything as backwards compatible as possible (no matter how silly it made the API look), they decided that the best way to implement this was to just use the same functions again. Only now, there's a global switch that changes glVertexPointer's behavior from "takes a pointer" to "takes a byte offset from a buffer object." That switch being whether or not a buffer object is bound to GL_ARRAY_BUFFER.
Of course, as far as C/C++ is concerned, the function still takes a pointer. And the rules of C/C++ do not allow you to pass an integer as a pointer. Not without a cast. Which is why macros like BUFFER_OFFSET exist. It's one way to convert your integer byte offset into a pointer.
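For illustration, here's how that macro (quoted in full at the top of the next answer) typically gets used; vbo is a placeholder buffer object name and 48 is an arbitrary byte offset:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(48)); // the "pointer" is really a 48-byte offset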
The (char *)NULL part simply takes the NULL pointer (which is usually a void* in C and the literal 0 in C++) and turns it into a char*. The + i just does pointer arithmetic on the char*. Because the null pointer usually has a zero address, adding i to it will increment the byte offset by i, thus generating a pointer whose value is the byte offset you passed in.
Of course, the C++ specification lists the results of BUFFER_OFFSET as undefined behavior. By using it, you're really relying on the compiler to do something reasonable. After all, NULL does not have to be zero; all the specification says is that it is an implementation-defined null pointer constant. It doesn't have to have the value of zero at all. On most real systems, it will. But it doesn't have to.
That's why I just use a cast.
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void*)48);
It's not guaranteed behavior either way (int->ptr->int conversions are conditionally supported, not required). But it's also shorter than typing "BUFFER_OFFSET". GCC and Visual Studio seem to find it reasonable. And it doesn't rely on the value of the NULL macro.
Personally, if I were more C++ pedantic, I'd use a reinterpret_cast<void*> on it. But I'm not.
Or you can ditch the old API and use glVertexAttribFormat et al., which is better in every way.
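For reference, a minimal sketch of that newer path (GL 4.3 / ARB_vertex_attrib_binding), where offsets and strides are plain integer parameters with no casts; vbo and the stride value are placeholders:
glVertexAttribFormat(1, 4, GL_FLOAT, GL_FALSE, 48); // attribute layout: relative offset in bytes
glVertexAttribBinding(1, 0);                        // attach attribute 1 to buffer binding point 0
glBindVertexBuffer(0, vbo, 0, 32);                  // buffer, base offset, stride - all integers
glEnableVertexAttribArray(1);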
#define BUFFER_OFFSET(i) ((char *)NULL + (i))
Technically the result of this operation is undefined, and the macro is actually wrong. Let me explain:
C defines (and C++ follows it) that pointers can be cast to integers, namely of type uintptr_t, and that an integer obtained that way, cast back into the original pointer type it came from, will yield the original pointer.
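A minimal sketch of that round trip in plain C++ (assuming the implementation provides uintptr_t, which is optional):
#include <cstdint>
float data[4] = {0.0f};
std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(&data[2]);
float *back = reinterpret_cast<float *>(bits); // back == &data[2] again, guaranteed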
Then there's pointer arithmetic, which means that if I have two pointers pointing to the same object, I can take the difference of them, resulting in an integer (of type ptrdiff_t), and that this integer, added to or subtracted from either of the original pointers, will yield the other. It is also defined that adding 1 to a pointer yields the pointer to the next element of an indexed object. Also, the difference of two uintptr_t values obtained from pointers into the same object, divided by sizeof(type pointed to), must be equal to the difference of the pointers themselves. And last but not least, the uintptr_t values may be anything. They could be opaque handles as well. They're not required to be the addresses (though most implementations do it that way, because it makes sense).
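The element-wise arithmetic described above, as a short self-contained example:
#include <cstddef>
float data[8];
float *p = &data[1];
float *q = &data[6];
std::ptrdiff_t d = q - p; // 5: measured in elements, not bytes
float *r = p + d;         // r == q again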
Now we can look at the infamous null pointer. C defines the pointer obtained by casting the integer value 0 to a pointer type as the invalid pointer. Note that this is always 0 in your source code. On the backend side, in the compiled program, the binary value used for actually representing it to the machine may be something entirely different! Usually it is not, but it may be. C++ is the same, but C++ doesn't allow for as much implicit casting as C, so one must cast 0 explicitly to void*. Also, because the null pointer does not refer to an object and therefore has no dereferenced size, pointer arithmetic is undefined for the null pointer. The null pointer referring to no object also means there is no definition for sensibly casting it to a typed pointer.
So if this is all undefined, why does this macro work after all? Because most implementations (meaning compilers) are extremely gullible, and compiler coders are lazy to the highest degree. The integer value of a pointer in the majority of implementations is just the value of the pointer itself on the backend side. So the null pointer is actually 0. And although pointer arithmetic on the null pointer is not checked for, most compilers will silently accept it if the pointer has some type assigned, even if it makes no sense. char is the "unit sized" type of C, if you want to say so. So pointer arithmetic on the cast result is like arithmetic on the addresses on the backend side.
To make a long story short: it simply makes no sense to try doing pointer magic with the intended result being an offset on the C language side; it just doesn't work that way.
Let's step back for a moment and remember what we're actually trying to do: the original problem was that the gl…Pointer functions take a pointer as their data parameter, but for Vertex Buffer Objects we actually want to specify a byte-based offset into our data, which is a number. To the C compiler, the function takes a pointer (an opaque thing, as we learned). The correct solution would have been the introduction of new functions especially for the use with VBOs (say gl…Offset – I think I'm going to rally for their introduction). Instead, what was defined by OpenGL is an exploit of how compilers work. Pointers and their integer equivalent are implemented as the same binary representation by most compilers. So what we have to do is make the compiler call those gl…Pointer functions with our number instead of a pointer.
So technically the only thing we need to do is tell the compiler: "yes, I know you think this variable a is an integer, and you are right, and that function glVertexPointer only takes a void* for its data parameter. But guess what: that integer was yielded from a void*", by casting it to (void*) and then crossing our fingers that the compiler is actually so stupid as to pass the integer value as it is to glVertexPointer.
So this all comes down to somehow circumventing the old function signature. Casting the pointer is, IMHO, the dirty method. I'd do it a bit differently: I'd mess with the function signature:
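/* Same parameter list as glVertexPointer, but with the data pointer
   retyped as an integer offset. uintptr_t comes from <stdint.h>; note
   that on Windows the typedef would also need GL's calling convention
   (APIENTRY) to be strictly correct. */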
typedef void (*TFPTR_VertexOffset)(GLint, GLenum, GLsizei, uintptr_t);
TFPTR_VertexOffset myglVertexOffset = (TFPTR_VertexOffset)glVertexPointer;
Now you can use myglVertexOffset without doing any silly casts, and the offset parameter will be passed to the function without any danger that the compiler may mess with it. This is also the very method I use in my programs.
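For example (assuming a buffer object is already bound to GL_ARRAY_BUFFER; the numbers are illustrative):
myglVertexOffset(3, GL_FLOAT, 0, 48); // 3 floats per vertex, tightly packed, 48-byte offset into the bound VBO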