C++: converting a container to a container of different yet compatible type
Moreover for many cases I'm pretty sure that forcing a reinterpret_cast would work fine
I'd bet that it doesn't. Two containers that store different types are never guaranteed to be binary compatible, even if their contained objects are. And even if they happen to be binary compatible under a specific version of a specific compiler, that is an implementation detail which can change from one minor version to the next.
Relying on such undocumented behaviour is opening the door to many unpleasantly long nights of debugging.
If you want to pass such containers to a function, simply make the function a template so that containers of arbitrary type can be passed in. The same goes for classes. This is the whole point of templates, after all.
Ok, let me summarize the whole thing.
Your (correct!) answers say that in C++ binary compatibility* is never guaranteed for different types. It's undefined behavior to take the value of a memory area where a variable of one type is located and use it as a variable of a different type (and this most likely should be avoided even with variables of the same type).
Also, in real life this could be dangerous even for simple objects, never mind containers!
*: by binary compatibility I mean that the same value is stored in memory in the same way, and that the same assembly instructions are used in the same way to manipulate it. E.g., even if `float` and `int` are 4 bytes each, they are not binary compatible.
However I'm not satisfied by this C++ rule. Let's focus on a single case, like these two structures: `struct A { int a[1000000]; };` and `struct B { int a[1000000]; };`. We can't just use the address of an `A` object as if it were a `B` one, and this frustrates me for the following reasons:
The compiler statically knows whether those structures are binary compatible: once the executable has been generated, you could look at it and tell whether they are. The compiler just doesn't give us that information.
As far as I know, every C++ compiler that has ever existed treats data in a consistent way. I can't even imagine a compiler generating different representations for those two structures. What bugs me the most is that not only are those simple `A` and `B` structs binary compatible, but just about any container is too, if you use it with types you can expect to be binary compatible (I ran some tests with GCC 4.5 and Clang 2.8 on both custom containers and STL/Boost ones). Casting operators let the compiler do what I'm looking for, but only with basic types: if you cast an `int` to a `const int` (or an `int*` to a `char*`), and those two types are binary compatible, the compiler can (and most likely will) avoid making a copy and just reuse the same raw bytes.
My idea is then to create a custom `object_static_cast` that checks whether the source type and the destination type are binary compatible: if they are, it just returns a cast reference; otherwise it constructs a new object and returns that.
I hope not to be downvoted too much for this answer; I'll delete it if the SO community doesn't like it.
To check whether two types are binary compatible, I introduced a new type trait:
// NOTE: this trait cannot be safely implemented without explicit
// compiler support. It's dangerous, don't trust it.
template< typename T1, typename T2 >
struct is_binary_compatible : public boost::false_type{};
As the note says (and as said earlier), there's no way to actually implement such a type trait portably (just like `boost::has_virtual_destructor`, for example).
Then here is the actual `object_static_cast` implementation:
namespace detail
{
template< typename T1, typename T2, bool >
struct object_static_cast_class {
typedef T1 ret;
static ret cast( const T2 &in ) {
return T1( in );
}
};
// NOTE: this is a dangerous hack.
// You MUST be sure that T1 and T2 are binary compatible
// (`binary compatible` in the sense defined above).
// Plus, RTTI could cause some issues.
// Re-test this every time you change compiler.
template< typename T1, typename T2 >
struct object_static_cast_class< T1, T2, true > {
typedef const T1& ret;
static ret cast( const T2 &in ) {
return *( (const T1*)&in ); // sorry for this :(
}
};
}
// Casts @in (of type T2) to an object of type T1.
// Returns by reference when the types are binary compatible,
// by value (constructing a T1) otherwise.
template< typename T1, typename T2 >
inline typename detail::object_static_cast_class< T1, T2,
is_binary_compatible<T1, T2>::value >::ret
object_static_cast( const T2 &in )
{
return detail::object_static_cast_class< T1, T2,
is_binary_compatible<T1, T2>::value >::cast( in );
}
And here is a usage example:
struct Data {
enum { size = 1024*1024*100 };
char *x;
Data( ) {
std::cout << "Allocating Data" << std::endl;
x = new char[size];
}
Data( const Data &other ) {
std::cout << "Copying Data [copy ctor]" << std::endl;
x = new char[size];
std::copy( other.x, other.x+size, x );
}
Data & operator= ( const Data &other ) {
std::cout << "Copying Data [=]" << std::endl;
if ( this == &other ) return *this;
delete[] x; // fix: don't leak the previous buffer
x = new char[size];
std::copy( other.x, other.x+size, x );
return *this;
}
~Data( ) {
std::cout << "Destroying Data" << std::endl;
delete[] x;
}
bool operator==( const Data &other ) const {
return std::equal( x, x+size, other.x );
}
};
struct A {
Data x;
};
struct B {
Data x;
B( const A &a ) { x = a.x; }
bool operator==( const A &a ) const { return x == a.x; }
};
#include <cassert>
int main( ) {
A a;
const B &b = object_static_cast< B, A >( a );
// NOTE: this is NOT enough to check binary compatibility!
assert( b == a );
return 0;
}
Output:
$ time ./bnicmop
Allocating Data
Allocating Data
Copying Data [=]
Destroying Data
Destroying Data
real 0m0.411s
user 0m0.303s
sys 0m0.163s
Let's add these (dangerous!) lines before `main()`:
// WARNING! DANGEROUS! DON'T TRY THIS AT HOME!
// NOTE: using these, program will have undefined behavior: although it may
// work now, it might not work when changing compiler.
template<> struct is_binary_compatible< A, B > : public boost::true_type{};
template<> struct is_binary_compatible< B, A > : public boost::true_type{};
Output becomes:
$ time ./bnicmop
Allocating Data
Destroying Data
real 0m0.123s
user 0m0.087s
sys 0m0.017s
This should only be used at critical points (not to copy an array of 3 elements once in a while!), and to use this stuff we need at least to write some (heavy!) unit tests for all the types we declare binary compatible, in order to check that they still are whenever we upgrade our compilers.
Besides, to be on the safe side, the undefined-behaving `object_static_cast` should only be enabled when a macro is set, so that it's possible to test the application both with and without it.
As for my project, I'll be using this in one spot: in my main loop I need to cast a big container into a different one (which is likely to be binary compatible with mine).
Besides all the other issues dealt with by others:
- conversion does not imply the same memory footprint (think conversion operators...)
- potential specializations of the template class (a container in your question, but from the point of view of the compiler a container is just another template) even if the element types are themselves binary compatible
- unrelatedness of different instantiations of the same template (for the general case)
There is a basic problem in the approach that is not technical at all. Provided that an apple is a fruit, a container of fruit is not a container of apples (trivially demonstrated), nor is a container of apples a container of fruit. Try to fit a watermelon into a box of apples!
Going into more technical detail, and dealing specifically with inheritance, where no conversion is even required (a derived object already is an object of the base class): if you were allowed to cast a container of the derived type to one of the base type, you could add invalid elements to the container:
class fruit {};
class apple : public fruit {};
class watermelon : public fruit {};
std::vector<apple*> apples = buy_box_of_apples();
std::vector<fruit*> & fruits = reinterpret_cast< std::vector<fruit*>& >(apples);
fruits.push_back( new watermelon() ); // ouch!!!
The last line is perfectly correct: you can add a `watermelon` to a `vector<fruit*>`. But the net effect is that you have added a `watermelon` to a `vector<apple*>`, and in doing so you have broken the type system.
Not everything that looks simple at first glance is in fact sane. This is similar to the reason you cannot convert an `int**` to a `const int**`, even if your first thought is that it should be allowed. Allowing it would break the language (in this case const correctness):
const int a = 5;
int *p = 0;
int **p1 = &p; // perfectly fine
const int **p2 = p1; // should this be allowed??
*p2 = &a; // correct, p2 points to a pointer to a const int
**p1 = 100; // a == 100!!!
Which brings us back to the example you provided in one of the comments to another answer (to prove the point in general, I'll use a vector instead of a set, since set contents are immutable):
std::vector<int*> v1;
std::vector<const int*> &v2 = v1; // should this be allowed?
const int a = 5;
v2.push_back( &a ); // fine, v2 is a vector of pointers to constant int
// rather not: it IS a vector of pointers to non-const ints!
*v1[0] = 10; // ouch!!! a==10
Why not use the safe way:
C<T1> c1;
/* Fill c1 */
C<T2> c2(c1.begin(), c1.end());
and then profile? If it turns out to be a bottleneck, you can always revisit your underlying algorithm and perhaps remove the need for a conversion completely.
Relying on any particular behavior from `reinterpret_cast` may not cause problems now, but months or years from now it will almost certainly cause debugging headaches for someone.