Is calling destructor manually always a sign of bad design?
No, it depends on the situation; sometimes it is legitimate and good design.
To understand why and when you need to call destructors explicitly, let's look at what happens with "new" and "delete".
To create an object dynamically: T* t = new T;
Under the hood: 1. sizeof(T) bytes of memory are allocated. 2. T's constructor is called to initialize that memory. The new expression does two things: allocation and initialization.
To destroy the object: delete t;
Under the hood: 1. T's destructor is called. 2. The memory allocated for the object is released. The delete expression likewise does two things: destruction and deallocation.
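As a simplified sketch (ignoring the detail that a real new expression also releases the memory if the constructor throws), the two expressions above correspond roughly to:

#include <new>   // needed for placement new

struct T { /* ... */ };

void roughly_equivalent() {
    // T* t = new T;   is approximately:
    void* raw = operator new(sizeof(T));  // 1. allocate sizeof(T) bytes
    T* t = new (raw) T;                   // 2. construct a T in that memory

    // delete t;   is approximately:
    t->~T();                              // 1. destroy the object
    operator delete(raw);                 // 2. release the memory
}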
One writes the constructor to do the initialization and the destructor to do the destruction. When you explicitly call the destructor, only the destruction is done, but not the deallocation.
A legitimate use of explicitly calling the destructor, therefore, could be: "I only want to destroy the object, but I don't (or can't) release the memory (yet)."
A common example of this is pre-allocating memory for a pool of objects that would otherwise have to be allocated dynamically.
When creating a new object, you take a chunk of memory from the pre-allocated pool and do a "placement new". When you are done with the object, you may want to explicitly call the destructor to finish whatever cleanup work there is. But you don't actually deallocate the memory, as operator delete would have done. Instead, you return the chunk to the pool for reuse.
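Here is a minimal sketch of such a pool (the Pool class, its member names, and the fixed capacity are made up for illustration; a real pool would also worry about thread safety and error handling):

#include <cstddef>
#include <new>
#include <utility>
#include <vector>

// Hypothetical fixed-capacity pool, for illustration only.
template <typename T, std::size_t N>
class Pool {
public:
    Pool() {
        for (std::size_t i = 0; i < N; ++i)
            free_.push_back(storage_ + i * sizeof(T));    // every slot starts out free
    }

    template <typename... Args>
    T* create(Args&&... args) {
        if (free_.empty()) return nullptr;                // pool exhausted
        void* slot = free_.back();
        free_.pop_back();
        return new (slot) T(std::forward<Args>(args)...); // placement new into the slot
    }

    void destroy(T* p) {
        p->~T();                                          // explicit destructor call: destruction only
        free_.push_back(p);                               // the slot goes back to the pool, not the heap
    }

private:
    alignas(T) unsigned char storage_[N * sizeof(T)];     // pre-allocated raw storage, no T lives here yet
    std::vector<void*> free_;
};

With something like this, objects are constructed and destroyed repeatedly without a per-object heap allocation once the pool itself has been set up.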
Calling the destructor manually is required if the object was constructed using an overloaded form of operator new(), except when using the "std::nothrow" overloads:
#include <cstdlib>   // std::malloc, std::free
#include <new>       // std::nothrow, placement new

T* t0 = new (std::nothrow) T();
delete t0;           // OK: the std::nothrow overload pairs with ordinary delete

void* buffer = std::malloc(sizeof(T));
T* t1 = new (buffer) T();
t1->~T();            // required: delete t1 would be wrong here
std::free(buffer);
Outside of low-level memory management like the above, however, calling destructors explicitly is a sign of bad design. Most likely it is not just bad design but outright wrong (yes, using an explicit destructor call followed by a copy constructor call to implement the assignment operator is bad design and likely to be wrong).
With C++11 there is another reason to use explicit destructor calls: when using generalized (unrestricted) unions, it is necessary to explicitly destroy the current object and create the new object using placement new when changing the type of the represented object. Also, when the union is destroyed, it is necessary to explicitly call the destructor of the current object if it requires destruction.
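A minimal sketch of that situation (the type and member names are made up for illustration): a C++11 unrestricted union holding either an int or a std::string, where activating and deactivating the std::string member takes a placement new and an explicit destructor call.

#include <new>
#include <string>

union IntOrString {
    int i;
    std::string s;            // member with a non-trivial destructor

    IntOrString() : i(0) {}   // start with the int member active
    ~IntOrString() {}         // the code using the union must destroy s if it is active
};

int main() {
    IntOrString u;                  // currently holds the int
    new (&u.s) std::string("hi");   // switch to the string member: placement new
    u.s += " there";
    u.s.~basic_string();            // switch away (or before the union dies): explicit destructor call
    u.i = 42;                       // the int member is active again
}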
No, you shouldn't call it explicitly, because it would then be called twice: once for the manual call and again when the scope in which the object is declared ends, and destroying the same object twice is undefined behavior.
E.g.:
{
    Class c;
    c.~Class();   // destructor runs here...
}                 // ...and runs again at the end of the scope: the object is destroyed twice
If you really need to perform the same operations, you should have a separate method.
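For example (Class and its reset() member are made up for illustration), the shared cleanup can live in a named member function that the destructor also calls:

class Class {
public:
    ~Class() { reset(); }   // the destructor reuses the same cleanup
    void reset() {
        // release resources, clear state, ...
        // keep this safe to call more than once
    }
};

int main() {
    Class c;
    c.reset();   // explicit cleanup whenever you need it
}                // the destructor still runs exactly once, here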
There is a specific situation in which you may want to call a destructor on an object created with placement new, but it doesn't sound like something you will ever need.
All answers describe specific cases, but there is a general answer:
You call the dtor explicitly every time you need to just destroy the object (in the C++ sense) without releasing the memory the object resides in.
This typically happens in situations where memory allocation / deallocation is managed independently from object construction / destruction. In those cases, construction happens via placement new on an existing chunk of memory, and destruction happens via an explicit dtor call.
Here is a bare-bones example:
// needs #include <new> for placement new
{
    // Raw storage, suitably aligned for MyClass; no object is constructed yet.
    alignas(MyClass) char buffer[sizeof(MyClass)];
    {
        MyClass* p = new (buffer) MyClass;   // construct in the pre-existing storage
        p->dosomething();
        p->~MyClass();                       // destroy the object, keep the storage
    }
    {
        MyClass* p = new (buffer) MyClass;   // reuse the same storage for a new object
        p->dosomething();
        p->~MyClass();
    }
}
Another notable example is the default std::allocator when used by std::vector: elements are constructed in the vector during push_back, but the memory is allocated in chunks, so it pre-exists the element construction. Hence, vector::erase must destroy the elements, but it does not necessarily deallocate the memory (especially if new push_backs are likely to happen soon...).
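This separation is visible directly through the allocator interface; here is a minimal sketch using std::allocator and std::allocator_traits, roughly the kind of thing a vector does internally:

#include <memory>
#include <string>

int main() {
    std::allocator<std::string> alloc;
    using traits = std::allocator_traits<std::allocator<std::string>>;

    std::string* p = traits::allocate(alloc, 4);   // raw memory for 4 strings, nothing constructed
    traits::construct(alloc, p, "hello");          // construct only the first element

    traits::destroy(alloc, p);                     // what erase does: destructor only
    traits::deallocate(alloc, p, 4);               // the memory is released separately
}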
It is "bad design" in strict OOP sense (you should manage objects, not memory: the fact objects require memory is an "incident"), it is "good design" in "low level programming", or in cases where memory is not taken from the "free store" the default operator new
buys in.
It is bad design if it happens randomly around the code, and good design if it happens locally, inside classes specifically designed for that purpose.