OK, I know the difference between delete and delete[], but this question suddenly popped into my head, since nothing bad seems to happen when I use delete[] for everything.

I know when to use one or the other, but for some odd reason it still bothers me that delete[] on anything is accepted by g++/Code::Blocks.

Example:

struct Foo
{
    int* X;
    int NumberOfElements;
    //.......
};


int main()
{
    Foo* F = new Foo;   //Create ONE instance of Foo.
    delete[] F;         //Does NOTHING fishy. It just deletes F?

    //OR

    delete F;           //Does the same as the above, no? If yes, why bother ever using delete?


    Foo* FA = new Foo[10];  //Create 10 instances of Foo.
    delete[] FA;            //Deletes all instances of Foo pointed to by FA.

    //But not..
    delete FA;              //Big problem/leaks?
}

Why is it safe to use delete[] without getting a crash or something?

I was doing:

template<typename T>
void DeleteAll(T* &Type, size_t Size)
{
    if (Size > 1)
        delete[] Type;
    else
        delete Type;
    Type = NULL;
}

and I was wondering whether I even have to pass the size to pick the correct delete, since delete[] seems to behave no differently?
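
For what it's worth, here's roughly how I call it (just an illustration; the variable names are made up):

Foo* One = new Foo;
Foo* Many = new Foo[10];

DeleteAll(One, 1);    //Takes the delete branch.
DeleteAll(Many, 10);  //Takes the delete[] branch.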

Why is it safe to use delete[] without getting a crash or something?

You consider undefined behavior "safe"?

Well, I didn't know it was undefined. I thought delete[] just loops through everything and calls delete?
I guess my question should be "Why is it undefined?".

Also what if I do:

int main()
{
    Foo* F = new Foo[5];
    Foo* K = F;

    ++K;
    delete[] K;   //Does that delete all instances of F?

    //OR do I need to do:
    delete[] --K;
}

I guess my question should be "Why is it undefined?".

I'll raise your question with a counter question: what's stopping an implementation from using completely independent allocation pools and mechanisms for new/delete versus new[]/delete[]?

++K;
delete[] K; //Does that delete all instances of F?

No, that will most likely crash, because you're passing delete[] a pointer that's different from the one new[] gave you. If you didn't increment K, it would correctly free the memory, but you'd then need to keep in mind that F and K both alias the freed memory.
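
To illustrate the safe pattern, here's a minimal sketch (reusing the Foo from the original post): keep the pointer that new[] returned and pass exactly that to delete[].

int main()
{
    Foo* F = new Foo[5];  //F is exactly the pointer new[] returned.
    Foo* K = F;           //K starts out as another name for the same address.

    ++K;                  //K now points at F[1], not at the start of the block.

    delete[] F;           //Fine: F is still the pointer new[] gave us.
    //delete[] K;         //Not fine: K is no longer that pointer.

    F = NULL;             //Both F and K are dangling after the delete[].
    K = NULL;
}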

As deceptikon said, mixing new/delete with new[]/delete[] is undefined behavior because there is no requirement for the implementation (compiler + standard library) to use the same underlying mechanism for both variants. That is what "undefined behavior" means here: nothing in the C++ standard defines the behavior such code should produce, so you can't reasonably expect anything predictable out of it. In addition, you can easily overload those operators and create your own allocation mechanisms, which would make mixing the two variants result in a crash (or other corruption).
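
For instance, here's a toy sketch of what deceptikon is describing. Bar and its little pool are invented purely for illustration, and the pool deliberately skips alignment and bounds checking; the point is only that the single-object and array forms can use completely independent mechanisms.

#include <cstdlib>
#include <cstddef>

struct Bar
{
    int X;

    //Single objects come from malloc/free...
    static void* operator new(std::size_t Size)  { return std::malloc(Size); }
    static void  operator delete(void* P)        { std::free(P); }

    //...while arrays come from a completely separate (toy) pool.
    static void* operator new[](std::size_t Size)
    {
        static char Pool[4096];
        static std::size_t Used = 0;
        void* P = Pool + Used;   //No alignment or bounds checking; sketch only.
        Used += Size;
        return P;
    }
    static void operator delete[](void*) { /*The toy pool never releases memory.*/ }
};

int main()
{
    Bar* A = new Bar[10];
    delete[] A;    //Fine: new[] is paired with delete[].

    //delete A;    //Would hand a pool pointer to free() -- likely crash or corruption.
}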

I thought delete[] just loops through everything and calls delete?

That's not true at all. The main difference between delete and delete[] is that the former expects a single object to exist at the given address, and thus calls the destructor on that object alone. The delete[] operator, on the other hand, expects an array of objects to exist starting at the given address, has some mechanism to figure out how many there are (probably something like asking the heap for the size of that memory block and dividing it by the size of the class (or type)), and then calls the destructor on each object individually. Only after the calls to the destructors does the memory get deallocated ("deleted").

Most implementations probably perform the memory deallocation itself in exactly the same way for both the delete and delete[] operators, but again, that's not a guarantee, simply an explanation of why calling delete[] on memory allocated with new happens to work on many implementations. When you call delete[] on a block allocated with new, whatever mechanism figures out how many objects exist at the given address will probably conclude that only one exists, call the destructor on that one object, and then deallocate the memory with the same mechanism as the delete operator, and thus no crash or heap corruption. But this is only an educated guess at what would happen on different implementations; it is not a guarantee, and so it is undefined behavior.
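
If you want to see the per-element destructor calls for yourself, here is a quick sketch (the Noisy struct is just made up to print something from its destructor):

#include <cstdio>

struct Noisy
{
    ~Noisy() { std::printf("destructor called\n"); }
};

int main()
{
    Noisy* One  = new Noisy;
    Noisy* Many = new Noisy[3];

    delete One;     //Prints the message once.
    delete[] Many;  //Prints it three times, once per element.
}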
