Hello All,

I am reading the Effective C++ book by Scott Meyers. Regarding new, he mentions the following:

this additional bookkeeping data can more than double the amount
of memory needed for each dynamically allocated object (especially if the class contains no virtual functions).

This is in reference to "new" providing more memory than the size of the class. Now, I'm not able to understand what "bookkeeping" he is referring to that leads to the excess memory allocation. At most, I understand that there would be sizeof(pointer) bytes for the pointer to the vtable. But I don't understand what the other memory is needed for. Could anyone please explain what this excess memory is needed for?

Thanks!


I haven't got a copy of the Effective C++ book, so I cannot look up the section to verify this, but I suspect that the information Scott is referring to covers two things:

  1. The vtable (as you have already correctly surmised; see the sizeof comparison just below).
  2. Run-time type information (RTTI), if enabled in your compilation.
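
A quick way to put a number on the first point is to compare sizeof for the same data with and without a virtual function (a minimal sketch; the exact sizes depend on your compiler and platform, but on a typical 64-bit build the vtable pointer plus padding turns a 4-byte object into a 16-byte one):

    #include <iostream>

    struct Plain       { int x ; } ;                            // no virtual functions
    struct WithVirtual { int x ; virtual ~WithVirtual() {} } ;  // gains a vtable pointer

    int main()
    {
        std::cout << "Plain:       " << sizeof(Plain)       << " bytes\n"
                  << "WithVirtual: " << sizeof(WithVirtual) << " bytes\n" ;
    }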

The book specifically mentions an example in the form of prepending size information to the block of memory. If the object is small, the overhead of that size information can be prohibitive; it's only when the allocated objects get bigger that the "bookkeeping" information gets amortized away.

To clarify, Scott Meyers's explanation is as follows:

Because the default version of operator new is a general-purpose allocator, it must be prepared to allocate blocks of any size. Similarly, the default version of operator delete must be prepared to deallocate blocks of whatever size operator new allocated. For operator delete to know how much memory to deallocate, it must have some way of knowing how much memory operator new allocated in the first place. A common way for operator new to tell operator delete how much memory it allocated is by prepending to the memory it returns some additional data that specifies the size of the allocated block.

The broader point is essentially that the heap structure that does all the bookkeeping related to dynamic allocations can have significant overhead with respect to the memory blocks it manages. In simple terms, the heap is a data structure that keeps an index of all the free space available for allocation. Whenever you allocate and deallocate memory (with new and delete), the heap must be updated and entries added or removed, so you should expect a certain amount of memory and time overhead for all that bookkeeping. Be economical with dynamic allocations, and prefer other strategies (such as a memory pool) when allocating a large number of small objects.
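
To make the "prepending" idea concrete, here is a deliberately simplified sketch of a size-prefixed allocation (purely illustrative; real allocators are far more sophisticated, and this toy ignores alignment guarantees entirely):

    #include <cstddef>
    #include <cstdlib>
    #include <iostream>

    // Toy illustration only: store the requested size in a header placed
    // just in front of the pointer handed back to the caller.
    void* toy_alloc( std::size_t n )
    {
        std::size_t* block = static_cast<std::size_t*>( std::malloc( sizeof(std::size_t) + n ) ) ;
        if( !block ) return nullptr ;
        *block = n ;        // record the size in the header
        return block + 1 ;  // hand back the memory just past the header
    }

    void toy_free( void* p )
    {
        if( !p ) return ;
        std::size_t* block = static_cast<std::size_t*>(p) - 1 ;  // step back to the header
        std::cout << "freeing a block of " << *block << " bytes\n" ;
        std::free( block ) ;
    }

    int main()
    {
        void* p = toy_alloc( 4 ) ;  // a 4-byte request really consumes sizeof(std::size_t) + 4 bytes
        toy_free( p ) ;
    }

Every allocation pays for that header, which is why the overhead is most painful when there are many small objects.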

But I don't understand what the other memory is needed for.
Could anyone please explain what this excess memory is needed for?

There are very few people who could explain this better than Doug Lea.

...
This approach leads to fixed bookkeeping overhead per chunk. Because both size information and bin links must be held in each available chunk, the smallest allocatable chunk is 16 bytes in systems with 32-bit pointers and 24 bytes in systems with 64-bit pointers. These minimum sizes are larger than most people would like to see -- they can lead to significant wastage for example in applications allocating many tiny linked-list nodes. However, the 16 bytes minimum at least is characteristic of any system requiring 8-byte alignment in which there is any malloc bookkeeping overhead. - http://g.oswego.edu/dl/html/malloc.html

The article (written in 1996) is dated (particularly after the proliferation of multi-core systems), but it is still a very good read if one wants to understand dynamic memory allocation.
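
As rough back-of-the-envelope arithmetic for why tiny nodes suffer (the header size, alignment, and minimum chunk size below are assumed values for illustration; real allocators differ):

    #include <cstddef>
    #include <iostream>

    int main()
    {
        struct node { int value ; node* next ; } ;  // a tiny linked-list node

        const std::size_t request   = sizeof(node) ;  // e.g. 16 bytes on a typical 64-bit system
        const std::size_t header    = 8 ;             // assumed per-chunk bookkeeping
        const std::size_t alignment = 16 ;            // assumed chunk alignment
        const std::size_t chunk     = ( request + header + alignment - 1 ) / alignment * alignment ;

        std::cout << "requested " << request << " bytes per node, "
                  << "each chunk occupies about " << chunk << " bytes\n" ;
    }

With those assumed numbers, every node costs roughly twice what was asked for; the smaller the object, the worse the ratio gets.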

For objects with a small footprint and a large number of dynamic allocations, overloading the new and delete operators with a custom memory allocator can do much better. The archetypal example is Loki's small-object allocator.

An example using Boost pool:

    #include <boost/pool/object_pool.hpp>
    #include <cstddef>   // std::size_t
    #include <iostream>  // std::cin

    struct small
    {
        int i = 4 ;

        // pool dedicated to objects of this class
        static boost::object_pool<small> pool ;

        // class-specific operator new: take memory from the pool when possible
        static void* operator new( std::size_t n )
        {
            void* p = nullptr ;
            if( n == sizeof(small) ) p = pool.malloc() ;
            return p ? p : ::new char[n] ;  // fall back to the default allocator
        }

        // class-specific operator delete: return pool memory to the pool
        static void operator delete( void* p )
        {
            small* ps = static_cast<small*>(p) ;
            if( pool.is_from(ps) ) pool.free(ps) ;
            else ::delete[] static_cast<char*>(p) ;  // matches the ::new char[n] fallback
        }

        // operator new[] etc...
    };

    boost::object_pool<small> small::pool ;

    int main()
    {
        enum { N = 1024 * 1024 } ;
        for( int i = 0 ; i < N ; ++i ) new small ;  // the allocation loop
        std::cin.get() ;  // pause so memory usage can be inspected
    }

Run this program,
a. first with the allocation loop for( int i=0 ; i < N ; ++i ) new small ; commented out,
b. then with it uncommented,
c. and finally with it modified to for( int i=0 ; i < N ; ++i ) ::new small ; so that it uses the default allocator.

Measure the memory usage in each case, and you will get an idea of the overheads involved in the library implementation that you are using.

Have you ever noticed that when deleting dynamically allocated memory, you don't have to specify the number of bytes to deallocate? The bookkeeping you are referring to varies by compiler; however, in the simplest case, and this is something I have seen in Watcom C, the "size" of the run-time memory allocated by malloc is stored in the four bytes prior to the requested memory. A block of dynamically allocated memory could perhaps look like so:

                Pointer returned from malloc
                             |
                             V
    [Bookkeeping information | Your requested memory ......]

Now when you pass this pointer to free(), the pointer is simply decremented back to the bookkeeping information and the number of bytes that should be freed is extracted. It's good to have a little bit of an understanding of the existence of bookkeeping information; however, don't get hung up on this.
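
If you want to see this bookkeeping indirectly, some C libraries will tell you how big a block really is. For example, glibc provides the non-standard malloc_usable_size() (the rough Windows equivalent is _msize()); the following glibc-specific snippet typically reports a usable size well above the single byte requested:

    #include <cstdlib>
    #include <iostream>
    #include <malloc.h>  // malloc_usable_size(): glibc-specific, non-standard

    int main()
    {
        void* p = std::malloc( 1 ) ;  // ask for a single byte
        // The reported size exceeds the request because of the allocator's
        // minimum chunk size, alignment, and bookkeeping overhead.
        std::cout << "requested 1 byte, usable size is "
                  << malloc_usable_size( p ) << " bytes\n" ;
        std::free( p ) ;
    }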
