this is the final version of the priority queue i implemented {with some of your help!}...

the problem is that when i profile the code with valgrind, it shows that i have a memory leak....

i can also see that, if i set certain test points in my code {where i pause the execution} and then check the process manager.... It seems that although i clean up the memory, something stays behind...

i've used the simple segregated storage scheme that vijayan121 suggested.. i also have a version where i visit every node to delete it... using a preorder tree walk, or the method Hamrick suggested {with a stack}

all versions have the exact same problem, which tells me that the memory leak is not in the structure..... but where else can it be??


I attach the version with the segregated scheme along with the makefile i used to test the structure with valgrind

thank you in advance for your time...

If this is for a school assignment then you probably should not use the boost library without your instructor's prior permission because he (or she) may not be able to compile it on school computers.

check the function LeftistHeap::initializeHeap . valgrind is reporting 80 bytes in 10 unreachable blocks of memory allocated in this function. this is a leak.
the 2 reachable blocks are not memory allocated from your code (they are allocations by libc.so and will be released on program exit) and do not constitute a leak.

>> If this is for a school assignment then you probably should not use the boost library without your instructor's prior permission because he (or she) may not be able to compile it on school computers.

it began as a university project; now the add-ons are basically for fun and learning...

>> check the function LeftistHeap::initializeHeap . valgrind is reporting 80 bytes in 10 unreachable blocks of memory allocated in this function. this is a leak.
>> the 2 reachable blocks are not memory allocated from your code (they are allocations by libc.so and will be released on program exit) and do not constitute a leak.

This is what LeftistHeap::initializeHeap does:
In order to get a linear time initialization i create n heaps, each containing one of the n elements. Then i place these heaps on a FIFO queue. The heaps are then removed from this queue in pairs, merged, and added to the end of the queue until only one heap remains. This heap is assigned to the root pointer....

void LeftistHeap::initializeHeap(std::vector<int> & vec)
{
     queue<LeftistHeap *> q;

     for(int i=0; i<vec.size(); ++i)
     {
         q.push(new LeftistHeap(vec[i]) );
     }

     LeftistHeap *temp;

     while ( q.size()!=1)
     {
         temp = q.front();
         q.pop();
         //the line below is equivalent to :: temp = merge(temp, q.front())
         temp->merge( *(q.front()) );
         q.pop();

         q.push(temp);
     }

     // assign the final heap.....
     root=merge(q.front()->root, NULL);
}

here is the only line where i allocate space in the program

q.push(new LeftistHeap(vec[i]) );

So to see if this has any problems i created a new main that does only this::

int main(int argc, char *argv[])
{
    LeftistHeap init(5);

    return 0;
}

if i test this program with valgrind, i don't get any errors.... which is weird, because i thought i should have to include this line to empty the pool {if i have understood correctly how the pool operates}

myNode::node_pool.purge_memory() ;

any other ideas?

>> here is the only line where i allocate space in the program
>> q.push(new LeftistHeap(vec[i]) );

the leak is because of that line. sizeof(LeftistHeap) == sizeof(pointer) + sizeof(int), which adds up to 8 bytes on i386. the leak reported by valgrind is

80 bytes in 10 blocks are definitely lost in loss record 1 of 2
==1599== at 0x3C03D2F3: operator new(unsigned) (in /usr/local/lib/valgrind/vgpreload_memcheck.so)
==1599== by 0x804936E: LeftistHeap::initializeHeap(std::vector<int, std::allocator<int> >&) (LeftistHeap.cpp:206)

and in main.cpp we have enum { HOW_MANY_ELEMENTS = 10, TEST_ITERATIONS = 1, DISTRIBUITION_AREA=1024*1024 }; the blocks lost are the 10 LeftistHeap objects that you allocate using new and never delete.

change HOW_MANY_ELEMENTS to 1000 and valgrind will report 8000 bytes in 1000 blocks as definitely lost.
the dynamically allocated LeftistHeap objects are leaking; not the nodes

>> if i test this programm with valgrind, i don't get any errors.... which is weird...

valgrind automagically knows about memory that is still reachable. node_pool (which has static storage duration and is still not destroyed at the point main returns) has references to the blocks it has allocated; so the memory is not *definitely* lost. in fact, the destructor of node_pool will free this memory.
you could verify this by using the valgrind switch --show-below-main=yes (continue stack traces below main)
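
for example {assuming the test binary is called ./project, as it is later in this thread}:

valgrind --leak-check=yes --show-reachable=yes --show-below-main=yes ./project

--show-reachable=yes makes valgrind list the still-reachable blocks (like the ones held by node_pool) as well, instead of only the definitely lost ones.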

>> q.push(new LeftistHeap(vec[i]) ); the leak is because of that line;

but why? i thought that with this line the queue queue<LeftistHeap *> q; would be made responsible for disposing of the elements....

>> sizeof(LeftistHeap) == sizeof(pointer) + sizeof(int), which adds up to 8 bytes on i386. the leak reported by valgrind is
>> and in main.cpp we have enum { HOW_MANY_ELEMENTS = 10, TEST_ITERATIONS = 1, DISTRIBUITION_AREA=1024*1024 }; the blocks lost are the 10 LeftistHeap objects that you allocate using new and never delete.

something irrelevant: why is the size of LeftistHeap just sizeof(pointer) + sizeof(int)?? I also have three pointers in the LeftistHeap class...

>> change HOW_MANY_ELEMENTS to 1000 and valgrind will report 8000 bytes in 1000 blocks as definitely lost.
>> the dynamically allocated LeftistHeap objects are leaking; not the nodes

I had observed this behaviour; that's why i thought it was the nodes leaking, and also why i set the size to 10

>> valgrind automagically knows about memory that is still reachable. node_pool (which has static storage duration and is still not destroyed at the point main returns) has references to the blocks it has allocated; so the memory is not *definitely* lost. in fact, the destructor of node_pool will free this memory.
>> you could verify this by using the valgrind switch --show-below-main=yes (continue stack traces below main)

this i didn't know! thank you

here is what i did... if the queue isn't responsible then i am, so i did something like this:

LeftistHeap *a=new LeftistHeap(vec[i]);
q.push(a);
delete a;

The program compiles, but the structure doesn't work afterwards... as i would expect, because i delete what the queue elements point to, and logically valgrind reports Invalid read of size 4...

So the question remains: how can i deallocate something that i am not in control of? because when i push the pointer onto the queue i lose all control of that memory....

thanks again for your time,
nick

PS:: sorry if i ask silly things, but i am kinda lost...

>> but why? i thought that with this line the queue queue<LeftistHeap *> q; would be made responsible for disposing of the elements....

the queue owns pointers to LeftistHeaps, not the LeftistHeaps themselves. so it would release what it owns, i.e. the pointer, not the pointed-to object. the behaviour would have been different for a queue<LeftistHeap>.
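
a minimal sketch of the difference {a hypothetical Heap type, not the actual LeftistHeap}:

#include <queue>

struct Heap { int* root; };

int main()
{
    std::queue<Heap*> byPointer;
    byPointer.push( new Heap() );   // the queue stores a copy of the pointer
    byPointer.pop();                // ...and destroys only that pointer: the Heap object leaks

    std::queue<Heap> byValue;
    byValue.push( Heap() );         // the queue stores a copy of the whole object
    byValue.pop();                  // ...and destroys its copy: nothing leaks
    return 0;
}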

>> why is the size of LeftistHeap just sizeof(pointer) + sizeof(int)?? I also have three pointers in the LeftistHeap class...

the size of an object would include the size of its non-static member variables ( + a vptr if there are virtual functions + one vbaseptr for each virtual base class + any padding; however for your LeftistHeap none of these apply). the only non-static member variables of a LeftistHeap are myNode *root; int emptyFlag;
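
you can see this with a small program {hypothetical types; the exact numbers depend on the platform and padding, 4-byte pointers/ints assumed as on i386}:

#include <iostream>

struct Plain   { int* root; int flag; };                        // non-static members only
struct Virtual { int* root; int flag; virtual ~Virtual() {} };  // same members + a vptr

int main()
{
    std::cout << sizeof(Plain)   << '\n';   // typically 8 on i386  (4 + 4)
    std::cout << sizeof(Virtual) << '\n';   // typically 12 on i386 (vptr + 4 + 4)
    return 0;
}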

>> here is what i did... if the queue isn't responsible then i am, so i did something like this:
>> .....
>> doesn't work afterwards...as i would expect...
>> how can i deallocate something that i am not in control of, because when i push the pointer onto the queue i lose all control of that memory....

the queue will safely keep (a copy of) anything that you push into it. and give it back to you on demand.
do something like this instead.

void LeftistHeap::initializeHeap(std::vector<int> & vec)
{
   //pray to work!!!
   queue<LeftistHeap *> q;

   for(int i=0; i<vec.size(); ++i)
   {
         q.push(new LeftistHeap(vec[i]) );
   }

   LeftistHeap *temp;

   while ( q.size()!=1)
     {
           temp = q.front();
           q.pop();
           //i.e. temp = merge(temp, q.front())
           temp->merge( *(q.front()) );

           // delete the LeftistHeap which we have just merged in to temp
           delete q.front() ; // *** this was added to the original code ***
           q.pop();

           q.push(temp);
     }

     // assign the final heap.....
     root=merge(q.front()->root, NULL);
     // delete the one temporary LeftistHeap remaining
     delete q.front() ; // *** this was added to the original code ***
}

note 1: i do not think that prayer is an effective antidote to problems in software.
note 2: no one is going to accuse you of writing fiendishly efficient code!

commented: great helper! +1

Thank you for solving this problem!! :cool:

>> the queue owns pointers to LeftistHeaps, not the LeftistHeaps themselves. so it would release what it owns, i.e. the pointer, not the pointed-to object. the behaviour would have been different for a queue<LeftistHeap>.

that makes sense!

>> note 1: i do not think that prayer is an effective antidote to problems in software.

:D, that couldn't be more true... but when you are frustrated with a program, your comments end up including funny stuff

>> note 2: no one is going to accuse you of writing fiendishly efficient code!

sorry, my english doesn't help me... fiendishly efficient code? :confused: does that mean that the code is efficient or not?

if there are more corrections that can be made, it would be great if you could point them out, vijayan!

clearly the code (creating, merging, destroying a lot of heaps) you have is not very efficient. an alternative would be
1. write a function to add one element to an existing heap (push_heap).
2. to create a heap of N elements, start with an empty heap and iteratively call push_heap for each of the N elements.
this will do away with the need for creation/destruction of N heap objects.
i do not know the leftist heap data structure, but like in the standard heap (make_heap), there could be a faster algorithm to do this in linear time.

>> clearly the code (creating, merging, destroying a lot of heaps) you have is not very efficient. an alternative would be
>> 1. write a function to add one element to an existing heap (push_heap).
>> 2. to create a heap of N elements, start with an empty heap and iteratively call push_heap for each of the N elements.
>> this will do away with the need for creation/destruction of N heap objects.
>> i do not know the leftist heap data structure, but like in the standard heap (make_heap), there could be a faster algorithm to do this in linear time.

I was thinking of something like what you said, but then i read this on wikipedia http://en.wikipedia.org/wiki/Leftist_tree ::

Initializing a height biased leftist tree is primarily done in one of two ways. The first is to merge each node one at a time into one HBLT. This process is inefficient and takes O(nlogn) time. The other approach is to use a queue to store each node and resulting tree. The first two items in the queue are removed, merged, and placed back into the queue. This can initialize a HBLT in O(n) time. This approach is detailed in the three diagrams supplied. A min height biased leftist tree is shown.

do you think that the above could be implemented in a more efficient way?

you are right (read i was wrong). the technique you have used is indeed O(N).
you could improve the performance by avoiding the calls to new/delete. using a queue having leftist_heap by value (not pointer to dynamically allocated objects) would make it even faster. note that the copy/assignment/destroy of (your implementation of) leftist_heap are trivial operations. it would still be O(N), but you would not have to call new/delete N times.

first of all, i am sorry i didn't respond earlier, damned exams...

>> you could improve the performance by avoiding the calls to new/delete. using a queue having leftist_heap by value (not pointer to dynamically allocated objects) would make it even faster.
>> ....... but you would not have to call new/delete N times.

Why do the calls to new/delete harm the performance more than passing by value?

I have an overloaded copy constructor::

LeftistHeap::LeftistHeap(const LeftistHeap & rhs )
{
            root = NULL;
            *this = rhs;
}

It is ok for this purpose, right?

i was thinking of beginning with something like this::

void LeftistHeap::initializeHeap(std::vector<int> & vec)
{
   queue<LeftistHeap> q;

   LeftistHeap temp;

   for(int i=0; i<vec.size(); ++i)
   {
         q.push(temp(vec[i]) );
   }

   //void LeftistHeap::merge( LeftistHeap & rhs )
   while ( q.size()!=1)
     {
           LeftistHeap temporal( *(q.front()) );
           q.pop();
           //i.e. temp = merge(temp, q.front())
           temporal.merge( *(q.front()) );

           // delete the LeftistHeap which we have just merged in to temp
           q.pop();

           q.push(temporal);
     }

    //how to change this line??
     root=merge(q.front()->root, NULL);
}

The way i see it, with passing by value the queue will be responsible for the elements, so if i pop one it will also be deleted... so how can i assign it to root without having to copy the elements one by one?

Thanks again for your time,
Nick

>> Why do the calls to new/delete harm the performance more than passing by value?
this copy constructor does nothing more than the trivial bitwise copy that the compiler would have generated.

LeftistHeap::LeftistHeap(const LeftistHeap & rhs )
{
            root = NULL;
            *this = rhs;
}

as you have not declared an operator=, *this = rhs does a trivial bitwise assignment. (you can safely comment this constructor out; the inline bitwise copy that the compiler would generate would be faster than your out-of-line implementation and will do the same thing.) the same observation also holds for the do-nothing destructor that you have.
since you are not using deep copy semantics, copying a LeftistHeap copies a pointer and an integer; destroying it does nothing at all. new LeftistHeap, on the other hand, requires a call to operator new to allocate dynamic memory, and delete LeftistHeap a call to operator delete to release it. these would typically require several dozen instructions to be executed. and you are doing this a very large number (several million) of times.
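
a rough way to see the difference is a micro-benchmark {hypothetical code, using a type with the same trivially copyable layout; std::clock has coarse resolution, so the numbers are only indicative}:

#include <cstdio>
#include <ctime>
#include <queue>

struct Trivial { void* root; int emptyFlag; };   // same layout as the LeftistHeap above

int main()
{
    const int N = 1000000;

    std::clock_t t0 = std::clock();
    std::queue<Trivial*> byPtr;
    for( int i = 0; i < N; ++i ) byPtr.push( new Trivial() );       // N calls to operator new
    while( !byPtr.empty() ) { delete byPtr.front(); byPtr.pop(); }  // N calls to operator delete
    std::clock_t t1 = std::clock();

    std::queue<Trivial> byVal;
    for( int i = 0; i < N; ++i ) byVal.push( Trivial() );  // trivial 8-byte copies; the deque
    while( !byVal.empty() ) byVal.pop();                   // underneath allocates in large chunks
    std::clock_t t2 = std::clock();

    std::printf( "by pointer: %ld clocks\nby value:   %ld clocks\n",
                 (long)(t1 - t0), (long)(t2 - t1) );
    return 0;
}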

>> ...so if i pop one, it will also be deleted so how can i ...
create a temporary copy (we have seen that this is inexpensive). and to avoid even this, try to use the object before popping it if possible. eg.

// note: added const to the input parameter to make it const-correct.
void LeftistHeap::initializeHeap( const std::vector<int>& vec )
{
      queue<LeftistHeap> q;
      for( int i=0; i<vec.size(); ++i ) q.push( LeftistHeap(vec[i]) );

     while ( q.size() > 1 )
     {
           LeftistHeap temp = q.front();
           q.pop();
           temp.merge( q.front() );
           q.pop();
           q.push(temp);
     }
     // the only LeftistHeap in the queue is still alive at this point. 
     root = merge( q.front().root, NULL );
}
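
a hypothetical driver for it {assuming LeftistHeap has a default constructor that sets root to NULL}:

#include <vector>
#include "LeftistHeap.h"

int main()
{
    std::vector<int> v;
    for( int i = 0; i < 15; ++i ) v.push_back( (i * 7) % 15 );  // some arbitrary values

    LeftistHeap h;
    h.initializeHeap( v );  // builds the heap in O(n), with no new/delete of heaps at all
    return 0;
}
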
commented: very helpful post! +1

>> this copy constructor does nothing more than the trivial bitwise copy that the compiler would have generated
>> .......
>> times.

wow!!, i didn't know that... basically i was considering the new operator as a magic operator that returned memory! {i hadn't read the part in Stroustrup's book that explains what it really is}.. now i have an idea, thank you!

>> ...so if i pop one, it will also be deleted so how can i ...
>> create a temporary copy (we have seen that this is inexpensive). and to avoid even this, try to use the object before popping it if possible. eg.

// note: added const to the input parameter to make it const-correct.
void LeftistHeap::initializeHeap( const std::vector<int>& vec )
{
//....
     root = merge( q.front().root, NULL );  //this is where i think the problem is...
}

I still can't understand one thing, the same as yesterday, so i will rephrase the question. If i am inserting things into the queue by value {so the queue has ownership} and then keep a pointer into that queue {with root=merge( q.front().root, NULL );}, when i leave the function scope (and the queue is destroyed) won't i have a problem?

Anyhow, i tested your code with 15 elements (it compiled), but when i ran it {and printed out the heap} i got a core dumped / segmentation fault error...

Thanks again for your help!
Is there any way to get past the error... because i liked the overall idea of dropping the new operator!

>> root = merge( q.front().root, NULL );

that line is equivalent to root = q.front().root;

>> If i am inserting things into the queue by value {so the queue has ownership} and then keep a pointer into that queue {with root=merge( q.front().root, NULL );}, when i leave the function scope (and the queue is destroyed) won't i have a problem?
the destructor of the leftistheap does not delete its nodes (in fact it does nothing). so if we merge the root of a leftistheap into another leftistheap and destroy the original leftistheap, the nodes will remain.
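
a sketch of what this means {hypothetical minimal types, and plain new instead of the boost pool, just for illustration}:

struct Node { int v; };

struct Heap
{
    Node* root;
    ~Heap() {}   // does nothing: the nodes are *not* deleted here
};

int main()
{
    Node* kept;
    {
        Heap tmp;
        tmp.root = new Node();  // tmp 'owns' one node
        kept = tmp.root;        // 'merge': copy the root pointer out
    }                           // tmp is destroyed here; the node survives
    kept->v = 42;               // still perfectly valid
    delete kept;                // with the pool, purge_memory() reclaims nodes instead
    return 0;
}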

>> i tested your code with 15 elements (it compiled), but when i ran it {and printed out the heap} i got a core dumped / segmentation fault error...
i have not been able to reproduce that error even with a million elements, with a simulated printout of the entire heap. this is an extract of the valgrind output.

>valgrind --leak-check=yes -v --show-reachable=yes ./project
...
==19458== Command line
==19458==    ./project
==19458== Startup, with flags:
==19458==    --leak-check=yes
==19458==    -v
==19458==    --show-reachable=yes
...
==19458== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
==19458== malloc/free: in use at exit: 8192 bytes in 2 blocks.
==19458== malloc/free: 32802 allocs, 32800 frees, 54861840 bytes allocated.
==19458==
==19458== searching for pointers to 2 not-freed blocks.
==19458== checked 2456944 bytes.
==19458==
==19458== 8192 bytes in 2 blocks are still reachable in loss record 1 of 1
==19458==    at 0x3C03D183: malloc (in /usr/local/lib/valgrind/vgpreload_memcheck.so)
==19458==    by 0x3C20B099: __smakebuf (in /lib/libc.so.6)
==19458==    by 0x3C20AF64: __swsetup (in /lib/libc.so.6)
==19458==    by 0x3C1ECDE1: __swbuf (in /lib/libc.so.6)
==19458==
==19458== LEAK SUMMARY:
==19458==    definitely lost: 0 bytes in 0 blocks.
==19458==    possibly lost:   0 bytes in 0 blocks.
==19458==    still reachable: 8192 bytes in 2 blocks.
==19458==         suppressed: 0 bytes in 0 blocks.
...

i am attaching the code with the suggested modifications along with the complete valgrind output. you could have a look at it and identify the differences between that and your code.

ok, i tried, but daniweb does not seem to allow uploads of tar.gz or tar.bz2 files (gives an error 'invalid file'). surprising for a site that hosts a software forum. as a work-around, you could send me a test mail (not daniweb pm, i do not think that would work either) and i can attach the file in the reply.

>> i tested your code with 15 elements (it compiled), but when i ran it {and printed out the heap} i got a core dumped / segmentation fault error...
>> i have not been able to reproduce that error even with a million elements, with a simulated printout of the entire heap. this is an extract of the valgrind output.

once again my mistake!, i've put the print statement after the deletion of the structure.... how absent-minded can i be? --- i can hear Salem making a comment on this :P -- void mainers and careless programmers are doomed!! {just a joke!}

>> If i am inserting things into the queue by value {so the queue has ownership} and then keep a pointer into that queue {with root=merge( q.front().root, NULL );}, when i leave the function scope (and the queue is destroyed) won't i have a problem?
>> the destructor of the leftistheap does not delete its nodes (in fact it does nothing). so if we merge the root of a leftistheap into another leftistheap and destroy the original leftistheap, the nodes will remain.

I just understood what you are saying, but it still seems weird {that the destructor doesn't do anything}, because if the nodes remain and there isn't any leftist heap to contain them {i.e. if i didn't merge, but just created a queue and then deleted it...} then isn't this considered a memory leak?

PS:: for the record, your technique had actual results: i could create 1 million elements in about 60-70 milliseconds, and now i can do the same in just 43ms !!!!

thanks again for all your help vijayan121....

I deleted the implementations of the copy constructor and destructor... let the compiler do the job... and the results:::

1 million elements in 38 ms!!!

>> 1 million elements, 38 ms
that seems reasonable. what were the compiler switches? and i suppose it was with the use of boost.pool?

>> it seems weird {that the destructor doesn't do anything} because if the nodes remain and there isn't any leftist heap to contain them{i.e. i didn't merge, but just created a queue and then deleted it...} then this is not considered as a memory leak
not really. the heaps are merged in pairs and put back into the queue till only one heap remains. the pointer to the root node of the last heap in the queue (the one that contains the fully merged heap) is copied to the member variable root of the heap pointed to by the this pointer. so even if the queue (and the heap that it contains) are destroyed, the nodes remain. till you call boost::pool::purge_memory().
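
the pool's life cycle in a minimal sketch {using boost.pool's documented malloc/free/purge_memory interface; Node is a hypothetical stand-in for myNode}:

#include <boost/pool/pool.hpp>
#include <new>

struct Node { int value; Node* left; Node* right; };

boost::pool<> node_pool( sizeof(Node) );   // static storage duration, as in the attached code

int main()
{
    void* raw = node_pool.malloc();        // one chunk from the pool
    Node* n = ::new( raw ) Node();         // construct the node in place
    n->~Node();                            // destroy it (trivial here)
    node_pool.free( raw );                 // the chunk goes back to the pool, not to the system
    node_pool.purge_memory();              // *now* the memory goes back to the system
    return 0;
}                                          // (the pool destructor would also have freed it)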

>> daniweb does not seem to allow uploads of tar.gz or tar.bz2 files
let me try something. great, this works! note: fool_daniweb.txt is *not* a text file; it is really a tar.bz2

Thank you for all your help!

PS:: i haven't used any particular compiler switches..

i marked this thread as unsolved, again, because unfortunately i found a new error {one that i have never seen before}.

The application works like it should, but if i increase the number of iterations from 1 to {let's say} 9, i get weird behaviour....

First of all, the application gets really slow from the fifth iteration onwards... If the desktop environment doesn't get stuck, i get a core dumped error...

If i run the application with valgrind i get the following warnings::

==6837== Warning: set address range perms: large range 134217736 (undefined)
==6837== Warning: set address range perms: large range 134217768 (noaccess)
==6837== Warning: set address range perms: large range 268435464 (undefined)
==6837== Warning: set address range perms: large range 268435496 (noaccess)
==6837== Warning: set address range perms: large range 536870920 (undefined)

Plus some more that look like the ones above...

note:: i haven't managed to finish a whole run of valgrind...

I also tried to compile with the -Wall option enabled and i get the following warnings::

LeftistHeap.cpp: In member function ‘LeftistHeap* LeftistHeap::createHeap(std::vector<int, std::allocator<int> >&)’:
LeftistHeap.cpp:166: warning: comparison between signed and unsigned integer expressions
LeftistHeap.cpp: In member function ‘void LeftistHeap::initializeHeap(std::vector<int, std::allocator<int> >&)’:
LeftistHeap.cpp:193: warning: comparison between signed and unsigned integer expressions

Any ideas? What can it be... i mean, with fewer iterations i don't get any problems and everything works like it should; valgrind also reports that i don't have any memory leak... Why do i get these results with more iterations?

Thanks for your help in advance...:)

>> First of all, the application gets really slow from the fifth iteration onwards...
ok, this happens on my machine too, though not from the fifth iteration, but the core dump occurs later at about the 11th or 12th iteration. increasing the sysctl for max memory per process (in /boot/loader.conf in freebsd) does seem to remove that, but that only masks the problem; it will resurface at some higher iteration. and the slowing down continues to happen.

>> if the desktop environment doesn't get stuck, i get a core dumped error...
no matter what happens in your program, the machine or the desktop environment has no business getting stuck. which linux distro are you using? even if you do want to use linux, you would be better off switching to a distro like slackware or debian where you would be unlikely to get into such problems. and from what i've seen so far, avoid the (unfortunately popular) *buntu disasters.

>> Any ideas? What can it be... i mean, with fewer iterations i don't get any problems and everything works like it should; valgrind also reports that i don't have any memory leak... Why do i get these results with more iterations?
i've only had a cursory look at the problem; my suspicion right now is in the boost::pool<> implementation. i can't say this with any confidence at all (and may be completely wrong); i really need more time to investigate it and come to a definite conclusion. i monitored the process memory usage: boost::pool<> does release all its memory at the end of each iteration, but seems to leave the heap fragmented.
in any case, i changed

myNode::node_pool.purge_memory() ;

to

myNode::node_pool.pool<>::~pool<>() ;                   // explicitly run the pool's destructor: releases all its memory
::new( &myNode::node_pool ) pool<>(  sizeof(myNode) ) ; // placement-new a fresh pool into the same storage

in main.cpp. and it seems to run fine without crashes (i tried 16 iterations). but the slowing down is still there; will look at that when i have more time.

>> ... also tried to compile with the -Wall option enabled and i get the following warnings...
the compiler is being finicky. change the control variable of the loops to a size_t (from int) and these warnings will go away.
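
for example, the loop in initializeHeap would become {std::size_t comes from <cstddef>}:

for( std::size_t i = 0; i < vec.size(); ++i )   // size_t matches the unsigned vector<int>::size_type
      q.push( LeftistHeap(vec[i]) );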

to make sure that the code is in sync, i'm attaching the code with the modifications described earlier.

>> if the desktop environment doesn't get stuck, i get a core dumped error...
>> no matter what happens in your program, the machine or the desktop environment has no business getting stuck. which linux distro are you using? even if you do want to use linux, you would be better off switching to a distro like slackware or debian where you would be unlikely to get into such problems. and from what i've seen so far, avoid the (unfortunately popular) *buntu disasters.

It doesn't get stuck in the windows sense.... but it gets really unresponsive for 2-3 minutes... I use ubuntu because it is easy to set everything up and get a working system, i.e. codecs, development,... On my old pc {P3 800mhz} i have installed freeBSD. When i become more savvy, i want to migrate it to my "good" pc... but for now i am relatively new and a noob at the whole unix stuff...

>> Any ideas? What can it be... i mean, with fewer iterations i don't get any problems and everything works like it should; valgrind also reports that i don't have any memory leak... Why do i get these results with more iterations?
>> i've only had a cursory look at the problem; my suspicion right now is in the boost::pool<> implementation. i can't say this with any confidence at all (and may be completely wrong); i really need more time to investigate it and come to a definite conclusion. i monitored the process memory usage: boost::pool<> does release all its memory at the end of each iteration, but seems to leave the heap fragmented.... but the slowing down is still there; will look at that when i have more time.

i must admit i have never heard of a fragmented heap before... it strikes me as weird though that no one has experienced it before... I mean, ok, maybe you don't try every day to allocate-deallocate 1 million elements 15 times... but still.....

Also, to learn how to use gdb or valgrind {these are the tools you use for this job, right?}, do i have to read the manual, or is there an alternative {and easier!} way?

So you suspect that the slowing down occurs due to heap fragmentation? the weirder part is that i allocate-deallocate blocks of the same size every time... so shouldn't the allocator use the same memory positions every time??

>> ... also tried to compile with the -Wall option enabled and i get the following warnings...
>> the compiler is being finicky. change the control variable of the loops to a size_t (from int) and these warnings will go away.

great!

ok. did a few more investigations. the memory allocation for the vector and that of boost::pool<> appear to interact unfavourably with each other, resulting in heap fragmentation. to avoid this, the following change was also made in main.cpp (in addition to the earlier change: the destructor/constructor sequence called on node_pool to deallocate all its memory).

//For vector randomization
vector<int> vec ;
vec.reserve(HOW_MANY_ELEMENTS) ;
generate_n( back_inserter(vec), size_t(HOW_MANY_ELEMENTS), generator ) ;

changed to

static vector<int> vec(HOW_MANY_ELEMENTS) ;
generate_n( vec.begin(), size_t(HOW_MANY_ELEMENTS), generator ) ;

and the problems disappeared. i ran it with 1 million elements for 128 iterations. as would be expected, the performance improved marginally (because of better cache locality) after the first few iterations and then remained steady. here is an extract of the results. complete results are in the attached tar.bz2 file.

>g++ -std=c++98 -Wall -I/usr/local/include -O3 -funroll-loops -march=pentium4 main.cpp LeftistHeap.cpp && ./a.out

#############################################
Start of Data-Structure Testing
Priority Queue
# elements: 1000000
#############################################
Initialize 0... end of init 0 time taken: 101.562 ms
Initialize 1... end of init 1 time taken: 101.562 ms
Initialize 2... end of init 2 time taken: 101.562 ms
Initialize 3... end of init 3 time taken: 93.750 ms
Initialize 4... end of init 4 time taken: 93.750 ms
Initialize 5... end of init 5 time taken: 93.750 ms
Initialize 6... end of init 6 time taken: 85.937 ms
Initialize 7... end of init 7 time taken: 93.750 ms
Initialize 8... end of init 8 time taken: 93.750 ms
Initialize 9... end of init 9 time taken: 85.937 ms
Initialize 10... end of init 10 time taken: 85.937 ms
Initialize 11... end of init 11 time taken: 93.750 ms
Initialize 12... end of init 12 time taken: 93.750 ms
// ...
// ...
Initialize 120... end of init 120 time taken: 93.750 ms
Initialize 121... end of init 121 time taken: 93.750 ms
Initialize 122... end of init 122 time taken: 85.937 ms
Initialize 123... end of init 123 time taken: 93.750 ms
Initialize 124... end of init 124 time taken: 93.750 ms
Initialize 125... end of init 125 time taken: 93.750 ms
Initialize 126... end of init 126 time taken: 93.750 ms
Initialize 127... end of init 127 time taken: 93.750 ms
resolution of libc clock: 7.8125 millisecs
make_heap - min: 85.937 max: 101.562 average: 92.224 millisecs


########################
END of Structure
########################

my guess is that doing something similar for the queue might improve things further. need some more looking into.

commented: just wow...he solved it again...:) +1

PS:: the output of valgrind i posted before...what does it mean?

>> ok. did a few more investigations. the memory allocation for the vector and that of boost::pool<> appear to interact unfavourably with each other, resulting in heap fragmentation.

wow!!! if you have the time to explain how you managed to solve this... it would be great, cause honestly i didn't have a lot of ideas....

static vector<int> vec(HOW_MANY_ELEMENTS) ;
generate_n( vec.begin(), size_t(HOW_MANY_ELEMENTS), generator ) ;

i should have thought of making the vector static before, because there is no need to reallocate the memory 128 times {it is sufficient to repopulate it}... but still, i don't think it explains why the previous solution was getting a core dump....

>> my guess is that doing something similar for the queue might improve things further. need some more looking into.

i am not at my computer right now, so i can't really test it... but i am not sure which of the 2 is the best {performance-wise}::

1) make a static queue inside the initializer function, or
2) make a static queue as a class member and use it inside the initializer function

>> I use ubuntu, because it is easy to set up everything and get a working system ie codecs, development,...
true enough. and only windows has better support for all kinds of hardware. i guess that is why it is popular.

>> On my old pc {P3 800mhz} i have installed freeBSD. When i become more savvy, i want to migrate it to my "good" pc... but for now i am relatively new and a noob at the whole unix stuff...
true again. freebsd is primarily a unix server, not ideal for someone very new to unix. you might try using pc-bsd; i'm told that it is very easy to set up. http://www.pcbsd.org/?p=learnhome
and dru lavigne thinks that it is easier to set up than kubuntu. http://blogs.ittoolbox.com/unix/bsd/archives/kubuntu-vs-pcbsd-18431
freebsd will run quite well on a P3 800mhz machine. it is quite a "good" machine to run freebsd on for personal use.

>> i must admit i have never heard of a fragmented heap before... it strikes me as weird though that no one has experienced it before... I mean, ok, maybe you don't try every day to allocate-deallocate 1 million elements 15 times... but still.....
try googling for 'fragmentation memory' and see how many results come up. like this one: http://www.cs.utk.edu/~huangj/cs360/360/notes/Fragmentation/lecture.html

>> the output of valgrind i posted before...what does it mean?
this is what the valgrind manual says:
Warning: set address range perms: large range <number>
Diagnostic message, mostly for benefit of the Valgrind developers, to do with memory permissions.
for a user, if the <number> keeps steadily going up (as in our case), it typically indicates that the heap memory usage by the process is increasing (the heap grows from lower addresses upwards, the stack grows from higher addresses downwards). and if there are no leaks, the culprit is usually fragmentation.

>> to learn how to use gdb or valgrind ... i have to read the manual, or is there an alternative {and easier!} way?
i can't think of a better way than reading these manuals in front of a machine where you try things out as you read them. the manuals for both these tools are pretty good.

>> these are the tools you use for this job right?
no. i did not use gdb at all. i used top to look at process memory usage and then valgrind's massif tool to profile the heap (to check for fragmentation). once i realized that heap fragmentation seemed to be the problem, the rest of the investigation was carried out by:
a. replacing the vector with a c style array with static storage duration and testing the program.
b. replacing boost.pool with an (incomplete, minimal) pool allocator that i wrote and testing the program.
which led to the conclusions that i posted earlier.
the test allocator that i used is incomplete; but looking at its minimal code may give you an understanding of what a pool allocator is.
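
if you want to try the heap profiling yourself, a massif run looks something like this {./project as before; the format of the output files depends on the valgrind version, and newer versions ship an ms_print script for viewing the results}:

valgrind --tool=massif ./project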

>> which of the 2 is the best {performance-wise}::
>> 1) make a static queue inside the initializer function, or
>> 2) make a static queue as a class member and use it inside the initializer function
i do not think it would make a difference {performance wise}. in any case, the deque (used by the queue wrapper) does not seem to have any part in the fragmentation of memory.

everything works as it should.. :)

i changed the queue to static and ran some tests... i didn't see any performance benefits... the times remained the same..

Here is another weird situation::

i changed the initial simple makefile i had to this:

CC = g++
PROG = priority_queue
NAME =  project
DEBUG = -g
OPTIONS = -Wall

HDRS = LeftistHeap.h
SRCS = main.cpp LeftistHeap.cpp
OBJS = main.o LeftistHeap.o

$(PROG): $(OBJS)
	$(CC) -o $(NAME) $(OBJS) -O3

LeftistHeap.o: LeftistHeap.cpp LeftistHeap.h
	g++ -c LeftistHeap.cpp LeftistHeap.h -O3 -Wall
	
main.o: LeftistHeap.h main.cpp
	g++ -c main.cpp		-O3 -Wall
	
clean :
	rm -f core $(PROG) $(OBJS) LeftistHeap.h.gch  & clear

but when i press make on the console more than 1 time, i always get:

g++ -o project main.o LeftistHeap.o -O3

which is strange, because with the previous makefile, after the first time i got a message like

project is up to date

any ideas why i don't get this message?

PS:: so pc-bsd is just a wrapper for freebsd?
PS2:: i think i know why the stack grows downward, but why does the heap grow upward?

>> i changed the initial simple makefile i had to this:
the gmake makefile syntax for rules is:

target : (optional) prerequisites
<tab>command1
<tab>command2
#more commands

target and prerequisites are *files*. when you type gmake (linux aliases make to gmake) at the shell prompt without args, it looks for the default rule in the makefile (which is the first non-phony target). in your example, the default rule (after variable substitution) is:

priority_queue : main.o LeftistHeap.o # target : prerequisites
<tab>g++ -o project main.o LeftistHeap.o -O3 # command

it does not find the target file (priority_queue), because the link command creates a file called project and never priority_queue, and therefore always executes the command in the default shell.
you should change the rule in the makefile to

$(NAME) : $(OBJS)
<tab>$(CC) -o $(NAME) $(OBJS) -O3

and you will get the correct (expected) behaviour.

it would also be a good idea to modify the second rule to

.PHONY : clean # do not get confused by an actual file called clean
clean : # clean is a 'phony' target
<tab>rm -f $(NAME) $(OBJS)

this will prevent gmake from going mad if there happens to be a file called clean in the current directory.
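
putting both fixes together {and also dropping LeftistHeap.h from the compile line, so that no stray precompiled-header .gch file is produced}, the makefile would look something like this:

CC = g++
NAME = project
OBJS = main.o LeftistHeap.o

# default rule: the target is the file that the command actually creates
$(NAME) : $(OBJS)
	$(CC) -o $(NAME) $(OBJS) -O3

LeftistHeap.o : LeftistHeap.cpp LeftistHeap.h
	g++ -c LeftistHeap.cpp -O3 -Wall

main.o : main.cpp LeftistHeap.h
	g++ -c main.cpp -O3 -Wall

.PHONY : clean   # do not get confused by an actual file called clean
clean :
	rm -f $(NAME) $(OBJS)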

>> so pc-bsd is just a wrap for freebsd?
DesktopBSD http://www.desktopbsd.net/index.php?id=37 is a customized FreeBSD installation that consists of the DesktopBSD Tools and a collection of configuration files and software for desktop use.
PC-BSD is technically a "fork" of FreeBSD; it offers all the basic FreeBSD features, but also introduces new alternative features (the most notable being a windows/apple installer style package management system).
both provide an easy introduction to the unix operating system; but i think DesktopBSD is somewhat 'purer'. both can execute linux binaries (both turn on linux compatibility mode in their builds of the FreeBSD kernel). you could try out both; they are painless to install.

however, keep in mind that if you already use an open source operating system, and you are happy with it, there is probably no good reason to change. to me, even one buntu is one buntu too many; it need not be so for you.

>> i think i know why the stack grows downward, but why the heap grows upward?
http://lwn.net/Articles/91829/ gives a (linux-centric) view of the process address space on x86 platforms.
