I am using Visual C++ Express Edition on Windows. Looking at the Processes tab of the Windows Task Manager, I noticed that the memory of my C++ program increases by about 8 K every second, even when I am not doing anything. I've also noticed that if I comment out my whole drawing code (the glBegin/glEnd blocks), it stops adding that 8 K per second, but still adds 4 K every time I move the mouse. I am using OpenGL for drawing and SDL for window management. Note that I load one texture and I am drawing in 2D. My code is as follows:

while (!done)
{
    SDL_Event event;
    while (SDL_PollEvent(&event))
    {
        switch (event.type)
        {
            case SDL_QUIT:
            {
                done = true;
                break;
            }

            case SDL_KEYDOWN:
            {
                if (event.key.keysym.sym == SDLK_ESCAPE)
                {
                    done = true;
                }
                break;
            }

            case SDL_VIDEORESIZE:
            {
                glViewport(0, 0, event.resize.w, event.resize.h);
                SDL_GL_SwapBuffers();
                break;
            }
        }
    }

    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 1.0f, 1.0f);
        glVertex2f(400.0, 160.0);
        glVertex2f(320.0, 440.0);
        glVertex2f(480.0, 440.0);
    glEnd();

    glEnable(GL_TEXTURE_2D);
    // Bind the texture to which subsequent calls refer
    glBindTexture(GL_TEXTURE_2D, ret);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glBegin(GL_QUADS);
        glColor3f(1.0f, 1.0f, 1.0f);
        // Bottom-left vertex (corner)
        glTexCoord2d(0, 0);
        glVertex3f(100.f, 100.f, 0.0f);

        // Bottom-right vertex (corner)
        glTexCoord2d(0.5, 0);
        glVertex3f(228.f, 100.f, 0.f);

        // Top-right vertex (corner)
        glTexCoord2d(0.5, 1);
        glVertex3f(228.f, 228.f, 0.f);

        // Top-left vertex (corner)
        glTexCoord2d(0, 1);
        glVertex3f(100.f, 228.f, 0.f);
    glEnd();

    glDisable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);

    glClearColor(0.7f, 0.9f, 1.0f, 1.0f);
    SDL_GL_SwapBuffers();
    SDL_Delay(1000 / 30);
}

I do know immediate mode is deprecated. I don't know if that is the problem, but I can't grasp Vertex Buffer Objects. If anyone would help me translate my current code into VBOs, that would be nice. I just need actual example code to learn from, like an actual game using them.

Another thing: I've noticed that all the SDL programs I run change the cursor. Does anyone know how to fix that?

I don't know about the memory leak (I would assume it is just natural growth of the heap; I wouldn't worry about it too much).

What you are rendering is so trivial that you probably don't need VBOs (they matter more when rendering meshes, e.g., a large, complex model). Here is how to do it anyway:

GLuint vbo_ids[2];
glGenBuffers(2, vbo_ids); // generate two buffers.

// Create an array that stores the points for the first geometry:
GLfloat points1[] = {400.0, 160.0, 320.0, 440.0, 480.0, 440.0};

glBindBuffer(GL_ARRAY_BUFFER, vbo_ids[0]);
glBufferData(GL_ARRAY_BUFFER, 6 * sizeof(GLfloat), points1, GL_STATIC_DRAW);

// Repeat for the second geometry... you can also interleave the data:
GLfloat buf2[] = { 0, 0,              // tex-coord 1
                   100.0, 100.0, 0.0, // vertex 1
                   0.5, 0,            // tex-coord 2
                   228.0, 100.0, 0.0, // vertex 2
                   0.5, 1,            // tex-coord 3
                   228.0, 228.0, 0.0, // vertex 3
                   0, 1,              // tex-coord 4
                   100.0, 228.0, 0.0 }; // vertex 4

glBindBuffer(GL_ARRAY_BUFFER, vbo_ids[1]);
glBufferData(GL_ARRAY_BUFFER, 20 * sizeof(GLfloat), buf2, GL_STATIC_DRAW);

  while (!done)
  {
    SDL_Event event;
    while (SDL_PollEvent(&event))
    { //.. let's skip this stuff.. just to make it cleaner for the example...
    }

    glClear(GL_COLOR_BUFFER_BIT);

    glColor3f(1.0f,1.0f,1.0f);

    glBindBuffer(GL_ARRAY_BUFFER,vbo_ids[0]);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, 0);

    glDrawArrays(GL_TRIANGLES, 0, 3);

    glDisableClientState(GL_VERTEX_ARRAY);

    glEnable( GL_TEXTURE_2D );
    // Bind the texture to which subsequent calls refer to
    glBindTexture( GL_TEXTURE_2D, ret );
 
    glEnable (GL_BLEND);
    glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glColor3f(1.0f,1.0f,1.0f);

    glBindBuffer(GL_ARRAY_BUFFER, vbo_ids[1]);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), 2*sizeof(GLfloat));
    glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), 0);

    glDrawArrays( GL_QUADS, 0, 4);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);

    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glDisable( GL_TEXTURE_2D );
    glDisable (GL_BLEND); 
    glClearColor(0.7f, 0.9f, 1.0f, 1.0f); //remark: this only needs to be set once.

    SDL_GL_SwapBuffers();
    SDL_Delay(1000/30);		
  }
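A side note on the interleaved layout above: rather than hand-counting floats for the stride and offsets, you can describe one vertex as a struct and let `sizeof`/`offsetof` do the arithmetic. A minimal sketch (the `TexVertex` name is just for illustration, not from the code above):

```cpp
#include <cstddef>  // offsetof

// Interleaved layout matching buf2 above:
// 2 texture-coordinate floats, then 3 position floats per vertex.
struct TexVertex {
    float u, v;     // texture coordinates
    float x, y, z;  // position
};

// The stride is the size of one whole vertex, and the byte offsets
// come from the struct layout instead of hand-counted floats:
//
//   glVertexPointer(3, GL_FLOAT, sizeof(TexVertex),
//                   reinterpret_cast<void*>(offsetof(TexVertex, x)));
//   glTexCoordPointer(2, GL_FLOAT, sizeof(TexVertex),
//                     reinterpret_cast<void*>(offsetof(TexVertex, u)));
```

This way, if you later add a normal or a color to each vertex, the stride and offsets update themselves.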

But, actually, for your application, Display Lists are much more appropriate:

GLuint list_start = glGenLists(2);

  glNewList(list_start, GL_COMPILE);  
  glBegin(GL_TRIANGLES);
    glColor3f(1.0f,1.0f,1.0f);
    glVertex2f(400.0, 160.0);
    glVertex2f(320.0, 440.0);
    glVertex2f(480.0, 440.0);
  glEnd();
  glEndList();

  glNewList(list_start+1,GL_COMPILE);
  glBegin( GL_QUADS );
    glColor3f(1.0f,1.0f,1.0f);
    //Bottom-left vertex (corner)
    glTexCoord2d( 0, 0 );
    glVertex3f( 100.f, 100.f, 0.0f );

    //Bottom-right vertex (corner)
    glTexCoord2d( 0.5, 0 );
    glVertex3f( 228.f, 100.f, 0.f );

    //Top-right vertex (corner)
    glTexCoord2d( 0.5, 1 );
    glVertex3f( 228.f, 228.f, 0.f );
	
    //Top-left vertex (corner)
    glTexCoord2d( 0, 1 );
    glVertex3f( 100.f, 228.f, 0.f );
  glEnd();
  glEndList();

  while (!done)
  {
    SDL_Event event;
    while (SDL_PollEvent(&event))
    { //.. let's skip this stuff.. just to make it cleaner for the example...
    }

    glClear(GL_COLOR_BUFFER_BIT);

    glCallList(list_start);

    glEnable( GL_TEXTURE_2D );
    // Bind the texture to which subsequent calls refer to
    glBindTexture( GL_TEXTURE_2D, ret );
 
    glEnable (GL_BLEND);
    glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glCallList(list_start+1);

    glDisable( GL_TEXTURE_2D );
    glDisable (GL_BLEND); 
    glClearColor(0.7f, 0.9f, 1.0f, 1.0f); //remark: this only needs to be set once.

    SDL_GL_SwapBuffers();
    SDL_Delay(1000/30);		
  }

Hope it helps. For OpenGL, make sure to start with the NeHe tutorials.

Thanks, I'll try it later and edit this post with my results. I checked out NeHe's tutorials, but they used something that is no longer supported, so I thought I shouldn't use them. I think once I figure out the graphics I'll have it all down; the rest is just thinking, with SDL helping me.

I'll most likely try the VBO method. Isn't glBegin deprecated? If it is, then I wouldn't use the display list code you provided. I might still consider it, though, since it's shorter and looks a hell of a lot easier.

So the memory leak might be natural growth of the heap? Interesting. I'm still new to C++ and more advanced programming, so I don't know how that works. I thought it could be because of the whole glBegin approach, that because of how it works and its slowness, that's why it was happening. I remember commenting stuff out and leaving the application open, where it was only growing 4 K per second, and after a while it stopped. Is this... supposed to happen?

>>Isn't glBegin deprecated?
You're right. I have to admit that my knowledge of OpenGL dates back a few years (version 2.0 was the latest I worked with). I had just assumed you meant deprecated as in undesirable. glBegin/glEnd-style commands have always been "ugly" and slow, and rarely useful, because VBOs (or just "vertex buffers," as they used to be called) are much more generic; once you use them, there is no point to glBegin/glEnd structures anymore. So, you're right, you shouldn't use the second example I posted, because it appears that display lists are also deprecated (and nothing equivalent exists anymore, which is a shame, because they were nice and very fast when the hardware supported them well).

>>So the memory leak might be a natural growth of the heap?
>> ... going 4 K per second and after a while it stops. Is this supposed to happen?
OK, so the "heap" is the entity from the OS that dynamically allocates memory (new/delete, malloc/free). To do that, it manages the chunk(s) of RAM that the OS dedicates to your application. The more memory you request from the heap, the more the heap has to request from the OS. But when you free memory (as you should), you create "holes" at random places inside the heap's memory. On subsequent allocations, the heap will try to reuse those holes, but if it can't find a hole big enough for the memory you request, it asks the OS for more.

The algorithm inside the heap that searches for holes and/or grows the heap's memory is very complex, as you can imagine. But basically, you can expect that the more allocations and deallocations you do, the more the heap's memory looks like Swiss cheese, and thus the less compact it is. You will see your RAM usage grow slightly over time, but it usually stops once the heap reaches a sort of equilibrium (an optimal, or operating, level of memory compactness). This is why you can never get an accurate idea of how much dynamic memory your application is using at a given time; you can only know how much RAM the heap has taken (and the two are rarely the same).
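To make those "holes" concrete, here is a toy first-fit allocator (purely illustrative — a real heap is far more sophisticated). Even with 16 bytes sitting free in two 8-byte holes, a 12-byte request forces the heap to grow, because no single hole is big enough:

```cpp
#include <cstddef>
#include <vector>

// A toy heap: a list of blocks, each free or used. First-fit allocation
// reuses the first free block big enough; otherwise the heap must grow.
struct Block { std::size_t size; bool used; };

struct ToyHeap {
    std::vector<Block> blocks;
    std::size_t total = 0;  // bytes requested from the "OS" so far

    std::size_t alloc(std::size_t n) {
        for (std::size_t i = 0; i < blocks.size(); ++i)
            if (!blocks[i].used && blocks[i].size >= n) {
                blocks[i].used = true;  // reuse a hole
                return i;
            }
        blocks.push_back({n, true});    // no hole fits: grow the heap
        total += n;
        return blocks.size() - 1;
    }

    void free_block(std::size_t i) { blocks[i].used = false; }

    std::size_t free_bytes() const {
        std::size_t s = 0;
        for (const Block& b : blocks) if (!b.used) s += b.size;
        return s;
    }
};
```

If you allocate three 8-byte blocks and free the first and third, `free_bytes()` reports 16, yet a subsequent `alloc(12)` still increases `total` by 12 — the fragmentation, not the amount of free memory, is what makes the footprint grow.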


Alright, I'm trying it out. I got GLEW because I read that I need it to have the VBO functions. Right now, with the first code, I get an error at this line:

glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), 2*sizeof(GLfloat));

My compiler says it's because:

'glVertexPointer' : cannot convert parameter 4 from 'unsigned int' to 'const GLvoid *'

I have checked websites documenting that function, but when I try to use what they did, it gives me more errors than I started with. So what do I change?

EDIT: If I change the offending parameter to 0, it compiles, but when I open the program it does not work: it stops responding and closes itself.

From OpenGL: "If a non-zero named buffer object is bound to the GL_ARRAY_BUFFER target (see glBindBuffer) while a vertex array is specified, pointer is treated as a byte offset into the buffer object's data store."

So the value "2*sizeof(GLfloat)" is correct, but the type is wrong (the pointer type is just an awkward way of allowing you to pass either an actual pointer or a byte offset). To fix it:

glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), reinterpret_cast<GLvoid*>(2*sizeof(GLfloat)));
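A common convenience, seen in many OpenGL examples (including the original vertex-buffer extension's sample code), is a small BUFFER_OFFSET helper so the awkward cast is written only once. A sketch of that idiom:

```cpp
#include <cstddef>

// Classic helper: turn a byte offset into the const void* that the
// gl*Pointer functions expect when a VBO is bound.
#define BUFFER_OFFSET(bytes) \
    (reinterpret_cast<const void*>(static_cast<std::size_t>(bytes)))

// Usage with the interleaved buffer from earlier in the thread:
//
//   glVertexPointer(3, GL_FLOAT, 5 * sizeof(GLfloat),
//                   BUFFER_OFFSET(2 * sizeof(GLfloat)));
//   glTexCoordPointer(2, GL_FLOAT, 5 * sizeof(GLfloat),
//                     BUFFER_OFFSET(0));
```

The "pointer" parameter is only reinterpreted as a byte offset while a non-zero buffer is bound, which is exactly why the cast is needed at all.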