Hello,

I can get OpenGL to render 3D scenes to the screen, but I want to render them to a pixel array. Basically I want some magical code to fill in the blanks for this code segment:

int width=500;
int height=500;
uint32_t *colours=new uint32_t[width*height];

//Magic code!

glBegin(GL_TRIANGLES);
//other drawing stuff, preferably no magic here?
glEnd();

//Magic code!

uint32_t c=*colours;//c should be the top-left pixel rendered by opengl

From my research it seems as though I should use a framebuffer object to do this, I am just not sure exactly how they work. Any help?

The piece of magic code needed is glReadPixels(). Like this:

int width=500;
int height=500;
uint32_t *colours=new uint32_t[width*height];

glBegin(GL_TRIANGLES);
//other drawing stuff, preferably no magic here?
glEnd();

glReadPixels(0, 0, width, height,
             GL_RGBA, 
             GL_UNSIGNED_INT_8_8_8_8,
             colours);

uint32_t c = *colours; // c should be the top-left pixel rendered by opengl

glReadPixels() starts at the bottom-left corner: the first row it writes to the array is the bottom row of the image. So if you want the top-left pixel first, you will need to flip the rows afterwards.
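Since glReadPixels() fills the array bottom row first, a simple in-place row flip gives you top-left-first order. A minimal sketch (flip_rows is a hypothetical helper, not part of OpenGL):

```cpp
#include <cstdint>
#include <algorithm>

// Flip rows in place so that row 0 becomes the top row.
// Assumes `pixels` holds `height` rows of `width` pixels each,
// bottom row first (as glReadPixels returns them).
void flip_rows(uint32_t *pixels, int width, int height) {
    for (int y = 0; y < height / 2; ++y) {
        uint32_t *a = pixels + y * width;                // row from the bottom half
        uint32_t *b = pixels + (height - 1 - y) * width; // mirror row from the top half
        std::swap_ranges(a, a + width, b);               // swap the two rows
    }
}
```

After calling this on the array returned by glReadPixels(), `colours[0]` is the top-left pixel.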

That works fine, except it still renders the drawing to the screen does it not? I want to render directly into the pixel array without drawing to the screen, if at all possible.

Reading the description of glReadPixels at that link, it says it returns data from the framebuffer, so I assume there must be a way to redirect it; I just don't understand what the framebuffer is or how I can get/set it. From what I understand, I will have to create some kind of buffer (it looks like I will need a pixel buffer) and bind it to the framebuffer somehow using glBindBuffer? What is the code to do this?

That's a whole different story. I'm not sure whether you mean (1) you want to avoid creating an OpenGL window and just render to an image, or (2) you want to render to an image without "affecting" the framebuffer (for the stuff drawn to the screen).

But first, you have to understand a couple of things about OpenGL. Specifically, the difference between "device context" (DC), "rendering context" (RC) and "framebuffer" (FB), which is one of the most awkward parts of OpenGL's design. Let me draw you a little diagram:

      OS
      |
      DC  <---  RC  <---  glDraw..()
      |         |
      ---  FB ---

What I'm trying to represent here is that the Operating System (OS) creates and controls the device context, which acts as a kind of access point to the GPU, so that things can be drawn on a window (GUI) that the OS controls (Windows GDI, or the X Server on Linux/Mac/Unix). Then, in OpenGL, you can create a rendering context to match the DC and attach it to that DC (wglCreateContext / wglMakeCurrent on Windows, glXCreateContext / glXMakeCurrent on X11). The RC acts as an access point between the application (where you make OpenGL calls to draw stuff, like glBegin() / glEnd()) and an execution context inside the GPU that can actually do the drawing. And finally, you have the framebuffer, which is essentially the data structure shared by the DC and the RC, and which represents what is actually drawn to the screen (if the window is visible, not minimized, etc.). In other words, the FB is the image (or buffer) onto which the DC and RC do their work. Of course, the FB is much more complex than just a raw image, but that's all implementation details.

For case 1:

The problem, if you want to render to an image without creating an OpenGL window, is that the triangle (DC-FB-RC) is required: you can't create an RC without a DC, you can't create an FB without a DC, an RC is pointless without an FB, any OpenGL call without an active RC will fail, and so on. The solution is to ask the OS to create a DC that is not attached to a window.

I have never done this, and a quick Google search suggests that it might not be quite that easy to do. Many people seem to recommend just creating a hidden window and proceeding as usual, something like this (omitting the error-checking code):

// pfd is a PIXELFORMATDESCRIPTOR filled in beforehand.
hwnd = CreateWindow(..width,height..);  // hidden window of the desired size
dc = GetDC(hwnd);                       // the window's device context
pf = ChoosePixelFormat(dc, &pfd);       // find a matching pixel format
SetPixelFormat(dc, pf, &pfd);
rc = wglCreateContext(dc);              // create the rendering context
wglMakeCurrent(dc, rc);                 // make it current on this thread
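For reference, the snippet above assumes a pfd that has already been filled in. A minimal sketch; the field values shown are typical choices, not the only valid ones:

```cpp
#include <windows.h>

// A typical PIXELFORMATDESCRIPTOR for a 32-bit RGBA,
// double-buffered OpenGL window.
PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize        = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion     = 1;
pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType   = PFD_TYPE_RGBA;
pfd.cColorBits   = 32;  // bits per pixel in the colour buffer
pfd.cDepthBits   = 24;  // depth buffer
pfd.cStencilBits = 8;   // stencil buffer
pfd.iLayerType   = PFD_MAIN_PLANE;
```

For a memory DC (the bitmap approach below), you would use PFD_DRAW_TO_BITMAP instead of PFD_DRAW_TO_WINDOW and drop PFD_DOUBLEBUFFER.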

However, after digging a bit more (because I was astounded that Windows would not support such basic functionality), I found something that might work without creating a window. It relies on a "memory DC" with a bitmap surface attached to it. The flow looks like this:

HDC memDC = CreateCompatibleDC(NULL);  // memory DC compatible with the screen
HBITMAP memBM = CreateCompatibleBitmap(memDC, width, height);
// Note: a bitmap "compatible" with a fresh memory DC is 1-bit monochrome;
// for OpenGL you likely want a 32-bit DIB section (CreateDIBSection) instead.
SelectObject(memDC, memBM);
pf = ChoosePixelFormat(memDC, &pfd);   // pfd should use PFD_DRAW_TO_BITMAP here
SetPixelFormat(memDC, pf, &pfd);
rc = wglCreateContext(memDC);
wglMakeCurrent(memDC, rc);

I think that this should give you a valid OpenGL rendering context in which you can draw your stuff and retrieve the image with glReadPixels(), as I showed before. Be aware, though, that rendering to a GDI bitmap typically falls back to Microsoft's software OpenGL implementation, so it can be slow and limited to old OpenGL features.

For case 2:

From my research it seems as though I should use a framebuffer object to do this, I am just not sure exactly how they work. Any help?

What OpenGL calls a "framebuffer object" (or FBO) is a separate feature, not the "framebuffer" from my explanation and diagram above (which is the basic framebuffer provided by the OS for rendering to a window). FBOs are OpenGL objects that can be created to replace the "window" framebuffer as the rendering destination.

Through a straightforward procedure, you can create an FBO, bind it (thus replacing the "destination" of all drawing operations), and attach either a texture object or a renderbuffer to it, causing all the rendering to be drawn onto that texture or renderbuffer. Once you have drawn to a texture, all you have to do is use glGetTexImage() to copy the pixels into your colours array (glReadPixels() also works while the FBO is bound). Here is a very nice and complete tutorial about this. Only a few lines of code are needed, and this "render-to-texture" technique is a very common thing to do.
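The steps above can be sketched like this. This is a sketch, not a complete program: it assumes an OpenGL context supporting 3.0+ (or GL_ARB_framebuffer_object) is already current, and that the functions have been loaded with GLEW/GLAD or similar; error handling is mostly omitted:

```cpp
GLuint fbo = 0, tex = 0;

// Create a texture to render into.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Create the FBO and attach the texture as its colour buffer.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle error */
}

// While the FBO is bound, all drawing goes to the texture, not the window.
glViewport(0, 0, width, height);
// ... draw your scene here (glBegin()/glEnd(), etc.) ...

// Read the pixels back (rows are still in bottom-up order).
glReadPixels(0, 0, width, height,
             GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, colours);

// Restore the window framebuffer and clean up.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
glDeleteTextures(1, &tex);
```

If you also need depth testing while rendering to the FBO, attach a depth renderbuffer (glGenRenderbuffers / glRenderbufferStorage / glFramebufferRenderbuffer) before checking completeness.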
