I have an array of data of the form:

union colour
{
    unsigned int argb;
    struct{
        unsigned char a;
        unsigned char r;
        unsigned char g;
        unsigned char b;
    }bigendian;
    struct{
        unsigned char b;
        unsigned char g;
        unsigned char r;
        unsigned char a;
    }littleendian;
};
struct sprite
{
    int w,h;//stores the dimensions
    double dx,dy;//stores what to divide 1.0 by to get to the edge of the usable image
    colour *data;//stores the image
    unsigned int glTexture;//stores the opengl texture
};

And I need to find a way to populate it based solely on w, h and data. dx and dy may need a little more explanation: basically they allow for non-2^n-sized images by saying "divide your 1.0f by dx (or dy) to get the edge of the real image, rather than the edge of the filler space." I think I need to use glTexImage2D, but the API is somewhat vague as to how exactly to implement it. Any help would be greatly appreciated. Thanks.
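To make the dx/dy idea concrete, they could be computed roughly like this when the image is padded up to a power-of-two size (the nextPowerOfTwo helper is just illustrative):

// round a dimension up to the next power of two (illustrative helper)
int nextPowerOfTwo(int n)
{
    int p = 1;
    while(p < n)
        p *= 2;
    return p;
}

// the padded texture is potW x potH; the usable edge in texture
// coordinates is then 1.0 / dx horizontally and 1.0 / dy vertically
int potW = nextPowerOfTwo(sprite.w);
int potH = nextPowerOfTwo(sprite.h);
sprite.dx = (double)potW / sprite.w;   // 1.0 / dx == w / potW
sprite.dy = (double)potH / sprite.h;   // 1.0 / dy == h / potH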

I did check that tutorial, but they were vague as to how the data was stored (they let the OS store the data for them), and I need access to the raw data. As you can see, my data is stored as ARGB or BGRA depending on endianness. I have a function called isBigEndian() that checks which one applies. Basically, this is what I have so far:

glTexImage2D(GL_TEXTURE_2D, 0, 4, sprite.w*sprite.dx, sprite.h*sprite.dy, 0, (isBigEndian()?/*I need ARGB mode here*/:/*I think I need BGRA mode here*/), /*what type am I using?*/, sprite.data);

The most usual storage is RGBA (the default in many places), and sometimes you see BGRA (used in TGA image files and some Mac or Adobe formats). As for big-endian versus little-endian, it doesn't really matter when you are reading the data from a file: the byte order is fixed by the file format standard, and you just have to read it correctly on whatever platform the program is running on. Usually, when you read the data off the file, you'll end up with either RGBA or BGRA, or some other format that you'll have to convert or deal with differently (e.g. palettes). I don't think OpenGL supports ARGB directly, so you would probably have to manually convert it to RGBA if you have some unusual image format that actually requires ARGB. You can also try the unofficial extension token GL_ARGB_EXT, which you can place in the format argument of glTexImage2D, if your platform supports it; some do.
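If you do end up with real ARGB data and need to convert it by hand, the per-pixel byte shuffle could look roughly like this (argbToRgba is just a sketch, not a library function):

// convert ARGB byte order to RGBA byte order, in place (sketch)
void argbToRgba(unsigned char* pixels, int pixelCount)
{
    for(int i = 0; i < pixelCount; ++i) {
        unsigned char* p = pixels + 4 * i; // p[0..3] is A, R, G, B
        unsigned char a = p[0];
        p[0] = p[1];  // R
        p[1] = p[2];  // G
        p[2] = p[3];  // B
        p[3] = a;     // A
    }
}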

Also, you should use a fixed-width integer type for that argb field in the union. The type unsigned int is not guaranteed to be 32 bits by the standard (it only has to be at least 16 bits). You should use the stdint.h header (from the C standard) and the fixed-width unsigned types uint32_t and uint8_t.
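For example, the same union with fixed-width types (just a sketch of the type change, keeping your ARGB layout):

#include <stdint.h>

union colour
{
    uint32_t argb;          // always exactly 32 bits
    struct {
        uint8_t a, r, g, b;
    } bigendian;
    struct {
        uint8_t b, g, r, a;
    } littleendian;
};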

As for the texture size (width and height), you cannot do what you did with w * dx and h * dy. Think about how textures are stored in memory: usually as one complete row of pixels after another. If your rows are of length w, but you tell OpenGL that the rows are of length w * dx, OpenGL will read the first row entirely plus part of the next row and store that as the first row of the texture; all the following rows will be shifted the same way, until OpenGL runs past the end of your image memory and starts reading beyond it (which could well crash with an access violation or segmentation fault).
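To put it another way, with tightly packed rows the pixel at (x, y) lives at index y * w + x; if OpenGL is told the rows are w * dx pixels long, it indexes with that larger stride instead. A small sketch of the difference:

// where a pixel actually lives in the sprite's memory
colour pixelAt(const sprite& s, int x, int y)
{
    return s.data[y * s.w + x];   // rows really are w pixels long
}

// if you pass w * dx as the width, OpenGL effectively reads
//     data[y * (int)(s.w * s.dx) + x]
// which walks off the end of each row and, eventually, off the end
// of the whole buffer.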

The robust way to do this is to first allocate a power-of-two texture and then use glTexSubImage2D to upload your non-power-of-two image into part of it. Or you can rely on the assumption that you will not run your program on old hardware that doesn't support non-power-of-two (NPOT) textures (pre-OpenGL 2.0). The latter solution is simple: just give w and h as the width and height to glTexImage2D, and if the hardware is not ancient, it won't matter that these values are not powers of two. The former solution (allocating a power-of-two texture, then using glTexSubImage2D) can be done as follows:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)(sprite.w * sprite.dx), (GLsizei)(sprite.h * sprite.dy), 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); // allocate the padded power-of-two texture, no pixel data yet
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, sprite.w, sprite.h, (isBigEndian() ? GL_ARGB_EXT : GL_BGRA), GL_UNSIGNED_BYTE, sprite.data); // upload the actual image at offset (0, 0)
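Note that the texture object has to be created and bound before those two calls, and you need to set the filtering; roughly like this (the filter choices here are just an example):

glGenTextures(1, &sprite.glTexture);                 // create the texture object
glBindTexture(GL_TEXTURE_2D, sprite.glTexture);      // make it the current 2D texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);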

And that's it (assuming you fix the RGBA / ARGB business).
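One more usage note: when you draw the sprite later, only the w x h region at offset (0, 0) of the padded texture is your image, so scale the texture coordinates by 1/dx and 1/dy. A sketch in old-style immediate mode:

glBindTexture(GL_TEXTURE_2D, sprite.glTexture);
glBegin(GL_QUADS);
    glTexCoord2d(0.0,             0.0);             glVertex2i(0,        0);
    glTexCoord2d(1.0 / sprite.dx, 0.0);             glVertex2i(sprite.w, 0);
    glTexCoord2d(1.0 / sprite.dx, 1.0 / sprite.dy); glVertex2i(sprite.w, sprite.h);
    glTexCoord2d(0.0,             1.0 / sprite.dy); glVertex2i(0,        sprite.h);
glEnd();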

Here is my updated code... should it work?

#include <stdint.h>

union colour
{
    uint32_t rgba;
    struct{
        uint8_t r;
        uint8_t g;
        uint8_t b;
        uint8_t a;
    }bigendian;
    struct{
        uint8_t a;
        uint8_t b;
        uint8_t g;
        uint8_t r;
    }littleendian;
};
struct sprite
{
    int w,h;
    uint32_t texture;
    colour *data;
};

glTexImage2D(GL_TEXTURE_2D, 0, 4, sprite.w, sprite.h, 0, (isBigEndian() ? GL_RGBA : GL_ABGR_EXT), GL_UNSIGNED_BYTE, sprite.data); // GL_ABGR_EXT comes from the GL_EXT_abgr extension

I just want to make sure I understand. Is it no longer true that all textures must be of size 2^n x 2^n?

Is it no longer true that all textures must be of size 2^n x 2^n?

Since OpenGL 2.0 (2004), support for NPOT textures has been required (and by the way, the power-of-two rule is per dimension; it doesn't mean that the texture has to be square!). This doesn't mean that all hardware supports it well or natively, and it might lead to inefficiencies, but I would expect recent hardware (less than 5 years old) to have no problems with it. However, there are still reasons why power-of-two textures are preferable (mipmapping, filtering performance, etc.), so you should still prefer power-of-two sizes. But if you absolutely need an NPOT texture, it will usually work in this day and age. For maximum compatibility, though, you shouldn't rely on that, and should use the other method that I posted.
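If you want to check at run time whether your driver advertises NPOT support, you can look for the extension string; a small sketch (it needs a current GL context):

#include <string.h>

// returns true if the driver advertises NPOT texture support (sketch;
// a reported GL version of 2.0 or higher also implies support)
bool hasNpotSupport()
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    return ext != 0 && strstr(ext, "GL_ARB_texture_non_power_of_two") != 0;
}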

I feel like I need a HUGE hardware update, since NPOT textures do not work on my computer, and neither do vertex buffer objects, pixel buffer objects, or really any kind of buffer object! Luckily I graduate high school in a few months and will be getting a brand new desktop. For now, how can I test my OpenGL applications (without exporting them to a newer computer)? My current computer is pushing 6 years old.

Use the code I posted:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)(sprite.w * sprite.dx), (GLsizei)(sprite.h * sprite.dy), 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); // allocate the padded power-of-two texture, no pixel data yet
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, sprite.w, sprite.h, (isBigEndian() ? GL_ARGB_EXT : GL_BGRA), GL_UNSIGNED_BYTE, sprite.data); // upload the actual image at offset (0, 0)

That code would not compile for me, as GL_ARGB_EXT was undefined. If I don't have to use POT textures, then I don't need dx or dy, do I?
