The most common storage order is RGBA (the default in many places), and sometimes you will see BGRA (used in the TGA image format and in some Mac and Adobe formats). As for big-endian versus little-endian, it doesn't really matter when you are reading the data from a file: the byte order is fixed by the file format's specification, and you just have to read it correctly for the endianness of the running platform. Usually, once you have read the data off the file, you will end up with either RGBA or BGRA, or some other layout that you have to convert or handle differently (e.g. palettes). I don't think OpenGL supports ARGB, so if you have some unusual image format that actually uses ARGB, you will probably have to convert it to RGBA manually. You can also try the unofficial extension called GL_ARGB_EXT, which you can pass in the format argument of the glTexImage2D functions, if your platform supports it; some do.
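For the common BGRA case, though, no manual conversion is needed: you can hand the data to OpenGL as-is by picking the matching format enum. Here is a minimal sketch, assuming the pixels were loaded as 8-bit-per-channel BGRA and that your headers expose GL_BGRA (core since OpenGL 1.2, otherwise GL_BGRA_EXT); the function and variable names are just placeholders:

```c
#include <GL/gl.h>

/* Sketch: upload BGRA pixel data without swizzling it first. */
GLuint upload_bgra_texture(GLsizei width, GLsizei height, const void *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* internalformat says how OpenGL stores the texture;
       format/type describe how *your* source data is laid out. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                 width, height, 0,
                 GL_BGRA,           /* source layout is BGRA, driver does the swizzle */
                 GL_UNSIGNED_BYTE,
                 pixels);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```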
Also, you should use a fixed-width integer type for that argb field in the union. The type unsigned int is not guaranteed to be 32 bits wide by the standard (it only has to be at least 16 bits), so code that relies on it being exactly 32 bits is not portable. Include the stdint.h header (from the C standard) and use the fixed-width unsigned type uint32_t instead.
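A minimal sketch of such a union, assuming your original packed one 32-bit pixel alongside its byte components (the field names here are illustrative, not taken from your code):

```c
#include <stdint.h>

typedef union {
    uint32_t argb;              /* exactly 32 bits on every platform */
    struct {
        uint8_t b, g, r, a;     /* which byte maps to which channel still
                                   depends on the platform's endianness */
    } channels;
} pixel32;
```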
As for the texture size (width and height), you cannot do it the way you did with w * dx and h * dy. Think about it: how are textures stored in memory? Usually, they'll be stored as …