I'm reworking a bit field that I built to condense a collection of rule-activation flags for a game I'm working on. I need to change "multiplierRule" so that it can represent two different variations of the same rule as well as an "inactive" state.

This is the current definition of the field:

#define RuleActive 1
#define RuleInactive !RuleActive

union bitFlags {
	unsigned short flagGroup;  //provide a 16-bit backbone (on 32-bit systems)
	struct {	//the bit field
		unsigned multiplierRule	:  1;	//multiples-of-a-kind rule,
						    //"on" causes values to be 2x,3x,4x three-of-a-kind value resp.
		unsigned straightRule	:  1;	//"on" activates bonus for a straight in a single roll
		unsigned threePairRule	:  1;	//"on" activates bonus for three pairs in a single roll
		unsigned hardWinRule	:  1;	//"on" activates the hard win rule-set
	};	//close the bit field structure
};	//close the union

#endif // BITFLAGS_H

In theory, if I change the definition of "multiplierRule" to multiplierRule : 2; that should allow me to store the values -1, 0, and 1, but it won't compile. Is there a different keyword I need to use in place of "unsigned"? I tried "signed" and it seems to work, but it appears to expand the field out to 32 bits. It's really not a big deal, but I would prefer to keep it at 16 bits if I can.

I know I can just expand the unsigned field to 2 bits and use the values 0, 1, 2, 3 to represent the different statuses (something like the sketch below), and I still may, but I figure I may as well work this out if I can.
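For illustration, the 2-bit unsigned fallback I have in mind would look something like this (the names are just placeholders):

enum multiplierState {	//the 4 possible values of a 2-bit unsigned field
	multOff      = 0,	//rule inactive
	multVariantA = 1,	//first variation of the rule
	multVariantB = 2	//second variation (3 is spare)
};

//usage: flags.multiplierRule = multVariantA;	//with "unsigned multiplierRule : 2"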

I've done a couple of Google searches, but mostly I'm finding discussions of other methods that are supposedly more efficient. Nothing I've seen really touches on storing negative values in fields with length > 1.

Any suggestions would be greatly appreciated.


I get this error from your code using gcc:

BitFlags.h:12: error: ISO C++ prohibits anonymous structs

What compiler are you using? This is important, of course, because bit-field behaviour is very much platform-dependent.

Anyway, after naming the struct, I get a size of 4 bytes for the struct as you have it defined.

If you want the bit field to be 16 bits, you need to declare it unsigned short, or short if you want to use negative values.

I ended up with this:

union bitFlags {
	unsigned short flagGroup; //provide a 16-bit backbone (on 32-bit systems)
	struct { //the bit field
		signed short multiplierRule : 2; //multiples-of-a-kind rule,
		//"on" causes values to be 2x,3x,4x three-of-a-kind value resp.
		unsigned short straightRule : 1; //"on" activates bonus for a straight in a single roll
		unsigned short threePairRule : 1; //"on" activates bonus for three pairs in a single roll
		unsigned short hardWinRule : 1; //"on" activates the hard win rule-set
	} s; //close the bit field structure
}; //close the union

P.S. I would have thought you should know that "it won't compile" is a bit useless without the actual compiler errors :p

Hmmm... I did some more experimenting, and it seems that even though the original version of the union is specified as 16 bits, the compiler is padding it to 32 bits. Even if I define "flagGroup" as a char instead of a short, the union comes out as 32 bits.
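For reference, here's how I'm measuring it, just a quick sizeof test (assuming the header above is BitFlags.h):

#include <iostream>
#include "BitFlags.h"

int main() {
	std::cout << "sizeof(bitFlags) = " << sizeof(bitFlags) << std::endl;	//reports 4 here, not the 2 I expected
	return 0;
}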

I think I'm just going to re-declare multiplierRule as a signed 2-bit field and call it good.

[edit]
overlapped with the reply above....

P.S. I would have thought you should know that "it won't compile" is a bit useless without the actual compiler errors :p

Compiler is MS VC++ 2008.
In this particular instance, the specific error was irrelevant... but if you must know...

error C4430: missing type specifier - int assumed. Note: C++ does not support default-int

If you put in flagGroup to try to set the size of the bit field, then forget it. You need to use the correct type in the bit field itself.

If you put in flagGroup to try to set the size of the bit field, then forget it. You need to use the correct type in the bit field itself.

Well, it's a union, so all the objects are stored in the same block of memory. flagGroup was originally put there (1) to set up a 16-bit (2-byte) block of memory and keep things aligned on byte boundaries, and (2) to simplify initialization of the whole flag set: if I say bitFlags.flagGroup = 0; it resets all the flags to off at once.
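For example, here's a minimal sketch of how I use it (this relies on the anonymous-struct extension from my original union, so the flags are accessed directly):

bitFlags flags;
flags.flagGroup = 0;	//one assignment switches every rule off at once
flags.straightRule = 1;	//individual flags can then be set...
bool hardWin = (flags.hardWinRule != 0);	//...or tested like ordinary integers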

When I wrote the OP, I was not aware that the compiler was padding the union out to 32 bits anyway, which unfortunately makes this whole conversation mostly irrelevant :(. I just changed the field to signed multiplierRule : 2; and called it good. The net size of the storage block didn't increase, and now I can store a negative value. The size didn't increase because of the change, as I had suspected it would; it was already larger than I thought to begin with.

union bitFlags {
	unsigned short flagGroup; //provide a 16-bit backbone (on 32-bit systems)
	struct { //the bit field
		signed short multiplierRule : 2; //multiples-of-a-kind rule,
		//"on" causes values to be 2x,3x,4x three-of-a-kind value resp.
		unsigned short straightRule : 1; //"on" activates bonus for a straight in a single roll
		unsigned short threePairRule : 1; //"on" activates bonus for three pairs in a single roll
		unsigned short hardWinRule : 1; //"on" activates the hard win rule-set
	} s; //close the bit field structure
}; //close the union
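With the signed field in place, something like this now works (just a sketch; a signed 2-bit field holds -2 through 1 on a two's-complement machine):

bitFlags flags;
flags.flagGroup = 0;	//all rules off; multiplierRule reads as 0
flags.s.multiplierRule = -1;	//the "inactive" state
flags.s.multiplierRule = 1;	//one of the two rule variations
//flags.s.multiplierRule = 2;	//won't fit: a signed 2-bit field tops out at 1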

Hmm... I think I'll have to do some more experimenting. I entered this and it gave me 16 bits, like it did for you. I'll have to do more digging, too. All of the documentation I've been able to find so far has been very cryptic about why the data type makes such a difference.

The data types must mean something different in this context...

After further experimentation, I believe I understand what is happening now. Because the behavior is compiler-specific, I won't elaborate on what I figured out. Essentially, it came down not only to which data types I used but also to how the fields were arranged. At one point, I had the size of this thing all the way up to 16 bytes!

I did a lot of experimenting and then, based on what I observed, tried this to see if it would compress the information into 1 byte (8 bits):

union bitFlags {
	unsigned char flagGroup;  //provide an 8-bit "backbone"
	struct {	//the bit field
		char multiplierRule	:  2;	//"on" activates the multiples-of-a-kind rule,
		//unsigned multiplierType	:  1;	//"on" causes values to be 2x,3x,4x three-of-a-kind value resp.
		unsigned char straightRule	:  1;	//"on" activates bonus for a straight in a single roll
		unsigned char threePairRule	:  1;	//"on" activates bonus for three pairs in a single roll
		unsigned char hardWinRule	:  1;	//"on" activates the hard win rule-set
	};	//close the bit field structure
};	//close the union

As theorized, it did fit into 1 byte with this format, but I'll have to stick with the 2-byte short-based version because it won't behave the way I need it to as chars. However, with my compiler at least, if you uncomment line 5 ("unsigned multiplierType"), this thing balloons from 1 byte to 12!

Whereas, this version:

union bitFlags {
	unsigned char flagGroup;  //provide an 8-bit "backbone"
	struct {	//the bit field
		char multiplierRule	:  2;	//"on" activates the multiples-of-a-kind rule,
		unsigned char straightRule	:  1;	//"on" activates bonus for a straight in a single roll
		unsigned char threePairRule	:  1;	//"on" activates bonus for three pairs in a single roll
		unsigned char hardWinRule	:  1;	//"on" activates the hard win rule-set
		unsigned multiplierType	:  1;	//"on" causes values to be 2x,3x,4x three-of-a-kind value resp.
	};	//close the bit field structure
};	//close the union

will only be 8 bytes.

Thanks for the help.

However, with my compiler at least, if you uncomment line 5 ("unsigned multiplierType"), this thing balloons from 1 byte to 12!

I can explain that if you haven't already.

You changed the basic type of the bit field when you uncommented line 5. It went from trying to put bits into a 1-byte field to trying to put bits into a 4-byte field, so it started again at the beginning of a new allocation unit. Since the new field is 4 bytes long (an unsigned int), it needs to be on a 4-byte boundary, so 6 padding bits are added to the original field, followed by 3 padding bytes. The first 2-bit field is taking 4 bytes of space!

Then you change back to a 1-byte type, so the compiler starts over yet again: it puts 31 padding bits into the 4-byte field and starts a 1-byte field for the final three bit fields.

Then it adds 5 padding bits to complete that field. At this point 9 bytes have been used (1-byte field, 3 padding bytes, 4-byte field, and 1-byte field); however, this is a structure and needs to conform to normal structure padding rules. The structure contains a 4-byte field, so it needs to align on a 4-byte boundary. To ensure that this happens in an array, the compiler adds 3 more padding bytes, for a total of 12 bytes.

And the moral of this story is: if you are using bit fields, always use the same base type (signed or unsigned) throughout the same structure so the compiler can pack them as tightly as it can. You should be able to get a 1-byte structure if you use all signed char or unsigned char for your bit fields.
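To make that concrete, here's a quick sketch contrasting the two layouts (sizes are compiler-specific; the numbers in the comments are the ones reported above for this compiler):

#include <cstdio>

struct mixedTypes {	//base type changes mid-stream
	char a	:  2;
	unsigned b	:  1;	//4-byte base type forces a new, aligned allocation unit
	unsigned char c	:  1;
};

struct uniformTypes {	//one base type throughout
	unsigned char a	:  2;
	unsigned char b	:  1;
	unsigned char c	:  1;
};

int main() {
	std::printf("mixedTypes:   %u bytes\n", (unsigned)sizeof(mixedTypes));	//12, with the padding described above
	std::printf("uniformTypes: %u bytes\n", (unsigned)sizeof(uniformTypes));	//1
	return 0;
}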


BTW, if you want a 1-byte signed bit field (or variable), it is best to use signed char, because the sign of plain char is platform-dependent.


Using char or short for bitfields is a non-standard extension. Prefer doing the shift-and-mask yourself with an unsigned type.

Since almost everything to do with bit fields is platform-defined, I would have no problem using extensions such as these.

The long and the short of it is if you want portable code don't use bit fields.

Using char or short for bitfields is a non-standard extension. Prefer doing the shift-and-mask yourself with an unsigned type.

Thanks, I will keep that in mind if I ever need to generate portable code. :cool:

I do have a question though. The documentation I have been able to find online indicates that in C a bitfield must use some sort of int, but C++ expands the allowable types to integral types and enumerations. This is also consistent with what I have read in Schildt's book:

A bit-field must be declared as an integral or enumeration type. Bit-fields of length 1 should be declared as unsigned, because a single bit cannot have a sign.

Aren't shorts and chars integral types? Or am I misunderstanding something?

Mostly, I try to generate standards-compliant code, but there really isn't anything that I do right now that needs to be highly portable.

Do you mind explaining the shift-and-mask you mentioned?

I like portable code so I have to write it only once, not for each compiler. YMMV.
http://c-faq.com/misc/bitsets.html (Again, though, prefer unsigned types when dealing with bits.)
Bitfields are syntactic sugar that hide the shift-and-mask: your code is doing it "as if" anyway.
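A minimal sketch of doing the shift-and-mask by hand (the mask names below are placeholders that mirror the flags in this thread):

#include <cstdio>

//each rule gets an explicit mask within one ordinary unsigned value
const unsigned short MULT_MASK      = 0x0003;	//low 2 bits hold the multiplier state (0-3)
const unsigned short STRAIGHT_FLAG  = 0x0004;
const unsigned short THREEPAIR_FLAG = 0x0008;
const unsigned short HARDWIN_FLAG   = 0x0010;

int main() {
	unsigned short flags = 0;	//all rules off

	flags |= STRAIGHT_FLAG;	//set a flag
	flags &= ~STRAIGHT_FLAG;	//clear it
	bool hardWin = (flags & HARDWIN_FLAG) != 0;	//test it

	flags = (flags & ~MULT_MASK) | (2 & MULT_MASK);	//store state 2 in the 2-bit slot
	unsigned mult = flags & MULT_MASK;	//read it back

	std::printf("mult=%u hardWin=%d\n", mult, hardWin ? 1 : 0);
	return 0;
}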
