In C++, where are the header files that define the standard primitives and bitfields?

I would like to confirm something, if it is possible.

-Alex

Primitives and bitfields are a part of the language; there's no header. Why don't you describe what you're trying to confirm instead of asking a vague question?

I'd like to know how the types are defined based on the platform.

I have heard that in C++ a char is always 1 byte and an integer is always 4 bytes on any machine, regardless of how many bits make up an individual byte.

For example, a char can still be measured as 1 byte even though that byte consists of 16 bits instead of 8, so I wanted to see how this is possible to better understand the situation...

I'm sorry if this sounds like a loose question, but it's weird that 1 byte can still be 1 byte even if the type consists of 16 bits instead of 8 @_@.

I wanted to see if there was some header or implementation that defined the way bytes were mapped for each type >_<

-Alex

What you're asking about is defined (or deliberately left unspecified) in the C++ standard. The size of an int should be the natural size of the architecture - many of us grew up with 2-byte ints. The standard doesn't prescribe specifics; it's more on the order of "each larger data type must be at least as large as the one that comes before it." Thus, today you often see int and long int being the same size.

3.9.1 Fundamental types
1 Objects declared as characters (char) shall be large enough to store any member of the implementation’s basic character set. If a character from this set is stored in a character object, the integral value of that character object is equal to the value of the single character literal form of that character. It is implementation defined whether a char object can hold negative values. Characters can be explicitly declared unsigned or signed. Plain char, signed char, and unsigned char are three distinct types. A char, a signed char, and an unsigned char occupy the same amount of storage and have the same alignment requirements (3.9); that is, they have the same object representation. For character types, all bits of the object representation participate in the value representation. For unsigned character types, all possible bit patterns of the value representation represent numbers. These requirements do not hold for other types. In any particular implementation, a plain char object can take on either the same values as a signed char or an unsigned char; which one is implementation-defined.

2 There are four signed integer types: “signed char”, “short int”, “int”, and “long int.” In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment(39) ; the other signed integer types are provided to meet special needs.
__________________
39) that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>.

>I have heard that in C++ a char is always 1 byte
Yes. Or to be more specific, "char" and "byte" are synonymous terms, and sizeof(char) is guaranteed to be 1.

>and an integer is always 4 bytes on any machine
Nope. The size of an integer is at least 16 bits, but it can be whatever the implementation chooses as long as the basic requirements for type relations are met.

>it's weird that 1 byte can still be 1 byte even if
>the type consists of 16 bits instead of 8 @_@.
It's not weird at all when you understand that "byte" is an abstraction for the smallest addressable unit. An octet (the 8-bit entity you're familiar with) is one such concrete implementation of this abstraction.

>I wanted to see if there was some header or implementation
>that defined the way bytes were mapped for each type
The closest you can get is looking in <limits.h>. CHAR_BIT will tell you how many bits are in a byte, and the *_MIN/*_MAX values will give you an idea of how the other types are laid out on your system. You can find the minimum requirements by looking at a suitable draft of the C standard.

Ah... I think I'm understanding...

So basically, for an octet byte (8 bits) the table would look something like this...


fundamental integral types:

(using this list as an example)

-char (always exactly 1 byte, by definition)
-short (at least as large as char and at least 16 bits; typically 2 bytes)
-int (at least sizeof(short) bytes; typically 4 bytes)
-long (at least sizeof(int) bytes and at least 32 bits; typically 4 or 8 bytes)

Ah, so it's no wonder int and long have the same range on some machines! O_O

If this is right, this clears some mist XD

Though I have yet to see a machine where a byte is made of a different number of bits. Sorry for my ignorance @_@.

I hope this is the right mindset for this #_#

-Alex

>Though I have yet to see a machine with
>a different bit-set that determines a byte.
If your work is primarily on workstations, you aren't likely to. DSPs are a common example where CHAR_BIT is something other than 8.

Suppose that one fine day (in 1990, for example) the C++ standard had fixed the native binary data representations. Would you want a 16-bit int now? How many long long longs would you be ready to add in the future?

An experienced and far-sighted software architect never relies on native binary representations for external data interfaces. There are plenty of ways to cope with binary data incompatibilities.

If you want fixed, standardized data type representations, switch to Java. Then try to implement useful software for PIC or TI microcontrollers in Java ;)...
