If anyone saw my last wonderfully perplexing post, I've almost finished my binary-conversion lib.

However while writing it I've come across a couple of simple questions that google doesn't seem to want to answer for me. Thus I once again find myself pulling my hair out over something trivial.

So mighty coders of Dani-Web, does anyone know the answers to the following:

C++ standard questions:
------------------------------------------------------------------------------------
The C standard states that the maximum bits in a char is 8, and C++ inherits this. I believe the standard also states the minimum bits in a char is 8; is this correct?
Am I correct in assuming a char will always be 8 bits?

I've never seen sizeof(char) report anything but one byte. Does the standard state anywhere a char must be one byte? (I'm pretty lame at reading standards... I just about hacked OpenGL)

sizeof() cannot be < 1; if a char IS 8 bits, does that mean the system byte must also be 8 bits? Or does it only use the first 8 bits of a larger system byte?

If both the above are true, does that mean that char = 8 bits = one byte on all implementations?
--------------------------------------------------------------------------------
Platform questions:

I'm currently calculating the endianness and sign-bit location for ALL data types,
e.g. Int_Endian, char_endian, etc., etc.:
Has anyone actually ever heard of a system (any system) that has a C++ compiler and stores different data types with different endianness?

Can I reasonably expect the system to use the same endianness and sign-bit location for all data types?

Conversion library questions
-------------------------------------------------------------------------------
Due to the way I'm currently storing my data, the caller must state what data type needs to be extracted from my binary data (which is effectively a single very-very-very long block of 10101011010101... like you'd see in 1970s sci-fi). Obviously I have no way of ensuring that the data being recalled really is that type, so the program could request a char and receive half a short (nonsense data).
Would you (the reader) consider this good coding, or would you prefer safeguards despite the processing-speed loss?


C++ standard questions:
------------------------------------------------------------------------------------
The C standard states that the maximum bits in a char is 8

It does not.

, and C++ inherits this. I believe the standard also states the minimum bits in a char is 8; is this correct?
Am I correct in assuming a char will always be 8 bits?

No.

I've never seen sizeof(char) report anything but one byte. Does the standard state anywhere a char must be one byte?

No.
It says sizeof(char) is 1. It also says that char is at least 8 bits.

(I'm pretty lame at reading standards... I just about hacked OpenGL)

sizeof() cannot be < 1; if a char IS 8 bits, does that mean the system byte must also be 8 bits? Or does it only use the first 8 bits of a larger system byte?

If both the above are true, does that mean that char = 8 bits = one byte on all implementations?

No.

--------------------------------------------------------------------------------
Platform questions:

I'm currently calculating the endianness and sign-bit location for ALL data types,
e.g. Int_Endian, char_endian, etc., etc.:
Has anyone actually ever heard of a system (any system) that has a C++ compiler and stores different data types with different endianness?

There is (in fact, there are several) a system with a C++ compiler whose memory addressing scheme is 1-4-3-2. Try figuring out an endianness for 16-bit values on that.

Can I reasonably expect the system to use the same endianness and sign-bit location for all data types?

See the answer to the question above.

Conversion library questions
-------------------------------------------------------------------------------
Due to the way I'm currently storing my data, the caller must state what data type needs to be extracted from my binary data (which is effectively a single very-very-very long block of 10101011010101... like you'd see in 1970s sci-fi). Obviously I have no way of ensuring that the data being recalled really is that type, so the program could request a char and receive half a short (nonsense data).
Would you (the reader) consider this good coding, or would you prefer safeguards despite the processing-speed loss?

Obviously it is not a good design (I haven't seen any code, so I can only speak to the design).

It does not.
No.
No.
No.
It says sizeof(char) is 1. It also says that char is at least 8 bits.

I beg to differ; in my copy of the ANSI C spec, section 2.2.4.2 (Numerical limits):

A conforming implementation shall document all the limits specified in this section, which shall be specified in the headers <limits.h> and <float.h>.

Sizes of integral types

The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.

maximum number of bits for smallest object that is not a bit-field (byte)
CHAR_BIT 8
minimum value for an object of type signed char
SCHAR_MIN -127

maximum value for an object of type signed char
SCHAR_MAX +127

maximum value for an object of type unsigned char
UCHAR_MAX 255

If the maximum number of bits is 8 and the minimum storable values are as shown, then a char must always be 8 bits, right?
Or am I reading it backwards and it means something else?

There is (in fact, there are several) a system with a C++ compiler whose memory addressing scheme is 1-4-3-2. Try figuring out an endianness for 16-bit values on that.

What kind of system does that? Can I expect to have to deal with that kind of system for a graphical application, or is it somewhat outdated?

Obviously it is not a good design (I haven't seen any code, so I can only speak to the design).

Time to revise my code to ensure some level of safety. :-D

I beg to differ; in my copy of the ANSI C spec, section 2.2.4.2 (Numerical limits):

<snip>

am I reading it backwards and it means something else?

I think you are not understanding the following

Sizes of integral types

The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.

I think you are not understanding the following

Sizes of integral types

The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.

Ah, I see, I definitely mis-read that line.
Thank you kindly mitrmkar, back to the code I go!
