So, I am learning C and have this newbie problem. This is my snippet:

```c
#include <stdio.h>

int main(void)
{
    char x;
    unsigned y;

    x = 0xFF;
    y = 0xFFFF;

    printf("Size of char: %d-bits\n", sizeof(char)*4);
    printf("Size of int: %d-bits\n", sizeof(int)*4);

    printf("\n0x%X in decimal: %d\n", x, x);
    printf("\n0x%X in decimal: %d\n", y, y);
    return 0;
}
```

This prints:

```
Size of char: 4-bits
Size of int: 16-bits

0xFFFFFFFF in decimal: -1
0xFFFF in decimal: 65535
```

Question:
a. Why does the program print an 8-digit hex number (0xFFFFFFFF) when the input is 0xFF, and when sizeof() reports char as 4 bits?

b. The program returns 0xFFFFFFFF as -1 in decimal. However, when I use a hex-to-decimal converter I get 0xFF as 255 and 0xFFFFFFFF as 4294967295?

## All 17 Replies

> Question:
> a. Why does the program print an 8-digit hex number (0xFFFFFFFF) when the input is 0xFF, and when sizeof() reports char as 4 bits?

This code is wrong when you want to display the size of a char (in bits):

``printf("Size of char: %d-bits\n", sizeof(char)*4);``

Sure, it will report that a char variable consists of 4 bits, but there are two problems with your code. First: where does this magic 4 come from? What does it stand for? Second: the sizeof operator will always (under all circumstances, on all platforms) return 1 if you invoke `sizeof(char)`, so multiplying it by 4 will always give 4 as the result.
The correct way to get the number of bits per byte (a char is always exactly 1 byte) is to use the CHAR_BIT macro defined in the limits.h header. You have to include this file if you want to use the CHAR_BIT macro, which you can do by adding this directive to the top of your program: `#include <limits.h>`.

The correct way of getting the size (in bits) of a variable of type char or int in C:

char:

``printf("Size of a variable of type char (in bits): %lu", CHAR_BIT);``

integer:

``printf("Size of a variable of type integer (in bits): %lu", CHAR_BIT*sizeof(int));``

What exactly does CHAR_BIT represent?
The CHAR_BIT macro represents the number of bits in a byte. A variable of type char always has a size of exactly one byte, as defined by the standard, but this isn't the case for variables of other types; their sizes depend on your C implementation.
So multiplying the value of CHAR_BIT by the sizeof of a certain variable type (int, double, float, ...) will give you the number of bits that type consists of.

Note that sizeof(type) returns the size (in bytes) of a variable type.
Also notice that I used the %lu format specifier in the format string passed to the printf() function. This is because the sizeof operator returns a value of an unsigned integral datatype (as defined by the standard); to produce portable code I added the %lu format specifier (which will ensure a cast to an unsigned long, the biggest unsigned integral type) to make sure that the size will always be displayed correctly on the screen.

commented: Good. +13
commented: Excellent! +17

Thanks a lot, I had no idea that CHAR_BIT existed - I have not come that far yet. The only reason I used sizeof(char)*4 was to get the number of bits. However, I understand this is not good practice (no magic numbers).

But what about my other question: why do I get 0xFFFFFFFF and -1 as the decimal?

> But what about my other question: why do I get 0xFFFFFFFF and -1 as the decimal?

Oh sorry, I forgot to answer that one :P

> b. The program returns 0xFFFFFFFF as -1 in decimal. However, when I use a hex-to-decimal converter I get 0xFF as 255 and 0xFFFFFFFF as 4294967295?

Well, this is because in your code you use signed ints (that is what you get when you declare a variable by just writing `int a_variable_name`).
int exists in both signed and unsigned flavors. Your hex converter was using an unsigned int to do the conversion, while in your code you have a signed int. As the name tells you, a signed integer can carry a sign; an unsigned integer cannot.
A very non-technical (and somewhat simplified) explanation: with signed integers, the high-order bit signifies whether the number is negative (high-order bit is one) or positive (high-order bit is zero).
With unsigned integers, the high-order bit is used to extend the range of representable numbers.
Importantly, signed and unsigned types of the same kind consume the same number of bytes. That is also why an unsigned integer variable can store positive values twice as large as a signed one, while the signed one can also store negative numbers.
Something interesting for you to google is two's complement; this will give you a better insight into how the whole thing (sign bits, etc.) works.

> (which will ensure a cast to an unsigned long, the biggest unsigned integral type)

Unless size_t is unsigned long, or integral promotion makes it an unsigned long, the only thing you ensure is undefined behavior, because the type of the value and the type expected by the format specifier do not match. The convention is to use an explicit cast:

``printf("%lu\n", (unsigned long)sizeof(something));``

> to make sure that the size will always be displayed correctly on the screen.

If you are lucky. size_t is only guaranteed to be unsigned. It might be an extended type, where there is a risk of the value being truncated by the cast to unsigned long. The ideal solution is C99's %zu specifier:

``printf("%zu\n", sizeof(something));``

But for C89 compilers the best we can do is cast to the largest possible unsigned type that printf() supports and hope for the best. :)

commented: Thanks for the very useful addition :) +23

Just make it like an atoi, but change base 10 to base 16. You can make it universal as well, so that it works for all bases, by giving your atoi copy a second argument specifying the base.

commented: At best I might guess you meant the nonstandard itoa, but I fail to see where that fits into this thread. -6

> b. The program returns 0xFFFFFFFF as -1 in decimal. However, when I use a hex-to-decimal converter I get 0xFF as 255 and 0xFFFFFFFF as 4294967295?

In my previous post I forgot to point out that `0xFFFFFFFF` has a binary representation in which every bit is one. If such a binary value goes into a signed integer variable, the variable contains a value equivalent to -1 in decimal (on two's complement machines).
If you read about two's complement (a must-read! Google it), you'll understand why this is the case :)

Also thanks to Tom Gunn for pointing out a mistake in my post and providing an excellent correction :)

> In my previous post I forgot to point out that `0xFFFFFFFF` has a binary representation in which every bit is one. If such a binary value goes into a signed integer variable, the variable contains a value equivalent to -1 in decimal.
> If you read about two's complement (a must-read! Google it), you'll understand why this is the case :)
>
> Also thanks to Tom Gunn for pointing out a mistake in my post and providing an excellent correction :)

I was reading through "two's complement" on a couple of sites and think I have gotten the hang of it. If I change the (signed) char variable to unsigned, it returns 255 as expected (unsigned char x = 0xFF returns 255, while char x = 0xFF returns -1).

However, the strange thing is that it doesn't matter whether I have the int variable signed or unsigned - the program always returns the same number:

int y = 0xFFFF returns 65,535
unsigned y = 0xFFFF returns 65,535 (should return -1)

> However, the strange thing is that it doesn't matter whether I have the int variable signed or unsigned - the program always returns the same number:
>
> int y = 0xFFFF returns 65,535
> unsigned y = 0xFFFF returns 65,535 (should return -1)

I'm rather curious why you think an unsigned value should be negative, but do remember to try to use the correct format specifiers with printf for the data type being used.

> I'm rather curious why you think an unsigned value should be negative, but do remember to try to use the correct format specifiers with printf for the data type being used.

Sorry, I meant that the "signed int" should have been -1 (while unsigned is 65,535).

The printf is as follows:

``printf("\n0x%X in decimal: %d\n", y, y);``

and using %u instead:

``printf("\n0x%X in decimal: %u\n", y, y);``

returns the same (65,535).

Do I need a different format specifier in printf?

thanks,

```c
#include <stdio.h>
#include <limits.h>

int main(void)
{
    int x;
    unsigned y;

    printf("INT_MAX = %d\n", INT_MAX);
    if ( 0xFFFF < INT_MAX )
    {
        puts("Don't be surprised if 0xFFFF doesn't wrap to -1");
    }

    x = 0xFFFF;
    y = 0xFFFF;
    printf("0x%X in decimal: %d\n", (unsigned)x, x);
    printf("0x%X in decimal: %d\n", y, y);

    x = UINT_MAX;
    y = UINT_MAX;
    printf("0x%X in decimal: %d\n", (unsigned)x, x);
    printf("0x%X in decimal: %u\n", y, y);

    return 0;
}

/* my output
INT_MAX = 2147483647
Don't be surprised if 0xFFFF doesn't wrap to -1
0xFFFF in decimal: 65535
0xFFFF in decimal: 65535
0xFFFFFFFF in decimal: -1
0xFFFFFFFF in decimal: 4294967295
*/
```
commented: Good demonstration of the problems which can occur :) +23

Hi Dave, Thanks for the help. I added a few more lines to your code for my understanding:

```c
#include <stdio.h>
#include <limits.h>

int main(void)
{
    int x;
    unsigned y;
    short z;

    x = 0xFFFFFFFF;
    y = 0xFFFFFFFF;

    printf("Int is %d-bytes\n", sizeof(int));    //Size (in bytes) of int variables on this computer (32-bit)
    printf("Minimum value of short int: %d\n", SHRT_MIN);    //Minimum value of a "signed" short int (16-bit)
    printf("Size of short int is %d-bytes\n", sizeof(short));    //Size of short int in bytes (2 bytes)
    printf("INT_MAX = %d\n", INT_MAX);   //INT_MAX is the maximum value of a signed int (32-bit)
    if ( 0xFFFF < INT_MAX )
    {
        puts("Don't be surprised if 0xFFFF doesn't wrap to -1");
    }

    printf("0x%X in decimal: %d\n", (unsigned)x, x); //cast x to unsigned
    printf("0x%X in decimal: %d\n", y, y);

    x = UINT_MAX;    //UINT_MAX is the maximum value of an unsigned int
    y = UINT_MAX;
    printf("UINT_MAX = %d\n", UINT_MAX);
    printf("0x%X in decimal: %d\n", (unsigned)x, x);
    printf("0x%X in decimal: %u\n", y, y);

    return 0;
}
```

/* my output
Int is 4-bytes
Minimum value of short int: -32768
Size of short int is 2-bytes
INT_MAX = 2147483647
Don't be surprised if 0xFFFF doesn't wrap to -1
0xFFFFFFFF in decimal: -1
0xFFFFFFFF in decimal: -1
UINT_MAX = -1
0xFFFFFFFF in decimal: -1
0xFFFFFFFF in decimal: 4294967295
*/

Questions:
1. Why does UINT_MAX return -1? (As it is the unsigned max, I would expect the 32-bit unsigned maximum of 4,294,967,295.)

2. Why do x and y not return the same number (after both have been set to UINT_MAX), given that x is cast to unsigned and y is already defined as unsigned? (x returns -1 and y returns 4,294,967,295.)

thanks,

> Questions:
> 1. Why does UINT_MAX return -1? (As it is the unsigned max, I would expect the 32-bit unsigned maximum of 4,294,967,295.)

It doesn't. The specifier `%d` is not correct for use with an `unsigned` value outside the range of an `int`. Using `%u` displays the correct value.

> 2. Why do x and y not return the same number (after both have been set to UINT_MAX), given that x is cast to unsigned and y is already defined as unsigned? (x returns -1 and y returns 4,294,967,295.)

See #1.

> It doesn't. The specifier `%d` is not correct for use with an `unsigned` value outside the range of an `int`. Using `%u` displays the correct value.

Thanks Dave, I am gradually understanding :)

One more thing. First, setting:
x = 0xFFFF
printf for x as unsigned returned 65,535 (16-bit max) and 0xFFFF

Then, setting:
x = UINT_MAX
printf for x as unsigned returned 4,294,967,295 (32-bit max) and 0xFFFFFFFF

Why does this happen, i.e. going from 16-bit to 32-bit? Why doesn't x in the former case also return a 32-bit max value (especially since INT_MAX delivers a 32-bit max value)?

thanks again.

> Why doesn't x in the former case also return a 32-bit max value

I do not understand the question. 0xFFFF is obviously not the same value as 0xFFFFFFFF, so why should they be printed as the same value? If it helps, you can mentally add leading 0's for any value that does not completely fill the space for the data type. That makes the difference easier to see:

0x0000FFFF != 0xFFFFFFFF

> I do not understand the question. 0xFFFF is obviously not the same value as 0xFFFFFFFF, so why should they be printed as the same value? If it helps, you can mentally add leading 0's for any value that does not completely fill the space for the data type. That makes the difference easier to see:
>
> 0x0000FFFF != 0xFFFFFFFF

Thanks,
I understand this, but x was assigned x = 0xFFFF, and printf returned 0xFFFF and 65,535 (unsigned). OK.

Then x was set to x = UINT_MAX, and printf returned 0xFFFFFFFF, -1 (signed) and 4,294,967,295 (unsigned). I don't understand why this happens, because x was assigned 0xFFFF (but when set to UINT_MAX it prints 0xFFFFFFFF).

One more (sorry for my ignorance..):
Why does `printf("UINT_MAX = %d\n", UINT_MAX);` return -1? (As UINT_MAX is the unsigned max, it should be the 32-bit value 4,294,967,295.)

thanks again.

> I don't understand why this happens, because x was assigned 0xFFFF (but when set to UINT_MAX it prints 0xFFFFFFFF)

Why would that not happen? You are printing the current value of x. The first time the current value is 0xFFFF and the second time the current value is 0xFFFFFFFF. What were you expecting? That might help me to understand your question.

> Why does `printf("UINT_MAX = %d\n", UINT_MAX);` return -1?

%d interprets the value as a signed int, and a signed int does not go up to 4,294,967,295 when int is 32 bits. Technically you are invoking undefined behavior by passing an unsigned value outside the range of int to %d, but in practice what happens is that the value wraps around at the end of the signed range, so the all-ones bit pattern comes out as -1. The same thing would probably happen if you computed `INT_MAX + INT_MAX + 1` on a two's complement machine.

> Why would that not happen? You are printing the current value of x. The first time the current value is 0xFFFF and the second time the current value is 0xFFFFFFFF. What were you expecting? That might help me to understand your question.

Thank you Tommy, I think you answered my question. x = UINT_MAX actually sets x to 0xFFFFFFFF. This I didn't know.

> %d interprets the value as a signed int, and a signed int does not go up to 4,294,967,295 when int is 32 bits. Technically you are invoking undefined behavior by passing an unsigned value outside the range of int to %d, but in practice what happens is that the value wraps around at the end of the signed range, so the all-ones bit pattern comes out as -1. The same thing would probably happen if you computed `INT_MAX + INT_MAX + 1` on a two's complement machine.

I think I need to study this a bit more and try to understand the difference between %d and %u, and overflow. I didn't know that %d interprets values only as signed int. This seems a bit complicated.

Thanks a lot.
