Hi all,

I'm running through the book "The C Programming Language" by Kernighan & Ritchie and I'm having some trouble with one of the exercises.

"Write a program to determine the ranges of char, int, short and long variables, both signed and unsigned, by printing appropriate values from standard headers and by direct computation."

using standard headers:

#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("The range of int signed is %d to %d\n", INT_MIN, INT_MAX);
    printf("The range of char signed is %d to %d\n", CHAR_MIN, CHAR_MAX);
    printf("The range of short signed is %d to %d\n", SHRT_MIN, SHRT_MAX);
    printf("The range of long signed is %ld to %ld\n\n", LONG_MIN, LONG_MAX);

    printf("The range of int unsigned is 0 to %u\n", UINT_MAX);
    printf("The range of char unsigned is 0 to %u\n", UCHAR_MAX);
    printf("The range of short unsigned is 0 to %u\n", USHRT_MAX);
    printf("The range of long unsigned is 0 to %lu\n", ULONG_MAX);
}

the output being:

The range of int signed is -2147483648 to 2147483647
The range of char signed is -128 to 127
The range of short signed is -32768 to 32767
The range of long signed is -2147483648 to 2147483647

The range of int unsigned is 0 to 4294967295
The range of char unsigned is 0 to 255
The range of short unsigned is 0 to 65535
The range of long unsigned is 0 to 4294967295

However, the values for INT_MIN and INT_MAX are shown in the appendices as being the same as SHRT_MIN and SHRT_MAX respectively. Is this something that has just changed over the years?

My second problem regards using direct computation.

My code so far is as follows:

#include <stdio.h>

int main(void) {
    int i;
    long l;
    short s;
    char c;

    i = 0;
    l = 0;
    s = 0;
    c = 0;

    while (i != EOF)
        i++;

    while (l != EOF)
        l++;

    while (s != EOF)
        s++;

    while (c != EOF)
        c++;

    printf("The range of int unsigned is 0 to %u\n", i);
    printf("The range of long unsigned is 0 to %lu\n", l);
    printf("The range of short unsigned is 0 to %u\n", s);
    printf("The range of char unsigned is 0 to %u\n", c);
}

for which the output is (after a good amount of time):

The range of int unsigned is 0 to 4294967295
The range of long unsigned is 0 to 4294967295
The range of short unsigned is 0 to 4294967295
The range of char unsigned is 0 to 4294967295

which is obviously not what I'm after. I was hoping that by incrementing a "short" variable, for example, up to EOF, it would stop within its range. Why is this not occurring?

  • I do understand that my method is very inefficient and that there is most likely a much better approach.

Any help would be much appreciated.


the values for INT_MIN and INT_MAX are shown in the appendices as being the same as SHRT_MIN and SHRT_MAX

It is compiler dependent. That was true 30 years ago, when 16-bit MS-DOS compilers such as Turbo C were the norm, but modern compilers target 32-bit or 64-bit platforms, where int and short no longer share the same range. The C and C++ standards do not fix exact sizes; they guarantee only minimum ranges and that char is exactly 1 byte. The sizes of the other types depend on the compiler, and how they are implemented is usually dictated by the platform. That is why you have to check limits.h to find the ranges for your implementation.

Why are you using EOF in those loops? That is the value returned when end-of-file is reached; it applies to file I/O and shouldn't be used for anything else. My compiler (Visual Studio 2012) defines EOF as -1. Yours apparently defines it differently.

it would stop within its range. Why is this not occuring?

It's called integer overflow -- and for signed types, when it happens the behavior is undefined. On the compilers I've used, incrementing past the maximum value wraps back around to the most negative value, but there is no guarantee that all compilers do that.

However, the values for INT_MIN and INT_MAX are shown in the appendices as being the same as SHRT_MIN and SHRT_MAX respectively.

The minimum required range of int is indeed only 16 bits' worth (-32767 to 32767, the same minimum the standard sets for short), even though you're unlikely to see an int narrower than 32 bits on a modern implementation.

I was hoping that by increasing a "short" variable, for example, to EOF, it would stop within its range. Why is this not occuring?

EOF is a negative quantity that may or may not be -1. What you're actually doing with that code is invoking undefined behavior by overflowing a signed integral type. A safer approach is to use unsigned types instead (since they have defined wraparound behavior), then stop when the value wraps to 0:

unsigned char x = 0;
unsigned short y = 0;
unsigned int z = 0;

while (++x != 0);
while (++y != 0);
while (++z != 0);

printf("Range of unsigned char [0,%u]\n", --x);
printf("Range of unsigned short [0,%u]\n", --y);
printf("Range of unsigned int [0,%u]\n", --z);

All great answers. Just remember that K&R is (in internet time) from the dark ages! When Kernighan and Ritchie wrote the book on their invention, the C language, they were working on 16-bit hardware (the PDP-11), where ints and shorts were 16 bits, longs were 32 bits, and chars were 8 bits. Modern systems (64-bit CPUs) typically have 32-bit ints, 64-bit longs, 16-bit shorts (still), and 8-bit chars (still). On 32-bit systems today, you can specify a 64-bit integer as a 'long long int', which should be at least as wide on a 64-bit system, though depending on the compiler a long long int may be even wider - caveat programmer! :-)

To reiterate my favorite quote - the nice thing about "standards" is that there are so many!

On 32-bit systems today, you can specify a 64-bit integer as a 'long long int'

Not necessarily. With older versions of Microsoft Visual C++, long long is the same size as a long, which is the same size as an int; those compilers do not recognize long long as a 64-bit type and use __int64 instead.

Thanks for the help guys. That sure clears a lot up for me.

I've rewritten the program using deceptikon's method.

It works fine for unsigned variables, but the signed output is not correct for int or long.

signed part of program:

signed char c = 0;
signed short s = 0;
signed int i = 0;
signed long l = 0;

while (++c > 0);
while (++s > 0);
while (++i > 0);
while (++l > 0);

printf("Range of signed char [%d,%d]\n", ++c, --c);
printf("Range of signed short [%d,%d]\n", ++s, --s);
printf("Range of signed int [%d,%d]\n", ++i, --i);
printf("Range of signed long [%ld,%ld]\n\n", ++l, --l);

output:

Range of signed char [-128,127]
Range of signed short [-32768,32767]
Range of signed int [-2147483647,-2147483647]
Range of signed long [-2147483647,-2147483647]

Why would it be giving correct values for short and char but incorrect values for long and int?

It works fine for unsigned variables, but the signed output is not correct for int or long.

You seem to have missed the part where I stated that overflowing a signed type is undefined behavior. The same method that works for unsigned types is broken on signed types. There's no safe way to do what you want with signed types, which is one reason why the standard library provides macros that tell you the range limits for the implementation.

Ah I see. Thanks for clarifying.

i = sizeof(long);
i *= 8;
max = (unsigned long)pow(2, i);
printf("Maximum size of unsigned long is %lu\n", max);

bits = sizeof(long) * 8;
lmax = (unsigned long)pow(2, bits) - 1;
printf("Maximum size of unsigned long is %lu\n", lmax);

bits = sizeof(long) * 8;

There is no guarantee that one byte = 8 bits. There are machines that have only 4 bits to the byte.

There is no guarantee that one byte = 8 bits.

Though there's a guarantee that char (synonymous with a byte in C) is at least 8 bits. CHAR_BIT may be more than 8, but never less.

See this wiki article about 4-bit computers

I didn't dispute that they exist. They're simply not conducive for a conforming C implementation without some trickiness on the part of the compiler writer. The C standard is very clear about minimum range limits.

The C standard requires that the char integral data type is capable of holding at least 256 different values, and is represented by at least 8 bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte.

Your point is well taken.
