I wrote a simple program that sums the squares of the integers from m to n. If sum is declared as a double, it gives something like 4.16...x10^10 for the integers 1 to 5000. However, if sum is declared as an int, the same range reports -1270505460.

Obviously this has something to do with the 4 bytes allocated for an int, but vague intuition != explanation. Can someone give me a good explanation of this phenomenon?

This is because of the size limitations of data types.

An int is typically only 4 bytes...and has a range from -2,147,483,648 to 2,147,483,647.

A double uses 8 bytes (and a floating-point format that is beyond my comprehension at the moment) and has a range from about ±1.7E-308 to ±1.7E308.

Your number simply went out of the range of an integer.

Interestingly enough...integers will simply wrap around their cycle...i.e. if the number is larger than 2,147,483,647 it will restart the count at -2,147,483,648...while doubles that exceed their upper limit don't cause an error at all; they just become infinity.

If sum is declared as double

Why would you declare sum as a double in the first place?
You can use long or one of its variations (e.g. long long).

sum is declared as an int, for the same range it reports -1270505460.

If an int (or any other type) cannot accommodate the data, then the results are unpredictable; signed integer overflow is undefined behavior in C.


Don't make any assumptions about the size of types. The standard only guarantees minimum sizes.

True...the data types and sizes shown are typical on Windows systems, and the sizes and ranges may differ on other platforms and compilers...

You can determine the size of an int using sizeof(int).

Still...my explanation is the most plausible...he has simply exceeded the range of an int.

I think we lost track of the question...

He wanted to know WHY this happened using an int...not alternatives to using an int.

Straight from the textbook...

When a variable is assigned a number that is too large for its data type, it overflows. Likewise, assigning a value that is too small for a variable causes it to underflow.

Typically, when an integer overflows, its contents wrap around to that data type's lowest possible value...

Only if he's using C, or C99 to be more specific.

Sorry, but I always assume C (C99, specifically); I don't know why. Probably because that's what I use.


Interestingly enough...integers will simply wrap around their cycle...i.e. if the number is larger than 2,147,483,647 it will restart the count at -2,147,483,648...

Perfect -- I was just looking for a correct explanation.
