I wrote a simple program that sums the squares of the integers from m to n. If sum is declared as a double, it gives something like 4.16...x10^10 for the integers 1 to 5000. However, if sum is declared as an int, for the same range it reports -1270505460.

Obviously this has something to do with the 4 bytes allocated for an int, but vague intuition != explanation. Can someone give me a good explanation for this phenomenon?

## All 14 Replies

This is because of the size limitations of data types.

An int is only 4 bytes on typical systems...and has a range from -2,147,483,648 to 2,147,483,647

A double uses 8 bytes (and an IEEE 754 floating-point format that is beyond my comprehension at the moment) and has a range of roughly ±1.7E308, with the smallest positive normal value around 2.2E-308.

Your number simply went out of the range of an integer.

Interestingly enough...integers will simply wrap around through their cycle...i.e. if the number is larger than 2,147,483,647 it will restart the count at -2,147,483,648...while doubles that exceed their upper limit become infinity rather than causing an error.

If sum is declared as double

Why would you declare sum as a double in the first place?
You can use long and its variations.

sum is declared as an int, for the same range it reports -1270505460.

If an int (or any other type) cannot accommodate the data, the results are unpredictable: signed integer overflow is undefined behavior in C and C++.

an int is only 4 bytes...and has a range from -2,147,483,648 to 2,147,483,647

Don't make any assumptions about the sizes of types. The standard only guarantees minimum sizes.

True...the data types and sizes shown are typical on Windows systems, and the sizes and ranges may be different on other operating systems...

You can determine the size of an integer using `sizeof(int)`.

Still...my explanation is the most plausible...he has simply exceeded the range of an int.

Instead of double, use long long.

If you are working with C++ and your application really demands high precision and range, why not try out some third-party libraries like these:

Though these libraries require you to study them before you use them, they are worth the effort.

Hope it helped, bye.

Instead of double, use long long.

Only if he's using C, or C99 to be more specific; long long wasn't part of standard C++ until C++11, though most compilers support it as an extension.

I think we lost track of the question...

He wanted to know WHY this happened using an int...not alternatives to using an int.

Straight from the textbook...

When a variable is assigned a number that is too large for its data type, it overflows. Likewise, assigning a value that is too small for a variable causes it to underflow.

Typically, when an integer overflows, its contents wrap around to that data type's lowest possible value...

Also...if you want your doubles to print as plain numbers instead of E notation, use `cout << fixed;`

I think we lost track of the question...

Very common situation in forums :)

Only if he's using C, or C99 to be more specific.

Sorry, but I'm always assuming C or C99, don't know why. Probably because I'm using it.

Interestingly enough...integers will simply wrap around through their cycle...i.e. if the number is larger than 2,147,483,647 it will restart the count at -2,147,483,648...while doubles that exceed their upper limit become infinity rather than causing an error.

Perfect -- I was just looking for a correct explanation.

