How should I decide which integer type to use?

How should I decide which integer type to use?

When in doubt, use int. If int doesn't have the range you need, or if you have special needs (e.g. heavily restricted memory) that justify a smaller integer type, use something different. Over time you'll get a feel for which integer type to use in certain situations.

If you need to store large values (above 32,767 or below -32,767), use long.
If space is very important (i.e. if there are large arrays or many structures), use short.
Otherwise, use int.
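
To make that concrete, here is a quick sketch of those three guidelines; the variable names and sizes are made up purely for illustration:

    #include <stdio.h>

    int main(void)
    {
        long big_value = 100000L;      /* exceeds 32,767, so long is the
                                          safe choice (note the L suffix) */
        static short samples[100000];  /* large array where space matters,
                                          so short halves the footprint   */
        int loop_count = 100;          /* no special needs, so plain int  */

        samples[0] = 42;
        printf("%ld %hd %d\n", big_value, samples[0], loop_count);
        return 0;
    }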

commented: Kudos for getting the guaranteed range of int correct. +12

If you need to store large values (above 32,767 or below -32,767), use long.

Actually, this range only applies to 16-bit compilers such as Turbo C. Pretty much all modern C compilers for Windows, Linux and MacOS use a 32-bit int, in which case the range of values will be from -2,147,483,648 to 2,147,483,647. This matches the range of long on most 16-bit compilers, and on many 32-bit compilers for that matter. Depending on the compiler, a long may be 32 bits or larger, while (in C99 and later) a long long is at least 64 bits wide. A short must be at least 16 bits wide and no wider than an int, with 16 bits being typical.
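
If you want to see what your particular compiler actually gives you rather than guessing from its generation, <limits.h> reports the real ranges as macros; a quick sketch:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* print the actual ranges this compiler provides */
        printf("int  : %d .. %d\n", INT_MIN, INT_MAX);
        printf("long : %ld .. %ld\n", LONG_MIN, LONG_MAX);
        printf("short: %d .. %d\n", SHRT_MIN, SHRT_MAX);
        return 0;
    }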

This brings up an important point, however. In the C language standards (at least up to C99 - I don't know about C11), the sizes of the standard integer types are not fixed, because the language runs on so many different platforms that it is impossible to set one size for all of them. While almost all modern processors are 8-bit, 16-bit, 32-bit, or 64-bit - with 32 and 64 being almost universal for desktop systems after the mid-1990s, and 8 and 16 bits common in embedded systems - there are exceptions, and the language standard has to allow for these edge cases. The actual specifications can be found here.

To deal with exact sizes, the C99 and later standards added the <stdint.h> header, which defines several exact-width types in both signed and unsigned forms, taking the general form [u]intN_t, where N is the bit width. For example, a standard unsigned 16-bit value can be declared using uint16_t.

But this is a side issue for most programmers dealing with numeric data; it is rare that you would want to fix a number at a specific size. The fixed-size types are mainly used for dealing with file formats or network transmission, where the sizes of the fields are precisely set by the protocols. For general use, an int or long is just fine.
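
For instance, here is a hypothetical sketch of the file-format case: writing a 16-bit length field in a fixed big-endian layout. The function name and values are mine, not from any particular protocol:

    #include <stdint.h>
    #include <stdio.h>

    /* store a 16-bit value into a byte buffer, high byte first */
    static void put_u16_be(uint8_t *buf, uint16_t value)
    {
        buf[0] = (uint8_t)(value >> 8);   /* high byte (big-endian) */
        buf[1] = (uint8_t)(value & 0xFF); /* low byte               */
    }

    int main(void)
    {
        uint8_t header[2];
        put_u16_be(header, 513);          /* 513 == 0x0201 */
        printf("%02X %02X\n", (unsigned)header[0], (unsigned)header[1]);
        /* prints: 02 01 */
        return 0;
    }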

On a side note, if you are using Turbo C: DON'T. It is now more than twenty-three years old, and completely outdated. I know that many university systems in countries such as India and Pakistan have standardized on Turbo C++, but frankly they are doing their students a disservice by forcing them to use something so archaic. Get a modern compiler for Windows such as Visual C++ or GCC, and leave the MS-DOS era dead and buried.

Actually, this range only applies to 16-bit compilers such as Turbo C.

Incorrect. That range is the guaranteed minimum specified in the language standard, and it applies to all compilers. You're free to expect a larger range than that, but then you'd be depending on the compiler rather than the standard.

If you assume that int is 32 bits, for example, your code is technically not portable. If you assume that int is 16-bit two's complement (i.e. you get one extra step on the negative range), your code is technically not portable.

The advice to jump from int to long when exceeding the range of [-32767,+32767] is spot on.
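
One way to make that compiler dependency explicit rather than silent (my own sketch, not anything mandated by the standard) is to let the preprocessor refuse to build when the assumption fails:

    #include <limits.h>

    /* if this code really must depend on a 32-bit-or-wider int,
       fail the build on any compiler where that doesn't hold */
    #if INT_MAX < 2147483647
    #error "this file assumes int holds at least 32 bits; use long instead"
    #endif

    int main(void)
    {
        int count = 100000;  /* only safe because of the check above */
        return count == 100000 ? 0 : 1;
    }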

Valid point. Looking again at the page I'd linked to, I see I was clearly mistaken. I think I jumped to the conclusion that mridul.ahuja was talking about Turbo C, and went off half-cocked as a result. Also, it is so common to see 32-bit ints these days that it is easy to forget that relying on them is non-portable.

The first thing that comes to mind is the size of the integer data types on 32-bit and 64-bit platforms.
Using common sense, we expect 32-bit platforms to have:

  • 4 bytes per pointer
  • 4 bytes per int
  • 4 bytes per long

and perhaps we expect that 64-bit platforms have:

  • 8 bytes per pointer
  • 4 bytes per int (or 8?)
  • 8 bytes per long

The problem is that you cannot be sure about these values.
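
Since you cannot be sure, the portable move is to ask the compiler instead of assuming; a minimal sketch (%zu requires C99):

    #include <stdio.h>

    int main(void)
    {
        /* print the sizes this particular compiler actually uses */
        printf("pointer: %zu bytes\n", sizeof(void *));
        printf("int    : %zu bytes\n", sizeof(int));
        printf("long   : %zu bytes\n", sizeof(long));
        return 0;
    }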

Using common sense, we expect 32-bit platforms to have

Common sense doesn't really apply when there are many equally good expectations. For example, common sense might also say:

  • 4 bytes per pointer
  • 4 bytes per int
  • 8 bytes per long

Pointer types are also not required to all have the same size or representation. I'll use hopefully familiar nomenclature to highlight the point: while a flat memory model could be the forward-facing configuration on a platform, the compiler may choose to use near pointers under the hood unless an address crosses segments. If a near pointer is used, it'll take up a lot less memory than a far pointer.
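
A small sketch to poke at this on your own machine - on flat-memory desktop platforms these usually all print the same number, which is exactly why the weaker guarantee is easy to forget:

    #include <stdio.h>

    int main(void)
    {
        /* object pointers and function pointers need not match in size */
        printf("char *        : %zu bytes\n", sizeof(char *));
        printf("int *         : %zu bytes\n", sizeof(int *));
        printf("void (*)(void): %zu bytes\n", sizeof(void (*)(void)));
        return 0;
    }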

AFAIK there is no such thing as "far" and "near" pointers on platforms other than ancient 16-bit MS-DOS (and maybe some obscure embedded systems) - the flat memory model doesn't have segments, unless you count everything as being in one near segment.

The example is 100% hypothetical and uses what I felt would be helpful terms for nailing down the concept of pointers not being required to all have the same size.

In hindsight, I wish I had bet someone that the first reply after mine would be some pedant nitpicking that my made up example was wrong. :rolleyes:

I didn't say your reply was wrong, just outdated. Do you know of any OS other than MS-DOS 6.x and earlier (or some embedded systems) where pointers are not all the same size? I understand the standards don't say they have to be, but in practice I think they are all the same size.

I didn't say your reply was wrong, just outdated.

What part of "hypothetical" and "made up example" is not clear? Do I have to use the most modern terms possible in a fake example to highlight an abstract concept? Here, let me fix it to get you out of your MS-DOS mindset:

"while an introspective memory model could be the forward facing configuration on a platform, the compiler may choose to use empirical pointers under the hood unless the address is transcendental. If an empirical pointer is used, it'll take up a lot less memory than an intuitive pointer."

Of course, now I'd have to define my terms, whereas before I used terms that were actually used in the real world in a similar context to my example. In my opinion that made the example far more transparent.

Do you get it now? I'm not talking about MS-DOS or any "real" platform, therefore it cannot be outdated. You're tilting at windmills here, dude.

I understand the standards don't say they have to be, but in practice I think they are all the same size.

And yet your post named at least one system where they're not and implied several more. You've contradicted yourself in two sentences.
