I'm trying to figure out why I should convert all my unsigned ints/shorts to size_t.

I'm trying the versions below, but I'm not sure which one to use, or which type to iterate my for-loop with:

unsigned short Size()
{
   return vector.size();
}


int Size()
{
   return vector.size();
}

unsigned int Size()
{
   return vector.size();
}

size_t Size()
{
   return vector.size();
}


for (unsigned short I = 0; I < Size(); I++)
{
    //...................
}

I looked up the difference, and it says that the size of size_t can vary from machine to machine and that size_t is the size of a pointer. Then I thought there's no way anyone would allocate a vector of 4.2 billion elements, so I decided 65535 was enough and used unsigned short. But then I thought maybe that's too small, so I went with unsigned int, then came across size_t, and now I'm confused about which of them to use. short? int? unsigned int? unsigned short? size_t? long?

As for my pointer question: after using delete[] on a pointer, should I then set that pointer to 0? Example:

char* Meh = new char[1024];
delete[] Meh;
Meh = 0;

Also, why does the compiler allow me to do char Meh[vector.size() + 1] without having to use the new keyword?



If you're trying to return the size, you can do it like this:

int size() {
    return vector.size(); // implicit conversion from size_t to int
}

or like this:

int size1() {
    return (int)vector.size(); // explicit cast, avoids the conversion warning
}

and if you want to use it in a "for", like this:

for (int i = 0; i < (int)vector.size(); i++) {
    // do stuff
}

for (int i = 0; i < (int)size(); i++) {
    // do stuff
}

for (int i = 0; i < size1(); i++) {
    // do stuff
}

In my opinion, that's better than juggling conversions between short/size_t/unsigned.

And for your 2nd question: it's enough to just write the delete[] pointer; line.
You can find more information about delete[] here: http://www.cplusplus.com/reference/std/new/operator%20delete[]/
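
For example (just a minimal sketch, with a made-up useBuffer function), when the pointer is a local variable, the delete[] alone is all you need:

#include <cstddef>

void useBuffer(std::size_t n)
{
    char* buffer = new char[n]; // allocate
    // ... use buffer ...
    delete[] buffer; // release; buffer goes out of scope right after,
                     // so there is nothing left to reset to 0
}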

First of all, short types are almost never preferable, unless you are really concerned with memory consumption, and even then, it might not have any effect. In fact, the highest number that you need to store is rarely, if ever, the deciding factor (unless you are doing some really low-level stuff, in which case you would use fixed-size integer types, like uint32_t or int16_t from <stdint.h>, or <cstdint> in C++).
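
For example, here's a hypothetical sketch of where fixed-size types actually matter (the FileHeader struct is made up; <cstdint> needs C++11):

#include <cstdint>

// Hypothetical binary file header: every field needs an exact,
// platform-independent size, so fixed-width types are the right tool.
struct FileHeader
{
    std::uint32_t magic;   // exactly 32 bits on every platform
    std::uint16_t version; // exactly 16 bits
    std::int16_t  offset;  // exactly 16 bits, signed
};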

Here are the issues involved: alignment and native word size. A given architecture (CPU / RAM / bus) has an alignment size (usually 32 bits these days), which is the smallest chunk of RAM that can travel over the bus into the cache (think of the bus as a highway for bytes, where each car has exactly four seats, i.e., carries 4 bytes). The architecture also has a native word size (usually either 32 or 64 bits these days), which is the preferred size for integers in the CPU's registers, the size it operates on best.
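
You can ask the compiler about these sizes directly. A small sketch (alignof requires C++11; the exact numbers depend on your platform):

#include <iostream>

int main()
{
    std::cout << "short: size " << sizeof(short)
              << ", alignment " << alignof(short) << '\n';
    std::cout << "int:   size " << sizeof(int)
              << ", alignment " << alignof(int) << '\n';
    std::cout << "void*: size " << sizeof(void*)
              << ", alignment " << alignof(void*) << '\n';
}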

If you use a type like short (say it has 2 bytes on a given 32-bit architecture), it will have to be carried through the bus alongside two other "empty" or useless bytes, and it is likely to be placed in a full native-size register to be operated on. It is also likely to be stored in memory with two bytes of padding (empty bytes) so that the next variables in memory stay aligned to alignment-size intervals in RAM. So, in the end, it gives no benefit over a native-size integer (the memory used is the same), you suffer overhead from conversions (or non-native-size operations), and you have a limited value range. You would only really use this type (and other non-native-size types) in very specific situations, and you would have to take measures to deal effectively with alignment.
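
Here's a quick sketch of that padding effect (the exact size depends on your platform's alignment rules):

#include <iostream>

struct WithShort
{
    short a; // 2 bytes
    int   b; // 4 bytes, but must start on a 4-byte boundary,
             // so the compiler inserts 2 bytes of padding after 'a'
};

int main()
{
    // On a typical platform where int is 4-byte aligned,
    // this prints 8, not 6:
    std::cout << sizeof(WithShort) << '\n';
}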

So, using the int or unsigned int type is usually preferred for all general-purpose integer numbers. As for unsigned versus signed, it makes no difference (that I know of) performance-wise. So, it is mostly a matter of semantics: if you need a number that could be negative, use a signed value. You have to be careful to determine when values could become negative.
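
One classic thing to watch for when choosing unsigned: subtraction can't go negative, it wraps around instead. A quick sketch:

#include <iostream>

int main()
{
    unsigned int a = 2;
    unsigned int b = 5;
    // a - b cannot be -3: unsigned arithmetic wraps around,
    // so this prints 4294967293 on a platform with 32-bit int
    std::cout << a - b << '\n';
}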

Now, one special-case is the mixing of integers and pointers. Generally, integers are mixed with pointers when you index into an array (as in, ptr[i] == *(ptr + i)) or when you take the difference between two pointers, in which case, the integers represent an address offset in memory. Because these integers represent the same thing as pointers, they should be natively compatible with pointers. The standard integer types std::size_t and std::ptrdiff_t are provided for that purpose. This is why STL containers will use these types as the size-type and the index-type. So, to be very pedantic (and create production-grade code), you should do the same. But, in practice, it is OK to use unsigned int and int, respectively.
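
For example, a pedantic version of the loop from the question might look like this (just a sketch):

#include <cstddef>
#include <vector>

int main()
{
    std::vector<int> v(10, 42);

    // the index type matches the container's size type
    for (std::size_t i = 0; i < v.size(); ++i)
    {
        v[i] += 1;
    }

    // a pointer difference has its own signed type, std::ptrdiff_t
    std::ptrdiff_t span = &v[9] - &v[0]; // 9
    (void)span; // silence the unused-variable warning
}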

As for your pointer question, you don't have to set the pointer to 0 unless you are keeping that pointer around and want to signify that it points to nothing. So, if the pointer is a local variable in a function and you are deleting the used memory just before returning from the function, then you don't need to set the pointer to 0 (or nullptr). But if your pointer is a data member in an object and the object persists after the deletion of the memory, then you should set it to 0 to signify that it currently doesn't point to anything.
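
Here's a minimal sketch of the data-member case (a made-up Buffer class, ignoring copy semantics for brevity):

class Buffer
{
    char* data;

public:
    Buffer() : data(new char[1024]) {}

    void release()
    {
        delete[] data;
        data = 0; // the object lives on, so mark the pointer as empty
                  // (use nullptr in C++11)
    }

    ~Buffer()
    {
        delete[] data; // deleting a null pointer is safe, so this works
                       // whether or not release() was already called
    }
};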

As for the array with a dynamic size (without using new): this is called a variable-length array (VLA). It is a standard feature in C (since C99), which some C++ compilers (which often share a C compiler back-end) support as well. However, it is not a standard feature of C++; it has been discussed and rejected by the committee, so it will probably never become standard in C++. So, you should probably just forget about it and not use it.
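
A sketch of the standard C++ alternatives (the example function is hypothetical):

#include <cstddef>
#include <vector>

void example(std::size_t n)
{
    // char meh[n]; // a VLA: legal in C99, but NOT standard C++

    char* raw = new char[n]; // standard, but you must delete[] it yourself
    delete[] raw;

    std::vector<char> buf(n); // usually preferable: cleans up automatically
}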

So instead of char meh[Size]; I should use char* meh = new char[Size];?

And thanks for the thorough explanation :)

