This loop is going slowly because of the push_back part at the end. Can I make it go any faster?

Basically this is what it does

1) Reads the file into a char array
2) Takes each char in the array and transfers it into a vector.

char *tempItem = new char[1];
delete [] tempItem;
for (unsigned long int q = 0; q < tableCount; q++)
{
    //for each (AoE2Wide::DrsItem item in inTables[q].Items)
    for (unsigned long int j = 0; j < inTables[q].Items.size(); j++)
    {
        tempItem = new char [inTables[q].Items.at(j).Size];
        inFileDrs.read(tempItem, inTables[q].Items.at(j).Size);
        for (unsigned long int k = 0; k < inTables[q].Items.at(j).Size; k++)
            inTables[q].Items.at(j).Data.push_back(tempItem[k]);
        delete [] tempItem;
    }
}

You can replace

for (unsigned long int k = 0; k < inTables[q].Items.at(j).Size; k++)
    inTables[q].Items.at(j).Data.push_back(tempItem[k]);

with

inTables[q].Items.at(j).Data.insert(inTables[q].Items.at(j).Data.end(),
                                    tempItem,
                                    tempItem+inTables[q].Items.at(j).Size);

with the same effect. However, it would not surprise me to learn that doing so does not change the speed of your program much. Unless you have actually measured where it spends its time, you probably don't know where the time goes.

There are a few inefficiencies in your implementation (the push_back() is only one of them). In general, you should avoid allocating memory inside a loop. push_back can be costly because the vector occasionally has to re-allocate and copy its storage as it grows (growth is amortized constant time in STL containers like vector, but the re-allocations still cost something). The new allocation of tempItem on every iteration of the inner loop is also wasteful. I propose something like this:

//First, figure out what the maximum size for tempItem should be:
int maxSize = 1;
for(int q = 0; q < tableCount; q++)
  for(int j = 0; j < inTables[q].Items.size(); j++)
    if(maxSize < inTables[q].Items[j].Size)
      maxSize = inTables[q].Items[j].Size; //notice that using [j] is more efficient than .at(j) (skips useless bound-checking)
//Then, allocate enough memory for tempItem, and do it only once.
char *tempItem = new char[maxSize]; 
for (int q = 0; q < tableCount; q++) //a plain int loop counter is fine here; the exact integer type rarely matters for speed.
{
  for (int j = 0; j < inTables[q].Items.size(); j++)
  {
    //Read the data, and tempItem is guaranteed to be big enough to store the data.
    inFileDrs.read(tempItem, inTables[q].Items[j].Size);
    //Allocate enough space for the data in your vector:
    inTables[q].Items[j].Data.resize(inTables[q].Items[j].Size);
    //Then, just copy, using std::copy (from #include <algorithm>) (or you could use a loop too)
    std::copy(tempItem,tempItem + inTables[q].Items[j].Size, inTables[q].Items[j].Data.begin());
  }
}
delete[] tempItem;

However, I'm pretty sure that the following will work just fine (maybe you should verify that it does):

for (int q = 0; q < tableCount; q++) {
  for (int j = 0; j < inTables[q].Items.size(); j++) {
    //Allocate enough space for the data in your vector:
    inTables[q].Items[j].Data.resize(inTables[q].Items[j].Size);
    //Read the data into the vector directly:
    inFileDrs.read(&(inTables[q].Items[j].Data[0]), inTables[q].Items[j].Size);
  }
}

The above would be as efficient as it can get, as far as I know.


Tyvm. I was trying to use the read function like that before, but I couldn't get the syntax right. It's because I wasn't using the resize part of it.
