Hi

I am reading an Excel file and grabbing certain elements out of it to store in arrays. I know how to do it, but can you tell me if there is a more efficient way to do this? "This" meaning grabbing certain elements out of a line/string.

Inside the file the information is arranged like so, and I am grabbing the numbers in bold from each line.

AED,0.915,0.005,0.910,0.920,0.910,0.925,0.910,318870,,76,70,0
AAC,1.520,-0.030,1.520,1.525,1.540,1.540,1.520,459155,,45,68,-1
AWL,0.003,-0.001,0.003,0.004,0.004,0.004,0.003,989216,,17500,53,-18
..........

My way of getting the information is like so, but it seems really amateurish to me :P. Is there a better way to do it?

#include <iostream>
#include <iomanip>
#include <fstream>
#include <string>

using namespace std;

struct stocks
{
	string stock_data[999];
	string code[999];
	float price[999];
	float purchase[999];
	float quantity[999];
};

int main()
{

	stocks current_data;
	int i = 0;

	ifstream infile;

	infile.open("Watchlists.csv");

	if (!infile)
	{
		cout << "Failed to open file";
		return 0;
	}

	
	while(infile)
	{
		getline(infile, current_data.code[i], ',');
		infile.ignore(4000, ',');
		infile.ignore(4000, ',');
		infile.ignore(4000, ',');
		infile >> current_data.price[i];
		infile.ignore(4000, ',');
		infile.ignore(4000, ',');
		infile.ignore(4000, ',');
		infile.ignore(4000, ',');
		infile.ignore(4000, ',');
		infile.ignore(4000, ',');
		infile >> current_data.quantity[i];
		infile.ignore(4000, ',');
		infile.ignore(4000, ',');
		infile.ignore(4000, '\n');
		i++;

	}

return 0;
}

Do you mean efficient, or elegant?

Do you mean efficient, or adaptable?

You could create (or find; there are plenty for this common exercise) a CSV class which does all the comma parsing for you.

while (getline(infile, line)) {
  CSV parser(line);  // does all the magic of splitting a line at commas
  current_data.code[i] = parser.column(1);  // and so on
}
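For illustration, here is a minimal sketch of such a class (the name CSV and the 1-based column() interface are just to match the snippet above; a real CSV library would also handle quoted fields containing commas):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Minimal CSV splitter sketch: splits one line at commas.
// Does NOT handle quoted fields -- fine for simple numeric data like this.
class CSV
{
    std::vector<std::string> fields;
public:
    explicit CSV(const std::string& line)
    {
        std::istringstream ss(line);
        std::string field;
        while (std::getline(ss, field, ','))
            fields.push_back(field);
        // std::getline drops a trailing empty field, so restore it
        if (!line.empty() && line[line.size() - 1] == ',')
            fields.push_back("");
    }
    // 1-based column access, matching the usage above
    std::string column(std::size_t i) const { return fields.at(i - 1); }
    std::size_t size() const { return fields.size(); }
};
```

With this in place the reading loop shrinks to one getline per record, and the field positions become obvious at a glance instead of being buried in a pile of ignore() calls.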

Oh, and it's probably more useful to make an array of your struct, not have arrays inside your struct.
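For example (a sketch using the field names from the code above; a std::vector also removes the arbitrary 999 limit):

```cpp
#include <string>
#include <vector>

// One struct per stock record, then a container of structs --
// each record's fields stay together instead of living in parallel arrays.
struct stock
{
    std::string code;
    float price;
    float quantity;
};

// a growable array of records instead of fixed arrays inside the struct
std::vector<stock> current_data;
```

Then current_data.push_back(record) grows the list as needed, and current_data.size() replaces the hand-maintained counter i.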

Yet another tip:

while (infile) // wrong loop!
{
  getline(...);
  ... >> ...;
  i++;
}

See what happens if the file is empty, or after the last line has been read:
1. infile is still good, so the loop is entered...
2. getline fails; all subsequent input operations are suppressed...
3. operator>> gets nothing...
4. i++ counts a nonexistent record...
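A minimal demonstration of that off-by-one (the function names are made up for this sketch; istringstream stands in for the file):

```cpp
#include <sstream>
#include <string>

// Count records with the "wrong loop": test the stream BEFORE reading.
int count_wrong(std::istream& in)
{
    int i = 0;
    std::string line;
    while (in)                  // stream still good after the last line...
    {
        std::getline(in, line); // ...so this extra getline fails...
        i++;                    // ...but i is incremented anyway
    }
    return i;
}

// Count records correctly: test the result of the read itself.
int count_right(std::istream& in)
{
    int i = 0;
    std::string line;
    while (std::getline(in, line))
        i++;
    return i;
}
```

With a two-line input ending in a newline, count_wrong reports 3 records and count_right reports 2; with an empty input they report 1 and 0.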

See the simplest tokenizer in my recent post:
http://www.daniweb.com/forums/thread197527.html
Use correct loop conditions and always test the result of every input operation.
For example:
For example:

...
vector<string> v;
...
for (i = 0; getline(is,t); i++) { // you have the next record!
  Tokenize(t,v,',');
  if (v.size() >= number_of_fields) {
      // get selected fields values
  }
}
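One possible Tokenize to go with that loop (a sketch with this signature assumed; not necessarily the one from the linked thread):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split s at the given delimiter into v (v is cleared first).
void Tokenize(const std::string& s, std::vector<std::string>& v, char delim)
{
    v.clear();
    std::istringstream ss(s);
    std::string token;
    while (std::getline(ss, token, delim))
        v.push_back(token);
}
```

After the call, v[0] is the stock code, and the numeric fields can be converted from the relevant v[n] strings -- with the v.size() check above guarding against short or malformed lines.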

I agree with Salem: as usual, "more efficient" is an unclear, ambiguous term until you define an efficiency criterion.
