So, in the course of a numerical simulation I have noticed that if I save and load the state of my system (a large number of doubles), the subsequent behavior of the system differs slightly from what I get if I just keep the data "inside the program" and never write it to a file for safekeeping. This is a problem, as I'd like everything to be 100% reproducible (never mind that I am flirting with the limits of numerical precision anyhow). I have run a lot of tests along the lines of printing the values before and after loading, and in all cases the first 16 digits match exactly.
My theory is now that the problem occurs when converting between binary and ASCII - it is my understanding that 16 decimal digits do not always pin down a unique 8-byte double, and vice versa. If that is the case, and some sort of rounding takes place during the conversion, it would explain why I can't see the difference when printing.

So, I thought the solution would be to save and load with binary streams. But all the tutorials on binary writing still seem to assume a cast to char - am I understanding that correctly? If so, I assume the data still passes through the supposedly imperfect translator between binary and text, and nothing is solved.

I have been considering solutions of the type (both are taken from the wise and all-knowing internet):

ofstream outfile ("",ios_base::binary);
outfile << 1234 << " " << 5678 << " " << 9012 << endl;

ifstream f("", ios_base::binary);

istream_iterator<int> b(f), e;
vector<int> v (b, e);


char * buffer;
long size;

ifstream infile ("test.txt",ifstream::binary);
ofstream outfile ("new.txt",ofstream::binary);

// get size of file
infile.seekg (0, ifstream::end);
size = infile.tellg();
infile.seekg (0);

// allocate memory for file content
buffer = new char [size];

// read content of infile
infile.read (buffer,size);

// write to outfile
outfile.write (buffer,size);

// release dynamically-allocated memory
delete[] buffer;

return 0;

As usual, input will be profoundly appreciated. If I have misunderstood something about doubles or binary, I would like to be corrected as well =)

The first example you gave does not actually write anything in binary; ios_base::binary does not change how operator<< formats the data. It is just an indicator that you are going to put binary data into the file, but outfile << 1234 << " " << 5678 << " " << 9012 << endl; will still produce the text output 1234 5678 9012, not those numbers' binary representations.

All your second example does is copy data from one file into another, in which case it does not matter whether the file is binary or text.

Edited 5 Years Ago by chrjs: n/a

>>ios_base::binary does not actually change how the data is written to the file
That's not entirely true. While a file opened in binary mode can still hold human-readable text, the binary flag does change how certain characters are handled when they are written to or read from the file. Among other things, it stops the newline character ('\n') from being translated to CR+LF on Windows systems.

Edited 5 Years Ago by Fbody: n/a

All your second example does is copy data from one file into another, in which case it does not matter whether the file is binary or text.

But the problem with the second solution is that ostream::write() will only accept a char* as its first parameter - not a double*, which is what I have (or rather a vector<double>, but I could just pass &vec[0]). As I understand it, that would mean I am still casting my doubles into chars, and therefore not avoiding the binary/text translation? Or do I not understand your comment?

When broken down, a char is just a single byte; that is why write() only takes char* pointers - it is really just a pointer to raw bytes. You can cast your double* to a char* and change the size argument so that it accounts for the size of a double, like this:

//this will write a single double in binary format
outfile.write( (char*)(&a_double), sizeof(double) );

This isn't converting from binary to ASCII or anything of the sort; it just treats the 8-byte (on most computers) double as 8 raw characters (bytes).

OK, so it seems I can use that for writing. However, I am having trouble reading:

I have tried both methods, and neither seems to be working for me:

vector<double> temp;
// (temp gets filled with values elsewhere; in this run the first one is 10)

ofstream outfile ("new.txt",ofstream::binary);
outfile.write ((char*) &temp[0], temp.size()*sizeof(double)/sizeof(int));

ifstream infile ("new.txt",ifstream::binary);

//--------first attempt:-----------

int size; char* buffer;

// get size of file
infile.seekg (0, ifstream::end);
size = infile.tellg();
infile.seekg (0);
cout << "bytes read: " << size << endl;

// allocate memory for file content
buffer = new char [size];

// read content of infile
infile.read (buffer,size);

double* otherarray = (double*) buffer;

for (unsigned int i=0; i<4; ++i)
    cout << otherarray[i] << endl;

//--------second attempt:-----------

ifstream infile2 ("new.txt",ifstream::binary);

istream_iterator<double> b(infile2), e;
vector<double> v (b, e);

cout << "doubles copied: " << v.size() << endl;

for (unsigned int i=0; i<v.size(); ++i)
    cout << v[i] << endl;

the output from the above is

bytes read: 8
doubles copied: 0

I have tried changing temp, and it seems that the first read value (10 in this case) is always correct, so there's something right in there...

Edited 5 Years Ago by miturian: n/a

So, I just discovered that I should of course have written sizeof(char), not sizeof(int), in the write call. Now the first version works.

However, can anyone tell me why the second version doesn't? It's a lot cleaner to look at, so I'd like to just be able to load into the vector directly =)
