
Hi guys,

This is not related to a syntax or runtime problem. What I am going to ask is more about how Linux and Windows handle writing data from a buffer to a file. I have this code here, wrapped in a timing block, to write a buffer to a file.

StartCounter();
if(rows != fwrite(image, cols, rows, fp)){
    fprintf(stderr, "Error writing the image data in write_pgm_image().\n");
    if(fp != stdout) fclose(fp);
    return(0);
}
test = GetCounter();

StartCounter() and GetCounter() are defined as follows:

#include <stdio.h>
#include <shrUtils.h>

#ifdef _WIN32
#include <windows.h>
double PCFreq = 0.0;        /* QueryPerformanceCounter ticks per millisecond */
__int64 CounterStart = 0;
#endif

#ifdef __linux__
#include <sys/time.h>
struct timeval ts_start, ts_end;
#endif

void StartCounter()
{
#ifdef _WIN32
    LARGE_INTEGER li;
    if(QueryPerformanceFrequency(&li) == 0)
        printf("QueryPerformanceFrequency failed!\n");
    PCFreq = (double)li.QuadPart / 1000.0;
    QueryPerformanceCounter(&li);
    CounterStart = li.QuadPart;
#endif
#ifdef __linux__
    gettimeofday(&ts_start, NULL);
#endif
}

/* Returns the elapsed time since StartCounter() in milliseconds. */
double GetCounter()
{
#ifdef _WIN32
    LARGE_INTEGER li;
    QueryPerformanceCounter(&li);
    return (double)(li.QuadPart - CounterStart) / PCFreq;
#endif

#ifdef __linux__
    gettimeofday(&ts_end, NULL);
    return (ts_end.tv_sec - ts_start.tv_sec
            + 1e-6 * (ts_end.tv_usec - ts_start.tv_usec)) * 1000.0;
#endif
}

When I measure the time to write the data, I find that:

1 - in Linux, the write time is linearly proportional to the data size.
2 - in Windows, the write time grows quadratically with the data size.

I thought fwrite() writes the data to the file row by row, which would explain the linear relationship in Linux, but Windows seems to behave differently. Can anyone think of an explanation for this?
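For reference, this is roughly how such a measurement could be taken, as a minimal sketch using the StartCounter()/GetCounter() helpers above; the file name "timing_test.dat" and the power-of-two size steps are placeholders, not the exact values from my test.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void   StartCounter(void);   /* helpers defined above */
double GetCounter(void);

int main(void)
{
    /* Write buffers of increasing size and print the fwrite() time for each. */
    for(unsigned long mb = 1; mb <= 64; mb *= 2){
        size_t bytes = (size_t)mb * 1024 * 1024;
        char *buf = malloc(bytes);
        if(buf == NULL) return 1;
        memset(buf, 0, bytes);

        FILE *fp = fopen("timing_test.dat", "wb");
        if(fp == NULL){ free(buf); return 1; }

        StartCounter();
        if(fwrite(buf, 1, bytes, fp) != bytes)
            fprintf(stderr, "short write at %lu MB\n", mb);
        double ms = GetCounter();

        fclose(fp);              /* the final flush is outside the timed region, as in the snippet above */
        free(buf);

        printf("%lu MB written in %.2f ms\n", mb, ms);
    }
    return 0;
}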

Any help is greatly appreciated.

Different compilers and OS's use different default sizes for the stdio buffer that handles this - and indeed, differ on whether a buffer is used at all.

Linux, with its Unix heritage, uses a larger default buffer, I would hazard a guess. Windows, historically aimed at PCs with more modest resources, typically uses a smaller one.

Both the size of the buffer and whether one is used at all can be set with setvbuf() - and that makes a HUGE difference in performance. A program I had that took 14 seconds (on Windows 7) immediately improved to just under 2 seconds after I enlarged the buffer with setvbuf().

(The task was finding and writing out 50,000 prime numbers to stdout, using fwrite().) printf() is quite different, and I have no experience with how it is affected by a change in buffer size.
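To illustrate the kind of change I mean, here is a minimal sketch of calling setvbuf() on a stream before any writing happens; the 1 MB buffer size and the file name "output.dat" are placeholders, not the values from my prime-number program.

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("output.dat", "wb");
    if(fp == NULL) return 1;

    /* Give the stream a 1 MB fully-buffered buffer (placeholder size)
     * instead of the implementation's default.  setvbuf() must be called
     * before the first read or write on the stream. */
    static char big_buf[1024 * 1024];
    if(setvbuf(fp, big_buf, _IOFBF, sizeof big_buf) != 0)
        fprintf(stderr, "setvbuf failed, falling back to the default buffer\n");

    const char line[] = "some data\n";
    for(int i = 0; i < 100000; ++i)
        fwrite(line, 1, sizeof line - 1, fp);

    fclose(fp);   /* flushes whatever is still sitting in the buffer */
    return 0;
}

For stdout, the same call works on the standard stream - setvbuf(stdout, ...) before the first write - which is what matters when the output goes to the console rather than a file.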
