I'm writing a scientific application that has to store a large (1GB to 500GB+) amount of data on a hard drive, and then, once written, read it back sequentially to process it. The amount of data for a particular experiment is known in advance, exact to the byte.
At the moment, when I write this file to disk, it ends up extremely fragmented (500+ fragments), despite there being enough contiguous free space on the drive at the start to hold it in one piece. This is extremely detrimental to performance when analysing the data. I imagine this happens because my programme doesn't declare the file's size in advance; it just keeps appending, and the OS (or the filesystem — I don't actually know which layer handles this) decides where on the disk the data physically goes.
So my question is: can I allocate a contiguous region of the disk (assuming one exists, which it usually would) to write this file to, in order to speed up my processing? Surely I should be able to take advantage of the fact that I know the file's exact size in advance? I feel like this should be possible, but I don't really know where to look and haven't found anything helpful on the web so far.
Thanks in advance,
PS Assuming this is possible in some way: if there is no single contiguous region of disk to write to, is there a way to allocate the file with a minimum number of fragments, rather than just letting the OS (or whatever) use it as an opportunity to fill in all its gaps?
PPS Someone on the MSDN forum suggested using the Sysinternals contig utility to create a contiguous file. This sounds good, but I don't know how to reuse that file on each write cycle without truncating it and losing its allocation. Link to the MSDN thread where I have posted this: http://social.msdn.microsoft.com/Forums/en-US/vcgeneral/thread/7a5c14ef-b50e-4349-9633-26add123df40