Hi Everyone
I am currently writing a C++ program that uses a very large matrix. I have been told to store the matrix on disk, retrieve only the chunks I need at a given time, and keep swapping them in and out. The problem is I do not know how to do this in C++. Anyone with a suggestion, please help.

There are a few ways you can do that, obviously. First I'd examine the possibility of restricting the size you need.
Questions to ask yourself:
Do I really need a full matrix? Matrices can be sparse, diagonal, symmetric, ...
If so: try to use specialised containers from e.g. Boost (www.boost.org).
If you need matrices that are not sparse, then you could consider writing a container that stores sub-matrices (instead of one N x M matrix, store n*m matrices of size N/n x M/m). These chunks can easily be serialised in and out of memory (again, Boost could help you there with its serialisation library); see the sketch below.
It all really depends what you want to do with your matrices. If you want to do arithmetic on them, then you'll have to write your own operators and such.
Can be quite a project you've got there...
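
A minimal sketch of that chunked-storage idea, using plain fstream rather than Boost.Serialization. The block file naming, the one-file-per-block layout, and the double element type are just placeholder assumptions for illustration:

#include <fstream>
#include <string>
#include <vector>

// Sketch: an N x M matrix stored on disk as (N/n) x (M/m) blocks of size
// n x m, one file per block. Assumes N % n == 0 and M % m == 0 for simplicity.
struct BlockFile {
    static std::string name(int bi, int bj) {
        return "block_" + std::to_string(bi) + "_" + std::to_string(bj) + ".bin";
    }
    // Load block (bi, bj) of size n x m into a flat row-major buffer.
    static std::vector<double> load(int bi, int bj, int n, int m) {
        std::vector<double> buf(static_cast<std::size_t>(n) * m);
        std::ifstream in(name(bi, bj), std::ios::binary);
        in.read(reinterpret_cast<char*>(buf.data()),
                static_cast<std::streamsize>(buf.size() * sizeof(double)));
        return buf;
    }
    // Write a block back to its file.
    static void save(int bi, int bj, const std::vector<double>& buf) {
        std::ofstream out(name(bi, bj), std::ios::binary);
        out.write(reinterpret_cast<const char*>(buf.data()),
                  static_cast<std::streamsize>(buf.size() * sizeof(double)));
    }
};

An arithmetic routine built on top of this would then loop over block indices and load only the few blocks it needs at any moment.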

Thanks. I do need the whole matrix, so I will try your suggestion. By the way, is it possible to use mmap for this?

>>is it possible to use mmap for this.

Sure it's possible, but is it the best solution? mmap is a general, one-size-fits-all mechanism for paging large data structures. It is quite likely that your application can take advantage of the structure of your matrix and its access pattern to devise a better scheme for swapping between RAM and disk.
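
That said, if you do want to try mmap, here is a bare-bones POSIX sketch (Linux/Unix only, minimal error handling; the file name and dimensions are placeholders, and the file must already exist at the full size, e.g. pre-created with truncate):

#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const std::size_t rows = 10000, cols = 10000;          // placeholder sizes
    const std::size_t bytes = rows * cols * sizeof(double);

    int fd = open("matrix.bin", O_RDWR);                   // pre-sized file
    if (fd < 0) return 1;

    // Map the whole file; the OS pages pieces in and out on demand.
    void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;
    double* m = static_cast<double*>(p);

    m[5 * cols + 7] = 3.14;   // element (5,7), row-major; may fault a page in

    munmap(p, bytes);
    close(fd);
}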

You essentially need to look at your algorithms in terms of parallelism and see which parts of the matrix are used at a given time. For example, say your operation is a simple matrix multiplication A*B, where both A and B are very large. Then a "good" scheme might be to store A as row-major in a file and store B as column-major in another file. To do the multiplication you would do:

// Say A is in fileA (row-major), B is in fileB (column-major),
// and C = A*B goes to fileC (row-major).
for each row i of A {
  load_next_row(A, fileA);        // one contiguous read
  rewind(fileB);                  // B's columns are re-read for every row of A
  for each column j of B {
    load_next_column(B, fileB);   // one contiguous read
    C[j] = 0.0;
    for (int k = 0; k < N; ++k)
      C[j] += A[k] * B[k];
  }
  save_row(C, fileC);             // one contiguous write
}

The above is appropriate because you have the minimum number of loading and saving operations, and every load and save is a contiguous block of memory (because A and C are row-major and B is column-major). You could run the same algorithm with A and C column-major and B row-major, but the loading and saving would be much more expensive because you would have to transfer element by element instead of an entire row or column at once.
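
For concreteness, here is a compilable C++ version of that pseudocode. The raw-double file format, the square N x N shape, and the function name are just illustrative assumptions:

#include <fstream>
#include <vector>

// Out-of-core C = A * B for N x N matrices of raw doubles.
// fileA holds A row-major, fileB holds B column-major, fileC gets C row-major.
void multiply_on_disk(const char* fileA, const char* fileB,
                      const char* fileC, int N) {
    std::ifstream fa(fileA, std::ios::binary);
    std::ifstream fb(fileB, std::ios::binary);
    std::ofstream fc(fileC, std::ios::binary);
    std::vector<double> A(N), B(N), C(N);
    const std::streamsize row = static_cast<std::streamsize>(N) * sizeof(double);

    for (int i = 0; i < N; ++i) {
        fa.read(reinterpret_cast<char*>(A.data()), row);     // next row of A
        fb.clear();
        fb.seekg(0);                 // re-read B's columns for each row of A
        for (int j = 0; j < N; ++j) {
            fb.read(reinterpret_cast<char*>(B.data()), row); // column j of B, contiguous
            C[j] = 0.0;
            for (int k = 0; k < N; ++k)
                C[j] += A[k] * B[k];
        }
        fc.write(reinterpret_cast<const char*>(C.data()), row); // finished row of C
    }
}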

Of course, the above is a very simple example where it is easy to know what swapping strategy is best. However, I imagine your application is much more complex, but with proper analysis or restructuring of the algorithm you should be able to minimize the number of swaps needed and maximize the contiguity of the swapped memory blocks. You have to understand that there is really no one-size-fits-all solution to this, and that is why you will have trouble finding a good library for it (but you can try looking into parallel computing libraries, which sometimes try to optimize data structures).

Many algorithms can also be rewritten to work on sub-blocks of the matrix, because that is also faster on large-but-not-huge matrices even when they don't need to be swapped to disk. This is the memory locality of algorithms; some matrix method variants are specifically geared towards improving locality, as in the sketch below.
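
As an illustration of that locality idea applied to RAM vs. cache, here is a tiled in-memory multiplication sketch (the tile size BS is a placeholder you would tune so that three BS x BS tiles fit in cache; the caller is assumed to have zeroed C):

// Tiled C += A * B on N x N row-major matrices, improving cache locality.
const int BS = 64;   // placeholder tile size

void multiply_blocked(const double* A, const double* B, double* C, int N) {
    for (int ii = 0; ii < N; ii += BS)
        for (int kk = 0; kk < N; kk += BS)
            for (int jj = 0; jj < N; jj += BS)
                // Multiply one pair of BS x BS tiles (edges clipped to N).
                for (int i = ii; i < ii + BS && i < N; ++i)
                    for (int k = kk; k < kk + BS && k < N; ++k)
                        for (int j = jj; j < jj + BS && j < N; ++j)
                            C[i * N + j] += A[i * N + k] * B[k * N + j];
}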

Conclusion: your biggest problem is not how to exchange memory between disk and RAM (that is quite trivial), but how to write your algorithm to minimize the swapping you have to do. This is a central problem in parallel computing (and disk-swapping of large data can be seen as a form of parallel computing).

A popular way to handle a large matrix is to divide it into multiple chunks and work that way. Try to see if that's possible. Or post the actual problem here, and surely someone can help you figure out a proper solution.

Thanks, guys, for the suggestions; I will try all of them and hopefully finish on time. Do you think that using Berkeley DB would help with saving the matrix in chunks?

>>do you think that using Berkeley DB would help with saving the matrix in chunks?
No. That is a database library, which is completely beside the point. It could be used, but it would not be any better or simpler than just using iostreams or the Boost serialisation library.

Basically, what I am trying to do is implement sequence comparison algorithms: Needleman-Wunsch and Smith-Waterman in Cilk++. I have implemented the algorithms; the problem is that the similarity matrix gets too big, and for sequences longer than about 35000 I run out of memory.
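
For what it's worth, if you only need the alignment score (not the full traceback), both algorithms can be computed keeping only the previous row of the similarity matrix, which drops memory from O(n*m) to linear in one sequence length. A minimal Needleman-Wunsch scoring sketch along those lines (the match/mismatch/gap values are placeholders; recovering the alignment itself needs more work, e.g. Hirschberg's divide-and-conquer):

#include <algorithm>
#include <string>
#include <vector>

// Needleman-Wunsch global alignment *score* using two rows instead of the
// full (n+1) x (m+1) matrix. Placeholder scores: match +1, mismatch -1, gap -2.
int nw_score(const std::string& a, const std::string& b) {
    const int GAP = -2;
    std::vector<int> prev(b.size() + 1), cur(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j)
        prev[j] = static_cast<int>(j) * GAP;              // first row: all gaps
    for (std::size_t i = 1; i <= a.size(); ++i) {
        cur[0] = static_cast<int>(i) * GAP;               // first column: all gaps
        for (std::size_t j = 1; j <= b.size(); ++j) {
            int sub = (a[i - 1] == b[j - 1]) ? 1 : -1;
            cur[j] = std::max({prev[j - 1] + sub,         // diagonal: match/mismatch
                               prev[j] + GAP,             // gap in b
                               cur[j - 1] + GAP});        // gap in a
        }
        std::swap(prev, cur);                             // keep only the last row
    }
    return prev[b.size()];
}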
