I want to new an array, such as
double *dou = new double[600*600*900];
but in the case of "WIN32", the VS compiler apparently can't support newing an array that exceeds 2 GB.
If I want to new the array above, how can I do it?

All 11 Replies

Primarily it depends on your operating system. On 32-bit Windows there are "only" 2 GB of address space per process, so your max array size would be a little less than 2 GB. Also, Visual C++ 2005 is a 32-bit system. You may also consider that a double needs 8 bytes, so 2x10^9 / 8 gives at most about 2.5x10^8 double matrix elements -- while 600*600*900 is already 3.24x10^8 elements, roughly 2.6 GB.
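To double-check the arithmetic, here is a minimal sketch (the dimensions are simply the ones from the original post):

#include <cstdio>
#include <cstdint>

int main() {
    const std::uint64_t nx = 600, ny = 600, nz = 900;
    const std::uint64_t elements = nx * ny * nz;              // 324,000,000 doubles
    const std::uint64_t bytes = elements * sizeof(double);    // ~2.59e9 bytes, about 2.4 GiB
    const std::uint64_t limit = 2ull * 1024 * 1024 * 1024;    // 2 GiB per-process limit on 32-bit Windows

    std::printf("elements: %llu, bytes: %llu, fits in 2 GiB: %s\n",
                (unsigned long long)elements, (unsigned long long)bytes,
                bytes < limit ? "yes" : "no");
    return 0;
}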

If you need more, you should use a 64-bit system on 64-bit hardware, for example 64-bit Linux. Sometimes one can also simulate large arrays with files.

-- tesu

Working with such a big array is going to be very slow and troublesome. Even if you make it work on your computer, will it work on another?

I would suggest you think about the algorithm you have which uses this array and see whether it is possible to split the array into smaller arrays and reuse the memory. Try to make your algorithm parallel and the memory usage more local. In other words, rearrange the algorithm so that you can allocate a smaller array, work with that subarray to get the sub-output, save the output, load a new subarray, and repeat. You could use the hard disk to save and load the subarrays to and from a file -- that's what the OS will probably do with swap memory anyway, and it will be very slow, so it might be better if you do this swapping with the hard disk yourself (see the sketch below).
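As a rough sketch of that idea, assuming the volume is stored as raw doubles in a file (the file name, slice size and process_slice function here are made up for illustration, not taken from the original post):

#include <fstream>
#include <vector>
#include <cstddef>

// Placeholder for the real per-slice work (reconstruction step, filter, etc.).
void process_slice(std::vector<double>& slice) {
    for (double& v : slice) v *= 2.0;
}

int main() {
    const std::size_t nx = 600, ny = 600, nz = 900;
    const std::size_t slice_elems = nx * ny;                  // one z-slice: 600*600 doubles (~2.9 MB)

    std::fstream data("volume.raw", std::ios::in | std::ios::out | std::ios::binary);
    if (!data) return 1;

    std::vector<double> slice(slice_elems);                   // one reusable buffer instead of a 2.4 GiB array
    for (std::size_t z = 0; z < nz; ++z) {
        const std::streamoff off = static_cast<std::streamoff>(z) * slice_elems * sizeof(double);
        data.seekg(off);
        data.read(reinterpret_cast<char*>(slice.data()), slice_elems * sizeof(double));

        process_slice(slice);                                  // work on one sub-array at a time

        data.seekp(off);
        data.write(reinterpret_cast<const char*>(slice.data()), slice_elems * sizeof(double));
    }
    return 0;
}

This only works if the algorithm can be arranged so that each slice (or block) can be processed more or less independently, which is exactly the rearrangement suggested above.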

tesu, thanks a lot. According to your advice, I can conclude that Visual C++ 2005 can't support an array exceeding 2 GB, is that right?
So I should program my own compiler if I want to deal with super-high-resolution images or big volume data.

>>I would suggest you think about the algorithm you have which uses this array and see whether it is possible to split the array into smaller arrays and reuse the memory.

Thanks all the same, but I just want to know how to do it.

The compiler is not the problem -- the problem is that no 32-bit operating system will support such a huge array. Even a 64-bit OS and a 64-bit compiler will have problems with it.

You will just have to redesign your program so that it doesn't need that array. For example, you could put the data in a disk file and work from that, as Mike has already suggested.

If you keep insisting on doing the whole big-array thing, that's great -- be stubborn and your project will fail.

>>You will just have to redesign your program so that it doesn't need that array.

Thanks for your reply, but I disagree with you. If I just wanted to try out a certain program, obviously I would simplify the process. However, the problem is that I need such large volume data to reconstruct an object, and the super-high resolution is necessary! OR my question is meaningless.

>>OR my question is meaningless.
It doesn't matter whether you disagree with us or not. It can't be done the way you want to do it. So further discussion on that is meaningless.

>>It can't be done the way you want to do it.
Are you sure? Can you state your reasons? If the origin of the problem is the VS platform (WIN32), I could say I have found the limit of Microsoft's product in engineering computing. That is the meaning of my question!

Memory in a computer is always organized in a pyramidal way. That is, the processor can only operate on a few variables at a time (in registers, a tiny amount of memory essentially zero clock cycles away from the processor). A program can only run a small part at a time (from the cache, a few MB, roughly tens of clock cycles away) on a small amount of working memory (the stack). The program as a whole can only be loaded into memory that is too far from the processor to execute from directly, but which is much bigger (a few GB of RAM, hundreds of clock cycles away). And all the data and programs together are too big to fit even there and need to live on the hard disk (millions of clock cycles away).

This stratification of memory is essential for performance, and most of it is handled automatically. The OS and the hardware take care of moving chunks of code and data from the disk into RAM and from RAM into the cache. Then your compiled program takes care of putting a few variables at a time into registers for the processor to perform each operation.

Our point is that if you allocate more memory than fits in RAM and try to work with it as a whole, you only get the illusion of working with it as a whole, when actually most of that memory will wind up on the hard disk (as swap memory). The OS of ANY computer will split your program up for you, all the way down to simple operations on one or two variables at a time. The problem is that the OS does this in a generic way that is safe and works for all applications without crashing or permanently damaging the computer; it does not have your performance in mind. It is your burden to efficiently split up your algorithm and memory usage if you need to work with big amounts of memory. Otherwise you leave the OS to do it in a generic and almost certainly sub-optimal, very poorly performing way.
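For instance, here is a very minimal sketch of doing that swapping yourself, keeping only one z-slice of the volume in RAM at a time (the class name, file layout and caching policy are just assumptions for illustration):

#include <fstream>
#include <vector>
#include <cstddef>

// Treats an nx*ny*nz volume of doubles stored in a raw file as a 3D array,
// but keeps only one z-slice in memory -- a crude, hand-rolled "swap".
class DiskVolume {
public:
    DiskVolume(const char* path, std::size_t nx, std::size_t ny, std::size_t nz)
        : file_(path, std::ios::in | std::ios::binary),
          nx_(nx), ny_(ny), nz_(nz), cached_z_(nz), slice_(nx * ny) {}

    double at(std::size_t x, std::size_t y, std::size_t z) {
        if (z != cached_z_) load_slice(z);                     // swap the needed slice in
        return slice_[y * nx_ + x];
    }

private:
    void load_slice(std::size_t z) {
        const std::streamoff off = static_cast<std::streamoff>(z) * nx_ * ny_ * sizeof(double);
        file_.seekg(off);
        file_.read(reinterpret_cast<char*>(slice_.data()), nx_ * ny_ * sizeof(double));
        cached_z_ = z;
    }

    std::ifstream file_;
    std::size_t nx_, ny_, nz_, cached_z_;
    std::vector<double> slice_;                                // ~2.9 MB in RAM at any time
};

Sweeping through the volume slice by slice is then cheap, while jumping randomly between slices will thrash the disk -- which is exactly why the access pattern of the algorithm matters so much.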

>>I could say I have found the limit of Microsoft's product in engineering computing.
NO, you have found the limit of COMPUTERS for engineering computing; Unix, Solaris, QNX, Linux, etc. would show very little difference on that matter. If you are not happy with the architecture of PCs, move to another computing platform. The options are, off the top of my head, FPGAs or supercomputers. In either case, the principle is based on hyper-parallelism, which means you will still need to make your algorithm local instead of global, parallel instead of serial. Look up "Parallel Computing" and "Memory Locality"; these are two fundamental issues in scientific computing, because you always wind up crunching a huge amount of data, whether it is high-resolution images or climate data points from the entire globe. I have rarely seen a scientific article on a particular algorithm that doesn't include some sort of analysis of how to make the algorithm more parallel and more memory-local, because it is part of what you NEED to do when you do engineering / scientific computing.

>> I could say I have found the limit of Microsoft's product in engineering computing.

Perhaps see Memory Limits for Windows Releases.
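For reference, a small sketch of how one might query those limits at run time on Windows, using the standard GlobalMemoryStatusEx call (the exact numbers depend on the Windows release and on whether the process is 32- or 64-bit):

#include <windows.h>
#include <cstdio>

int main() {
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);                          // must be set before the call

    if (GlobalMemoryStatusEx(&status)) {
        std::printf("Total physical RAM:          %llu MB\n",
                    (unsigned long long)(status.ullTotalPhys / (1024 * 1024)));
        std::printf("Total virtual address space: %llu MB\n",
                    (unsigned long long)(status.ullTotalVirtual / (1024 * 1024)));
        std::printf("Available virtual space:     %llu MB\n",
                    (unsigned long long)(status.ullAvailVirtual / (1024 * 1024)));
        // A 32-bit process typically reports about 2048 MB of total virtual
        // address space here, which is the limit discussed in this thread.
    }
    return 0;
}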

Thank you very much. Your reply is just the answer I wanted. So I can say that the VS platform targeting WIN32 is bounded by the design limits of the OS itself.
I need this conclusion because I have to make a decision in developing our special system.
