Hey, everyone. My goal is to load a 3x4000x4000 multidimensional array, modify it, save it, and quickly access parts of it (as if accessing a 100x100 array). I'm not really sure how this works in the JVM, so I'm at a loss.

Would making a single int[] array of 48,000,000 be more effective (though less human friendly)?
I would like to be able to iterate through a section of about 100x100 for now.

What I have now is this:

private int[][] get100x100array() {
    int[][] array = new int[100][100];
    for (int i = 0; i < 100; i++) {
        int[] row = new int[100];
        // Copy 100 ints out of one row of layer 0 of the 3x4000x4000 array.
        System.arraycopy(my3x4000x4000array[0][i + offset], 3000 /* theoretical starting position */, row, 0, 100);
        array[i] = row;
    }
    return array;
}

But this turns out to be too slow to iterate repeatedly. It probably needs optimizations I can't see. I have no idea what to do.

P.S. The array that will be most iterated over will be my3x4000x4000array[0]. If I should single that out to be by itself and that will help the speed, tell me. The 3x4000x4000 is rounded out for the possibility of putting it into a single array.

Last Post by Armanious

What's that arraycopy intended to do?
You may get some small gain from less index calculation if you use a 1D array, but if that means you have to calculate the indexes yourself, it could end up worse. In the end, only a benchmark will tell you for sure.
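To make the trade-off above concrete, here is a minimal sketch of the same cell addressed through a 3D array and through a flat 1D array with manual index arithmetic. The names (`grid`, `flat`, `LAYERS`, `SIZE`) are illustrative, and the dimensions are scaled down from 3x4000x4000 so the demo allocates little memory; the index formula is the same either way.

```java
public class IndexDemo {
    // Scaled down from the real 3 x 4000 x 4000 for the demo.
    static final int LAYERS = 3, SIZE = 400;

    public static void main(String[] args) {
        int[][][] grid = new int[LAYERS][SIZE][SIZE];
        int[] flat = new int[LAYERS * SIZE * SIZE];

        int layer = 0, row = 123, col = 300;
        grid[layer][row][col] = 42;
        // Manual index calculation for the flat layout: row-major order.
        flat[(layer * SIZE + row) * SIZE + col] = 42;

        // Both layouts address the same logical cell.
        System.out.println(grid[layer][row][col] == flat[(layer * SIZE + row) * SIZE + col]);
    }
}
```

Note that at the real size, 48,000,000 ints is about 192 MB either way, so the JVM heap limit matters more than the layout.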


The arraycopy call is meant to pull a 100x100 array out of the 3x4000x4000 array.


When I repeatedly iterate over the 100x100 copy, it is fast enough to work. When I iterate through the 3x4000x4000 array directly, using start and end positions, the method runs too slow. It needs to be fast enough to iterate through, do a bunch of calculations on each element, and then draw some of them with Graphics. Too slow means the screen freezes; fast enough means all the animations continue.


OK - extracting the requirement from you is like trying to get next year's product plan from Apple!
I cannot see any reason why iterating through 10,000 array elements would take longer just because they are embedded in a bigger array. Extracting them into a new array and then processing them can only be slower than processing them in situ.
Now to the new info - it's an animation application and it freezes with big calculations. One key question: what thread are you doing the calculations on?


The calculations, animation drawing, and rendering are all done by a single thread. Would it be a good idea to keep the calculations in a separate, ongoing thread? The calculations need to be made millions of times anyway, so I'm assuming I'd need to keep the results thread-safe?
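One common pattern for the "separate, ongoing thread" idea above is to do the heavy math on a background thread and publish a completed result atomically, so the rendering thread only ever reads finished snapshots. This is a minimal sketch, not the poster's actual code; the names are illustrative, and in a real Swing app the read would happen inside `paintComponent()` on the event dispatch thread.

```java
import java.util.concurrent.atomic.AtomicReference;

public class BackgroundCalc {
    // The renderer only ever reads the latest completed snapshot.
    static final AtomicReference<int[]> latest = new AtomicReference<>(new int[100 * 100]);

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            int[] next = new int[100 * 100];
            for (int i = 0; i < next.length; i++) {
                next[i] = i; // stand-in for the real per-cell calculation
            }
            latest.set(next); // publish atomically once the whole pass is done
        });
        worker.start();
        worker.join(); // in a real app the worker keeps running; joined here for the demo

        int[] snapshot = latest.get(); // what the rendering thread would read each frame
        System.out.println(snapshot[9999]);
    }
}
```

Because the worker fills a fresh array and only publishes it when complete, the renderer never sees a half-updated frame, and no locking is needed beyond the `AtomicReference` swap.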


Yup thanks already read it. I won't be able to test it until later. Thanks for the help so far though.
