Hello,

I am having trouble with the memory size of an array I am using. At the moment I have an array of pointers to objects. The objects are read in from a file and sorted, then a new file is written out after some filtering is done on the array.

The code I have set up works great up to about a 500 MB file. Beyond that, though, I get a memory access error when I try to create the array.

Lidar * DataSet = new Lidar[header->number_of_point_records];

That is how I am creating my array.

Is there any way to make a huge array or will I have to look into using hard disk space as a memory alternative?

Cameron

You may be trying to allocate too large a chunk of memory at once.

Have you considered using an STL std::deque, or a std::list (or other linked list)?

Is it absolutely necessary to load the entire list at once?

I presume you are doing some post-processing on the lidar data (that this isn't some sort of real-time system)?

> Lidar * DataSet = new Lidar[header->number_of_point_records];
What is the value of sizeof(Lidar)?
What is the value of header->number_of_point_records?
If you multiply them together, how close to 2GB do you get?

> or will I have to look into using hard disk space as a memory alternative?
Or think of ways of dealing with the file in chunks of a few hundred MB at a time.
This may involve writing out temporary files, but that depends on what you're doing.

> You may be trying to allocate too large a chunk of memory at once.
>
> Have you considered using an STL std::deque, or a std::list (or other linked list)?
>
> Is it absolutely necessary to load the entire list at once?
>
> I presume you are doing some post-processing on the lidar data (that this isn't some sort of real-time system)?

I have tried it with a std::vector, and yes, it is a post-processing system that finds ground data in lidar files.

> Lidar * DataSet = new Lidar[header->number_of_point_records];
> What is the value of sizeof(Lidar)?
> What is the value of header->number_of_point_records?
> If you multiply them together, how close to 2GB do you get?

> or will I have to look into using hard disk space as a memory alternative?
> Or think of ways of dealing with the file in chunks of a few hundred MB at a time.
> This may involve writing out temporary files, but that depends on what you're doing.

I am not sure of the value of sizeof(Lidar) at the moment; I will look it up as soon as I can get to the computer.

header->number_of_point_records depends on the file.

Beyond that, I am sure it goes above 2 GB on some of my larger files. Any advice on how to use temporary files and still be able to sort the whole array?

Cameron

Maybe I am being dense, but when I look at merge sort I still see myself needing to pull all the data into one large array eventually. That seems like it would not work, but again, maybe I am being dense.

Cameron

I think it would be helpful to know a little more about the Lidar data record. Presumably it is some large number of records, each with three values (elevation, longitude, latitude)?

What do you mean by "sorted"? Sorted by elevation?

In any case, you need a divide-and-conquer algorithm for sorting.

An external merge sort creates N temporary files, each of which is sorted.
The actual merge then only needs to read ONE record at a time from each file to determine the final sort ordering.

> I think it would be helpful to know a little more about the Lidar data record. Presumably it is some large number and two coordinates (elevation, longitude, latitude)?
>
> What do you mean by "sorted"? Sorted by elevation?
>
> In any case, you need a divide-and-conquer algorithm for sorting.

Actually, I will be sorting them by x and then y coordinate, for filtering based on elevation.

The actual lidar data itself contains X, Y, Z, intensity, class, flightline, return, and timestamp data.

> Merge sorting creates N temporary files, each of which is sorted.
> The actual merge then only needs to read ONE record at a time from each file to determine the final sort ordering.

I must have looked at the wrong merge sorting algorithm; I will look deeper into this.

Cameron
