
Hello, the game development forum seemed like the best place to ask, since it's probably the only place where people use 3D modeling programs... if there's a better place for this to go, please tell me.

I've been thinking through a way to write a 3D modeling program. I'm slightly new to OpenGL, but I want to try anyway (stupid first large project, I know, destined for failure), and since OpenGL doesn't store any of the data itself, I need a way to store point, edge, and poly information.

It seems the way to go is an array of points for each poly and an array of polys for the model... And since something re-sizable, like a vector, would be too slow for real-time display, I figure I need to hard-code a limit on the number of polys per model and vertices per poly.

Does anyone have a suggestion for either limit, or another way to store the data?

No hurry at all, just trying to get some info from people who know more on the subject than I do.

Thanks in advance.

3 Contributors · 5 Replies · 8 Views · 8-Year Discussion Span · Last Post by sciwizeh

Well, if you choose to use an array, you need to determine how much memory a point/edge/poly occupies, and also how much memory you want your program to consume.

I may be wrong, but using an array may cause issues, especially when dynamically removing polygons/points at runtime (unless you plan to have no support for that). A vector is also a bad idea due to the overhead, but I think a linked list would serve you best. Linked lists are very fast when the list is only being traversed (i.e. no index in mind), incredibly efficient at adding/removing items, and relatively easy to work with. They also eliminate your problem of having to pre-allocate memory for an array regardless of whether it is being used. A linked list requires slightly more memory (a pointer or two per node), but that should be minor compared to the data the nodes contain.


Don't optimize too early. Use a vector if it's more convenient to do so. You'll probably find that you can have more points than any maximum you'd consider before the cost of occasionally resizing the underlying array ever becomes a problem.

If you do find it's easier to use an array (which you might, because you can send arrays to the GL directly with only a few API calls), then break the data up into chunks (separate arrays, either of N bytes or broken at more natural divisions, e.g. vertices of faces with the same material, the same number of points, etc.), and keep a vector of pointers to these (dynamically allocated) chunks.

I wouldn't use a linked list - the overhead per node is probably at least the size of the data per node (in the case of simple vertices), and you have no (fast) random access.

What I would probably do is one of two things: use a vector and an (outside of the C++ standard) trick to treat it as an array for sending to GL, or write a wrapper around arrays that basically does what std::vector does, which is NOT to redimension the physical array every time an item is added, but to double its size when necessary, which tends to result in quite efficient behaviour in the long run.


In that case I'll try the vector. What do you mean by an "outside of the C++ standard" trick?

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> myvector;
    for (int i = 0; i != 10; ++i) {
        myvector.push_back(i);
    }
    // A vector's elements are stored contiguously, so the address of
    // the first element can be treated as a plain array.
    int* myarray = &myvector[0];
    for (int i = 0; i != 10; ++i) {
        std::cerr << myarray[i];
    }
    return 0;
}

This should always work, because a vector's storage is internally just a contiguous array. But if they had wanted you to use it as an array, there would be a member for accessing the array directly (like c_str in std::string).


Oh, I would never have thought to take the address of the first element; that could be very useful, thank you.
