Hi to all game programmers!

I want to implement the following:

Short description:
A* search over a partitioned space, combined with adaptive partitioning in 3D.

Detailed description:
A. Assume that in the 3D space I have an object (myMovingObject) that I want to move from an initial state (StartState) to another (goalState) while avoiding collisions with other objects.

B. Consider that each State object includes the positional info of its 3D cube, its ParentState and its costs (the last two are needed for the A* path algorithm).

C. I do not partition the whole space between the StartState and the goalState at once, from the beginning. Instead I create the neighbour states of the StartState at a common resolution and calculate the A* costs for all of them. States found to be in collision with other objects are excluded, while all others are saved to a priority list; after sorting, this list yields the new CurrentState, whose neighbours I create in the next step, and so on until I reach the GoalState. -> THIS WAS THE NON-ADAPTIVE APPROACH ( a minimal sketch follows below )
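A minimal sketch of this non-adaptive expansion step, assuming a State class like the one in step B; collides() and neighboursOf() are hypothetical stubs, not the project's real interface:

#include <set>
#include <vector>

struct State {
    float x, y, z;        // positional info of the 3D cube
    const State* parent;  // ParentState, needed to retrieve the path
    float g;              // cost from StartState so far
    float h;              // heuristic estimate to goalState
    float f() const { return g + h; }
};

// order the open set by total estimated cost f = g + h
struct ByCost {
    bool operator()(const State* a, const State* b) const {
        return a->f() < b->f();
    }
};

bool collides(const State&) { return false; }                  // stub: real collision test goes here
std::vector<State*> neighboursOf(const State&) { return {}; }  // stub: neighbours at the common resolution

State* expandOnce(std::multiset<State*, ByCost>& open) {
    State* current = *open.begin();   // cheapest state becomes the new CurrentState
    open.erase(open.begin());
    for (State* n : neighboursOf(*current)) {
        if (collides(*n)) continue;   // non-adaptive: colliding states are simply excluded
        n->parent = current;
        open.insert(n);
    }
    return current;                   // caller keeps calling this until the goalState comes out
}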

D. I want to improve on the previous approach by not excluding colliding states immediately: when such a state is found, I divide it into 8 childStates, and each child that collides again is divided into 8 leafStates. I need an effective data structure for this adaptive partitioning, holding the child and leaf states, and thus chose an octree ( node sketch below ). -> THIS IS THE ADAPTIVE APPROACH
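For step D, a minimal octree-node sketch; the member names and the depth convention are assumptions for illustration, not the project's actual interface:

#include <array>
#include <memory>

struct OctreeNode {
    float cx, cy, cz;   // centre of this state's cube
    float half;         // half the edge length
    int depth;          // 0 = initial resolution, 1 = childStates, 2 = leafStates
    std::array<std::unique_ptr<OctreeNode>, 8> child;  // empty until subdivided

    // split a colliding state into its 8 children
    void subdivide() {
        const float q = half / 2.0f;   // children have half the edge length
        for (int i = 0; i < 8; ++i) {
            // bit 0 selects the x side, bit 1 the y side, bit 2 the z side
            const float ox = (i & 1) ? q : -q;
            const float oy = (i & 2) ? q : -q;
            const float oz = (i & 4) ? q : -q;
            child[i].reset(new OctreeNode{cx + ox, cy + oy, cz + oz, q, depth + 1, {}});
        }
    }
};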

E. Besides the octree I need three more data structures:
- One serving as a priority queue, holding states sorted by their calculated costs, which I implemented as a multiset.
- A second for saving each visited state, keyed by the 3D positional info of its ParentState, so that the path can be retrieved state by state once the goal is found.
- A third for saving all states that have already been divided into children or leaves, keyed by the 3D positional info of their RootState, so that the octree is never asked to divide them again.

(where:
a RootState is a state that has been divided into 8 childStates, and
a ParentState is a state whose neighbour states were created and examined)
For the last two cases I chose to use an STL hash_map; a sketch of both lookup tables follows below.
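A sketch of those two lookup tables, keyed by a packed integer position ( posKey is a hypothetical packing function; one concrete version appears later in the thread ). std::unordered_map / std::unordered_set are the standardized descendants of the pre-standard hash_map:

#include <cstdint>
#include <unordered_map>
#include <unordered_set>

struct State;  // the State class from step B

// hypothetical: pack bounded 3D positional info into one integer key
std::uint64_t posKey(float x, float y, float z);

// visited states, keyed by the packed position of their ParentState,
// so the path can be walked backwards once the goal is found
std::unordered_map<std::uint64_t, State*> visited;

// packed positions of RootStates that have already been divided,
// so the octree is never asked to divide the same state twice
std::unordered_set<std::uint64_t> dividedRoots;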

My questions to you:

1. Do you think an octree is the most suitable data structure for my case? If yes, how would you implement it to achieve minimum execution time?
2. Do you think there is a way to implement the A* algorithm recursively? ( besides the partitioning, which will certainly be recursive )
3. Do you think an STL hash_map is the best data structure, in terms of big-O performance, for finding and retrieving items? ( see step E )
4. Do you think there is a faster way to save, search and sort than the multiset data structure? ( see step E )
5. Is there anything in my approach that you think I should do another way?

Thanks everybody in advance for your time!


Hey, we aren't doing your homework for you..

Hey, we aren't doing your homework for you..

If you don't know how to be polite, at least try not to be rude :D

I did not ask anybody to do my homework.
And besides, dude, it's not homework.

I asked for opinions.
If you had read my thread you would have seen:

"What do you think..."

:@

It's just that it sounds very much like an assignment. I was just clarifying the rules.

It's just that it sounds very much like an assignment. I was just clarifying the rules.

I did not register on the forum today... so I guess I know the rules :D :D
Do you have anything to say that could help with the things I asked???
If you don't, please let others who might have something to post, and let us both spare our time.

Some questions to you:

- are you using the A* for AI character path planning? ( usual case ), is the 3D space unbounded? if so, are you ok with the fact that 3D A* search in infinite space will nearly always find a path, but that the path might well just go over ( or under ) all of the obstacles?
- are the obstacles dynamic ( moving )? if not, it's nearly always better to calculate an open space graph ( i.e. find waypoints between the obstacles, connect together waypoints that are visible from each other ) and then run A* on that graph than it is to run A* on dense uniform grids.. if the obstacles are moving, you need to incorporate the movement in the A* search, else you'll hit a timestep problem ( e.g. a path that existed at the beginning of the search stops existing after the search has finished ). the timestep problem can be dealt with in simple A*... but.. it will obviously become more difficult to deal with as you add layers to the algorithm..
- A* becomes evil when it can't find a path quickly, i.e. if no path exists, A* will expand every reachable node. you have to either limit the input node-set, or limit the number of iterations of A*. limiting the number of iterations of A* means a search might terminate with fail even when a valid ( but long, winding, complicated ) path exists. You can tweak the heuristic parameters of A* to make it less likely to backtrack ( i.e. lie and dramatically overestimate the path to the goal, even overestimate non-linearly ); this helps the algo to actually travel down the longer paths before expanding unevaluated paths.. but regardless, without a cap on iterations, in infinite space, the algo will keep expanding forever, and in finite space, the search would terminate, but it would explore the entire space first, in a case like:

  XXX
S XGX
  XXX

( S: start, G: goal, X: obstacle, assume that [3d] there are Xs above and below the goal, completely enclosing it. ).
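The heuristic-inflation trick described above is commonly known as weighted A*; a minimal sketch of the cost function, where epsilon is an assumed inflation factor rather than anything from the posts above:

float fCost(float g, float h, float epsilon /* >= 1 */) {
    // epsilon = 1 gives ordinary ( admissible ) A*; epsilon > 1 expands
    // fewer nodes but may return a path up to epsilon times longer
    return g + epsilon * h;
}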

If you do want to do what you say, regardless of those points:

- an oct-tree is as good a structure as any for partitioning, limit the depth.
- yes you could implement A* recursively, but it doesn't make it any more efficient: you're just moving a linked-list into the stack - and the stack is usually more likely to overflow than dynamic memory is to run out
- no, the best structure for retrieval is an indexed array, but that only works in finite space, and eventually overtakes a hash in terms of space requirements. if you're using the hash as a closed set.. go for a std::set and compute a key yourself ( i.e. trim and concatenate x,y, and z; and put the result into the std::set ), the insertion overhead of a hash can end up more than for an std::set when the hash gets saturated, at least a std::set has somewhat deterministic access/insertion time.
- if you want the data to be consistently sorted, use an std::priority_queue< T, std::deque< T > >: it has reasonable insert time, doesn't reallocate, and is optimized for maintaining sorted data ( usage sketch below ).
- see above.
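A usage sketch for that suggestion; State and its cost fields are assumed names carried over from earlier in the thread:

#include <deque>
#include <queue>

struct State { float g, h; float f() const { return g + h; } };

// std::priority_queue pops the *largest* element by default, so the
// comparator is inverted: states with worse ( higher ) f compare greater
struct WorseCost {
    bool operator()(const State& a, const State& b) const {
        return a.f() > b.f();
    }
};

std::priority_queue<State, std::deque<State>, WorseCost> open;

// open.push(State{0.0f, 10.0f});  // insert: O(log n)
// State best = open.top();        // cheapest f: O(1)
// open.pop();                     // remove: O(log n)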


Hello MattEvans, and THANKS A LOT FOR YOUR POST!!!

- are you using the A* for AI character path planning?

No, I am using it for robot motion planning.

- is the 3D space unbounded?

No, the 3D space is bounded.

- if so, are you ok with the fact that 3D A* search in infinite space will nearly always find a path, but that the path might well just go over ( or under ) all of the obstacles?
- are the obstacles dynamic ( moving )?

No, only static obstacles are involved.

- if not, it's nearly always better to calculate an open space graph ( i.e. find waypoints between the obstacles, connect together waypoints that are visible from each other ) and then run A* on that graph than it is to run A* on dense uniform grids.

DO YOU HAVE ANY MORE INFO ON THIS? COULD YOU EXPLAIN THE OPEN SPARSE GRAPH TO ME?
TUTORIALS, SITES, PUBLICATIONS ETC.

- if the obstacles are moving, you need to incorporate the movement in the A* search, else you'll hit a timestep problem ( e.g. a path that existed at the beginning of the search stops existing after the search has finished ). the timestep problem can be dealt with in simple A*... but.. it will obviously become more difficult to deal with as you add layers to the algorithm..

Since, as mentioned above, there are no moving obstacles, I don't have to consider this.

A* becomes evil when it can't find a path quickly, i.e. if no path exists, A* will expand every reachable node. you have to either limit the input node-set, or limit the number of iterations of A*. limiting the number of iterations of A* means a search might terminate with fail even when a valid ( but long, winding, complicated ) path exists. You can tweak the heuristic parameters of A* to make it less likely to backtrack ( i.e. lie and dramatically overestimate the path to the goal, even overestimate non-linearly ); this helps the algo to actually travel down the longer paths before expanding unevaluated paths.. but regardless, without a cap on iterations, in infinite space, the algo will keep expanding forever, and in finite space, the search would terminate, but it would explore the entire space first, in a case like:
  XXX
S XGX
  XXX


I use a time constraint to avoid this.

- an oct-tree is as good a structure as any for partitioning, limit the depth.

I limit partitioning to 2 more resolutions beyond the initial one (3 in total).

- yes you could implement A* recursively, but it doesn't make it any more efficient: you're just moving a linked-list into the stack - and the stack is usually more likely to overflow than dynamic memory is to run out

I didn't understand this one… Could you maybe explain it to me?

- no, the best structure for retrieval is an indexed array, but that only works in finite space, and eventually overtakes a hash in terms of space requirements. if you're using the hash as a closed set.. go for a std::set and compute a key yourself ( i.e. trim and concatenate x,y, and z; and put the result into the std::set ), the insertion overhead of a hash can end up more than for an std::set when the hash gets saturated, at least a std::set has somewhat deterministic access/insertion time.

I didn't understand this one… Could you maybe explain it to me? :D :D

- if you want the data to be consistently sorted, use an std::priority_queue< T, std::deque< T > >: it has reasonable insert time, doesn't reallocate, and is optimized for maintaining sorted data.

I was told that:

map, set, multimap, and multiset are implemented using a red-black tree and thus have the following asymptotic run times:
Insertion: O(log n)
Lookup: O(log n)
Deletion: O(log n)

while

hash_map, hash_set, hash_multimap, and hash_multiset are implemented using hash tables having the following runtimes:
Insertion: O(1) expected, O(n) worst case
Lookup: O(1) expected, O(n) worst case
Deletion: O(1) expected, O(n) worst case
And if I use a proper hash function, I will almost never see the worst case behavior.

So I guess you don't agree with this... but I didn't understand why :D :D

Thanks in advance

Hey, sorry for the delay, been a bit busy.

No, I am using it for robot motion planning.

Cool.

DO YOU HAVE ANY MORE INFO ON THIS? COULD YOU EXPLAIN THE OPEN SPARSE GRAPH TO ME?
TUTORIALS, SITES, PUBLICATIONS ETC.

Well, it's open space graph, but the sparseness of the environment is a factor in choosing the method used to generate the graph... There isn't a one-size-fits-all method, because you get a better result using a different method for different environments.

E.g. for an environment that's mostly open space, with a few obstacles that are roughly the same size, you can create the Voronoi tessellation ( http://mathworld.wolfram.com/VoronoiDiagram.html ) of the obstacles, treating each obstacle as a point. If you look at the picture on that site, treat the lines as the arcs and the junctions as the nodes of a space graph: following any path through that graph keeps the agent at a maximal distance between obstacles at all times, and running standard A* on that graph gives the shortest path that maintains such a maximal distance ( although this isn't necessarily the shortest path through the space itself, since the two constraints conflict somewhat ). However, with obstacles that are different sizes, or very large, you have to do a lot of 'repair work' on the graph before it's usable...

If the environment is mostly obstacles, or narrow paths ( like a maze ), you'd get a better graph by generating the polytope of the open space, approximating the medial axis of that polytope http://www.ics.uci.edu/~eppstein/gina/medial.html ( which would basically be the Voronoi tessellation of the actual shape of the obstacles, rather than just treating them as points ), and creating a graph from that.

Both of these methods sacrifice some detail in the environment in order to improve computational efficiency.

Since, as mentioned above, there are no moving obstacles, I don't have to consider this.

That makes things easier.. =)

I use a time constraint to avoid this.
I limit partitioning to 2 more resolutions beyond the initial one (3 in total).

Wise

I didn't understand this one… Could you maybe explain it to me? ( recursive A-star )

Well, when I wrote that, I thought you were asking if A-star can be implemented as a recursive function ( i.e. getting rid of the open set and spawning off a new function call for every newly expanded node ), and I guess it can, since Dijkstra's algorithm can be in a uniform grid ( with a few caveats )...

Now that I read it again, I think you mean: can you use a recursive ( hierarchical ) representation in the grid, i.e. with a dynamically generated(?) oct-tree as each cell.. and I envision you doing something like this.. ( most of it is 'vanilla' A-star; the changed parts are marked with comments ).

create open set, create closed set
add start node to the open set
while open set isn't empty {
  x = smallest f ( = g + h ) in open set
  if x is the goal, stop
  remove x from open set, add x to closed set
  // changed part starts here
  if x contains obstacles and subdivision limit not reached for node x {
    subdivide x and splice it into the current environment *
  }
  // changed part ends here
  loop through neighbours of x that aren't in the closed set {
    calculate f for each neighbour, add each neighbour to open set
  }
}

* here, you modify the data used to determine the neighbours of cells so that the subdivided cells show up as neighbours of non-subdivided adjacent cells.

Is that similar to what you're trying to do? It seems like it should work, although the splicing bit would need some careful thought, and it might be better to subdivide ALL opened nodes to avoid strange changes in the granularity of the path.

I haven't ever tested that, or read anything specifically saying such a method is either a) still an optimal graph search, or b) that it provides a massive performance boost... but! that's not to say that either of those things aren't possible.
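One possible reading of that splicing step ( the * above ), sketched in C++; every name here is an illustrative assumption, a sketch rather than a known-good implementation:

#include <algorithm>
#include <vector>

struct Cell {
    std::vector<Cell*> neighbours;
    std::vector<Cell*> children;  // 8 entries once subdivided, else empty
    bool touches(const Cell&) const { return true; }  // placeholder adjacency test
};

void spliceChildren(Cell& x) {
    for (Cell* n : x.neighbours) {
        // remove x from n's neighbour list...
        auto& list = n->neighbours;
        list.erase(std::remove(list.begin(), list.end(), &x), list.end());
        // ...and link in whichever children actually touch n
        for (Cell* c : x.children) {
            if (c->touches(*n)) {
                list.push_back(c);
                c->neighbours.push_back(n);
            }
        }
    }
    // the children are also neighbours of each other
    for (Cell* a : x.children)
        for (Cell* b : x.children)
            if (a != b) a->neighbours.push_back(b);
}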

Alternatively, you can implement the whole A-star algorithm at multiple resolutions: instead of creating obstacles that can't be walked into, give each cell a heuristic relative to its occupation density ( or use a rough waypoint graph as suggested above ), then run A-star again on the low-resolution nodes that were intersected by a path, and repeat until done.. that algorithm is non-admissible in terms of finding the global shortest path, and could fail to find a valid path in some cases.

I didn't understand this one… Could you maybe explain it to me? ( closed set / open set )

Hashed sets compute ( for each value ) an n-bit key using a one-way, chaotic function, with the guarantee that hash(x) == hash(y) if x == y, but hash(x) == hash(y) does not imply that x == y. A perfect hash is a set of value->key mappings in which no collisions occur, i.e. every unique value gets mapped to a unique hash key ( so there, hash(x) == hash(y) WOULD imply x == y ).

Importantly, you CAN generate a perfect hash of 3D co-ordinates in a bounded space at a given resolution: simply 'fit' the values into a certain range, then shift and binary-or the result, i.e. create an integer that has the ( bounded ) x, y, and z co-ordinates stuck next to each other. ( Use a 64-bit int if you like; you get decent resolution then: int( 64/3 ) = 21 bits per axis, and 2^21 = 2097152 unique values per axis. )

Using a perfect hash key in a standard tree container removes any need to perform hash-key collision resolution ( which a hashed set performs under the hood ).
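A sketch of that packing; worldMin and cellSize are assumed bounds/resolution parameters, not values from the thread:

#include <cstdint>

std::uint64_t posKey(float x, float y, float z,
                     float worldMin, float cellSize) {
    // quantize each bounded coordinate to a non-negative 21-bit integer
    const std::uint64_t mask = (1u << 21) - 1;  // 2^21 = 2097152 values per axis
    const std::uint64_t ix = static_cast<std::uint64_t>((x - worldMin) / cellSize) & mask;
    const std::uint64_t iy = static_cast<std::uint64_t>((y - worldMin) / cellSize) & mask;
    const std::uint64_t iz = static_cast<std::uint64_t>((z - worldMin) / cellSize) & mask;
    // shift and binary-or the three co-ordinates next to each other
    return ix | (iy << 21) | (iz << 42);
}

Within the bounds, equal cells always produce equal keys and distinct cells distinct keys, so a std::set of such keys needs no hash-collision resolution.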

I was told that:
map, set, multimap, and multiset are implemented using a red-black tree and thus have the following asymptotic run times:
Insertion: O(log n)
Lookup: O(log n)
Deletion: O(log n)

while

hash_map, hash_set, hash_multimap, and hash_multiset are implemented using hash tables having the following runtimes:
Insertion: O(1) expected, O(n) worst case
Lookup: O(1) expected, O(n) worst case
Deletion: O(1) expected, O(n) worst case
And if I use a proper hash function, I will almost never see the worst case behavior.

So I guess that u dont agree with this... but I didnt understand the why

No, I completely agree with that ( at least in theory ), but the O(1) / O(n) / O(log n) hide multiplicative and additive factors; big-O notation is always normalized, so you'd never see O(2000*n), just O(n). But by all means benchmark and find out which is best for your situation: a hash is one of those structures that can be great in circumstance A but not so great in circumstance B, with any benefit depending partly on the hashing function used.

Also, the time expectations say nothing about memory - a tree grows in a roughly linear fashion as data is added; adding one entry allocates a single linked-list node. Adding something to a hash implemented with a single array ( with one slot for each potential hash key ) could result in a massive array with only one occupied element, so typically a hash has some memory optimization: either reduce the resolution of the key-set, or use a structure other than an array ( e.g. a linked list ) to implement the entries array... the second optimization no longer provides O(1) access, and the first results in an occasional need to re-hash, because the REAL worst case in a well-memory-optimized hash involves ditching all of the data in the hash, reallocating all of it, and recomputing all of the hash keys..
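To illustrate that last point with std::unordered_map ( the standardized descendant of hash_map ): reserving the bucket array up front sidesteps the occasional full re-hash described above.

#include <cstdint>
#include <unordered_map>

void fillWithoutRehash() {
    std::unordered_map<std::uint64_t, int> m;
    m.reserve(1u << 20);             // size the bucket array once, up front...
    for (std::uint64_t k = 0; k < (1u << 20); ++k)
        m[k] = 0;                    // ...so these insertions never trigger a re-hash
}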

Anyway.. good luck =)
