Hello everyone!

I'm making a game where players build walls and move their pieces, trying to be the first to reach their target square. Essentially, it's a checkerboard, and players can build walls on the segments between adjacent squares to impede their opponents' movement. The one catch is that you are never allowed to place a wall that would permanently prevent a player from reaching his target square (i.e. there must always be a legal path from each player's current location to his target square).

Now, I've played around with ANNs before, for class projects and such, but this is the first time I've tried applying one to a real-world situation, and I need some advice. The ANN's job is to check, every time a player attempts to place a wall, that the new wall doesn't violate the "legal path" rule. Previously, I used a genetic algorithm (the idea came from Mat Buckland's "AI Techniques for Game Programming", 2002): I encoded movement instructions into genomes and bred them to find solutions. This worked well enough until the mazes started to get really complicated, with lots of backtracking and winding around. Now I want to try an ANN, to see if it might provide better results. My questions relate more to the design of the network than to the construction of it.
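For context, the genome encoding I used was roughly like the sketch below (simplified, and the board.neighbor helper is just a hypothetical stand-in for my real movement code):

import random

MOVES = ['N', 'S', 'E', 'W']   # one gene per step
GENOME_LENGTH = 40             # fixed number of moves per genome

def random_genome():
    return [random.choice(MOVES) for _ in range(GENOME_LENGTH)]

def fitness(genome, board, start, target):
    """Walk the genome's moves from 'start'. Moves blocked by a wall or the
    board edge are skipped. Fitness grows as the final square nears the target."""
    pos = start
    for move in genome:
        nxt = board.neighbor(pos, move)   # hypothetical helper; returns None if blocked
        if nxt is not None:
            pos = nxt
        if pos == target:
            break
    dist = abs(pos[0] - target[0]) + abs(pos[1] - target[1])
    return 1.0 / (1.0 + dist)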

Primarily, I'm having trouble determining what the network inputs, outputs, and weights should represent. Should the inputs be the current state of the checkerboard (i.e. which walls have been placed and which haven't, plus the current location of the player in question), and should the weights represent the proposed movement instructions that will lead the player to his target? If so, the input layer would be very large. What would a hidden layer contribute to this setup? What would the output represent? That leads to the question of how many outputs I would need and what they would represent. If I assign each genome 40 movement instructions to try to find a solution, would I want 40 outputs instead, or 40 weights? And what would the inputs in either of these setups need to be?
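Just to illustrate why the input-layer size worries me (the board size n below is only an example, not my actual board), a straightforward one-hot encoding of the board state already gives a few hundred inputs:

n = 8                           # hypothetical board size
wall_slots = 2 * n * (n - 1)    # one input per segment between adjacent squares
position_inputs = n * n         # one-hot encoding of the player's current square
target_inputs = n * n           # one-hot encoding of the target square
print(wall_slots + position_inputs + target_inputs)   # 240 inputs for n = 8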

In addition to those questions, can you offer any advice as to what activation functions would be best to use? Is there a "right answer" when it comes to picking activation functions, or is it just a matter of trial-and-error?

As you can see, I have many questions, and more besides. I would just try things out, but each time I come up with a theory on how to design the network, I find some flaw that makes it seem unreasonable. Any help or insight you can offer would be greatly appreciated.

Thank you.


So, if I understand correctly, you want to train an ANN to determine whether there is a path between the current player position and the target. Frankly, that is not going to work. You would need a very deep ANN (deep meaning many hidden layers) to get this working, and such ANNs are exponentially difficult to train (which is why most ANNs have no more than one hidden layer).

This is an especially bad idea in light of the fact that there is a very simple algorithm that solves your problem quickly and efficiently. The AD* (Anytime Dynamic A*) algorithm would handle this very easily, and even plain vanilla A* would solve your problem much faster and more easily than any ANN or GA method.
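For the yes/no question "is there still a path?", something along these lines is all you need (a minimal sketch, assuming squares are (row, col) tuples and walls are stored in a 'blocked' set; adapt it to however you actually represent the board):

import heapq

def path_exists(start, target, size, blocked):
    """A* reachability check on a size x size board.
    'blocked' is a set of frozensets {square_a, square_b}, one per wall segment
    between two orthogonally adjacent squares. Squares are (row, col) tuples."""
    def h(p):                                          # Manhattan-distance heuristic
        return abs(p[0] - target[0]) + abs(p[1] - target[1])

    open_heap = [(h(start), 0, start)]                 # entries are (f = g + h, g, square)
    visited = {start}
    while open_heap:
        _, g, (r, c) = heapq.heappop(open_heap)
        if (r, c) == target:
            return True
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if (0 <= nr < size and 0 <= nc < size
                    and nxt not in visited
                    and frozenset(((r, c), nxt)) not in blocked):
                visited.add(nxt)
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return False

To enforce your wall rule, tentatively add the proposed wall to 'blocked', run path_exists once per player, and reject the wall if any player gets cut off.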

Thanks for the advice. I've never heard of that algorithm before. I found a research paper online that introduces the method; I'll read it and see if I can implement it. Do you know of any other sources that explain the algorithm?

AD-star is not the most well-known algorithm amongst lay people, but Likhachev's papers on the subject are quite easy to follow. I do recommend that you understand / use / implement A-star first, for which there is a huge amount of material out there explaining it in detail. The Wikipedia page is a pretty good starting point. There are also many implementations of A-star available. If you like well-crafted implementations written to a high standard, you can check out the A-star implementation in the Boost Graph Library. Similarly, if you want an AD-star implementation (exactly as described in Likhachev's work) in the same style and standard as the BGL A-star, you can check out my implementation of it here.

The A-star algorithm is also described well in the O'Reilly book AI for Game Developers by David M. Bourg and Glenn Seemann.

Another option is a rule-based "always turn right" check. You'd need to implement it yourself; it's not in any library that I'm aware of. It's based on an old trick from the days when mazes were physical (it only works if there are no sliding doors): put your right hand on the wall and walk. Any time there is an opening, you turn right and keep your right hand on the wall; when you reach a dead end, you keep your right hand on the wall and turn around.

If you get back to your starting position before you get out, the way out is blocked. You may end up covering every single path, but if a way out exists, you will find it.
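A rough sketch of that walk, reusing the hypothetical blocked-wall representation from the A* sketch above (the step bound is just a cheap way to detect that the deterministic walk has started looping):

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # N, E, S, W in clockwise order

def right_hand_walk(start, target, size, blocked):
    """Right-hand wall follower. 'blocked' holds frozensets of adjacent squares
    separated by a wall. The walk is deterministic over (square, facing) states,
    so if the target isn't reached within 4 * size * size steps the walker is
    stuck in a loop and never will reach it."""
    pos, facing = start, 0                  # start facing north
    for _ in range(4 * size * size):
        if pos == target:
            return True
        for turn in (1, 0, 3, 2):           # try right, straight, left, then back
            d = (facing + turn) % 4
            nxt = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and frozenset((pos, nxt)) not in blocked):
                pos, facing = nxt, d
                break
        else:
            return False                    # completely boxed in
    return pos == target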

@BDazzler: The algorithm you described is basically the Bug Algorithm, so called because it is about as intelligent as a bug, but it works. A good and simple suggestion for a simple check of whether a path exists. Some simple robots even use this algorithm, like the Roomba (robotic vacuum cleaner), because it is so simple, requires only very crude sensors, and is actually guaranteed to work (albeit slowly).

Thanks for the help! I'm going to do some more formal study of AI (i.e. textbook study), and then I'll try to implement some of those algorithms.
