I am trying to make a database of images.
It's for the RoboCup soccer tournament. When the robot sees an image while playing, it can refer to its database of images and match what it sees against one stored there. Each database image should have properties attached that tell the robot where on the pitch it is standing.

I have no idea where to start with this project. Anyone have any ideas?



I take it that you've got 40 years to do this then.

Perhaps start by researching machine vision, feature extraction and scene analysis.

Even standing in one spot, looking in one direction, produces a hell of a lot of images once you take changes in crowd and lighting into consideration.

There will be image processing done on it first, so all the background information will be gone; what's left should just be the lines in the image. Basically this is a new approach, so I'm really just trying it out. No guarantee it will work. I just need to figure out the database part to keep the whole thing moving!
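To get the database part moving, one sketch (using Python's built-in sqlite3; the schema, column names, and the idea of reducing each processed image to a small signature like line/corner counts are all my assumptions, not anything from the thread):

```python
import sqlite3

# Hypothetical schema: each row stores a feature signature extracted from a
# reference image plus the pitch position the camera was at when it was taken.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pitch_views (
        id        INTEGER PRIMARY KEY,
        n_lines   INTEGER,   -- number of white lines detected
        n_corners INTEGER,   -- number of line intersections
        pitch_x   REAL,      -- position on the pitch (metres)
        pitch_y   REAL,
        heading   REAL       -- facing direction (degrees)
    )
""")

# A few made-up reference views for illustration.
reference = [
    (1, 2, 1,  0.0, 0.0,   0.0),  # centre circle area
    (2, 3, 2,  4.5, 0.0,  90.0),  # near a penalty box
    (3, 1, 0, -4.5, 2.0, 180.0),  # touchline
]
conn.executemany("INSERT INTO pitch_views VALUES (?, ?, ?, ?, ?, ?)", reference)

def best_match(n_lines, n_corners):
    """Return (x, y, heading) of the stored view whose signature is closest
    to the observed one (simple L1 distance on the two counts)."""
    return conn.execute("""
        SELECT pitch_x, pitch_y, heading
        FROM pitch_views
        ORDER BY ABS(n_lines - ?) + ABS(n_corners - ?)
        LIMIT 1
    """, (n_lines, n_corners)).fetchone()

print(best_match(3, 2))  # robot sees 3 lines and 2 corners -> (4.5, 0.0, 90.0)
```

The real signature would be whatever your line-extraction stage produces; the point is just that "database of images" probably means a database of extracted features, each labelled with a pitch position.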

Well, I've been using the Hough transform from OpenCV, but it's giving a fairly messy image, especially around the goals and the net. Someone told me there is sample code that takes two colours (white and green) and produces a new image containing only the white lines that were surrounded by green in the original. Does anyone know OpenCV well enough to point me at it?

Green is close enough to black to try Canny edge detection. Tweak the parameters a bit until you have a clean two-tone image.
Now use cvFindContours to get a set of contours.
Next, start fitting polygons to this result set. The corner angles and the number of polygons can give you an indication of where you are on the field.

This is how I did it anyway for the Dutch version, but our team ended 4th out of 10, so there are better ways to do it :)
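The corner-angle step above can be sketched like this (pure Python for illustration; in OpenCV you'd get each polygon's vertices from cvApproxPoly on a contour first):

```python
import math

def interior_angles(poly):
    """Angle at each vertex of a polygon given as [(x, y), ...], in degrees,
    computed from the two edge vectors meeting at that vertex."""
    n = len(poly)
    angles = []
    for i in range(n):
        px, py = poly[i - 1]           # previous vertex
        cx, cy = poly[i]               # current vertex
        nx, ny = poly[(i + 1) % n]     # next vertex
        v1 = (px - cx, py - cy)
        v2 = (nx - cx, ny - cy)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        angles.append(math.degrees(abs(math.atan2(cross, dot))))
    return angles

# A rectangle (e.g. a penalty box seen head-on) gives four ~90 degree corners;
# perspective distortion skews these, which is what hints at your viewpoint.
print(interior_angles([(0, 0), (4, 0), (4, 2), (0, 2)]))
```

Comparing the measured angles against what each field marking should look like from candidate positions is the localisation step; this only shows how to get the angles out of a fitted polygon.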
