Can anyone please help me with converting from image to world coordinates? Once I get the world coordinates, how do I make use of the Z coordinate to show the 3D? Thanks in advance :)

I am using Visual C++; this is for 3-D reconstruction from multiple views. I have the image coordinates of the same point from two cameras, and their calibrations are known, so I should be able to get the world coordinates. Having got those, how do I use the Z coordinate in my output image, which should look 3-D?
An image has to be described on a 2D plane, so the Z coordinate does not come into play here. If you just want stills of your 3D simulation, maybe you should try writing out the pixel information of the sub-window (or the simulation-containing window) that holds the 3D simulation. This would give you a 2D image, something like a screen capture of your simulation. I am thinking along the lines of what the CImg image library provides, which lets you do this quite easily.
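To make the "write out the pixel information" idea concrete, here is a minimal sketch that dumps an RGB pixel buffer (for example, one you filled from your rendering window) to a binary PPM file. This is not the CImg API, just plain file I/O; the buffer layout (row-major, 3 bytes per pixel) is an assumption you would match to however you grab the pixels.

```cpp
#include <cstdio>
#include <cstddef>

// Write an RGB8 pixel buffer to a binary PPM file.
// 'pixels' is assumed to hold width*height*3 bytes, row-major, top row first.
bool save_ppm(const char* path, const unsigned char* pixels, int width, int height)
{
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fprintf(f, "P6\n%d %d\n255\n", width, height);   // PPM header
    std::fwrite(pixels, 1, static_cast<std::size_t>(width) * height * 3, f);
    std::fclose(f);
    return true;
}
```

After filling a buffer from your simulation window, a call like `save_ppm("capture.ppm", buf, w, h)` gives you the 2D "screen capture" image.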

I find the terminology used to describe 3D coding tends to be confusing, just like any new language when I start using it. To me, a single point has no 3D representation visually. However, multiple points making up an image can be displayed so as to simulate 3D on a 2D surface. That is where the Z component comes into play, assuming you don't want to visualize "hidden" points, surfaces, etc. If this is what you want to do, then I strongly suggest you pick up a textbook or look at some of the 3D tutorials you can find on the net, as it's not something likely to be explained adequately in a post to a bulletin board. At least I've never seen a brief description of it that didn't require all the background information needed to understand it.
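The "hidden points" idea above is usually handled with a depth buffer: for every pixel you keep only the point with the smallest Z (nearest the camera). A minimal sketch, with names of my own choosing:

```cpp
#include <limits>
#include <vector>

// Minimal depth-buffer sketch: per pixel, keep only the point nearest the camera.
struct DepthBuffer {
    int w, h;
    std::vector<float> depth;          // smallest Z seen so far at each pixel
    std::vector<unsigned char> color;  // color of that nearest point

    DepthBuffer(int w_, int h_)
        : w(w_), h(h_),
          depth(static_cast<std::size_t>(w_) * h_,
                std::numeric_limits<float>::infinity()),
          color(static_cast<std::size_t>(w_) * h_, 0) {}

    // Plot a point at pixel (x, y) with depth z; nearer points overwrite
    // farther ones, farther points are rejected (they are "hidden").
    void plot(int x, int y, float z, unsigned char c) {
        if (x < 0 || x >= w || y < 0 || y >= h) return;
        std::size_t i = static_cast<std::size_t>(y) * w + x;
        if (z < depth[i]) { depth[i] = z; color[i] = c; }
    }
};
```

For example, plotting depths 5.0, then 2.0, then 9.0 at the same pixel leaves the point plotted at depth 2.0 visible; the others are hidden behind it.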

Thanks amritha and lerner...
I have come as far as estimating the fundamental matrices for the cameras. I am lost as to which direction I should take next. Help, please?

> I have come as far as estimating fundamental matrices for the cameras.

You mentioned in an earlier post that you know the cameras' calibration. When you talk about a fundamental matrix under a known calibration, it is called an essential matrix.
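Concretely, if F is your fundamental matrix and K1, K2 are the intrinsic calibration matrices of the two cameras, the essential matrix is E = K2^T F K1. A small sketch with plain 3x3 arrays (the function names are my own):

```cpp
// C = A * B for 3x3 matrices.
void mul3(const double A[3][3], const double B[3][3], double C[3][3]) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            C[i][j] = 0.0;
            for (int k = 0; k < 3; ++k) C[i][j] += A[i][k] * B[k][j];
        }
}

// T = A^T
void transpose3(const double A[3][3], double T[3][3]) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) T[i][j] = A[j][i];
}

// E = K2^T * F * K1, where K1 and K2 are the intrinsic calibration matrices
// of the first and second camera respectively.
void essential_from_fundamental(const double F[3][3],
                                const double K1[3][3], const double K2[3][3],
                                double E[3][3]) {
    double K2t[3][3], tmp[3][3];
    transpose3(K2, K2t);
    mul3(K2t, F, tmp);
    mul3(tmp, K1, E);
}
```

With E in hand, you can decompose it into the relative rotation and translation between the two cameras and then triangulate your matched points.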

This
web page, which is part of the Gandalf (computer vision library) project, gives the algorithms/formulae you may need to proceed with your project.
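Once you have the two projection matrices, recovering the world point from a matched pair of image points is a triangulation problem. Here is a sketch of a simple inhomogeneous linear triangulation: each view contributes two linear equations in (X, Y, Z), and the 4x3 least-squares system is solved via the normal equations with Cramer's rule. This is one standard formulation, not the only one (and for production use you would prefer an SVD-based solve from a linear algebra library).

```cpp
#include <cmath>

// Determinant of a 3x3 matrix.
static double det3(const double M[3][3]) {
    return M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
         - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
         + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]);
}

// Triangulate one world point from two views.
// P1, P2 are 3x4 projection matrices; (u1,v1) and (u2,v2) are the matched
// pixel coordinates. From u = (P row0 . X) / (P row2 . X) and similarly for v,
// each view gives two equations linear in X = (X, Y, Z).
bool triangulate(const double P1[3][4], const double P2[3][4],
                 double u1, double v1, double u2, double v2, double X[3]) {
    double A[4][3], b[4];
    const double (*Ps[2])[4] = { P1, P2 };
    const double uv[2][2] = { {u1, v1}, {u2, v2} };
    for (int c = 0; c < 2; ++c) {
        const double (*P)[4] = Ps[c];
        for (int r = 0; r < 2; ++r) {              // r==0: u-equation, r==1: v-equation
            double s = uv[c][r];
            for (int j = 0; j < 3; ++j)
                A[2 * c + r][j] = s * P[2][j] - P[r][j];
            b[2 * c + r] = P[r][3] - s * P[2][3];
        }
    }
    // Normal equations: (A^T A) X = A^T b
    double N[3][3] = {}, d[3] = {};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 4; ++k)
                N[i][j] += A[k][i] * A[k][j];
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 4; ++k)
            d[i] += A[k][i] * b[k];
    double D = det3(N);
    if (std::fabs(D) < 1e-12) return false;        // degenerate configuration
    for (int col = 0; col < 3; ++col) {            // Cramer's rule
        double M[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                M[i][j] = (j == col) ? d[i] : N[i][j];
        X[col] = det3(M) / D;
    }
    return true;
}
```

Run this per matched point pair to build up the 3D point cloud, and then render it with a depth buffer to get the 3D-looking output you described.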