Hello everybody,
I hope I can explain myself clearly.
I would like to design a preview for a program that prints some data on a special card. Since the user can choose where on the card the data is printed, I think it would be useful to show them a preview that accurately reflects what they will get at the end of the process.
I have already scanned the card and loaded it into my program as a texture on a quad, and I have switched everything to an orthographic 2D view (with the depth buffer disabled).
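In case the setup matters, my 2D view is configured roughly like this (a simplified sketch; I am assuming OpenGL here, and the function name is only illustrative):

    /* Simplified sketch of my 2D setup (assuming OpenGL). */
    #include <GL/gl.h>

    void setup_2d_view(int win_w, int win_h)
    {
        glViewport(0, 0, win_w, win_h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* One GL unit = one pixel, origin at the lower-left corner. */
        glOrtho(0.0, (double)win_w, 0.0, (double)win_h, -1.0, 1.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glDisable(GL_DEPTH_TEST); /* no depth testing for the 2D preview */
    }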
The problem is that a point (x, y) on the real card doesn't correspond to the point (x, y) on the card shown in the preview.
Here is some more information that might be useful:
The real card dimensions are: Width = 8.5 cm, Height = 5.4 cm.
The preview dimensions on screen are: Width = 323 * 1.7, Height = 209 * 1.7 (in pixels). I multiply by 1.7 to get a big, clear preview of the card; without that factor, the preview on the screen and the real card have the same dimensions.
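Given those numbers, the mapping I would expect to work is a pure scale per axis, something like this sketch (the constants and names are mine; it assumes the preview quad's lower-left corner sits at pixel (0, 0) and that positions on the real card are given in centimetres):

    #define CARD_W_CM  8.5            /* real card width  */
    #define CARD_H_CM  5.4            /* real card height */
    #define ZOOM       1.7
    #define PREVIEW_W  (323.0 * ZOOM) /* preview width in pixels  */
    #define PREVIEW_H  (209.0 * ZOOM) /* preview height in pixels */

    /* Convert a position on the real card into a position on the preview. */
    static void card_to_screen(double card_x, double card_y,
                               double *screen_x, double *screen_y)
    {
        *screen_x = card_x * (PREVIEW_W / CARD_W_CM);
        *screen_y = card_y * (PREVIEW_H / CARD_H_CM);
    }

But the values I actually need don't follow such a simple rule, as the samples below show.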
Here are some trial (x, y) values that I have obtained:
    real (10, 175)  -> screen (100, 200)
    real (220, 175) -> screen (290, 200)
    real (10, 120)  -> screen (100, 158)
    real (15, 100)  -> screen (120, 120)
    real (10, 50)   -> screen (100, 98)
    real (10, 5)    -> screen (100, 58)
    real (400, 5)   -> screen (400, 58)
As you can see, I can't work out how to transform the value the user enters (a position on the real card) into the corresponding position on the card in the preview.
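To make that concrete, here is a small test (the choice of sample pairs is mine) that fits screen = scale * real + offset per axis from two of the pairs above and checks it against a third:

    #include <stdio.h>

    /* Fit screen = scale * real + offset per axis from two sample pairs,
     * then test the fit against a third pair from the list above. */
    int main(void)
    {
        /* x axis fitted from real 10 -> screen 100 and real 220 -> screen 290 */
        double sx = (290.0 - 100.0) / (220.0 - 10.0);
        double ox = 100.0 - sx * 10.0;
        /* y axis fitted from real 175 -> screen 200 and real 5 -> screen 58 */
        double sy = (200.0 - 58.0) / (175.0 - 5.0);
        double oy = 58.0 - sy * 5.0;

        /* Test against real (15, 100), which should map to (120, 120). */
        printf("predicted (%.1f, %.1f), needed (120, 120)\n",
               sx * 15.0 + ox, sy * 100.0 + oy);
        return 0;
    }

This prints "predicted (104.5, 137.4), needed (120, 120)", so a single scale-plus-offset does not fit these measurements, and that is why I am stuck.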
Any ideas? How or where can I learn to represent real objects, with their real dimensions, on the screen?
Thank you for reading.
P.S.: Using C on Debian Linux