
Hello everybody,

I hope I can explain myself clearly.

I would like to design a preview for a program that prints some data on a special card. Because the user can choose where the data is printed on the card, I think it would be useful to give them a preview that accurately shows what they will get at the end of the process.

I have already scanned the card and loaded it into my program as a texture on a quad, and I have switched everything to an orthographic 2D view (with the depth buffer disabled).
The problem is that a point (x,y) on the real card doesn't correspond to the point (x,y) on the rendered card.

I will give some more information here that might be useful:
The real card dimensions are: Width = 8.5 cm, Height = 5.4 cm.
The preview dimensions are: Width = 323*1.7, Height = 209*1.7. I multiply by 1.7 to get a big, clear preview of the card; without that factor, the preview on screen and the real card have the same dimensions.

Here are some trial values (x,y) that I have obtained:
When the real card value is: (10,175) on the screen I need (100,200)
When the real card value is: (220,175) on the screen I need (290,200)
When the real card value is: (10,120) on the screen I need (100,158)
When the real card value is: (15,100) on the screen I need (120,120)
When the real card value is: (10,50) on the screen I need (100,98)
When the real card value is: (10,5) on the screen I need (100,58)
When the real card value is: (400,5) on the screen I need (400,58)

As you can see, it seems impossible to predict where I should place the value the user entered for the real card, or how to transform it, on the rendered card in the preview.

Any ideas? How/where can I learn to represent real objects with real dimensions on the screen?

Thank you for reading.

P.S.: Using C on Debian Linux


I mean, either the problem is impossible (i.e. the screen coords are not actually a function of the real coords), in which case, well, it is impossible, or it is possible; there is really no middle ground!

To see how many pixels (in x) are needed to represent one unit on the real card, just do this:
you know board x = 10 maps to screen 100, and board x = 220 maps to screen 290. That means 210 units on the board span 190 screen pixels, a ratio of 190/210 ≈ 0.90 pixels per unit. You can do the same thing for y. Then you just need to know where you are starting (board (0,0), perhaps?), and you just multiply the board coords by those ratios, add the offset, and you should get the screen coords. Of course the results will not be integers, so you will have to round.

Did I read the problem correctly?

Dave


Thank you daviddoria for your answer.
I gave up on this about 3 weeks ago, though.
If there isn't a special way of using OpenGL for this kind of case, I don't think it will be possible to reach a solution. :(

Actually, the way OpenGL represents the coords seems quite awkward to me. It looks almost like a logarithmic mapping. :S
Let's look at the first trial value (x,y):

"When the real card value is: (10,175) on the screen I need (100,200)"

With your idea it would be possible to solve the problem, but look at the last trial value I wrote:

"When the real card value is: (400,5) on the screen I need (400,58)"

In this case the real card's x value corresponds exactly with the represented value on the OpenGL card. :0

The mapping, as far as I can tell, doesn't look homogeneous or predictable.

Sorry to have troubled you.


Of course it is possible; there is necessarily always a projection from object coords to screen coords, and there is always a projection from screen coords to a subset of object coords on a plane parallel with the view plane (notice the difference). If you want a 1:1 mapping between screen and object coords, use an orthographic projection (which you are), and make sure that the args to glOrtho are the ACTUAL width and height of your viewport. Also remember that OpenGL puts (0,0) in screen coords at the bottom left rather than the top left, and (0,0) in object coords at the center of the screen. So, something like:

glMatrixMode ( GL_PROJECTION );
glLoadIdentity ( );
gluOrtho2D ( 0, w, 0, h );   /* w, h = actual viewport size in pixels */
glScalef ( 1, -1, 1 );       /* flip y... */
glTranslatef ( 0, -h, 0 );   /* ...so object (0,0) sits at the top-left corner */
glMatrixMode ( GL_MODELVIEW );
glLoadIdentity ( );
/* draw stuff here */

Now, (x,y) on the screen will be (x,y,0) in object coords.

If you use a perspective view transform, you can use gluProject/gluUnProject to move between object-space and screen-space coords. You probably don't need to convert between the screen and object coordinate systems at all, though (although maybe you do); usually the only time you'd need that is if you want to be able to 'pick' (with a mouse or similar) some point on the card.

If you want correct relative positions, just work on the assumption that object units are e.g. millimeters; the relative positions/sizes of things will then always be correct, and you can pre-scale the modelview matrix to get either an acceptable (in perspective) or exact (in orthographic) size on screen. You need to know the screen's DPI to get the same size on screen as in the real world.

But anyway, if you set up an orthographic projection correctly, there is always a linear relationship between screen x,y and x,y on a view-parallel plane in object space.


Thank you MattEvans for your answer and sorry for my late reply.

Before creating this thread I had already found the code you posted for setting up an orthographic projection, but it didn't work for me.
If it is possible to get what I'm looking for with it, I must be making a mistake somewhere in my code; but I'm almost a newbie in programming and OpenGL, so I don't think I will be able to solve this.

Thank you again MattEvans.
