I'm trying to write a red-eye reduction algorithm for a school assignment. The user is asked to click on two eyes in an image, then the algorithm takes over and corrects the redness. The algorithm works just dandy, but I'm having a hard time capturing the actual location (in pixels) that the user clicks on the image. I can determine where the user clicks relative to the picture box, but in terms of the actual pixel location on the bitmap itself this is wrong, since the image is scaled (zoom mode) to the size of the picture box. No problem, I thought: I can just calculate the scale by dividing the picture's width by the width of the pic box, then multiplying the x coordinate of the click by that number. This would work perfectly, except there's a small border of padding on either side of the picture whose size I can't seem to calculate!

That might be a bit confusing, so here's an ascii example
|----------------|     The middle represents the actual picture which,
|-----|####|-----|     due to zoom mode, preserves its aspect ratio
|-----|####|-----|     when resized by adding these 'borders' (the dashes)
|----------------|     to the sides of the image.
So how can I figure out the width of these?! This is making my hair fall out. I mean, I think it might just be easier to write the algorithm to search for the eyes rather than take raw user input! That doesn't make any sense!! I hope someone can help me.
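For what it's worth, here's a rough sketch of the mapping in Python (not tied to any GUI toolkit; the function name and parameters are made up for illustration). Zoom mode scales by the smaller of the two width/height ratios, so the borders appear on whichever axis has slack:

```python
def click_to_pixel(click_x, click_y, box_w, box_h, img_w, img_h):
    """Convert picture-box click coordinates to bitmap coordinates,
    assuming zoom mode: uniform scale, image centered with borders."""
    scale = min(box_w / img_w, box_h / img_h)  # preserves aspect ratio
    shown_w = img_w * scale                    # displayed image width
    shown_h = img_h * scale                    # displayed image height
    border_x = (box_w - shown_w) / 2           # left/right padding
    border_y = (box_h - shown_h) / 2           # top/bottom padding
    px = (click_x - border_x) / scale
    py = (click_y - border_y) / scale
    if 0 <= px < img_w and 0 <= py < img_h:
        return int(px), int(py)
    return None  # the click landed on a border, not the image
```

For example, an 800x300 image shown in a 400x300 box gets scale 0.5 and a 75-pixel border above and below, so a click at (100, 150) maps back to bitmap pixel (200, 150).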


All 9 Replies

Hey, I didn't understand you well enough,

but you could use a selection circle that lets the user clip the eye boundary, then detect the red pixels and change them to your own color...
There are also cameras that make the eyes blue, so you could help the user pick out the bad eye color and change it to one they choose...
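The idea in that reply could look something like this in Python (a hypothetical sketch with no imaging library; pixels are just a dict of coordinates to RGB tuples, and the "red eye" threshold is an assumption, not anyone's actual algorithm):

```python
def reduce_red_eye(pixels, cx, cy, radius):
    """pixels: dict mapping (x, y) -> (r, g, b). Modified in place.
    Inside the selection circle at (cx, cy), tone down strongly red pixels."""
    for (x, y), (r, g, b) in pixels.items():
        if (x - cx) ** 2 + (y - cy) ** 2 > radius ** 2:
            continue                       # outside the selection circle
        if r > 150 and r > 2 * max(g, b):  # crude "this pixel is red-eye" test
            pixels[(x, y)] = (max(g, b), g, b)  # pull red down toward the others
```

The real correction would run over a bitmap, but the shape is the same: limit the search to the user's circle, test each pixel for excess red, and rewrite the red channel.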

The algorithm is perfectly fine; feeding it the location (a mouse click) is my problem. I can't capture the location (x, y) on the actual picture itself. Here's an example: I had an image at 1280x900 resolution. That certainly wouldn't fit into a small window form, so it's resized (while preserving aspect ratio) to fit. When I clicked in the picture box, the coordinates captured were (200, 300). The actual location on the picture was pixel (550, 710). My problem is finding a reliable transfer function that will convert the (200, 300) into its corresponding (550, 710) on the picture. That's where the border problem arose, as described in my last post.

Does your picture get moved around within the picture box? Otherwise, I'm not sure why you'd struggle to work out where you clicked on your picture.

It moves when the window is resized to zoom. I think I have it figured out (after about a page of math). Still not perfect, though. The problem is that I CAN find where I clicked within the picture box, but the picture's resolution can differ from what it appears to be (since it's resized when the box moves, causing borders to appear). I'm gonna post the whole damn project so you guys understand my dilemma.

Before I even look at the code (it's late and I'd probably talk rot if I started trying to understand anyone else's code right now - bad day, too much on my mind... bleh):

If you had the picture as a class which allowed moving (as zoomed in you see less of the picture, etc.), you would have:

current rect viewport - area of pic to display
current scale

So, when the pic zooms in, you use the scale to select a relevantly sized box centred over either the click point on the picture or the centre of the box.

Move will tell you how much to shift, in conjunction with the scale.

Zoom out does much the same, only of course you see a bigger area of the original image.

The math still seems relatively simple to me... should I be raiding the teapot (don't drink coffee) again?

Yeah, the scaling factor was fine; it was those damn borders I couldn't calculate. But it turns out they are calculable, since they're what keeps the aspect ratio the same.

Here's the math (if interested):

V -> Visible Image Width (in pixels)
A -> Actual Image Width (in pixels)
W -> Pic box total width
S -> scaling factor
x -> Border width

So if..
V = A * S - 2x
and
x = (W - V) / 2
then its safe to say
V = A * S - ((W - V) / 2)

Now to isolate the 'visible' variable
3 * V = 2 * (A * S) - W
and finally
V = (2 * A * S - W) / 3

I dunno, it's certainly not calculus, but I wouldn't say it's obvious (i.e. simple).

So if..

V = A * S - 2x
and
x = (W - V) / 2
then its safe to say
V = A * S - ((W - V) / 2)

Strange...
When I do this I get V = A * S - (W-V)

I do too :P

Lol, sorry, that's right. I'm pretty sure I implemented it the correct way; I just typed that off the top of my head (I didn't have my notes =( ). The point is that the border width is eliminated as a variable, since it's determined by all the other variables.
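A quick numeric check of that point (made-up numbers, and using zoom mode's scale rather than the thread's disputed algebra): the border width x is not an independent unknown - it falls straight out of the box size, the image size, and the scale.

```python
img_w, img_h = 1280, 900   # actual bitmap size (A)
box_w, box_h = 800, 400    # picture box size (W)

# Zoom mode preserves aspect ratio, so the scale is the smaller ratio.
S = min(box_w / img_w, box_h / img_h)  # 400/900 here: height is the tight axis
V = img_w * S                          # visible image width, ~568.9 px
x = (box_w - V) / 2                    # side border, x = (W - V) / 2, ~115.6 px
```

With these numbers the height constrains the fit, so the borders land on the sides, exactly like the dashes in the ASCII picture at the top of the thread.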
