I'm trying to write a red-eye reduction algorithm for a school assignment. The user is asked to click on 2 eyes in an image, then the algorithm takes over and corrects the redness. The algorithm itself works just dandy, but I'm having a hard time capturing the actual location (in pixels) on the bitmap that the user clicks. I can determine where the user clicks relative to the image box, but that doesn't map directly onto a pixel on the bitmap, since the image is scaled (zoom mode) to fit the image box. No problem, I thought: I can just calculate the scale factor by dividing the picture's width by the width of the pic box, then multiplying the x coordinate of the click by that factor. This would work perfectly, except there's a small border of padding on either side of the picture, and I can't figure out how to calculate its size! (There's a rough sketch of what I'm doing after the diagram below.)
That might be a bit confusing, so here's an ASCII example:
|----------------|   The middle (####) represents the actual picture, which,
|-----|####|-----|   due to zoom mode, preserves its aspect ratio
|-----|####|-----|   when resized by adding these 'borders' (the dashes) to the
|----------------|   sides of the image.
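And here's a rough sketch of the arithmetic I'm doing right now (Python just for illustration; all the names are placeholders, not my actual event handler), to show exactly where it breaks down:

```python
# Rough sketch of the click-to-pixel mapping I'm using at the moment.
# Names are placeholders; this only illustrates the arithmetic.

def click_to_pixel_x(click_x, box_width, image_width):
    """Map the x coordinate of a click on the pic box to an x coordinate
    on the bitmap, assuming the image spans the full width of the box."""
    scale = image_width / box_width   # e.g. a 1600 px image in an 800 px box -> 2.0
    return int(click_x * scale)

# The problem: when zoom mode letterboxes the image (the dashed borders in
# the diagram above), the click is offset by some unknown border width, so
# what I actually need is something like:
#
#   pixel_x = (click_x - border_width) * scale   # border_width is what I can't find
```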
So how can I figure out the width of these borders?! This is making my hair fall out. I mean, at this point it seems like it might be easier to write the algorithm to search for the eyes itself rather than take raw user input, which makes no sense!! I hope someone can help.