Hi all,
I am computing the difference of Gaussians (DoG) of two images using the following fragment shader, but I am not getting the expected result. I have googled a lot but cannot figure out my mistake.

uniform sampler2D blurImage;
uniform sampler2D moreBlurImage;

void main()
{
    float blurGray = texture2D(blurImage, gl_TexCoord[0].xy).r;
    float moreBlurGray = texture2D(moreBlurImage, TexCoord).r;
    float diffOfGauss = abs(blurGray - moreBlurGray);
    gl_FragColor = vec4(vec3(diffOfGauss), 1.0);
}

I am assuming here that the image is grayscale, so r = g = b.

Oops, a typo: the second line inside main should be:

float moreBlurGray = texture2D(moreBlurImage, gl_TexCoord[0].xy).r;

Is there no one who can help me out? Please, someone.

do you get a compile error (use glGetShaderInfoLog)?

what code are you using to:

- generate, compile, and link the shader
- initialize the shader for each render pass
- render each pass

what do you see when rendering? anything? nothing? not what you expected?

have you tried a simpler example (e.g. checking that a simple single-texture fragment shader works)?

what's in the vertex shader? (if anything)

Well, I am using OpenGL Shader Builder on a Mac, so I just write the vertex & fragment shaders and the rest is handled by the builder. The code compiles and links successfully, but what I get is not the difference but the "blurImage" that I am using in the shader. The builder provides a vertex shader template, which is:

void main()
{
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}

And the expected result is the difference of Gaussians, i.e. the difference between two blurred versions of an image.

You need to initialize each sampler uniform to the correct texture unit; usually you do that from host (C/C++/etc.) code. I'm not sure how Shader Builder does that, if at all.

So, to test something for me, try to render just the first texture, and then just the second texture. If reading from a sampler always seems to come from texture unit 0, then that's the problem, and your code would output solid black (because x - x == 0).

So, without changing anything else, try these two fragment shaders (just so that I can get a better idea of what's happening):

uniform sampler2D blurImage;
uniform sampler2D moreBlurImage;

void main()
{
    float blurGray = texture2D(blurImage, gl_TexCoord[0].xy).r;
    gl_FragColor = vec4(vec3(blurGray), 1.0);
}

uniform sampler2D blurImage;
uniform sampler2D moreBlurImage;

void main()
{
    float moreBlurGray = texture2D(moreBlurImage, gl_TexCoord[0].xy).r;
    gl_FragColor = vec4(vec3(moreBlurGray), 1.0);
}

Does texturing work for the first, the second, or both shaders?

More importantly, are both images identical?

Yes, I tried this out and found that the builder renders the base image (i.e. blurImage) in both cases. Also, when I am not using any shaders, it just adds the two images, giving a darkened, more blurred image.
I also tried it with gl_TexCoord[1], but it's the same. So I guess the logical choice now is to use some real code.
I am developing this project and was using Shader Builder to test my code before using it in the project. So I use:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, txIDs[inputTexture]);
glEnable(GL_TEXTURE_2D);
locations = [shaderObject getUniformLocation:"blurImage"];

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, txIDs[inputTexture + 1]);
glEnable(GL_TEXTURE_2D);
locations2 = [shaderObject getUniformLocation:"moreBlurImage"];

// Note: the shader program must be current (glUseProgram) for these to take effect.
glUniform1i(locations, 0);
glUniform1i(locations2, 1);

Now, I have already used my code for converting the image from RGB to grayscale and for blurring; in both cases I passed only one uniform. After this I just render the texture that I passed to the fragment shader on a quad. All rendering is done into an FBO, so the result is reused as a texture.
I am not sure how I should render this time. Do I have to use multi-texturing?
