#include <stdio.h>

int main()
{
        float f=0.0f;
        int i;

        for(i=0;i<10;i++)
                f = f + 0.1f;

        if(f == 1.0f)
                printf("f is 1.0 \n");
        else
                printf("f is NOT 1.0\n");

        return 0;
}

Problem? What problem?

I'm guessing the problem is that the test if (f == 1.0f) always fails.

This happens because floating-point representation in a computer isn't exact. The value 0.1 has no exact binary representation, so each addition introduces a tiny rounding error. When you increment f ten times by 0.1f, the computer ends up storing something like 0.999999999 or 1.000000119 (the exact value depends on how the errors accumulate), which is almost, but not quite, 1, so the equality test fails.

Please correct me if I'm wrong :)
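
If you need the comparison to succeed, the usual workaround is to check whether the value is within a small tolerance of 1.0 instead of testing for exact equality. Here is a rough sketch of that idea; the tolerance of 0.000001f is just an arbitrary choice for this example, not a value from the original code:

#include <stdio.h>
#include <math.h>

int main()
{
        float f = 0.0f;
        int i;

        for (i = 0; i < 10; i++)
                f = f + 0.1f;

        /* Print extra digits to see the accumulated rounding error. */
        printf("f is actually %.9f\n", f);

        /* Compare within a small tolerance instead of using == */
        if (fabsf(f - 1.0f) < 0.000001f)
                printf("f is close enough to 1.0\n");
        else
                printf("f is NOT close to 1.0\n");

        return 0;
}

Depending on your compiler you may need to link the math library (for example, add -lm with gcc).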
