#include <stdio.h>

int main( void )
{
    float a = 0.2;

    if( a > 0.2 )
        printf( "\n a is greater than 0.2" );
    else if( a < 0.2 )
        printf( "\n a is less than 0.2" );
    else
        printf( "\n a is equal to 0.2" );

    return 0;
}

The output of the above program is "a is greater than 0.2".

I think the reason is that the variable a stores 0.2 as 0.259595, but why is that so?

If I want to compare two float values to within some precision, how do I do that?


It is unlikely that it stores 0.2 as 0.259595; it is more likely to store it as something like 0.2000001.

Remember that a float is 4 bytes, so it has the same number of unique bit combinations as a 4-byte int: 4294967296.

However, a float can store "any" value in the range 1.175494351e-38F to 3.402823466e+38F, either positive or negative. That covers a very large range of values. As a result, a float only stores numbers to a limited precision (about 7 significant decimal digits), and the value it stores should be treated as an approximation to the required value.

Normally this doesn't affect initialisation very much; it mainly comes into play when doing calculations. Try this, for example:

#include <stdio.h>

int main( void )
{
    float f1 = 10000000.F;
    float f2 = 0.F;
    int x;

    for( x = 0; x < 100; x++ )
    {
        f1 += 0.9F;
        f2 += 0.9F;
    }

    /* near ten million a float's precision step is about 1, so the
       fractional additions to f1 are rounded away and
       f1 - 10000000.F will not equal f2 */
    printf( "%f %f\n", f1, f2 );

    return 0;
}
In your case you have also initialised your float from a double constant (a float constant would be 0.2F), so the compiler has had to do a conversion, which may have introduced the error.

If you need to compare 2 floats to a tolerance then ...

Firstly, don't: try to design your software so that you don't have to.

Secondly, if you really can't avoid it, take the absolute value of the difference of the two values, fabs(f1 - f2), and test it against your tolerance using one of <, <=, >, or >=.