At work we have a large number of unit tests that alert us when a change has broken our code. Many of these tests work by evaluating functions under known conditions and comparing the result against a known reference value. The reference value is obtained by manually testing the newly written function with a set of inputs that should exercise all of its features; the output of that test run is then stored as the reference value in the unit test. Very often, a function returns a floating-point value (usually a double ).

This is all fine, but occasionally the changes that you make to the code will pass all the unit tests on your computer, but then later fail on the automatic testing machine that tests all builds once they're committed to the source control repository. This type of failure often results from a tiny difference between the reference value and the value the function has returned. The differences are really small, in the 12th or greater decimal place. So, my actual question is: could these differences be due to differences in floating-point calculations between processors or other hardware components? If so, does anyone know how these differences can arise?

For information, all code is written in C++ and compiled with MS Visual Studio 2010 (at least on the local machines and I imagine on the test machine too).


I would assume these slight differences are due to rounding... as we know, floats and doubles rarely come out perfectly and are usually rounded. Not sure if the machine has anything to do with it, though there is a possibility that it does. I wouldn't bet my money on a difference in processors... I don't think a Sandy Bridge processor should affect the result in comparison to an i7. I just believed those were rounding tolerances :S

Comparing floating point numbers using == or != is not safe. The difference between the two numbers should instead be compared against some tolerance, usually the machine epsilon (see sample code below).

#include <cmath>    // std::abs for floating-point types
#include <limits>   // std::numeric_limits

bool isEqual( float v1, float v2 )
{
    // Equal if the absolute difference is below machine epsilon.
    return std::abs( v1 - v2 ) < std::numeric_limits<float>::epsilon();
}
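One caveat worth adding (a sketch; the helper name and tolerance values here are made up, not from the reply above): machine epsilon is the gap between 1.0 and the next representable value, so an absolute epsilon test is too strict for numbers much larger than 1 and too loose for numbers near 0. A tolerance that scales with the operands handles both cases:

```cpp
#include <algorithm>  // std::max
#include <cmath>      // std::abs

// Hypothetical helper: v1 and v2 count as equal when their difference
// is small relative to their magnitudes; absTol guards the case where
// both values are very close to zero.
bool nearlyEqual(double v1, double v2,
                 double relTol = 1e-9, double absTol = 1e-12)
{
    double diff  = std::abs(v1 - v2);
    double scale = std::max(std::abs(v1), std::abs(v2));
    return diff <= std::max(absTol, relTol * scale);
}
```

For float rather than double, the tolerances would need to be much larger (on the order of 1e-5).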


Good knowledge, I'll definitely be using something like this in the future :) Actually, on my system, epsilon() evaluates to something around 2e-16, which is much smaller than the differences I'm seeing, so maybe it's not a floating-point issue after all.

EDIT: You'd think that == would be defined in terms of epsilon() for floating-point types. I guess it would slow down comparisons if it were implemented that way.

When you use epsilon closeness as the condition for equality, how would you then compare for inequality?

Should I code it like this:

bool isNotEqual( float A, float B )
{
    // Exact comparison.
    return A != B;
}

or like this:

bool isNotEqual( float v1, float v2 )
{
    // Not equal if the difference exceeds machine epsilon.
    return std::fabs( v1 - v2 ) > std::numeric_limits<float>::epsilon();
}
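If equality is tolerance-based, one consistent approach (a sketch, not an endorsement of either version above) is to define inequality as the exact negation of the equality test, so the two predicates can never disagree on a pair of values:

```cpp
#include <cmath>
#include <limits>

bool isEqual(float v1, float v2)
{
    return std::fabs(v1 - v2) < std::numeric_limits<float>::epsilon();
}

// Inequality defined as the negation of the tolerance-based equality
// test above, so isEqual and isNotEqual always agree.
bool isNotEqual(float v1, float v2)
{
    return !isEqual(v1, v2);
}
```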