At work we have a large number of unit tests intended to alert us when a change has broken our code. Many of these tests work by evaluating a function under known conditions and comparing the result with a known reference result. The reference result is obtained by manually running the newly written function with a set of inputs that should exercise all of its features; we take the output of that run and use it as the reference value in the unit test. Very often, a function returns a floating point value (a `double`, usually).
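
To make the setup concrete, here is a rough sketch of the kind of test I mean; the function, the test, and the reference value are invented for illustration and are not our real code:

```cpp
#include <cassert>

// Hypothetical stand-in for one of the real functions under test.
double evaluatePolynomial(double x)
{
    return 0.5 * x * x + 2.0 * x + 1.0;
}

// Test in the style described above: the expected value was obtained by
// running the function once by hand and pasting its output into the test.
void testEvaluatePolynomial()
{
    const double expected = 7.0;                     // reference value captured manually
    const double actual   = evaluatePolynomial(2.0);
    assert(actual == expected);                      // exact comparison of doubles
}

int main()
{
    testEvaluatePolynomial();
    return 0;
}
```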
This is all fine, but occasionally a change will pass all the unit tests on your own computer and then fail later on the automated test machine that builds and tests everything once it is committed to the source control repository. This type of failure often comes down to a tiny difference between the reference value and the value the function returned. The differences are really small, like in the 12th or greater decimal place. So, my actual question is: could these differences be due to differences in floating point calculations between processors or other hardware components? If so, does anyone know how these differences can arise?
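
To give a sense of the magnitude, here is a sketch with invented numbers that mimic what the failing tests report: the two values compare unequal bit-for-bit even though they only disagree around the 12th decimal place, well within a relative tolerance of 1e-12 (the tolerance check is shown purely for illustration, it is not what our tests currently do):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Relative-tolerance comparison, shown only to illustrate the scale of the
// discrepancy; not part of our actual test code.
bool nearlyEqual(double a, double b, double relTol)
{
    return std::fabs(a - b) <= relTol * std::max(std::fabs(a), std::fabs(b));
}

int main()
{
    // Invented numbers mimicking the discrepancies in our failing tests:
    // they differ only around the 12th decimal place.
    const double referenceValue = 1234.567890123456; // value recorded on my machine
    const double buildMachine   = 1234.567890123459; // value the build machine produced

    std::printf("exact match:        %d\n", referenceValue == buildMachine);                   // prints 0
    std::printf("match within 1e-12: %d\n", nearlyEqual(referenceValue, buildMachine, 1e-12)); // prints 1
    return 0;
}
```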
For information, all the code is written in C++ and compiled with MS Visual Studio 2010 (at least on our local machines, and I imagine on the test machine too).