I am running a lightly randomized heuristic search on a problem, and I have encountered something quite peculiar.

When I run the program with no compiler optimization, I consistently obtain worse results than when I use compiler optimization (the -O3 flag with gcc). By "results" I mean the actual solution found (the goal being to find the solution with the lowest possible cost). I am not talking about memory used or CPU time spent, which are the only two things I would expect optimization to affect.

I am running both versions of the search on the same OS, same machine, same everything, except for the optimization flag. I have run both versions several times, and the results are consistently better for the optimized version.

I am wondering if it makes sense that optimizing with a compiler changes the outcome of the solutions. I was under the impression that the results of a program should be identical. Is it possible optimization affects the random number generator?

Thank you for your help,

Last Post by Salem

>I am wondering if it makes sense that optimizing
>with a compiler changes the outcome of the solutions.
Well, it doesn't make sense, but I can see it happening if your code makes some kind of weird assumption about itself that ends up being optimized into something different. I can't think of a good example that might happen though. Optimizers shouldn't normally affect the output of a program.


They shouldn't, but sometimes they do. This is because a great deal of optimization is based on heuristic rules, which you can break in quite a number of different ways.

According to Microsoft, you should always get your program working and thoroughly tested before enabling optimizations, and then test it yet again. If you do find discrepancies, you can compile using an "optimized debug" build for tracking down the differences, then use pragmas to selectively disable optimizations where it makes a difference.

I would like to think that the code you are using was designed to be compiled as optimized code, but it is more likely that whoever programmed it originally just got lucky.

Without knowing more about your code, or what you are doing, it is impossible for me to point you in the right direction.


> I am wondering if it makes sense that optimizing with a compiler changes the
> outcome of the solutions.
Sure it can, if there are bugs or unwarranted assumptions in your code, all sorts of things can break. At -O3, your code needs to be pretty darn perfect to produce consistent results.

Just because you managed to produce an expected result at -O0 should NOT in any way be construed as meaning "bug free". You've reached base camp 1, not scaled Everest.

Here's one classic example of how assumptions can bite you and produce different results for pretty obscure reasons.
