I am trying to optimize code for a Monte Carlo simulation. Even minute performance differences pile up over 100 million iterations, so I need to squeeze every nanosecond out of the math operations!

One area where I thought I could save a lot stems from the fact that I only require precision of 4 significant digits. It therefore seems natural to use **float** rather than **double**. However, some testing suggests that **double** still performs better! This is unexpected.

Why is it that, even though **float** is 32 bits and **double** is 64 bits, the math functions **exp(double)** and **pow(double, double)** are quicker than **exp(float)** and **pow(float, float)** (or even **expf** and **powf**)? Here is some code...

```
#include <cmath>
#include <iostream>
#include "Timer.h"
using namespace std;

int main()
{
    double a = 23.14;
    float c = 23.14f;
    Timer t;

    t.tic();
    for (int i = 0; i < 10000000; i++)
        expf(c);
    cout << "expf(float) returns " << expf(c) << " and took " << t.toc() << " seconds." << endl;

    t.tic();
    for (int i = 0; i < 10000000; i++)
        exp(c);
    cout << "exp(float) returns " << exp(c) << " and took " << t.toc() << " seconds." << endl;

    t.tic();
    for (int i = 0; i < 10000000; i++)
        exp(a);
    cout << "exp(double) returns " << exp(a) << " and took " << t.toc() << " seconds." << endl;
}
```