Hi,

I have to calculate the exponential of a large (on the order of 10^5) negative number. I tried using exp(), the exponential function from the cmath library, but I get an answer of 0. So I tried using the power series for the exponential function instead. For arguments from -1 down to -14, I get answers which are accurate (within the percentage error set in the while statement). But for any argument more negative than -14, the answer diverges from the true value. For a number as small as 10^-5, the answer is a large positive number (which is nonsensical).

Please help me understand what's wrong with the code, if anything, and how it can be improved. (Or is there a better way to calculate the exp of a large negative number?)

#include <iostream>
#include <cmath>
using namespace std;
int main()
{
	double t = -16.0;  // t is the argument of the exponential function.
	double fsum = 1.0; // fsum is the cumulative sum in the calc of the answer.
	double fbefore = fsum; // to be used for the while statement
	int n = 1;             // n is the denominator of the power series
	double comparing;      // to be used in the while statement
	int iterations = 0;     
	double term = 1;        // each additional term in the series.
	do 
	{ 
		iterations = iterations + 1;
		cout << iterations << endl;
		term = term * ( t/n );
		fsum = fsum + term;
		n = n + 1;
		double fafter = fsum;
		comparing = (fbefore - fafter)/fbefore;
		fbefore = fafter;
		cout << fsum << endl;
	}
	while ( fabs(comparing) > 0.0000000001 );
	return 0;
}


I think your problem is that, as the iteration continues, fsum gets bigger while term gets smaller. At some point you reach a stage where term is so small that it cannot be accurately added to fsum because of floating-point precision. This test program highlights the problem:

#include <iostream>
#include <iomanip>

using namespace std;


const double delta = 1e-16;    // small enough to be rounded away when added to a sum near 1.0
const int count = 1000000000;  // one billion additions

int main()
{
    double result1, result2;
    
    result1 = result2 = 1.0;
    
    result1 += delta * count;  // one multiply and one add: no repeated rounding
    
    // a billion tiny additions: each one is individually rounded away
    for(int ix=0; ix<count; ix++)
    {
        result2 += delta;
    }
    
    cout << setprecision(15) << result1 << " - " << result2 << endl;
    
    return 0;
}

In theory, adding delta count times should give the same result as adding delta * count once, but because delta is small in relation to the size of the sum, an accuracy problem occurs, and the actual output (on my machine) is

1.0000001 - 1

This is an extreme case of what I think you are experiencing.

There is a method to offset this problem called the Kahan summation algorithm, which you may find helps.
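Roughly, applied to the test program above, the compensated loop would look something like this (a minimal sketch, using the same tiny increment and iteration count as before):

#include <iostream>
#include <iomanip>

using namespace std;

int main()
{
    const double delta = 1e-16;     // same tiny increment as above
    const int n_adds = 1000000000;  // one billion additions

    double sum = 1.0;
    double c = 0.0;                 // running compensation for lost low-order bits

    for(int ix = 0; ix < n_adds; ix++)
    {
        double y = delta - c;       // apply the correction carried from the last step
        double p = sum + y;         // low-order digits of y are lost in this addition
        c = (p - sum) - y;          // recover -(the lost part of y)
        sum = p;
    }

    // with compensation the loop matches the single multiply-add, ~1.0000001
    cout << setprecision(15) << sum << endl;
    return 0;
}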

For large numbers, it's highly probable you'll be going over (or under) the limits of the variable type.

For example, take an ordinary number on the order of 10^5, say 123456: 2.71828182^123456 is absolutely massive. Unless you invent some kind of 128/256-bit computer, you aren't going to be able to calculate that.
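As a rough illustration of where double gives up (the exact thresholds vary by implementation, so treat these as approximate):

#include <iostream>
#include <cmath>
#include <limits>

using namespace std;

int main()
{
    // largest and smallest normal doubles, roughly 1.8e308 and 2.2e-308
    cout << numeric_limits<double>::max() << " "
         << numeric_limits<double>::min() << endl;

    cout << exp(700.0)  << endl;  // ~1.01e304, still representable
    cout << exp(710.0)  << endl;  // overflows to inf
    cout << exp(-700.0) << endl;  // ~9.86e-305, still representable
    cout << exp(-750.0) << endl;  // underflows to 0
    return 0;
}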

Banfa: Thanks for the help. I implemented your advice. The Kahan summation works fine with your code, but having applied the algorithm to my own code, I still get the same wrong answers for any integer below -14.0. And 10^-5 is way out of the question.

Please help!!!

(Ketsuekiame: I was actually talking about large negative numbers (e.g. 10^-5).)

#include <iostream>
#include <cmath>

using namespace std;

int main ()
{
	double t = -14.0;  // t is the argument of the exponential function.
	double fsum = 1.0;               // fsum is the cumulative sum in the calc of the answer.
	double fbefore = fsum;           // to be used for the while statement
	int n = 1;                       // n is the denominator of the power series
	double comparing;                // to be used in the while statement
	int iterations = 0;     
	double term = 1.0;                 // each additional term in the series.
	double c = 0.0;                  // c is a running compensation for lost low-order bits.
	do 
	{ 
		iterations = iterations + 1;               
		term = term * ( t/n );
		double y = term - c;                 // apply the compensation carried from the previous step
		double p = fsum + y;                 // fsum is big, y is small, so low-order digits of y are lost
		c = (p - fsum) - y;                  // (p - fsum) recovers the high-order part of y; subtracting y recovers -(low part of y)
		fsum = p;
		n = n + 1;
		double fafter = fsum;
		comparing = (fbefore - fafter)/fbefore;
		fbefore = fafter;
	}
	while ( fabs(comparing) > 0.000001 );
	cout << fsum << endl;
	return 0;
}
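For what it's worth (an observation the thread doesn't spell out): with t = -14 the individual terms alternate in sign and peak at around 14^14/14! ≈ 1.3e5, while the true sum e^-14 is only about 8.3e-7. The answer is a tiny difference of huge cancelling terms, so double's ~16 significant digits leave only a few correct digits in the result, and no summation trick can recover precision that was never in the terms. A quick way to see the scale of the problem:

#include <iostream>
#include <cmath>

using namespace std;

int main()
{
    double t = -14.0;
    double term = 1.0;
    double biggest = 0.0;

    // track the largest-magnitude term of the series for e^t
    for (int n = 1; n <= 100; n++)
    {
        term *= t / n;
        if (fabs(term) > biggest)
            biggest = fabs(term);
    }

    cout << "largest term: " << biggest << endl;  // ~1.3e5
    cout << "true answer:  " << exp(t) << endl;   // ~8.3e-7
    // ~11 orders of magnitude apart: most of double's precision is spent
    // on digits that cancel, so the series result keeps only a few good digits
    return 0;
}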

Do you mean -10^5? 10^-5 is not a large negative number; it is 0.00001.

The smallest value most implementations can hold in a double is around 2.225073859e-308. This limit is reached for e^x at around x = -700. You are unlikely to be able to calculate e^-10000 using normal data types directly.

Up to x = -700 the standard library function seems to work just fine (on my implementation).
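If you do want to keep the hand-rolled series for arguments in that range, one standard trick (a suggestion, not something from the code above) is to sum the series for the positive argument |t|, where every term is positive so nothing cancels, and then use e^t = 1/e^|t|:

#include <iostream>
#include <cmath>

using namespace std;

int main()
{
    double t = -16.0;              // the negative argument we actually want
    double x = -t;                 // sum the series for the positive argument instead
    double sum = 1.0;
    double term = 1.0;
    int n = 1;

    while (term > 1e-17 * sum)     // stop once a term is too small to change the sum
    {
        term *= x / n;
        sum += term;
        n = n + 1;
    }

    cout << 1.0 / sum << endl;     // e^t = 1/e^x, prints ~1.12535e-07
    cout << exp(t) << endl;        // library result for comparison
    return 0;
}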

Is it possible you're going beyond the maximum precision of double? Also, wouldn't any numbers you use to offset the error be subject to error themselves at such precision?

EDIT: I assumed he meant the former option, Banfa.

Banfa: Actually I meant -10^5. I'm sorry I confused myself.

exp(-10^5) is so close to zero that you're not going to be able to represent it as a floating-point number; zero will be the closest approximation.
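To put a number on "so close to zero": exp(-10^5) = 10^(-10^5/ln 10), which is about 10^-43429, while a double bottoms out near 10^-308. If you really do need the quantity, a common workaround (assuming your application allows it) is to carry its logarithm around instead of the value itself:

#include <iostream>
#include <cmath>

using namespace std;

int main()
{
    double t = -1e5;                  // exponent far below what a double can hold

    // work with log10 of the value instead of the value itself
    double log10val = t / log(10.0);  // log10(e^t) = t / ln(10)
    cout << "exp(" << t << ") ~= 10^" << log10val << endl;
    // prints: exp(-100000) ~= 10^-43429.4
    return 0;
}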
