Hi!

We know that (1-x)^n = Sum_{k=0}^n C_k^n (-x)^k.

Now, suppose we do not know that the sum on the right can be evaluated using the simple formula on the left, and would like to compute it directly.

Then, for x=0.998173, the code produces a negative answer for values of n as low as n=7.
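(The exact code isn't shown here; a direct term-by-term summation along these lines reproduces the behavior:)

#include <cmath>
#include <iostream>

// Evaluate sum_{k=0}^n C(n,k) * (-x)^k term by term.
double binomial_sum(int n, double x) {
  double sum = 0.0;
  double coeff = 1.0;   // C(n,0) = 1
  double power = 1.0;   // (-x)^0 = 1
  for (int k = 0; k <= n; ++k) {
    sum += coeff * power;
    coeff = coeff * (n - k) / (k + 1);  // C(n,k) -> C(n,k+1)
    power *= -x;
  }
  return sum;
}

int main() {
  double x = 0.998173;
  for (int n = 1; n <= 10; ++n)
    std::cout << "n=" << n
              << "  direct=" << binomial_sum(n, x)
              << "  exact=" << std::pow(1.0 - x, n) << '\n';
}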

Is there any solution to this?

The reason I am asking is that I have to compute the sum of a finite alternating series which looks similar to the one above, except that there is no closed-form solution as in the above case, and it doesn't seem possible to recast the problem so that the series is no longer alternating.

Thanks!


All 4 Replies

If you have a finite alternating series, and are asked to find the sum, there's nothing you can do. Say you're given

[tex]\sum_{k=0}^n C_k x^k[/tex]

(I made that image with [[b][/b]tex]\sum_{k=0}^n C_k x^k[[b][/b]/tex] )

If you're asked to compute the sum, you compute the sum, and that's it.

Besides, if all you have is a finite list of coefficients, [tex]C_0, \ldots, C_n[/tex], there's no way to extrapolate what the terms should look like as they increase. [tex]C_{n+1}[/tex] could be anything.

If you have an infinite Taylor series, though, you could dynamically change the number of terms you need to get within a certain accuracy.
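For example (my own illustration, using the series for exp(x)), you keep adding terms until the next one drops below your tolerance:

#include <cmath>
#include <iostream>

// Sum the Taylor series of exp(x), stopping when the next term
// is smaller than the requested tolerance.
double exp_taylor(double x, double tol) {
  double term = 1.0;  // x^0 / 0!
  double sum = 0.0;
  int k = 1;
  while (std::abs(term) > tol) {
    sum += term;
    term *= x / k;    // x^k/k! from x^(k-1)/(k-1)!
    ++k;
  }
  return sum;
}

int main() {
  std::cout << exp_taylor(1.0, 1e-12) << '\n';  // ~2.718281828459
}

(For an alternating series this stopping rule only bounds the truncation error; it does nothing about floating-point cancellation.)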

If you're looking for general accuracy on an interval, however, and not an asymptotic approximation around some point, Taylor polynomials are the wrong way to go.


Thanks a lot!

...the problem here is caused by rounding errors, right?

So if there is no easy way out, can we somehow create, for this purpose, a type with more significant digits than double?

What the heck, I was being a complete idiot in that post.

The problem is that you just can't do it. You simply don't have enough precision.

(1-0.998173)^7 is 6.79469e-20, right?

What's the log, base 2, of that number? It's -63.67. That means you need about 64 bits of mantissa to represent a number close to 1 with enough precision to get anywhere near the answer when subtracting the other terms. With a double, you only get 52 bits (53 counting the implicit leading bit). And some of the terms are much larger than 1: the biggest is about 35, which eats another 5 bits. For example, in your case, here are the numbers you're trying to add, in their floating-point representation, followed by their sum:

1.0000000000000000000000000000000000000000000000000000
   -110.11111100101110011101110000101111010000000101111110
  10100.111011000101111111000001001000011011100011110010
-100010.11001110111110101101101101001111000110110010001
 100010.10111110101100110001010110011101111001110100011
 -10100.110011110001000111000001100010111000011001110000
    110.11101100011100100001001011101100101010010001011101
     -0.11111100101111100111000010100010011001110101110110011
--------------------------------------------------------------
       .0000000000000000000000000000000000000000000000000000000000000001010000..

It's not that rounding errors are destroying your precision, it's that you never had enough precision in the first place.

If you're just given a list of doubles to add, you can't avoid this situation, but you can notice the loss of precision and handle the error. And if you're able to generate the doubles from exact expressions, you could up the precision to some slower, higher-precision datatype.
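To notice the cancellation, you can compare |sum| against the largest |term|: here the terms are around 2^5 and the sum is around 2^-64, so roughly 69 bits cancelled. And here's a sketch of the higher-precision route (my example, assuming Boost.Multiprecision is available; cpp_bin_float_50 is its binary float with about 50 significant decimal digits):

#include <boost/multiprecision/cpp_bin_float.hpp>
#include <iostream>

using big = boost::multiprecision::cpp_bin_float_50;

// Same direct summation as before, but with ~50 significant digits
// instead of ~16.
big binomial_sum_mp(int n, const big& x) {
  big sum = 0, coeff = 1, power = 1;
  for (int k = 0; k <= n; ++k) {
    sum += coeff * power;
    coeff = coeff * (n - k) / (k + 1);
    power *= -x;
  }
  return sum;
}

int main() {
  // Construct from a string so x itself carries no double rounding error.
  big x("0.998173");
  std::cout << binomial_sum_mp(7, x) << '\n';  // ~6.79469e-20, now positive
}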

Oh, and disregard my completely useless earlier post entirely, unless you want to wave around evidence that I don't read.

Edit: by the way, here's the code I used to generate those binary numbers:

#include <cstdint>
#include <cstring>
#include <iostream>

// Print a double as "[-]1.<52 mantissa bits> * 2^e".
// Assumes a normal, nonzero IEEE-754 double (no subnormals, zeros,
// infinities, or NaNs).
void print_binary(double x) {

  // Copy the bit pattern out with memcpy; casting the pointer directly,
  // as in *(uint64_t*) &x, violates strict aliasing.
  std::uint64_t n;
  std::memcpy(&n, &x, sizeof n);

  bool isNegative = (n & 0x8000000000000000ULL) != 0;
  int exponent = static_cast<int>((n & 0x7ff0000000000000ULL) >> 52) - 1023;
  std::uint64_t mantissa = n & 0x000fffffffffffffULL;

  std::cout << (isNegative ? '-' : ' ')
            << "1.";
  for (int k = 51; k >= 0; --k) {
    std::cout << ((mantissa >> k) & 1);
  }

  std::cout << " * 2^" << exponent;
}
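A small driver like this one (my addition, not in the original post) feeds it the eight terms; it prints each in normalized 1.mantissa * 2^e form, which presumably then got lined up by hand into the table above:

int main() {
  // Print the eight terms C(7,k) * (-0.998173)^k, k = 0..7.
  double x = 0.998173;
  double coeff = 1.0, power = 1.0;
  for (int k = 0; k <= 7; ++k) {
    print_binary(coeff * power);
    std::cout << '\n';
    coeff = coeff * (7 - k) / (k + 1);
    power *= -x;
  }
}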