Hi, I'm new here....
I'm making a program which can calculate the below expansion of
ln(1 + x) = x - x^2/2 + x^3/3 - x^4/4 + x^5/5 - x^6/6

Below is my code:

#include <iostream>

using namespace std;

int main()
{
    double num1;

    cout << "Enter a number: ";
    cin >> num1;

    // First six terms of the series for ln(1 + num1)
    double result = num1 - num1 * num1 / 2 + num1 * num1 * num1 / 3
            - num1 * num1 * num1 * num1 / 4
            + num1 * num1 * num1 * num1 * num1 / 5
            - num1 * num1 * num1 * num1 * num1 * num1 / 6;

    cout << "ln(1 + num1) : " << result << endl;

    return 0;
}

I want it to calculate the value manually, without using any math functions.
Can anyone share their knowledge here?



What problem are you having?

Set up another variable (power) initialized to 1.

Use a loop. On each pass through the loop:
1) Multiply power by num1 -- giving num1^i
2) Divide that by the loop index i
3) Add or subtract the resulting term to/from a running total

A minimal sketch of that loop follows.
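Something like this (the names power and total are just illustrative):

#include <iostream>

int main()
{
    double x;
    std::cout << "Enter a number: ";
    std::cin >> x;

    double power = 1.0;  // carries x^i from one iteration to the next
    double total = 0.0;
    for (int i = 1; i <= 6; ++i) {
        power *= x;                            // power is now x^i
        double term = power / i;               // x^i / i
        total += (i % 2 == 1) ? term : -term;  // alternate + and -
    }

    std::cout << "ln(1 + x) is approximately " << total << std::endl;
    return 0;
}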

>>Can anyone share their knowledge here?

First of all, don't expect this series to give reasonable results for x > 1.0 (or for x < -1, of course): the series only converges for -1 < x <= 1, and there are theorems that prohibit a polynomial from approximating log() well beyond that. Also, you need to compile with the highest optimization level for this kind of application.

I actually tested your code, plus a version with my own improvements, to see whether it outperforms the built-in log() function, and it does, by 40% (with optimization at max). With no optimizations during compilation, your method outperforms the built-in function by 25%, but I made a slightly better version which outperforms the built-in function by about 75%. That version is this (it simply reduces the number of multiplications to a minimum, via Horner's scheme):

inline double log_approx2(double x) {
  // Horner form of x - x^2/2 + x^3/3 - x^4/4 + x^5/5 - x^6/6
  return x*(1.0 - x*(0.5 - x*(1.0/3.0 - x*(0.25 - x*(0.2 - x/6.0)))));
}

Compiler: GCC 4.5.2 (x86_64-linux-gnu)

With no optimization, I get:
built-in function: 10 million evaluations in 386 milliseconds.
your version: 10 million evaluations in 280 milliseconds.
my improved version: 10 million evaluations in 100 milliseconds.

However, with optimizations turned on to the max (-O3), I get:
built-in function: 10 million evaluations in 10 milliseconds.
your version: 10 million evaluations in 6 milliseconds.
my improved version: 10 million evaluations in 6 milliseconds.

Which goes to show that the compiler can do a lot more optimizations than you can. Trust your compiler, don't waste your time doing micro-optimizations like this, unless you have solid evidence that you need to do so.

BTW, I also turned my version of the function into a tail-recursion of arbitrary order of approximation. The result is that, with highest optimization and with the order known as a compile-time constant, it can compute the series up to any order 40% faster than the built-in log function, and with the same precision at order 80. But, of course, it only works between -1 and 1; beyond that it diverges, as expected from the math theorems that prohibit accurate approximation of a log() function beyond a small radius around 1.

This is probably why the built-in log is slower: it has to perform range checks and special transformations to be able to compute the result for any value. With that in mind, I would use the built-in function, unless you are absolutely sure the argument is very close to 1 (or x = 0, in your function), because the higher the order of approximation, the more violent the divergence beyond x = 1.0.
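To sketch the idea (this is not the exact code I timed, and here the order is a runtime parameter rather than a compile-time constant):

// Sketch only: term i of the series is (-1)^(i+1) * x^i / i.
// `power` carries x^(i-1) in, `total` accumulates the sum so far.
inline double log_series(double x, int order,
                         int i = 1, double power = 1.0, double total = 0.0) {
  if (i > order)
    return total;
  double p = power * x;                         // p = x^i
  double term = (i % 2 == 1) ? p / i : -p / i;  // alternate signs
  return log_series(x, order, i + 1, p, total + term);  // tail call
}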

>>Trust your compiler, don't waste your time doing micro-optimizations like this, unless you have solid evidence that you need to do so.

Could the reason he's "wasting his time" be to learn to program, not to outdo the compiler? There are many reasons to reprogram a built-in function -- first of which is to learn, not to replace. Even in K&R they "wasted their time" by programming built-in functions -- in multiple ways.

>>Could the reason he's "wasting his time" be to learn to program, not to outdo the compiler?

I imagine educational purposes fit into the "unless you have solid evidence that you need to do so" clause.

>>Could the reason he's wasting his time is to learn to program,

Maybe it is to learn that trying to outdo the compiler is more often than not a waste of time. I think that is a very valuable lesson; it makes you focus on the important things. I agree that if you are learning the basics of writing functions, rewriting built-in math functions is as good as anything else (especially since you have a "correct" function to compare your results against).

But citing "for the purpose of learning" to justify anything is a bit lame, because everything can be justified for its learning-value, so it goes without saying, as far as I'm concerned. And learning-value is very subjective, so I prefer to exclude it from my judgement of the usefulness of doing something in programming. So, when I say "waste of time", it's implied that I mean "waste of time (except if it has learning-value to you)"; you could attach that caveat to every one of my statements about the usefulness of anything in programming.

Reimplementing a built-in math function will teach you, in order:
0) how to compute a math expression.
1) how to write a function.
2) how to call a function.
3) how to test a function for correctness.
3a) basic issues of numerical analysis (round-off errors, numerical stability, etc.)
4) that built-in functions are very efficient compared to hand-rolled code.
5) that beating the performance of a built-in function is very difficult.
6) that the compiler is far better than you are at optimizing code.
7) that micro-optimizations (like writing special, more efficient versions of built-in functions) are better left to the compiler, unless there is strong evidence that the compiler's best effort is not good enough and that performance is critical for these functions.
8) don't optimize prematurely.

Considering the OP is only at step 0, I may have gotten carried away up the learning ladder by tackling steps 4 to 8.

What you are looking for is a method for calculating the inverse of
the exponential function. If you don't want to use the library exponential,
you can implement its Taylor expansion, which is rapidly convergent
for all values of x. You can even accelerate it by getting a good
value for e (= 2.7182818...), multiplying up the integer part of x,
and using the Taylor expansion only for the fractional part of x.
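For example (assuming x >= 0; the names are just illustrative):

// e^x = e^n * e^f, where n is the integer part of x and f the fraction.
// The Taylor series 1 + f + f^2/2! + f^3/3! + ... converges fast for
// f in [0, 1).
double my_exp(double x) {
    const double e = 2.7182818284590452;
    int n = static_cast<int>(x);   // integer part
    double f = x - n;              // fractional part

    double int_part = 1.0;
    for (int i = 0; i < n; ++i)
        int_part *= e;             // e^n by repeated multiplication

    double sum = 1.0, term = 1.0;
    for (int k = 1; k <= 20; ++k) {  // 20 terms is plenty for f in [0, 1)
        term *= f / k;               // term is now f^k / k!
        sum += term;
    }
    return int_part * sum;
}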

Then you need to invert it. There are lots of methods; a simple
bisection method should work fine, and there are more clever methods that
you can read about in, e.g., Numerical Recipes (yes, I know it has lots
of bugs, but I've used several of NR's 1-dim root solvers, and they
work fine). They generally make some assumption about the nature of the
function (e.g. polynomial, or inverse polynomial), and interpolate from there.
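For instance, a sketch of the bisection, solving exp(m) = y for m, using my_exp() from the sketch above (lo and hi must already bracket the answer; see the bracketing discussion below):

// Bisection: halve the bracket [lo, hi] until it pins down m = ln(y).
// Assumes my_exp(lo) < y < my_exp(hi) on entry.
double log_by_bisection(double y, double lo, double hi) {
    for (int iter = 0; iter < 60; ++iter) {  // ~60 halvings reach double precision
        double mid = 0.5 * (lo + hi);
        if (my_exp(mid) < y)
            lo = mid;   // root is in the upper half
        else
            hi = mid;   // root is in the lower half
    }
    return 0.5 * (lo + hi);
}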

You will need to bracket the root for most of them. A good
bracket for the root can be had by, e.g., finding the pair of integers m_+ and m_-
such that exp(m_+) > y > exp(m_-), where y is the value whose log you want.
Or, you can use log base 2, whose integer part can be calculated directly
by counting the number of right-shifts until you get to 1, and then
using the fact that log_e(y) = log_2(y) * log_e(2), with
log_e(2) = 0.693... (or you can find that one by inversion also).
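The shift trick, sketched (on the integer part of y; y >= 1 assumed):

// Integer part of log2(y): count right-shifts until y reaches 1.
int ilog2(unsigned long y) {
    int n = 0;
    while (y > 1) {
        y >>= 1;
        ++n;
    }
    return n;
}
// Then [ilog2(y) * 0.693, (ilog2(y) + 1) * 0.693] brackets ln(y),
// which is good enough to start the bisection above.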

Hope all this helped.

D

BTW, Newton-Raphson would work fine for inverting the
exponential here: starting below the correct answer it always
overshoots, and starting above it always undershoots
(due to convexity), so after at most one iteration you are
always on the high side, edging toward the solution with quadratic
convergence.
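A sketch, again reusing my_exp() from above and restricted to y >= 1 so the iterates stay non-negative; the update rule comes from f(m) = exp(m) - y, f'(m) = exp(m):

// Newton-Raphson for m = ln(y): m_next = m - (exp(m) - y) / exp(m)
//                                      = m - 1 + y / exp(m).
double log_by_newton(double y) {
    double m = 1.0;                     // any reasonable starting guess
    for (int i = 0; i < 50; ++i) {
        double next = m - 1.0 + y / my_exp(m);
        if (next == m) break;           // converged to machine precision
        m = next;
    }
    return m;
}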

One more thing you can do, BTW, is extend the Taylor expansion
solution that you have to the full positive real line, by
dividing the argument by two, successively, until the number is
between 1 and 2, and then adding back n ln 2, where n is the number
of divides you did (implement it by multiplying by 0.5, since multiplies
are cheaper than divides). Since you might still have to go to
very high order in the Taylor expansion, you can then divide by smaller numbers,
like 1.2, to get closer to the basin of relatively rapid convergence.
You just need to calculate ln 2 = 0.693..., and the log of any other numbers
you divide out by...
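A sketch of the reduction, reusing log_approx2() from the earlier reply (it approximates ln(1 + t), so the reduced value x in [1, 2) goes in as x - 1):

// ln(x) = ln(x / 2^n) + n * ln(2). Halve x until it lands in [1, 2),
// then the series handles the rest. Assumes x >= 1.
double log_reduced(double x) {
    const double ln2 = 0.6931471805599453;
    int n = 0;
    while (x >= 2.0) {
        x *= 0.5;        // multiply by 0.5: cheaper than dividing by 2
        ++n;
    }
    return log_approx2(x - 1.0) + n * ln2;
}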
